Älyverkko CLI application

Table of Contents

1. Introduction

The Älyverkko CLI application is a user-friendly tool developed in Java, tailored to make large language models easy to use through CPU-based computation in batch processing mode.

To illustrate its capabilities: Imagine harnessing the power of a vast language model, boasting approximately 100 billion parameters, solely relying on CPU computations and leveraging the open-source software llama.cpp. This setup requires a modern consumer-grade CPU and approximately 128 GB of RAM. To put this into perspective, 128 GB of RAM is financially comparable to purchasing a high-quality smartphone, making it an economical option for many users.

In contrast, executing the same model on an Nvidia GPU could result in costs that are significantly higher, potentially at least by an order of magnitude.

However, there is a trade-off: CPU-based processing for such extensive models is inherently slow. This means real-time interaction, like chatting with the AI, wouldn't be practical or enjoyable due to the lag in response times. Nevertheless, when deployed in a non-interactive batch mode, this "slow but smart" AI can complete numerous valuable tasks within a 24-hour window. For instance, it could generate a substantial amount of code, potentially exceeding what you could thoroughly review in the same timeframe. Additionally, it could process more documents than most individuals would be inclined to read manually.

The primary objective of the Älyverkko CLI project is to identify and enable applications where this "slow but smart" AI can excel. By utilizing llama.cpp as its inference engine, the project aims to unlock a variety of uses where batch processing is more advantageous than real-time interaction.

Practical examples of tasks suited for the Älyverkko CLI application are described in the use cases below.

In summary, the Älyverkko CLI application opens up a realm of possibilities for leveraging powerful AI in scenarios where immediate responses are not critical, but high-quality batch processing output is highly beneficial.

Note: the project is still at an early stage.

1.1. Use cases

1.1.1. Domain: Natural Language Processing (NLP)

Problem Statement:

Analyze a set of customer reviews to determine overall sentiment and extract key features that customers appreciate or dislike about a product.

Usage Procedure:

  1. The user collects customer reviews in plain text files within the configured tasks directory, say about 150 kilobytes of reviews per input file (the limit is dictated by the AI model's available context size).
  2. Each task file starts with the special "TOCOMPUTE:" header line. (See: Task file format)
  3. The Älyverkko CLI application processes these files, generating sentiment analysis results and feature extraction insights.
  4. Results are appended to the original files in org-mode syntax, indicating AI responses.
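A task of this kind can be queued straight from the shell. The sketch below is illustrative: the tasks directory path and the review text are assumptions; substitute your configured tasks_directory.

```shell
# Minimal sketch: queue a sentiment-analysis task file.
# The tasks directory path is an assumption; use your configured tasks_directory.
TASKS_DIR="${TASKS_DIR:-$HOME/AI/tasks}"
mkdir -p "$TASKS_DIR"
cat > "$TASKS_DIR/reviews-batch-01.txt" <<'EOF'
TOCOMPUTE: skill=default model=default
Determine the overall sentiment of the reviews below and list the
product features that customers praise or criticize.

Review 1: "Battery life is great, but the screen scratches easily."
EOF
```

Once the daemon notices the file, it replaces the TOCOMPUTE: header and appends its answer in org-mode syntax.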

1.1.2. Domain: Code Generation

Problem Statement:

Generate code snippets for a new software module based on detailed specifications provided by the developer.

Usage Procedure:

  1. The developer writes specifications in a text file within the tasks directory. The text file also contains relevant parts of the program source code and documentation. The Älyverkko CLI "joinfiles" command can be used to facilitate preparing such a text file.
  2. The Älyverkko CLI application processes this file and generates the corresponding code snippets. The generated code is appended to the original specifications file, organized using org-mode syntax.
  3. The developer can review the proposed changes and then integrate them back into the original program source code.

Note: A large part of the Älyverkko CLI program code has itself been written this way by AI.

1.1.3. Domain: Content Creation

Problem Statement:

Draft an outline for a science fiction book or improve its plot.

Usage Procedure:

  1. The book author writes a brief describing the plot outline and the main idea of the novel.
  2. Älyverkko CLI processes this description and generates a more detailed outline with suggested headings, along with possible improvements to the plot.

Here is an example sci-fi book that was written with the help of Älyverkko CLI.

2. Getting started

When you first encounter Älyverkko CLI, the setup process might seem involved compared to cloud-based AI services. That's completely understandable! Let me walk you through why each step exists and how it ultimately creates a powerful, private, and cost-effective AI assistant that works for you.

2.1. Why Bother With This Setup? (The Big Picture)

Before diving into steps, let's address the elephant in the room: Why go through this setup when ChatGPT is just a click away?

Because Älyverkko CLI solves a fundamentally different problem:

  • 🔒 Privacy by design: All processing happens on your machine - no data ever leaves your computer
  • 💰 Cost efficiency: Run 70B+ parameter models without paying per token (128GB RAM ≈ cost of a smartphone)
  • ⚙️ Full control: Tweak every parameter to match your specific needs
  • 📦 Offline capability: Works without internet connection
  • 🕒 Batch processing: Perfect for "set it and forget it" workflows while you sleep

This isn't designed for real-time chatting (CPU inference is slow), but for substantial tasks where quality matters more than speed: code generation, document analysis, content creation, etc.

2.2. Your Setup Journey - What to Expect

Here's what you'll be doing, explained simply with why each step matters:

2.2.1. 1. Installing Java & Maven (The Foundation)

  • What: Install JDK 21+ and Apache Maven
  • Why: Älyverkko CLI is written in Java - these tools let you build and run the application.
    • Don't worry: On Debian/Ubuntu, it's just

      sudo apt install openjdk-21-jdk maven
      

Key insight: Java was chosen because it's cross-platform, memory-safe, and perfect for long-running background processes like our AI task processor.

2.2.2. 2. Building llama.cpp (Your AI Engine)

  • What: Download and compile the llama.cpp project from GitHub.
  • Why: This is the actual "brain" that runs large language models on your CPU. We build from source (rather than using prebuilt binaries) so it can be optimized for your specific CPU - squeezing maximum performance out of your hardware.
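The exact build steps evolve upstream, so treat the following as a sketch and consult the llama.cpp README for current instructions. At the time of writing, a typical CPU-only build looks like this:

```shell
# Sketch of a CPU-only llama.cpp build; verify against the upstream README.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j "$(nproc)"
# The CLI binary referenced by llama_cli_path then sits at build/bin/llama-cli
```

The resulting build/bin/llama-cli path is what you later point the llama_cli_path configuration option at.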

2.2.3. 3. Adding AI Models (The Brains)

  • What: Download GGUF format model files (typically 4-30GB each)
  • Where: From Hugging Face Hub (search "GGUF").
  • Why: These contain the actual neural networks that power the AI.
  • Don't worry: Start with one model (like Mistral 7B) - you can add more later.
  • Key insight: GGUF format was created specifically for CPU inference.

❓ Why not smaller models? Larger models (even running slowly on CPU) produce significantly better results for complex tasks - it's worth the wait.

2.2.4. 4. Running the Interactive Wizard (alyverkko-cli wizard)

  • What: Launch the configuration wizard that asks simple questions.
  • Why: To connect all the pieces without you needing to edit complex YAML files.
  • Don't worry: It's interactive! You'll answer questions like "Where did you put your AI models?" with easy prompts.
  • Key insight: This creates your personal ~/.config/alyverkko-cli.yaml file.

🌟 Pro tip: The wizard automatically detects your models and suggests reasonable defaults - you're not starting from scratch.

2.2.5. 5. Setting Up "Skills" (Your Custom Instructions)

  • What: Create simple YAML files defining how the AI should behave for different tasks.
  • Why: So you don't have to rewrite instructions every time ("be a coding assistant" vs "be a writing editor").
  • Don't worry: Start with sample skills (default.yaml, summary.yaml) - you can modify them gradually.
  • Key insight: Skills let you create specialized AI personas without changing models.

Idea: Your writer.yaml skill might instruct the AI to "always provide well-reasoned responses in academic tone"

2.2.6. 6. Preparing Your First Task (The Magic Moment)

  • What: Create a text file with your request, prefixed with TOCOMPUTE:
  • Why: This triggers the background processing system
  • Key insight: File-based interaction isn't primitive - it's intentional design for batch processing.

2.3. Why Files Instead of a Fancy UI?

You might wonder: Why deal with text files when everything has beautiful interfaces these days?

Because this is designed for productivity, not conversation:

  1. No waiting around: With CPU inference, responses take minutes/hours. File-based workflow lets you queue tasks and get back to work.
  2. Natural integration: Works with your existing text editor (VS Code, Emacs, etc.) rather than forcing a new interface.
  3. Version control friendly: You can track changes to prompts/responses with Git.
  4. Resource efficient: No heavy GUI consuming precious RAM needed for AI models.
  5. Scriptable: Easily integrate with other tools in your workflow.

Think of it like email versus phone calls - sometimes asynchronous communication is actually more productive.

2.4. The Light at the End of the Tunnel

After initial setup (which typically takes 30-60 minutes), here's what you get:

  • ✅ A silent background process that automatically processes tasks
  • ✅ Complete privacy - no data ever leaves your machine
  • ✅ The ability to run state-of-the-art models without expensive hardware.
  • ✅ A system that keeps working while you sleep - queue up 10 tasks before bed, get results in the morning.

You will find that after the first few processed tasks, the initial setup effort feels worthwhile. You're not just getting another chatbot - you're building a personal AI workstation tailored to your specific needs. The initial investment pays dividends every time you need serious AI power without compromise.

3. Installation

3.1. Requirements

Operating System:

Älyverkko CLI is developed and tested on Debian 12 "Bookworm". It should work on any modern Linux distribution with minimal adjustments to the installation process.

Dependencies:

  • Java Development Kit (JDK) 21 or higher
  • Apache Maven for building the project

Hardware Requirements:

  • Modern multi-core CPU.
  • The more RAM you have, the smarter AI model you can use. For example, at least 64 GB of RAM is needed to run decent AI models with sufficiently large context.
  • Sufficient disk space to store large language models and input/output data.

3.2. Installation

At the moment, to use Älyverkko CLI, you need to:

  • Download sources and build llama.cpp project.
  • Download sources and build Älyverkko CLI project.
  • Download one or more pre-trained large language models in GGUF format. The Hugging Face repository has a lot of them.

Follow these instructions to obtain and build Älyverkko CLI on a computer running Debian:

  1. Ensure that you have Java Development Kit (JDK) installed on your system.

    sudo apt-get install openjdk-21-jdk
    
  2. Ensure that you have Apache Maven installed:

    sudo apt-get install maven
    
  3. Clone the code repository or download the source code for the `alyverkko-cli` application to your local machine.
  4. Navigate to the root directory of the cloned/downloaded project in your terminal.
  5. Execute the installation script by running

    ./install
    

    This script will compile the application and install it to directory

    /opt/alyverkko-cli
    

    To facilitate usage from the command line, it will also define a system-wide command alyverkko-cli as well as an "Älyverkko CLI" launcher in the desktop applications menu.

  6. Prepare Älyverkko CLI configuration file.
  7. Verify that the application has been installed correctly by running alyverkko-cli in your terminal.

3.3. Alyverkko CLI daemon configuration

Älyverkko CLI is configured by editing a YAML-formatted configuration file.

The configuration file should be placed under the current user's home directory:

~/.config/alyverkko-cli.yaml

3.3.1. Key Parameters Explained

3.3.1.1. Core Directories
  • tasks_directory: Where task files are placed for processing.
  • models_directory: Contains GGUF model files.
  • skills_directory: Contains YAML skill definition files.
  • llama_cli_path: Path to llama.cpp's executable.
3.3.1.2. Generation Parameters
  • default_temperature: (Optional) Creativity control (0-3, higher = more creative).
  • default_top_p: (Optional) Nucleus sampling threshold (0.0-1.0, higher = more diverse).
  • default_top_k: (Optional) Restricts token selection to the K tokens with the highest probabilities, regardless of their actual probability values or the shape of the distribution.
  • default_min_p: (Optional) Filters the vocabulary to include only tokens whose probability is at least a certain fraction (Min P) of the probability of the most likely token.
  • default_repeat_penalty: (Optional) Penalty for repetition (>0.0, 1.0 = no penalty)
3.3.1.3. Performance Tuning
  • thread_count: Sets the number of threads to be used by the AI during response generation. RAM data transfer speed is usually the bottleneck here. When RAM bandwidth is saturated, increasing the thread count will no longer increase processing speed, but it will still keep CPU cores unnecessarily busy.
  • batch_thread_count: Specifies the number of threads to use for input prompt processing. CPU computing power is usually the bottleneck here.
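When picking starting values, the machine's core count and total RAM are the usual reference points. The heuristic of starting batch_thread_count near the core count and thread_count somewhat lower (since generation is bandwidth bound) is an assumption to tune empirically; the Linux commands below show where to look:

```shell
# Linux: inspect the resources that bound the two thread settings.
nproc                        # logical CPU count: starting point for batch_thread_count
grep MemTotal /proc/meminfo  # total RAM: bounds the model size you can load
```

Measure actual tokens-per-second at a few thread counts and keep the smallest count that saturates throughput.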
3.3.1.4. Model-Specific Settings

Each model in the models list can have:

  • alias: Short model alias. The model with alias "default" is used by default.
  • temperature: (Optional) (TODO: document)
  • top_p: (Optional) (TODO: document)
  • min_p: (Optional) (TODO: document)
  • top_k: (Optional) (TODO: document)
  • repeat_penalty: (Optional) (TODO: document)
  • filesystem_path: File name of the model as located within models_directory.
  • context_size_tokens: Context size in tokens that the model was trained on.
  • end_of_text_marker: (Optional) Some models produce certain markers to indicate the end of their output. If specified here, Älyverkko CLI can identify and remove them so that they don't leak into the conversation. Default value: null.

3.3.2. Configuration file example

The application is configured using a YAML-formatted configuration file. Below is an example of how the configuration file might look:

tasks_directory: "/home/user/AI/tasks"
models_directory: "/home/user/AI/models"
skills_directory: "/home/user/AI/skills"
llama_cli_path: "/home/user/AI/llama.cpp/build/bin/llama-cli"

# Generation parameters
default_temperature: 0.7
default_top_p: 0.9
default_repeat_penalty: 1.0

# Performance tuning
thread_count: 6
batch_thread_count: 10

# Model definitions
models:
  - alias: "default"
    filesystem_path: "model.gguf"
    context_size_tokens: 64000
    temperature: 0.8
    top_p: 0.9
    repeat_penalty: 1.1

  - alias: "mistral"
    filesystem_path: "Mistral-Large-Instruct-2407.Q8_0.gguf"
    context_size_tokens: 32768

3.3.3. Parameter Precedence Hierarchy

For temperature, top_p, top_k, min_p and repeat_penalty parameters, values are determined using this priority order (highest to lowest):

  1. Skill-specific value (from skill's YAML file)
  2. Model-specific value (from model configuration)
  3. Global default value (from main configuration)

This allows fine-grained control where more specific configurations override broader ones.
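The lookup can be pictured as a chain of fallbacks. A minimal shell sketch of the rule for one parameter (the function name is illustrative, not part of the CLI):

```shell
# First non-empty value wins: skill value, then model value, then global default.
resolve_param() {
  skill_value="$1"; model_value="$2"; global_default="$3"
  echo "${skill_value:-${model_value:-$global_default}}"
}

resolve_param ""    "0.8" "0.7"  # no skill value -> model value wins: 0.8
resolve_param "0.9" "0.8" "0.7"  # skill value set -> 0.9
```

With the example configuration above, a skill that sets temperature: 0.9 would therefore override both the model's 0.8 and the global 0.7.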

3.3.4. Enlisting available models

Once Älyverkko CLI is installed and properly configured, you can run the following command to see which models are available to it:

alyverkko-cli listmodels

Note: Models that reference missing files are automatically marked with a "-missing" suffix in their alias by the configuration wizard. You can manually remove this suffix after fixing the model file path.

3.3.5. Self test

The selftest command performs a series of checks to ensure the system is configured correctly:

alyverkko-cli selftest

It verifies:

  • Configuration file integrity.
  • Model directory existence.
  • The presence of the llama.cpp executable.

3.4. Skill concept and configuration

3.4.1. Skill File Format

Skills are defined in YAML files stored in the skills_directory.

Each skill file contains:

prompt: "Full system prompt text here"
temperature: 0.8       # Optional
top_p: 0.9             # Optional
top_k: 40              # Optional
min_p: 0.05            # Optional
repeat_penalty: 1.1    # Optional

The system prompt must contain <TASK-FILE>, which is replaced with the actual user prompt during execution.

3.4.2. Example Skill File

writer.yaml
temperature: 0.9
top_p: 0.95
prompt: |
  <|im_start|>system
  User will provide you with task that needs to be solved along with
  existing relevant information.

  You are artificial general intelligence system that always provides well reasoned responses.
  <|im_end|>
  <|im_start|>user
  /think Solve following problem:

  <TASK-FILE>

  <|im_end|>
  <|im_start|>assistant

See more example skills: default.yaml, summary.yaml

3.5. Starting process daemon

Älyverkko CLI continuously listens for and processes tasks from the configured tasks directory.

There are multiple alternative ways to start Älyverkko CLI in mail processing mode:

3.5.0.1. Start via command line interface
  1. Open your terminal.
  2. Run the command:

    alyverkko-cli process
    
  3. The application will start monitoring the configured tasks directory for incoming task files and will process them in an endless loop.
  4. To terminate Älyverkko CLI, press CTRL+C, or close the terminal window.
3.5.0.2. Start using your desktop environment application launcher
  1. Access the application launcher or application menu on your desktop environment.
  2. Search for "Älyverkko CLI".
  3. Click on the icon to start the application. It will open its own terminal.
  4. If you want to stop Älyverkko CLI, just close terminal window.
3.5.0.3. Start in the background as systemd system service

During Älyverkko CLI installation, the installation script asks whether you want to install a systemd service. If you answer Y, Älyverkko CLI is immediately started in the background as a system service, and it will also be started automatically on every system reboot.

To view service status, use:

systemctl -l status alyverkko-cli

If you want to stop or disable service, you can do so using systemd facilities:

sudo systemctl stop alyverkko-cli
sudo systemctl disable alyverkko-cli

4. Usage

4.1. Task file format

Task files follow a specific structure that begins with a header line:

TOCOMPUTE: [parameters]
[User prompt content]

Example complete task file:

TOCOMPUTE: skill=default model=default priority=5

If numbers had personalities,
what kind of personality might the number 'zero' have?

4.1.1. Task File Header Format

The first line must begin with exactly TOCOMPUTE: followed by space-separated key-value pairs:

TOCOMPUTE: skill=default model=mistral priority=10

Valid parameters in the header:

  • skill=[name]: Specifies which skill to use (defaults to "default")
  • model=[alias]: Specifies which AI model to use (defaults to "default")
  • priority=[integer]: Higher integers mean higher priority (default: 0)
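Because the header is plain text, queued tasks are easy to find from the shell. For example (the tasks directory path is an assumption):

```shell
# List task files still waiting for processing.
TASKS_DIR="${TASKS_DIR:-$HOME/AI/tasks}"
grep -rl '^TOCOMPUTE:' "$TASKS_DIR" 2>/dev/null || true
```

Files whose first line no longer starts with TOCOMPUTE: have already been processed or are still drafts.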

4.1.2. Processed File Format

After AI processing completes, the file is updated with:

  1. First line: DONE: skill=[name] model=[alias] duration=[time]
  2. A * USER: section containing the original user prompt
  3. An * ASSISTANT: section containing the AI response

For example, the header line might look like:

DONE: skill=writer model=default duration=5m
...

Example DONE file:

DONE: skill=default model=default duration=2m
* USER:

If numbers had personalities,
what kind of personality might the number 'zero' have?

* ASSISTANT:

1. Personality traits based on mathematical properties:
   - Neutral and unchanging unless influenced
   - Control over magnitude through multiplication
   - Neither positive/negative (impartial)
   - Foundational yet seemingly insignificant at times

2. Symbolism and metaphors:
   - Balance point between opposites
   - Container for infinite possibilities (potential)
   - Cycle of beginning/endpoints
   - Quiet supporter behind the scenes

3. Social dynamics:
   - Values community/cooperation
   - May feel undervalued but persists regardless
   - Adaptable depending on situation/placement

4. Philosophical/spiritual parallels:
   - Emptiness equates to purity/clean slate
   - Potential without action = dormant power

5. Gender and self-identity:
   - Ambiguous/genderless

6. Emotional characteristics:
   - Patient; allows time for events/actions to unfold
   - Resilient when facing neglect or misunderstanding
   - Persistent in fulfilling its purpose

7. Behavioral tendencies:
   - Fosters growth indirectly rather than directly contributing
     measurable output
   - Encourages exploration beyond limits via abstract reasoning
     innovations

4.2. Task preparation

The Älyverkko CLI application expects input files for processing as plain text files within the configured tasks directory (set in the YAML configuration file).

The suggested flow is to prepare AI assignments within the Älyverkko CLI tasks directory using a normal text editor. Once an AI assignment is ready for processing, you initiate AI processing on that file.

4.2.1. "joinfiles" command

Note: See also an alternative solution with a similar goal: prelude.

The joinfiles command is a utility for aggregating the contents of multiple files into a single document, which can then be processed by AI. This is particularly useful for preparing comprehensive problem statements from various source files, such as software project directories or collections of text documents.

4.2.1.1. Usage

To use the joinfiles command, specify the source directory containing the files you wish to join and a topic name that will be used to generate the output file name:

alyverkko-cli joinfiles -s /path/to/source/directory -t "my_topic"

If desired, you can also specify a glob pattern to match only certain files within the directory:

alyverkko-cli joinfiles -s /path/to/source/directory -p "*.java" -t "my_topic"

After joining the files, you can choose to open the resulting document in a text editor for further editing or review:

alyverkko-cli joinfiles -t "my_topic" --edit
4.2.1.2. Options
  • -s, --src-dir: Specifies the source directory from which to join files.
  • -p, --pattern: An optional glob pattern to match specific files within the source directory.
  • -t, --topic: The topic name used as the basis for the output file name; it should reflect the subject matter of the joined content.
  • -e, --edit: Opens the joined file in a text editor after the join operation is complete.
4.2.1.3. Example Use Case

Imagine you have a software project with various source files that you want to analyze using AI. You can use the joinfiles command to create a single document for processing:

alyverkko-cli joinfiles -s /path/to/project -p "*.java" -t "software_analysis" --edit

This will recursively search the project directory for Java source files, aggregate their contents into a file named software_analysis.org (within the AI processor input files directory), and open a text editor on that file, so that you can add your analysis instructions or problem statement. Finally, you initiate AI processing and, after some time, the results will appear at the end of the file.

4.3. Initiate AI processing

Once your task file is prepared, place the TOCOMPUTE: marker on the first line of that file so that it will be considered for processing.

When the Älyverkko CLI detects a new or modified file in the mail directory:

  1. It checks whether the file has "TOCOMPUTE:" on the first line. If not, the file is ignored. Otherwise Älyverkko CLI continues processing the file.
  2. It reads the content of the file and feeds it as input to an AI model to generate a response.
  3. Once the AI has generated a response, the application appends this response to the original contents of the same file, using org-mode syntax to distinguish between the user's query and the assistant's reply. The updated file contains both the original query (prefixed with "* USER:") and the AI's response (prefixed with "* ASSISTANT:"), ensuring a clear and organized conversation thread. "TOCOMPUTE:" is removed from the beginning of the file to avoid processing the same file again.

Note: During AI task file preparation, feel free to save intermediary states as often as needed, because the AI engine keeps ignoring the file until the TOCOMPUTE: line appears. Once the AI assignment is ready, add

TOCOMPUTE:

to the beginning of the file and save one last time. Älyverkko CLI will detect the new task approximately within one second after the file is saved and will start processing it.

If your text editor automatically reloads a file when it is changed by another process in the filesystem, the AI response will appear within the text editor as soon as it is ready. If needed, you can add further queries at the end of the file and re-add "TOCOMPUTE:" at the beginning; the AI will then process the file again, and the file becomes a stateful conversation. If you use the GNU Emacs text editor, you can benefit from purpose-built GNU Emacs utilities.
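Re-adding the header by hand gets repetitive. Outside of Emacs, a small shell helper can do it; the function name and file path below are illustrative, not part of the CLI:

```shell
# Prepend the TOCOMPUTE header to an existing draft so it gets processed.
mark_for_compute() {
  file="$1"
  printf 'TOCOMPUTE: skill=default model=default\n' | cat - "$file" > "$file.tmp" \
    && mv "$file.tmp" "$file"
}

# Usage: mark_for_compute ~/AI/tasks/draft.org
```

Saving through a temporary file keeps the draft intact if the write is interrupted.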

4.4. Helpful GNU Emacs utilities

Note: GNU Emacs and the following Emacs Lisp utilities are not required to use Älyverkko CLI. Their purpose is to increase comfort for existing GNU Emacs users.

4.4.1. Easily compose a new problem statement for the AI from Emacs

The Elisp function alyverkko-new-task facilitates the creation and opening of a new Org-mode file dedicated to a user-defined task within a specified directory. You can then use this file within Emacs to compose your problem statement to the AI.

When the alyverkko-new-task function is triggered, it first prompts the user to input a task name. This name serves as the basis for the filename.

The function then constructs a file path by concatenating the pre-defined alyverkko-task-files-directory (which should be set to your tasks directory), the task name, and the .org extension. If a file with this path does not already exist, the function creates a new empty file; either way, it then opens the file for editing.

(defvar alyverkko-task-files-directory "/home/user/my-ai-tasks-directory/"
  "Directory where task files are stored.")

(defun alyverkko-new-task ()
  "Create and open a task file in the specified directory."
  (interactive)
  (let ((task (read-string "Enter task name: ")))
    (let ((file-path (concat alyverkko-task-files-directory task ".org")))
      (if (not (file-exists-p file-path))
          (with-temp-file file-path
            ))
      (find-file file-path)
      (goto-char (point-max))
      (org-mode))))

4.4.2. Easily signal to AI that problem statement is ready for solving

The function alyverkko-compute is designed to enhance the workflow of users working with the Älyverkko CLI application by automating the process of marking text files for computation with a specific skill and AI model.

When the function is invoked, it detects available skills and allows the user to select one.

Thereafter the function detects available models from the Älyverkko CLI configuration file and allows the user to select one.

Finally, the function inserts something like the following at the beginning of the currently open file:

TOCOMPUTE: skill=<chosen-skill> model=<chosen-model> priority=<chosen-priority>

  • Adjust the skill-dir variable to point to your skills directory.
  • Adjust the config-file variable to point to your Älyverkko CLI configuration file path.

(defun alyverkko-compute ()
  "Interactively pick a skill, model, and priority, then insert a
TOCOMPUTE line at the top of the current buffer.

Adjust `skill-dir` and `config-file` to match your setup."
  (interactive)
  (let ((skill-dir "~/.config/alyverkko-cli/skills/")
        (config-file "~/.config/alyverkko-cli/alyverkko-cli.yaml")
        models)

    ;; Harvest model aliases from the Älyverkko CLI config
    (with-temp-buffer
      (insert-file-contents config-file)
      (goto-char (point-min))
      (when (search-forward-regexp "^models:" nil t)
        (while (search-forward-regexp "^\\s-+- alias: \"\\([^\"]+\\)\"" nil t)
          (push (match-string 1) models))))

    (if (file-exists-p skill-dir)
        (let* ((files   (directory-files skill-dir t "\\`[^.].*\\.yaml\\'"))
               (aliases (mapcar #'file-name-base files)))
          (if aliases
              (let* ((selected-alias (completing-read "Select skill: " aliases))
                     (model         (completing-read "Select AI model: " models))
                     (priority      (number-to-string
                                     (read-number "Priority (integer, default 0): " 0))))
                (alyverkko-insert-tocompute-line selected-alias model priority))
            (message "No skill files found.")))
      (message "Skill directory not found."))))

(defun alyverkko-insert-tocompute-line (skill-alias model &optional priority)
  "Insert a TOCOMPUTE line with SKILL-ALIAS, MODEL, and PRIORITY at
the top of the current buffer."
  (save-excursion
    (goto-char (point-min))
    (insert (format "TOCOMPUTE: skill=%s model=%s priority=%s\n"
                    skill-alias model (or priority "0")))
    (save-buffer)))

5. Getting the source code

5.1. Source code

6. Feature ideas

  • Document system requirements (minimum RAM, storage, OS)
  • Make glossary of concepts. (Task, Task directory, Skill, Model, etc)
  • Recommend some concrete AI models.
  • In task directory ignore binary files, use joinfiles command as example how to ignore binary files. Perhaps extract plain text file detection into some utility class.
  • Make text editor configurable in application properties file.
  • Extend the application's capabilities to include voice capture and processing.
  • Implement image capture and processing features.

Created: 2025-12-02 Tue 21:57
