Älyverkko CLI application

Table of Contents

1. General

1.1. Source code

2. Introduction

The Älyverkko CLI application is a user-friendly tool, written in Java, that streamlines the use of large language models through CPU-based computation in batch processing mode.

To illustrate its capabilities: Imagine harnessing the power of a vast language model, boasting around 100 billion parameters, solely relying on CPU computations and leveraging the open-source software llama.cpp. This setup requires a modern consumer-grade CPU and approximately 128 GB of RAM. To put this into perspective, 128 GB of RAM is financially comparable to purchasing a high-quality smartphone, making it an economical option for many users.

In contrast, executing the same model on an Nvidia GPU could result in costs that are significantly higher, potentially by an order of magnitude.

However, there is a trade-off: CPU-based processing for such extensive models is inherently slow. This means real-time interaction, like chatting with the AI, wouldn't be practical or enjoyable due to the lag in response times. Nevertheless, when deployed in a non-interactive batch mode, this "slow but smart" AI can complete numerous valuable tasks within a 24-hour window. For instance, it could generate a substantial amount of code, potentially exceeding what you could thoroughly review in the same timeframe. Additionally, it could process more documents than most individuals would be inclined to read manually.

The primary objective of the Älyverkko CLI project is to identify and enable applications where this "slow but smart" AI can excel. By utilizing llama.cpp as its inference engine, the project aims to unlock a variety of uses where batch processing is more advantageous than real-time interaction.

Here are some practical examples of tasks suited for the Älyverkko CLI application:

  • Automated Report Generation: The AI can analyze large datasets overnight and produce comprehensive reports by morning, saving countless hours of manual analysis.
  • Code Development Support: Developers could use the AI to generate code snippets or even entire modules based on specifications provided, which they can then integrate and refine during their workday.
  • Content Creation: The AI can draft articles, create outlines for books, or compile research notes, providing a solid foundation for content creators to edit and finalize.
  • Data Processing: It can process and organize large volumes of unstructured data, such as customer feedback or scientific observations, into structured formats ready for human review.

In summary, the Älyverkko CLI application opens up a realm of possibilities for leveraging powerful AI in scenarios where immediate responses are not critical, but high-quality batch processing output is highly beneficial.

Note: the project is still at an early stage.

3. Setup

3.1. Installation

At the moment, to use Älyverkko CLI, you need to:

  • Download the sources and build the llama.cpp project.
  • Download the sources and build the Älyverkko CLI project.
  • Download one or more pre-trained large language models in GGUF format. The Hugging Face repository hosts many of them. My favorite is WizardLM-2-8x22B for its strong problem-solving skills.

Follow these instructions to obtain and build Älyverkko CLI on a computer running the Debian 12 operating system:

  1. Ensure that you have Java Development Kit (JDK) installed on your system.

    sudo apt-get install openjdk-17-jdk
  2. Ensure that you have Apache Maven installed:

    sudo apt-get install maven
  3. Clone the code repository or download the source code for the `alyverkko-cli` application to your local machine.
  4. Navigate to the root directory of the cloned/downloaded project in your terminal.
  5. Execute the installation script by running:


    This script will compile the application and install it to the following directory:


    To facilitate command-line usage, it will also define the system-wide command alyverkko-cli, as well as an "Älyverkko CLI" launcher in the desktop applications menu.

  6. Prepare Älyverkko CLI configuration file.
  7. Verify that the application has been installed correctly by running alyverkko-cli in your terminal.

3.2. Configuration

Älyverkko CLI is configured by editing a YAML-formatted configuration file.

The configuration file should be placed in the current user's home directory:


3.2.1. Configuration file example

The application is configured using a YAML-formatted configuration file. Below is an example of how the configuration file might look:

mail_directory: "/home/user/AI/mail"
models_directory: "/home/user/AI/models"
default_temperature: 0.7
llama_cpp_executable_path: "/home/user/AI/llama.cpp/main"
batch_thread_count: 10
thread_count: 6
models:
  - alias: "default"
    filesystem_path: "WizardLM-2-8x22B.Q5_K_M-00001-of-00005.gguf"
    context_size_tokens: 64000
    end_of_text_marker: null
  - alias: "maid"
    filesystem_path: "daringmaid-20b.Q4_K_M.gguf"
    context_size_tokens: 4096
    end_of_text_marker: null
prompts:
  - alias: "default"
    prompt: |
      This conversation involves a user and AI assistant where the AI
      is expected to provide not only immediate responses but also detailed and
      well-reasoned analysis. The AI should consider all aspects of the query
      and deliver insights based on logical deductions and comprehensive understanding.
      AI assistant should reply using emacs org-mode syntax.
      Quick recap: *this is bold* [[http://domain.org][This is link]]
      * Heading level 1
      ** Heading level 2
      | Col 1 Row 1 | Col 2 Row 1 |
      | Col 1 Row 2 | Col 2 Row 2 |
      #+BEGIN_SRC python
        print ('Hello, world!')
      #+END_SRC

  - alias: "writer"
    prompt: |
      You are best-selling book writer.

3.2.2. Configuration file syntax

Here are the available parameters:

mail_directory
  Directory where the AI will look for files that contain problems to solve.

models_directory
  Directory where AI models are stored.
  • This option is mandatory.

default_temperature
  Defines the default temperature for AI responses, affecting randomness in the generation process. Lower values make the AI more deterministic; higher values make it more creative or random.
  • Default value: 0.7

llama_cpp_executable_path
  Specifies the file path to the llama.cpp main executable.
  • Example value: /home/user/AI/llama.cpp/main
  • This option is mandatory.

batch_thread_count
  Specifies the number of threads to use for input prompt processing. CPU computing power is usually the bottleneck here.
  • Default value: 10

thread_count
  Sets the number of threads used by the AI during response generation. RAM data transfer speed is usually the bottleneck here. Once RAM bandwidth is saturated, increasing the thread count no longer increases processing speed, but it still keeps CPU cores unnecessarily busy.
  • Default value: 6

models
  List of available large language models. Each entry has the following keys:
  • alias: short model alias.
  • filesystem_path: file name of the model as located within models_directory.
  • context_size_tokens: context size in tokens that the model was trained on.
  • end_of_text_marker: some models produce certain markers to indicate the end of their output. If specified here, Älyverkko CLI can detect and remove them so that they don't leak into the conversation. Default value: null.

prompts
  List of predefined system prompts for the AI. Each entry has the following keys:
  • alias: short prompt alias.
  • prompt: the actual prompt that will be sent to the AI alongside the user's question.


While it is possible to configure many prompts and models, at the moment Älyverkko CLI will always choose the model and prompt with the "default" alias. This is going to be fixed soon.

3.2.3. Enlisting available models

Once Älyverkko CLI is installed and properly configured, you can run the following command at the command line to see what models are available to it:

alyverkko-cli listmodels

3.3. Starting daemon

Älyverkko CLI continuously listens for and processes tasks from a specified mail directory.

There are multiple alternative ways to start Älyverkko CLI in mail processing mode.

Start via the command line interface
  1. Open your terminal.
  2. Run the command:

    alyverkko-cli mail
  3. The application will start monitoring the configured mail directory for incoming messages and process them in an endless loop.
  4. To terminate Älyverkko CLI, hit CTRL+C on the keyboard or close the terminal window.

Start using your desktop environment application launcher
  1. Access the application launcher or application menu on your desktop environment.
  2. Search for "Älyverkko CLI".
  3. Click on the icon to start the application. It will open its own terminal.
  4. If you want to stop Älyverkko CLI, just close the terminal window.

Start in the background as a systemd system service

During Älyverkko CLI installation, the installation script will ask whether you want to install a systemd service. If you choose Y, Älyverkko CLI is immediately started in the background as a system service, and it will also be started automatically on every system reboot.

To view service status, use:

systemctl -l status alyverkko-cli

If you want to stop or disable service, you can do so using systemd facilities:

sudo systemctl stop alyverkko-cli
sudo systemctl disable alyverkko-cli

4. Usage

The Älyverkko CLI application expects input files for processing in the form of plain text files within the specified mail directory (configured in the YAML configuration file). Each file should begin with a `TOCOMPUTE:` marker on the first line to be considered for processing.

When the application detects a new or modified file in the mail directory:

  1. It checks whether the file has "TOCOMPUTE:" on the first line. If not, the file is ignored; otherwise Älyverkko CLI continues processing it.
  2. It reads the content of the file and feeds it as an input for an AI model to generate a response.
  3. Once the AI has generated a response, the application appends it to the original mail contents within the same file, using org-mode syntax to distinguish between the user's query and the assistant's reply. The updated file contains both the original query (prefixed with "* USER:") and the AI's response (prefixed with "* ASSISTANT:"), ensuring a clear and organized conversation thread. "TOCOMPUTE:" is removed from the beginning of the file to avoid processing the same file again.
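To make the flow concrete, here is a hypothetical task file before and after processing. The question text and the response placeholder are illustrative; only the TOCOMPUTE:, "* USER:" and "* ASSISTANT:" markers come from the behavior described above:

```
# Before processing:
TOCOMPUTE:
Please summarize the benefits of batch-mode AI processing.

# After processing (TOCOMPUTE: removed, response appended):
* USER:
Please summarize the benefits of batch-mode AI processing.

* ASSISTANT:
<AI-generated response appears here>
```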

The suggested way to use mail processing mode is to prepare assignments within the Älyverkko CLI mail directory using a normal text editor. Feel free to save intermediary states. Once the AI assignment is ready, add

TOCOMPUTE:

to the beginning of the file and save one last time. Älyverkko CLI will detect the new task within one second and start processing it.
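The marking step can also be scripted. The sketch below prepends the TOCOMPUTE: marker with GNU sed; the temporary file is a stand-in for a real assignment file in your configured mail directory:

```shell
# Sketch: mark a finished assignment file for AI processing by
# prepending the TOCOMPUTE: marker. A temporary file stands in
# for a real file in your configured mail_directory.
f=$(mktemp)
printf '* My question for the AI\n' > "$f"
sed -i '1i TOCOMPUTE:' "$f"   # insert marker before the first line
head -n 1 "$f"                # prints: TOCOMPUTE:
rm "$f"
```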

If your text editor automatically reloads a file when it is changed by another process in the filesystem, the AI response will appear in your text editor as soon as it is ready. If needed, you can add further queries at the end of the file and re-add "TOCOMPUTE:" at the beginning of the file. This way the AI will process the file again, and the file becomes a stateful conversation. If you use the GNU Emacs text editor, you can benefit from purpose-built GNU Emacs utilities.

Helpful GNU Emacs utilities

Note: GNU Emacs and the following Emacs Lisp utilities are not required to use Älyverkko CLI. Their purpose is to increase comfort for existing GNU Emacs users.

Easily compose a new problem statement for AI from Emacs

The Elisp function ai-new-topic facilitates the creation and opening of a new Org-mode file dedicated to a user-defined topic within a specified directory. You can then use this file within Emacs to compose your problem statement to the AI.

When the ai-new-topic function is triggered, it first prompts the user to input a topic name. This name serves as the basis for the filename and the title within the document.

The function then constructs a file path by concatenating the pre-defined ai-topic-files-directory (which should be set to your topics directory), the topic name, and the .org extension. If a file at this path does not already exist, the function creates it; in either case the file is then opened for editing.

(defvar ai-topic-files-directory "/home/user/my-ai-mail-directory/"
  "Directory where topic files are stored. Set it to directory you want to use.")

(defun ai-new-topic ()
  "Create and open a topic file in the specified directory."
  (interactive)
  (let* ((topic (read-string "Enter topic name: "))
         (file-path (concat ai-topic-files-directory topic ".org")))
    (unless (file-exists-p file-path)
      (with-temp-file file-path
        (insert "#+TITLE: " topic "\n\n")))
    (find-file file-path)
    (goto-char (point-max))
    (org-mode)))

Easily signal to AI that problem statement is ready for solving

When the ai-compute function is triggered, it inserts a "TOCOMPUTE:" line at the beginning of the file and saves it, marking the file for processing by the AI.

(defun ai-compute ()
  "Insert 'TOCOMPUTE:' at the beginning of the buffer and save it."
  (interactive)
  (goto-char (point-min))   ; Move to the beginning of the buffer
  (insert "TOCOMPUTE:\n")   ; Insert the marker followed by a newline
  (save-buffer))            ; Save the buffer
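For quicker access, the two commands above can be bound to keys. The key choices below are only a suggestion, not part of Älyverkko CLI itself:

```elisp
;; Suggested (hypothetical) key bindings for the helper commands above.
(global-set-key (kbd "C-c a t") #'ai-new-topic)
(global-set-key (kbd "C-c a c") #'ai-compute)
```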


5. Future ideas

Ideas that may be implemented in the future:

5.1. System operation

  • Implement CPU nice priority for inference processes to minimize the impact on system responsiveness during heavy computations.
  • Enable model selection per individual inference task, allowing for dynamic adjustment based on task requirements.
  • Allow specification of custom prompts for each inference task to tailor interactions precisely.
  • Introduce an aliasing system for frequently used prompts, streamlining the prompt selection process.
  • Consider implementing a plugin architecture to allow third-party developers to extend Älyverkko CLI's functionality with custom modules or integrations.
  • Possibility to easily pause and resume Älyverkko CLI without losing in-progress computation. The Unix process stop and continue signals (SIGSTOP/SIGCONT) could be used.

5.2. Data management

  • Develop a feature to recursively aggregate files into a single document using Emacs org-mode syntax, facilitating the preparation of comprehensive problem statements for AI processing.
    • Ensure that binary files are excluded from this aggregation process to maintain text readability and compatibility.

5.3. Configuration and logging

  • Implement a fallback mechanism to use a system-wide configuration file located in `/etc/` if no user-specific configuration is found, enhancing out-of-the-box usability for new users.
  • Introduce optional logging of `llama.cpp` output to aid in debugging and performance monitoring without cluttering the standard output.

5.4. Integration with external services

  • Add capabilities to connect with Jira, fetch content, and potentially update issues or comments based on AI processing results.
  • Implement similar integration with Confluence for content retrieval and updates.
  • Extend the application's reach by adding the ability to interact with arbitrary web sites, enabling information extraction and automation of web-based tasks.

5.5. Tooling enhancements

  • Incorporate Python script execution capabilities directly by the AI, expanding the range of available data manipulation and processing tools.
  • Integrate relational database access to leverage structured data storage and retrieval in workflows.
  • Enable web request functionality to interact with RESTful APIs or scrape web content as part of task execution.
  • Introduce a notebook feature that allows the AI to maintain and reference its own notes, fostering context retention across tasks.

5.6. Multimedia processing

  • Extend the application's capabilities to include voice capture and processing, opening up new avenues for interaction beyond text-based communication.
  • Implement image capture and processing features, enabling tasks that involve image analysis or content extraction from visual data.

5.7. Task queue management

  • Refactor the task queue mechanism to support:
    • Multiple task sources, including a REST API endpoint for programmatic task submission.
    • Load balancing across multiple executors (instances of Älyverkko CLI) with dynamic registration and unregistration without system interruption.
    • Task priority assignments, ensuring that critical tasks are processed in a timely manner.

5.8. User interface development

  • Create a web-based UI to provide users with an interface for task submission and result retrieval, improving accessibility and user experience.
  • Integrate Quality of Service (QoS) concepts within the UI to ensure equitable resource allocation among users.
  • Implement administrative features for managing user accounts and system resources, maintaining a secure and efficient operating environment.

Created: 2024-05-19 Sun 23:54