Älyverkko CLI application setup
1. Requirements
Operating System:
Älyverkko CLI is developed and tested on Debian 12 "Bookworm". It should work on any modern Linux distribution with minimal adjustments to the installation process.
Dependencies:
- Java Development Kit (JDK) 17 or higher
- Apache Maven for building the project
Hardware Requirements:
- Modern multi-core CPU.
- The more RAM you have, the smarter the AI model you can use. For example, at least 64 GB of RAM is needed to run the pretty decent WizardLM-2-8x22B AI model.
- Sufficient disk space to store large language models and input/output data.
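If you are unsure what resources your machine has, standard Linux tools can report them; this is only an illustrative check, not part of the installation:
nproc
free -h
df -h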
2. Installation
At the moment, to use Älyverkko CLI, you need to:
- Download the sources and build the llama.cpp project (a build sketch follows this list).
- Download the sources and build the Älyverkko CLI project.
- Download one or more pre-trained large language models in GGUF format. The Hugging Face repository has a lot of them. My favorite is WizardLM-2-8x22B for its strong problem-solving skills.
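For llama.cpp, the build is roughly as follows. This is a minimal sketch; the exact procedure changes between llama.cpp releases (older versions used a plain Makefile instead of CMake), so consult the llama.cpp README if these commands do not match your checkout:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j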
Follow these instructions to obtain and build Älyverkko CLI on a computer running the Debian 12 operating system:
Ensure that you have the Java Development Kit (JDK) installed on your system:
sudo apt-get install openjdk-17-jdk
Ensure that you have Apache Maven installed:
sudo apt-get install maven
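You can confirm that both tools are available on the PATH (illustrative check):
java -version
mvn -version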
- Clone the code repository or download the source code for the `alyverkko-cli` application to your local machine.
- Navigate to the root directory of the cloned/downloaded project in your terminal.
Execute the installation script by running:
./install
This script will compile the application and install it into the directory
/opt/alyverkko-cli
To facilitate usage from the command line, it will also define a system-wide alyverkko-cli command as well as an "Älyverkko CLI" launcher in the desktop applications menu.
- Prepare Älyverkko CLI configuration file.
- Verify that the application has been installed correctly by running alyverkko-cli in your terminal.
3. Configuration
Älyverkko CLI is configured by editing a YAML-formatted configuration file.
The configuration file should be placed in the current user's home directory:
~/.config/alyverkko-cli.yaml
3.1. Configuration file example
The application is configured using a YAML-formatted configuration file. Below is an example of how the configuration file might look:
mail_directory: "/home/user/AI/mail"
models_directory: "/home/user/AI/models"
default_temperature: 0.7
llama_cpp_dir_path: "/home/user/AI/llama.cpp/"
batch_thread_count: 10
thread_count: 6
prompts_directory: "/home/user/.config/alyverkko-cli/prompts"
models:
  - alias: "default"
    filesystem_path: "WizardLM-2-8x22B.Q5_K_M-00001-of-00005.gguf"
    context_size_tokens: 64000
    end_of_text_marker: null
  - alias: "mistral"
    filesystem_path: "Mistral-Large-Instruct-2407.Q8_0.gguf"
    context_size_tokens: 32768
    end_of_text_marker: null
3.2. Configuration file syntax
Here are the available parameters:
- mail_directory
- Directory where the AI will look for files that contain problems to solve.
- models_directory
- Directory where AI models are stored.
- This option is mandatory.
- prompts_directory
- Directory where prompts are stored (a shell sketch for preparing one follows this parameter list).
- Example prompts directory content:
default.txt
writer.txt
- The prompt name is the file name without the extension. The file extension should be txt.
- Example content for writer.txt:
You are best-selling book writer.
- default_temperature
- Defines the default temperature for AI
responses, affecting randomness in the generation process. Lower
values make the AI more deterministic and higher values make it more
creative or random.
- Default value: 0.7
- llama_cpp_dir_path
- Specifies the filesystem path to the cloned
and compiled llama.cpp directory.
- Example value: /home/user/AI/llama.cpp
- This option is mandatory.
- batch_thread_count
- Specifies the number of threads to use for
input prompt processing. CPU computing power is usually the
bottleneck here.
- Default value: 10
- thread_count
- Sets the number of threads to be used by the AI
during response generation. RAM data transfer speed is usually the
bottleneck here. When RAM bandwidth is saturated, increasing the thread
count will no longer increase processing speed, but it will still
keep CPU cores unnecessarily busy.
- Default value: 6
- models
- List of available large language models.
- alias
- Short model alias. The model with the alias "default" is used by default.
- filesystem_path
- File name of the model as located within models_directory.
- context_size_tokens
- Context size in tokens that the model was trained on.
- end_of_text_marker
- Some models produce certain markers to indicate the end of their output. If specified here, Älyverkko CLI can identify and remove them so that they don't leak into the conversation. Default value: null.
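As mentioned under the prompts_directory parameter, a prompts directory can be prepared with ordinary shell commands. The path below matches the configuration example above and is only an illustration:
mkdir -p ~/.config/alyverkko-cli/prompts
echo "You are best-selling book writer." > ~/.config/alyverkko-cli/prompts/writer.txt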
3.3. Enlisting available models
Once Älyverkko CLI is installed and properly configured, you can run the following command on the command line to see what models are available to it:
alyverkko-cli listmodels
3.4. Self test
The selftest command performs a series of checks to ensure the system is configured correctly:
alyverkko-cli selftest
It verifies:
- Configuration file integrity.
- Model directory existence.
- The presence of the llama.cpp executable.
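If the llama.cpp check fails, it may help to manually inspect the directory given as llama_cpp_dir_path. Note that the name and location of the compiled binary depend on the llama.cpp version (for example, main in older builds, llama-cli under build/bin in newer ones), so this is only a rough illustration using the path from the configuration example:
ls /home/user/AI/llama.cpp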
4. Starting daemon
Älyverkko CLI continuously listens for and processes tasks from a specified mail directory.
There are multiple alternative ways to start Älyverkko CLI in mail processing mode:
4.0.1. Start via command line interface
- Open your terminal.
Run the command:
alyverkko-cli mail
- The application will start monitoring the configured mail directory for incoming messages and process them accordingly in an endless loop.
- To terminate Älyverkko CLI, press CTRL+C on the keyboard, or close the terminal window.
4.0.2. Start using your desktop environment application launcher
- Access the application launcher or application menu on your desktop environment.
- Search for "Älyverkko CLI".
- Click on the icon to start the application. It will open its own terminal.
- If you want to stop Älyverkko CLI, just close its terminal window.
4.0.3. Start in the background as systemd system service
During Älyverkko CLI installation, the installation script will ask whether you want to install a systemd service. If you choose Y, Älyverkko CLI is immediately started in the background as a system service. It will also be started automatically on every system reboot.
To view service status, use:
systemctl -l status alyverkko-cli
If you want to stop or disable service, you can do so using systemd facilities:
sudo systemctl stop alyverkko-cli
sudo systemctl disable alyverkko-cli
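Because the service runs under systemd, its log output can be followed with the standard journal tools, assuming the service name alyverkko-cli used above:
journalctl -u alyverkko-cli -f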