
Install and run a local LLM AI (very easy).

Hi,

1. Install 'ollama':
Code: [Select]
sudo pacman -S ollama
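Optionally, you can verify the install worked first ('ollama --version' just prints the installed client version):
Code: [Select]
ollama --version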
2. Download an LLM model (about 4.7 GB):
Code: [Select]
nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama pull llama3:8b
or the larger model (about 40 GB):
Code: [Select]
nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama pull llama3:70b
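When the download finishes, an optional quick check confirms the model is on disk ('ollama list' prints every pulled model and its size; the server has to be running, so the same serve-and-sleep pattern is used):
Code: [Select]
nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama list ; pkill ollama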
3. Run the LLM model:
Code: [Select]
nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama run llama3:8b ; pkill ollama
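Inside the chat, type /bye to exit, which then lets pkill stop the server. 'ollama run' also accepts a prompt as a command-line argument, so a one-off non-interactive question can be scripted the same way; a minimal sketch with a made-up prompt:
Code: [Select]
nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama run llama3:8b "Summarise the plot of The Matrix in two sentences." ; pkill ollama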

That's it. Yes, it is really THAT simple.

Some optional nice stuff.
Personalisation:
~/.ollama/Modelfile-Agent_Smith :

Code: [Select]
FROM llama3
# set the temperature [higher is more creative, lower is more coherent] range: 0.0-2.0 default 0.8
PARAMETER temperature 1
# set the repetition penalty [higher values increase the penalty] range: 0.0-2.0 default 1
PARAMETER repeat_penalty 1.25
# set the system message; an alternative persona would be e.g.:
# You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
SYSTEM """
You are Agent Smith. Answer as Agent Smith, the assistant, only.
My name is Jefe. You call me Jefe when you talk to me.
"""

Code: [Select]
ollama serve >/dev/null 2>&1 & sleep 1 ; ollama create agentsmith -f ~/.ollama/Modelfile-Agent_Smith
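To double-check that the persona was baked in, a recent ollama can print the Modelfile a model was created from ('ollama show' with the --modelfile flag; this is an extra step, not required):
Code: [Select]
ollama serve >/dev/null 2>&1 & sleep 1 ; ollama show agentsmith --modelfile ; pkill ollama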

~/.local/share/applications/Agent_Smith.desktop :
Code: [Select]
[Desktop Entry]
Type=Application
Terminal=true
Name=Agent Smith
Exec=bash -c "nice -n 19 ollama serve >/dev/null 2>&1 & sleep 1 ; ollama run agentsmith ; pkill ollama"
Icon=computer
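Most desktop environments pick the new launcher up automatically. If you want to test it from a terminal first, one option (assuming glib's gio tool is installed, which most desktops ship) is:
Code: [Select]
gio launch ~/.local/share/applications/Agent_Smith.desktop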

Have fun.

P.S. To delete a model, remove all derived models first (i.e. 'ollama rm agentsmith') and only then the base model 'llama3:8b', as in the example below.
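For example (same serve/pkill wrapper as above; adjust the names to whatever you created and pulled):
Code: [Select]
ollama serve >/dev/null 2>&1 & sleep 1 ; ollama rm agentsmith ; ollama rm llama3:8b ; pkill ollama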

Re: Install and run a local LLM AI (very easy).

Reply #1
Thanks for this :D Now I just need an external GPU. CPU won't cut it.
Where I come from, Quality is job one, so we stay up all night just to get the job done.