Hello everyone.
How do I install Ollama + Open WebUI without Docker and pip? How much memory and storage do you need? Before writing your answer, please make sure you have done it yourself: I have made several attempts and hit different errors along the way. Which installation method is the simplest in terms of commands?
Or perhaps you know a better solution than Open WebUI for using Ollama? I would like to use all the functionality of language models: image generation, sound, text, video, etc.
Docker/Podman is the easiest and most convenient option. Ollama itself is in the Arch repositories, and the WebUI is in the AUR. You can also build from git: you only need to install the dependencies from requirements.txt and build via npm. Storage and resource requirements depend on the size of the models you use.
Here, podman is in world:
pacman -Ss podman
world/podman 5.5.2-1
Tool and library for running OCI-based containers in pods
world/podman-docker 5.5.2-1
Emulate Docker CLI using podman
Hello Worm_Jim and tintin.
Thank you for your answers.
Would you mind writing out the list of commands you used for the install? I didn't succeed: I got various errors when building from the AUR.
https://aur.archlinux.org/packages/open-webui
I want to try your method, I'm sure that other users will find it useful :)
For Docker/Podman I can jot it down from memory:
~ ❯ docker pull ollama/ollama
~ ❯ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
~ ❯ docker exec ollama ollama run llama3.1
~ ❯ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
To run on a GPU you will need different commands, which you can find on the open-webui GitHub page. I wrote this from memory, so I may have gotten something wrong...
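For reference, the GPU run mostly differs by a single flag. A hedged sketch (the helper function is my own, not from Docker or Ollama; `--gpus=all` is Docker's documented NVIDIA option and requires the NVIDIA Container Toolkit on the host):

```shell
# Build the `docker run` flags for the ollama container, with or
# without GPU passthrough. --gpus=all is Docker's NVIDIA flag and
# needs the NVIDIA Container Toolkit installed on the host.
ollama_docker_flags() {
    base="-d -v ollama:/root/.ollama -p 11434:11434"
    if [ "$1" = "gpu" ]; then
        echo "$base --gpus=all"
    else
        echo "$base"
    fi
}

# Hypothetical usage:
#   docker run $(ollama_docker_flags gpu) --name ollama ollama/ollama
ollama_docker_flags gpu
# → -d -v ollama:/root/.ollama -p 11434:11434 --gpus=all
```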
Installation via the AUR was not as smooth: there were errors there too, but in the end everything worked. Still, it is not worth the time spent.
Thank you very much for the information and desire to help. I was looking for an answer and found this solution.
# curl -fsSL https://ollama.com/install.sh | sh
$ ollama serve
$ ollama pull qwen2.5-coder:7b
$ sudo pacman -S podman
$ podman run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
$ podman ps
To access the interface, you need to open the browser at
http://localhost:8080
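Before opening the browser, it can help to confirm Ollama itself is answering. Ollama exposes `/api/tags`, which lists pulled models; a tiny helper to build the URL (the function name and defaults are my own, matching Ollama's standard bind address):

```shell
# Build the URL for Ollama's model-listing endpoint. Defaults match
# Ollama's standard bind address 127.0.0.1:11434.
ollama_tags_url() {
    echo "http://${1:-127.0.0.1}:${2:-11434}/api/tags"
}

# Usage: curl -s "$(ollama_tags_url)"   # lists pulled models as JSON
ollama_tags_url
# → http://127.0.0.1:11434/api/tags
```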
I would like to supplement this post, as not everything is as simple as it may seem. I only use Podman, since unlike Docker it is daemonless and runs rootless by default. So, to run
Podman + Ollama + Open-Webui + OpenRC locally on your computer, follow these steps:
Step 1: Installing Ollama
$ curl -fsSL https://ollama.com/install.sh | sh
It would be great if Ollama appeared in the Artix Linux repository :) https://repology.org/project/ollama/versions
Step 2: Download the Ollama model
Go to the site https://ollama.com/search and choose a model, for example, it can be:
gemma3:4b (for communication) or qwen2.5-coder:7b (for coding)
Launch Ollama and download the model:
$ ollama serve
$ ollama pull gemma3:4b
or
$ ollama run gemma3:4b
Step 3: Installing Podman
$ sudo pacman -S podman
$ podman --version
Step 4: Create and run a container for Open-Webui
$ ollama serve
$ podman run -d \
--network=host \
-e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:latest
It will take some time... After that, check that open-webui is listening on the port:
$ lsof -i :8080
After the download, it is recommended to stop and restart the container:
$ podman ps -a # check the information about our container
$ podman stop open-webui # open-webui is the name we gave our container
$ podman start open-webui
You can run the command again:
$ podman ps -a
and look at the "STATUS" column; if it shows "Up", the container is running. Example:
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e59f88c91 ghcr.io/open-webui/open-webui:latest bash start.sh 14 hours ago Up 6 hours 8080/tcp open-webui
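For scripts, the STATUS value can be extracted rather than read by eye. A minimal sketch (the helper is hypothetical; it parses the output of `podman ps -a --format '{{.Names}} {{.Status}}'`, a standard podman/docker formatting option):

```shell
# Extract the STATUS field for a named container from
# `podman ps -a --format '{{.Names}} {{.Status}}'` style output.
container_status() {
    # $1: ps output (one "name status..." line per container)
    # $2: container name to look up
    printf '%s\n' "$1" | awk -v name="$2" '
        $1 == name { $1 = ""; sub(/^ /, ""); print }'
}

# Usage: container_status "$(podman ps -a --format '{{.Names}} {{.Status}}')" open-webui
container_status "open-webui Up 6 hours" open-webui
# → Up 6 hours
```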
Open your browser at http://127.0.0.1:8080; the Open-WebUI start page should load, and in the "Select model" field in the upper left corner you should see the model you downloaded. Then you can test it: ask questions, write scripts and code)
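Since the title mentions OpenRC, a service script saves running `ollama serve` by hand after every boot. A minimal, untested sketch of /etc/init.d/ollama (assumptions: install.sh placed the binary in /usr/local/bin, and a dedicated `ollama` user exists on the system):

```shell
#!/usr/bin/openrc-run
# /etc/init.d/ollama -- minimal OpenRC service sketch (config fragment;
# binary path and the "ollama" user are assumptions, adjust as needed).
description="Ollama LLM server"
command="/usr/local/bin/ollama"
command_args="serve"
command_user="ollama"
command_background="yes"
pidfile="/run/ollama.pid"
output_log="/var/log/ollama.log"
error_log="/var/log/ollama.log"

depend() {
    need net
}
```

Then, hypothetically: `rc-update add ollama default` followed by `rc-service ollama start`.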
Tips.💡
If you have forgotten your password, how do you reset it? First, locate the webui.db file with the find command:
$ find ~ -type f -iname "webui.db"
it will be something like..
.local/share/containers/storage/volumes/open-webui/_data/webui.db
Remove webui.db (note that this deletes all accounts, chats and settings, not just the password):
$ rm .local/share/containers/storage/volumes/open-webui/_data/webui.db
Start the previously created container again, and you will see the same prompt to create a login and password as on the first run.
$ podman start open-webui
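A slightly safer variant of this tip: move webui.db aside instead of deleting it, so accounts and chats can be restored by moving it back. The helper name is my own:

```shell
# Move webui.db to a .bak file instead of deleting it outright, so
# the old accounts and chats can be restored later if needed.
backup_webui_db() {
    db="$1"
    [ -f "$db" ] || { echo "no such file: $db" >&2; return 1; }
    mv "$db" "$db.bak"
    echo "moved to $db.bak"
}

# Usage:
#   backup_webui_db ~/.local/share/containers/storage/volumes/open-webui/_data/webui.db
```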
Now I'm trying to figure out how to set up STT and TTS voice input locally without the OpenAI API, since open-webui doesn't provide this out of the box and supports only a very limited number of languages.