I would like to supplement this post, as not everything is as simple as it may seem. I only use Podman, because it is better than Docker for many reasons. So, to run Podman + Ollama + Open-WebUI + OpenRC locally on your computer, follow these steps:
Step 1: Installing Ollama
$ curl -fsSL https://ollama.com/install.sh | sh
It would be great if Ollama appeared in the Artix Linux repository
https://repology.org/project/ollama/versions
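One OpenRC note: the install script only sets up a systemd unit, which does nothing on Artix with OpenRC, so you either start ollama serve by hand or write a small service. Below is a minimal sketch of /etc/init.d/ollama; the binary path and the dedicated ollama user are assumptions, adjust them to your system (or drop command_user to run it as root):

#!/sbin/openrc-run
# Minimal sketch: binary path and service user are assumptions, adjust as needed.
description="Ollama local LLM server"
command="/usr/local/bin/ollama"
command_args="serve"
command_user="ollama"            # models will be stored in this user's ~/.ollama
command_background=true
pidfile="/run/ollama.pid"
output_log="/var/log/ollama.log"
error_log="/var/log/ollama.err"

depend() {
    need net
}

Make it executable and enable it:
$ sudo chmod +x /etc/init.d/ollama
$ sudo rc-update add ollama default
$ sudo rc-service ollama start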
Step 2: Download the Ollama model
Go to the site https://ollama.com/search and choose a model, for example, it can be:
gemma3:4b (for communication) or qwen2.5-coder:7b (for coding)
Launch Ollama and download the model:
$ ollama serve
$ ollama pull gemma3:4b
or, to pull the model and immediately start an interactive chat session:
$ ollama run gemma3:4b
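To double-check that the model is actually there and that the server answers, you can list the installed models with the CLI or over the local API (11434 is Ollama's default port):
$ ollama list
$ curl http://127.0.0.1:11434/api/tags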
Step 3: Installing Podman
$ sudo pacman -S podman
$ podman --version
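Optionally, do a quick sanity check that rootless Podman works before pulling the big Open-WebUI image (the alpine image here is just a throwaway example):
$ podman run --rm docker.io/library/alpine:latest echo "podman works"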
Step 4: Create and run the Open-WebUI container
$ ollama serve   # make sure Ollama is still running before starting the container
$ podman run -d \
--network=host \
-e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:latest
Pulling the image will take some time... After that, check whether Open-WebUI is listening on port 8080:
$ lsof -i :8080
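If you don't have lsof installed, ss from iproute2 does the same job, and the container logs are the first place to look if the page does not come up:
$ ss -tlnp | grep 8080
$ podman logs -f open-webui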
Once the image has been pulled, it is recommended to stop the container and start it again:
$ podman ps -a # check the information about our container
$ podman stop open-webui # open-webui is the name we gave our container
$ podman start open-webui
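Keep in mind that Podman has no daemon, so --restart always by itself will not bring the container back after a reboot. Here is a minimal OpenRC sketch that simply starts and stops the existing container (it assumes the container was created as root and that podman is in /usr/bin; for a rootless container you would have to run these commands as your own user):

#!/sbin/openrc-run
# /etc/init.d/open-webui -- minimal sketch, see assumptions above
description="Open-WebUI container"

depend() {
    need net
    use ollama    # the ollama service from Step 1, if you created it
}

start() {
    ebegin "Starting open-webui container"
    /usr/bin/podman start open-webui
    eend $?
}

stop() {
    ebegin "Stopping open-webui container"
    /usr/bin/podman stop open-webui
    eend $?
}

Enable it with rc-update add open-webui default.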
You can run $ podman ps -a again and look at the STATUS column: if it shows "Up", the container is running. Example:
$ podman ps -a
CONTAINER ID  IMAGE                                 COMMAND        CREATED       STATUS      PORTS     NAMES
127e59f88c91  ghcr.io/open-webui/open-webui:latest  bash start.sh  14 hours ago  Up 6 hours  8080/tcp  open-webui
Open your browser at http://127.0.0.1:8080 and the Open-WebUI start page should load. In the upper left corner, in the "Select model" field, you should see the model you downloaded. Then you can test it, ask your questions, and write scripts and code)
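If the model list stays empty, it usually means Open-WebUI cannot reach Ollama. In that case, test the Ollama API directly from the host first (gemma3:4b here is just the model from the example above):
$ curl http://127.0.0.1:11434/api/generate -d '{"model": "gemma3:4b", "prompt": "Hello", "stream": false}'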