Topic: How do I install Ollama + Open-Webui without docker and pip? (Read 2421 times)

How do I install Ollama + Open-Webui without docker and pip?

Hello everyone.
Could you tell me how to install Ollama + Open-WebUI without Docker and pip? How much memory and storage does it need? Before writing your answer, please make sure you have done it yourself: I have tried several approaches and received different errors with some of them. Which installation method is the easiest in terms of commands?

Or perhaps you know a better solution than Open-WebUI for using Ollama. I would like to use all the functionality of a language model: image generation, sound, text, video, etc.

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #1
Docker/Podman is the easiest and most convenient option. Ollama itself is in the Arch repositories, and the WebUI is in the AUR. You can also build from git: you only need to install the dependencies from requirements.txt (from the repositories) and build the frontend via npm. Storage and resource requirements depend on the size of the models you use.
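A rough sketch of the build-from-git route mentioned above. The repository URL is upstream's; the exact backend steps are assumptions you must check against requirements.txt and the project's README, not a tested recipe:

```shell
# Sketch only: clone, build the frontend with npm, then run the backend.
git clone https://github.com/open-webui/open-webui.git
cd open-webui
npm ci          # install frontend dependencies
npm run build   # produce the static frontend
# Backend: install the Python dependencies listed in backend/requirements.txt
# from the distro repositories/AUR instead of pip, then start it:
cd backend
./start.sh      # serves the UI on port 8080 by default
```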

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #2
Docker/Podman is the easiest and most convenient option.

Here, podman is in the world repository:

Code: [Select]
pacman -Ss podman
world/podman 5.5.2-1
    Tool and library for running OCI-based containers in pods
world/podman-docker 5.5.2-1
    Emulate Docker CLI using podman

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #3
Hello Worm_Jim and tintin.

Thank you for your answers.
Would you mind writing out the list of commands you used to install? I didn't succeed: I got various errors when building from the AUR.
https://aur.archlinux.org/packages/open-webui
I want to try your method; I'm sure other users will find it useful too :)

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #4
For Docker/Podman I can jot the commands down from memory:

Code: [Select]
~ ❯ docker pull ollama/ollama
~ ❯ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
~ ❯ docker exec ollama ollama run llama3.1
~ ❯ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

To run on a GPU you will need different commands, which you can find on the open-webui GitHub page. I wrote this from memory, so I could have gotten something wrong...
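For reference, the NVIDIA GPU variants as I remember them from the upstream pages look roughly like this; treat the exact flags and image tag as something to verify against the ollama and open-webui documentation:

```shell
# NVIDIA GPU sketch: nvidia-container-toolkit must already be configured
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# The Open-WebUI image with CUDA support uses the :cuda tag
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda
```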

Installation via the AUR is not so smooth: there were errors there too, and although everything worked in the end, it was not worth the time spent.

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #5
Thank you very much for the information and your willingness to help. I kept looking for an answer and found this solution.

Code: [Select]
# curl -fsSL https://ollama.com/install.sh | sh

$ ollama serve

$ ollama pull qwen2.5-coder:7b

$ sudo pacman -S podman

$ podman run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

$ podman ps

To access the interface, you need to open the browser at
Code: [Select]
http://localhost:8080 

Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #6
I would like to supplement this post, as not everything is as simple as it may seem. I only use Podman, since it is better than Docker for many reasons. To run Podman + Ollama + Open-WebUI on OpenRC locally on your computer, follow these steps:

Step 1: Installing Ollama

Code: [Select]
$  curl -fsSL https://ollama.com/install.sh | sh

It would be great if Ollama appeared in the Artix Linux repository :) https://repology.org/project/ollama/versions
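Since Artix uses OpenRC, the install script cannot register a systemd unit, so ollama serve has to be started by hand or via a service script. Below is a minimal sketch of an OpenRC service file; the binary path and log locations are my assumptions, so check where install.sh actually put ollama with `which ollama`:

```shell
#!/usr/bin/openrc-run
# /etc/init.d/ollama — minimal sketch, not an official service file
command="/usr/local/bin/ollama"   # assumed install path; verify with `which ollama`
command_args="serve"
command_background="yes"
pidfile="/run/ollama.pid"
output_log="/var/log/ollama.log"
error_log="/var/log/ollama.log"

depend() {
    need net
}
```

After saving it and making it executable, `rc-service ollama start` and `rc-update add ollama default` would manage it like any other OpenRC service.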

Step 2: Download the Ollama model

Go to https://ollama.com/search and choose a model; for example:

gemma3:4b (for conversation) or qwen2.5-coder:7b (for coding)

Launch Ollama and download the model:

Code: [Select]
$ ollama serve

$ ollama pull gemma3:4b

or

$ ollama run gemma3:4b

Step 3: Installing Podman

Code: [Select]
$ sudo pacman -S podman

$ podman --version

Step 4: Create and run a container for Open-WebUI

Make sure ollama serve is still running (start it again if needed):

Code: [Select]
$ ollama serve

Code: [Select]
$ podman run -d \
  --network=host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:latest


It will take some time... After that, check that open-webui is listening on the port:

Code: [Select]
$ lsof -i :8080 

After downloading, it is recommended to stop the container and start it again:

Code: [Select]
$ podman ps -a              # check the information about our container

$ podman stop open-webui    # open-webui is the name we gave our container

$ podman start open-webui

You can run the command again
Code: [Select]
$ podman ps -a
and look at "STATUS": if you see "Up", the container is running. Example:

Code: [Select]
$ podman ps -a
CONTAINER ID  IMAGE                                 COMMAND        CREATED       STATUS      PORTS       NAMES
127e59f88c91  ghcr.io/open-webui/open-webui:latest  bash start.sh  14 hours ago  Up 6 hours  8080/tcp    open-webui
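If you want to check the status from a script rather than by eye, something like the awk snippet below works on that table. The sample text just mirrors the output above, and `container_status` is a helper name I made up:

```shell
# Sample `podman ps -a` output, copied from above
ps_output='CONTAINER ID  IMAGE                                 COMMAND        CREATED       STATUS      PORTS       NAMES
127e59f88c91  ghcr.io/open-webui/open-webui:latest  bash start.sh  14 hours ago  Up 6 hours  8080/tcp    open-webui'

# Print the first word of the STATUS column ("Up" or "Exited") for a named container
container_status() {
  echo "$ps_output" | awk -v name="$1" '
    $NF == name { for (i = 1; i <= NF; i++) if ($i == "Up" || $i == "Exited") { print $i; exit } }'
}

container_status open-webui   # prints: Up
```

In real use you would pipe `podman ps -a` into the same awk instead of the sample variable.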

Open the browser at http://127.0.0.1:8080 and you should see the Open-WebUI start page; in the "Select model" field in the upper left corner you should see the model you downloaded. Then you can test it: ask questions, write scripts and code :)


Re: How do I install Ollama + Open-Webui without docker and pip?

Reply #7
Tips.💡

If you have forgotten your password, how do you reset it?

Let's find where the webui.db file is stored, using the find command:

Code: [Select]
$ find . -type f -iname "webui.db" 2>/dev/null   # run from your home directory

It will be something like:

Code: [Select]
.local/share/containers/storage/volumes/open-webui/_data/webui.db

It is safer to stop the container first (podman stop open-webui). Then remove webui.db (it is a regular file, so plain rm is enough):

Code: [Select]
$ rm .local/share/containers/storage/volumes/open-webui/_data/webui.db
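A slightly safer variant (my habit, not a requirement): rename the database instead of deleting it, so the old accounts can be restored with a reverse mv. Demonstrated here on a temporary stand-in file rather than the real volume path:

```shell
# Create a throwaway directory with a fake webui.db to demonstrate the rename
tmpdir=$(mktemp -d)
touch "$tmpdir/webui.db"

# Rename instead of rm — roll back later with `mv webui.db.bak webui.db` if needed
mv "$tmpdir/webui.db" "$tmpdir/webui.db.bak"

ls "$tmpdir"   # prints: webui.db.bak
```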


Start the previously created container again and you will see the same prompt to create a login and password as the first time.

Code: [Select]
$ podman start  open-webui 

Now I'm trying to figure out how to set up STT and TTS voice input locally, without the OpenAI API, since Open-WebUI doesn't have this feature built in and supports only a very limited number of languages.