Here’s a guide to installing Ollama on Windows.
✅ Requirements:
- Windows 10/11 (64-bit)
- At least 8 GB RAM (16 GB+ recommended)
- CPU-only works fine (models run much faster on a supported NVIDIA or AMD GPU)
- WSL2 is not required: current versions of Ollama ship a native Windows installer
🪜 Step-by-Step Installation
- Download Ollama
  - Go to: https://ollama.com/
  - Click Download for Windows
- Install Ollama
  - Run the installer.
  - Follow the prompts (a restart may be required).
- Open a terminal (CMD, PowerShell, or Windows Terminal)
  - You can now run commands like:
ollama run llama2
✅ Done! The first run downloads the model weights (e.g. LLaMA 2) and then starts an interactive chat, all locally.
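Before pulling large models, it can help to confirm the CLI is actually on your PATH. A small sketch (assumes the installer has finished; open a new terminal so the updated PATH is picked up):

```shell
# Sanity check after installing (assumes `ollama` was added to PATH by the installer)
if command -v ollama >/dev/null 2>&1; then
  status="installed"
  ollama --version   # prints the installed version
  ollama list        # lists locally downloaded models (empty on a fresh install)
else
  status="missing"
  echo "ollama not found on PATH - try reopening the terminal"
fi
```

If the command isn’t found right after installing, closing and reopening the terminal is usually enough.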
🧪 Try Other Models
After installation, you can run other models like:
ollama run mistral
ollama run gemma
ollama run codellama
ollama run orca-mini
Full model list here: https://ollama.com/library
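If you’d rather download weights ahead of time instead of at first `run`, `ollama pull` fetches a model without starting a chat. A sketch (guarded so it degrades gracefully if Ollama isn’t installed yet):

```shell
# Pre-download a model, then confirm it appears locally
if command -v ollama >/dev/null 2>&1; then
  pulled="yes"
  ollama pull mistral   # download only; no chat session starts
  ollama list           # the model should now appear here
else
  pulled="no"
  echo "install Ollama first (see the steps above)"
fi
```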
🔥 Open WebUI for Ollama (formerly ollama-webui)
This is a popular self-hosted project that connects directly to Ollama’s local API and gives you a full chat interface in the browser.
✅ Easy Way to Run It:
Use Ollama WebUI by Open WebUI
This one’s plug-and-play:
👉 GitHub: https://github.com/open-webui/open-webui
🚀 Steps:
- Make sure Ollama is installed and running:
ollama run mistral # or just `ollama serve`
- Install Docker (if you don’t have it): download Docker Desktop
- Run Open WebUI using Docker:
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  -e 'OLLAMA_BASE_URL=http://host.docker.internal:11434' \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
- Open in your browser: http://localhost:3000
You’ll now see a ChatGPT-like interface connected to Ollama!
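Under the hood, Open WebUI talks to the same local HTTP API you can query yourself. A quick way to verify the endpoint is reachable (assumes Ollama’s default port 11434; `/api/tags` returns the list of installed models as JSON):

```shell
# Query Ollama's local REST API directly (the same API Open WebUI connects to)
response="$(curl -s http://localhost:11434/api/tags || true)"
if [ -n "$response" ]; then
  msg="up"
  echo "Ollama API is up: $response"
else
  msg="down"
  echo "Ollama API not reachable - is 'ollama serve' running?"
fi
```

If this reports the API as unreachable, start Ollama (or run `ollama serve`) before launching the WebUI container.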