
Ollama
Run large language models locally with an OpenAI-compatible API. Supports Llama, Qwen, Mistral, DeepSeek, Gemma and 100+ open models.
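Once running, the server exposes an OpenAI-compatible endpoint. A minimal sketch of calling it with curl (the host and the model name "llama3" are placeholders you would adjust; this assumes a model has already been pulled):

```shell
# Sketch: chat completion via Ollama's OpenAI-compatible API
# (assumes Ollama is reachable at localhost:11434 and "llama3" is pulled)
OLLAMA_URL="http://localhost:11434/v1/chat/completions"
curl -s "$OLLAMA_URL" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the endpoint follows the OpenAI schema, existing OpenAI client libraries can also point at this URL.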
Deploy Ollama in 3 Steps
Connect Your VPS
Add your server credentials to Server Compass
Select Ollama
Choose from our template library
Deploy & Configure
Fill in settings and click Deploy
Learn How to Deploy Ollama
DIY Ollama Deployment
Learn how to self-host Ollama with this hands-on deployment guide.
Start a Secure Shell Session
Open your terminal and connect to your server. Replace the IP address with your VPS IP.
# SSH into your server
ssh root@your-server-ip
# Using a custom SSH key
ssh -i ~/.ssh/id_rsa root@your-server-ip
First time? Need Docker? Install it: curl -fsSL https://get.docker.com | sh
Prepare Your Workspace
Set up a clean directory for your application.
# Create and navigate to project directory
mkdir -p ~/apps/ollama
cd ~/apps/ollama
Set Up Container Configuration
Set up the container stack using this Docker Compose configuration:
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_KEEP_ALIVE=5m
    restart: unless-stopped

volumes:
  ollama_data:
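If you want the port and keep-alive values to be configurable rather than hardcoded, Docker Compose can read them from a `.env` file placed next to `docker-compose.yml`. This is a sketch that assumes you also edit the compose file to reference the variables, e.g. `"${OLLAMA_PORT}:11434"` and `OLLAMA_KEEP_ALIVE=${OLLAMA_KEEP_ALIVE}`:

```shell
# Sketch: parameterize the stack via a .env file
# (assumes docker-compose.yml references ${OLLAMA_PORT} and ${OLLAMA_KEEP_ALIVE})
cat > .env <<'EOF'
OLLAMA_PORT=11434
OLLAMA_KEEP_ALIVE=5m
EOF
cat .env
```

Compose substitutes these values automatically on `docker compose up`, so changing the port later is a one-line edit.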
PORT: Host port to expose the Ollama API (default: 11434)
KEEP_ALIVE: Model keep-alive duration (default: 5m)
Bring Up the Application
Launch your application stack in the background.
# Start the containers in detached mode
docker compose up -d
# Check if containers are running
docker compose ps
# View logs
docker compose logs -f
Configure Firewall
Configure your firewall to permit external connections. Note that the Ollama API has no built-in authentication, so consider restricting access to trusted IP addresses.
# Allow the application port through firewall
sudo ufw allow 11434/tcp
sudo ufw reload
# Access your app at:
# http://your-server-ip:11434
Prefer a visual interface? Use Server Compass.
Let Server Compass handle the complexity. Deploy Ollama with a simple, intuitive interface.
- Visual configuration UI
- One-click deployment
- Automatic SSL setup
- Zero-downtime updates
- Built-in monitoring
- One-click rollbacks
After Deployment
After deploying Ollama with Server Compass, complete these steps to finish setup:
Open the Ollama tab in Server Compass to manage models and test the API
Pull your first model (Qwen3.5-9B recommended)
Use the API section to get endpoint URL and code snippets
Test with the built-in chat interface
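The same checks can be done from the command line on the server. A sketch, assuming the compose stack above is running on the default port (the model name "llama3" is illustrative):

```shell
# CLI equivalents of the steps above (run on the server)
API="http://localhost:11434"
docker compose exec ollama ollama pull llama3   # pull a model inside the container
curl -s "$API/api/tags"                          # list installed models as JSON
curl -s "$API/api/generate" \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'  # quick test prompt
```

If `/api/tags` returns a JSON model list, the API is up and reachable.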
Need help? Check out our documentation for detailed guides.
Ollama FAQ
Common questions about self-hosting Ollama
How do I deploy Ollama with Server Compass?
Simply download Server Compass, connect to your VPS, and select Ollama from the templates list. Fill in the required configuration and click Deploy. The entire process takes under 3 minutes.
What are the system requirements for Ollama?
Ollama requires a minimum of 8 GB of RAM. We recommend a VPS with at least 16 GB of RAM for optimal performance. Any modern Linux server with Docker support will work.
Can I migrate my existing Ollama data?
Yes! Server Compass provides volume mapping that allows you to import existing data. You can also use standard Ollama backup and restore procedures.
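For a manual backup, one common pattern is archiving the Docker volume with a throwaway container. A sketch, assuming the `ollama_data` volume name from the compose file above:

```shell
# Sketch: archive the ollama_data volume to a tarball in the current directory
BACKUP_FILE="ollama_data.tgz"
docker run --rm -v ollama_data:/data -v "$PWD":/backup alpine \
  tar czf "/backup/$BACKUP_FILE" -C /data . && echo "wrote $BACKUP_FILE"
```

Restoring is the reverse: extract the tarball into a fresh `ollama_data` volume before starting the stack.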
How do I update Ollama to the latest version?
Server Compass makes updates easy. Simply click the Update button in your deployment dashboard, and the latest Ollama image will be pulled and deployed with zero downtime.
Is Ollama free to self-host?
Ollama is open-source software. You only pay for your VPS hosting (typically $5-20/month) and optionally Server Compass ($29 one-time). No subscription fees or per-seat pricing.
Related Templates
View all Development
PocketBase
Open-source backend in a single file with realtime database, auth, and file storage

Appwrite
Open-source backend-as-a-service - self-hosted Firebase alternative

Parse Server
Open-source backend framework with dashboard

Supabase
Full Supabase self-hosted with Kong, GoTrue Auth, Realtime, and Studio
Ready to Self-Host Ollama?
Download Server Compass and deploy Ollama to your VPS in under 3 minutes. No Docker expertise required.
Download Server Compass