This folder contains different docker-compose.yml configurations for various use cases.
Use this if: You want complete privacy with zero external API dependencies
Features:
- Ollama: Local LLM and embeddings (mistral, llama, etc.)
- Speaches: Local TTS (text-to-speech) and STT (speech-to-text)
- Everything runs on your machine - nothing sent to cloud
- Perfect for privacy, offline work, or air-gapped environments
Setup:
- Copy to your project folder as `docker-compose.yml`
- Run: `docker compose up -d`
- Download models (see file comments for commands)
- Configure all providers in the UI (detailed instructions in file)
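The steps above can be sketched as shell commands. The example filename is a placeholder (use the actual file from this folder), and the container name assumes docker compose's default `<project>-<service>-1` naming — check `docker compose ps` if yours differs:

```shell
# Copy the fully local example into your project (filename is a placeholder)
cp examples/docker-compose-local.yml docker-compose.yml

# Start all services in the background
docker compose up -d

# Download a local LLM and an embedding model into the Ollama container
docker exec open_notebook-ollama-1 ollama pull mistral
docker exec open_notebook-ollama-1 ollama pull nomic-embed-text
```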
Requirements:
- Minimum: 8GB RAM, 20GB disk, 4 CPU cores
- Recommended: 16GB+ RAM, NVIDIA GPU (8GB+ VRAM), 50GB disk
Use this if: You want free TTS/STT but use cloud LLMs
Features:
- Speaches: Local text-to-speech and speech-to-text
- Use with cloud LLM providers (OpenAI, Anthropic, etc.)
- Great for podcast generation without TTS API costs
- Private audio processing
Setup:
- Copy to your project folder as `docker-compose.yml`
- Run: `docker compose up -d`
- Download speech models (see file for commands)
- Configure cloud LLM + local Speaches in the UI
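Once the container is running, a quick smoke test is possible with curl, since Speaches exposes an OpenAI-compatible audio API. The port, model, and voice below are assumptions — check the compose file and the Speaches docs for the actual values:

```shell
# Synthesize a short clip via the OpenAI-compatible speech endpoint
# (port 8000, the Kokoro model, and voice "af_heart" are assumptions)
curl http://localhost:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "speaches-ai/Kokoro-82M-v1.0-ONNX", "input": "Hello from Speaches", "voice": "af_heart"}' \
  -o hello.mp3
```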
Use this if: You want to run AI models locally without API costs
Features:
- Includes Ollama service for local AI models
- No external API keys needed (for LLM and embeddings)
- Full privacy - everything runs on your machine
- Great for testing or privacy-focused deployments
Setup:
- Copy to your project folder as `docker-compose.yml`
- Run: `docker compose up -d`
- Pull a model: `docker exec open_notebook-ollama-1 ollama pull mistral`
- Configure in UI: Settings → API Keys → Add Ollama (URL: `http://ollama:11434`)
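Before configuring the UI, you can confirm Ollama is reachable from the host (assuming the default `11434` port mapping):

```shell
# List models known to the Ollama server; an empty "models" array means none pulled yet
curl http://localhost:11434/api/tags
```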
Recommended models:
- LLM: `mistral`, `llama3.1`, `qwen2.5`
- Embeddings: `nomic-embed-text`, `mxbai-embed-large`
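The recommended models above can be pulled in one go. The container name assumes docker compose's default `<project>-<service>-1` naming; adjust it if yours differs:

```shell
# Pull the recommended LLM and embedding models into the Ollama container
for model in mistral llama3.1 qwen2.5 nomic-embed-text mxbai-embed-large; do
  docker exec open_notebook-ollama-1 ollama pull "$model"
done
```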
Use this if: You need all services in one container (not recommended)
The multi-container setup (`docker-compose.yml` in root) is recommended instead, for better reliability and easier troubleshooting.
Features:
- Single container includes SurrealDB, API, and Frontend
- Simpler for very constrained environments
- Less flexible for debugging and scaling
Use this if: You're contributing to Open Notebook or developing custom features
Features:
- Hot-reload for code changes
- Separate backend and frontend services
- Build from source instead of using pre-built images
- Includes development tools and debugging
Prerequisites:
- Python 3.11+
- Node.js 18+
- uv (Python package manager)
Setup: See Development Guide
- Choose the example that fits your use case
- Copy the file to your project folder: `cp examples/docker-compose-ollama.yml docker-compose.yml`
- Edit the `OPEN_NOTEBOOK_ENCRYPTION_KEY` value
- Run the services: `docker compose up -d`
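Step 2 needs a strong random value. One common way to generate one is with `openssl` (this is a convenience, not a project requirement):

```shell
# Generate a random 32-byte, base64-encoded value to use as the encryption key
openssl rand -base64 32
```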
You can combine features from multiple examples. Common customizations:
Add this to the main `docker-compose.yml`:

```yaml
ollama:
  image: ollama/ollama:latest
  ports:
    - "11434:11434"
  volumes:
    - ollama_models:/root/.ollama
  restart: always

volumes:
  ollama_models:
```

Add to the `open_notebook` service environment:
```yaml
- BASIC_AUTH_USERNAME=admin
- BASIC_AUTH_PASSWORD=your-secure-password
```

- Discord: Join our community
- Issues: GitHub Issues