6. Harbor Compose Setup
There are a few key points to how Harbor uses Docker that are instrumental for the integration of all the services together.
Harbor organizes service-related files in the services/ directory:
harbor/
compose.yml # Base compose file (always included)
.env # Shared environment configuration
services/
compose.ollama.yml # Service compose files
compose.webui.yml
compose.x.ollama.webui.yml # Cross-service files
ollama/ # Service directories
override.env
modelfiles/
webui/
override.env
configs/
Docker Compose offers multi-file configuration capabilities out of the box. However, it's rarely scaled beyond a few files: it quickly becomes non-trivial to reference multiple files dynamically for various scenarios.
Harbor solves this with a dynamic file matcher that revolves around service handles (ollama, webui, tts, etc.). These are matched against the Docker Compose files in the services/ directory to form the set of services and related configurations to run.
It starts with a simple direct match rule, where the compose file name includes the service handle. For example:
# handle
ollama
# compose file (in services/)
services/compose.ollama.yml
You can see which files match your config by running:
harbor defaults
# webui
# ollama
harbor cmd -h # -h - human readable output
# docker compose
# - compose.yml
# - services/compose.ollama.yml
# - services/compose.webui.yml
# - services/compose.x.ollama.nvidia.yml
# - services/compose.x.ollama.webui.yml
Let's review the rules that caused these files to be included.
By default, the handle segments in a file name are matched with OR:
# Will match whenever either litellm
# langfuse or postgres handles are mentioned
services/compose.litellm.langfuse.postgres.yml
# Will be included for any of below launches
harbor up litellm
harbor up langfuse
harbor up postgres
This allows creating setups where, for example, multiple backends all use the same DB instance.
When there's an .x. in the file name, all mentioned handles are required to be present, effectively making it an AND rule:
# Will only match when both litellm and langfuse
# handles are mentioned
services/compose.x.litellm.langfuse.yml
# Match
harbor up litellm langfuse
# No match
harbor up litellm
harbor up langfuse
harbor up tts
This allows implementing cases where one service has a direct impact on the configuration of another. For example, configuring something in Open WebUI when SearxNG is running.
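The two matching rules above can be sketched as a small matcher. This is a simplified illustration, not Harbor's actual implementation; the function name and structure are hypothetical:

```python
# Simplified sketch of Harbor's handle-to-file matching rules.
# Names here are illustrative, not Harbor's real code.

def file_matches(filename: str, handles: set[str]) -> bool:
    # Strip the "compose." prefix and ".yml" suffix to get the segments,
    # e.g. "compose.x.litellm.langfuse.yml" -> ["x", "litellm", "langfuse"]
    parts = filename.removeprefix("compose.").removesuffix(".yml").split(".")

    if parts[0] == "x":
        # Cross-service file: AND rule - every listed handle must be active
        return all(h in handles for h in parts[1:])
    # Default file: OR rule - any listed handle is enough
    return any(h in handles for h in parts)

# OR rule: matches when any of the listed handles is active
assert file_matches("compose.litellm.langfuse.postgres.yml", {"litellm"})
# AND rule: requires all listed handles to be active
assert file_matches("compose.x.litellm.langfuse.yml", {"litellm", "langfuse"})
assert not file_matches("compose.x.litellm.langfuse.yml", {"litellm"})
```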
Some files are matched automatically. For example:
# Will be included in any "harbor up"
# call if the nvidia container toolkit
# and drivers are installed
services/compose.service.nvidia.yml
Default services are picked from the config and are always included. They can be set via the .env file or the CLI:
# Via .env
harbor config list | grep SERVICES_DEFAULT
# SERVICES_DEFAULT="webui ollama"
# CRUD for defaults
harbor defaults ls
harbor defaults add tts
harbor defaults rm webui
# Using "harbor config"
harbor config get services.default
# webui ollama
Apart from these, Harbor also always includes the root compose.yml file, which is the base for all the services.
The .env file is shared by all the services at once. This is done for end-user convenience, at the expense of potential naming conflicts. When a service's native env variables are too generic, Harbor prefers prefixed HARBOR_ variables and maps them via the environment section in the compose file.
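Such a mapping might look like the following compose fragment. This is an illustrative sketch, not an actual Harbor file; the service name and variable names are hypothetical:

```yaml
# services/compose.example.yml - hypothetical fragment for illustration
services:
  example:
    environment:
      # The generic LOG_LEVEL the service expects is fed from a
      # prefixed, conflict-free variable in the shared .env file
      - LOG_LEVEL=${HARBOR_EXAMPLE_LOG_LEVEL}
```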
Some services have their configs assembled at runtime from multiple files. This is useful for cases where running one service might have an impact on the configuration of another.
For example: tts and webui. When tts is running, webui should be configured to use it as a TTS backend.
harbor cmd
# -f webui.yml
# -f ollama.yml
# See sections above to learn why these
# files are included
harbor cmd tts -h
# docker compose
# - compose.yml
# - services/compose.ollama.yml
# - services/compose.tts.yml
# - services/compose.webui.yml
# - services/compose.x.ollama.nvidia.yml
# - services/compose.x.tts.nvidia.yml
# - services/compose.x.webui.ollama.yml
# - services/compose.x.webui.tts.yml
webui.yml comes with a default config.json file that's used by the WebUI. x.webui.tts.yml mounts an additional file into the container - config.tts.json, a subset of webui's own configuration.
At runtime, Harbor replaces the container's entrypoint with a custom one that is aware of all the optionally added volumes. It collects all configuration pieces from them and assembles them into a final config.json for the WebUI to consume.
See the config merger util for more details.
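The core of such a merger can be sketched as a recursive deep merge. This is a simplified illustration; Harbor's actual util may differ, and the example configs are hypothetical:

```python
import json

# Sketch of merging partial config files (e.g. config.json plus
# config.tts.json) into one final config. Not Harbor's real util.

def deep_merge(base: dict, overlay: dict) -> dict:
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested sections
        else:
            merged[key] = value  # overlay wins for scalars and lists
    return merged

# Hypothetical example: a base WebUI config plus a TTS fragment
base = {"audio": {"tts": {"engine": ""}}, "theme": "dark"}
tts_fragment = {"audio": {"tts": {"engine": "openai"}}}
final = deep_merge(base, tts_fragment)
print(json.dumps(final))
# {"audio": {"tts": {"engine": "openai"}}, "theme": "dark"}
```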
Config mergers come with an additional bit of functionality related to env interpolation.
{
  "key": "${HARBOR_ENV_VAR}"
}
Would be replaced with the value of HARBOR_ENV_VAR from the .env file:
HARBOR_ENV_VAR="value"
After config merging, this becomes:
{
  "key": "value"
}
There is also a list form:
{
  "items": [
    "item0",
    "${...HARBOR_ENV_VAR}"
  ]
}
Would be replaced with the value of HARBOR_ENV_VAR from the .env file, but expanded into a list of items:
HARBOR_ENV_VAR="item1;item2;item3"
After config merging, this becomes:
{
"items": [
"item0",
"item1",
"item2",
"item3"
]
}
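Both interpolation forms can be sketched as follows. This is an illustrative re-implementation: the ${...VAR} spread syntax and the ";" separator are taken from the examples above, while the function itself is an assumption, not Harbor's actual code:

```python
import re

# Sketch of the env interpolation described above. "${VAR}" becomes a
# scalar value; "${...VAR}" inside a list is spread into multiple items.

def interpolate(node, env: dict):
    if isinstance(node, dict):
        return {k: interpolate(v, env) for k, v in node.items()}
    if isinstance(node, list):
        out = []
        for item in node:
            m = isinstance(item, str) and re.fullmatch(r"\$\{\.\.\.(\w+)\}", item)
            if m:
                out.extend(env.get(m.group(1), "").split(";"))  # spread into the list
            else:
                out.append(interpolate(item, env))
        return out
    if isinstance(node, str):
        return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), node)
    return node

env = {"HARBOR_ENV_VAR": "item1;item2;item3"}
config = {"items": ["item0", "${...HARBOR_ENV_VAR}"]}
print(interpolate(config, env))
# {'items': ['item0', 'item1', 'item2', 'item3']}
```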