Extended from: `fastapi-vs-litestar-vs-django-bolt-vs-django-ninja-benchmarks`. Additions: Django REST Framework and GraphQL (Graphene v3, Graphene v2, Strawberry).
Disclaimer: This is an informal benchmark run on a local development machine without proper isolation. It does not follow benchmarking best practices such as Docker containerization, CPU pinning, or elimination of background process interference. Results may vary significantly in production environments. Take these numbers as a rough indicator, not absolute truth.
REST frameworks:
- FastAPI - ASGI framework with Pydantic
- Litestar - High-performance ASGI framework
- Django Ninja - Django + Pydantic API framework
- Django Bolt - Rust-powered Django API framework
- Django REST Framework - Traditional Django REST API framework
GraphQL frameworks:
- Graphene v3 - Django + Graphene (v3)
- Graphene v2 - Django + Graphene (v2, separate venv)
- Strawberry FastAPI - ASGI GraphQL (FastAPI)
- Strawberry Django - Django + Strawberry (Django integration)
Infrastructure notes:
- Graphene v2 and v3 both use the same minimal Django settings (`django_project/settings_graphql.py`) for a fair comparison.
- Strawberry reads/writes the same SQLite DB file (`django_benchmark.db`) and uses a table that mirrors Django's `benchmark_users`.
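To illustrate the shared-table setup described above, here is a minimal `sqlite3` sketch using an in-memory database; the real file is `django_benchmark.db`, and the actual schema of `benchmark_users` may differ.

```python
import sqlite3

# Hypothetical mirror of the benchmark_users table; the real schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE benchmark_users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)
conn.executemany(
    "INSERT INTO benchmark_users (name, email) VALUES (?, ?)",
    [(f"user-{i}", f"user{i}@example.com") for i in range(10)],
)

# The kind of query a "10 reads" endpoint would issue.
rows = conn.execute(
    "SELECT id, name, email FROM benchmark_users LIMIT 10"
).fetchall()
```

Because every framework reads the same rows, differences in the `/db` and `users` results come from the framework stack, not the data.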
REST:
| Endpoint | Description |
|---|---|
| `/json-1k` | Returns ~1KB JSON response |
| `/json-10k` | Returns ~10KB JSON response |
| `/db` | 10 reads from SQLite database |
| `/slow` | Mock API with 2 second delay |
| `/nplus1` | Batch-loaded related data |
| `/mutate` | Mutation (write) test |
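The `/json-1k` and `/json-10k` endpoints return payloads of a fixed approximate size. A hypothetical way to build such a payload (the repo's actual generator may differ):

```python
import json

def make_payload(target_bytes: int) -> dict:
    """Grow a list of fake users until the encoded JSON reaches target_bytes."""
    items = []
    while len(json.dumps({"items": items})) < target_bytes:
        i = len(items)
        items.append({"id": i, "name": f"user-{i}", "email": f"user{i}@example.com"})
    return {"items": items}

payload_1k = make_payload(1024)   # ~1KB when serialized
```

Fixing the payload size keeps serialization work comparable across frameworks, so throughput differences reflect the framework rather than the response body.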
GraphQL (POST `/graphql`):
| Query | Description |
|---|---|
| `json1k` | Returns ~1KB JSON response |
| `json10k` | Returns ~10KB JSON response |
| `users` | 10 reads from SQLite database |
| `slow` | Mock API with 2 second delay |
| `nplus1` | N+1 style test with batched resolving |
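The `nplus1` query contrasts per-row lookups with batched resolving. A framework-agnostic sketch of that difference (all names here are illustrative, not from the repo):

```python
# Fake data: 10 users pointing at 3 groups.
users = [{"id": i, "group_id": i % 3} for i in range(10)]
groups = {0: "admins", 1: "staff", 2: "guests"}

queries = []  # record every simulated "database" call

def fetch_group(gid):
    """One lookup per call -- the N+1 pattern."""
    queries.append(("one", gid))
    return groups[gid]

def fetch_groups(gids):
    """Resolve a whole set of keys in a single call -- batched resolving."""
    queries.append(("many", tuple(sorted(gids))))
    return {g: groups[g] for g in gids}

# N+1 style: 10 separate lookups.
naive = [fetch_group(u["group_id"]) for u in users]
naive_queries = len(queries)

# Batched: collect distinct keys, resolve once, then fan back out.
queries.clear()
resolved = fetch_groups({u["group_id"] for u in users})
batched = [resolved[u["group_id"]] for u in users]
batched_queries = len(queries)
```

Both approaches return the same data; the batched version just issues one query instead of ten, which is what the `nplus1` benchmark is designed to expose.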
Requirements:
- Python 3.12+ for the main environment
- Python 3.10 (or lower) for Graphene v2 venv
- uv package manager
- bombardier HTTP benchmarking tool
Install bombardier:

```bash
go install github.com/codesenberg/bombardier@latest
```

If bombardier is installed but not found, add Go's bin to your PATH:

```bash
export PATH="$(go env GOPATH)/bin:$PATH"
```

```bash
# Setup (run once)
./scripts/setup.sh

# Optional: set up Graphene v2 venv (runs on port 8007)
PYTHON_BIN=python3.10 ./scripts/setup_graphene_v2.sh

# Run all benchmarks (will include Graphene v2 if venv exists)
./scripts/run_all.sh

# Run GraphQL-only benchmarks (Graphene v2/v3 + Strawberry Django)
./scripts/run_graphql.sh

# Short run (no long tests, fewer warmups)
./scripts/run_all.sh -d 5 -w 200 -r 1

# Or with custom options
./scripts/run_all.sh -c 200 -d 15 -r 5
```

Notes:
- Server logs are written to `.benchmark_logs/` to reduce console noise.
- Graphene v2 runs in a separate venv at `.venv-graphene-v2`.
- Graphene v2 and v3 use the same minimal Django settings module for GraphQL benchmarks.
- Ports: Graphene v3 on 8008, Graphene v2 on 8007, Strawberry FastAPI on 8006, Strawberry Django on 8009.
- A GraphQL-only combined graph is generated as `graphs/benchmark_combined_graphql.png` when `./scripts/run_all.sh` is used.
To run servers individually:

```bash
./scripts/setup.sh
./scripts/run_fastapi.sh            # Port 8001
./scripts/run_litestar.sh           # Port 8002
./scripts/run_ninja.sh              # Port 8003
./scripts/run_bolt.sh               # Port 8004
./scripts/run_drf.sh                # Port 8005
./scripts/run_strawberry.sh         # Port 8006 (Strawberry FastAPI)
./scripts/run_graphene_v2.sh        # Port 8007 (requires .venv-graphene-v2)
./scripts/run_graphene_v3.sh        # Port 8008
./scripts/run_strawberry_django.sh  # Port 8009
```

Then run the benchmark runner directly:

```bash
uv run python bench.py
```

Options:

```
-c, --connections          Concurrent connections (default: 100)
-d, --duration             Duration per endpoint in seconds (default: 10)
-w, --warmup               Warmup requests (default: 1000)
-r, --runs                 Runs per endpoint (default: 3)
-o, --output               Output file (default: BENCHMARK_RESULTS.md)
--frameworks               Frameworks to benchmark (default: all)
--skip-slow                Skip the /slow endpoint
--graphql-combined-graph   Generate graphs/benchmark_combined_graphql.png
```
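As a hedged sketch, a parser for the flags above could be built with `argparse` roughly like this; `bench.py`'s actual implementation may differ.

```python
import argparse

# Hypothetical option parser mirroring the flags listed above.
parser = argparse.ArgumentParser(description="Benchmark runner options (sketch)")
parser.add_argument("-c", "--connections", type=int, default=100)
parser.add_argument("-d", "--duration", type=int, default=10)
parser.add_argument("-w", "--warmup", type=int, default=1000)
parser.add_argument("-r", "--runs", type=int, default=3)
parser.add_argument("-o", "--output", default="BENCHMARK_RESULTS.md")
parser.add_argument("--frameworks", nargs="*", default=None)  # None == all
parser.add_argument("--skip-slow", action="store_true")
parser.add_argument("--graphql-combined-graph", action="store_true")

# Same invocation shown in the "custom options" example above.
args = parser.parse_args(["-c", "200", "-d", "15", "-r", "5"])
```

Unspecified flags fall back to the documented defaults, so `args.warmup` stays at 1000 here.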
| Framework | Port |
|---|---|
| FastAPI | 8001 |
| Litestar | 8002 |
| Django Ninja | 8003 |
| Django Bolt | 8004 |
| Django REST Framework | 8005 |
| Strawberry FastAPI | 8006 |
| Graphene v2 | 8007 |
| Graphene v3 | 8008 |
| Strawberry Django | 8009 |
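When scripting against these servers, a small helper can map the port table above to target URLs. This is a hypothetical convenience, not part of the repo:

```python
# Ports from the table above; keys are illustrative names.
PORTS = {
    "fastapi": 8001, "litestar": 8002, "django_ninja": 8003,
    "django_bolt": 8004, "django_drf": 8005,
    "strawberry_fastapi": 8006, "graphene_v2": 8007,
    "graphene_v3": 8008, "strawberry_django": 8009,
}

def endpoint_url(framework: str, path: str) -> str:
    """Build the localhost URL a benchmarking tool would target."""
    return f"http://localhost:{PORTS[framework]}/{path.lstrip('/')}"
```

For example, `endpoint_url("fastapi", "/json-1k")` yields the REST target on port 8001, and `endpoint_url("graphene_v3", "graphql")` yields the GraphQL target on port 8008.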
Generated artifacts:
- `BENCHMARK_RESULTS.md` for the latest run summary.
- `graphs/benchmark_combined.png` for all frameworks.
- `graphs/benchmark_combined_graphql.png` for GraphQL-only comparisons (Graphene v2/v3 + Strawberry Django).
- Per-endpoint graphs like `graphs/benchmark_json_1k.png`, `graphs/benchmark_db.png`, etc.
High-level flow:
- Start the selected framework servers.
- Warm up each endpoint/query to avoid cold-start bias.
- Run `bombardier` against each endpoint/query for a fixed duration.
- Capture best-of-N runs, then emit `BENCHMARK_RESULTS.md`.
- Generate per-endpoint graphs plus combined summaries.
The benchmark runner lives in `bench.py`, and `scripts/run_all.sh` / `scripts/run_graphql.sh` orchestrate server startup and teardown.
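The best-of-N step above can be sketched as picking the run with the highest requests/sec. This is a simplified assumption; the real aggregation in `bench.py` may consider more metrics:

```python
# Hypothetical best-of-N selection: each run yields a requests/sec
# figure and we keep the run with the highest one.
def best_run(runs):
    """runs: list of dicts like {"rps": float, "p99_ms": float}."""
    return max(runs, key=lambda r: r["rps"])

runs = [
    {"rps": 10234.5, "p99_ms": 18.2},
    {"rps": 11002.1, "p99_ms": 17.4},
    {"rps": 10877.9, "p99_ms": 19.0},
]
best = best_run(runs)
```

Taking the best run rather than the mean reduces the impact of background-process interference, which the disclaimer at the top of this document calls out as a limitation of the setup.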
