
# Middleware samples

This folder contains focused middleware samples covering agents, chat clients, tools, sessions, and runtime-context behavior.

## Files

| File | Description |
| --- | --- |
| `agent_and_run_level_middleware.py` | Demonstrates combining agent-level and run-level middleware. |
| `chat_middleware.py` | Shows class-based and function-based chat middleware that can observe, modify, and override model calls. |
| `class_based_middleware.py` | Shows class-based agent and function middleware. |
| `decorator_middleware.py` | Demonstrates middleware registration with decorators. |
| `exception_handling_with_middleware.py` | Shows how middleware can handle failures and recover cleanly. |
| `function_based_middleware.py` | Shows function-based agent and function middleware. |
| `middleware_termination.py` | Demonstrates stopping a middleware pipeline early. |
| `override_result_with_middleware.py` | Shows how middleware can replace the normal result. |
| `runtime_context_delegation.py` | Demonstrates delegating work with runtime context data. |
| `session_behavior_middleware.py` | Shows how middleware interacts with session-backed runs. |
| `shared_state_middleware.py` | Demonstrates sharing mutable state across middleware invocations. |
| `usage_tracking_middleware.py` | Demonstrates one chat middleware function that tracks per-call usage in non-streaming and streaming tool-loop runs. |

## Running the usage tracking sample

The usage tracking sample uses `OpenAIResponsesClient`, so set the usual OpenAI Responses environment variables first:

```shell
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_RESPONSES_MODEL_ID="gpt-4.1-mini"
```

Then run:

```shell
uv run samples/02-agents/middleware/usage_tracking_middleware.py
```

The sample forces a tool call so you can see middleware output for each inner model call in both non-streaming and streaming modes.