content/guides/grafana-mcp-server-gemini.md (6 additions, 15 deletions)
@@ -101,7 +101,7 @@ _List all Prometheus and Loki datasources._
### Logs Inspection
-The sequence begins with the User Prompt: "I would like to filter logs based on the device_name=edge-device-01 label. Are there logs about nginx in the last 5 minutes?". At this stage, the Gemini model performs intent parsing: it identifies the specific metadata required, a label (`device_name`) and a keyword (nginx), and recognizes that it needs external data to fulfill the request. This triggers the `list_datasources` tool through the MCP Server to locate the telemetry backend.
+Gemini performs intent parsing and translates the request into a LogQL query: `{device_name="edge-device-01"} |= "nginx"`. This query targets specific logs, extracting raw OpenTelemetry (OTel) data that includes container metadata and system labels, which Gemini then uses to identify the source of the issue.
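Under the hood, the MCP server's `query_loki_logs` tool issues this LogQL against Loki's HTTP API. A minimal sketch of the equivalent request URL (the endpoint and parameters follow Loki's public `query_range` API; the tool's exact internals may differ):

```python
import time
from urllib.parse import urlencode

def loki_query_url(base_url: str, logql: str, minutes: int = 5) -> str:
    """Build a Loki query_range URL covering the last `minutes` minutes.

    Loki's HTTP API expects Unix timestamps in nanoseconds.
    """
    end_ns = int(time.time() * 1e9)
    start_ns = end_ns - minutes * 60 * 1_000_000_000
    params = urlencode({
        "query": logql,          # the LogQL string Gemini generated
        "start": start_ns,
        "end": end_ns,
        "limit": 100,            # cap the number of returned log lines
    })
    return f"{base_url}/loki/api/v1/query_range?{params}"

# The query from the prompt above, against a local Loki instance
url = loki_query_url("http://localhost:3100", '{device_name="edge-device-01"} |= "nginx"')
print(url)
```

Pointing `curl` at the resulting URL returns the same raw log streams (with their OTel labels) that Gemini receives through the tool.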
@@ -138,20 +138,11 @@ Imagine you get a page that an application is slow. You could:
4. Use `query_loki_logs` to search for "error" or "timeout" messages during the time of the spike.
5. If you find the root cause, use `create_incident` to start the formal response and `add_activity_to_incident` to log your findings.
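In practice, each step in this workflow is just a natural-language prompt to the Gemini CLI; the phrasing below is illustrative, and Gemini selects the MCP tools (`query_loki_logs`, `create_incident`, and so on) automatically:

```text
> Query Loki for "error" or "timeout" messages around the latency spike, say the last 15 minutes.
> That filesystem error looks like the root cause. Create an incident for it and add a note summarizing what we found.
```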
-## Next steps?
+## Next steps
-This use case demonstrates the future of Operational Intelligence: moving away from manual dashboard hunting and complex query syntax toward a conversational, proactive troubleshooting experience.
-By bridging the gap between your terminal and Grafana's telemetry via the Docker MCP Toolkit, you empower your DevOps team to detect silent failures, like the filesystem error identified in our example, before they escalate into full-scale outages.
-Don't let critical logs get buried under layers of infrastructure noise. Start automating your incident response and log analysis today.
-Take the next step:
-- Deploy the connector: Follow the 15-minute guide above to link your local Gemini CLI to your production Grafana instance.
-- Scale the solution: Explore how to share these MCP configurations across your SRE team for unified troubleshooting.
-- Optimize your queries: Experiment with advanced LogQL prompts to create automated health reports.
+- Learn about [Advanced LogQL queries](https://grafana.com/docs/loki/latest/query/log_queries/)
+- Set up [Team-wide MCP configurations](https://modelcontextprotocol.io/docs/develop/connect-local-servers)
+- Explore [Grafana alerting with MCP](https://github.com/grafana/mcp-grafana)
+- Get help in the [Docker Community Forums](https://forums.docker.com)
Need help setting up your Docker MCP environment or customizing your Gemini prompts? Visit the [Docker Community Forums](https://forums.docker.com) or see the [MCP Troubleshooting Guide](https://docs.docker.com/guides/grafana-mcp-server-gemini).