Included Prometheus interceptor support for gRPC streaming #1858

RobertSamoilescu merged 6 commits into SeldonIO:master from
Conversation
sakoush left a comment
LGTM. I left some minor comments.
```diff
- interceptors = []
+ self._interceptors = []
```
Why are we changing it, was that a bug?
It was not a bug. I just needed a way to access the interceptors list for testing.
```diff
@@ -0,0 +1,129 @@
+import pytest
```
Thanks for adding these tests. Ideally we should also test other metrics/errors, but that can happen in a follow-up PR.
```python
async def get_stream_request(request):
    yield request

# send 10 requests
```
Suggested change:

```diff
-# send 10 requests
+# send 1 requests
```
That should be 10, but I forgot to update the `num_requests` var.
```python
num_words = len(request_text.split())

assert int(counted_requests) == num_requests
assert int(counted_requests) * num_words == int(counted_responses)
```
Question: how are we actually counting the words in the responses?
The model used is a dummy text model which splits words by whitespace. Each word is then streamed back by the model as a separate response.
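A minimal sketch of that behaviour (the model and function names here are illustrative, not the actual test fixtures): the dummy model yields one response per whitespace-separated word, so the total response count is `num_requests * num_words`, which is exactly what the assertions above check.

```python
import asyncio


async def dummy_stream_model(request_text: str):
    # Stand-in for the dummy text model described above: split on
    # whitespace and stream one word back per response.
    for word in request_text.split():
        yield word


async def main():
    num_requests = 10
    request_text = "hello open world"
    num_words = len(request_text.split())

    responses = []
    for _ in range(num_requests):
        async for word in dummy_stream_model(request_text):
            responses.append(word)

    # Mirrors the test's assertion: one response per word, per request.
    assert len(responses) == num_requests * num_words


asyncio.run(main())
```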
Force-pushed from 0afebdc to 55a271e.
This PR adds Prometheus interceptor support for gRPC streaming. Currently, for gRPC streaming we have to set
"metrics_endpoint": null, so Prometheus metrics cannot be scraped. It also updates the docs and tests for the Prometheus interceptor.