fix: ensure streaming chunks are immediately flushed to console (#6424)

Merged
jackgerrits merged 3 commits into microsoft:main on May 1, 2025

Conversation
Added `flush=True` to the `aprint` call when handling `ModelClientStreamingChunkEvent` message to ensure each chunk is immediately displayed as it arrives.
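The fix can be illustrated with a minimal sketch. Note that `aprint` below is a simplified stand-in for autogen's async print helper, not its actual implementation; only the `flush=True` keyword is the point of the change:

```python
import asyncio


async def aprint(*args, **kwargs) -> None:
    # Simplified stand-in: run the blocking print() in a worker
    # thread so it does not stall the event loop.
    await asyncio.to_thread(print, *args, **kwargs)


async def render_chunks(chunks) -> None:
    # Each streaming chunk is printed as it arrives. Without
    # flush=True, stdout may buffer the partial line and the user
    # sees the text in bursts rather than incrementally.
    for chunk in chunks:
        await aprint(chunk, end="", flush=True)
    await aprint()  # terminate the streamed line with a newline


asyncio.run(render_chunks(["Hel", "lo, ", "world!"]))
```

Because each chunk is written with `end=""`, the explicit `flush=True` is what pushes the partial line to the console immediately instead of waiting for a newline or a full buffer.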
Contributor (Author):
@microsoft-github-policy-service agree

Contributor (Author):
@ekzhu Could you please take a look at this tiny PR when you have a moment? I'd appreciate your review. Thanks!
jackgerrits approved these changes on May 1, 2025
Codecov Report
Attention: Patch coverage is

@@           Coverage Diff           @@
##             main    #6424   +/-   ##
=======================================
  Coverage   78.55%   78.55%
=======================================
  Files         225      225
  Lines       16523    16523
=======================================
  Hits        12979    12979
  Misses       3544     3544
ChrisBlaa pushed a commit to ChrisBlaa/autogen that referenced this pull request on May 8, 2025
…osoft#6424)

Added `flush=True` to the `aprint` call when handling `ModelClientStreamingChunkEvent` messages to ensure each chunk is immediately displayed as it arrives.

Why are these changes needed?

When handling `ModelClientStreamingChunkEvent` messages, streaming chunks weren't guaranteed to be displayed immediately, as Python's stdout may buffer output without an explicit flush instruction. This could cause visual delays between when `chunk_event` objects are added to the message queue and when users actually see the content rendered in the console.

Related issue number

None

Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.
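The buffering behavior the commit message describes can be observed in isolation. This is a hedged sketch; the function name is illustrative and not from the autogen codebase:

```python
import sys
import time


def stream(chunks, delay=0.05, flush=False) -> None:
    # Write chunks one at a time without a trailing newline. When
    # stdout is block-buffered (e.g. redirected to a file or a pipe),
    # unflushed partial-line writes can sit in the buffer and appear
    # all at once instead of incrementally.
    for chunk in chunks:
        sys.stdout.write(chunk)
        if flush:
            sys.stdout.flush()  # push this chunk out immediately
        time.sleep(delay)
    sys.stdout.write("\n")


# With flush=True each chunk reaches the console as it is written,
# which is the guarantee this PR adds for streaming chunk events.
stream(["str", "eam", "ing"], flush=True)
```

Running the unflushed variant with stdout piped (e.g. `python demo.py | cat`) typically shows the whole line arriving at once, while the flushed variant emits each chunk as it is written.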