Conversation
ibiscp pushed a commit that referenced this pull request on Feb 28, 2023
JSON interface for Langchain #1
ogabrielluiz pushed a commit that referenced this pull request on Aug 14, 2023
updating from dev branch
ogabrielluiz pushed a commit that referenced this pull request on Nov 1, 2023
Bedrock Embeddings custom component
joaoguilhermeS pushed a commit to joaoguilhermeS/langflow that referenced this pull request on May 20, 2024
<fix> sharing /temp folder between discord and backend
t0rtila referenced this pull request in ONLYOFFICE/langflow on Feb 24, 2025
Co-authored-by: Nasrullo Nurullaev <nasrullo.nurullaev@onlyoffice.com>
Co-committed-by: Nasrullo Nurullaev <nasrullo.nurullaev@onlyoffice.com>
ogabrielluiz pushed a commit that referenced this pull request on Apr 2, 2025
Improve variable name and simplify conditional
github-merge-queue bot pushed a commit that referenced this pull request on Jun 26, 2025
…7933)

* fixes
* fix: Update SQLAlchemy import to SQLModel in flow_runner.py
* [autofix.ci] apply automated fixes
* Update flow_runner tests to match new LangflowRunnerExperimental API (#1)
* Initial plan
* Update test_flow_runner.py to match new LangflowRunnerExperimental API

Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>

* [autofix.ci] apply automated fixes
* patch-1 - fix lint
* patch-1 - tweaks_values
* patch-1 - tweaks_values
* lint

---------

Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@langflow.org>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Edwin Jose <edwin.jose@datastax.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>
Khurdhula-Harshavardhan pushed a commit to JigsawStack/langflow that referenced this pull request on Jul 1, 2025
…angflow-ai#7933)

* fixes
* fix: Update SQLAlchemy import to SQLModel in flow_runner.py
* [autofix.ci] apply automated fixes
* Update flow_runner tests to match new LangflowRunnerExperimental API (langflow-ai#1)
* Initial plan
* Update test_flow_runner.py to match new LangflowRunnerExperimental API

Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>

* [autofix.ci] apply automated fixes
* patch-1 - fix lint
* patch-1 - tweaks_values
* patch-1 - tweaks_values
* lint

---------

Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@langflow.org>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Edwin Jose <edwin.jose@datastax.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: barnuri <13019522+barnuri@users.noreply.github.com>
rodrigosnader added a commit that referenced this pull request on Sep 27, 2025
…e compatibility

This commit fixes two critical bugs that completely broke Agent memory in the main branch:

## Bug #1: Inheritance Method Call Error

- Fixed incorrect method calls in Agent component inheritance
- Changed `get_base_inputs()` to `_base_inputs` in:
  - src/lfx/src/lfx/components/agents/agent.py:157
  - src/lfx/src/lfx/base/agents/agent.py:229

## Bug #2: Message Type Incompatibility

- Fixed type checking in Agent base class to handle both Message types
- Memory returns `langflow.schema.message.Message` but Agent expected `lfx.schema.message.Message`
- Updated type check to use duck typing instead of strict isinstance check
- Changed in src/lfx/src/lfx/base/agents/agent.py:148-150

## Impact

- Agents can now remember conversation context across messages
- Memory functionality restored to same level as release-1.6.0
- Fixes issue where agents would forget user information immediately

## Test Results

- Before: Agent says "I don't have access to your name or occupation"
- After: Agent says "Your name is VICTORY TEST, and you work as a memory bug hunter"

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
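The duck-typing fix described in Bug #2 can be sketched roughly as follows. All class, attribute, and function names here are illustrative stand-ins, not the actual Langflow/lfx code; the point is only the technique of accepting any object that "looks like" a Message rather than using a strict `isinstance` check that rejects the twin class from the parallel module tree:

```python
# Sketch: two structurally identical Message classes from parallel module
# trees (standing in for langflow.schema.message.Message and
# lfx.schema.message.Message). A strict isinstance check against one of
# them would silently drop instances of the other.

class LangflowMessage:
    """Stand-in for langflow.schema.message.Message."""
    def __init__(self, text, sender):
        self.text = text
        self.sender = sender

class LfxMessage:
    """Stand-in for lfx.schema.message.Message."""
    def __init__(self, text, sender):
        self.text = text
        self.sender = sender

def to_chat_history(messages):
    """Duck-typed conversion: accept anything with .text and .sender,
    instead of isinstance(msg, LfxMessage), which rejects the twin class."""
    history = []
    for msg in messages:
        if hasattr(msg, "text") and hasattr(msg, "sender"):
            history.append({"role": msg.sender, "content": msg.text})
    return history

mixed = [LangflowMessage("hi", "user"), LfxMessage("hello", "ai")]
print(to_chat_history(mixed))  # both message types survive conversion
```

With the strict check, memory entries coming back as the "other" Message type were filtered out of the chat history, which matches the reported symptom of the agent forgetting everything immediately.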
github-merge-queue bot pushed a commit that referenced this pull request on Sep 29, 2025
…e compatibility (#10008)

* fix: Restore Agent memory functionality by fixing inheritance and type compatibility
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* fix: Update data_to_messages function to accept both Data and Message types

  This commit modifies the `data_to_messages` function to accept a list of both `Data` and `Message` types, enhancing type compatibility. The function's docstring has been updated to reflect the new input type and return type, ensuring clarity in its usage.

* fix: improve message validation in Agent

  This commit updates the chat history processing in the LCAgentComponent to ensure that only messages with valid 'text' data are included. The method now checks for the presence of 'text' in the message data before converting it to the appropriate format. Additionally, the base input retrieval method has been changed from `_base_inputs` to `get_base_inputs()` for consistency and clarity.

* fix: enhance chat history validation to support Data type
* fix: improve input handling to support dynamic message conversion

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@langflow.org>
Co-authored-by: Edwin Jose <edwin.jose@datastax.com>
codeflash-ai bot added a commit that referenced this pull request on Jan 15, 2026

The optimized code achieves a **16% speedup** by reducing Python interpreter overhead through three focused micro-optimizations in the `run_response_to_workflow_response` function:

## Key Optimizations

1. **Replaced `hasattr` + attribute access with `getattr`** (Lines 171-173, 239-241)
   - Original: `if hasattr(run_output, "outputs") and run_output.outputs:` followed by accessing `run_output.outputs` twice
   - Optimized: `outs = getattr(run_output, "outputs", None)` followed by `if outs:`
   - **Why faster**: `hasattr` internally catches AttributeError exceptions, making it slower than `getattr` with a default. This eliminates redundant attribute lookups and exception handling overhead.
   - **Impact**: Saves ~6μs per iteration in the output building loop (14 hits × ~300ns improvement visible in line profiler)

2. **Converted terminal node list to set for membership testing** (Lines 183-184)
   - Original: `terminal_vertices = [v for v in graph.vertices if v.id in terminal_node_ids]` (list membership is O(n))
   - Optimized: `term_set = set(terminal_node_ids)` then `[v for v in graph.vertices if v.id in term_set]` (set membership is O(1))
   - **Why faster**: With ~200+ vertices in large-scale tests, the list comprehension performs better with O(1) set lookups instead of O(n) list scans
   - **Impact**: Particularly beneficial in `test_large_scale_many_vertices_processing_efficiency` where 200 vertices are processed

3. **Simplified metadata extraction logic** (Lines 239-243)
   - Original: `if hasattr(vertex_output_data, "metadata") and vertex_output_data.metadata:`
   - Optimized: `vm = getattr(vertex_output_data, "metadata", None)` then `if vm:`
   - **Why faster**: Same `getattr` benefit as #1: avoids exception handling and reduces attribute access calls from 2 to 1

## Performance Impact

The line profiler shows these optimizations primarily benefit:
- **Output data map construction**: Reduced from 118μs to 99μs per component_id extraction (214 hits)
- **Terminal vertex filtering**: Small but measurable improvement when converting to set (~25μs overhead amortized across 219 vertex checks)
- **Metadata updates**: Reduced from 112μs to 107μs per metadata check (213 hits)

## Test Results Context

All test cases pass with identical behavior. The optimizations are particularly effective for:
- **Large-scale scenarios** (`test_large_scale_many_vertices_processing_efficiency`): Set-based filtering scales better with 200 vertices
- **Workflows with many outputs**: Each terminal vertex processes faster due to reduced attribute access overhead
- **Typical workflows** (8-12 nodes): Benefits accumulate across multiple attribute checks per vertex

These are classic Python micro-optimizations that reduce interpreter overhead without changing algorithmic complexity, making the code measurably faster for typical workflow conversion operations while maintaining identical functionality.
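The two patterns above (single `getattr` with a default instead of `hasattr` plus repeated lookups, and set membership instead of list membership) can be sketched in a self-contained form. The function and field names here are illustrative, not the real `flow_runner` code:

```python
# Sketch of the micro-optimizations: getattr-with-default and set-based
# membership filtering. SimpleNamespace stands in for the real run-output
# and vertex objects, whose actual types live in Langflow.
from types import SimpleNamespace

def build_outputs(run_outputs, vertices, terminal_node_ids):
    # Optimization 1: one getattr call instead of hasattr + two attribute
    # accesses. hasattr() catches AttributeError internally, so getattr
    # with a default avoids that exception path entirely.
    results = []
    for run_output in run_outputs:
        outs = getattr(run_output, "outputs", None)  # single lookup
        if outs:
            results.extend(outs)

    # Optimization 2: build a set once, then filter with O(1) membership
    # tests instead of O(n) scans of the original id list.
    term_set = set(terminal_node_ids)
    terminal_vertices = [v for v in vertices if v.id in term_set]
    return results, terminal_vertices

run_outputs = [SimpleNamespace(outputs=[1, 2]), SimpleNamespace(outputs=None)]
vertices = [SimpleNamespace(id="a"), SimpleNamespace(id="b"), SimpleNamespace(id="c")]
results, terminals = build_outputs(run_outputs, vertices, ["a", "c"])
print(results, [v.id for v in terminals])
```

Note that objects lacking the attribute entirely and objects whose attribute is empty both fall through the same `if outs:` branch, which is exactly the behavior the original `hasattr(...) and run_output.outputs` compound check had.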
cfchase pushed a commit to cfchase/langflow that referenced this pull request on Feb 26, 2026
…cp-component-patch feat: Add OAuth support for MCP HTTP-based servers
This was referenced Mar 3, 2026
erichare added a commit that referenced this pull request on Mar 26, 2026

Adapts the helper-method pattern from langflow-ai/sdk PR #1 (Janardan Singh Kavia, IBM Corp., Apache 2.0) to the V1 RunResponse and RunOutput models.

New methods on RunOutput:
- has_errors(): True if any component output or artifact contains an error key

New methods on RunResponse:
- get_chat_output(): alias for first_text_output()
- get_all_outputs(): returns list[RunOutput] (typed, not strings)
- get_text_outputs(): alias for all_text_outputs()
- has_errors(): True if any output block has an error
- is_completed(): True when outputs exist and no errors
- is_failed(): True when no outputs or has_errors()
- is_in_progress(): always False (V1 runs are synchronous)

These allow callers to use a uniform status-check pattern that stays compatible with the upcoming BackgroundJob API.

26 new tests added to tests/test_models.py.

Co-Authored-By: Janardan Singh Kavia <jkavia@ibm.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
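A minimal sketch of that uniform status-check pattern, assuming the semantics listed in the commit message. The field shapes (lists of dicts with an optional "error" key) are assumptions for illustration and are not the actual SDK model definitions:

```python
# Sketch of the RunOutput/RunResponse status helpers described above.
# Field shapes are assumed: outputs and artifacts are lists of dicts,
# and an error is signaled by an "error" key in any of those dicts.
from dataclasses import dataclass, field

@dataclass
class RunOutput:
    outputs: list = field(default_factory=list)
    artifacts: list = field(default_factory=list)

    def has_errors(self) -> bool:
        # True if any component output or artifact carries an error key
        return any("error" in item for item in self.outputs + self.artifacts)

@dataclass
class RunResponse:
    outputs: list = field(default_factory=list)  # list[RunOutput]

    def get_all_outputs(self) -> list:
        return self.outputs  # typed RunOutput objects, not strings

    def has_errors(self) -> bool:
        return any(out.has_errors() for out in self.outputs)

    def is_completed(self) -> bool:
        return bool(self.outputs) and not self.has_errors()

    def is_failed(self) -> bool:
        return not self.outputs or self.has_errors()

    def is_in_progress(self) -> bool:
        return False  # V1 runs are synchronous, so never in progress
```

The payoff is that `is_completed()` / `is_failed()` / `is_in_progress()` form a closed status vocabulary a caller can poll the same way regardless of whether the run came from the synchronous V1 endpoint or, later, a background job.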
erichare added a commit that referenced this pull request on Mar 26, 2026

Adapts the BackgroundJob pattern from langflow-ai/sdk PR #1 (Janardan Singh Kavia, IBM Corp., Apache 2.0) to run over the existing V1 /api/v1/run/{id} endpoint via asyncio.create_task().

New additions:

langflow_sdk/background_job.py (new file):
- BackgroundJob wraps an asyncio.Task[RunResponse]
- is_running() / is_completed() / is_failed(): task-state helpers
- wait_for_completion(timeout=): awaits with LangflowTimeoutError on expiry; uses asyncio.shield() so the task survives a timeout and can be awaited again
- cancel(): requests task cancellation; idempotent, returns bool

langflow_sdk/exceptions.py:
- LangflowTimeoutError: new exception raised by wait_for_completion()

langflow_sdk/client.py:
- AsyncLangflowClient.run_background(): creates the task and returns a BackgroundJob immediately, leaving the event loop free

langflow_sdk/__init__.py:
- BackgroundJob and LangflowTimeoutError added to public API / __all__

tests/test_background_job.py (new file, 26 tests):
- BackgroundJob status helpers (is_running/completed/failed)
- wait_for_completion success, timeout, shield-survives-timeout, exception re-raise, multiple awaits
- cancel() lifecycle and idempotency
- AsyncLangflowClient.run_background() integration

Co-Authored-By: Janardan Singh Kavia <jkavia@ibm.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
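The shield-survives-timeout behavior is the subtle part of this pattern, so here is a minimal self-contained sketch of it. It uses the stdlib `asyncio.TimeoutError` rather than the SDK's `LangflowTimeoutError`, and the method bodies are assumptions based on the commit description, not the actual langflow_sdk source:

```python
# Sketch of the BackgroundJob pattern: wrap an asyncio.Task and use
# asyncio.shield() in wait_for_completion() so that a timeout cancels
# only the wait, not the underlying run, which can then be awaited again.
import asyncio

class BackgroundJob:
    def __init__(self, task):
        self._task = task

    def is_running(self):
        return not self._task.done()

    def is_completed(self):
        t = self._task
        return t.done() and not t.cancelled() and t.exception() is None

    def is_failed(self):
        t = self._task
        return t.done() and (t.cancelled() or t.exception() is not None)

    async def wait_for_completion(self, timeout=None):
        # shield() absorbs the cancellation that wait_for issues on
        # timeout, so the wrapped task keeps running in the background.
        return await asyncio.wait_for(asyncio.shield(self._task), timeout)

    def cancel(self):
        # Idempotent: returns False once the task is already done/cancelled.
        return self._task.cancel()

async def main():
    async def slow_run():
        await asyncio.sleep(0.05)
        return "done"

    job = BackgroundJob(asyncio.create_task(slow_run()))
    try:
        await job.wait_for_completion(timeout=0.01)  # too short: times out
    except asyncio.TimeoutError:
        pass
    # The shielded task survived the timeout and can be awaited again.
    result = await job.wait_for_completion(timeout=1.0)
    return result, job.is_completed()

print(asyncio.run(main()))
```

Without the `asyncio.shield()`, the first timed-out `wait_for_completion` would cancel the run itself and the second call would raise `CancelledError` instead of returning the result, which is exactly the "shield-survives-timeout, multiple awaits" case the commit's tests cover.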
added custom node memory