fix: Handle params required for watsonX #10979
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Walkthrough

This PR extends the Langflow platform to support IBM WatsonX and Ollama LLM providers across 22 starter project templates and the core model infrastructure. Changes include adding provider-specific configuration fields (URLs, project IDs), updating the LLM builder to accept these parameters, and implementing dynamic UI field visibility based on the selected provider.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant UI as Build Config
    participant LMC as LanguageModelComponent
    participant get_llm as get_llm Factory
    participant LLM as LLM Provider<br/>(WatsonX/Ollama/OpenAI)
    User->>UI: Select model provider
    UI->>LMC: update_build_config(model_field)
    activate LMC
    LMC->>LMC: Detect provider from model
    alt Provider is IBM WatsonX
        LMC->>UI: Show base_url_ibm_watsonx,<br/>project_id (required)
    else Provider is Ollama
        LMC->>UI: Show ollama_base_url
    else Other Provider
        LMC->>UI: Hide provider-specific fields
    end
    LMC-->>UI: Return updated build_config
    deactivate LMC
    User->>LMC: Trigger build_model()
    activate LMC
    LMC->>LMC: Gather provider-specific URLs<br/>(watsonx_url, watsonx_project_id,<br/>ollama_base_url)
    LMC->>get_llm: Call with model + provider params
    deactivate LMC
    activate get_llm
    alt Provider requires validation
        get_llm->>get_llm: Validate required params<br/>(e.g., WatsonX URL/project)
    end
    get_llm->>LLM: Initialize with provider-specific config
    LLM-->>get_llm: Return configured LLM instance
    get_llm-->>LMC: Return LLM
    deactivate get_llm
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches

Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 3 warnings)
✅ Passed checks (3 passed)
Codecov Report

❌ Patch coverage is 30.30%.

❌ Your patch status has failed because the patch coverage (30.30%) is below the target coverage (40.00%). You can increase the patch coverage or adjust the target coverage.

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main   #10979      +/-   ##
==========================================
- Coverage   33.24%   32.33%   -0.92%
==========================================
  Files        1394     1394
  Lines       66040    66068      +28
  Branches     9772     9778       +6
==========================================
- Hits        21958    21365     -593
- Misses      42956    43576     +620
- Partials     1126     1127      +1
==========================================
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (1)
885-1180: Fix wrong JSON exception when parsing httpx responses. `httpx.Response.json` raises ValueError, not json.JSONDecodeError, so the current except block won't catch it and will throw. Catch ValueError (optionally both, for safety — note that json.JSONDecodeError is itself a subclass of ValueError, so catching ValueError alone already covers decode failures).
Apply:
```diff
-        else:
-            try:
-                result = response.json()
-            except json.JSONDecodeError:
-                self.log("Failed to decode JSON response")
-                result = response.text.encode("utf-8")
+        else:
+            try:
+                result = response.json()
+            except (ValueError, json.JSONDecodeError):
+                self.log("Failed to decode JSON response")
+                result = response.text.encode("utf-8")
```

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)
2612-2629: Missing required-flag reset logic when hiding provider-specific fields.

The `update_build_config` method sets `required=True` for WatsonX fields when that provider is selected, but it does not reset the required flags back to `False` when the provider changes to a non-WatsonX option. This can cause validation failures if a user switches providers. Additionally, Ollama fields have no `required` flag management; it's unclear whether `ollama_base_url` should be required when Ollama is selected.

Apply this diff to properly manage required flags:

```diff
 # Show/hide watsonx fields
 is_watsonx = provider == "IBM WatsonX"
 build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
 build_config["project_id"]["show"] = is_watsonx
 if is_watsonx:
     build_config["base_url_ibm_watsonx"]["required"] = True
     build_config["project_id"]["required"] = True
+else:
+    build_config["base_url_ibm_watsonx"]["required"] = False
+    build_config["project_id"]["required"] = False

 # Show/hide Ollama fields
 is_ollama = provider == "Ollama"
 build_config["ollama_base_url"]["show"] = is_ollama
+if is_ollama:
+    build_config["ollama_base_url"]["required"] = True
+else:
+    build_config["ollama_base_url"]["required"] = False
```
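The show/required toggle suggested above can be sketched as a standalone helper. This is an illustration only: `build_config` here is a plain dict of field dicts, a simplified stand-in for the real Langflow build config, and `toggle_provider_fields` is a hypothetical name.

```python
# Standalone sketch of the symmetric show/required toggle described above.
# Assumes build_config is a plain dict of field dicts (illustrative, not
# the actual Langflow structure).

def toggle_provider_fields(build_config: dict, provider: str) -> dict:
    """Keep `required` in sync with `show` for provider-specific fields."""
    is_watsonx = provider == "IBM WatsonX"
    for field in ("base_url_ibm_watsonx", "project_id"):
        build_config[field]["show"] = is_watsonx
        # Assigning the boolean directly resets required=False on switch.
        build_config[field]["required"] = is_watsonx

    is_ollama = provider == "Ollama"
    build_config["ollama_base_url"]["show"] = is_ollama
    build_config["ollama_base_url"]["required"] = is_ollama
    return build_config
```

Assigning `required = is_watsonx` rather than guarding with `if is_watsonx:` makes the flag self-correcting: switching away from a provider clears its requirements without an explicit else branch.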
♻️ Duplicate comments (6)
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)
2541-2542: Same provider toggle/required and typing adjustments as noted in Research Translation Loop. Replicate the fixes:
- Keep WatsonX required flags in sync with visibility.
- Widen field_value type hint (expects list[dict]).
- Prefer StrInput for ollama_base_url.
2863-2864: Duplicate: apply the same adjustments here too. This second LanguageModelComponent block has the same patterns; apply the same fixes for required flags, typing, and the ollama_base_url input.
src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)
952-969: Duplicate of File 1 issues: Missing required-flag reset and Ollama required-flag management.

This file contains the identical LanguageModelComponent code as File 1. The same major issue applies here: `required` flags are not reset when hiding provider-specific fields, and Ollama field requirements are not explicitly managed. See detailed comments in the "Custom Component Generator.json" review for the recommended fix.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)
1560-1561: Same fixes as above for this duplicated LanguageModelComponent block. Please apply the same three changes (reset required flags, normalized provider detection, switch `ollama_base_url` to `StrInput`) here as well.

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

3403-3404: Apply same fixes to this duplicated block.

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
839-1045: LanguageModelComponent: same WatsonX/Ollama wiring + `required` tweak as in Research Agent

This `LanguageModelComponent` uses the same Python code as the ones in the Research Agent starter: provider-specific inputs, `build_model` forwarding WatsonX/Ollama params into `get_llm`, and `update_build_config` toggling field visibility. The earlier suggestion about making WatsonX `required` flags symmetric on provider change applies here as well:

```diff
 is_watsonx = provider == "IBM WatsonX"
 build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
 build_config["project_id"]["show"] = is_watsonx
-if is_watsonx:
-    build_config["base_url_ibm_watsonx"]["required"] = True
-    build_config["project_id"]["required"] = True
+build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+build_config["project_id"]["required"] = is_watsonx
```
🧹 Nitpick comments (26)
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2)
885-1180: Avoid sending a JSON body with GET/DELETE. You always pass json=processed_body; for GET/DELETE, omit the body to prevent unexpected server behavior.
Example:
```diff
-        request_params = {
-            "method": method,
-            "url": url,
-            "headers": headers,
-            "json": processed_body,
-            "timeout": timeout,
-            "follow_redirects": follow_redirects,
-        }
+        request_params = {
+            "method": method,
+            "url": url,
+            "headers": headers,
+            "timeout": timeout,
+            "follow_redirects": follow_redirects,
+        }
+        if method in {"POST", "PUT", "PATCH"} and processed_body:
+            request_params["json"] = processed_body
```
914-930: Safer default for redirects. The template sets Follow Redirects to true by default; the SSRF bypass risk is called out. Default this to false and let users opt in.

```diff
- "value": true
+ "value": false
```

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)
1429-1456: Use text input for URLs instead of MessageInput. `ollama_base_url` shouldn't be a MessageInput (handle-capable). Prefer StrInput or MessageTextInput to avoid type/handle confusion.

```diff
-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
```

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (3)
964-965: Reset required flags when switching away from WatsonX. Mirror the fix suggested in Blog Writer to avoid hidden required fields blocking saves.

```diff
-    if is_watsonx:
-        build_config["base_url_ibm_watsonx"]["required"] = True
-        build_config["project_id"]["required"] = True
+    build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+    build_config["project_id"]["required"] = is_watsonx
```
929-963: Use text input for ollama_base_url. Prefer StrInput/MessageTextInput over MessageInput for a plain URL value.

```diff
-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
```
964-965: Guard against unknown providers. Add an else branch to hide both WatsonX/Ollama fields for unrecognized providers.

```diff
     is_ollama = provider == "Ollama"
     build_config["ollama_base_url"]["show"] = is_ollama
+    if not (is_watsonx or is_ollama):
+        build_config["base_url_ibm_watsonx"]["show"] = False
+        build_config["project_id"]["show"] = False
+        build_config["ollama_base_url"]["show"] = False
```

src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (3)
1407-1408: Reset required flags when provider changes (prevent hidden-required blocking). You set required=True for WatsonX fields but never reset to False when switching away. Hidden-but-required fields can block saves/validation.
Apply within update_build_config:
```diff
-    build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
-    build_config["project_id"]["show"] = is_watsonx
-    if is_watsonx:
-        build_config["base_url_ibm_watsonx"]["required"] = True
-        build_config["project_id"]["required"] = True
+    build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
+    build_config["project_id"]["show"] = is_watsonx
+    # keep required in sync with show
+    build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+    build_config["project_id"]["required"] = is_watsonx
```
1407-1408: Type hint mismatch for field_value. The signature says str but the branch expects list[dict]. Adjust to avoid confusion and future typing issues.

```diff
-from lfx.field_typing import LanguageModel
+from lfx.field_typing import LanguageModel
+from typing import Any
@@
-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | Any, field_name: str | None = None):
```
1407-1408: Prefer StrInput for ollama_base_url. The Ollama URL is plain text; MessageInput implies message payloads. Use StrInput for correctness and simpler serialization.

```diff
-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
             load_from_db=True,
         ),
```

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)
2305-2327: Field visibility logic is clear, but required state should be reset on provider switch.

The dynamic show/hide logic correctly displays and hides provider-specific fields based on the selected provider. However, the required flag is only set to True for WatsonX fields but never reset to False when switching to a different provider. While this doesn't prevent form submission (hidden required fields are typically not validated), it's semantically inconsistent. Consider explicitly resetting required=False for watsonx fields when the provider is not WatsonX.

```python
# Suggested refinement
is_watsonx = provider == "IBM WatsonX"
build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
build_config["base_url_ibm_watsonx"]["required"] = is_watsonx  # set to False when not watsonx
build_config["project_id"]["show"] = is_watsonx
build_config["project_id"]["required"] = is_watsonx  # set to False when not watsonx

is_ollama = provider == "Ollama"
build_config["ollama_base_url"]["show"] = is_ollama
```

src/lfx/src/lfx/components/models_and_agents/embedding_model.py (1)
32-59: Align WatsonX detection with `_build_kwargs` and harden `field_value` handling

The new provider-aware toggling is on the right track, but a couple of small tweaks would make it more robust and consistent:

- `_build_kwargs` treats both "IBM WatsonX" and "IBM watsonx.ai" as WatsonX providers, while `update_build_config` only checks for "IBM WatsonX". This can lead to WatsonX-specific fields not being shown/required when the provider label is "IBM watsonx.ai".
- `field_value[0]` is assumed to be a dict; a defensive check avoids surprises if the UI ever sends a different shape.
- Resetting `required` when `is_watsonx` becomes `False` keeps the build config state clean when switching providers.

You could address all three with something like:

```diff
-        # Show/hide provider-specific fields based on selected model
-        if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0:
-            selected_model = field_value[0]
-            provider = selected_model.get("provider", "")
-
-            # Show/hide watsonx fields
-            is_watsonx = provider == "IBM WatsonX"
+        # Show/hide provider-specific fields based on selected model
+        if field_name == "model" and isinstance(field_value, list) and field_value:
+            selected_model = field_value[0] if isinstance(field_value[0], dict) else {}
+            provider = selected_model.get("provider", "")
+
+            # Show/hide watsonx fields (support both provider labels used in _build_kwargs)
+            is_watsonx = provider in {"IBM WatsonX", "IBM watsonx.ai"}
             build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
             build_config["project_id"]["show"] = is_watsonx
             build_config["truncate_input_tokens"]["show"] = is_watsonx
             build_config["input_text"]["show"] = is_watsonx
-            if is_watsonx:
-                build_config["base_url_ibm_watsonx"]["required"] = True
-                build_config["project_id"]["required"] = True
+            build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+            build_config["project_id"]["required"] = is_watsonx
```

This keeps the UI logic resilient to provider-label variations and avoids stale `required` flags when switching away from WatsonX.

src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (3)
1742-1743: Type hint mismatch for field_value. The signature uses str but the code expects list[dict] for model. Widen the type to avoid confusion and tooling errors.

```diff
-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | str | None, field_name: str | None = None):
```
1742-1743: Use imported constant for WatsonX default URL instead of hardcoding. Keeps defaults in one place.

```diff
-        url_value = (
-            self.base_url_ibm_watsonx
-            if hasattr(self, "base_url_ibm_watsonx") and self.base_url_ibm_watsonx
-            else "https://us-south.ml.cloud.ibm.com"
-        )
+        url_value = (
+            self.base_url_ibm_watsonx
+            if hasattr(self, "base_url_ibm_watsonx") and self.base_url_ibm_watsonx
+            else IBM_WATSONX_URLS[0]
+        )
```
1742-1743: Optional: clarify Ollama UX by hiding the API key and surfacing the base URL. Ollama doesn't need an API key; hiding it reduces confusion and ensures api_base is visible for local hosts.

```diff
-    # Show/hide watsonx fields
+    # Show/hide watsonx fields
     is_watsonx = "watsonx" in prov
     ...
+    # Ollama-specific UI
+    is_ollama = prov == "ollama"
+    if "api_key" in build_config:
+        build_config["api_key"]["show"] = not is_ollama
+        build_config["api_key"]["required"] = False
+    if "api_base" in build_config:
+        # Ensure base URL is visible so users can point to non-default Ollama hosts
+        build_config["api_base"]["show"] = True
```

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (3)
1313-1313: Reset required flags when switching away from WatsonX. Hidden-but-required fields can fail validation. Ensure required=False when not selected.
Apply within the same block:
```diff
-    build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
-    build_config["project_id"]["show"] = is_watsonx
-    build_config["base_url_ibm_watsonx"]["required"] = bool(is_watsonx)
-    build_config["project_id"]["required"] = bool(is_watsonx)
+    for k in ("base_url_ibm_watsonx", "project_id"):
+        build_config[k]["show"] = is_watsonx
+        build_config[k]["required"] = is_watsonx
```
1313-1313: Use StrInput for URLs instead of MessageInput. ollama_base_url is a URL/string, not a message. Switch to StrInput to avoid message-type semantics and keep validation simple.
Apply this input change:
```diff
-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
```
1313-1313: Fix type hint for field_value. update_build_config treats field_value as list[dict] when field_name == "model" but the signature says str.
Use a union hint:
```diff
-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | str, field_name: str | None = None):
```

src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (3)
988-988: Reset required flags when WatsonX is not selected. Prevent hidden required-field validation by synchronously clearing required when not WatsonX.
Adopt the for-loop pattern to set show/required together based on is_watsonx.
988-988: Prefer StrInput for ollama_base_url. Change MessageInput → StrInput for the URL configuring Ollama.
Update the input declaration accordingly, mirroring the prior diff.
988-988: Correct the update_build_config type hint. Allow list[dict] | str for field_value to match actual usage.
Adjust the function signature as previously suggested.
src/lfx/src/lfx/components/models_and_agents/language_model.py (2)
106-106: Consider updating the type annotation for `field_value`. The parameter is typed as str but the code handles it as a list when `field_name == "model"`. Consider using a union type for clarity.

```diff
-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: str | list, field_name: str | None = None):
```
119-121: Consider adding a defensive check for the model structure. If `field_value[0]` is not a dict, calling `.get()` will raise an AttributeError. While the UI should always provide the expected structure, a defensive check improves robustness.

```diff
 # Show/hide provider-specific fields based on selected model
-if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0:
+if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0 and isinstance(field_value[0], dict):
     selected_model = field_value[0]
     provider = selected_model.get("provider", "")
```

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2)
1235-1236: Normalize provider detection to avoid brittle string equality. An exact match on "IBM WatsonX" may miss variants (e.g., "IBM watsonx", "watsonx.ai"). Recommend a case-insensitive contains check for robustness.

```diff
-            provider = selected_model.get("provider", "")
-            # Show/hide watsonx fields
-            is_watsonx = provider == "IBM WatsonX"
+            provider = (selected_model.get("provider", "") or "").strip()
+            provider_lc = provider.lower()
+            # Show/hide watsonx fields
+            is_watsonx = "watsonx" in provider_lc
 ...
-            # Show/hide Ollama fields
-            is_ollama = provider == "Ollama"
+            # Show/hide Ollama fields
+            is_ollama = provider_lc == "ollama"
```
1235-1236: Use a text input for `ollama_base_url` (not a Handle/Message input). MessageInput is a handle input for graph data; URLs should be plain text. Use StrInput (or MessageTextInput) to avoid type/UX confusion and ensure persistence is a simple string.

```diff
-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
```

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)
3076-3077: Optional: Align "API Key" labeling for multi-provider context. The embedded template fields still label the key as "OpenAI API Key". Consider standardizing to "API Key" for provider-agnostic UX (applies to these components' template blocks). Low priority.
Also applies to: 3403-3404
src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)
1988-2517: LanguageModelComponent WatsonX/Ollama wiring looks good; consider symmetric `required` handling

The new inputs and `build_model` call correctly pass `base_url_ibm_watsonx`/`project_id`/`ollama_base_url` into `get_llm`, which matches the new provider-specific logic in `unified_models.get_llm`.

In `update_build_config`, you only ever set the WatsonX fields to required when WatsonX is selected and never clear that flag when switching to another provider. It's a small UX/validation risk if hidden-but-required fields are still honored by the frontend.

You can make this symmetric and self-correcting with:

```diff
 # Show/hide watsonx fields
 is_watsonx = provider == "IBM WatsonX"
 build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
 build_config["project_id"]["show"] = is_watsonx
-if is_watsonx:
-    build_config["base_url_ibm_watsonx"]["required"] = True
-    build_config["project_id"]["required"] = True
+# Only require these fields when WatsonX is actually selected
+build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+build_config["project_id"]["required"] = is_watsonx
```

The same comment applies to the second `LanguageModelComponent` definition in this file.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (26)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
- src/lfx/src/lfx/base/models/unified_models.py (2 hunks)
- src/lfx/src/lfx/base/models/watsonx_constants.py (1 hunks)
- src/lfx/src/lfx/components/models_and_agents/embedding_model.py (2 hunks)
- src/lfx/src/lfx/components/models_and_agents/language_model.py (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-11-24T19:47:28.997Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-11-24T19:47:28.997Z
Learning: Applies to src/backend/tests/**/*.py : Test component build config updates by calling `to_frontend_node()` to get the node template, then calling `update_build_config()` to apply configuration changes
Applied to files:
src/lfx/src/lfx/components/models_and_agents/embedding_model.py
📚 Learning: 2025-11-24T19:46:09.104Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-11-24T19:46:09.104Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new components to the appropriate subdirectory under `src/backend/base/langflow/components/` (agents/, data/, embeddings/, input_output/, models/, processing/, prompts/, tools/, or vectorstores/)
Applied to files:
src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json
src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json
📚 Learning: 2025-06-26T19:43:18.260Z
Learnt from: ogabrielluiz
Repo: langflow-ai/langflow PR: 0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Applied to files:
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
📚 Learning: 2025-08-11T16:52:26.755Z
Learnt from: edwinjosechittilappilly
Repo: langflow-ai/langflow PR: 9336
File: src/backend/base/langflow/base/models/openai_constants.py:29-33
Timestamp: 2025-08-11T16:52:26.755Z
Learning: The "gpt-5-chat-latest" model in the OpenAI models configuration does not support tool calling, so tool_calling should be set to False for this model in src/backend/base/langflow/base/models/openai_constants.py.
Applied to files:
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/models_and_agents/embedding_model.py (1)
src/lfx/src/lfx/base/models/unified_models.py (1)
update_model_options_in_build_config(932-1086)
src/lfx/src/lfx/base/models/watsonx_constants.py (1)
src/lfx/src/lfx/base/models/model_metadata.py (1)
create_model_metadata(20-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: Lint Backend / Run Mypy (3.12)
- GitHub Check: Lint Backend / Run Mypy (3.13)
- GitHub Check: Lint Backend / Run Mypy (3.11)
- GitHub Check: Lint Backend / Run Mypy (3.10)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
- GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
- GitHub Check: Run Backend Tests / LFX Tests - Python 3.10
- GitHub Check: Test Docker Images / Test docker images
- GitHub Check: Test Starter Templates
- GitHub Check: Optimize new Python code in this PR
- GitHub Check: test-starter-projects
- GitHub Check: Update Component Index
🔇 Additional comments (28)
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2)
1463-1464: Build passes provider URLs correctly — LGTM. Forwarding watsonx_url, watsonx_project_id, ollama_base_url into get_llm is correct.
1463-1464: Normalize provider name check. The string compare uses "IBM WatsonX". Ensure it matches the actual provider label returned by options; otherwise fields won't show.

```bash
#!/bin/bash
# Inspect provider field values in language model options payload
rg -nP '"provider"\s*:\s*".+?"' src -C1
```

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)
964-965: Build passes provider URLs correctly — LGTM.

Verified: the `get_llm()` function in `unified_models.py` correctly accepts and processes all provider-specific parameters (`watsonx_url`, `watsonx_project_id`, `ollama_base_url`). Parameter forwarding is implemented with proper metadata-driven mapping and validation.

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2)
1531-1531: Dynamic field visibility correctly separates model option updates from provider-specific UI logic.

The refactored `update_build_config()` cleanly separates concerns: first updating model options via `update_model_options_in_build_config()`, then handling provider-specific field visibility in a separate conditional block. This improves readability and maintainability.

Also applies to: 1858-1858

1531-1531: Build model correctly passes provider-specific parameters to get_llm.

The `build_model()` method properly extracts and passes `watsonx_url`, `watsonx_project_id`, and `ollama_base_url` using safe `getattr()` calls with None defaults. The `get_llm()` function signature in `unified_models.py` accepts all three parameters as keyword-only arguments, each with None as the default value, confirming this defensive approach is appropriate and prevents AttributeError if components are instantiated without setting these fields.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (2)
1367-1367: WatsonX-specific fields correctly marked as required when provider is selected.

The logic in `update_build_config()` marks `base_url_ibm_watsonx` and `project_id` as required when the IBM WatsonX provider is active (the lines checking `if is_watsonx: build_config[...]["required"] = True`), ensuring users cannot accidentally omit required configuration for that provider.

1367-1367: DEFAULT_OLLAMA_URL is properly defined and accessible.

The constant is correctly defined at the module level in `src/lfx/src/lfx/components/models_and_agents/language_model.py` line 13 as `DEFAULT_OLLAMA_URL = "http://localhost:11434"` and is properly used within the same module. The Memory Chatbot.json file appears to be a starter project template containing serialized component code, so there is no accessibility issue.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)
2241-2318: Well-structured provider-specific field definitions.The new input fields (base_url_ibm_watsonx, project_id, ollama_base_url) are correctly defined with appropriate input types, defaults, and visibility controls. Field names are consistent across the inputs list and template configuration.
2275-2304: Correct provider parameter mapping in build_model().The method safely passes provider-specific parameters (watsonx_url, watsonx_project_id, ollama_base_url) to get_llm() using getattr() with safe None defaults. Field name mappings are consistent and correctly align the UI input names with the get_llm() parameter names.
2275-2330: Good defensive programming and consistent implementation details.The code correctly imports IBM_WATSONX_URLS, defines DEFAULT_OLLAMA_URL with a sensible fallback, and uses getattr() with safe defaults throughout. The implementation is consistent across the component and aligns with the broader provider-aware configuration pattern described in the PR.
2305-2327: Verify provider name standardization and matching logic in update_build_config().The review comment references case-sensitive string matching for provider names ("IBM WatsonX", "Ollama"), but the implementation details and upstream provider name standardization could not be verified. Ensure that provider names are standardized consistently throughout the system and that provider name matching uses case-insensitive comparison or a safe lookup strategy if provider names can vary in format.
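One way to harden the matching, sketched here without any Langflow imports, is to normalize both sides before comparing so that casing and spacing variants (e.g. "IBM WatsonX" vs. "IBM Watsonx") cannot silently fail:

```python
def normalize_provider(name: str) -> str:
    """Collapse whitespace and case so 'IBM WatsonX' and 'IBM Watsonx' compare equal."""
    return "".join(name.split()).casefold()

def is_provider(selected: str, canonical: str) -> bool:
    """Case- and spacing-insensitive provider comparison."""
    return normalize_provider(selected) == normalize_provider(canonical)

assert is_provider("IBM Watsonx", "IBM WatsonX")
assert is_provider("ollama", "Ollama")
assert not is_provider("OpenAI", "Ollama")
```

Note that genuinely different labels such as "IBM watsonx.ai" would still not match and would need an explicit alias table.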
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (4)
1323-1330: Good defensive approach in build_model. The method correctly uses
`getattr` with `None` defaults to safely pass provider-specific parameters to `get_llm`. This pattern handles missing attributes gracefully and avoids attribute errors.
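The defensive pattern can be reduced to the following sketch. `get_llm` here is a stub that just echoes its keyword arguments; the real factory in unified_models.py builds a provider client, and the model name is an arbitrary stand-in:

```python
def get_llm(model, *, watsonx_url=None, watsonx_project_id=None, ollama_base_url=None):
    # Stub standing in for the real factory: echo the forwarded kwargs.
    return {
        "model": model,
        "watsonx_url": watsonx_url,
        "watsonx_project_id": watsonx_project_id,
        "ollama_base_url": ollama_base_url,
    }

class Component:
    """Minimal stand-in: provider-specific fields may be absent on the instance."""
    model = "gpt-4o"  # arbitrary example value

    def build_model(self):
        return get_llm(
            self.model,
            watsonx_url=getattr(self, "base_url_ibm_watsonx", None),
            watsonx_project_id=getattr(self, "project_id", None),
            ollama_base_url=getattr(self, "ollama_base_url", None),
        )

llm = Component().build_model()  # no AttributeError even though the fields are unset
```

Because `get_llm` takes these as keyword-only parameters defaulting to `None`, forwarding `None` for an unused provider is a no-op rather than an error.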
1332-1350: Excellent separation of concerns in update_build_config.The method cleanly separates model option updates from provider-specific field logic. The provider visibility logic is well-structured, with safe defaults (fields hidden if provider is unrecognized) and proper required-field toggles. The symmetrical handling of WatsonX and Ollama providers improves readability.
1310-1328: Well-designed input configuration for provider-specific fields. The provider-specific fields are properly hidden by default, have sensible defaults/options (IBM URLs populated, Ollama localhost default), and use appropriate input types. The project_id is initially optional but will be marked required when needed via
`update_build_config`, which is a clean pattern for conditional requirements.
1299-1299: Verify provider string consistency and field_value structure assumptions in update_build_config. The code assumes
`field_value` is a list with at least one dictionary element when `field_name == "model"`, and provider string matching uses "IBM WatsonX" and "Ollama". However, `watsonx_constants.py` defines provider metadata as "IBM Watsonx" (without the capital X), creating a potential mismatch. Additionally, the code assumes `field_value[0]` contains a "provider" key without defensive checks; if the structure differs, this will fail silently or raise a KeyError.
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)
2628-2629: IBM_WATSONX_URLS import is available and correctly configured. The
`IBM_WATSONX_URLS` constant is properly defined in `lfx.base.models.watsonx_constants` (lines 65-72) as a list of six valid API endpoint URLs. The import statement is correct and functional.
src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)
1450-1467: ⚠️ Missing field definitions for new WatsonX and Ollama input fields in component template. The Python code in the
`code` field declares three new inputs (`base_url_ibm_watsonx`, `project_id`, and `ollama_base_url`), but the corresponding field definitions appear to be missing from the `template` section. This causes:
- UI will not display these fields – Users won't see WatsonX/Ollama configuration options because they're not defined in the template.
- Runtime KeyError risk – The
`update_build_config` method accesses `build_config["base_url_ibm_watsonx"]`, `build_config["project_id"]`, and `build_config["ollama_base_url"]`, which will raise a `KeyError` if they are not defined in the template.

Add field definitions to the
`template` section of each LanguageModelComponent instance for all three new fields, ensuring they match the input declarations in the Python code (DropdownInput for `base_url_ibm_watsonx`, StrInput for `project_id`, MessageInput for `ollama_base_url`).
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
3258-3274: Provider-specific wiring for LanguageModelComponent looks correct
`build_model` now forwards `watsonx_url`, `watsonx_project_id`, and `ollama_base_url` into `get_llm`, and `update_build_config` cleanly composes `update_model_options_in_build_config` with simple provider-based visibility/required toggles for WatsonX and Ollama. This is consistent with the shared unified_models helpers and should give WatsonX/Ollama the parameters they need without impacting other providers.
src/lfx/src/lfx/base/models/watsonx_constants.py (2)
60-61: LGTM! Good consolidation of model metadata. The combination of LLM and embedding models into
`WATSONX_MODELS_DETAILED` is well-structured, and the comment clearly explains the purpose. This provides a convenient single constant for accessing all WatsonX models.
3-25: No changes needed. The pattern of marking multiple models with `default=True` is intentional and consistent across the entire codebase. The `default` flag indicates models are "default/recommended options" for user visibility, not selection logic. Multiple defaults per provider are supported by design: the selection mechanism prioritizes user preferences first, then falls back to the first available model. OpenAI and other providers follow the same pattern.
src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (1)
1742-1743: Provider labels and param mappings are correctly aligned. Verification confirms
`get_embedding_model_options` returns `provider="IBM WatsonX"` with metadata mapping `url` and `project_id` correctly in the param_mapping dictionary (unified_models.py:650-657). The UI toggle (embedding_model.py:50) correctly checks `provider == "IBM WatsonX"`, and the kwargs building (embedding_model.py:246, 249) correctly accesses `param_mapping["url"]` and `param_mapping["project_id"]` for WatsonX embeddings. The defensive check accepting both "IBM WatsonX" and "IBM watsonx.ai" variants (line 238) provides extra robustness but is not strictly needed given the unified provider label used throughout the model metadata.
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)
1313-1313: Validate IBM_WATSONX_URLS shape for DropdownInput. Ensure `IBM_WATSONX_URLS` is structured as a simple list of strings (not dicts with label/value pairs), since DropdownInput expects
`options` to be a flat list of displayable strings. The current code accesses `IBM_WATSONX_URLS[0]` as the `value`, which assumes the constant contains indexable string elements.
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (2)
988-988: No compatibility issues detected. IBM_WATSONX_URLS is a simple list of strings that correctly matches DropdownInput's expected options parameter type (list[str]). The template usage is valid.
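The shape concern above boils down to a one-line guard, sketched here without importing DropdownInput itself (the URL used is only an example):

```python
# A dropdown's options must be a flat list of strings, not label/value dicts.
def valid_dropdown_options(options) -> bool:
    return isinstance(options, list) and all(isinstance(o, str) for o in options)

assert valid_dropdown_options(["https://us-south.ml.cloud.ibm.com"])
assert not valid_dropdown_options([{"label": "US South", "value": "..."}])
```

A check like this could live next to the constant's definition or in a unit test to catch a future change to the constant's shape.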
988-988: Remove this review comment: Document Q&A.json uses a safe delegation pattern. The code in Document Q&A.json (line 988) delegates to
`update_model_options_in_build_config` and does not contain hardcoded provider string comparisons like `provider == "IBM WatsonX"` or `provider == "Ollama"`. The template has already been refactored to avoid the brittle equality checks. The concern about provider matching exists in the actual source components (e.g., `src/lfx/src/lfx/components/models_and_agents/language_model.py`), not in this starter template file.
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)
1969-1969: LGTM - Starter project updated with provider-aware model configuration. The embedded LanguageModelComponent code correctly includes the new WatsonX and Ollama provider support, aligning with the broader PR changes across starter projects.
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (1)
1271-1271: LGTM - All three LanguageModelComponent nodes updated consistently. The embedded code for all three language model instances in the prompt chain correctly includes the provider-specific configuration for WatsonX and Ollama.
Also applies to: 1593-1593, 1914-1914
src/lfx/src/lfx/components/models_and_agents/language_model.py (1)
94-104: LGTM - Safe attribute access for provider-specific parameters. Using
`getattr()` with a `None` default handles cases where attributes may not be set, preventing an `AttributeError` when non-WatsonX/Ollama providers are used.
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
Resolved review threads:
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (outdated, resolved)
src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (outdated, resolved)
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (outdated, resolved)
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (outdated, resolved)
src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (resolved)
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (outdated, resolved)
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (outdated, resolved)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
This pull request updates the code logic for the
`LanguageModelComponent` in the Basic Prompt Chaining.json starter project to better support provider-specific configuration for IBM WatsonX and Ollama models. The changes improve how the component dynamically shows or hides input fields based on the selected model provider, and ensure that provider-specific parameters are passed when building the language model.

Provider-specific configuration enhancements:
- The `build_model` method now passes `watsonx_url`, `watsonx_project_id`, and `ollama_base_url` to `get_llm`, enabling correct configuration for IBM WatsonX and Ollama providers. [1] [2]
- The `update_build_config` method has been expanded to dynamically show or hide fields for WatsonX and Ollama based on the selected model provider, and to require WatsonX-specific fields when that provider is chosen. [1] [2]

General improvements:
Summary by CodeRabbit
Release Notes