
fix: Handle params required for watsonX #10979

Merged
erichare merged 21 commits into main from fix-model-providers-watsonx on Dec 23, 2025

Conversation


erichare (Collaborator) commented Dec 11, 2025

This pull request updates the code logic for the LanguageModelComponent in the Basic Prompt Chaining.json starter project to better support provider-specific configuration for IBM WatsonX and Ollama models. The changes improve how the component dynamically shows or hides input fields based on the selected model provider, and ensure that provider-specific parameters are passed when building the language model.

Provider-specific configuration enhancements:

  • The build_model method now passes watsonx_url, watsonx_project_id, and ollama_base_url to get_llm, enabling correct configuration for IBM WatsonX and Ollama providers. [1] [2]
  • The update_build_config method has been expanded to dynamically show or hide fields for WatsonX and Ollama based on the selected model provider, and to require WatsonX-specific fields when that provider is chosen. [1] [2]

General improvements:

  • The logic for updating model options in the build configuration is now separated from the provider-specific field visibility logic, making the code easier to read and maintain. [1] [2]

Summary by CodeRabbit

Release Notes

  • New Features
    • Added IBM WatsonX language model provider support with provider-specific configuration fields
    • Added Ollama language model provider support with dynamic configuration
    • Updated embedding models to support IBM WatsonX integration
    • Enhanced starter project templates with multi-provider language model support
    • Improved configuration interface with dynamic field visibility based on selected provider



coderabbitai bot commented Dec 11, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This PR extends the Langflow platform to support IBM WatsonX and Ollama LLM providers across 22 starter project templates and the core model infrastructure. Changes include adding provider-specific configuration fields (URLs, project IDs), updating the LLM builder to accept these parameters, and implementing dynamic UI field visibility based on the selected provider.

Changes

Cohort / File(s) Change Summary
Starter Project Template Updates
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json, Basic Prompting.json, Blog Writer.json, Custom Component Generator.json, Document Q&A.json, Financial Report Parser.json, Hybrid Search RAG.json, Image Sentiment Analysis.json, Instagram Copywriter.json, Market Research.json, Meeting Summary.json, Memory Chatbot.json, Portfolio Website Code Generator.json, Research Agent.json, Research Translation Loop.json, SEO Keyword Generator.json, Text Sentiment Analysis.json, Twitter Thread Generator.json, Vector Store RAG.json, Youtube Analysis.json
Updated embedded LanguageModelComponent code to support IBM WatsonX and Ollama providers via provider-specific URL/project ID parameters and dynamic field visibility in update_build_config based on selected provider.
Starter Project Metadata Updates
src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json, Pokédex Agent.json
Updated component code hashes and metadata for EmbeddingModelComponent and APIRequestComponent.
Core LLM Builder
src/lfx/src/lfx/base/models/unified_models.py
Extended get_llm function signature to accept watsonx_url, watsonx_project_id, and ollama_base_url as optional parameters; added provider-specific parameter injection logic for WatsonX (with validation) and Ollama.
WatsonX Configuration Constants
src/lfx/src/lfx/base/models/watsonx_constants.py
Added new WATSONX_DEFAULT_LLM_MODELS constant containing IBM WatsonX LLM metadata; updated WATSONX_MODELS_DETAILED to combine LLM and embedding models.
Language Model Component
src/lfx/src/lfx/components/models_and_agents/language_model.py
Updated build_model to pass provider-specific URLs to get_llm; enhanced update_build_config to dynamically show/hide WatsonX and Ollama fields and enforce required flags based on provider selection.
Embedding Model Component
src/lfx/src/lfx/components/models_and_agents/embedding_model.py
Refactored update_build_config to explicitly assign build_config result and apply provider-specific field visibility logic for WatsonX fields.
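
The dynamic visibility behavior summarized above can be sketched as follows. The field names (`base_url_ibm_watsonx`, `project_id`, `ollama_base_url`) are taken from the diffs quoted later in this review; the function body is an illustrative reconstruction, not the merged implementation.

```python
# Illustrative reconstruction of the provider-aware field toggling
# described in the change summary; not the merged code.

def update_build_config(build_config: dict, field_value, field_name=None) -> dict:
    if field_name == "model" and isinstance(field_value, list) and field_value:
        selected = field_value[0] if isinstance(field_value[0], dict) else {}
        provider = selected.get("provider", "")
        # WatsonX fields: toggle show and required together so the flags
        # stay in sync when the user switches providers.
        is_watsonx = provider == "IBM WatsonX"
        for key in ("base_url_ibm_watsonx", "project_id"):
            build_config[key]["show"] = is_watsonx
            build_config[key]["required"] = is_watsonx
        # Ollama only needs its base URL surfaced.
        build_config["ollama_base_url"]["show"] = provider == "Ollama"
    return build_config
```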

Sequence Diagram

sequenceDiagram
    participant User
    participant UI as Build Config
    participant LMC as LanguageModelComponent
    participant get_llm as get_llm Factory
    participant LLM as LLM Provider<br/>(WatsonX/Ollama/OpenAI)

    User->>UI: Select model provider
    UI->>LMC: update_build_config(model_field)
    activate LMC
    LMC->>LMC: Detect provider from model
    alt Provider is IBM WatsonX
        LMC->>UI: Show base_url_ibm_watsonx,<br/>project_id (required)
    else Provider is Ollama
        LMC->>UI: Show ollama_base_url
    else Other Provider
        LMC->>UI: Hide provider-specific fields
    end
    LMC-->>UI: Return updated build_config
    deactivate LMC
    
    User->>LMC: Trigger build_model()
    activate LMC
    LMC->>LMC: Gather provider-specific URLs<br/>(watsonx_url, watsonx_project_id,<br/>ollama_base_url)
    LMC->>get_llm: Call with model + provider params
    deactivate LMC
    activate get_llm
    alt Provider requires validation
        get_llm->>get_llm: Validate required params<br/>(e.g., WatsonX URL/project)
    end
    get_llm->>LLM: Initialize with provider-specific config
    LLM-->>get_llm: Return configured LLM instance
    get_llm-->>LMC: Return LLM
    deactivate get_llm

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Core logic changes are straightforward (provider detection, field visibility toggling, parameter injection) but applied consistently across 22 starter project files and 4 Python modules
  • Pattern repetition reduces per-file complexity: The same provider-aware logic is replicated across starter projects, making individual reviews faster once the pattern is understood
  • Notable areas requiring attention:
    • Validation logic in get_llm for WatsonX (URL/project_id requirements)
    • Provider detection logic in update_build_config to ensure all providers are correctly mapped
    • Consistency of field names and visibility rules across all 22 starter projects
    • New WATSONX_DEFAULT_LLM_MODELS constant definition and how it integrates with existing model lists
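
The constants layout described in the last bullet might look like the following sketch. The entry names and the `WATSONX_EMBEDDING_MODELS` identifier are placeholders invented for illustration, not the actual contents of watsonx_constants.py.

```python
# Placeholder sketch of how WATSONX_MODELS_DETAILED could combine LLM and
# embedding metadata; model names here are invented for illustration.

WATSONX_DEFAULT_LLM_MODELS = [
    {"name": "example-watsonx-llm", "provider": "IBM WatsonX", "type": "llm"},
]
WATSONX_EMBEDDING_MODELS = [
    {"name": "example-watsonx-embedding", "provider": "IBM WatsonX", "type": "embedding"},
]
# The PR updates WATSONX_MODELS_DETAILED to cover both model kinds.
WATSONX_MODELS_DETAILED = WATSONX_DEFAULT_LLM_MODELS + WATSONX_EMBEDDING_MODELS
```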

Possibly related PRs

  • Both PRs modify LanguageModelComponent implementation in starter templates and add IBM WatsonX and Ollama support with provider-specific field handling and model construction wiring.
  • Both PRs introduce provider-specific URL/configuration support (watsonx, ollama) and dynamic UI field visibility in embedding/model components.
  • Both PRs extend the unified model infrastructure by adding provider-specific parameters to model initialization and updating watsonx-related constants.

Suggested labels

enhancement, multi-provider-support, ui/ux

Suggested reviewers

  • edwinjosechittilappilly

Pre-merge checks and finishing touches

Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 3 warnings)
  • Test Coverage For New Implementations — ❌ Error: The PR deleted the test file test_mcp_component_output.py without adding new test coverage for the WatsonX and Ollama provider functionality. Resolution: add comprehensive tests for the new WatsonX/Ollama parameters in unified_models.py, language_model_component.py, and embedding_model_component.py, with validation and provider-specific field coverage.
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Quality And Coverage — ⚠️ Warning: The PR introduces significant functional changes to language_model.py, embedding_model.py, unified_models.py, and watsonx_constants.py (new WatsonX and Ollama provider support, conditional field visibility, and validation logic) but provides zero test coverage. Resolution: add comprehensive pytest coverage for (1) get_llm() parameter injection and validation; (2) LanguageModelComponent field visibility and required-flag toggling; (3) provider-specific logic across all components; (4) error handling and edge cases such as missing required fields.
  • Test File Naming And Structure — ⚠️ Warning: The pull request modifies critical production code in multiple files without corresponding test coverage for the new WatsonX/Ollama parameters and validation logic. Resolution: create comprehensive tests for the new get_llm parameters, update_build_config provider-specific logic, and watsonx_constants in the test suite.
✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the title "fix: Handle params required for watsonX" directly addresses the PR's core objective of passing required parameters to WatsonX and handling provider-specific configurations, making it a concise and accurate summary of the primary change.
  • Excessive Mock Usage Warning — ✅ Passed: the pull request does not introduce excessive mock usage in tests; mocks are appropriately targeted at external dependencies while core logic is tested with real objects.


github-actions bot added the bug ("Something isn't working") label Dec 11, 2025
github-actions bot added and then removed the bug ("Something isn't working") label Dec 11, 2025

codecov bot commented Dec 11, 2025

Codecov Report

❌ Patch coverage is 30.30303% with 23 lines in your changes missing coverage. Please review.
✅ Project coverage is 32.33%. Comparing base (bd29a42) to head (3afc76b).
⚠️ Report is 1 commit behind head on main.

Files with missing lines:
  • src/lfx/src/lfx/base/models/unified_models.py — 25.80% patch coverage, 23 lines missing ⚠️

❌ Your patch status has failed because the patch coverage (30.30%) is below the target coverage (40.00%). You can increase the patch coverage or adjust the target coverage.
❌ Your project status has failed because the head coverage (39.48%) is below the target coverage (60.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files


@@            Coverage Diff             @@
##             main   #10979      +/-   ##
==========================================
- Coverage   33.24%   32.33%   -0.92%     
==========================================
  Files        1394     1394              
  Lines       66040    66068      +28     
  Branches     9772     9778       +6     
==========================================
- Hits        21958    21365     -593     
- Misses      42956    43576     +620     
- Partials     1126     1127       +1     
Flag coverage:
  • backend — 49.08% <ø> (-3.42%) ⬇️
  • frontend — 15.37% <ø> (ø)
  • lfx — 39.48% <30.30%> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown.

Files with missing lines (coverage Δ):
  • src/lfx/src/lfx/base/models/watsonx_constants.py — 100.00% <100.00%> (ø)
  • src/lfx/src/lfx/base/models/unified_models.py — 23.23% <25.80%> (-0.07%) ⬇️

... and 61 files with indirect coverage changes



coderabbitai bot (Contributor) left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (1)

885-1180: Fix wrong JSON exception when parsing httpx responses.

httpx.Response.json raises ValueError, not json.JSONDecodeError. Current except block won’t catch it and will throw. Catch ValueError (optionally both for safety).

Apply:

-            else:
-                try:
-                    result = response.json()
-                except json.JSONDecodeError:
-                    self.log("Failed to decode JSON response")
-                    result = response.text.encode("utf-8")
+            else:
+                try:
+                    result = response.json()
+                except (ValueError, json.JSONDecodeError):
+                    self.log("Failed to decode JSON response")
+                    result = response.text.encode("utf-8")
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)

2612-2629: Missing required-flag reset logic when hiding provider-specific fields.

The update_build_config method sets required=True for WatsonX fields when that provider is selected, but it does not reset the required flags back to False when the provider changes to a non-WatsonX option. This can cause validation failures if a user switches providers.

Additionally, Ollama fields have no required flag management—it's unclear whether ollama_base_url should be required when Ollama is selected.

Apply this diff to properly manage required flags:

  # Show/hide watsonx fields
  is_watsonx = provider == "IBM WatsonX"
  build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
  build_config["project_id"]["show"] = is_watsonx
  if is_watsonx:
      build_config["base_url_ibm_watsonx"]["required"] = True
      build_config["project_id"]["required"] = True
+ else:
+     build_config["base_url_ibm_watsonx"]["required"] = False
+     build_config["project_id"]["required"] = False

  # Show/hide Ollama fields
  is_ollama = provider == "Ollama"
  build_config["ollama_base_url"]["show"] = is_ollama
+ if is_ollama:
+     build_config["ollama_base_url"]["required"] = True
+ else:
+     build_config["ollama_base_url"]["required"] = False
♻️ Duplicate comments (6)
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)

2541-2542: Same provider toggle/required and typing adjustments as noted in Research Translation Loop.

Replicate the fixes:

  • Keep WatsonX required flags in sync with visibility.
  • Widen field_value type hint (expects list[dict]).
  • Prefer StrInput for ollama_base_url.

2863-2864: Duplicate: apply the same adjustments here too.

This second LanguageModelComponent block has the same patterns; apply the same fixes for required flags, typing, and ollama_base_url input.

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)

952-969: Duplicate of File 1 issues: Missing required-flag reset and Ollama required-flag management.

This file contains the identical LanguageModelComponent code as File 1. The same major issue applies here: required flags are not reset when hiding provider-specific fields, and Ollama field requirements are not explicitly managed. See detailed comments in the "Custom Component Generator.json" review for the recommended fix.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)

1560-1561: Same fixes as above for this duplicated LanguageModelComponent block.

Please apply the same three changes (reset required flags, normalized provider detection, switch ollama_base_url to StrInput) here as well.

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

3403-3404: Apply same fixes to this duplicated block.

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)

839-1045: LanguageModelComponent: same WatsonX/Ollama wiring + required tweak as in Research Agent

This LanguageModelComponent uses the same Python code as the ones in the Research Agent starter: provider‑specific inputs, build_model forwarding WatsonX/Ollama params into get_llm, and update_build_config toggling field visibility.

The earlier suggestion about making WatsonX required flags symmetric on provider change applies here as well:

        is_watsonx = provider == "IBM WatsonX"
        build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
        build_config["project_id"]["show"] = is_watsonx
-        if is_watsonx:
-            build_config["base_url_ibm_watsonx"]["required"] = True
-            build_config["project_id"]["required"] = True
+        build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+        build_config["project_id"]["required"] = is_watsonx
🧹 Nitpick comments (26)
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2)

885-1180: Avoid sending JSON body with GET/DELETE.

You always pass json=processed_body. For GET/DELETE, omit body to prevent unexpected server behavior.

Example:

-            request_params = {
-                "method": method,
-                "url": url,
-                "headers": headers,
-                "json": processed_body,
-                "timeout": timeout,
-                "follow_redirects": follow_redirects,
-            }
+            request_params = {
+                "method": method,
+                "url": url,
+                "headers": headers,
+                "timeout": timeout,
+                "follow_redirects": follow_redirects,
+            }
+            if method in {"POST", "PUT", "PATCH"} and processed_body:
+                request_params["json"] = processed_body

914-930: Safer default for redirects.

Template sets Follow Redirects to true by default; SSRF bypass risk is called out. Default this to false and let users opt-in.

-                "value": true
+                "value": false
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)

1429-1456: Use text input for URLs instead of MessageInput.

ollama_base_url shouldn’t be a MessageInput (handle-capable). Prefer StrInput or MessageTextInput to avoid type/handle confusion.

-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (3)

964-965: Reset required flags when switching away from WatsonX.

Mirror the fix suggested in Blog Writer to avoid hidden required fields blocking saves.

-            if is_watsonx:
-                build_config["base_url_ibm_watsonx"]["required"] = True
-                build_config["project_id"]["required"] = True
+            build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+            build_config["project_id"]["required"] = is_watsonx

929-963: Use text input for ollama_base_url.

Prefer StrInput/MessageTextInput over MessageInput for a plain URL value.

-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),

964-965: Guard against unknown providers.

Add an else branch to hide both WatsonX/Ollama fields for unrecognized providers.

             is_ollama = provider == "Ollama"
             build_config["ollama_base_url"]["show"] = is_ollama
+            if not (is_watsonx or is_ollama):
+                build_config["base_url_ibm_watsonx"]["show"] = False
+                build_config["project_id"]["show"] = False
+                build_config["ollama_base_url"]["show"] = False
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (3)

1407-1408: Reset required flags when provider changes (prevent hidden-required blocking).

You set required=True for WatsonX fields but never reset to False when switching away. Hidden-but-required can block saves/validation.

Apply within update_build_config:

-            build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
-            build_config["project_id"]["show"] = is_watsonx
-            if is_watsonx:
-                build_config["base_url_ibm_watsonx"]["required"] = True
-                build_config["project_id"]["required"] = True
+            build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
+            build_config["project_id"]["show"] = is_watsonx
+            # keep required in sync with show
+            build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+            build_config["project_id"]["required"] = is_watsonx

1407-1408: Type hint mismatch for field_value.

Signature says str but branch expects list[dict]. Adjust to avoid confusion and future typing issues.

-from lfx.field_typing import LanguageModel
+from lfx.field_typing import LanguageModel
+from typing import Any
@@
-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | Any, field_name: str | None = None):

1407-1408: Prefer StrInput for ollama_base_url.

Ollama URL is plain text. MessageInput implies message payloads. Use StrInput for correctness and simpler serialization.

-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
             load_from_db=True,
         ),
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

2305-2327: Field visibility logic is clear, but required state should be reset on provider switch.

The dynamic show/hide logic correctly displays and hides provider-specific fields based on the selected provider. However, the required flag is only set to True for WatsonX fields but never reset to False when switching to a different provider. While this doesn't prevent form submission (hidden required fields are typically not validated), it's semantically inconsistent. Consider explicitly resetting required=False for watsonx fields when the provider is not WatsonX.

# Suggested refinement
is_watsonx = provider == "IBM WatsonX"
build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
build_config["base_url_ibm_watsonx"]["required"] = is_watsonx  # Set to False when not watsonx
build_config["project_id"]["show"] = is_watsonx
build_config["project_id"]["required"] = is_watsonx  # Set to False when not watsonx

is_ollama = provider == "Ollama"
build_config["ollama_base_url"]["show"] = is_ollama
src/lfx/src/lfx/components/models_and_agents/embedding_model.py (1)

32-59: Align WatsonX detection with _build_kwargs and harden field_value handling

The new provider-aware toggling is on the right track, but a couple of small tweaks would make it more robust and consistent:

  • _build_kwargs treats both "IBM WatsonX" and "IBM watsonx.ai" as WatsonX providers, while update_build_config only checks for "IBM WatsonX". This can lead to WatsonX-specific fields not being shown/required when the provider label is "IBM watsonx.ai".
  • field_value[0] is assumed to be a dict; a defensive check avoids surprises if the UI ever sends a different shape.
  • Resetting required when is_watsonx becomes False keeps the build config state clean when switching providers.

You could address all three with something like:

-        # Show/hide provider-specific fields based on selected model
-        if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0:
-            selected_model = field_value[0]
-            provider = selected_model.get("provider", "")
-
-            # Show/hide watsonx fields
-            is_watsonx = provider == "IBM WatsonX"
+        # Show/hide provider-specific fields based on selected model
+        if field_name == "model" and isinstance(field_value, list) and field_value:
+            selected_model = field_value[0] if isinstance(field_value[0], dict) else {}
+            provider = selected_model.get("provider", "")
+
+            # Show/hide watsonx fields (support both provider labels used in _build_kwargs)
+            is_watsonx = provider in {"IBM WatsonX", "IBM watsonx.ai"}
             build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
             build_config["project_id"]["show"] = is_watsonx
             build_config["truncate_input_tokens"]["show"] = is_watsonx
             build_config["input_text"]["show"] = is_watsonx
-            if is_watsonx:
-                build_config["base_url_ibm_watsonx"]["required"] = True
-                build_config["project_id"]["required"] = True
+            build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+            build_config["project_id"]["required"] = is_watsonx

This keeps the UI logic resilient to provider-label variations and avoids stale required flags when switching away from WatsonX.

src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (3)

1742-1743: Type hint mismatch for field_value.

Signature uses str but code expects list[dict] for model. Widen the type to avoid confusion and tooling errors.

-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | str | None, field_name: str | None = None):

1742-1743: Use imported constant for WatsonX default URL instead of hardcoding.

Keeps defaults in one place.

-                url_value = (
-                    self.base_url_ibm_watsonx
-                    if hasattr(self, "base_url_ibm_watsonx") and self.base_url_ibm_watsonx
-                    else "https://us-south.ml.cloud.ibm.com"
-                )
+                url_value = (
+                    self.base_url_ibm_watsonx
+                    if hasattr(self, "base_url_ibm_watsonx") and self.base_url_ibm_watsonx
+                    else IBM_WATSONX_URLS[0]
+                )

1742-1743: Optional: clarify Ollama UX by hiding API key and surfacing base URL.

Ollama doesn’t need an API key; hiding it reduces confusion and ensures api_base is visible for local hosts.

-            # Show/hide watsonx fields
+            # Show/hide watsonx fields
             is_watsonx = "watsonx" in prov
             ...
+            # Ollama-specific UI
+            is_ollama = prov == "ollama"
+            if "api_key" in build_config:
+                build_config["api_key"]["show"] = not is_ollama
+                build_config["api_key"]["required"] = False
+            if "api_base" in build_config:
+                # Ensure base URL is visible so users can point to non-default Ollama hosts
+                build_config["api_base"]["show"] = True
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (3)

1313-1313: Reset required flags when switching away from WatsonX

Hidden-but-required fields can fail validation. Ensure required=False when not selected.

Apply within the same block:

-            build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
-            build_config["project_id"]["show"] = is_watsonx
-            build_config["base_url_ibm_watsonx"]["required"] = bool(is_watsonx)
-            build_config["project_id"]["required"] = bool(is_watsonx)
+            for k in ("base_url_ibm_watsonx", "project_id"):
+                build_config[k]["show"] = is_watsonx
+                build_config[k]["required"] = is_watsonx

1313-1313: Use StrInput for URLs instead of MessageInput

ollama_base_url is a URL/string, not a message. Switch to StrInput to avoid message-type semantics and keep validation simple.

Apply this input change:

-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),

1313-1313: Fix type hint for field_value

update_build_config treats field_value as list[dict] when field_name=="model" but the signature says str.

Use a union hint:

-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: list[dict] | str, field_name: str | None = None):
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (3)

988-988: Reset required flags when WatsonX is not selected

Prevent hidden required-field validation by synchronously clearing required when not WatsonX.

Adopt the for-loop pattern to set show/required together based on is_watsonx.


988-988: Prefer StrInput for ollama_base_url

Change MessageInput → StrInput for URL configuring Ollama.

Update the input declaration accordingly, mirroring the prior diff.


988-988: Correct update_build_config type hint

Allow list[dict] | str for field_value to match actual usage.

Adjust the function signature as previously suggested.

src/lfx/src/lfx/components/models_and_agents/language_model.py (2)

106-106: Consider updating type annotation for field_value.

The parameter is typed as str but the code handles it as a list when field_name == "model". Consider using a union type for clarity.

-    def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):
+    def update_build_config(self, build_config: dict, field_value: str | list, field_name: str | None = None):

119-121: Consider adding defensive check for model structure.

If field_value[0] is not a dict, calling .get() will raise an AttributeError. While the UI should always provide the expected structure, a defensive check improves robustness.

         # Show/hide provider-specific fields based on selected model
-        if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0:
+        if field_name == "model" and isinstance(field_value, list) and len(field_value) > 0 and isinstance(field_value[0], dict):
             selected_model = field_value[0]
             provider = selected_model.get("provider", "")
src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2)

1235-1236: Normalize provider detection to avoid brittle string equality.

Exact match on "IBM WatsonX" may miss variants (e.g., “IBM watsonx”, “watsonx.ai”). Recommend case-insensitive contains for robustness.

-            provider = selected_model.get("provider", "")
-            # Show/hide watsonx fields
-            is_watsonx = provider == "IBM WatsonX"
+            provider = (selected_model.get("provider", "") or "").strip()
+            provider_lc = provider.lower()
+            # Show/hide watsonx fields
+            is_watsonx = "watsonx" in provider_lc
             ...
-            # Show/hide Ollama fields
-            is_ollama = provider == "Ollama"
+            # Show/hide Ollama fields
+            is_ollama = provider_lc == "ollama"

1235-1236: Use a text input for ollama_base_url (not a Handle/Message input).

MessageInput is a handle input for graph data; URLs should be plain text. Use StrInput (or MessageTextInput) to avoid type/UX confusion and ensure persistence is a simple string.

-        MessageInput(
+        StrInput(
             name="ollama_base_url",
             display_name="Ollama API URL",
             info=f"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}",
             value=DEFAULT_OLLAMA_URL,
             show=False,
             real_time_refresh=True,
-            load_from_db=True,
         ),
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

3076-3077: Optional: Align “API Key” labeling for multi-provider context.

The embedded template fields still label the key as “OpenAI API Key”. Consider standardizing to “API Key” for provider-agnostic UX (applies to these components’ template blocks). Low priority.

Also applies to: 3403-3404

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

1988-2517: LanguageModelComponent WatsonX/Ollama wiring looks good; consider symmetric required handling

The new inputs and build_model call correctly pass base_url_ibm_watsonx / project_id / ollama_base_url into get_llm, which matches the new provider‑specific logic in unified_models.get_llm.

In update_build_config, you only ever set the WatsonX fields to required when WatsonX is selected and never clear that flag when switching to another provider. It’s a small UX/validation risk if hidden-but-required fields are still honored by the frontend.

You can make this symmetric and self‑correcting with:

        # Show/hide watsonx fields
        is_watsonx = provider == "IBM WatsonX"
        build_config["base_url_ibm_watsonx"]["show"] = is_watsonx
        build_config["project_id"]["show"] = is_watsonx
-        if is_watsonx:
-            build_config["base_url_ibm_watsonx"]["required"] = True
-            build_config["project_id"]["required"] = True
+        # Only require these fields when WatsonX is actually selected
+        build_config["base_url_ibm_watsonx"]["required"] = is_watsonx
+        build_config["project_id"]["required"] = is_watsonx

Same comment applies to the second LanguageModelComponent definition in this file.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9463c4a and 3f73c4f.

📒 Files selected for processing (26)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
  • src/lfx/src/lfx/base/models/unified_models.py (2 hunks)
  • src/lfx/src/lfx/base/models/watsonx_constants.py (1 hunks)
  • src/lfx/src/lfx/components/models_and_agents/embedding_model.py (2 hunks)
  • src/lfx/src/lfx/components/models_and_agents/language_model.py (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-11-24T19:47:28.997Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-11-24T19:47:28.997Z
Learning: Applies to src/backend/tests/**/*.py : Test component build config updates by calling `to_frontend_node()` to get the node template, then calling `update_build_config()` to apply configuration changes

Applied to files:

  • src/lfx/src/lfx/components/models_and_agents/embedding_model.py
📚 Learning: 2025-11-24T19:46:09.104Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-11-24T19:46:09.104Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new components to the appropriate subdirectory under `src/backend/base/langflow/components/` (agents/, data/, embeddings/, input_output/, models/, processing/, prompts/, tools/, or vectorstores/)

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json
📚 Learning: 2025-06-26T19:43:18.260Z
Learnt from: ogabrielluiz
Repo: langflow-ai/langflow PR: 0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
📚 Learning: 2025-08-11T16:52:26.755Z
Learnt from: edwinjosechittilappilly
Repo: langflow-ai/langflow PR: 9336
File: src/backend/base/langflow/base/models/openai_constants.py:29-33
Timestamp: 2025-08-11T16:52:26.755Z
Learning: The "gpt-5-chat-latest" model in the OpenAI models configuration does not support tool calling, so tool_calling should be set to False for this model in src/backend/base/langflow/base/models/openai_constants.py.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json
🧬 Code graph analysis (2)
src/lfx/src/lfx/components/models_and_agents/embedding_model.py (1)
src/lfx/src/lfx/base/models/unified_models.py (1)
  • update_model_options_in_build_config (932-1086)
src/lfx/src/lfx/base/models/watsonx_constants.py (1)
src/lfx/src/lfx/base/models/model_metadata.py (1)
  • create_model_metadata (20-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: Lint Backend / Run Mypy (3.12)
  • GitHub Check: Lint Backend / Run Mypy (3.13)
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Lint Backend / Run Mypy (3.10)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Run Backend Tests / LFX Tests - Python 3.10
  • GitHub Check: Test Docker Images / Test docker images
  • GitHub Check: Test Starter Templates
  • GitHub Check: Optimize new Python code in this PR
  • GitHub Check: test-starter-projects
  • GitHub Check: Update Component Index
🔇 Additional comments (28)
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2)

1463-1464: Build passes provider URLs correctly — LGTM.

Forwarding watsonx_url, watsonx_project_id, ollama_base_url into get_llm is correct.


1463-1464: Normalize provider name check.

String compare uses "IBM WatsonX". Ensure it matches the actual provider label returned by the options payload; otherwise the fields won't show.

#!/bin/bash
# Inspect provider field values in language model options payload
rg -nP '"provider"\s*:\s*".+?"' src -C1
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)

964-965: Build passes provider URLs correctly — LGTM.

Verified: The get_llm() function in unified_models.py correctly accepts and processes all provider-specific parameters (watsonx_url, watsonx_project_id, ollama_base_url). Parameter forwarding is implemented with proper metadata-driven mapping and validation.

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2)

1531-1531: Dynamic field visibility correctly separates model option updates from provider-specific UI logic.

The refactored update_build_config() cleanly separates concerns: first updating model options via update_model_options_in_build_config(), then handling provider-specific field visibility in a separate conditional block. This improves readability and maintainability.

Also applies to: 1858-1858


1531-1531: Build model correctly passes provider-specific parameters to get_llm.

The build_model() method properly extracts and passes watsonx_url, watsonx_project_id, and ollama_base_url using safe getattr() calls with None defaults. The get_llm() function signature in unified_models.py accepts all three parameters as keyword-only arguments, each with None as the default value, confirming this defensive approach is appropriate and prevents AttributeError if components are instantiated without setting these fields.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (2)

1367-1367: WatsonX-specific fields correctly marked as required when provider is selected.

The logic in update_build_config() marks base_url_ibm_watsonx and project_id as required when the IBM WatsonX provider is active (lines checking if is_watsonx: build_config[...]["required"] = True), ensuring users cannot accidentally omit required configuration for that provider.


1367-1367: DEFAULT_OLLAMA_URL is properly defined and accessible.

The constant is correctly defined at the module level in src/lfx/src/lfx/components/models_and_agents/language_model.py line 13 as DEFAULT_OLLAMA_URL = "http://localhost:11434" and is properly used within the same module. The Memory Chatbot.json file appears to be a starter project template containing serialized component code, so there is no accessibility issue.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)

2241-2318: Well-structured provider-specific field definitions.

The new input fields (base_url_ibm_watsonx, project_id, ollama_base_url) are correctly defined with appropriate input types, defaults, and visibility controls. Field names are consistent across the inputs list and template configuration.


2275-2304: Correct provider parameter mapping in build_model().

The method safely passes provider-specific parameters (watsonx_url, watsonx_project_id, ollama_base_url) to get_llm() using getattr() with safe None defaults. Field name mappings are consistent and correctly align the UI input names with the get_llm() parameter names.


2275-2330: Good defensive programming and consistent implementation details.

The code correctly imports IBM_WATSONX_URLS, defines DEFAULT_OLLAMA_URL with a sensible fallback, and uses getattr() with safe defaults throughout. The implementation is consistent across the component and aligns with the broader provider-aware configuration pattern described in the PR.


2305-2327: Verify provider name standardization and matching logic in update_build_config().

The review comment references case-sensitive string matching for provider names ("IBM WatsonX", "Ollama"), but the implementation details and upstream provider name standardization could not be verified. Ensure that provider names are standardized consistently throughout the system and that provider name matching uses case-insensitive comparison or a safe lookup strategy if provider names can vary in format.
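One possible normalization strategy, sketched under the assumption that the only labels to distinguish are WatsonX variants and Ollama:

```python
# Hedged sketch of a tolerant matcher: normalize the label once, then
# use a case-insensitive containment check for WatsonX variants and an
# exact (lowercased) match for Ollama. The label set is an assumption.
def match_provider(provider: str) -> str:
    normalized = (provider or "").strip().lower()
    if "watsonx" in normalized:
        return "watsonx"
    if normalized == "ollama":
        return "ollama"
    return "other"
```

This tolerates "IBM WatsonX", "IBM Watsonx", and "watsonx.ai" without requiring the labels to be standardized first.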

src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (4)

1323-1330: Good defensive approach in build_model.

The method correctly uses getattr with None defaults to safely pass provider-specific parameters to get_llm. This pattern handles missing attributes gracefully and avoids attribute errors.


1332-1350: Excellent separation of concerns in update_build_config.

The method cleanly separates model option updates from provider-specific field logic. The provider visibility logic is well-structured, with safe defaults (fields hidden if provider is unrecognized) and proper required-field toggles. The symmetrical handling of WatsonX and Ollama providers improves readability.


1310-1328: Well-designed input configuration for provider-specific fields.

The provider-specific fields are properly hidden by default, have sensible defaults/options (IBM URLs populated, Ollama localhost default), and use appropriate input types. The project_id is initially optional but will be marked required when needed via update_build_config—this is a clean pattern for conditional requirements.


1299-1299: Verify provider string consistency and field_value structure assumptions in update_build_config.

The code in the review assumes field_value is a list with at least one dictionary element when field_name == "model", and provider string matching uses "IBM WatsonX" and "Ollama". However, watsonx_constants.py defines provider metadata as "IBM Watsonx" (without the capital X), creating a potential mismatch. Additionally, the code assumes field_value[0] is a dict exposing a "provider" key; if the structure differs, the check either falls through silently or raises AttributeError.
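A defensive extraction helper could guard all three shape assumptions at once (the helper name is hypothetical, not part of the PR):

```python
# Hedged sketch: return the provider name only when field_value has the
# expected list-of-dicts shape; any other shape yields "" instead of
# raising AttributeError.
def extract_provider(field_value) -> str:
    if (
        isinstance(field_value, list)
        and field_value
        and isinstance(field_value[0], dict)
    ):
        return field_value[0].get("provider", "")
    return ""
```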

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)

2628-2629: IBM_WATSONX_URLS import is available and correctly configured.

The IBM_WATSONX_URLS constant is properly defined in lfx.base.models.watsonx_constants (lines 65-72) as a list of six valid API endpoint URLs. The import statement is correct and functional.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)

1450-1467: ⚠️ Missing field definitions for new WatsonX and Ollama input fields in component template.

The Python code in the code field declares three new inputs (base_url_ibm_watsonx, project_id, and ollama_base_url), but the corresponding field definitions appear to be missing from the template section. This causes:

  1. UI will not display these fields – Users won't see WatsonX/Ollama configuration options because they're not defined in the template.
  2. Runtime KeyError risk – The update_build_config method accesses build_config["base_url_ibm_watsonx"], build_config["project_id"], and build_config["ollama_base_url"], which will raise KeyError if not defined in the template.

Add field definitions to the template section of each LanguageModelComponent instance for all three new fields, ensuring they match the input declarations in the Python code (DropdownInput for base_url_ibm_watsonx, StrInput for project_id, MessageInput for ollama_base_url).

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)

3258-3274: Provider-specific wiring for LanguageModelComponent looks correct

build_model now forwards watsonx_url, watsonx_project_id, and ollama_base_url into get_llm, and update_build_config cleanly composes update_model_options_in_build_config with simple provider-based visibility/required toggles for WatsonX and Ollama. This is consistent with the shared unified_models helpers and should give WatsonX/Ollama the parameters they need without impacting other providers.

src/lfx/src/lfx/base/models/watsonx_constants.py (2)

60-61: LGTM! Good consolidation of model metadata.

The combination of LLM and embedding models into WATSONX_MODELS_DETAILED is well-structured, and the comment clearly explains the purpose. This provides a convenient single constant for accessing all WatsonX models.


3-25: No changes needed. The pattern of marking multiple models with default=True is intentional and consistent across the entire codebase. The default flag indicates models are "default/recommended options" for user visibility, not selection logic. Multiple defaults per provider are supported by design—the selection mechanism prioritizes user preferences first, then falls back to the first available model. OpenAI and other providers follow the same pattern.

src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (1)

1742-1743: Provider labels and param mappings are correctly aligned.

Verification confirms get_embedding_model_options returns provider="IBM WatsonX" with metadata mapping url and project_id correctly in the param_mapping dictionary (unified_models.py:650-657). The UI toggle (embedding_model.py:50) correctly checks provider == "IBM WatsonX", and the kwargs building (embedding_model.py:246, 249) correctly accesses param_mapping["url"] and param_mapping["project_id"] for WatsonX embeddings. The defensive check accepting both "IBM WatsonX" and "IBM watsonx.ai" variants (line 238) provides extra robustness but is not needed given the unified provider label used throughout the model metadata.

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)

1313-1313: Validate IBM_WATSONX_URLS shape for DropdownInput

Ensure IBM_WATSONX_URLS is structured as a simple list of strings (not dicts with label/value pairs), since DropdownInput expects options to be a flat list of displayable strings. The current code accesses IBM_WATSONX_URLS[0] as the value, which assumes the constant contains indexable string elements.

src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (2)

988-988: No compatibility issues detected. IBM_WATSONX_URLS is a simple list of strings that correctly matches DropdownInput's expected options parameter type (list[str]). The template usage is valid.


988-988: Remove this review comment—Document Q&A.json uses a safe delegation pattern

The code in Document Q&A.json (line 988) delegates to update_model_options_in_build_config and does not contain hardcoded provider string comparisons like provider == "IBM WatsonX" or provider == "Ollama". The template has already been refactored to avoid the brittle equality checks. The concern about provider matching exists in the actual source components (e.g., src/lfx/src/lfx/components/models_and_agents/language_model.py), not in this starter template file.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)

1969-1969: LGTM - Starter project updated with provider-aware model configuration.

The embedded LanguageModelComponent code correctly includes the new WatsonX and Ollama provider support, aligning with the broader PR changes across starter projects.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (1)

1271-1271: LGTM - All three LanguageModelComponent nodes updated consistently.

The embedded code for all three language model instances in the prompt chain correctly includes the provider-specific configuration for WatsonX and Ollama.

Also applies to: 1593-1593, 1914-1914

src/lfx/src/lfx/components/models_and_agents/language_model.py (1)

94-104: LGTM - Safe attribute access for provider-specific parameters.

Using getattr() with None default handles cases where attributes may not be set, preventing AttributeError when non-WatsonX/Ollama providers are used.

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)


erichare and others added 2 commits December 11, 2025 11:03
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 11, 2025
@github-actions

github-actions bot commented Dec 11, 2025

Frontend Unit Test Coverage Report

Coverage Summary

| Lines | Statements | Branches | Functions |
| --- | --- | --- | --- |
| 17% | 16.68% (4707/28211) | 9.98% (2177/21801) | 10.96% (679/6192) |

Unit Test Results

| Tests | Skipped | Failures | Errors | Time |
| --- | --- | --- | --- | --- |
| 1830 | 0 💤 | 0 ❌ | 0 🔥 | 24.117s ⏱️ |

@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 23, 2025
@github-actions github-actions bot added the lgtm This PR has been approved by a maintainer label Dec 23, 2025
@erichare erichare added this pull request to the merge queue Dec 23, 2025
Merged via the queue into main with commit 9ce7d84 Dec 23, 2025
90 of 93 checks passed
@erichare erichare deleted the fix-model-providers-watsonx branch December 23, 2025 22:12