
feat: enable real-time refresh for Base URL input in ChatOllamaComponent#9346

Merged
ogabrielluiz merged 5 commits into langflow-ai:main from philnash:ollama-llm-component
Aug 26, 2025

Conversation


@philnash philnash commented Aug 11, 2025

Entering the Ollama URL did not trigger a refresh of the model list. Adding real_time_refresh=True to the Base URL MessageTextInput makes each update trigger update_build_config, which reloads the models.

Summary by CodeRabbit

  • New Features
    • Enabled real-time refresh for the Base URL input in the Chat Ollama configuration. As you type or edit the URL, changes are immediately reflected without needing to blur the field or refresh the page. This improves responsiveness and streamlines testing or switching endpoints, providing quicker feedback during setup. No other settings or behaviors are affected.
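The fix itself is a one-attribute change on the input definition. A minimal, self-contained sketch of the idea, using a hypothetical stand-in dataclass rather than Langflow's actual MessageTextInput class (field values are illustrative):

```python
from dataclasses import dataclass

# Stand-in for Langflow's MessageTextInput, for illustration only.
@dataclass
class TextInput:
    name: str
    display_name: str
    value: str = ""
    real_time_refresh: bool = False  # when True, edits trigger update_build_config

# Before this PR the Base URL input used the default (False), so typing a URL
# never refreshed the model list; the change flips it to True.
base_url_input = TextInput(
    name="base_url",
    display_name="Base URL",
    value="http://localhost:11434",
    real_time_refresh=True,
)
```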


coderabbitai bot commented Aug 11, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds real_time_refresh=True to the Base URL MessageTextInput in ChatOllamaComponent, enabling real-time UI refresh for that specific field. No other logic, inputs, or control flow changed.

Changes

Cohort / File(s): Ollama Chat component UI input refresh — src/backend/base/langflow/components/languagemodels/ollama.py
Summary: Set real_time_refresh=True on the Base URL MessageTextInput within ChatOllamaComponent; no other modifications.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes



@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Aug 11, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (4)
src/backend/base/langflow/components/languagemodels/ollama.py (4)

31-38: Enabling real-time refresh on Base URL risks aggressive network churn and UX regressions

Real-time updates on a free-text URL field will trigger update_build_config on each keystroke. With current logic, this can:

  • Fire multiple network validations per character.
  • Potentially overwrite the user’s in-progress input with a fallback URL from URL_LIST.

See proposed guards and ordering fixes below to prevent janky UX and reduce unnecessary requests.
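The per-keystroke churn described here is commonly tamed with a debounce: schedule the validation only after the user has paused typing, cancelling any pending run. This is not part of the PR or the review's proposed diffs; it is a generic asyncio sketch with invented names:

```python
import asyncio

class Debouncer:
    """Run a coroutine only after `delay` seconds of inactivity."""

    def __init__(self, delay: float):
        self.delay = delay
        self._task = None

    def call(self, coro_factory):
        # Cancel any pending invocation and schedule a fresh one.
        if self._task is not None:
            self._task.cancel()
        self._task = asyncio.ensure_future(self._run(coro_factory))

    async def _run(self, coro_factory):
        try:
            await asyncio.sleep(self.delay)
            await coro_factory()
        except asyncio.CancelledError:
            pass  # superseded by a newer keystroke
```

With a 50 ms delay, five rapid edits produce a single validation call instead of five.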


199-205: Short-circuit validation and add a small timeout to avoid excessive I/O while typing

Guard against obviously invalid/incomplete URLs and limit the validation request time. This prevents a network call on every keystroke until the URL looks plausibly valid.

 async def is_valid_ollama_url(self, url: str) -> bool:
     try:
-        async with httpx.AsyncClient() as client:
-            return (await client.get(urljoin(url, "api/tags"))).status_code == HTTP_STATUS_OK
+        # Fast-fail for incomplete/invalid inputs during typing
+        if not url or not url.startswith(("http://", "https://")):
+            return False
+        # Keep validation snappy to avoid piling requests
+        async with httpx.AsyncClient(timeout=2.0) as client:
+            resp = await client.get(urljoin(url, "api/tags"))
+            return resp.status_code == HTTP_STATUS_OK
     except httpx.RequestError:
         return False
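One subtlety worth noting about the urljoin(url, "api/tags") call in the snippet above: Python's urljoin resolves a relative reference against the last path segment, so a base URL with a path but no trailing slash silently drops that segment:

```python
from urllib.parse import urljoin

# A bare host resolves as expected.
assert urljoin("http://localhost:11434", "api/tags") == "http://localhost:11434/api/tags"

# A trailing slash keeps the existing path prefix...
assert urljoin("http://localhost:11434/v1/", "api/tags") == "http://localhost:11434/v1/api/tags"

# ...but without one, the last segment is replaced.
assert urljoin("http://localhost:11434/v1", "api/tags") == "http://localhost:11434/api/tags"
```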

225-246: Do not overwrite the user's Base URL while they are typing

With real_time_refresh, this block will replace the user’s incomplete input with a fallback URL from URL_LIST, causing the field to “fight” the user mid-typing. Only apply the fallback when not actively editing base_url.

-        if field_name in {"base_url", "model_name"}:
-            if build_config["base_url"].get("load_from_db", False):
-                base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
-            else:
-                base_url_value = build_config["base_url"].get("value", "")
+        if field_name in {"base_url", "model_name"}:
+            # Prefer the live value from the UI while editing base_url
+            if field_name == "base_url":
+                base_url_value = (field_value or "").strip()
+            elif build_config["base_url"].get("load_from_db", False):
+                base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
+            else:
+                base_url_value = build_config["base_url"].get("value", "")
 
             if not await self.is_valid_ollama_url(base_url_value):
-                # Check if any URL in the list is valid
-                valid_url = ""
-                check_urls = URL_LIST
-                if self.base_url:
-                    check_urls = [self.base_url, *URL_LIST]
-                for url in check_urls:
-                    if await self.is_valid_ollama_url(url):
-                        valid_url = url
-                        break
-                if valid_url != "":
-                    build_config["base_url"]["value"] = valid_url
-                else:
-                    msg = "No valid Ollama URL found."
-                    raise ValueError(msg)
+                if field_name == "base_url":
+                    # Don't clobber user input during typing; let model list handling below react accordingly.
+                    pass
+                else:
+                    # Fallback only when not actively editing base_url
+                    valid_url = ""
+                    check_urls = URL_LIST
+                    if self.base_url:
+                        check_urls = [self.base_url, *URL_LIST]
+                    for url in check_urls:
+                        if await self.is_valid_ollama_url(url):
+                            valid_url = url
+                            break
+                    if valid_url != "":
+                        build_config["base_url"]["value"] = valid_url
+                    else:
+                        msg = "No valid Ollama URL found."
+                        raise ValueError(msg)

246-257: Prefer the live Base URL value from build_config when fetching models

After the above change, use the UI’s current value first so the models refresh from what the user just typed. Fall back to self.base_url only if needed.

-        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
-            if await self.is_valid_ollama_url(self.base_url):
-                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
-                build_config["model_name"]["options"] = await self.get_models(self.base_url, tool_model_enabled)
-            elif await self.is_valid_ollama_url(build_config["base_url"].get("value", "")):
-                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
-                build_config["model_name"]["options"] = await self.get_models(
-                    build_config["base_url"].get("value", ""), tool_model_enabled
-                )
-            else:
-                build_config["model_name"]["options"] = []
+        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
+            tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
+            candidate_url = build_config["base_url"].get("value", "") or self.base_url
+            if await self.is_valid_ollama_url(candidate_url):
+                build_config["model_name"]["options"] = await self.get_models(candidate_url, tool_model_enabled)
+            elif await self.is_valid_ollama_url(self.base_url):
+                build_config["model_name"]["options"] = await self.get_models(self.base_url, tool_model_enabled)
+            else:
+                build_config["model_name"]["options"] = []
🧹 Nitpick comments (1)
src/backend/base/langflow/components/languagemodels/ollama.py (1)

295-323: Add a timeout to model discovery; consider lightweight caching to mitigate rapid refreshes

With real-time updates, model discovery may be invoked frequently. A small timeout helps keep the UI responsive; optional short-lived caching can reduce repeated calls.

-            async with httpx.AsyncClient() as client:
+            async with httpx.AsyncClient(timeout=(self.timeout or 5.0)) as client:
                 # Fetch available models
                 tags_response = await client.get(tags_url)

Optional follow-up (no code shown): cache results per base_url for a short TTL and reuse if a new update arrives before expiration.
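The suggested short-TTL cache could look roughly like this (a hypothetical sketch, not code from the review; the class name, TTL value, and sample model names are invented):

```python
import time

class TTLCache:
    """Cache values per key for a short time-to-live, in seconds."""

    def __init__(self, ttl: float = 10.0):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None  # missing or expired

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

# Usage sketch: reuse a model list fetched for the same base_url, so rapid
# refreshes within the TTL window skip the network round-trip.
cache = TTLCache(ttl=10.0)
cache.set("http://localhost:11434", ["llama3", "mistral"])
```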

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b093c1f and 167c314.

📒 Files selected for processing (1)
  • src/backend/base/langflow/components/languagemodels/ollama.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
src/backend/base/langflow/components/**/*.py

📄 CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

src/backend/base/langflow/components/**/*.py: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Implement async component methods using async def and await for asynchronous operations
Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately

Files:

  • src/backend/base/langflow/components/languagemodels/ollama.py
{src/backend/**/*.py,tests/**/*.py,Makefile}

📄 CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code

Files:

  • src/backend/base/langflow/components/languagemodels/ollama.py
src/backend/**/components/**/*.py

📄 CodeRabbit Inference Engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/base/langflow/components/languagemodels/ollama.py

@philnash philnash force-pushed the ollama-llm-component branch from 167c314 to 5dd0b55 Compare August 11, 2025 01:21

@Cristhianzl Cristhianzl left a comment


lgtm

@ogabrielluiz ogabrielluiz enabled auto-merge August 26, 2025 13:25
@ogabrielluiz ogabrielluiz added the lgtm This PR has been approved by a maintainer label Aug 26, 2025

@ogabrielluiz ogabrielluiz disabled auto-merge August 26, 2025 17:08
@ogabrielluiz ogabrielluiz merged commit 1eae241 into langflow-ai:main Aug 26, 2025
23 of 30 checks passed

Labels

  • enhancement: New feature or request
  • lgtm: This PR has been approved by a maintainer
