feat: enable real-time refresh for Base URL input in ChatOllamaComponent #9346
**Walkthrough**

Adds `real_time_refresh=True` to the Base URL `MessageTextInput` in `ChatOllamaComponent`, enabling real-time UI refresh for that specific field. No other logic, inputs, or control flow changed.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
Actionable comments posted: 0
🔭 Outside diff range comments (4)
src/backend/base/langflow/components/languagemodels/ollama.py (4)
**31-38: Enabling real-time refresh on Base URL risks aggressive network churn and UX regressions**

Real-time updates on a free-text URL field will trigger `update_build_config` on each keystroke. With the current logic, this can:
- Fire multiple network validations per character.
- Potentially overwrite the user’s in-progress input with a fallback URL from URL_LIST.
See proposed guards and ordering fixes below to prevent janky UX and reduce unnecessary requests.
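Beyond the guards below, per-keystroke churn like this is commonly mitigated by debouncing: only run the expensive validation once input has been idle for a short interval. A minimal asyncio sketch (the `debounce` helper is hypothetical and not part of the Langflow codebase):

```python
import asyncio


def debounce(delay: float):
    """Decorator factory: the wrapped coroutine runs only after `delay`
    seconds pass with no newer call; superseded calls return None."""
    def wrapper(fn):
        pending = None

        async def debounced(*args, **kwargs):
            nonlocal pending
            # A newer call supersedes any still-waiting earlier one.
            if pending is not None and not pending.done():
                pending.cancel()

            async def delayed():
                await asyncio.sleep(delay)
                return await fn(*args, **kwargs)

            task = asyncio.create_task(delayed())
            pending = task
            try:
                return await task
            except asyncio.CancelledError:
                return None  # superseded by a newer keystroke
        return debounced
    return wrapper


calls = []


@debounce(0.05)
async def validate(url):
    # Stand-in for an expensive check such as is_valid_ollama_url().
    calls.append(url)
    return url


async def main():
    # Simulates three rapid keystrokes; only the last survives the delay.
    return await asyncio.gather(validate("h"), validate("ht"), validate("http://host"))


results = asyncio.run(main())
```

With this, only the final value reaches the expensive check; the two earlier calls resolve to `None` without doing any work.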
**199-205: Short-circuit validation and add a small timeout to avoid excessive I/O while typing**

Guard against obviously invalid/incomplete URLs and limit the validation request time. This prevents a network call on every keystroke until the URL looks plausibly valid.
```diff
     async def is_valid_ollama_url(self, url: str) -> bool:
         try:
-            async with httpx.AsyncClient() as client:
-                return (await client.get(urljoin(url, "api/tags"))).status_code == HTTP_STATUS_OK
+            # Fast-fail for incomplete/invalid inputs during typing
+            if not url or not url.startswith(("http://", "https://")):
+                return False
+            # Keep validation snappy to avoid piling requests
+            async with httpx.AsyncClient(timeout=2.0) as client:
+                resp = await client.get(urljoin(url, "api/tags"))
+                return resp.status_code == HTTP_STATUS_OK
         except httpx.RequestError:
             return False
```
**225-246: Do not overwrite the user's Base URL while they are typing**

With `real_time_refresh`, this block will replace the user's incomplete input with a fallback URL from `URL_LIST`, causing the field to "fight" the user mid-typing. Only apply the fallback when not actively editing `base_url`.
```diff
-        if field_name in {"base_url", "model_name"}:
-            if build_config["base_url"].get("load_from_db", False):
-                base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
-            else:
-                base_url_value = build_config["base_url"].get("value", "")
+        if field_name in {"base_url", "model_name"}:
+            # Prefer the live value from the UI while editing base_url
+            if field_name == "base_url":
+                base_url_value = (field_value or "").strip()
+            elif build_config["base_url"].get("load_from_db", False):
+                base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
+            else:
+                base_url_value = build_config["base_url"].get("value", "")

             if not await self.is_valid_ollama_url(base_url_value):
-                # Check if any URL in the list is valid
-                valid_url = ""
-                check_urls = URL_LIST
-                if self.base_url:
-                    check_urls = [self.base_url, *URL_LIST]
-                for url in check_urls:
-                    if await self.is_valid_ollama_url(url):
-                        valid_url = url
-                        break
-                if valid_url != "":
-                    build_config["base_url"]["value"] = valid_url
-                else:
-                    msg = "No valid Ollama URL found."
-                    raise ValueError(msg)
+                if field_name == "base_url":
+                    # Don't clobber user input during typing; let model list handling below react accordingly.
+                    pass
+                else:
+                    # Fallback only when not actively editing base_url
+                    valid_url = ""
+                    check_urls = URL_LIST
+                    if self.base_url:
+                        check_urls = [self.base_url, *URL_LIST]
+                    for url in check_urls:
+                        if await self.is_valid_ollama_url(url):
+                            valid_url = url
+                            break
+                    if valid_url != "":
+                        build_config["base_url"]["value"] = valid_url
+                    else:
+                        msg = "No valid Ollama URL found."
+                        raise ValueError(msg)
```
**246-257: Prefer the live Base URL value from build_config when fetching models**

After the above change, use the UI's current value first so the models refresh from what the user just typed. Fall back to `self.base_url` only if needed.
```diff
-        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
-            if await self.is_valid_ollama_url(self.base_url):
-                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
-                build_config["model_name"]["options"] = await self.get_models(self.base_url, tool_model_enabled)
-            elif await self.is_valid_ollama_url(build_config["base_url"].get("value", "")):
-                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
-                build_config["model_name"]["options"] = await self.get_models(
-                    build_config["base_url"].get("value", ""), tool_model_enabled
-                )
-            else:
-                build_config["model_name"]["options"] = []
+        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
+            tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
+            candidate_url = build_config["base_url"].get("value", "") or self.base_url
+            if await self.is_valid_ollama_url(candidate_url):
+                build_config["model_name"]["options"] = await self.get_models(candidate_url, tool_model_enabled)
+            elif await self.is_valid_ollama_url(self.base_url):
+                build_config["model_name"]["options"] = await self.get_models(self.base_url, tool_model_enabled)
+            else:
+                build_config["model_name"]["options"] = []
```
🧹 Nitpick comments (1)
src/backend/base/langflow/components/languagemodels/ollama.py (1)
**295-323: Add a timeout to model discovery; consider lightweight caching to mitigate rapid refreshes**

With real-time updates, model discovery may be invoked frequently. A small timeout helps keep the UI responsive; optional short-lived caching can reduce repeated calls.
```diff
-        async with httpx.AsyncClient() as client:
+        async with httpx.AsyncClient(timeout=(self.timeout or 5.0)) as client:
             # Fetch available models
             tags_response = await client.get(tags_url)
```

Optional follow-up (no code shown): cache results per `base_url` for a short TTL and reuse them if a new update arrives before expiration.
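The caching follow-up mentioned above could be sketched as a tiny per-URL TTL cache. This is illustrative only; `TTLCache` and the surrounding names are assumptions, not part of the component:

```python
import time


class TTLCache:
    """Tiny per-key cache: a stored value is reused only while it is
    younger than `ttl` seconds; after that it is dropped."""

    def __init__(self, ttl: float = 5.0):
        self.ttl = ttl
        self._store = {}  # key -> (monotonic timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if time.monotonic() - ts > self.ttl:
            del self._store[key]  # expired; force a fresh fetch
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic(), value)


# Hypothetical usage around model discovery: consult the cache per
# base_url before hitting the Ollama tags endpoint again.
models_cache = TTLCache(ttl=5.0)
models_cache.put("http://localhost:11434", ["llama3", "mistral"])
hit = models_cache.get("http://localhost:11434")
```

In `get_models`, a cache hit would skip the HTTP round trip entirely; a miss falls through to the existing fetch and then calls `put` with the fresh list.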
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/backend/base/langflow/components/languagemodels/ollama.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
src/backend/base/langflow/components/**/*.py
📄 CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
src/backend/base/langflow/components/**/*.py: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Implement async component methods using async def and await for asynchronous operations
Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
Files:
src/backend/base/langflow/components/languagemodels/ollama.py
{src/backend/**/*.py,tests/**/*.py,Makefile}
📄 CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code
Files:
src/backend/base/langflow/components/languagemodels/ollama.py
src/backend/**/components/**/*.py
📄 CodeRabbit Inference Engine (.cursor/rules/icons.mdc)
In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Files:
src/backend/base/langflow/components/languagemodels/ollama.py
167c314 to 5dd0b55
Entering the Ollama URL was not triggering the refresh of the model list. Adding `real_time_refresh=True` to the settings of the `MessageTextInput` causes an update to trigger `update_build_config` and reload the models.
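Concretely, the change described above presumably reduces to a single flag on the input declaration. A sketch of what that might look like (a fragment, not runnable on its own; the `name`, `display_name`, and `info` arguments here are illustrative assumptions, not the component's actual values):

```python
# Inside ChatOllamaComponent's inputs list (hypothetical surrounding arguments)
MessageTextInput(
    name="base_url",
    display_name="Base URL",
    info="Endpoint of the Ollama API.",  # assumed wording
    real_time_refresh=True,  # the flag this PR adds: edits trigger update_build_config
),
```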