Describe the bug
Ollama model context sizes are not properly imported or reflected: a model's maximum context window is not shown correctly.
Where is it happening?
In the model details view (the max model tokens value shown in the UI).
To Reproduce
1. Import a 128K-context Ollama model (e.g. Yarn-Mistral-7B-128k).
2. Open the model details and check the max model tokens displayed in the UI.
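As additional context for reproducing the check outside the UI: Ollama's `/api/show` endpoint returns a `parameters` text block that may contain a `num_ctx` line when the Modelfile sets one. A minimal sketch of parsing that block follows; the parsing helper and the fallback default are illustrative assumptions, not the app's actual import code.

```python
# Hypothetical sketch: pull the context size (num_ctx) out of the
# `parameters` text that Ollama's /api/show endpoint returns.
# The endpoint and the num_ctx parameter are real Ollama features;
# parse_num_ctx and the sample text below are illustrative only.

def parse_num_ctx(parameters: str, default: int = 4096) -> int:
    """Return the num_ctx value from an Ollama `parameters` block, if present."""
    for line in parameters.splitlines():
        parts = line.split()
        # Each parameter line is "name value"; keep only a well-formed num_ctx.
        if len(parts) == 2 and parts[0] == "num_ctx":
            try:
                return int(parts[1])
            except ValueError:
                pass  # malformed value: fall through to the default
    return default

# Example: a 128K model whose Modelfile declares its context window.
sample = 'num_ctx 131072\nstop "</s>"'
print(parse_num_ctx(sample))  # 131072
```

If the importer ignores this field (or the model ships no `num_ctx`), the UI would fall back to a generic default, which would match the behavior described above.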
Expected behavior
The UI should report the model's actual context size (128K tokens for the example model above).
Screenshots / context
