🦙 Local Models
Local models allow you to run nearly any open source LLM locally, on your machine. Through our new integration with Ollama, you'll now have access to more than 100 AI models from various providers, ranging from small 135M to massive 671B parameter models.
We've also added experimental support for AI Extensions with Local Models. Since Ollama doesn't yet support tool choice or streaming for tool calls, AI Extensions can be a bit unreliable when used with Local Models. If you still want to try it out, you can enable it in AI Settings. Keep in mind it likely won't be as reliable as using non-local models.
✨ New
- Local Models: Get started by installing Ollama. Then download Ollama models directly from the Raycast Settings, in the Local Models section of the AI tab, by copying & pasting model names (see the sketch after this list). You can find the list of all available Ollama models here.
- Local Models: Support for Vision with local models that support it. You can find the list of supported models here.
- Local Models: Experimental support for AI Extensions with local models that support tool calls (see the tool-call sketch below). You can find the list of supported models here.
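
If you want to double-check which model names are actually available on your machine before pasting them into Raycast, you can ask Ollama directly. The snippet below is a minimal sketch against Ollama's local HTTP API; it assumes Ollama is installed and running on its default port (11434) and is not part of Raycast itself.

```typescript
// List the models currently pulled by a local Ollama install.
// Assumes Ollama is running at its default address (http://localhost:11434).

interface OllamaModel {
  name: string; // e.g. "llama3.2:latest"
  size: number; // size on disk, in bytes
  modified_at: string;
}

async function listLocalModels(): Promise<OllamaModel[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama is not reachable: ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { models: OllamaModel[] };
  return body.models;
}

listLocalModels()
  .then((models) => {
    for (const m of models) {
      console.log(`${m.name} (${(m.size / 1e9).toFixed(1)} GB)`);
    }
  })
  .catch((err) => console.error(err));
```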
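
The experimental AI Extensions support relies on models that can make tool calls. For context, here is a sketch of what a single, non-streaming tool call looks like against Ollama's chat API directly; the model name (`llama3.1`) and the `get_weather` tool are illustrative assumptions, and you'll need a pulled model that supports tools.

```typescript
// Minimal, non-streaming tool call against Ollama's chat API.
// Assumes Ollama is running locally and a tool-capable model is pulled.

const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1", // assumption: any pulled model that supports tool calls
    stream: false,     // streaming for tool calls isn't supported yet
    messages: [{ role: "user", content: "What's the weather in Berlin?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // hypothetical tool for illustration
          description: "Get the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  }),
});

const result = await response.json();
// If the model decided to call the tool, the call appears on the assistant message.
console.log(result.message?.tool_calls ?? result.message?.content);
```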
💎 Improvements
- MCP: Improved error reporting when stdio servers fail to run
- MCP: Improved compatibility with server JSON schemas
- MCP: Added a Copy to Clipboard action in Manage Servers
🐞 Fixes
- Onboarding: Fixed image assets not loading in the onboarding pages
- AI: Fixed issue where community AI Extensions were not disabled when AI was disabled
- AI: Fixed remote tool calls in AI Commands
- AI: Disable default tools in AI Commands
- AI: The AI Chat/Quick AI Web Search setting now works even if the Ask Web command is disabled
- MCP: Fix handling of quoted server arguments
- MCP: Server updates are no longer saved if the updated server fails to run
- Export: Prevent the export/import view from resizing the main window