## Recommended Chat models
| Model role | Best open models | Best closed models | Notes |
| --- | --- | --- | --- |
| Chat, Edit | | | Closed and open models have pretty similar performance |
### Best overall experience

For the best overall Chat experience, you will want to use a 400B+ parameter model or one of the frontier models.

#### Claude Opus 4.1 and Claude Sonnet 4 from Anthropic

Our current top recommendations are Claude Opus 4.1 and Claude Sonnet 4 from Anthropic. View the Claude Opus 4.1 model block or Claude Sonnet 4 model block on the hub.
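If you prefer to configure the model directly rather than adding the hub block, a minimal `config.yaml` entry might look like the sketch below. The exact `model` identifier is an assumption based on Anthropic's published model names; check their current model list.

```yaml
# Sketch of a config.yaml entry for Claude Sonnet 4 (model ID is an assumption)
models:
  - name: Claude Sonnet 4
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiKey: <YOUR_ANTHROPIC_API_KEY>
    roles:
      - chat
```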
#### Gemma from Google DeepMind

If you prefer to use an open-weight model, then the Gemma family of models from Google DeepMind is a good choice. You will need to decide whether to use it through a SaaS model provider (e.g. Together) or to self-host it (e.g. with Ollama). Add the Ollama Gemma 3 27B block from the hub.
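For the self-hosted route, a `config.yaml` entry along these lines should work, assuming you have pulled the model in Ollama (the `gemma3:27b` tag matches Ollama's model library):

```yaml
# Sketch of a config.yaml entry for Gemma 3 27B served locally by Ollama
models:
  - name: Gemma 3 27B
    provider: ollama
    model: gemma3:27b
    roles:
      - chat
```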
#### GPT-4o from OpenAI

If you prefer to use a model from OpenAI, then we recommend GPT-4o. Add the OpenAI GPT-4o block from the hub.
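As a YAML alternative to the hub block, an entry like the following sketch should be sufficient:

```yaml
# Sketch of a config.yaml entry for GPT-4o via the OpenAI API
models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o
    apiKey: <YOUR_OPENAI_API_KEY>
    roles:
      - chat
```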
#### Grok-2 from xAI

If you prefer to use a model from xAI, then we recommend Grok-2. Add the xAI Grok-2 block from the hub.
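In YAML form, an entry along these lines should work; the `xai` provider name and `grok-2` model identifier are assumptions, so verify them against the current provider reference:

```yaml
# Sketch of a config.yaml entry for Grok-2 (provider and model ID are assumptions)
models:
  - name: Grok-2
    provider: xai
    model: grok-2
    apiKey: <YOUR_XAI_API_KEY>
    roles:
      - chat
```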
#### Gemini 2.0 Flash from Google

If you prefer to use a model from Google, then we recommend Gemini 2.0 Flash. Add the Gemini 2.0 Flash block from the hub.
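The equivalent `config.yaml` sketch, assuming the `gemini` provider and Google's published `gemini-2.0-flash` model name:

```yaml
# Sketch of a config.yaml entry for Gemini 2.0 Flash
models:
  - name: Gemini 2.0 Flash
    provider: gemini
    model: gemini-2.0-flash
    apiKey: <YOUR_GEMINI_API_KEY>
    roles:
      - chat
```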
### Local, offline experience

For the best local, offline Chat experience, you will want to use a model that is large but fast enough on your machine.

#### Llama 3.1 8B

If your local machine can run an 8B parameter model, then we recommend running Llama 3.1 8B on your machine (e.g. using Ollama or LM Studio). Add the Ollama Llama 3.1 8B block from the hub.
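If you would rather write the entry by hand, a sketch for the Ollama route looks like this (the `llama3.1:8b` tag matches Ollama's model library; run `ollama pull llama3.1:8b` first):

```yaml
# Sketch of a config.yaml entry for Llama 3.1 8B served locally by Ollama
models:
  - name: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    roles:
      - chat
```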
#### DeepSeek Coder 2 16B

If your local machine can run a 16B parameter model, then we recommend running DeepSeek Coder 2 16B (e.g. using Ollama or LM Studio).
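For the Ollama route, a `config.yaml` sketch might look like the following; the `deepseek-coder-v2:16b` tag is the name this model carries in Ollama's model library:

```yaml
# Sketch of a config.yaml entry for DeepSeek Coder 2 16B served locally by Ollama
models:
  - name: DeepSeek Coder 2 16B
    provider: ollama
    model: deepseek-coder-v2:16b
    roles:
      - chat
```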