# Connecting a Model
NexusCore is model-agnostic. Connect any LLM provider — local or cloud — and start coding with AI.
## Supported Providers
| Provider | Example Models | Connection |
|---|---|---|
| Ollama | Llama 3, CodeLlama, Mistral | Local endpoint (localhost:11434) |
| LM Studio | Any GGUF model | Local endpoint (localhost:1234) |
| llama.cpp | Any GGUF model | Local endpoint (configurable port) |
| vLLM | Any HuggingFace model | Local endpoint (configurable port) |
| OpenAI | GPT-4o, GPT-4, o1 | API key |
| Anthropic | Claude 4, Claude 3.5 Sonnet | API key |
| Google | Gemini Pro, Gemini Ultra | API key |
| Any OpenAI-compatible | Various | API key + base URL |
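For an OpenAI-compatible provider, you supply both an API key and a base URL. A minimal sketch using the `nexus-cli models add` command covered below; the provider name, URL, and environment variable here are placeholders, and the exact flags for custom providers may differ:

```bash
# Hypothetical OpenAI-compatible gateway; substitute your own name and base URL
nexus-cli models add my-gateway \
  --endpoint https://llm.example.com/v1 \
  --api-key $MY_GATEWAY_API_KEY
```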
## NexusIDE Setup Wizard
When NexusIDE launches for the first time (or when no model is configured), a setup wizard guides you through configuration:
- "I have a local model server" — Enter the endpoint URL for Ollama, LM Studio, or another local server
- "I have a cloud API key" — Enter an API key for OpenAI, Anthropic, Google, or another provider
- "I need to set up a local model" — Get one-click install links for LM Studio and Ollama
The wizard auto-detects running servers on common ports and pre-fills the endpoint URL when possible.
### Auto-Detection
NexusIDE probes common local endpoints on startup:
| Endpoint | Server |
|---|---|
| localhost:1234 | LM Studio |
| localhost:11434 | Ollama |
| localhost:8080 | llama.cpp |
| localhost:5000 | vLLM / custom |
When a server is detected, you'll see a notification: "Detected Ollama running on port 11434. Add as a provider?"
You can also trigger detection manually from the command palette: `NexusCore: Detect Local Models`
INFO
Auto-detection only probes localhost and 127.0.0.1 — it never makes external network requests.
## NexusCore CLI Configuration

### Add a Provider
```bash
# Local model server
nexus-cli models add ollama --endpoint http://localhost:11434

# Cloud provider with API key
nexus-cli models add openai --endpoint https://api.openai.com/v1 --api-key sk-...
```

### Auto-Detect Local Servers
```bash
nexus-cli models detect
```

Example output:

```
Scanning local endpoints...
✓ Ollama detected on localhost:11434 (3 models available)
✓ LM Studio detected on localhost:1234 (1 model available)

Add these providers? [Y/n]
```

### List Providers and Models
```bash
nexus-cli models list
```

```
Providers:

ollama (localhost:11434) — connected
  • llama3.1:8b
  • codellama:13b
  • mistral:7b

openai (api.openai.com) — connected
  • gpt-4o
  • gpt-4o-mini
```

### Set Default Model
```bash
nexus-cli models default llama3.1:8b
```

### Remove a Provider
```bash
nexus-cli models remove ollama
```

### Config File
Model providers are stored in `.nexuscore/config.yaml` (shared between the CLI and the IDE):

```yaml
models:
  providers:
    - name: ollama
      endpoint: http://localhost:11434
      type: ollama
    - name: openai
      endpoint: https://api.openai.com/v1
      type: openai
      api_key_env: OPENAI_API_KEY
  default_model: llama3.1:8b
  routing:
    fast: llama3.1:8b
    code: codellama:13b
    reasoning: gpt-4o
```

### Environment Variables
You can also configure providers via environment variables:
| Variable | Description |
|---|---|
| NEXUS_MODEL_PROVIDER | Default provider name |
| NEXUS_API_KEY | API key for the default provider |
| NEXUS_MODEL | Default model name |
| NEXUS_BASE_URL | Custom API base URL |
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GOOGLE_API_KEY | Google AI API key |
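For example, to select OpenAI as the default provider for a single shell session (the key value is a placeholder):

```bash
# Applies only to the current shell session
export NEXUS_MODEL_PROVIDER=openai
export NEXUS_MODEL=gpt-4o
export OPENAI_API_KEY=sk-...
```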
## Model Routing
NexusCore supports assigning different models to different task types:
| Mode | Purpose | Example |
|---|---|---|
| fast | Quick edits, completions | llama3.1:8b |
| code | Code generation, refactoring | codellama:13b |
| reasoning | Complex tasks, architecture | gpt-4o |
Configure routing in the NexusIDE Models panel or in the config file under models.routing.
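As a sketch, a routing block like the one in the config file above, with each mode's purpose noted:

```yaml
models:
  routing:
    fast: llama3.1:8b      # quick edits, completions
    code: codellama:13b    # code generation, refactoring
    reasoning: gpt-4o      # complex tasks, architecture
```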
When a model fails during a request, NexusCore automatically falls back to the next provider in the chain and notifies you in the chat panel.
## Cloud Model Brokering (Pro)
Pro and Studio tier users can store API keys securely on the Nexus Suite server and access cloud models through the brokering service:
- Sign in: `nexus-cli auth login`
- Store your API key: `nexus-cli models set-key openai`
- Cloud models appear alongside local models in the model selector
The brokering service uses the OpenAI-compatible chat completions format, so it works as a standard provider. Your request and response content is never stored or logged — it's pass-through only.
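Since the broker speaks the OpenAI-compatible chat completions format, any OpenAI-style client can call it directly. A rough sketch; the base URL below is an assumption, not a documented endpoint, so check the Portal for the real address:

```bash
# Hypothetical broker base URL; the real endpoint may differ
curl https://api.nexus-suite.dev/v1/chat/completions \
  -H "Authorization: Bearer $NEXUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```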
Studio tier users get priority routing with lower latency and higher rate limits.
TIP
Manage your cloud API keys on the Portal at nexus-suite.dev/account/keys.
## Switching Models

### From NexusIDE
- Click the model name in the status bar to open a quick-pick dropdown
- Or use the model selector in the chat panel input area
- Models are grouped by provider with badges showing the assigned mode (fast/code/reasoning)
### From NexusCore CLI
```bash
# Switch default model
nexus-cli models default gpt-4o

# Use a specific model for one session
nexus-cli chat --model codellama:13b
```

## Troubleshooting
"No model providers configured"
Run nexus-cli models detect to scan for local servers, or add one manually with nexus-cli models add.
### Connection failed

- Verify the model server is running (a quick manual check is sketched below)
- Check the endpoint URL and port
- For cloud providers, verify your API key is valid
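One quick way to confirm a local server is answering is to hit its own model-listing endpoint; these paths belong to the servers themselves, not to NexusCore:

```bash
# Ollama: returns the installed models if the server is up
curl http://localhost:11434/api/tags

# LM Studio (OpenAI-compatible): lists the loaded models
curl http://localhost:1234/v1/models
```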
### Model not found

Run `nexus-cli models list` to see the models available on each provider. The model name must match exactly.