
# Connecting a Model

NexusCore is model-agnostic. Connect any LLM provider — local or cloud — and start coding with AI.

## Supported Providers

| Provider | Example Models | Connection |
| --- | --- | --- |
| Ollama | Llama 3, CodeLlama, Mistral | Local endpoint (`localhost:11434`) |
| LM Studio | Any GGUF model | Local endpoint (`localhost:1234`) |
| llama.cpp | Any GGUF model | Local endpoint (configurable port) |
| vLLM | Any Hugging Face model | Local endpoint (configurable port) |
| OpenAI | GPT-4o, GPT-4, o1 | API key |
| Anthropic | Claude 4, Claude 3.5 Sonnet | API key |
| Google | Gemini Pro, Gemini Ultra | API key |
| Any OpenAI-compatible | Various | API key + base URL |

## NexusIDE Setup Wizard

When NexusIDE launches for the first time (or when no model is configured), a setup wizard guides you through configuration:

1. **"I have a local model server"** — enter the endpoint URL for Ollama, LM Studio, or another local server
2. **"I have a cloud API key"** — enter an API key for OpenAI, Anthropic, Google, or another provider
3. **"I need to set up a local model"** — get one-click install links for LM Studio and Ollama

The wizard auto-detects running servers on common ports and pre-fills the endpoint URL when possible.
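
If you take the local-server route, make sure the server is actually serving before the wizard probes for it. With Ollama, for example (these are standard Ollama commands, nothing NexusCore-specific):

```bash
# Start the Ollama server (listens on localhost:11434 by default)
ollama serve

# In another shell: pull a model so the wizard has something to find
ollama pull llama3
```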

## Auto-Detection

NexusIDE probes common local endpoints on startup:

| Endpoint | Server |
| --- | --- |
| `localhost:1234` | LM Studio |
| `localhost:11434` | Ollama |
| `localhost:8080` | llama.cpp |
| `localhost:5000` | vLLM / custom |
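
Conceptually, the startup probe amounts to a quick reachability check against each of these ports. A rough sketch of the idea (illustrative only, not the actual implementation):

```bash
# Check which of the common local-model ports have something listening
for port in 1234 11434 8080 5000; do
  if curl -s --max-time 1 -o /dev/null "http://localhost:$port"; then
    echo "something is listening on localhost:$port"
  fi
done
```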

When a server is detected, you'll see a notification: "Detected Ollama running on port 11434. Add as a provider?"

You can also trigger detection manually from the command palette: `NexusCore: Detect Local Models`

::: info
Auto-detection only probes `localhost` and `127.0.0.1` — it never makes external network requests.
:::

## NexusCore CLI Configuration

### Add a Provider

```bash
# Local model server
nexus-cli models add ollama --endpoint http://localhost:11434

# Cloud provider with API key
nexus-cli models add openai --endpoint https://api.openai.com/v1 --api-key sk-...
```
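
The same pattern should cover the other providers in the table above. For example (the provider names `lmstudio` and `my-gateway` are placeholders of my choosing, not names these docs define):

```bash
# Another local server (LM Studio's default port)
nexus-cli models add lmstudio --endpoint http://localhost:1234

# Any OpenAI-compatible service: base URL plus API key
nexus-cli models add my-gateway --endpoint https://llm.example.com/v1 --api-key sk-...
```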

### Auto-Detect Local Servers

```bash
nexus-cli models detect
```

Example output:

```text
Scanning local endpoints...
  ✓ Ollama detected on localhost:11434 (3 models available)
  ✓ LM Studio detected on localhost:1234 (1 model available)

Add these providers? [Y/n]
```

### List Providers and Models

```bash
nexus-cli models list
```

Example output:

```text
Providers:
  ollama (localhost:11434) — connected
    • llama3.1:8b
    • codellama:13b
    • mistral:7b
  openai (api.openai.com) — connected
    • gpt-4o
    • gpt-4o-mini
```

### Set Default Model

```bash
nexus-cli models default llama3.1:8b
```

### Remove a Provider

```bash
nexus-cli models remove ollama
```

## Config File

Model providers are stored in `.nexuscore/config.yaml` (shared between CLI and IDE):

```yaml
models:
  providers:
    - name: ollama
      endpoint: http://localhost:11434
      type: ollama
    - name: openai
      endpoint: https://api.openai.com/v1
      type: openai
      api_key_env: OPENAI_API_KEY
  default_model: llama3.1:8b
  routing:
    fast: llama3.1:8b
    code: codellama:13b
    reasoning: gpt-4o
```
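
To add a provider by hand, append an entry under `providers`. The sketch below registers an OpenAI-compatible gateway; the `name`, `endpoint`, and environment variable are placeholders, and reusing `type: openai` for any OpenAI-compatible server is an assumption on my part:

```yaml
models:
  providers:
    - name: my-gateway                      # placeholder name
      endpoint: https://llm.example.com/v1  # placeholder base URL
      type: openai                          # assumed: OpenAI-compatible servers reuse this type
      api_key_env: MY_GATEWAY_API_KEY       # read the key from the environment
```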

## Environment Variables

You can also configure providers via environment variables:

| Variable | Description |
| --- | --- |
| `NEXUS_MODEL_PROVIDER` | Default provider name |
| `NEXUS_API_KEY` | API key for the default provider |
| `NEXUS_MODEL` | Default model name |
| `NEXUS_BASE_URL` | Custom API base URL |
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `GOOGLE_API_KEY` | Google AI API key |
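
For example, a typical shell setup using the variables above to make OpenAI the default for the current session:

```bash
# Default provider and model for this shell session
export NEXUS_MODEL_PROVIDER=openai
export NEXUS_MODEL=gpt-4o
export OPENAI_API_KEY=sk-...
```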

## Model Routing

NexusCore supports assigning different models to different task types:

| Mode | Purpose | Example |
| --- | --- | --- |
| `fast` | Quick edits, completions | llama3.1:8b |
| `code` | Code generation, refactoring | codellama:13b |
| `reasoning` | Complex tasks, architecture | gpt-4o |

Configure routing in the NexusIDE Models panel or in the config file under `models.routing`.

When a model fails during a request, NexusCore automatically falls back to the next provider in the chain and notifies you in the chat panel.
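
The fallback chain itself doesn't appear in the config example above. If it's configurable, it presumably lives alongside `routing`; the `fallback` key below is hypothetical, shown only to illustrate the idea of an ordered provider chain:

```yaml
models:
  fallback:   # hypothetical key, not confirmed by these docs
    - ollama  # try the local provider first
    - openai  # then fall back to the cloud provider
```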

## Cloud Model Brokering (Pro)

Pro and Studio tier users can store API keys securely on the Nexus Suite server and access cloud models through the brokering service:

1. Sign in: `nexus-cli auth login`
2. Store your API key: `nexus-cli models set-key openai`
3. Cloud models appear alongside local models in the model selector (the full flow is shown below)
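
End to end, the flow is just the commands above followed by a listing to confirm the brokered models showed up:

```bash
nexus-cli auth login
nexus-cli models set-key openai
nexus-cli models list   # brokered cloud models should now appear
```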

The brokering service uses the OpenAI-compatible chat completions format, so it works as a standard provider. Your request and response content is never stored or logged — it's pass-through only.

Studio tier users get priority routing with lower latency and higher rate limits.

::: tip
Manage your cloud API keys on the Portal at nexus-suite.dev/account/keys.
:::

## Switching Models

### From NexusIDE

- Click the model name in the status bar to open a quick-pick dropdown
- Or use the model selector in the chat panel input area
- Models are grouped by provider, with badges showing the assigned mode (`fast`/`code`/`reasoning`)

### From NexusCore CLI

```bash
# Switch default model
nexus-cli models default gpt-4o

# Use a specific model for one session
nexus-cli chat --model codellama:13b
```

## Troubleshooting

"No model providers configured"

Run `nexus-cli models detect` to scan for local servers, or add one manually with `nexus-cli models add`.

### Connection failed

- Verify the model server is running (a quick reachability check is sketched below)
- Check the endpoint URL and port
- For cloud providers, verify your API key is valid
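
A quick way to rule out the server itself is to hit its model-listing endpoint directly (the URLs below are the common defaults from the auto-detection table; adjust to your endpoint):

```bash
# Ollama: should return a JSON list of installed models
curl -s http://localhost:11434/api/tags

# OpenAI-compatible servers (LM Studio, vLLM, llama.cpp server): list models
curl -s http://localhost:1234/v1/models
```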

### Model not found

Run `nexus-cli models list` to see available models on each provider. The model name must match exactly.
