This post covers getting Claude Code to work with an [LLM gateway](https://code.claude.com/docs/en/llm-gateway) using [LiteLLM Proxy](https://docs.litellm.ai/docs/proxy/quick_start).
## Setting Up the Workspace
Create a folder to hold all the files for this setup.
```zsh
mkdir litellm_proxy
cd litellm_proxy
```
Then create a virtual environment with uv so we can install the dependencies.
```zsh
# Create the virtual environment & source it
uv venv
source .venv/bin/activate
# Install the required dependency
uv pip install 'litellm[proxy]'
```
## Configuring the Proxy
Now all we need to do is set up our [LiteLLM proxy config](https://docs.litellm.ai/docs/proxy/configs). I made a config with all current (non-deprecated) models enabled so that I can have my pick.
*Not all models are shown, for brevity.*
```yaml
model_list:
  # ── Anthropic Claude ──────────────────────────────────────────────────────
  - model_name: claude-sonnet-4.6
    litellm_params:
      model: github_copilot/claude-sonnet-4.6
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: claude-haiku-4.5
    litellm_params:
      model: github_copilot/claude-haiku-4.5
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: claude-opus-4.6
    litellm_params:
      model: github_copilot/claude-opus-4.6
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  # ── OpenAI GPT ────────────────────────────────────────────────────────────
  - model_name: gpt-4
    litellm_params:
      model: github_copilot/gpt-4
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: gpt-4.1
    litellm_params:
      model: github_copilot/gpt-4.1
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: gpt-5-mini
    litellm_params:
      model: github_copilot/gpt-5-mini
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: gpt-5.2
    litellm_params:
      model: github_copilot/gpt-5.2
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: gpt-5.3-codex
    model_info:
      mode: responses
    litellm_params:
      model: github_copilot/gpt-5.3-codex
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  # ── Google Gemini ─────────────────────────────────────────────────────────
  - model_name: gemini-3-flash
    litellm_params:
      model: github_copilot/gemini-3-flash
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  - model_name: gemini-3.1-pro
    litellm_params:
      model: github_copilot/gemini-3.1-pro
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
  # ── xAI ──────────────────────────────────────────────────────────────────
  - model_name: grok-code-fast-1
    litellm_params:
      model: github_copilot/grok-code-fast-1
      extra_headers: {"Editor-Version": "vscode/1.85.1", "Copilot-Integration-Id": "vscode-chat"}
litellm_settings:
  drop_params: true
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
  database_url: os.environ/DATABASE_URL
```
Make sure you generate secure secrets for `LITELLM_MASTER_KEY` and `DATABASE_URL`. I use the macOS Keychain to hold these secrets for me.
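For example, a random master key can be generated with `openssl` and dropped into the Keychain (the account and service names here are illustrative; pick whatever convention you like):
```zsh
# Generate a random 64-hex-char key, prefixed "sk-" to match LiteLLM's key style
LITELLM_MASTER_KEY="sk-$(openssl rand -hex 32)"

# Store it in the macOS Keychain under an illustrative account/service pair
security add-generic-password -a litellm-proxy -s LITELLM_MASTER_KEY -w "$LITELLM_MASTER_KEY"
```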
The `DATABASE_URL` is built from the Postgres password stored in the keychain. Store it once:
```zsh
security add-generic-password -a litellm-proxy -s DATABASE_URL -w \
  "postgresql://postgres:$(security find-generic-password -a postgres -s postgres -w)@localhost:5432/litellm"
```
Export it before starting the proxy so LiteLLM can read it:
```zsh
export DATABASE_URL="$(security find-generic-password -a litellm-proxy -s DATABASE_URL -w)"
```
I run PostgreSQL in Podman:
```zsh
podman run -d --rm --name litellm-db \
  -e POSTGRES_PASSWORD="$(security find-generic-password -a postgres -s postgres -w)" \
  -e POSTGRES_DB=litellm \
  -v litellm-data:/var/lib/postgresql/data \
  -p 127.0.0.1:5432:5432 \
  postgres:16-alpine
```
## Starting the Proxy
With the config saved and the database running, export the secrets and start the proxy:
```zsh
# Assumes the master key was stored in the Keychain the same way as DATABASE_URL
export LITELLM_MASTER_KEY="$(security find-generic-password -a litellm-proxy -s LITELLM_MASTER_KEY -w)"
export DATABASE_URL="$(security find-generic-password -a litellm-proxy -s DATABASE_URL -w)"
litellm --config config.yaml
```
The proxy will be available at `http://localhost:4000`. You can view the UI and change settings by going to <http://localhost:4000/ui>.
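To sanity-check that the proxy is up and the models from the config are registered, you can hit the OpenAI-compatible models endpoint (assuming `LITELLM_MASTER_KEY` is exported and `jq` is installed):
```zsh
# List the model names the proxy is serving
curl -s http://localhost:4000/v1/models \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" | jq -r '.data[].id'
```
You should see each `model_name` from the config echoed back.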
## Configuring Claude
Next, configure your Claude Code settings so its requests go through the LiteLLM proxy. I set the following environment variables for CC:
```zsh
# The LiteLLM proxy key Claude Code authenticates with
export ANTHROPIC_AUTH_TOKEN="$(security find-generic-password -s litellm-proxy -a claude-code -w)"
export ANTHROPIC_BASE_URL=http://localhost:4000
export ANTHROPIC_MODEL=claude-sonnet-4.6
export ANTHROPIC_SMALL_FAST_MODEL=gpt-5-mini
```
After you have everything set, you can run Claude!
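One convenient way to wire this up is a small zsh wrapper that scopes the variables to a single invocation (the function name and keychain entries here are illustrative):
```zsh
# Launch Claude Code through the LiteLLM proxy without polluting the global environment
claude_proxy() {
  ANTHROPIC_AUTH_TOKEN="$(security find-generic-password -s litellm-proxy -a claude-code -w)" \
  ANTHROPIC_BASE_URL=http://localhost:4000 \
  ANTHROPIC_MODEL=claude-sonnet-4.6 \
  ANTHROPIC_SMALL_FAST_MODEL=gpt-5-mini \
  claude "$@"
}
```
Because the assignments prefix the `claude` command, they apply only to that process; your shell stays clean between runs.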
## References & Links
- [Using Claude Code with GitHub Copilot: A Guide](https://blog.f12.no/wp/2025/09/22/using-claude-code-with-github-copilot-a-guide/)
- [claude-code-over-github-copilot](https://github.com/kjetiljd/claude-code-over-github-copilot)