
Add an OpenRouter provider #921

Merged: jhrozek merged 6 commits into main from cline_openrouter on Feb 6, 2025

Conversation

@jhrozek (Contributor) commented Feb 4, 2025

  • Move `_get_base_url` to the base provider
  • Add an openrouter provider. OpenRouter is a "muxing provider" which itself provides access to multiple models and providers. It speaks a dialect of the OpenAI protocol, but for our purposes we can treat it as OpenAI.

There are some differences in handling the requests, though:

  1. We need to know where to forward the request; by default this is https://openrouter.ai/api/v1, and it is set via the `base_url` parameter.
  2. We need to prefix the model with `openrouter/`. This is a LiteLLM-ism (see https://docs.litellm.ai/docs/providers/openrouter) that we'll be able to drop once we ditch LiteLLM. Both differences are illustrated in the sketch below.
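
A minimal sketch of those two adjustments, with hypothetical names (this is illustrative, not the actual codegate code):

```python
# Hypothetical sketch of the two request-handling differences above;
# class and attribute names are illustrative, not the actual codegate code.

DEFAULT_OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"


class OpenRouterProvider:
    def __init__(self, base_url: str | None = None):
        # 1) Know where to forward the request; default to OpenRouter's API.
        self.base_url = base_url or DEFAULT_OPENROUTER_BASE_URL

    def prepare_request(self, request: dict) -> dict:
        # 2) Prefix the model so LiteLLM picks its OpenRouter handler
        #    (https://docs.litellm.ai/docs/providers/openrouter); this
        #    prefix can be dropped once LiteLLM is removed.
        model = request.get("model", "")
        if not model.startswith("openrouter/"):
            request = {**request, "model": f"openrouter/{model}"}
        return request


if __name__ == "__main__":
    provider = OpenRouterProvider()
    req = provider.prepare_request({"model": "anthropic/claude-3.5-sonnet"})
    print(req["model"])       # openrouter/anthropic/claude-3.5-sonnet
    print(provider.base_url)  # https://openrouter.ai/api/v1
```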

Initially I was considering just exposing the OpenAI provider on an
additional route and handling the prefix based on the route, but I think
having an explicit provider class is better as it allows us to handle
any differences in OpenRouter dialect easily in the future.

So far I have only tested this with Cline, using DeepSeek for Plan mode and
Anthropic for Act mode. I still need to test other assistants (Continue).

Related: #878

@jhrozek (Contributor, Author) commented Feb 4, 2025

Oh I also need to add some tests.

(Review thread on src/codegate/server.py, marked resolved.)
@jhrozek (Contributor, Author) commented Feb 4, 2025

This was my Cline config:

[Screenshot: Cline provider settings, Feb 4, 2025]

@jhrozek (Contributor, Author) commented Feb 4, 2025

@danbarr when and if this is accepted, we'll probably want to document how to use OpenRouter with this provider. I noticed we currently suggest using vLLM in the docs for Continue.

@danbarr (Contributor) commented Feb 4, 2025

Yeah, vLLM worked because both it and OpenRouter expose an OpenAI-compatible API. Originally I thought we could use our existing /openai provider endpoint, but that's where we ran into LiteLLM's automatic routing based on the model name.
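
For illustration, a minimal sketch of the LiteLLM behavior in question: the `openrouter/` prefix selects LiteLLM's OpenRouter handler, while an unprefixed name is routed by model name alone. The model and key below are placeholders.

```python
import litellm

# The "openrouter/" prefix makes LiteLLM use its OpenRouter handler;
# without it, the request would be routed by model name (e.g. "gpt-4o"
# would go straight to OpenAI). Model and key here are placeholders.
response = litellm.completion(
    model="openrouter/anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://openrouter.ai/api/v1",
    api_key="sk-or-placeholder",
)
print(response.choices[0].message.content)
```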

From the commit messages:

In order to properly support "muxing providers" like OpenRouter, we have
to tell LiteLLM (or, in the future, a native implementation) which server
we want to proxy to. We were already doing that for vLLM, but since we
are about to do the same for OpenRouter, let's move the `_get_base_url`
method to the base provider.

We can later alias the openrouter provider to openai if we decide to
merge them.
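
A minimal sketch of that refactor, assuming hypothetical class names (these are not the actual codegate classes):

```python
# Hypothetical sketch: _get_base_url lives on the base provider so both
# vLLM- and OpenRouter-style providers share it. Names are illustrative.
class BaseProvider:
    # Default endpoint for the concrete provider; empty means "must be set".
    default_base_url: str = ""

    def __init__(self, base_url: str | None = None):
        self._base_url = base_url

    def _get_base_url(self) -> str:
        # Muxing providers must tell LiteLLM which server to proxy to.
        url = self._base_url or self.default_base_url
        if not url:
            raise ValueError(f"{type(self).__name__} requires a base_url")
        return url


class OpenRouterProvider(BaseProvider):
    default_base_url = "https://openrouter.ai/api/v1"


class VLLMProvider(BaseProvider):
    # vLLM has no public default endpoint, so base_url must be configured.
    pass
```
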
@jhrozek (Contributor, Author) commented Feb 5, 2025

> Oh I also need to add some tests.

Tests added.

jhrozek merged commit 1be0bfe into main on Feb 6, 2025. 9 checks passed.
jhrozek deleted the cline_openrouter branch on February 6, 2025 at 07:55.