Add an OpenRouter provider #921
Conversation
Oh, I also need to add some tests.
@danbarr when and if this is accepted, we'll probably want to document how to use OpenRouter with this provider. I noticed we currently suggest using vLLM in the Continue docs.
Yeah, vLLM worked because both it and OpenRouter expose an OpenAI-compatible API. Originally I thought we could use our existing /openai provider endpoint, but that's where we ran into LiteLLM's automatic routing based on the model name.
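For illustration, here's a minimal sketch of how LiteLLM routes by the model-name prefix; the model names below are just examples, not anything pinned down by this PR:

```python
# Sketch only: LiteLLM decides which upstream to call from the model-name prefix.
import litellm

# A bare model name like "gpt-4o" is treated as a plain OpenAI model and routed
# to api.openai.com, which is why the existing /openai endpoint couldn't reach
# OpenRouter on its own.

# With the "openrouter/" prefix (and, optionally, an explicit api_base),
# LiteLLM proxies the request to OpenRouter instead. It expects the
# OPENROUTER_API_KEY environment variable to be set.
response = litellm.completion(
    model="openrouter/anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "hello"}],
    api_base="https://openrouter.ai/api/v1",
)
```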
In order to properly support "muxing providers" like OpenRouter, we have to tell LiteLLM (or, in the future, a native implementation) which server we want to proxy to. We were already doing that with vLLM, but since we are about to do the same for OpenRouter, let's move the `_get_base_url` method to the base provider.
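Roughly what that could look like (class and attribute names here are illustrative, not the actual codegate code):

```python
# Sketch of the refactor: a shared _get_base_url on the base provider, so both
# the vLLM and OpenRouter providers can tell LiteLLM which server to proxy to.
class BaseProvider:
    provider_route_name: str = ""

    def __init__(self, provider_urls: dict[str, str]):
        # Mapping of provider route name -> upstream base URL, e.g. from config.
        self._provider_urls = provider_urls

    def _get_base_url(self) -> str:
        # Look up the upstream URL configured for this provider, if any.
        return self._provider_urls.get(self.provider_route_name, "")


class VLLMProvider(BaseProvider):
    provider_route_name = "vllm"


class OpenRouterProvider(BaseProvider):
    provider_route_name = "openrouter"
```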
OpenRouter is a "muxing provider" which itself provides access to multiple models and providers. It speaks a dialect of the OpenAI protocol, but for our purposes we can treat it as OpenAI. There are some differences in handling the requests, though (see the sketch below):

1. We need to know where to forward the request to; by default this is `https://openrouter.ai/api/v1`, set via the `base_url` parameter.
2. We need to prefix the model with `openrouter/`. This is a LiteLLM-ism (see https://docs.litellm.ai/docs/providers/openrouter) which we'll be able to remove once we ditch LiteLLM.

Initially I was considering just exposing the OpenAI provider on an additional route and handling the prefix based on the route, but I think having an explicit provider class is better, as it allows us to handle any differences in the OpenRouter dialect easily in the future.

Related: #878
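As a rough sketch of those two adjustments (the helper name and request shape below are hypothetical, not the provider class from this PR):

```python
# Sketch only: apply the two OpenRouter-specific tweaks before handing the
# request off to LiteLLM.
DEFAULT_OPENROUTER_BASE_URL = "https://openrouter.ai/api/v1"


def prepare_openrouter_request(data: dict, base_url: str | None = None) -> dict:
    request = dict(data)

    # 1) Forward the request to OpenRouter (or a configured override).
    request["base_url"] = base_url or DEFAULT_OPENROUTER_BASE_URL

    # 2) Prefix the model so LiteLLM routes it through its OpenRouter handler.
    #    This is the LiteLLM-ism we can drop with a native implementation.
    model = request.get("model", "")
    if model and not model.startswith("openrouter/"):
        request["model"] = f"openrouter/{model}"

    return request
```

With this, a request for `anthropic/claude-3.5-sonnet` would go out as `openrouter/anthropic/claude-3.5-sonnet` against `https://openrouter.ai/api/v1`.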
We can later alias it to openai if we decide to merge them.
Force-pushed from e0150f0 to 920d882.
tests added
Initially I only tested this with Cline, using DeepSeek for Plan and Anthropic
for Act. I still need to test other assistants (Continue).