
option to set the model this app uses #397

Open
blake41 opened this issue Dec 1, 2024 · 6 comments

blake41 commented Dec 1, 2024

Is there a reason you hard-code different models in different calls to litellm's completion function?

It would be really nice to be able to pass in an env variable and set the model system-wide. I'm assuming the reason we can't do this is that you need the more powerful models for certain functionality to work well?

@brianlmerritt

@yoheinakajima uses litellm, and according to the litellm documentation a config file tells litellm which models to use.

Step 1. Create config.yaml
Example litellm_config.yaml:

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/
      api_base: os.environ/AZURE_API_BASE   # runs os.getenv("AZURE_API_BASE")
      api_key: os.environ/AZURE_API_KEY     # runs os.getenv("AZURE_API_KEY")
      api_version: "2023-07-01-preview"


blake41 commented Dec 20, 2024

Appreciate the comment! I think the docs indicate that this is only for when you're using the litellm gateway? I would also assume that if you specify the model in the function call, that would be the model used, and the config file would only set a default?

@brianlmerritt

I think the best approach would be to create a wrapper for litellm and use environment variables. A search for gpt- will give you all the code to change.
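Such a wrapper might look like the sketch below. The `LLM_MODEL` variable name, the default model, and the function names are assumptions for illustration, not part of babyagi's actual code:

```python
import os

# Assumed env var name and default; babyagi itself hard-codes models per call.
DEFAULT_MODEL = "gpt-3.5-turbo"

def resolve_model(override=None):
    """Pick the model: explicit argument > LLM_MODEL env var > default."""
    return override or os.environ.get("LLM_MODEL", DEFAULT_MODEL)

# A thin wrapper around litellm.completion could then route every call
# through the same resolution logic:
#
#   import litellm
#
#   def completion(messages, **kwargs):
#       model = resolve_model(kwargs.pop("model", None))
#       return litellm.completion(model=model, messages=messages, **kwargs)
```

Replacing each hard-coded `gpt-` call site with this wrapper would make the model configurable in one place.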

You might also want to look at a babyagi fork (or write your own fork)

https://github.com/saten-private/BabyCommandAGI

Not sure if a pull request would be appreciated, as there are a few advanced branches now (none of which seem to allow model or service selection) but have a go :D


blake41 commented Dec 21, 2024

Appreciate it! Yeah, I could just fork and change the code myself; I just figured it would be nicer to have this in the main project, and I was curious whether there was some reason it was hard-coded as opposed to parameterized.


brianlmerritt commented Dec 23, 2024

No idea, but you could fork it, update it to really use litellm, and send a pull request. If it checks for a .env file and uses that, but sets the defaults to OpenAI models when the .env doesn't exist, then no one else needs to change anything. Or ignore me :D
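That ".env with OpenAI defaults" idea could be sketched roughly like this; the variable names and default models are guesses for illustration, not babyagi's actual configuration:

```python
import os

# Assumed variable names and defaults, for illustration only.
DEFAULTS = {
    "LLM_MODEL": "gpt-4",
    "LLM_FAST_MODEL": "gpt-3.5-turbo",
}

def load_model_config(path=".env"):
    """Read KEY=VALUE lines from a .env file if present, then fall back
    to OpenAI models for anything still unset, so existing users don't
    need to change anything."""
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    os.environ.setdefault(key.strip(), value.strip())
    for key, value in DEFAULTS.items():
        os.environ.setdefault(key, value)
    return {key: os.environ[key] for key in DEFAULTS}
```

With no .env file the call just returns the OpenAI defaults, which preserves the current behavior for everyone else.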


blake41 commented Dec 23, 2024

Not important enough to me, but I appreciate your suggestion.
