option to set the model this app uses #397
Comments
@yoheinakajima's project uses litellm, and according to the documentation here, a config file tells litellm which models to use: "Step 1. CREATE config.yaml" with a `model_list` section.
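For reference, the proxy config the litellm docs describe looks roughly like this (the model names and aliases below are purely illustrative):

```yaml
# Sketch of a litellm proxy config.yaml; entries are examples, not babyagi's
# actual configuration.
model_list:
  - model_name: gpt-4          # alias that callers request
    litellm_params:
      model: gpt-4             # actual provider model to route to
  - model_name: local-llama
    litellm_params:
      model: ollama/llama2
```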
Appreciate the comment! I think the docs indicate that this only applies when you're using the litellm gateway, though. I'd also assume that if you specify a model in the function call, that model is used, and the config file just sets a default.
I think the best approach would be to create a wrapper for litellm and use environment variables. A search for `gpt-` will give you all the code to change. You might also want to look at a babyagi fork (or write your own), e.g. https://github.com/saten-private/BabyCommandAGI. Not sure whether a pull request would be appreciated, as there are a few advanced branches now (none of which seem to allow model or service selection), but have a go :D
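The wrapper idea above could be sketched like this, with the environment variable name (`BABYAGI_MODEL`) and the default model string made up for illustration; every `gpt-` call site would then go through the wrapper instead:

```python
# Hypothetical wrapper: model name comes from an environment variable,
# falling back to a hard-coded OpenAI default (both names are illustrative).
import os

DEFAULT_MODEL = "gpt-3.5-turbo"  # stand-in for the current hard-coded model

def get_model() -> str:
    """Read the model name from the environment, else use the default."""
    return os.environ.get("BABYAGI_MODEL", DEFAULT_MODEL)

def complete(prompt: str, **kwargs):
    """Thin wrapper so every call site uses the configured model."""
    import litellm  # assumed available; accepts model strings like "gpt-4"
    return litellm.completion(
        model=get_model(),
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
```

With something like this in place, `BABYAGI_MODEL=ollama/llama2 python babyagi.py` would switch models without touching the code.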
Appreciate it! Yeah, I could just fork and change the code myself; I just figured it would be nicer to have this in the main project. I was also curious whether there was some reason it was hard-coded as opposed to parameterized.
No idea, but you could fork it, update it to really use litellm, and send a pull request. If it checks for a .env file and uses that, but falls back to the default OpenAI models when no .env exists, then no one else needs to change anything. Or ignore me :D
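The ".env with OpenAI defaults" idea could look roughly like this, using only the standard library (the file format handling, variable name, and default value are all assumptions for illustration):

```python
# Stdlib-only sketch: load KEY=VALUE pairs from a .env file if one exists,
# otherwise fall back to hard-coded defaults. Names are illustrative.
import os

DEFAULTS = {"LLM_MODEL": "gpt-3.5-turbo"}  # stand-in OpenAI default

def load_env_file(path: str = ".env") -> None:
    """If the file exists, load KEY=VALUE lines into os.environ without
    overwriting variables that are already set."""
    if not os.path.exists(path):
        return  # no .env: the hard-coded defaults apply unchanged
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

def get_setting(name: str) -> str:
    """Environment first, then the hard-coded default."""
    return os.environ.get(name, DEFAULTS[name])
```

In practice a library like python-dotenv does the loading step, but the fallback logic is the part that keeps existing users unaffected.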
Not important enough to me, but I appreciate your suggestion.
Is there a reason you hard-code different models in different calls to litellm's completion function?
It would be really nice to be able to pass in an environment variable and set the model system-wide. I'm assuming the reason we can't do this is that certain functionality only works well with the more powerful models?