Added support for Google Gemini 1.5 pro/flash models #1190
base: main
Conversation
Thanks for raising this PR @rtmcrc! Please resolve the errors raised by each of the checks so that we can proceed with reviewing your PR.
modified: gpt_engineer/core/ai.py
Done
Thanks for this; we will check it!
@captivus could you please take a look at this one? I asked you and @zigabrencic to review it. Would love to see this merged soon so we support Gemini 1.5!
load_dotenv(dotenv_path=os.path.join(os.getcwd(), ".env"))

def model_env():
Is there a way we could "merge" the model_env() and load_env_if_needed() functions, to simplify the logic here a bit? Or what was the idea behind adding this function?
Good idea, do whatever you consider efficient. I made it a separate function so that it isn't called twice: load_env_if_needed() is called on line 460 and model_env() on line 298.
I have no clue why they were placed so far apart; maybe there was a reason, I didn't dig into it. 🤷♂️
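One way the two helpers might be merged, as a rough sketch: only read the .env file when none of the expected keys is already present in the environment. The key names here (OPENAI_API_KEY, GOOGLE_API_KEY) and the minimal hand-rolled parser are illustrative assumptions; the PR itself uses python-dotenv's load_dotenv(), and the actual gpt-engineer logic may differ.

```python
import os


def load_env_if_needed(keys=("OPENAI_API_KEY", "GOOGLE_API_KEY")):
    """Sketch of merging model_env() into load_env_if_needed().

    Skips the file read entirely when any of the expected API keys
    is already set, so the .env lookup happens at most once.
    """
    if any(os.getenv(k) for k in keys):
        return  # environment already configured; nothing to do

    path = os.path.join(os.getcwd(), ".env")
    if not os.path.exists(path):
        return

    # Minimal stand-in for dotenv.load_dotenv(): parse NAME=value lines,
    # skipping blanks and comments, without overriding existing variables.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            name, _, value = line.partition("=")
            os.environ.setdefault(name.strip(), value.strip())
```

Collapsing the two entry points this way would mean a single call site decides both whether to load the file and which provider's key to look for, instead of two functions invoked from distant lines.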
@captivus can you please review when you catch a moment?
No description provided.