small updates to llama model class #70

Closed · wants to merge 9 commits
3 changes: 2 additions & 1 deletion eureka_ml_insights/models/models.py
@@ -292,7 +292,8 @@ def create_request(self, text_prompt, query_images=None, *args, **kwargs):
         user_content = {"role": "user", "content": text_prompt}
         if query_images:
             if len(query_images) > 1:
-                raise ValueError("Llama vision model does not support more than 1 image.")
+                logging.warning("Llama vision model does not support more than 1 image.")
Collaborator commented:
Have you tested this? Returning None from create_request passes None on to get_response, and I'm not sure how urllib.request.urlopen behaves when the Request object is None.

As we discussed, please adjust this return value so that the situation can be handled properly in handle_request_error, and return do_return=True so the request does not get attempted again.
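The pattern the reviewer is asking for could look like the sketch below. The method names create_request, handle_request_error, and get_response come from the discussion above, but the exact signatures and the (response, do_return) return shape are assumptions for illustration, not the real eureka_ml_insights API.

```python
import logging


class LlamaModelSketch:
    """Hypothetical sketch of the reviewer's suggestion; the real class
    lives in eureka_ml_insights/models/models.py with its own signatures."""

    def create_request(self, text_prompt, query_images=None):
        # Warn and return None instead of raising, per the change under review.
        if query_images and len(query_images) > 1:
            logging.warning("Llama vision model does not support more than 1 image.")
            return None
        return {"role": "user", "content": text_prompt}

    def handle_request_error(self, e):
        # Returning do_return=True signals the caller not to retry the request.
        # The (response, do_return) tuple shape here is an assumption.
        logging.error("Request failed: %s", e)
        return None, True

    def get_response(self, request):
        # Guard against a None request before it ever reaches
        # urllib.request.urlopen, which expects a URL string or Request object.
        if request is None:
            return self.handle_request_error(ValueError("create_request returned None"))
        return {"model_output": "ok"}, False  # stand-in for the real HTTP call
```

The key point is the explicit None guard in get_response: without it, the None produced by create_request would flow straight into the HTTP layer and fail in a less controlled way.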

+                return None
         encoded_images = self.base64encode(query_images)
         user_content["content"] = [
             {"type": "text", "text": text_prompt},
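For context, the multimodal message the diff is building can be sketched as follows. The base64encode helper here is a hypothetical stand-in for the model class's method, and the key names for the image part of the content list are assumptions, since the diff is truncated before that point.

```python
import base64


def base64encode(query_images):
    # Hypothetical stand-in for the model's base64encode helper:
    # encode each image's raw bytes as a UTF-8 base64 string.
    return [base64.b64encode(img).decode("utf-8") for img in query_images]


# Build the multimodal user message the diff constructs: a text part
# followed by a single base64-encoded image part.
text_prompt = "Describe this image."
encoded_images = base64encode([b"\x89PNG fake image bytes"])
user_content = {
    "role": "user",
    "content": [
        {"type": "text", "text": text_prompt},
        {"type": "image_url",
         "image_url": {"url": "data:image/png;base64," + encoded_images[0]}},
    ],
}
```

Since the warning above notes the model supports only one image, the content list carries exactly one image part alongside the text part.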