
Feature Request: fallback to CPU if OOM reached #85

Open
thiswillbeyourgithub opened this issue Sep 15, 2024 · 0 comments

Comments

thiswillbeyourgithub (Contributor) commented Sep 15, 2024

Hi,

In my setup, a user can sometimes hit ollama pretty hard and consume a lot of VRAM, making whisper unavailable for the duration of ollama's keep-alive period.

It's very annoying to lose the audio in that case. I think it would be better to always return a transcription, even if it's slower, so I'm suggesting an env variable that adds the CPU backend as a fallback whenever the CUDA backend hits an OOM error.

What do you think?

Edit: alternatively, maybe allow specifying a list of fallback models? E.g. if loading large-v3 fails, try base instead.
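To illustrate, a minimal sketch of what such a fallback chain could look like. Everything here is hypothetical: `load_with_fallback`, the `(device, model)` candidate tuples, and the `WHISPER_FALLBACKS` env variable name are all made up for this comment, not part of any existing API.

```python
# Hypothetical sketch of the proposed fallback chain. All names here
# (load_with_fallback, candidates, load_model) are illustrative only.

def load_with_fallback(candidates, load_model):
    """Try each (device, model_name) candidate in order and return the
    first one that loads successfully.

    candidates: ordered list of (device, model_name) tuples, e.g.
        [("cuda", "large-v3"), ("cpu", "large-v3"), ("cpu", "base")]
    load_model: callable(device, model_name) -> model; expected to raise
        on failure (e.g. a CUDA out-of-memory error).
    """
    errors = []
    for device, model_name in candidates:
        try:
            return device, model_name, load_model(device, model_name)
        except Exception as exc:  # in practice, catch only the OOM error type
            errors.append((device, model_name, repr(exc)))
    raise RuntimeError(f"all fallback candidates failed: {errors}")
```

The env variable could then encode the chain as something like `WHISPER_FALLBACKS="cuda:large-v3,cpu:large-v3,cpu:base"` (again, a made-up name/format), parsed into the candidate list above.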
