Hi everyone,
I recently updated OpenLLM and noticed some changes in how models can be deployed. Previously, I was able to run an LLM locally on my CPU by setting the environment variable DTYPE=float32. This let me run models without a GPU, which is essential for my setup since I don't have access to one.
Is it still possible to run an LLM locally on the CPU? If so, how?
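For reference, my old workflow looked roughly like this (the model ID is just a placeholder, and the openllm start syntax is from the older release I had installed, so treat it as an example rather than exact current usage):

```bash
# Force float32 weights so the model loads on the CPU instead of a GPU.
# DTYPE is the environment variable mentioned above; the model ID below
# is only an example from what I was running before the update.
DTYPE=float32 openllm start facebook/opt-1.3b
```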