
"Aborted" after Offline Model download #933

Open
SchinkTasia opened this issue Oct 8, 2024 · 7 comments
Labels
fix Fix something that isn't working as expected

Comments


SchinkTasia commented Oct 8, 2024

Describe the bug

When I add a new offline model and start a chat with any input, I can see in the console that Khoj starts downloading the model. But after the download finishes, Khoj crashes with nothing more than "Aborted (core dumped)". I used -vv as a start parameter, but I don't get any more information.

To Reproduce

I tried two of my own offline models and the preinstalled one. Nothing worked.

Screenshots

[Screenshot: console output ending in "Aborted (core dumped)"]

Platform

  • Server:
    • Cloud-Hosted (https://app.khoj.dev)
    • Self-Hosted Docker
    • Self-Hosted Python package
    • Self-Hosted source code
  • Client:
    • Obsidian
    • Emacs
    • Desktop app
    • Web browser
    • WhatsApp
  • OS:
    • Windows
    • macOS
    • Linux (Ubuntu 24.04)
    • Android
    • iOS

If self-hosted

  • Server Version: Khoj v1.24.1

Additional context

Where can I get more information?

@SchinkTasia SchinkTasia added the fix Fix something that isn't working as expected label Oct 8, 2024
debanjum (Member) commented Oct 8, 2024

How much RAM/VRAM does your machine have? This seems like Khoj has run out of memory and crashed.

Can you also try using one of the smaller (2B, 3B) default models, like Gemma-2 2B, to see if you can get a response from Khoj with them?

You would need to update your ServerChatSettings in the Khoj admin panel at localhost:42110/server/admin.
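
For reference, a quick way to check total RAM and VRAM on Linux (a sketch; `rocm-smi` ships with ROCm for AMD GPUs and `nvidia-smi` with the NVIDIA driver, so use whichever applies):

```bash
# Total and available system RAM
free -h

# VRAM on an AMD GPU (requires ROCm to be installed)
rocm-smi --showmeminfo vram

# VRAM on an NVIDIA GPU (requires the NVIDIA driver)
nvidia-smi --query-gpu=memory.total,memory.used --format=csv
```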

SchinkTasia (Author) commented:

I tested the bartowski/gemma-2-2b-it-GGUF model, but I get the same abort message.
The machine has 12GB RAM in total.

debanjum (Member) commented Oct 8, 2024

I see, that does sound strange. 12GB RAM should be enough for Khoj to work (though without a GPU it'd be slow).

Did you switch the chat model to Gemma 2 2B in both the Khoj Admin panel at http://localhost:42110/server/admin/database/serverchatsettings/ and your user settings at http://localhost:42110/settings?

Just want to make sure this is happening even when a single small chat model is being used by Khoj.

SchinkTasia (Author) commented:

Yeah, I changed the settings in both.

It is a GPU installation for an AMD RX 6900 XT with 16GB VRAM.

debanjum (Member) commented Oct 9, 2024

I see, good to know. Not sure what's up; you have a decent-sized GPU. What command did you use to install Khoj with GPU support?

  1. Can you check that you have the required prerequisites to use your GPU with the llama-cpp-python binding Khoj uses? (See the sketch below this list.)

  2. As a fallback, you can use Khoj with Ollama to get started with offline chat models running on your GPU.
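
For an AMD card, the usual prerequisite is a llama-cpp-python build with ROCm/hipBLAS enabled. A minimal sketch (the exact CMake flag depends on your llama-cpp-python version; recent releases use GGML_HIPBLAS, older ones used LLAMA_HIPBLAS, so check the llama-cpp-python docs):

```bash
# Rebuild llama-cpp-python against ROCm/hipBLAS (requires ROCm installed)
CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python \
  --force-reinstall --upgrade --no-cache-dir

# Verify the rebuilt wheel can offload to the GPU (assumes this
# low-level binding is exposed in your llama-cpp-python version)
python -c "import llama_cpp; print(llama_cpp.llama_supports_gpu_offload())"
```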

SchinkTasia (Author) commented:

I tried the first step, but everything looks alright.

Requirement already satisfied: llama-cpp-python in ./khoj/lib/python3.12/site-packages (0.2.88)
Requirement already satisfied: typing-extensions>=4.5.0 in ./khoj/lib/python3.12/site-packages (from llama-cpp-python) (4.12.2)
Requirement already satisfied: numpy>=1.20.0 in ./khoj/lib/python3.12/site-packages (from llama-cpp-python) (1.26.4)
Requirement already satisfied: diskcache>=5.6.1 in ./khoj/lib/python3.12/site-packages (from llama-cpp-python) (5.6.3)
Requirement already satisfied: jinja2>=2.11.3 in ./khoj/lib/python3.12/site-packages (from llama-cpp-python) (3.1.4)
Requirement already satisfied: MarkupSafe>=2.0 in ./khoj/lib/python3.12/site-packages (from jinja2>=2.11.3->llama-cpp-python) (3.0.0)

I will now try the second step.
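
For context, the Ollama route looks roughly like this (a sketch; the model name is just an example, and the exact Khoj admin settings may differ, so check the Khoj docs):

```bash
# Install Ollama on Linux and pull a small chat model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull gemma2:2b

# Ollama exposes an OpenAI-compatible API; sanity-check it here.
# Khoj can then be pointed at the base URL http://localhost:11434/v1
# via an OpenAI-style chat model config in the Khoj admin panel.
curl http://localhost:11434/v1/models
```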

SchinkTasia (Author) commented:

I am using an Ubuntu VM, and apparently my GPU was never passed through to the VM.
When I query the GPU in Ubuntu, I see that the system doesn't have one.

I guess this is the problem. I'll try to fix it and post here later.
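
For anyone hitting the same thing, a quick way to confirm whether the guest actually sees a GPU (assuming standard tools; `rocminfo` only exists if ROCm is installed):

```bash
# List PCI display devices visible to the guest; an empty result
# (or only a virtual VGA adapter) means the GPU was not passed through
lspci -nn | grep -Ei 'vga|3d|display'

# If ROCm is installed, this should enumerate the AMD GPU
rocminfo | grep -i 'marketing name'
```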
