
[Bug] LM Studio Model #21

Open
bukit-kronik opened this issue Feb 15, 2025 · 4 comments · Fixed by #26
Labels
bug Something isn't working

Comments


bukit-kronik commented Feb 15, 2025

Describe the bug

Use "Default model" placeholder for LM Studio so I can save settings.

(screenshot attached in original issue)

johnfunmula (Contributor) commented Feb 18, 2025

Can you try 0.3.2? It should solve your problem.

By the way, about that screenshot: if you open LM Studio and browse to http://localhost:1234/v1/models, is the response empty?
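For reference, the check above can also be done programmatically. A minimal sketch, assuming LM Studio's local server follows the OpenAI-compatible `/v1/models` response shape (`{"object": "list", "data": [{"id": ...}, ...]}`); the helper names are hypothetical, not part of any project API:

```python
import json
from urllib.request import urlopen


def list_model_ids(payload: dict) -> list[str]:
    # Extract model IDs from an OpenAI-style /v1/models response body.
    return [m["id"] for m in payload.get("data", [])]


def fetch_loaded_models(base_url: str = "http://localhost:1234") -> list[str]:
    # Requires LM Studio's local server to be running;
    # returns an empty list if no model is being served.
    with urlopen(f"{base_url}/v1/models") as resp:
        return list_model_ids(json.load(resp))
```

If `fetch_loaded_models()` returns an empty list, the server is up but no model is loaded, which would explain an empty page in the browser.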

bukit-kronik (Author) commented Feb 18, 2025

Will try ASAP. By the way, LM Studio doesn't enforce strict model naming; that's why I'm asking for just a "default model" setting: it should work with whatever model is loaded.

bukit-kronik (Author) commented Feb 18, 2025

@johnfunmula It works. Thank you.

(screenshot attached in original issue)

However, chatting with the model doesn't work, unfortunately:

2025-02-18 16:41:37 [DEBUG] 
Received request: POST to /v1/chat/completions with body  {
  "model": "qwen2.5-7b-celestialharmony-1m-i1",
  "temperature": 1,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "n": 1,
  "stream": true,
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "echo",
        "description": "A simple echo tool to verify if the MCP server is ... <Truncated in logs> ...racteristic response containing the input message.",
        "parameters": {
          "type": "object",
          "properties": {
            "message": {
              "type": "string",
              "description": "Message to be echoed back"
            }
          },
          "required": [
            "message"
          ],
          "additionalProperties": false
        }
      }
    }
  ],
  "stream_options": {
    "include_usage": true
  },
  "messages": [
    {
      "role": "system",
      "content": "\n<Dive_System_Thinking_Protocol>\n  I am an AI Assi... <Truncated in logs> ..._Specific_Rules>\n</Dive_System_Thinking_Protocol>\n"
    },
    {
      "role": "user",
      "content": "hello"
    },
    {
      "role": "assistant",
      "content": "."
    },
    {
      "role": "user",
      "content": "hello?"
    },
    {
      "role": "assistant",
      "content": "."
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "hello"
        }
      ]
    }
  ]
}
2025-02-18 16:41:37  [INFO] 
[LM STUDIO SERVER] Running chat completion on conversation with 6 messages.
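For anyone trying to reproduce this outside the app, the request body in the log above can be rebuilt as a plain dict. This is only a sketch reconstructed from the log: the truncated tool description and system prompt are replaced with short placeholders (the originals are `<Truncated in logs>`), and the helper names are hypothetical:

```python
def build_echo_tool() -> dict:
    # The "echo" MCP tool definition from the logged request.
    # The real description is truncated in the log; a placeholder is used here.
    return {
        "type": "function",
        "function": {
            "name": "echo",
            "description": "A simple echo tool ... (truncated in logs)",
            "parameters": {
                "type": "object",
                "properties": {
                    "message": {
                        "type": "string",
                        "description": "Message to be echoed back",
                    }
                },
                "required": ["message"],
                "additionalProperties": False,
            },
        },
    }


def build_request(model: str, messages: list[dict]) -> dict:
    # Mirrors the sampling parameters and options seen in the debug log.
    return {
        "model": model,
        "temperature": 1,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
        "n": 1,
        "stream": True,
        "tools": [build_echo_tool()],
        "stream_options": {"include_usage": True},
        "messages": messages,
    }
```

POSTing a body like this to `http://localhost:1234/v1/chat/completions` should let you check whether the "." replies come from the model itself or from how the app handles the streamed tool-call response.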

(screenshot attached in original issue)

johnfunmula (Contributor) commented Feb 18, 2025

@bukit-kronik Thank you for the report. We'll check how these models behave in the program.
