The chat model worked fine at first, but it stopped working after the upgrade. #12670

Closed
oilycn opened this issue Jan 17, 2025 · 7 comments
Labels
in linear Issue or PR has been created in Linear for internal review

Comments

oilycn commented Jan 17, 2025

Describe the problem/error/question

The Chat Model node does not show a running status, but it does output results at the end. The Chat Model's output is not shown in the AI Agent's log, and it takes a long time to reply.
The background log shows:
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
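
For reference, "TypeError: fetch failed" from Node's built-in fetch (undici) is a generic wrapper around a lower-level network error (DNS resolution, TLS, timeout, blocked egress); the real reason is attached as error.cause. Below is a minimal sketch to surface that cause; the base URL and API key are placeholders, not the actual credential values from this setup:

```ts
// Minimal sketch: call the OpenAI-compatible base URL and, if fetch fails,
// print the low-level cause that Node hides behind "TypeError: fetch failed".
// OPENAI_BASE_URL and OPENAI_API_KEY are placeholders for your own values.
const baseUrl = process.env.OPENAI_BASE_URL ?? 'https://your-provider.example.com/v1';

async function main(): Promise<void> {
  try {
    const res = await fetch(`${baseUrl}/models`, {
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ''}` },
    });
    console.log('HTTP status:', res.status);
  } catch (err) {
    // Node attaches the underlying error (ECONNREFUSED, ETIMEDOUT, certificate
    // problems, ...) to the TypeError's `cause` property.
    console.error(err);
    console.error('cause:', (err as Error & { cause?: unknown }).cause);
  }
}

void main();
```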

What is the error message (if any)?

There is no error message, but the Chat Model's output is not displayed in the AI Agent's log.

Please share your workflow/screenshots/recording

[Screenshot of the workflow attached]


Share the output returned by the last node

Debug info

core

  • n8nVersion: 1.75.0
  • platform: docker (self-hosted)
  • nodeJsVersion: 20.18.0
  • database: sqlite
  • executionMode: regular
  • concurrency: -1
  • license: enterprise (production)

storage

  • success: all
  • error: all
  • progress: false
  • manual: true
  • binaryMode: memory

pruning

  • enabled: true
  • maxAge: 336 hours
  • maxCount: 10000 executions

client

  • userAgent: mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/131.0.0.0 safari/537.36 edg/131.0.0.0
  • isTouchDevice: false

security

  • secureCookie: false

Generated at: 2025-01-17T12:25:28.764Z

Joffcom (Member) commented Jan 17, 2025

Hey @oilycn,

We have created an internal ticket to look into this, which we will be tracking as "N8N-8140".

Joffcom added the "in linear" label on Jan 17, 2025
burivuhster (Contributor) commented:

Hey @oilycn, can you please help us reproduce the issue by providing the workflow JSON?

oilycn (Author) commented Jan 21, 2025

__.json (workflow JSON) @burivuhster

jeanpaul (Contributor) commented:

@oilycn It looks like you used a different model provider by setting the Base URL in your OpenAI credentials. The Gemini model that was also present in your workflow did seem to work properly. Can you verify that the URL in your OpenAI credentials works correctly?
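
One rough way to check this (the base URL, model name, and API key below are placeholders, not values from this workflow) is to send a minimal chat completion request to the custom endpoint from the machine where n8n runs:

```ts
// Sketch: minimal chat completion against an OpenAI-compatible endpoint.
// baseUrl, the model name, and OPENAI_API_KEY are placeholders.
const baseUrl = 'https://your-provider.example.com/v1';

async function main(): Promise<void> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ''}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini', // use a model the provider actually serves
      messages: [{ role: 'user', content: 'ping' }],
    }),
  });
  console.log(res.status, await res.text());
}

void main();
```

If this returns an HTTP status, the endpoint is reachable and the problem is more likely in the credential or node configuration; if it throws "fetch failed", it is a network-level issue between that machine and the provider.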

oilycn (Author) commented Jan 23, 2025

[Screenshot attached]

Yes, the URL in my OpenAI credentials works. The application is functioning correctly on a VPS in Hong Kong, but not on a server located in Mainland China. During local testing the base URL works fine, and it also worked fine in previous versions. The issue seems to have appeared within one or two releases after the base URL setting was moved into the credential configuration.
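
Note that the path that matters here is the one from inside the n8n container on the Mainland China server; a browser or local test from another machine takes a different network route. A minimal sketch of a check that can be run there, assuming Node is available in the container (n8n itself runs on Node) and with the placeholder base URL replaced by the real one:

```ts
// connectivity-check.js (sketch): uses no TypeScript-only syntax, so it can be
// saved as a .js file, copied into the container, and run with node, e.g.
//   docker cp connectivity-check.js <n8n-container>:/tmp/
//   docker exec <n8n-container> node /tmp/connectivity-check.js
// The base URL below is a placeholder.
const baseUrl = process.env.OPENAI_BASE_URL ?? 'https://your-provider.example.com/v1';

fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY ?? ''}` },
})
  .then((res) => console.log('reachable from the container, HTTP status:', res.status))
  .catch((err) => {
    console.error('fetch failed from inside the container:', err);
    console.error('cause:', err && err.cause);
  });
```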

oilycn (Author) commented Jan 24, 2025

@jeanpaul

jeanpaul (Contributor) commented:

Hi @oilycn -- can you verify that this also breaks with other URLs that follow the OpenAI API spec? I'm unable to help you with this specific host, but I've verified that other hosts do work properly...
