The chat model worked fine at first, but it stopped working after the upgrade. #12670
Comments
Hey @oilycn, can you please help us reproduce the issue by providing the workflow JSON?
@oilycn It looks like you used a different model provider by setting the BaseURL in your OpenAI credentials. Using the Gemini model that was also present in your workflow did seem to work properly. Can you verify that the URL in your OpenAI credentials works correctly?
Yes, the URL in my OpenAI credentials works. The application functions correctly on a VPS in Hong Kong, but not on a server located in Mainland China. During local testing the base URL works fine, and it also worked fine in previous versions. The issue seems to have appeared one or two releases after the base URL setting was moved into the credentials configuration.
Hi @oilycn -- can you verify that this also breaks with other URLs that follow the OpenAI API spec? I'm unable to help you with this specific host, but I've verified that other hosts do work properly...
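A minimal sketch of such a check, assuming Node 18+ (built-in fetch) and running on the same machine that hosts n8n; the base URL, API key variable, and model name below are placeholders, not values taken from this workflow:

```typescript
// Minimal reachability check for an OpenAI-compatible base URL, run from the
// same machine that hosts n8n. BASE_URL, the API key variable, and the model
// name are placeholders -- substitute the values from your OpenAI credentials.
const BASE_URL = process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
const API_KEY = process.env.OPENAI_API_KEY ?? "";

async function checkChatCompletions(): Promise<void> {
  const response = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      messages: [{ role: "user", content: "ping" }],
    }),
  });
  console.log("HTTP status:", response.status);
  console.log(await response.text());
}

checkChatCompletions().catch((err) => {
  // A "TypeError: fetch failed" here indicates a network/DNS/TLS problem
  // between this host and the base URL, independent of n8n.
  console.error(err);
});
```

If this request succeeds from the Hong Kong VPS but fails from the Mainland China server, the problem is connectivity from that server to the configured host rather than the credential itself.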
Describe the problem/error/question
The Chat Model node does not show a running status, but it does produce output at the end. The Chat Model output is not shown in the AI Agent's log, and it takes a long time to reply.
The background log shows:
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
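Node's built-in fetch reports low-level network failures as a bare "TypeError: fetch failed" and puts the underlying reason (DNS failure, connection refused, timeout, TLS error) in the error's cause property. A minimal sketch for surfacing that cause on the affected server, with a placeholder URL standing in for the configured base URL:

```typescript
// Node's built-in fetch (undici) hides the real network error in `error.cause`.
// Run this on the server that shows "fetch failed"; the URL is a placeholder
// for the Base URL configured in the OpenAI credentials.
async function showFetchFailureCause(url: string): Promise<void> {
  try {
    const res = await fetch(url);
    console.log("Reachable, HTTP status:", res.status);
  } catch (err) {
    console.error(err);                  // TypeError: fetch failed
    console.error((err as Error).cause); // e.g. ECONNREFUSED, ETIMEDOUT, ENOTFOUND
  }
}

showFetchFailureCause("https://your-openai-compatible-host/v1/models");
```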
What is the error message (if any)?
There is no error message, but the output of the chat model is not displayed in the AI Agent log.
Please share your workflow/screenshots/recording
Share the output returned by the last node
Debug info
core, storage, pruning, client, security
Generated at: 2025-01-17T12:25:28.764Z