
Version 1.76.1 AI Agent Issues #12961

Closed
xhzkp opened this issue Jan 31, 2025 · 12 comments
Labels: in linear (Issue or PR has been created in Linear for internal review)

Comments

xhzkp commented Jan 31, 2025

Bug Description

I performed a fresh installation of the latest version of n8n (v1.76.1) on a Windows 10 operating system. I created a new workflow, but the execution process is extremely slow, and there are no indicators showing the execution of OpenAI. The following video demonstrates this issue.
(I installed an older version of n8n on another Windows 10 computer, executed the same workflow, and the execution speed was very fast. Additionally, the OpenAI model execution indicators were displayed.)

Executing a simple OpenAI conversation now takes over 10 seconds, which is ten times slower than before. Additionally, most of the old workflows I imported no longer work properly. It's really frustrating.

bug8.mp4

Workflow code:

{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        -180,
        -120
      ],
      "id": "e2a7f4af-7f3f-43f2-b0e9-df620dbead0e",
      "name": "When chat message received",
      "webhookId": "43c980cd-5c31-4288-91c7-23c537387b2f"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [
        160,
        -140
      ],
      "id": "26a7f0e5-384a-417d-b28a-96b8dedae96b",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-4o-mini"
        },
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [
        80,
        40
      ],
      "id": "039acf30-88ce-4a5e-8142-990174fac50a",
      "name": "OpenAI Chat Model",
      "credentials": {
        "openAiApi": {
          "id": "TyEokTl05Lo4KMtT",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [
        260,
        80
      ],
      "id": "3684aa69-0d02-4ac2-a8fb-70be02befafa",
      "name": "Window Buffer Memory"
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "39a8e5ec27a409e407646288ee0cfe2b90961dcade892b2e4e9bf07933f589ce"
  }
}

To Reproduce

1. Copy the workflow code above into a new workflow.
2. Run it.

Expected behavior

Normal execution speed

Operating System

Windows 10

n8n Version

1.76.1

Node.js Version

18.16.0

Database

SQLite (default)

Execution mode

main (default)

Joffcom (Member) commented Jan 31, 2025

Hey @xhzkp,

We have created an internal ticket to look into this, which we will be tracking as "GHC-681".

Joffcom added the "in linear" label on Jan 31, 2025
Joffcom (Member) commented Jan 31, 2025

Hey @xhzkp

I have not seen any other reports of this, so I suspect it could be environmental.

On the second machine, the one you put the older version on, what happens if you try the newest version on it?

xhzkp (Author) commented Jan 31, 2025

> I have not seen any other reports of this, so I suspect it could be environmental.
>
> On the second machine, the one you put the older version on, what happens if you try the newest version on it?

@Joffcom
I don't dare to try that again; my previous experience tells me the chances of running into issues are too high. The issues mentioned above occurred after a fresh installation of n8n, which is hard to understand.

xhzkp (Author) commented Jan 31, 2025

@Joffcom
Before the AI agent's (slow) reply arrives, the following error messages appear in the console window:

Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
2025-01-31T03:56:40.216Z [Rudder] error: Response error code: ETIMEDOUT
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
2025-01-31T03:57:01.464Z [Rudder] error: Response error code: ETIMEDOUT
2025-01-31T03:57:22.994Z [Rudder] error: Response error code: ETIMEDOUT
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
2025-01-31T03:57:44.808Z [Rudder] error: Error: ETIMEDOUT
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
connect ETIMEDOUT 2606:4700:20::681a:cbb:443
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
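
A possible way to narrow this down, assuming the repeated ETIMEDOUT errors against 2606:4700:20::681a:cbb:443 are failed connections to a telemetry endpoint over IPv6: check from a command prompt whether that address is reachable over IPv6 at all, and whether a known host is reachable over IPv4. The hosts below are only illustrative.

:: Illustrative connectivity checks only (assumes curl is available, as on recent Windows 10).
:: If the IPv6 ping fails but the IPv4 requests work, the machine likely has broken IPv6 routing.
ping -6 2606:4700:20::681a:cbb
ping -4 api.openai.com
curl -4 -sS --connect-timeout 5 https://api.openai.com/v1/models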

xhzkp (Author) commented Jan 31, 2025

@Joffcom

I tried upgrading to the latest preview version 1.77.0, but the issue still persists. Below is the upgrade log.

C:\Users\Administrator>npm install -g n8n@next
npm warn deprecated @npmcli/[email protected]: This functionality has been moved to @npmcli/fs
npm warn deprecated [email protected]: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated [email protected]: This package is deprecated. Use the optional chaining (?.) operator instead.
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: Rimraf versions prior to v4 are no longer supported
npm warn deprecated [email protected]: Rimraf versions prior to v4 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated [email protected]: Glob versions prior to v9 are no longer supported
npm warn deprecated @aws-sdk/[email protected]: This package has moved to @smithy/protocol-http
npm warn deprecated @aws-sdk/[email protected]: This package has moved to @smithy/signature-v4
npm warn deprecated @aws-sdk/[email protected]: This package has moved to @smithy/node-http-handler
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: Package is no longer maintained
npm warn deprecated [email protected]: dommatrix is no longer maintained. Please use @thednp/dommatrix.
npm warn deprecated [email protected]: This package is no longer supported.
npm warn deprecated [email protected]: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
npm warn deprecated @azure/[email protected]: This package is no longer supported. Please refer to https://github.com/Azure/azure-sdk-for-js/blob/490ce4dfc5b98ba290dee3b33a6d0876c5f138e2/sdk/core/README.md

changed 2162 packages in 10m

198 packages are looking for funding
  run `npm fund` for details

C:\Users\Administrator>n8n
User settings loaded from: C:\Users\Administrator\.n8n\config
(node:4652) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Initializing n8n process
n8n ready on 0.0.0.0, port 5678
Version: 1.77.0

Editor is now accessible via:
http://localhost:5678/

Press "o" to open in Browser.
2025-01-31T04:53:30.423Z [Rudder] error: Response error code: ETIMEDOUT
2025-01-31T04:53:51.802Z [Rudder] error: Response error code: ETIMEDOUT
2025-01-31T04:54:13.276Z [Rudder] error: Response error code: ETIMEDOUT
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed
Error fetching feature flags Error [PostHogFetchNetworkError]: Network error while fetching PostHog
    at new PostHogFetchNetworkError (C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\posthog-core\src\index.ts:41:5)
    at PostHog.<anonymous> (C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\posthog-core\src\index.ts:546:17)
    at step (C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\node_modules\tslib\tslib.es6.js:102:23)
    at Object.throw (C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\node_modules\tslib\tslib.es6.js:83:53)
    at rejected (C:\Users\Administrator\AppData\Roaming\npm\node_modules\n8n\node_modules\node_modules\tslib\tslib.es6.js:74:65) {
  error: DOMException [TimeoutError]: The operation was aborted due to timeout
      at node:internal/deps/undici/undici:13502:13,
  [cause]: DOMException [TimeoutError]: The operation was aborted due to timeout
      at node:internal/deps/undici/undici:13502:13
}
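
If the slowdown really comes from these telemetry and feature-flag requests timing out around every LLM start and end, one workaround to try (a sketch only, using the documented N8N_DIAGNOSTICS_ENABLED setting and Node's --dns-result-order option, not an official fix) is to disable diagnostics and prefer IPv4 DNS results before starting n8n:

:: Workaround sketch: disable n8n telemetry/diagnostics and make Node resolve
:: hostnames to IPv4 first, then start n8n as usual from the same prompt.
set N8N_DIAGNOSTICS_ENABLED=false
set NODE_OPTIONS=--dns-result-order=ipv4first
n8n

If the errors disappear and response times recover, that would point at the unreachable telemetry endpoints rather than at the OpenAI call itself.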

xhzkp (Author) commented Jan 31, 2025

@Joffcom
To verify the issue, I performed a fresh installation of Windows 10 LTSC 2021, then installed Node.js v22.13.1, followed by n8n v1.76.1. Unfortunately, the issue mentioned earlier still persists.

On my other computer, which was upgraded from an earlier version of n8n to the new version, the issue does not occur.

How can this issue be resolved?

Here is a video demonstrating the n8n startup and error message.

bug9.mp4

xhzkp (Author) commented Jan 31, 2025

@Joffcom
I have four computers running Windows 10, all of which have n8n installed via npm. After upgrading to version 1.76.1, all workflows are completely broken, and I am unable to restore them to a working state.
I need help and would appreciate any suggestions or assistance; thanks in advance.

Joffcom (Member) commented Jan 31, 2025

Hey @xhzkp,

To start, I would not recommend using npm to install n8n; use Docker instead. That way everything you need to run n8n, including the recommended Node.js version, is included.

Looking at your previous messages, it looks like you are having networking issues, which could be an IPv6 problem or something local to your environment.

I would start with a Docker install and see how that goes. As for the workflows that are not working, what exactly is happening with them?
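
For reference, a minimal setup along the lines of the n8n Docker documentation (the volume name is just an example) would be:

:: Minimal example based on the n8n Docker docs; adjust the volume name and port as needed.
docker volume create n8n_data
docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n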

Rar9 commented Jan 31, 2025

Same issue on Ubuntu 24.04 LTS with Docker and 1.76.1: chat breaks.

xhzkp (Author) commented Jan 31, 2025

@Joffcom

> As for the workflows that are not working, what exactly is happening with them?

The situation is the same as before. Whenever the AI agent is used, the chat replies become extremely slow, with a delay of several seconds before receiving a response. However, there is no issue with version 1.66 of n8n.

I would ask the developers to install n8n via npm on a Windows system and look into this issue, as Windows is the real productivity system for many users.

xhzkp (Author) commented Feb 1, 2025

@Joffcom
I downgraded to version 1.66.0 using the command npm install -g n8n@1.66.0, and the AI agent's response speed became very fast again, back to normal. Below is a video demonstration.

1.66.mp4

I forgot to mention one point: I modified the base URL of the OpenAI model, as shown in the image below. In versions after 1.66, the base URL parameter is included in the credentials, and I believe this might be the cause of the issue.

Image
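
One thing worth checking, assuming the custom base URL is involved: call that endpoint directly from the same machine and see how quickly it answers. The URL and key below are placeholders, not values from this issue.

:: Placeholder values; replace with the base URL and API key from the credential.
:: A slow or failing response here would point at the endpoint rather than at n8n.
curl -sS -H "Authorization: Bearer YOUR_API_KEY" https://your-custom-base-url/v1/models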

xhzkp (Author) commented Feb 1, 2025

@Joffcom

The issue I encountered is the same as the one in the link below. It occurred after upgrading the version and has the same error message.

Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

#12670
