By itself, the model does not have any conversational support, so this library uses a cache to store conversations and pass them to the model as context. This allows you to have persistent conversations with ChatGPT in a nearly identical way to the official website.
- Includes an API server you can run to use ChatGPT in non-Node.js applications.
- Includes a `ChatGPTClient` class that you can use in your own Node.js applications.
- Replicates chat threads from the official ChatGPT website (with conversation IDs and message IDs), with persistent conversations using Keyv.
- Conversations are stored in memory by default, but you can optionally install a storage adapter to persist conversations to a database.
- The `keyv-file` adapter is also included in this package, and can be used to store conversations in a JSON file if you're using the API server or CLI (see `settings.example.js`).
- Supports configurable prompt prefixes, and custom names for the user and ChatGPT.
- In essence, this allows you to turn ChatGPT into a different character.
- This is currently only configurable on a global level, but the original author had planned to add support for per-conversation customization.
- Sign up and obtain an OpenAI API key at https://platform.openai.com/account/api-keys
- Install the following (if you haven't already):
- Node.js
- npm
- Configure a few files:
- Modify `settings.js` in the root directory as desired. `settings.example.js` is a good template to start with.
- Create a `.env` file in the root folder and add your OpenAI API key. It is git-ignored.

  ```
  # .env
  OPENAI_API_KEY=<your key here>
  ```
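If you're curious how `KEY=value` lines in a `.env` file end up in `process.env`, the sketch below shows the idea in miniature. This is illustrative only — the starter kit's actual loading mechanism (likely the `dotenv` package or similar) may handle more edge cases such as quoting and multi-line values.

```javascript
// Minimal sketch of dotenv-style parsing: each non-comment line of the form
// KEY=value becomes an entry in the returned object.
function parseEnv(contents) {
  const env = {};
  for (const line of contents.split('\n')) {
    const trimmed = line.trim();
    // Skip blank lines and comments
    if (!trimmed || trimmed.startsWith('#')) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue;
    env[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return env;
}

const parsed = parseEnv('# .env\nOPENAI_API_KEY=sk-example-key');
console.log(parsed.OPENAI_API_KEY); // sk-example-key
```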
```shell
# Clone the repo
git clone [email protected]:hochoy/node-chatgpt-starter.git
cd node-chatgpt-starter

# Option 1: Either of these commands will boot up the server on port 4000 on your machine
npm run start
npm run server

# Option 2: Running the following will spin up a container instance with port 4000 exposed
npm run docker:build
npm run docker:run

# Option 3: Running this will also spin up the container instance with port 4000 exposed.
# The compose file is intended to spin up other (optional) infrastructure in the future.
npm run compose:up
```
To get started making calls to the API Server, you can import the following into Postman:
- `./postman/ChatGPT-collection.postman_collection.json`
- `./postman/ChatGPT-environment.postman_environment.json`
The collection sends an HTTP POST request to `localhost:4000/conversation` with this JSON body:

```json
{
  "message": "Hello, how are you today?",
  "conversationId": "your-conversation-id (optional)",
  "parentMessageId": "your-parent-message-id (optional)"
}
```
The server will return a JSON object containing ChatGPT's response:
```json
{
  "response": "I'm doing well, thank you! How are you?",
  "conversationId": "your-conversation-id",
  "messageId": "response-message-id"
}
```
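If you prefer plain Node.js over Postman, the same request can be made with the built-in `fetch` (Node 18+). The helper below only assembles the request options; the URL and port match the server setup above, and the commented-out call is a usage sketch, not something this snippet executes.

```javascript
// Build the fetch options for POST /conversation. conversationId and
// parentMessageId are optional and only included when continuing a thread.
function buildConversationRequest(message, conversationId, parentMessageId) {
  const body = { message };
  if (conversationId) body.conversationId = conversationId;
  if (parentMessageId) body.parentMessageId = parentMessageId;
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  };
}

// Usage sketch (assumes the server from the section above is running on port 4000):
// const res = await fetch('http://localhost:4000/conversation', buildConversationRequest('Hello!'));
// const data = await res.json(); // { response, conversationId, messageId }
console.log(buildConversationRequest('Hello!').body); // {"message":"Hello!"}
```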
If the request is unsuccessful, the server will return a JSON object with an error message and a 503 status code. Setting `debug: true` in `settings.js` will log issues with the connection to OpenAI.
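Since an unsuccessful request comes back as a 503, client code can branch on the status code. A minimal sketch — note that the `error` field name in the 503 body is an assumption here; check the server's actual error response shape:

```javascript
// Interpret a /conversation response: surface a 503 as a thrown error,
// otherwise return the payload ({ response, conversationId, messageId }).
// NOTE: the exact error field name in the 503 body is an assumption.
function handleConversationResponse(status, payload) {
  if (status === 503) {
    throw new Error(`ChatGPT server error: ${payload.error ?? 'unknown error'}`);
  }
  return payload;
}

const ok = handleConversationResponse(200, {
  response: 'Hi!',
  conversationId: 'c1',
  messageId: 'm1',
});
console.log(ok.response); // Hi!
```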
```shell
# Run the following commands to clone and link the repo to your global npm
git clone [email protected]:hochoy/node-chatgpt-starter.git
cd node-chatgpt-starter
npm link

# Navigate to your project, and run the following to link your current repo to the local node-chatgpt-starter
cd <my_project>
npm link node-chatgpt-starter
```
```js
import ChatGPTClient from "node-chatgpt-starter";

const clientOptions = {
  modelOptions: {
    model: 'text-davinci-003',
  },
  // (Optional) Set custom instructions instead of "You are ChatGPT...".
  // promptPrefix: 'You are Bob, a cowboy in Western times...',
  // (Optional) Set a custom name for the user
  // userLabel: 'User',
  // (Optional) Set a custom name for ChatGPT
  // chatGptLabel: 'ChatGPT',
  // (Optional) Set to true to enable `console.debug()` logging
  debug: false,
};

const cacheOptions = {
  // Options for the Keyv cache, see https://www.npmjs.com/package/keyv
  // This is used for storing conversations, and supports additional drivers
  // (conversations are stored in memory by default)
  // For example, to use a JSON file (`npm i keyv-file`) as a database:
  // store: new KeyvFile({ filename: 'cache.json' }),
};

const chatGptClient = new ChatGPTClient('<OPENAI_API_KEY>', clientOptions, cacheOptions);

const response = await chatGptClient.sendMessage('Hello!');
console.log(response); // { response: 'Hi! How can I help you today?', conversationId: '...', messageId: '...' }

const response2 = await chatGptClient.sendMessage('Write a poem about cats.', { conversationId: response.conversationId, parentMessageId: response.messageId });
console.log(response2.response); // Cats are the best pets in the world.

const response3 = await chatGptClient.sendMessage('Now write it in French.', { conversationId: response2.conversationId, parentMessageId: response2.messageId });
console.log(response3.response); // Les chats sont les meilleurs animaux de compagnie du monde.
```
This project was originally a fork of https://github.com/waylaidwanderer/node-chatgpt-api. Additional acknowledgements to @queercat for identifying the need to specify `host: '0.0.0.0'` for fastify to work in Docker, as per fastify/fastify-cli#57. The initial Dockerfile in this repo was also inspired by the work at https://github.com/queercat/gpt-api-docker.
Please see `./docs/NOTES.md` for additional caveats and warnings.
In this repository, we will be mainly working on:
- preparing a safe and friendly starter kit
- addressing security vulnerabilities
- respecting the terms and conditions of OpenAI usage
If you'd like to contribute to this project, please create a pull request with a detailed description of your changes.
This project is licensed under the MIT License.