Generate Chat Completions with AI in Node.js Using the OpenAI SDK & GPT

Instructor: Colby Fayock

ChatGPT and similar conversational AI tools have quickly made an impression in the tech industry and beyond by giving people a more natural, and potentially more helpful, way to get answers to their questions. Bringing GPT into applications can unlock new ways to solve problems and quickly help customers solve theirs.

We can create ChatGPT-like conversational messages with the OpenAI SDK in a Node.js project. You'll learn how to use the createChatCompletion method and configure the gpt-3.5-turbo model to generate a response based on a message prompt.

https://platform.openai.com/docs/api-reference/chat/create

Instructor: [0:00] One of the OpenAI products that has made waves in the tech community is ChatGPT. By simply writing a human sentence, such as, "Which is the most popular React framework?" we can see that, based on ChatGPT's current knowledge, it's Next.js.

[0:15] By using the OpenAI SDK, we can take advantage of similar models to create our own ChatGPT-like interface, or do more interesting things tailored to the project that we're working on.

[0:26] Once you have the OpenAI SDK installed and configured in your Node environment (here, I'm working inside of a Next.js serverless function), I'm going to create a new constant called completion. I'm going to set that equal to await openai.createChatCompletion, where inside, I'm going to pass in a new object that defines my model.
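If you're following along, the setup might look something like this minimal sketch, assuming the v3 openai npm package (the version that exposes createChatCompletion) and an OPENAI_API_KEY environment variable:

```js
// pages/api/chat.js (hypothetical Next.js API route name)
import { Configuration, OpenAIApi } from 'openai';

// Configure the client with an API key stored in the environment
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);
```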

[0:46] According to the documentation at this point in time, the latest model we can use for this API is gpt-3.5-turbo. I'm going to paste that in as my model, and then I'm going to define the messages that I want to send to createChatCompletion.

[1:01] Now, by following the documentation on the chat format, we can see that there are multiple ways that we can define a message when sending it to createChatCompletion. By using a role of system, we can help define the behavior of the assistant that we're creating. By using the user role, we can send in messages on behalf of the person using the application, or send messages to prompt the model ourselves.

[1:23] As you might have guessed, the assistant role is for messages that appear as if they came from the assistant itself. These messages may come from previous conversations, or you might use them to set up scenarios that the assistant should be aware of.
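As an illustration, a messages array that mixes all three roles might look like the following sketch; the content strings are made up for the example:

```js
const messages = [
  // system: shapes the assistant's overall behavior
  { role: 'system', content: 'You are a helpful assistant that answers concisely.' },
  // user: a message on behalf of the person using the application
  { role: 'user', content: 'Which is the most popular React framework?' },
  // assistant: a prior reply, useful for carrying conversation history forward
  { role: 'assistant', content: 'Based on current usage, Next.js is the most popular.' },
];
```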

[1:40] For now, let's get started by sending in a user message. I'm going to set up a new object where I'm going to set my role to user, and then I'm going to set my content property to whatever my prompt is going to be, such as, "How many jellyfish are in the sea?" In order to see that data, let's pass back completion.data.
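Put together, the serverless function might look something like this sketch; the route name and prompt are just for illustration:

```js
// pages/api/chat.js (hypothetical route name)
import { Configuration, OpenAIApi } from 'openai';

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

export default async function handler(req, res) {
  // Ask the model a question with a single user message
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: 'How many jellyfish are in the sea?' },
    ],
  });

  // Pass the raw response data back to the caller
  res.status(200).json(completion.data);
}
```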

[2:00] Now, depending on the environment you're in (I'm using a Next.js serverless function), I can invoke that function. We can see in the response that the AI didn't necessarily like this question: it's not possible to accurately count every single individual jellyfish, so it didn't want to give an answer as such.
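To invoke the function from the browser, a fetch to the route might look like the following, assuming the route lives at /api/chat:

```js
const response = await fetch('/api/chat');
const data = await response.json();

// The assistant's reply lives in the first choice of the response
console.log(data.choices[0].message.content);
```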

[2:18] What if we change this to something like, "What is an estimate of how many jellyfish are in the sea, say, during the summer in the Atlantic Ocean?" While it still prefaces that it doesn't have an exact answer, it now says that certain areas can reach millions of jellyfish in the summer months.

[2:39] How about something that can have a more definitive answer, such as, "What was the first online game to be played competitively?" We can see that, of course, it was Doom, which was released in '93 and had a multiplayer mode that allowed people to find each other over the network.

[2:55] In review, whatever the question, we were able to easily set up a way to ask questions of our GPT model by using the createChatCompletion method of the OpenAI SDK, where after defining the model we want to use, as well as our messages, we got a response back, including the message that the assistant responded with.
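For reference, the shape of completion.data looks roughly like this; the values are abbreviated placeholders:

```js
// Rough shape of completion.data for a chat completion (values abbreviated)
{
  id: 'chatcmpl-...',
  object: 'chat.completion',
  created: 1684000000,
  model: 'gpt-3.5-turbo',
  choices: [
    {
      index: 0,
      finish_reason: 'stop',
      // The message that the assistant responded with
      message: { role: 'assistant', content: '...' },
    },
  ],
  usage: { prompt_tokens: 14, completion_tokens: 42, total_tokens: 56 },
}
```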