This lesson focuses on personalizing the AI's response by incorporating user input into the prompt. It explains how to modify the code to retrieve the user's text input and pass it as a parameter to fetch the AI response.
It also provides guidance on formatting the prompt using template literals and inverted commas to signal specific text. The challenge prompts learners to create an enthusiastic response that mentions the user's input and indicates that the AI needs time to process it.
Additionally, the concept of "max tokens" is introduced, and its impact on the length and completeness of the AI's response is explored. You'll understand the difference between a successful completion and a cut-off response based on the "finish reason" and the number of completion tokens.
[00:00] Before we get to work on our response from the AI, I want to make a slight change. When we have just one word in these keys, we don't need them to be wrapped in inverted commas. Now, it's a totally personal thing, but I prefer it without where possible, so let me just tidy this up a little bit.
[00:17] Okay, I feel strangely better for doing that, even though it doesn't matter at all. Let's get to work with personalising the response we get back from the AI. So at the moment, we've given it this rather basic prompt, and you only get back as much as you put in, so it's giving us this very boring, generic reply.
[00:36] And if we hit Send, we'll see an example of that, excited and eager. Okay, so thanks for that, but to be honest, we could have hardcoded it ourselves. What I want to do is uncomment this code here. So what's happening now is, when a user clicks the Send button,
[00:53] the if clause here is going to check that there is actually some text in the text area. If there is, it's going to render the loading SVG and update the speech bubble to our first generic message. At that point, I also want it to call fetchBotReply. Now, if we're going to get a personalised response,
[01:11] then our prompt needs to have access to whatever the user entered into the text area. So let's take whatever that is and save it as a const userInput, and we'll pass in userInput when we call fetchBotReply, and let's bring it into fetchBotReply as the parameter outline,
[01:29] because it's going to be a one-sentence outline of a movie. Now, the last change I'm going to make here is I just want to log out the response. Okay, and now it's time for a challenge for you, and I'm just going to come in here and I'm going to paste it right inside this object
[01:46] because it's this prompt here that I'm going to ask you to refactor. So I want you to refactor this prompt so the AI gives an enthusiastic, personalised response to the user's input and says it needs a few moments to think about it. Now, a couple of things for you to think about.
[02:04] We can use the parameter outline in the prompt by converting that prompt to a template literal with backticks, and you might want to put the outline in inverted commas or speech marks to signal to OpenAI that this is a chunk of text that you're referring to with the rest of the prompt.
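To illustrate that tip in isolation (the variable name and wording here are placeholders, not the challenge solution):

```javascript
// A minimal sketch of the template-literal tip.
const outline = "a one-sentence movie idea";

// Single quotes give a plain string, so ${outline} is NOT interpolated.
const plain = 'Respond to: ${outline}';

// Backticks give a template literal, so ${outline} is replaced, and the
// inverted commas around it signal which chunk of text the prompt refers to.
const templated = `Respond to: "${outline}"`;
```

Logging `templated` would show the user's idea embedded in the prompt, wrapped in quotes.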
[02:23] And of course, you can experiment with the wording, but don't be afraid to just ask for what you want. There's not one correct way of writing this prompt, and afterwards, I'll show you my way. Now, just before you do that, I'm going to do you a favour. I'm actually going to break the rules again and add a line of code here that we haven't talked about yet.
[02:42] Now, I want to do that because I'm aware that we've looked at loads of theory and we've laid loads of foundations, but there hasn't been too much time for you to get your hands on the keyboard yet. So I just want to pop this line of code in here because without it, this challenge would be ridiculously frustrating, and then we will talk about it afterwards.
[02:59] So all I'm going to do is add a property to this object: max_tokens: 60. Okay, so pause now. Don't worry about this mysterious new property at all. Get this challenge sorted, and we'll have a look together in just a minute. Okay, so hopefully you managed to do that just fine.
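As a sketch, the request object now looks something like this — the model name and the prompt wording are assumptions, the line that matters is max_tokens:

```javascript
const outline = "a one-sentence movie idea";

// Hypothetical shape of the completion request after adding the new property.
const requestBody = {
  model: "text-davinci-003",   // assumed completions model, not confirmed by this lesson
  prompt: `Your prompt mentioning "${outline}" goes here.`,
  max_tokens: 60               // caps the length of the completion
};
```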
[03:22] So I'm going to come in here and I'm going to completely replace this prompt. And what I'm going to say is this: Generate a short message to enthusiastically say "outline" sounds interesting and that you need some minutes to think about it. Mention one aspect of the sentence.
[03:39] So you'll have noticed that I've put outline in inverted commas, so the AI understands that I want it to deal with that specific line of text. And also this last instruction will hopefully make OpenAI personalise the completion. Now, of course, to get access to outline, we need to swap these quotes for backticks.
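Putting that together, the refactored prompt inside fetchBotReply might look like this — a sketch assuming the older OpenAI Node library's createCompletion method and the text-davinci-003 model, neither of which is confirmed by this lesson:

```javascript
// Build the prompt from the user's one-sentence outline.
function buildPrompt(outline) {
  return `Generate a short message to enthusiastically say "${outline}" sounds interesting and that you need some minutes to think about it. Mention one aspect of the sentence.`;
}

async function fetchBotReply(outline) {
  const response = await openai.createCompletion({
    model: "text-davinci-003",      // assumed model
    prompt: buildPrompt(outline),
    max_tokens: 60
  });
  console.log(response);            // log the full response, as mentioned earlier
}
```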
[03:58] Okay, let's hit save. And I'm just going to put a one-line idea in here. And I'm just going to say: a spy deep behind enemy lines falls in love with an enemy agent. Let's press send. And look at that. We're getting a really long completion.
[04:15] And if you just read through that, you can see that it's actually like we're interacting with a human. It's being conversational. It's referring to our idea. And that is exactly what we want. Now, down in the console, we've got the response. And just bear with me while I copy and paste something from there.
[04:33] Now, before I explain why I've done that, I'm just going to actually use the same idea again. But this time I'm going to remove this line of code that I added just before you did the challenge. So I said a spy deep behind enemy lines falls in love with an enemy agent.
[04:51] Let's press send and see what happens. And there we are. We get our response. But look, it is much shorter. A spy deep behind enemy lines falls in love with an... And then it stops. My answer is cut off. Now, I'm just going to copy and paste the same properties from the response.
[05:10] And what we can see there is that the first time with the more successful completion, we had the finish reason of stop. And the second time when the completion was actually not complete, where we got cut off, the finish reason was length. So generally speaking, a finish reason of length is bad news.
[05:28] It means OpenAI has not given us everything it wanted to. Now, also at the end here, we see completion tokens 59 in the first call and completion tokens 16 in the second call. So something we're doing with tokens is affecting the length of the completion we get.
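As a hypothetical illustration of what those two responses look like — following the older completions API shape, with finish_reason on each choice and token counts under usage; the text fields here are made up, not actual output:

```javascript
// Extract of the first, successful response: finished naturally.
const fullCompletion = {
  choices: [{ text: "Wow, a spy romance behind enemy lines!", finish_reason: "stop" }],
  usage: { completion_tokens: 59 }
};

// Extract of the second response: hit the token cap mid-answer.
const truncatedCompletion = {
  choices: [{ text: "Wow, a spy romance behind", finish_reason: "length" }],
  usage: { completion_tokens: 16 }
};

// A finish_reason of "length" means the completion was cut off by max_tokens.
function wasCutOff(completion) {
  return completion.choices[0].finish_reason === "length";
}
```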
[05:46] Now, as you're a seasoned coder, you've probably grasped that we can control the length of our completion to some extent with this max_tokens property. But before we start using max_tokens all over the place, we should really understand what a token is in OpenAI and what this number 60 really means.
[06:04] So in the next scrim, let's take a peek under the hood and dive into tokens and the max_tokens property. When you're ready for that, move on.