Getting a long string back as a response makes a lot of sense if you're building an app or tool that's conversational, or if you just want to display a little bit of information. But if you want to do something programmatic with that information, such as using different datapoints in different parts of the UI, you'll have a hard time parsing them out of a single string response.
To get around this, we can simply ask GPT to return our data formatted as JSON! Further, we can ask for it in a particular shape. You'll learn how to format a chat completion prompt to request JSON data, and how to gain confidence in that data by defining its shape in the request.
Instructor: [0:00] Streaming chat completions is a really interesting way to provide an interface for people who want to have a conversation, but it might not make as much sense for programmatic use. For instance, what if I want to get a bunch of attributes and display them in a UI that I control?
[0:14] Now, part of this is controlling the message we're sending to GPT in order to get a response that makes sense to us. For instance, I can make this a dynamic prompt, where I'm still passing in that original prompt, but I add, "Return the response as a JSON object that includes an array of strings."
[0:35] If I generate this response again, we can see that it now returns an object with an attributes property that holds an array of all those strings, just like I asked. Now, what can I do with it? I can parse it as JSON and use it inside of my project.
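As a minimal sketch, that dynamic prompt might be built like this, where basePrompt stands in for the original prompt (the variable name is illustrative, not from the lesson):

```js
// Append the JSON instruction to the original prompt
const prompt = `${basePrompt} Return the response as a JSON object that includes an array of strings.`;

// The response content then comes back as a string like:
// '{ "attributes": ["glows in the dark", "floats in zero gravity"] }'
```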
[0:50] The only issue is that, because we're streaming this, it might not make much sense: we would need to try to parse the JSON every time we receive a new chunk of data, or we would just have to wait for it to be done anyway. Why even go through the trouble of streaming it into our application in the first place?
[1:06] For this use case, we can go back to a pretty standard request format, where we just make the request and get the response, as opposed to streaming it into our application. That way, once it returns, we can parse it and deliver it. What I'm going to do is duplicate the original chat endpoint that we had.
[1:23] I'm going to call it attributes.js, where inside, we can see that we have that original SDK usage, where we're using the createChatCompletion method, and we're passing that just as data right back to the UI. I'm going to copy over that prompt response, as we'll need it inside of this new use case, and also revert it in the original chat stream API.
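As a rough sketch, attributes.js might start out something like this, assuming a Next.js-style API route (the handler shape and request body are assumptions) and the v3 openai SDK, which is what the createChatCompletion method implies:

```js
import { Configuration, OpenAIApi } from 'openai';

// Configure the v3 openai SDK (API key pulled from the environment)
const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export default async function handler(req, res) {
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: req.body.prompt }],
  });

  // For now, send the raw completion data straight back to the UI
  res.status(200).json({ data: completion.data });
}
```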
[1:45] Back inside of my application, I'm going to remove all the streaming logic where we had that original response. I'm also going to update the API endpoint to attributes. I can even add the JSON instruction that we had before to make sure we get the response as JSON.
[2:00] Because we know we're going to get this as a property called data inside of that response, I'm going to go ahead and just destructure the data. Let's test this out to see how it's working before we move forward. Now, if we try to generate that response again, it's going to take a little bit of time, because it's not going to stream directly into the application.
[2:18] Once we get that response, we can look through the choices and find the message. While the content is still going to be a string, we can see that it is going to be that JSON object. We have a decision to make: do we want to parse that JSON inside of the endpoint, or do we want to parse it inside of the UI?
[2:35] It probably makes more sense to do it inside of the endpoint, as ultimately we're creating a data API in order to get that data back into the application. I'm going to create a new constant called data and set it equal to completion.data.choices, where I'm going to grab the first item inside of that choices array.
[2:53] I'm going to specify zero and get the message and the content. Realistically, we want to be able to take this data and parse it, so let's wrap this content with JSON.parse. Once it is parsed, we can pass it right back in that response. Now, if we try to generate this again, we see our attributes logged out, where we have a list of all those different strings.
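Inside the endpoint, that parse-and-respond step could look roughly like this, assuming the handler sketched above:

```js
// Pull the message content out of the first choice and parse it as JSON
const data = JSON.parse(completion.data.choices[0].message.content);

// Respond with the parsed object, e.g. { attributes: [...] }
res.status(200).json(data);
```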
[3:15] Let's now create a new state property for that and call it attributes, so that we can set it to something that makes sense, with a default state of an empty array. Then, we can go down to where we get that response and set that data, making sure we actually pass in the attributes.
[3:33] Down inside of our UI, we can say that if we do have those attributes, we can go ahead and map through each and every one of them. Let's just return a paragraph tag for now with our attribute inside, and to keep React happy, we're going to set a key of that attribute.
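Pieced together, the UI side might look something like this sketch, where the component name, endpoint path, and response shape are assumptions:

```jsx
import { useState } from 'react';

function Attributes() {
  // Default the attributes state to an empty array
  const [attributes, setAttributes] = useState([]);

  async function generate() {
    // Hit the non-streaming attributes endpoint and store the parsed result
    const response = await fetch('/api/attributes', { method: 'POST' });
    const data = await response.json();
    setAttributes(data.attributes);
  }

  return (
    <>
      <button onClick={generate}>Generate</button>
      {attributes.map((attribute) => (
        // The attribute string doubles as the key to keep React happy
        <p key={attribute}>{attribute}</p>
      ))}
    </>
  );
}
```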
[3:53] Once we generate that again, we can see all those attributes listed out right inside of our application. I even like this one, "floats in zero gravity," so we know we're talking about a space jellyfish. Now, this works pretty consistently, but one thing to consider is what happens with more complex data when you're trying to get these JSON objects.
[4:10] Just as an example, this is certainly going to break our application, but it shows what would actually happen. Say something along the lines of, "Create a Magic card, completely new and unique." Then, on top of that, we say, "Return all attributes in a JSON formatted object."
[4:31] If we look at the response of these attributes, we can see that we do get a lot of different data about our new Magic card, which is super cool that we were able to do that in the first place, but pay attention to all these different attributes. I'm going to go ahead and copy the object, just so that we can compare it afterwards.
[4:47] We can see that after we generate a new attributes response, we actually have a different set of data. It's not just the name: we have different properties, where here we now have abilities that we didn't have in our original data object. We also have things like flavor text that don't exist in the original one.
[5:05] We can also be subject to differences in capitalization, where here we have flavor_text, while in a different response we might get camel case, which would absolutely break our application if it's not consistent.
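To illustrate the kind of drift the lesson describes, two hypothetical responses to the same Magic card prompt might disagree on shape (all names and values invented):

```js
// Hypothetical first response, using snake_case
const first = { name: 'Nebula Drifter', flavor_text: 'It drifts.', power: 3 };

// A hypothetical later response might rename or add properties,
// which would break any UI that expects the first shape
const second = { name: 'Nebula Drifter', flavorText: 'It drifts.', abilities: ['Flying'] };
```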
[5:19] Now, reverting back to our original example, what we can actually do is pass in the shape of the JSON object that we want, to try to get more consistent results. While this is just a simple one, with the attributes property on there, it's still a good way to keep a consistent format.
[5:33] What we can do is say const shape equals an object, where we have the attributes as an array of data. I can take this shape and, at the very end of the prompt, instead of saying "with an array of strings," I can say "with a shape of" and JSON.stringify that shape.
[5:51] I'm going to make sure I actually wrap that so it's interpolated as a variable. Once I generate a new response, I can be confident that I'm going to get it in the shape that I requested.
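That shape-in-the-prompt technique could look roughly like this, with basePrompt again standing in for the original prompt:

```js
// Describe the shape we want back as a plain object
const shape = {
  attributes: ['array of strings'],
};

// Stringify the shape into the prompt so GPT mirrors it in its response
const prompt = `${basePrompt} Return the response as a JSON object with a shape of ${JSON.stringify(shape)}.`;
```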
[6:00] Just to prove that this works with more complex data: if I take this shape, which is the original object we stored from the first request, and pass it in along with "create a new character," we can see that I now get a consistent response that looks similar to the shape I passed in with the prompt.
[6:17] As we see here, and as we've seen in past videos, the real trick to working with GPT is being as specific as you can, including with programmatic terms, so that we can get exactly what we want in the response.
[6:29] In review, in order to turn chat completions into structured data, we need to first decide if it makes sense to stream our data or just send back a standard JSON response.
[6:41] Whichever method we choose, we can instruct OpenAI to return the response as a JSON object, along with the exact shape that we want, so that we can parse it into a consistent data response and do whatever we want programmatically within the UI.