Models

Instructor: Tom Chant

Understand the concept of AI models as algorithms that recognize patterns and make predictions. Explore the different categories of OpenAI models, including GPT-3, GPT-3.5, and the newly released GPT-4 models, which excel at natural language understanding and generation.

Learn about the Codex models, which are specifically designed for generating computer code. Delve into the capabilities of the text-davinci-003 model, known for its ability to provide long text output and follow instructions proficiently.

Compare it with other models like text-curie-001 and text-babbage-001 in terms of output length and complexity. Discover the advantages of older models, such as lower costs and faster processing times.

Gain insights into optimizing performance by starting with the best available model and downgrading if necessary. Stay tuned for useful tools that will assist you in selecting the ideal model for your projects. Expand your knowledge and make informed decisions in the fascinating world of AI models.

[00:00] So in the last two scrims, we have used text-davinci-003 as our model. So now let's ask the question, what is an AI model? Well, loosely speaking, an AI model is an algorithm that uses training data to recognize patterns and make predictions or decisions.
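
In concrete terms, the model is just a string you pass with each request. Here's a minimal sketch, assuming the openai v3 Node SDK (`Configuration`/`OpenAIApi`) and an API key stored in `process.env.OPENAI_API_KEY`; the exact setup in your scrims may differ:

```js
import { Configuration, OpenAIApi } from "openai"

// Client setup. The environment variable name is an assumption,
// not necessarily the one used in the earlier scrims.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})
const openai = new OpenAIApi(configuration)

// The `model` property is where text-davinci-003 is selected.
const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Suggest a title for a film about a robot learning to paint.",
  max_tokens: 60,
})

console.log(response.data.choices[0].text)
```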

[00:17] Now, OpenAI has got various models geared towards different tasks, and some models are newer and therefore better than others. At the moment, there are two main categories of models that you might come across. There's the GPT-3, GPT-3.5, and GPT-4 models. And GPT-4 is actually just coming out right now.

[00:36] It's currently in beta. And these models are all about understanding and generating natural language, and they can generate computer code as well. Now, there are also the Codex models. These models are specifically designed to generate computer code, including translating natural language to computer code and vice versa.

[00:55] And you've probably seen examples of that online, even if you haven't tried it yourself yet. OpenAI also has a model that filters content to remove or flag unsafe or sensitive text. In this project, we'll be using the text-davinci-003 model, which is a GPT-3.5 model.
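
As an aside, a Codex request looks just like a text completion request with a different model name. Here's a hedged sketch reusing the client from the snippet above; code-davinci-002 was one of the Codex models at the time of recording, but names and availability change, so check the OpenAI docs:

```js
// Reuses the `openai` client created earlier.
// code-davinci-002 is used here purely as an example Codex model.
const codexResponse = await openai.createCompletion({
  model: "code-davinci-002",
  prompt: "// JavaScript function that returns the sum of an array of numbers\n",
  max_tokens: 100,
  temperature: 0,
})

console.log(codexResponse.data.choices[0].text)
```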

[01:13] Now, this is one of the newest models. It can provide long text output, and it's great at following instructions. There's GPT-4, which is fresh out, and we will be coming to that later in this course. There's the text-curie-001 model, and this is a very capable model. It's actually faster than text-davinci-003,

[01:33] but it's not as good overall in terms of the language it creates. There's text-babbage-001, which is great for straightforward tasks and is also very, very fast. And then there's text-ada-001, and that is fine for basic tasks, and it's fast and cheap. Now, these models are in age order.

[01:52] So text-davinci-003 is the newest on this list. text-ada-001 is the oldest, and the older the models get, the less complex they are. So they run faster, and generally speaking, they're cheaper, but you should check the OpenAI docs for the latest prices.

[02:08] Now, the Curie, Babbage, and Ada models all provide shorter outputs than text-davinci-003, but they might well still be capable of some or even all of the tasks you might want to do in a project. It depends what it is you want to achieve. But just remember, with the older models being cheaper, that means you can scale apps without incurring too many costs.

[02:27] And because they're faster, you'll experience less of the horrible laggy UX you can sometimes get when you're working with any API, but particularly with artificial intelligence, as it obviously has to do so much computation. Now, so far, our app hasn't been too laggy. We've just been generating quite short, quick completions.

[02:45] But as our requests get more complex, we will see lag times increase. And now you might well be asking, what model should I use in my project? Well, OpenAI's advice is this. Start with the best available model and downgrade to save time and costs where possible.
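
In code, downgrading can be as simple as swapping the model string. Here's a rough sketch (not code from the scrims), reusing the client from the first snippet, that sends the same prompt to each model and logs the latency so you can compare speed and quality:

```js
// Candidate models, newest and most capable first.
const candidateModels = [
  "text-davinci-003",
  "text-curie-001",
  "text-babbage-001",
  "text-ada-001",
]

for (const model of candidateModels) {
  const start = Date.now()
  const res = await openai.createCompletion({
    model,
    prompt: "Summarise the plot of Romeo and Juliet in one sentence.",
    max_tokens: 60,
  })
  // Log latency alongside the output so speed and quality can be compared.
  console.log(`${model} (${Date.now() - start} ms):`)
  console.log(res.data.choices[0].text.trim())
}
```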

[03:03] So basically, get your app working so you're happy with its performance and then experiment with cheaper models to see if you can get the same level of results. Now, as new models come out, prices will change and probably performance will increase on those new models. So you will have to do some experimentation. Now, to help you with that, in the next scrim,

[03:22] I want to introduce you to a couple of really useful tools. And one of those tools will help you select the best model when you come to build your own projects. Let's move on to that next.