The Model and Object

Instructor: Tom Chant

Learn how to use the GPT-4 model to make API requests and handle the responses. This lesson covers the two properties the request requires: model and messages.

Explore how to construct the request object in the format the API expects, with the conversation array supplying the messages property. After sending the request and receiving a response, dig into the response structure and extract the relevant information.

Finally, see how to use that response to update the DOM and the conversation array. This lesson provides a practical introduction to working with the GPT-4 model for chat-based AI interactions.

[00:00] Okay, so let's complete the object we're going to send to the API. If we go back to the docs, we can see that two properties are required. They are model and messages. Now, so far for the model, we've been using text-davinci-003, which is a very capable model. But now we're going to use GPT-4.

[00:19] GPT-4 is the newest and most impressive OpenAI model yet. It makes huge improvements on its predecessors, and that is why everyone's been talking about it. Now, if you're looking at these docs and thinking, wait, it says gpt-3.5-turbo here: yes, the GPT-4 model is fresh out at the time of recording,

[00:40] and these docs need updating. So if you click through to these docs, you might find they look a bit different. Now, let's just click through to this endpoint compatibility table. We can see that GPT-4 is listed under chat completions. So this is going to work just fine. Now, for the messages property, slightly confusingly,

[01:01] the example they've given here is a bit of a simplification. So I'm going to ignore that and click through to chat format. Here we can see the format that it wants, and hopefully that looks familiar. It's an array with objects, and each object has got two key value pairs, role and content.

[01:20] And this is exactly what we have in conversation array. We've got an array of objects, and each object has got two key value pairs with role and content. And if we look closer here, we can see that the first object has the role of system, and the content is an instruction. So in the object we send to the API,

[01:40] our messages property just needs to hold conversation array. So just to recap, the object we send to the API will have two properties, model, which is GPT-4, and messages, which should hold conversation array. And you can think of messages as being a replacement for the prompt property that we used in the previous project.
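
As a rough sketch of what that means in code (assuming the conversation array variable is called conversationArr; your variable name and system instruction may differ):

```js
// Shapes involved in the request, assuming the variable is named conversationArr.
// Each element has a role ('system', 'user', or 'assistant') and content.
const conversationArr = [
  {
    role: 'system',
    content: 'You are a helpful assistant.' // example instruction only
  },
  {
    role: 'user',
    content: 'What is the capital of Tunisia?'
  }
]

// The object we send to the API: just the two required properties.
const requestBody = {
  model: 'gpt-4',
  messages: conversationArr
}
```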

[02:01] Okay, so let's do this as a challenge. And we're working here inside fetch reply and inside the object that we're going to send to the API. I want you to give this object a model property of GPT-4, give it a messages property, which should hold our conversation array,

[02:19] and then ask a question, hit send, and log out the response to see if it works. Now, just before you do that, you might have noticed that I'm not nudging you to include a max_tokens property. In the past, we found that max_tokens defaulted to 16. That was when we were using the text-davinci-003 model on the create completions endpoint.

[02:38] On the create chat completions endpoint, things are different. Because we're building up the conversation and sending it with every request, trying to define a single max_tokens figure is impossible. And in fact, the default with GPT-4 is much higher anyway, so it's not going to cause a problem. Okay, code up this object, and then we'll have a look together.
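
For reference, the chat completions endpoint does still accept max_tokens as an optional property, so you could cap reply length explicitly if you wanted to; a minimal sketch with an arbitrary value:

```js
// Optional: cap the reply length. Omitted in this lesson because the default is fine.
const requestBodyWithCap = {
  model: 'gpt-4',
  messages: conversationArr,
  max_tokens: 200 // arbitrary example value
}
```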

[03:05] Okay, so hopefully you got that working just fine. So this should be quite straightforward. The model is 'gpt-4', and of course, that is in a string. And messages is our conversation array. Let's just come down here and log out the response. And I'll hit save, and I'm going to ask it a question. I've asked, what is the capital of Tunisia?
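
As a sketch of what the finished call might look like, assuming the project uses the openai v3 Node SDK's createChatCompletion method (your fetchReply setup may differ):

```js
import { Configuration, OpenAIApi } from 'openai' // v3 SDK, as assumed above

// Assumes your API key is available as an environment variable.
const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

async function fetchReply() {
  const response = await openai.createChatCompletion({
    model: 'gpt-4',            // the model, as a string
    messages: conversationArr  // the whole conversation so far
  })
  console.log(response)        // log the full response to inspect its structure
}
```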

[03:31] And we've got a response, and I'm just going to copy that response and paste it into the editor so we can see it really clearly. We've got loads of info here, just like before. But the important bit is right here with the content, the capital of Tunisia is Tunis. So we have successfully made our first request to the GPT-4 model.
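
If you're using the v3 SDK as in the sketch above, the reply text sits inside response.data (a raw fetch to the endpoint would not have the extra .data level):

```js
// Drill down to the reply text inside the response.
const reply = response.data.choices[0].message.content
console.log(reply) // e.g. "The capital of Tunisia is Tunis."
```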

[03:53] Okay, next, we need to use this response in two ways. We need to update the DOM, and we need to update the conversation array. So let's look at that next.
