Render Video Transcripts From Symbl Once Processing Is Complete on a Next.js Screen

Vladimir Novick

In this video lesson, you will learn the basics of the Symbl AI Conversation API and get transcripts of the video. You will render them on the screen along with their timestamps.

Vladimir Novick: [0:00] Now, let's get transcripts pulled from Symbl using the Conversation API. We want to render them, so we'll create a new state variable called messages along with its setter, setMessages. We'll use the useState hook for that.
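Here's a minimal sketch of that hook. In the video the initial value starts out as a string, which turns out to cause a bug fixed at [6:43]; the array initializer shown here is the corrected version.

```js
import { useState } from "react";

// State for the transcript messages returned by the Conversation API.
// Initialized as an array so messages.map works when rendering.
const [messages, setMessages] = useState([]);
```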

[0:21] Let's create a function called getTranscript. This function will fetch from the Symbl Conversation API. We'll use the messages endpoint, so we'll call api.symbl.ai/v1/conversations, pass the conversationId that we received when processing the media, and append the messages endpoint. We'll also need to pass our token and our method.

[1:01] We'll use the GET method. We'll also use headers with our x-api-key, where we'll pass the token, and we'll set the Content-Type to application/json.

[1:17] We'll also use mode cors. When we receive a result, similar to the previous lesson, we'll parse that result as JSON and set our messages to result.messages.
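Putting those pieces together, getTranscript might look roughly like this; `conversationId` and `token` are assumed to already be in scope from the processing and authentication steps of the earlier lessons.

```js
// Fetch the transcript messages for a processed conversation.
// Assumes conversationId, token, and setMessages are in scope.
async function getTranscript() {
  const response = await fetch(
    `https://api.symbl.ai/v1/conversations/${conversationId}/messages`,
    {
      method: "GET",
      mode: "cors",
      headers: {
        "x-api-key": token,
        "Content-Type": "application/json",
      },
    }
  );
  const result = await response.json();
  setMessages(result.messages);
}
```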

[2:11] Now, what we need to do is listen to our status change. When it's completed, we will log our messages. In order to listen for the status change, let's use the useEffect hook and listen for status changes. Within useEffect, we'll check if the status is completed. If it is, then we can call our getTranscript.
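A sketch of that effect, assuming `status` comes from the polling logic set up in the previous lesson:

```js
import { useEffect } from "react";

// Re-run whenever the job status changes; once processing is
// complete, pull the transcript from the Conversation API.
useEffect(() => {
  if (status === "completed") {
    getTranscript();
  }
}, [status]);
```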

[2:47] Let's also log our messages and try that. Let's take this testing app and send it for processing. We'll wait for the video to get sent to Symbl for processing. Then, we'll poll the Job API to get the status. Whenever the status is completed,

[3:19] we will see messages logged in the console. We can see that there is a conversationId and a jobId, meaning the video is already sent to Symbl. We'll see the polling is still working.

[3:44] We can see a bunch of messages that are not logged, because we don't have messages in the dependency array of useEffect; we log messages only once, when we run getTranscript. That gives us empty messages in the console, even though we can clearly see the messages are there.

[4:05] What we need to do is take this console.log and move it outside of useEffect. As you can see, we now have messages logged. You can see startTime, endTime, and a bunch of other parameters within each message as well.
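In sketch form, the log simply moves out of the effect body, so it runs on every render and reflects the updated state:

```js
useEffect(() => {
  if (status === "completed") {
    getTranscript();
  }
}, [status]);

// Outside the effect, this runs on every render, so it shows the
// messages state after setMessages has updated it.
console.log(messages);
```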

[4:25] Now, let's render the messages on the screen. In order to render them, we'll create a List with some spacing and a little bit of margin, and we'll map over the messages array. For every message, we'll render a ListItem. As a key, we'll pass the message id, and the message itself will be within the container: a simple Text element with a font size of large, rendering the message text.

[5:41] We'll also add timestamps so we can see when every message is said within the video. The Badge will render a new Date; we'll pass in the message's startTime and convert that to a date string.

[6:16] Then we'll take the startTime again and get the time string, so we'll have a nicely formatted date within our message component. Let's bring in the Badge, ListItem, and List components from Chakra UI, since these are Chakra UI components.
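A rough sketch of that rendering, assuming each message carries the `id`, `text`, and `startTime` fields that Symbl's messages endpoint returns:

```js
import { List, ListItem, Text, Badge } from "@chakra-ui/react";

// Render each transcript message with a timestamp badge.
<List spacing={4}>
  {messages.map((message) => (
    <ListItem key={message.id} m={2}>
      <Badge>
        {new Date(message.startTime).toDateString()}{" "}
        {new Date(message.startTime).toTimeString()}
      </Badge>
      <Text fontSize="lg">{message.text}</Text>
    </ListItem>
  ))}
</List>
```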

[6:43] We'll see that there is a problem because we haven't imported the Text component yet, and messages.map is failing because the initial state was a string, which is wrong. It's supposed to be an array.

[6:55] Now, let's go to the Symbl platform, copy the API secret, and log in. Let's choose a file. We'll choose the same testing app that we used previously and send it for processing. Once the processing is done, we'll see a bunch of messages rendered on screen.

[7:30] We'll see that our polling has started, and that there is a conversationId and a jobId. Once the job status is completed, we'll make our Conversation API call and get a bunch of transcripts with dates.