
    Make a Twitter Audio Bot That Composes a Song Based on a Tweet

    Hannah Davis

    In the final bot lesson, we'll compose a ditty based on a tweet, save it as an audio file, and post it to Twitter. Because Twitter only supports uploading audio in video form, we'll learn how to create a video from the MIDI file and post it to Twitter. This is a longer video, since we're going over how to create this pipeline from scratch.

    For more Natural Language Processing, see the course on Natural: https://egghead.io/courses/natural-language-processing-in-javascript-with-natural

    We'll use RiTa to tokenize the text of a tweet and find the parts of speech: https://rednoise.org/rita/

    We'll use jsmidgen to compose a tune in MIDI format: https://www.npmjs.com/package/jsmidgen

    We'll also use FFMPEG, which will help us create a video from our audio and a picture: https://ffmpeg.org/

    And we'll use TiMidity to convert our MIDI file to a WAV file: http://timidity.sourceforge.net/

    You can use any image in place of the black image used in this video.

    Transcript

    00:00 In addition to Twit, we'll need RiTa, we'll need a MIDI library called jsmidgen, and we'll need fs for working with the file system. We'll need path, and we'll need childProcess to execute system-level commands.

    00:20 We'll require an ffmpeg installer, which we're going to call ffmpegPath, which is require('@ffmpeg-installer/ffmpeg'). We'll need ffmpeg itself, which is fluent-ffmpeg, and then we need to say ffmpeg.setFfmpegPath(ffmpegPath).

    00:44 What are we going to do here? We're creating an audio bot that will generate a midi melody based on the text of the tweet that's sent to it. We'll then convert that midi to a WAV file, and incorporate it into a video so that we can upload it to Twitter.

    01:00 Let's put our bot username here, and we're going to need a couple variable names. We'll need an image file name, which will be the background image for our video that we're going to create. I just want a simple black background, so I have a file named black.jpg that I'm going to use.

    01:17 Just to be thorough, we're going to say path.join(process.cwd()). We need a MIDI file name, which we'll call output.mid, we'll need a WAV file name, which we'll call output.wav, and finally we'll need a video file name, which we'll call output.mp4.

    01:41 Since our bot will be listening for tweets that people send it, we'll need a stream. We'll say var stream = bot.stream('statuses/filter') and we'll track our bot username. For some troubleshooting we can say stream.on('connecting'), stream.on('connected'), and stream.on('error'), and then what we really want is stream.on('tweet').

    02:17 Here we'll say if tweet.text.length is greater than zero, so if there's text, then we're going to call a function called createMedia. It will take our tweet, our image file name, our MIDI file name, our WAV file name, our video file name, and a callback. If there's an error, we can log that out; otherwise we'll say media created.

    02:48 Let's make our createMedia function. We've got a tweet, image file name, midi file name, WAV file name, video file name, and our callback. The first thing we're going to do is create a midi file, that will take a tweet, a midi file name, and a callback.

    03:13 If there's an error, we'll log it out; otherwise, we'll convert our MIDI here. But first let's create our createMidi function. We've got a tweet, a MIDI file name, and a callback. To create a MIDI file, we'll say var file = new Midi.File(), and then we need to say var track = new Midi.Track().

    03:36 Then we'll say file.addTrack(track). We're going to base our MIDI file on the tweet our bot received. First, we need to clean our text up a little bit. The first thing we'll do is tokenize it by saying rita.RiTa.tokenize(tweet.text). Then I want to remove tokens that are a user handle, an RT symbol, or a URL.

    04:04 I'm going to make another function called cleanText, which will take some text. It will return the text split on spaces, then .filter, and filter it so that it has none of these symbols. In natural language processing, the words you want to get rid of are often called stop words. We'll call this function hasNoStopWords, and we'll make another function for that here.

    04:33 hasNoStopWords takes a token. Our stop words array will include anything that has the @ symbol, the RT symbol, or http, which will capture most URLs. We're going to say return stopWords.every(stopWord => ...), and we'll return true if the token doesn't include any stop word. Going back down here, we'll join the result together with a space and trim any white space from the beginning and end.
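The cleanup described above can be sketched in plain JavaScript; the example tweet text and the @melody_bot handle are made up:

```javascript
// Keep only tokens that contain none of the "stop word" symbols:
// @ mentions, the RT marker, and http URLs (as in the lesson).
function hasNoStopWords(token) {
  const stopWords = ['@', 'RT', 'http'];
  return stopWords.every(stopWord => !token.includes(stopWord));
}

// Split on spaces, drop the stop-word tokens, and rejoin.
function cleanText(text) {
  return text
    .split(' ')
    .filter(hasNoStopWords)
    .join(' ')
    .trim();
}

console.log(cleanText('RT @melody_bot compose me a tune http://t.co/abc'));
// logs "compose me a tune"
```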

    05:09 Now that we've removed stop words, we also want to remove the punctuation. We can do that by saying .filter by another function, isNotPunctuation. We pass it the token. This one is straightforward because RiTa has a function called isPunctuation. We'll return the token if isPunctuation returns false. Then we'll join the text with the space.
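The lesson uses RiTa's punctuation check for this step; as a rough stand-in that runs without the library, a regex over common punctuation characters works (an approximation, not RiTa itself):

```javascript
// Return true for ordinary tokens, false for tokens that are
// purely punctuation (approximating RiTa's punctuation check).
function isNotPunctuation(token) {
  return !/^[.,!?;:'"()\-]+$/.test(token);
}

const tokens = ['hello', ',', 'world', '!'];
console.log(tokens.filter(isNotPunctuation).join(' '));
// logs "hello world"
```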

    05:37 Let's make a MIDI file where each note is based on the part of speech of the tweet. We'll say var taggedTweet = getPartsOfSpeech(cleanedText), and all this function is is return rita.RiTa.getPosTags of the text. This will return tags that will let us know if each word is a noun, a verb, etc.

    06:05 Here's the fun part, we're going to take our tagged tweet and our track, and compose a little song. Our compose function takes the tagged tweet and a track. We're going to create an array of notes where we map each tag to a note based on its part of speech.

    06:25 If the tag includes 'nn', which is any kind of noun, we'll return E4, where E is the note and 4 is the octave. If the tag includes 'vb', we'll return G4; otherwise we'll return C4. We'll also add here: if the tag includes 'prp', which is the tag for pronouns like the word "I", it should be in this category. Once we have our notes, we'll say notes.forEach(note => track.addNote(...)).

    07:01 The first argument is the channel, the second argument is the note string, and the third is the duration, where 128 equals a quarter note. Then we'll return our track. After we compose our midi, we'll need to save it. We can do that by saying fs.writeFile with the midi file name file.toBytes, and encoding, which is binary, and our callback.
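The tag-to-note mapping described above can be sketched like this; the POS tags are RiTa-style lowercase tags, and treating 'prp' pronouns as nouns is my reading of the lesson's aside about "I":

```javascript
// Map each part-of-speech tag to a jsmidgen note string:
// nouns -> E4, verbs -> G4, everything else -> C4.
function tagsToNotes(tags) {
  return tags.map(tag => {
    if (tag.includes('nn')) return 'e4';  // any noun tag (nn, nns, nnp, ...)
    if (tag.includes('prp')) return 'e4'; // pronouns like "I" join the noun category
    if (tag.includes('vb')) return 'g4';  // any verb tag (vb, vbd, vbz, ...)
    return 'c4';
  });
}

const notes = tagsToNotes(['prp', 'vbp', 'dt', 'nn']); // e.g. "I play a song"
console.log(notes); // [ 'e4', 'g4', 'c4', 'e4' ]

// With jsmidgen, each note would then go onto the track with
// channel 0 and a quarter-note duration of 128 ticks:
// notes.forEach(note => track.addNote(0, note, 128));
```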

    07:32 Now we're back here, and our next step is to convert the midi to a WAV file. We'll make another function called convertMidiToWAV which will take the midi file name, the WAV file name, and a callback. If there's an error, we'll log it out, otherwise, we'll say midi converted, and we'll create our video. Let's write our conversion function.

    08:07 To convert our file, we're actually going to call a command-line process in our Node app. First, we need our command, which is timidity, the software we're going to use to convert our file, plus --output-24bit -A120, which is going to create a high-quality file, plus our MIDI file name, plus -Ow -o plus our WAV file name.
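Assembled as a string, the TiMidity command might look like this (per TiMidity's CLI: -Ow selects RIFF WAVE output, -o names the output file, -A sets amplification):

```javascript
// Build the shell command that renders a MIDI file to WAV,
// ready to hand to childProcess.exec.
function midiToWavCommand(midiFilename, wavFilename) {
  return 'timidity --output-24bit -A120 ' + midiFilename +
         ' -Ow -o ' + wavFilename;
}

console.log(midiToWavCommand('output.mid', 'output.wav'));
// logs "timidity --output-24bit -A120 output.mid -Ow -o output.wav"
```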

    08:37 The way we call this is childProcess.exec our command, any additional options, and our callback. If there's an error, we'll pass it to our callback, otherwise, we'll call our callback with null. Back down here, if we don't get an error, we'll call createVideo. CreateVideo takes the image name, the WAV file name, the video file name, and the callback.

    09:11 In our createVideo function, the command for this starts with ffmpeg(). On 'end', if it finishes correctly, we'll call our callback with null; on 'error', we'll call our callback with the error. Then we need to input our image and an input frames per second.

    09:37 You can play around with this, but I'll do 1/6. You'll need an audio input which is our WAV file, and then we need an output, which is our video file, and an output frames per second, which we'll say is 30, then it's .run. Then down here if this comes back with no error, we know our video has been created.

    09:58 We're going to add one more function before we post our video, and that's a function called deleteWAV, that will take our WAV file name and a callback. If there's an error, we'll log it out, otherwise, we'll upload our media here.

    10:16 We're going to delete our WAV file because there's occasionally an error when the program tries to create a WAV that's already there. This takes our WAV file name and a callback, and we'll use childProcess again for this.

    10:28 The command is rm, or remove, plus the WAV file. We'll say childProcess.exec with our command, any options, and our callback. If there's an error, we'll return our callback with the error; otherwise, our callback will return without an error. We're getting close. Now we just need to upload our media to Twitter, and then post it.

    10:56 To upload our media, we need our tweet and our video file name. We say bot.postMediaChunked, and this takes a file path, which is our video file name, and a callback. The callback gets err, data, and response; if there's an error, we'll log it out, otherwise, this is where we'll post our status.

    11:20 Let's first make some params that our status will need. We'll need the actual status. Let's have our status be the same tweet that was tweeted at us without our own username. We'll say var stat = tweet.text.split on our bot username. Then we can say .join with a space, and then trim any white space at the beginning or end.

    11:44 Let's have our status be the @ symbol, plus the screen name of the person who tweeted at us, plus a space, plus stat. Then we want to make sure that we reply to the right tweet, so we'll include in_reply_to_status_id, which is tweet.id_str, and then we need our media ID string, which is data.media_id_string. Then we'll post a status with these params.
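Putting the params together might look like this; the tweet and upload-response objects here are stand-ins (field names follow the Twitter API conventions the lesson uses), and @melody_bot is a placeholder bot username:

```javascript
// Stand-in incoming tweet and media-upload response.
const tweet = { text: '@melody_bot hello', id_str: '123', user: { screen_name: 'fan' } };
const data = { media_id_string: '456' };
const botUsername = '@melody_bot';

// Drop our own handle from the text, then build the reply params.
const stat = tweet.text.split(botUsername).join(' ').trim();
const params = {
  status: '@' + tweet.user.screen_name + ' ' + stat,
  in_reply_to_status_id: tweet.id_str,
  media_ids: [data.media_id_string],
};

console.log(params.status); // logs "@fan hello"
```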

    12:18 Lastly, we need to make our postStatus function. Here we say bot.post('statuses/update') with our params and our callback. If there's an error, we'll log it out; otherwise, we'll say bot has posted. Ah, and I can see I forgot to include tweet and video file name here.

    12:46 This should do it. Let's see if this runs. It's connected; if we tweet something, we have an error. Ah, so this is actually setFfmpegPath(ffmpegPath.path). Let's clear and try again. If we write another tweet, it says midi converted, media created, bot has posted.

    13:04 It works!
