Web Audio: Playing Back Audio Files

Keith Peters
Published 8 years ago
Updated 5 years ago

In this lesson we cover how to load external audio files, such as mp3s, and play them back with the Web Audio API, including altering playback speed, and look at some of the parameters of the start method.

[00:00] In addition to synthesizing sounds from scratch, you can load existing audio files and play them with the Web Audio API. This is a bit more complex, as you need to make the connection, load the bytes of the file, and then decode them into an audio buffer that can be used with the audio API. I'm switching over to a desktop editor on a local server, as I'd be unable to load and play an external file through JS Bin.

[00:24] However, the code itself will be available on JS Bin. I'll start with a window onload handler, just to make sure the page is loaded before we start executing code. As I'm sure you're aware, there are many other ways to accomplish this, most of which would be better in a production environment. I'll create the Web Audio context as usual.

[00:42] I'll also create a new XMLHttpRequest object, as that's how I'll be loading the audio in. There are, of course, higher-level libraries that will load assets for you, so feel free to use whatever's comfortable for you. I'll have the request load the sound file, which is named HappyBee.mp3.

[01:04] I'll set the response type to arraybuffer. This makes sure the data arrives in the format that we need. Now, we need to know when the request has loaded the data, so I'll assign a function to the request's onload property. In this function, the bytes of the audio data from the sound file will be in the request's response property.

[01:26] What we need to do is decode this raw data into an audio buffer that can be used within the Web Audio API. We do that with context.decodeAudioData. This takes the raw response and a callback handler that will be called when the decoding is complete. I'll set that to onDecoded.

[01:47] Now, we need to define that callback handler. This function gets a single parameter, which is an audio buffer. The way we play that buffer is by creating an audio buffer source node. An AudioBufferSourceNode is much like an oscillator node in that it's a source for a sound, but rather than creating a sound from scratch, it plays back the sound that's in an audio buffer.

[02:13] We create the node using context.createBufferSource. Then we need to give it the buffer that it's going to play. That's the same buffer that just got passed into this callback. We just say bufferSource.buffer = buffer.

[02:29] The rest of it you should already know how to do if you've been following along in the series. We connect this node to the context destination and start it playing. When we're all hooked up, we call request.send to kick it all off. You should be able to hear the music playing after a few seconds of buffering.
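Putting the steps above together, the whole flow might look like this sketch. The playFile wrapper is just a name for this example, not part of the lesson's code, and it assumes the page is served from the same origin as HappyBee.mp3:

```javascript
// Sketch of the lesson's flow: fetch an mp3 over XHR, decode it,
// and play it back through an AudioBufferSourceNode.
function playFile(url) {
  const context = new AudioContext();
  const request = new XMLHttpRequest();
  request.open('GET', url);
  request.responseType = 'arraybuffer'; // deliver raw bytes, ready for decoding

  request.onload = function () {
    // request.response holds the undecoded mp3 bytes as an ArrayBuffer
    context.decodeAudioData(request.response, onDecoded);
  };

  function onDecoded(buffer) {
    // An AudioBufferSourceNode plays a decoded AudioBuffer,
    // much as an OscillatorNode generates a tone
    const bufferSource = context.createBufferSource();
    bufferSource.buffer = buffer;
    bufferSource.connect(context.destination);
    bufferSource.start();
  }

  request.send(); // kick everything off
}

// In a browser: window.onload = () => playFile('HappyBee.mp3');
```

Note that a buffer source node is one-shot: once it has been started and stopped, you create a fresh node (cheaply) to play the same buffer again.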

[02:47] I won't show you the web page itself, because there's nothing to see there. Now, there are a few properties of the audio buffer source node that you might want to check out. One is playbackRate. With this, you can slow down or speed up the playback of the sound.

[03:01] This is an audio param, meaning that it's an object with a value property. We could say bufferSource.playbackRate.value = 2. Now the music plays at double speed. Or we can set it to 0.5, and now it's at half speed. There are also options to detune the sound and loop it, among other things.
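As a sketch, setting playbackRate could look like this. The playAtRate helper is a hypothetical name for this example; it assumes you already have an AudioContext and a decoded AudioBuffer:

```javascript
// Play a decoded AudioBuffer at a given speed.
// rate 2 = double speed, 0.5 = half speed, 1 = normal.
function playAtRate(context, buffer, rate) {
  const bufferSource = context.createBufferSource();
  bufferSource.buffer = buffer;
  // playbackRate is an AudioParam, so the number lives on its .value property
  bufferSource.playbackRate.value = rate;
  bufferSource.connect(context.destination);
  bufferSource.start();
  return bufferSource; // keep a reference if you want to stop it later
}
```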

[03:27] Now that we're dealing with something other than a single tone, let's look a bit more deeply at the start method here. We've been calling it with no parameters, but it can take up to three. The first parameter allows you to start the sound after a delay.

[03:41] Passing nothing, or zero, will cause the sound to start immediately, as we've been doing. To add a delay, you need to specify a time for the sound to start, in terms of the audio context's internal timekeeping. The context starts keeping track of time when it is created, starting at zero. You can get its current time by accessing context.currentTime.

[04:06] To have the audio start 10 seconds from now, you'd pass context.currentTime + 10. The second parameter lets you start the sound at some offset from its own beginning. This could be useful if you wanted to play a snippet from a sound that you knew was 30 seconds in. You'd just pass 30 as the second parameter.

[04:30] The final parameter tells the sound how long it should play. I'll pass in 10 here, and load the file now. Now, nothing happens at first. But after 10 seconds the music should start playing. Notice that it started well into the song, not right at the beginning. 30 seconds in, to be exact. If everything works out OK, it should end after exactly 10 seconds.
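The three-parameter call described above can be sketched like this. The startClip name is a hypothetical wrapper for illustration; it assumes context and bufferSource are set up as in the loading code:

```javascript
// start(when, offset, duration): schedule a 10-second clip that
// begins 10 seconds from now, starting 30 seconds into the buffer.
function startClip(context, bufferSource) {
  bufferSource.start(
    context.currentTime + 10, // when: 10 s from now, on the context's clock
    30,                       // offset: begin 30 s into the buffer itself
    10                        // duration: play for exactly 10 s, then stop
  );
}
```

Note that the first parameter is an absolute time on the context's clock, not a relative delay, which is why currentTime is added in.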
