An introduction to the Web Audio API. In this lesson, we cover creating an audio context and an oscillator node that actually plays a sound in the browser, and different ways to alter that sound.
[00:00] The Web Audio API allows you to create and manipulate sounds in real time in the browser with JavaScript. It's pretty cool stuff. At this point, it's supported by the latest release of most modern browsers, with the notable exception of any version of Internet Explorer, though it does work in the Microsoft Edge browser.
[00:20] If you're familiar with HTML5's canvas, working with Web Audio is somewhat similar in that you first create a context, which is the object you'll use to create other objects in order to create and manipulate sounds.
[00:34] You create an audio context with new AudioContext(), or new webkitAudioContext() in older WebKit-based browsers. One of these should be a property of the global window object in any supported browser, so you can set it up like so: "var context = new window.AudioContext()" or "new window.webkitAudioContext()."
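That fallback logic can be sketched as a small helper. This is a minimal sketch, and getAudioContextClass is a hypothetical name introduced here for illustration, not part of the Web Audio API:

```javascript
// A minimal sketch, assuming a browser-like global object is passed in.
// getAudioContextClass is a hypothetical helper, not part of the API.
function getAudioContextClass(globalObj) {
  // Prefer the standard constructor; fall back to the webkit-prefixed one.
  return globalObj.AudioContext || globalObj.webkitAudioContext || null;
}

// In a browser, you would then write:
// var context = new (getAudioContextClass(window))();
```

Returning null when neither constructor exists gives you a single place to check for support before doing anything else.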
[00:57] Normally, you'd want to verify that the context was actually created, but for simplicity's sake, we'll just assume that it was. An important property of the AudioContext is the destination. This is where the sound goes. Generally, this is going to be your speakers, or however audio gets out of your computer.
[01:16] The Web Audio API's a node-based system. The objects you create are nodes that have inputs and outputs. You connect them by hooking the output of one node to the input of another. This way, you can chain various effects together and eventually hook the final outputs to the audio context destination to get the results to your speakers.
[01:38] One of the most basic nodes is an oscillator. This directly creates an audible tone. Our strategy will be to create an oscillator, give it a frequency, connect it to the destination, and play it. To create an oscillator, you just call context.createOscillator().
[02:02] Now, we need to configure this oscillator. Minimally, we'll need to tell it what note to play. We do this by setting its frequency. You might expect to set the frequency by assigning a numerical value to a frequency property on the oscillator, but while the oscillator does have a frequency property, it's actually an object called an AudioParam.
[02:23] The AudioParam itself has a value property, and that's what we assign the numerical value to. This will make more sense later when we start hooking up inputs and outputs.
[02:36] We write oscillator.frequency.value = 440. 440 hertz is the A note above middle C, commonly used for tuning instruments.
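With 440 Hz as a reference, you can compute any other note's frequency using the equal-temperament formula f = 440 × 2^(n/12), where n is the number of semitones away from that A. A quick sketch, with noteFrequency as a hypothetical helper name introduced for illustration:

```javascript
// Sketch: equal-temperament frequency, n semitones above (positive)
// or below (negative) A4 = 440 Hz.
// noteFrequency is a hypothetical helper, not part of the Web Audio API.
function noteFrequency(semitonesFromA4) {
  return 440 * Math.pow(2, semitonesFromA4 / 12);
}
```

So noteFrequency(12) gives the A an octave up (880 Hz), and noteFrequency(-12) gives the A an octave down (220 Hz). Any of these values can be assigned to oscillator.frequency.value.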
[02:47] Since this is the only node we're using, we can hook its output directly to the context's destination. We do this by calling oscillator.connect, passing in context.destination. Now we have an oscillator; all we have to do is start it playing. Call oscillator.start(), and instantly you hear a perfect A note.
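The whole sequence so far, create an oscillator, set its frequency, connect it to the destination, and start it, can be wrapped in one small function. This is a sketch, and playTone is a hypothetical name introduced here, not part of the API:

```javascript
// Sketch of the full chain described above.
// playTone is a hypothetical helper; it expects an AudioContext
// (or anything with the same shape) and a frequency in hertz.
function playTone(context, frequency) {
  var oscillator = context.createOscillator();
  oscillator.frequency.value = frequency;      // set the pitch
  oscillator.connect(context.destination);     // route output to the speakers
  oscillator.start();                          // begin playing immediately
  return oscillator;                           // keep a reference so we can stop() it later
}

// In a browser: playTone(new AudioContext(), 440);
```

Returning the oscillator matters because, once started, an oscillator plays until you call stop() on it.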
[03:09] Now, let's try changing the frequency, first making it lower, then higher, and even higher. That works as expected.
[03:29] Another important property of sound is the type of wave used in the oscillation. There are several types of waves commonly used in audio synthesis. Sine waves, square waves, triangle waves, and sawtooth waves are the standard ones. These are all built into the Web Audio API.
[03:46] The default wave we're getting now is a sine wave. Let's set the frequency back to 440 and try a square wave. We do this by setting the type property of the oscillator to the string "square": oscillator.type = "square". Start it again, and now you hear a much harsher sound. Let's try triangle and sawtooth, then go back to the default sine.
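The four built-in types are just string values assigned to oscillator.type, so it's easy to cycle through them for comparison. A small sketch, where nextWaveType is a hypothetical helper introduced for illustration:

```javascript
// The four standard wave types built into the Web Audio API.
var WAVE_TYPES = ['sine', 'square', 'triangle', 'sawtooth'];

// Sketch: return the next type in the list, wrapping back to 'sine'.
// nextWaveType is a hypothetical helper, not part of the API.
function nextWaveType(current) {
  var i = WAVE_TYPES.indexOf(current);
  return WAVE_TYPES[(i + 1) % WAVE_TYPES.length];
}

// In the browser, each call steps the oscillator to the next wave shape:
// oscillator.type = nextWaveType(oscillator.type);
```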
[04:18] Other than sine, all these sound pretty harsh when just a single tone is playing forever, but when we start experimenting with the sound envelope, that harshness will become more of a richness.
[04:26] There's also a way to create custom wave shapes, but we won't be getting into that here, at least not right away.