Electric Guitar Synth in HTML5

TL;DR: Hear it here.

The HTML5 WebAudio API gives web developers low-level access to high-performance audio directly in the browser. One of the many things you can do with this power is sound synthesis: making sounds purely from numbers. Essentially, you can produce music from a blank slate.

Karplus–Strong String Synthesis

Karplus and Strong came up with an ingenious method to model how a plucked string physically behaves. It is very simple and elegant:

  1. A short burst of white noise is first created.
  2. A feedback line is connected with a delay equal to the period of the desired frequency.
  3. Before the feedback is added back into the input line, a lowpass filter is applied.

It is extremely simple to implement in hardware, yet it can produce surprisingly convincing results.
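
To make the idea concrete before we touch WebAudio, here is a minimal sketch of the core recurrence operating on a plain array (the function name and parameters here are purely illustrative, not part of the implementation that follows):

// Minimal Karplus–Strong sketch: fill a plain array with synthesized samples.
// N is the period in samples; a dampening factor < 1 shortens the decay.
function generateSamples(N, numSamples, dampening) {
    // Ring buffer seeded with one period of white noise in [-1, 1]
    var y = new Float32Array(N);
    for (var i = 0; i < N; i++) y[i] = Math.random() * 2 - 1;

    var out = new Float32Array(numSamples);
    var n = 0;
    for (var j = 0; j < numSamples; j++) {
        // Average two adjacent delayed samples (the lowpass in the loop)
        y[n] = dampening * (y[n] + y[(n + 1) % N]) / 2;
        out[j] = y[n];
        n = (n + 1) % N;
    }
    return out;
}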

The Implementation

To begin, we first need what is called an audio context from the browser:

var context = new AudioContext();  
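
In older browsers the constructor may still be vendor-prefixed, so a more defensive version (just a sketch; the rest of this article assumes the unprefixed constructor) could look like this:

// Fall back to the prefixed constructor where necessary
var AudioContextClass = window.AudioContext || window.webkitAudioContext;
var context = new AudioContextClass();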

WebAudio provides a lot of audio processing blocks that could be useful for our sound synthesis. However, we immediately face a problem: DelayNodes in WebAudio were intended for delays on the order of seconds, whereas we require extremely tight, sample-accurate delays on the order of milliseconds. Thankfully, emulating a delay line with a ScriptProcessorNode is not too difficult.

// Signal dampening amount
var dampening = 0.99;

// Returns an AudioNode object that will produce a plucking sound
function pluck(frequency) {  
    // We create a script processor that will enable
    // low-level signal sample access 
    var pluck = context.createScriptProcessor(4096, 0, 1);

    // N is the period of our signal in samples
    var N = Math.round(context.sampleRate / frequency);

    // y is the signal presently
    var y = new Float32Array(N);
    for (var i = 0; i < N; i++) {
        // We fill this with white noise uniformly distributed in [-1, 1]
        y[i] = Math.random()*2 - 1;
    }

    // This callback produces the sound signal
    var n = 0;
    pluck.onaudioprocess = function (e) {
        // We get a reference to the outputBuffer
        var output = e.outputBuffer.getChannelData(0);

        // We fill the outputBuffer with our generated signal
        for (var i = 0; i < e.outputBuffer.length; i++) {
            // Averaging the current sample with the next one acts as
            // a simple lowpass filter (its response drops to zero at
            // half the sampling rate)
            y[n] = (y[n] + y[(n + 1) % N]) / 2;

            // Put the actual sample into the buffer
            output[i] = y[n];

            // Hasten the signal decay by applying dampening.
            y[n] *= dampening;

            // Counting variables to help us read our current
            // signal y
            n++;
            if (n >= N) n = 0;
        }
    }

    // The resulting signal is not as clean as it should be.
    // At lower frequencies, aliasing produces harsh-sounding
    // noise, making the signal sound like a harpsichord. We
    // apply a bandpass filter centered on our target frequency
    // to remove this unwanted noise.
    var bandpass = context.createBiquadFilter();
    bandpass.type = "bandpass";
    bandpass.frequency.value = frequency;
    bandpass.Q.value = 1/6;

    // We connect the ScriptProcessorNode to the BiquadFilterNode
    pluck.connect(bandpass);

    // The signal will have died down after about 2 seconds, so we
    // disconnect the nodes afterwards to avoid leaking memory.
    setTimeout(function() { pluck.disconnect() }, 2000);
    setTimeout(function() { bandpass.disconnect() }, 2000);

    // The bandpass is the last AudioNode in the chain, so we
    // return it as the "pluck"
    return bandpass;
}

Once we have defined our pluck node generator, all we need to do to play a sound is connect it to a sink:

// The open A string (A2, 110 Hz)
var p = pluck(110);

// Connect the node to the sink in our context
p.connect(context.destination);  

That's all there is to Karplus-Strong! Simple, right?

Now, to simulate a guitar we need to play up to six strings in a quick strum. We can easily do that by connecting six plucks in succession:

// Frequency of the open A string (A2 = 110 Hz)
var A = 110;

// Semitone offsets of the other strings relative to the A string
var Eoffset = -5;  
var Doffset = 5;  
var Goffset = 10;  
var Boffset = 14;  
var E2offset = 19;

// fret is an array of finger positions,
// e.g. [-1, 3, 5, 5, -1, -1]
// -1 means the string is not played, 0 is an open string,
// and >= 1 is the fret number being held down
function strum(fret) {  
    // Reset dampening to the natural state
    dampening = 0.99;

    // Connect our strings to the sink
    var dst = context.destination;
    if (fret[0] >= 0) pluck(A*Math.pow(2,(fret[0]+Eoffset)/12) ).connect(dst);
    if (fret[1] >= 0) pluck(A*Math.pow(2,(fret[1])/12)         ).connect(dst);
    if (fret[2] >= 0) pluck(A*Math.pow(2,(fret[2]+Doffset)/12) ).connect(dst);
    if (fret[3] >= 0) pluck(A*Math.pow(2,(fret[3]+Goffset)/12) ).connect(dst);
    if (fret[4] >= 0) pluck(A*Math.pow(2,(fret[4]+Boffset)/12) ).connect(dst);
    if (fret[5] >= 0) pluck(A*Math.pow(2,(fret[5]+E2offset)/12)).connect(dst);
}

You might be wondering why we need the dampening variable at all. The reason is palm muting on the guitar. If we cut the signal off abruptly, undesirable popping noises occur. Instead, we increase the dampening so the sound dies away quickly yet quietly.

function mute() {  
    dampening = 0.89;
}

Now that we have built our entire guitar, we can start strumming it:

strum([-1, 3, 5, 5, -1, -1]);
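
For example (the chord shape and delay here are purely illustrative), we can strum an A minor chord and then palm-mute it half a second later:

// Strum an A minor chord (x-0-2-2-1-0, low E to high E) ...
strum([-1, 0, 2, 2, 1, 0]);

// ... then palm-mute it after 500 ms
setTimeout(mute, 500);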

Having Fun With Distortion

Acoustic guitar is nice, but it doesn't rock. What we really want is an electric guitar.

Traditionally, electric guitars use tube amplifiers to overdrive the signal beyond the limits of the amplifier. The resulting clipping introduces additional (normally unwanted) harmonics, producing a fuller, richer sound. We can emulate this overdrive with a waveshaper.

A waveshaper simply maps the input signal to the output signal using a waveshaping curve. However, waveshaping can change the loudness of the sound, so an additional post-gain is required to compensate for the extra loudness. After that, we remove unwanted high frequencies by applying a lowpass filter below the Nyquist frequency.
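
Conceptually, the chain would look something like the sketch below, with a separate GainNode handling the post-gain (the node names are made up for illustration; the actual code that follows folds the post-gain into the waveshaping curve instead):

// Conceptual chain: waveshaper -> post-gain -> lowpass -> destination
var shaper = context.createWaveShaper();

var postGainNode = context.createGain();
postGainNode.gain.value = 0.25;   // tame the extra loudness from waveshaping

var lowpassNode = context.createBiquadFilter();
lowpassNode.type = "lowpass";

shaper.connect(postGainNode);
postGainNode.connect(lowpassNode);
lowpassNode.connect(context.destination);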

We start off by producing a waveshaping curve:

// By applying postGain inside the waveshaper curve,
// we can combine the GainNode and the WaveShaperNode
// into one WaveShaperNode         
var postGain = 0.25;

// Creates a waveshaper curve with distortion amount > 0
function makeDistortionCurve(amount) {  
    // The curve is discrete, so it needs a resolution
    var samples = 100;
    var curve = new Float32Array(samples);
    var deg = Math.PI / 180;

    // We add the mapping for our signal in the range of [-1, 1]
    for (var i = 0; i < samples; i++) {
        var x = i * 2 / samples - 1;
        // I'd be lying if I said I could tell you how this equation is derived.
        // It's probably just good ol' trial and error.
        curve[i] = (3 + amount) * x * postGain * 60 * deg / (Math.PI + amount * Math.abs(x));
    }

    return curve;
};

Then we simply connect our AudioNodes together:

var distortion = context.createWaveShaper();  
distortion.curve = makeDistortionCurve(100);  
distortion.oversample = '4x';

var lowpass = context.createBiquadFilter();  
lowpass.type = "lowpass";  
lowpass.frequency.value = context.sampleRate / 8;

// Tube distortion
distortion.connect(lowpass);  
lowpass.connect(context.destination);  

The only thing left to do is to use the distortion node as the sink instead of the usual one:

function strum(fret) {  
    dampening = 0.99;

    // Connect our strings to the amplifier instead
    var dst = distortion;
    if (fret[0] >= 0) pluck(A*Math.pow(2,(fret[0]+Eoffset)/12) ).connect(dst);
    if (fret[1] >= 0) pluck(A*Math.pow(2,(fret[1])/12)         ).connect(dst);
    if (fret[2] >= 0) pluck(A*Math.pow(2,(fret[2]+Doffset)/12) ).connect(dst);
    if (fret[3] >= 0) pluck(A*Math.pow(2,(fret[3]+Goffset)/12) ).connect(dst);
    if (fret[4] >= 0) pluck(A*Math.pow(2,(fret[4]+Boffset)/12) ).connect(dst);
    if (fret[5] >= 0) pluck(A*Math.pow(2,(fret[5]+E2offset)/12)).connect(dst);
}

The Result

Hint: you can change these parameters while the guitar is playing.

[Interactive demo: adjustable Preset, Waveshaper, Overdrive, Post Gain, Lowpass Filter, and Repeat controls.]

Relevant Modules in NUS

Interested in doing sound processing, sound synthesis, and sound audition?

You can learn about the concepts in the following modules:

  • CS2108 Introduction to Media Computing (Sem 1 only)
  • CS4347 Sound and Music Computing (Sem 2 only)
  • NM4224 Sound and Interaction (Sem 2 only)

CS2108 is new, but it should serve as an introductory-level module for digital audio processing. It is also a prerequisite for higher-level media modules such as CS4347, which unfortunately does not appear to be mounted for AY2015/2016.

Be warned: CS4347 isn't exactly a walk in the park. There are assignments to complete almost every week. However, the contact hours required are very few. Also, the assignments are written in Python, which is very easy to use.

NM4224 is a little more focused on the creative aspect of sound production, but I've heard they might be using WebAudio in AY2015/2016, so you might want to keep an eye out for that too.

There are some assignments in CS1101S where you can do sound programming (since AY2014/2015), but they are designed only to teach basic programming methodology. Nevertheless, it might be worth your while to read CS1101S if you are an incoming freshman and want a sampler of a wide variety of topics such as this one.


I hope you've enjoyed this demonstration.

If you have any feedback/questions or if you noticed any mistakes in my article, please contact me at fazli[at]sapuan[dot]org.

Comment section is for placebo effect only. Please use email to contact the author.