Speaking for the Dead with Twilio, Netlify, Pusher, and Web Audio for Trivium
What Do The Dead Men Say?

In 2012, I built a little app called “Call Drops” at a hack day in Hollywood back when I was still working for SoundCloud. The hack asked that users dial a phone number and when prompted, leave a recorded message of a “rain drop sound.” Then, on an awaiting web app, a rain drop would fall from the top of the website. When the rain drop finally reached the bottom of the site it would splash into a ripple, emitting the sound which the user left. It was a hit. Attendees loved the low barrier of entry and the magic of seeing their contribution visualized in this way in nearly real time. 🌧
Flash forward to this week. Trivium is dropping the title track of their upcoming record What The Dead Men Say and have collaborated with me to create a teaser campaign. Inspired by the original “Call Drops” concept and my recent project for Slipknot, we have developed a concept which allows fans to call a number (or visit a web app) to leave a recorded message and then hear that message played back (with a spooky filter) on an awaiting visual, streaming live from the Trivium YouTube.
It was a lot of fun to revisit this classic concept with a mess of new technologies. Read on to find out how it was done.
Dead Voicemail with Twilio
I’ve been a huge advocate of Twilio for a long time now and I look forward to any opportunity I have to integrate the service into an artist campaign. For a few years now, Twilio has offered a product called Functions which allows you to write simple serverless functions right from the comfort of the Twilio dashboard. These functions can then be used to respond to events such as incoming calls or texts.
The Twilio Function for our campaign is wildly simple. All it needs to do is prompt the user to leave a message, record that message for a certain duration, and then send that recording over to another (Netlify powered) serverless function. We can do this by defining a voice response in the form of TwiML. Since Twilio Functions automatically inject Twilio’s Node library, you can start writing responses right away and then provide them to the callback. The key here is making sure the record action points to the next serverless function.
exports.handler = function(context, event, callback) {
  let twiml = new Twilio.twiml.VoiceResponse();
  twiml.say("What do the Dead Men Say?");
  twiml.record({
    action: NETLIFY_FUNCTION_URL,
    method: "GET",
    timeout: 5,
    maxLength: 5
  });
  callback(null, twiml);
};
Channel Recording to Client
As I mentioned, Twilio will send the recording response (along with a URL to the actual recording) over to an awaiting Netlify serverless function. The point of this function is to send that sound URL on to the actual web app. Rather than hurting my brain with sockets and such, I just use Pusher to send data from the server to the client. First, you need to configure Pusher.
const Pusher = require('pusher')

let pusher = new Pusher({
  appId: process.env.PUSHER_APP_ID,
  key: process.env.PUSHER_KEY,
  secret: process.env.PUSHER_SECRET,
  cluster: 'us2',
  encrypted: true
})
Then, since I’ll be writing an asynchronous serverless function, let’s define a little helper which wraps the Pusher trigger function in a Promise so we know when it completes. The trigger function accepts a channel name, an event name, and the data package (message) itself.
const triggerEvent = (message) => {
  return new Promise((resolve, reject) => {
    pusher.trigger(
      'trivium',
      'new-recording',
      message,
      (err, req, res) => {
        if (err) {
          reject(err)
        } else {
          resolve()
        }
      }
    )
  })
}
Finally, we can write our serverless function. First, we create a message object with a single url property, set to the recording URL we can pull from the parameters of the incoming Twilio event. (Pro tip: Append .mp3 to receive an MP3 instead of a WAV file from Twilio.) Then, we call our triggerEvent function. Once this completes, we’ll return a body of TwiML XML so Twilio knows to hang up the call. Cool, right?
exports.handler = async (event, context) => {
  let message = {
    "url": `${event.queryStringParameters['RecordingUrl']}.mp3`
  }
  try {
    await triggerEvent(message)
    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/xml'
      },
      body: '<?xml version="1.0" encoding="UTF-8"?><Response><Hangup/></Response>'
    }
  } catch (err) {
    return {
      statusCode: 500,
      body: err.toString()
    }
  }
}
Once again, our recording is on the move. This time it is headed to the client via Pusher so let’s head there next to receive the recording and finally play it back in the browser.
Transform and Playback Recording
We’re also going to initialize a Pusher client on the web app, and this one requires a slightly different configuration. We’ll then subscribe to the trivium channel we’re triggering events to in the Netlify function. Finally, we’ll bind a listener for the channel’s new-recording event. Since thousands of fans are going to visit this experience at once, let’s avoid a thousand voice recordings all going off simultaneously by using a queue.
let pusher = new Pusher(process.env.PUSHER_KEY, {
  cluster: 'us2',
  forceTLS: true
})

let channel = pusher.subscribe('trivium')

channel.bind('new-recording', (data) => {
  recordingQueue.push(data)
})
Our playback queue is powered by Caolan McMahon’s excellent Async utility module, and in particular its queue method. Since I know my recording playback function will be asynchronous, we’ll also wrap it in the asyncify function. One thing you’ll notice is the number 5. This defines the concurrency of the queue and, in our case, allows five recordings to be played at once. The function itself is quite simple: it waits for the recording to play back before it returns a successful callback.
let recordingQueue = queue(asyncify(async (task, callback) => {
  await playRecording(task.url)
  callback()
}), 5);
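If you’re curious what that queue is doing under the hood (or want to skip the dependency), the same idea can be sketched in a few lines of plain JavaScript. This is only an illustration of the concept, not the Async library’s actual implementation:

```javascript
// Minimal concurrency-limited queue: run at most `limit` async
// tasks at once, starting waiting tasks as running ones finish.
function makeQueue(worker, limit) {
  let running = 0
  let waiting = []
  async function runNext() {
    if (running >= limit || waiting.length === 0) return
    running++
    let task = waiting.shift()
    try {
      await worker(task)
    } finally {
      running--
      runNext() // a slot freed up, start the next waiting task
    }
  }
  return {
    push(task) {
      waiting.push(task)
      runNext()
    }
  }
}
```

Pushing four tasks onto a queue with a limit of two would start the first pair immediately and hold the rest until a slot opens, which is exactly the behavior we want for playback.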
Now, we aren’t just going to play back the recording, are we? First we must transform the recording to sound spooky. We found the effect we were looking for on VoiceChanger.io. It was a “Reverse Reverb.” So, I borrowed some of the source code the creator made available for our solution. This is all done with Web Audio, so let’s initialize a new audio context first.
let AudioContext = window.AudioContext || window.webkitAudioContext
let context = new AudioContext()
A “Reverse Reverb” reverses a sound, gives it reverb, and then reverses it back. So, we’ll need a helper method which reverses our recording.
function reverseRecording(audioBuffer) {
  let reversedAudioBuffer = context.createBuffer(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate
  )
  for (let i = 0; i < audioBuffer.numberOfChannels; i++) {
    reversedAudioBuffer.copyToChannel(audioBuffer.getChannelData(i), i)
  }
  for (let i = 0; i < reversedAudioBuffer.numberOfChannels; i++) {
    reversedAudioBuffer.getChannelData(i).reverse()
  }
  return reversedAudioBuffer
}
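To see what those two loops accomplish, here’s the same copy-then-reverse operation on plain Float32Arrays. This is just an illustration (no AudioContext required), not part of the campaign code:

```javascript
// Illustration: the per-channel reverse that reverseRecording
// performs on an AudioBuffer, shown on plain Float32Arrays.
function reverseChannels(channels) {
  // Copy each channel first so the original samples are untouched,
  // then reverse each copy in place.
  return channels.map(samples => Float32Array.from(samples).reverse())
}

let reversed = reverseChannels([Float32Array.of(0.5, -0.25, 0.125)])
// reversed[0] is now [0.125, -0.25, 0.5]
```

Copying before reversing matters because TypedArray.prototype.reverse mutates in place, and we still need the original buffer intact.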
Then, we’ll initialize an offline audio context to add the reverb to our reversed recording. You can add reverb using a ConvolverNode. The convolver requires an impulse buffer. An impulse is an audio file which helps to establish the acoustics of a room, typically with a loud slap at the beginning. More on that here. We’ll also blend in a dry signal through a GainNode and route everything into a DynamicsCompressorNode before the destination. Since we’re using an offline context, we’ll start the source and then await the completion of the startRendering method. Finally, the transformed audio is reversed back.
async function transformRecording(audioBuffer) {
  let context = new OfflineAudioContext(
    audioBuffer.numberOfChannels,
    audioBuffer.length,
    audioBuffer.sampleRate
  )
  let source = context.createBufferSource()
  source.buffer = reverseRecording(audioBuffer)
  let convolver = context.createConvolver()
  convolver.buffer = await context.decodeAudioData(
    await (await fetch(IMPULSE_URL)).arrayBuffer()
  )
  let outCompressor = context.createDynamicsCompressor()
  source.connect(convolver)
  convolver.connect(outCompressor)
  let dryGain = context.createGain()
  dryGain.gain.value = 0.5
  source.connect(dryGain)
  dryGain.connect(outCompressor)
  outCompressor.connect(context.destination)
  source.start(0)
  return reverseRecording(await context.startRendering())
}
Now we can bring it all together. First, fetch the recording, await an array buffer, and then decode the audio data into a buffer. Then, transform the audio. Finally, establish a new buffer source, play it back, and wait for the onended event to be called.
async function playRecording(url) {
  try {
    let recording = await fetch(url)
    let arrayBuffer = await recording.arrayBuffer()
    let audioBuffer = await context.decodeAudioData(arrayBuffer)
    audioBuffer = await transformRecording(audioBuffer)
    let source = context.createBufferSource()
    source.buffer = audioBuffer
    source.connect(context.destination)
    source.start()
    await new Promise(resolve => source.onended = resolve)
  } catch (e) {
    // console.log(e)
  }
}
```
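One caveat worth flagging: browser autoplay policies suspend an AudioContext created before any user gesture, so queued recordings could render silently. A small unlock handler like the following (my own addition, not part of the original campaign code) resumes the context on the first interaction:

```javascript
// Autoplay policies suspend an AudioContext created before a user
// gesture. Resume it on the first click so playback is audible.
function unlockAudio(audioContext, target = document) {
  const resume = () => {
    if (audioContext.state === 'suspended') {
      audioContext.resume()
    }
  }
  // { once: true } removes the listener after it fires.
  target.addEventListener('click', resume, { once: true })
}

// Call once at startup: unlockAudio(context)
```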
Thanks

Thanks to 5b Artist Management, Stephen Reeder, and Trivium for allowing me to help out on this one. Also, thanks to the fans for showing up and participating! Watch the new video for “What The Dead Men Say” now and pre-order the album which is out on April 24. 🤘🏻