Reactive Audio WebVR

Alex Kempton

Virtual reality has become a thing again! All of the usual suspects are involved: HTC, Microsoft, Samsung, and Facebook, among others, are all peddling their respective devices. These predictable players shouldn’t be having all the fun, though!

You make websites. You know a bit of JavaScript. You have a mobile device. You can have a slice of this virtual pie too! WebVR is here, and it’s not that difficult to learn. If you already know the basics of three.js, you might be surprised at how simple it is to get it going. If you haven’t ever used three.js, this will be a fun way to learn it.

I’ve been making websites for quite a while, but only in the last couple of years have I explored the use of front-end technologies for more than just websites. Having spent some time using tools such as canvas and three.js, my mind has been opened to the wonderful potential this side of the web can offer us as developers (and artists!).

Polyop – Ceremony. Music video created with three.js and WebVR controls

I’ve taken the path of making trippy visuals with JavaScript and am now one-third of the audio-visual techno act Polyop because of it. As part of a vinyl release, we’ve created a 360-degree music video built with three.js and WebVR controls. I thought I’d share with you the basic concepts I picked up while developing it.

But I don’t have those fancy goggles

There’s no denying that not having the kit seems like a barrier to entry. However, you don’t need any sort of extra hardware for most of this tutorial so you can still have fun moving your phone around exploring the 3D world you’ll create.

To play with the VR portion of this tutorial, you’ll want some sort of VR viewer. The cheapest way to do this is to buy a headset that turns your mobile phone into a VR headset: you simply slot your phone in and away you go. These headsets range from £3 to £50, so have a look around to see what best suits you and your budget. “Google Cardboard” is the term you’ll hear for these types of devices.

What we’ll be making

Here’s a demo. All the source code for the steps we’ll be taking is available on GitHub too.

If you’re viewing on a mobile or tablet, you can look around by moving the device. If you’re on a laptop, you have to click and drag. If you have a VR Viewer for your phone, there’s an option to go into actual VR mode by clicking on the “start VR” button.

We’ll tackle it in three parts:

  1. Make the three.js scene (+ demo)
  2. Add in VR Controls (device motion) (+ demo)
  3. Apply the VR Effect (stereoscopic picture) (+ demo)

Making the scene

Those who have some experience with three.js may want to skip this part and head straight for the VR stuff.

Three.js has become the web dev’s favorite library for creating 3D scenes. Don’t let that extra dimension scare you; it’s not so difficult to get going! Before we even think about VR, we’re going to make a simple 3D world that has a bunch of cubes, slowly spinning.

If you’re new to three.js, I recommend taking a look at the “creating a scene” tutorial included in the documentation. It goes into a little more detail than I will, and you’ll have a spinning cube up and running in no time. Otherwise, feel free to jump straight in here; we’ll still be going quite slowly.


Firstly we need to set up a document with the three.js library included. You can install with Bower, npm, or keep it simple and get the file from a CDN.

Please note that the three.js API changes from time to time. This tutorial was created with r82; while it’s always good to use the newest version of any library, for our purposes it may make sense to use the same version used in the examples.

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>WebVR Tutorial</title>
    <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0, shrink-to-fit=no">
    <style>
      body {
        margin: 0;
      }
    </style>
  </head>
  <body>
    <script src="lib/three.js"></script>
    <script>
      // All scripts will go here
    </script>
  </body>
</html>

Now we need to set up the scene, the camera, and the renderer. The scene acts as a container for all objects to go inside. The camera is one of those objects and gives us a point of view from inside the scene. The renderer takes the view from the camera and paints it onto a canvas element.

// Create the scene and camera
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, 10000 );

// Create the renderer
var renderer = new THREE.WebGLRenderer();

// Set the size of the renderer to take up the entire window
renderer.setSize( window.innerWidth, window.innerHeight );

// Append the renderer canvas element to the body
document.body.appendChild( renderer.domElement );

We’ll also need to tell the renderer to render the scene:

// Render the scene
renderer.render( scene, camera );

From now on, you should make sure this rendering happens last in your code. Later we’ll be firing it every frame inside of an animate() function.

At this point, your scene should be rendering with a canvas element on the page, but all you’ll see is black.

Let’s add a cube to the scene

A cube comprises a geometry and a material, held together in a mesh:

// Create cube
var material = new THREE.MeshNormalMaterial();
var geometry = new THREE.BoxGeometry( 50, 50, 50 );
var mesh = new THREE.Mesh( geometry, material );

// Add cube to scene
scene.add( mesh );

Now you should see a cube being rendered, yay!

Let’s make lots of cubes by wrapping the code in a for loop:

var cubes = [];

for (var i = 0; i < 100; i++) {

  var material = new THREE.MeshNormalMaterial();
  var geometry = new THREE.BoxGeometry( 50, 50, 50 );
  var mesh = new THREE.Mesh( geometry, material );

  // Give each cube a random position
  mesh.position.x = (Math.random() * 1000) - 500;
  mesh.position.y = (Math.random() * 1000) - 500;
  mesh.position.z = (Math.random() * 1000) - 500;

  // Store each mesh in array
  cubes.push( mesh );

  // Add cube to scene
  scene.add( mesh );
}

You’ll notice that I’ve also given each cube a random position by changing its position property. X, Y, and Z refer to positions along each axis. Our camera is at position (0, 0, 0), the center of the scene. By giving each cube a random position along each axis (between -500 and 500), the cubes will surround the camera in all directions.
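As an aside, the (Math.random() * 1000) - 500 pattern generalizes nicely into a small helper, should you want different bounds later. (randomInRange is a hypothetical name, not part of three.js.)

```javascript
// Hypothetical helper: a random number in the half-open range [min, max)
function randomInRange(min, max) {
  return min + Math.random() * (max - min);
}

// Equivalent to (Math.random() * 1000) - 500:
// mesh.position.x = randomInRange(-500, 500);
```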

I’ve also stored each cube’s mesh in an array, which will allow us to animate them. We need to create an animate() function that will fire every frame:

function animate() {

  requestAnimationFrame( animate );

  // Every frame, rotate the cubes a little bit
  for (var i = 0; i < cubes.length; i++) {
    cubes[i].rotation.x += 0.01;
    cubes[i].rotation.y += 0.02;
  }

  // Render the scene
  renderer.render( scene, camera );
}

The animate() function iterates through the cubes array and updates the rotation property of each mesh. It will constantly loop every frame because we’re calling it recursively using requestAnimationFrame. You’ll also notice I’ve moved renderer.render() inside this function, so that the scene is being rendered every frame too.

Make sure you call animate() somewhere in the script to start the animation loop.

That’s our scene done! If you’re struggling, have a look at the source code for this step; I’ve tried my best to include descriptive comments. You’ll notice I’ve rearranged the code slightly from the snippets in this article and used clearer variable names.

Time to get virtual

Before we get started, it’s good to know what we’re actually playing with! The WebVR website sums it up very well:

WebVR is an experimental JavaScript API that provides access to Virtual Reality devices, such as the Oculus Rift, HTC Vive, Samsung Gear VR, or Google Cardboard, in your browser.

At the moment the API only works in special browser builds, which may be fun to play with, but are lacking an audience. Luckily for us, however, the WebVR Polyfill swoops in to save the day. It makes your VR creations available on mobile devices via Google Cardboard (or similar viewers), while also allowing users to view the same content without a VR viewer. You should know that the polyfill doesn’t support any other VR devices, such as the Oculus Rift or HTC Vive.

To use the polyfill, include the script in your page, before all other scripts. The next two parts to this tutorial won’t work if you don’t have it included.


A critical component of any virtual reality experience is capturing the motion of the user and using that information to update the orientation of the camera in the virtual scene. We can achieve this in three.js with the VRControls constructor. VRControls isn’t bundled with the core library; it’s an extra you can find in the repository. You should include it in a separate script tag after the three.js library.
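Assuming local copies of the scripts (the paths here are illustrative), the include order would look something like this:

```html
<!-- The WebVR Polyfill must come before all other scripts -->
<script src="lib/webvr-polyfill.js"></script>
<script src="lib/three.js"></script>
<!-- VRControls goes in a separate script tag after the three.js library -->
<script src="lib/VRControls.js"></script>
```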

You’ll be surprised at how simple it is to implement. Firstly, create the controls, passing in the camera:

var controls = new THREE.VRControls( camera );

This now means that the controls will be affecting the camera, which is essentially just an object in the scene like any other mesh. You could use these controls to rotate a cube rather than the camera if you wanted to.

In your animate() function you’ll also need to tell the controls to update every frame:

controls.update();
And that’s it! If you look at what you’ve made using a mobile device, you should be able to “look around” the scene by moving the device. On a laptop without these capabilities, you’ll have to click and drag with the mouse; this click-and-drag fallback is an extra bonus we get with the WebVR Polyfill.

Take a look at the source code for this step if you’re stuck.

VR Effect

At this point you may already be satisfied with what you’ve created. Looking around using the motion of your device is super fun and opens up all sorts of possibilities for making something cool. When making the interactive video for Polyop, I felt this behavior was immersive enough and chose not to introduce the stereoscopic feature.

However, I promised actual VR, and that’s what you’re here for! The final piece of the puzzle is to get three.js to render two separate images, one for each eye. We’ll do this using the VREffect constructor. Just like you did with VRControls, include the script and away we go. First we need to define the effect:

var effect = new THREE.VREffect(renderer);
effect.setSize(window.innerWidth, window.innerHeight);

We define a new VREffect, passing in the renderer. From now on, we don’t need to deal with the renderer directly; VREffect will handle it. That’s why we now set the size of the effect instead of the renderer. Importantly, we need to swap out the way we render in the animate() function:

effect.render( scene, camera );

We’re now telling the effect to render, not the renderer. At the moment, nothing will have changed: the VREffect simply takes the renderer you give it and renders as normal when you tell it to. To get the stereoscopic effect we’re looking for, we need to do a little more.

Firstly, we need to search for any connected VR devices. Because we’re using the WebVR Polyfill, all we get is one “device” connected, which will be Google Cardboard. Here’s how we get it:

var vrDisplay;

navigator.getVRDisplays().then(function(displays) {
  if (displays.length > 0) {
    vrDisplay = displays[0];
  }
});
navigator.getVRDisplays returns a promise which resolves once the browser has finished looking for devices. In this instance, we take the first and only item in the displays array and define it globally as vrDisplay so we can use it elsewhere. If we weren’t using the polyfill, there might be more than one device in the array, and you’d probably want to add some user functionality to choose between them. Luckily, today we don’t have to accommodate little Johnny and his fifty different VR devices.

Now we have our single device defined as vrDisplay, we need to fire it up! The method to do this is requestPresent, and we’ll give it the canvas element we’re rendering to.

document.querySelector('#startVR').addEventListener('click', function() {
  vrDisplay.requestPresent([{ source: renderer.domElement }]);
});
To avoid abuse of the WebVR API, calls to requestPresent must happen inside an event listener triggered by a user gesture. This one fires on the click of a button element with an ID of “startVR”.

The last thing we need to do is make sure everything renders properly after a resize of the renderer. This happens not just when the screen size changes, but also when we switch in and out of VR mode.

// Resize the renderer canvas
function onResize() {
  effect.setSize(window.innerWidth, window.innerHeight);
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
}

// Resize the renderer canvas when going in or out of VR mode
window.addEventListener('vrdisplaypresentchange', onResize);

// Resize the renderer canvas if the browser window size changes
window.addEventListener('resize', onResize);

The onResize() function resets the size of the effect (and therefore the renderer) while also updating some properties of the camera.

Once again, if you’re feeling a bit muddled, take a look at the source code of this final step.

Summing up

Congratulations! You’ve officially entered cyberspace. What to do with your new powers?

Why not build on the work we’ve already done today? Perhaps try transforming the scene into something a little more aesthetically pleasing with lighting and different geometries and materials? Maybe you could even try making the objects bounce to music using the Web Audio API? To give you an idea, here’s one I made earlier.
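If the audio-reactive route appeals, here’s a minimal sketch to get you started. The audioScale() helper is hypothetical (not from the tutorial’s source); the browser-only AnalyserNode wiring is shown in comments, since it assumes an audio element on the page:

```javascript
// Hypothetical helper: map raw frequency data (0–255 per bin, the format
// returned by AnalyserNode.getByteFrequencyData) to a scale factor.
function audioScale(frequencyData) {
  var sum = 0;
  for (var i = 0; i < frequencyData.length; i++) {
    sum += frequencyData[i];
  }
  var average = sum / frequencyData.length;
  // 1.0 at silence, up to 2.0 at full volume
  return 1 + average / 255;
}

// In the browser, the wiring might look like this (sketch only):
//
// var audioCtx = new AudioContext();
// var analyser = audioCtx.createAnalyser();
// audioCtx.createMediaElementSource(audioElement).connect(analyser);
// analyser.connect(audioCtx.destination);
// var frequencyData = new Uint8Array(analyser.frequencyBinCount);
//
// ...then, inside animate():
//
// analyser.getByteFrequencyData(frequencyData);
// var s = audioScale(frequencyData);
// for (var i = 0; i < cubes.length; i++) {
//   cubes[i].scale.set(s, s, s);
// }
```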