I’ve been thinking about sound on websites for a while now.
When we talk about using sound on websites, most of us grimace and think back to the old days, when blaring background music started playing as soon as the website loaded.
Today, it doesn’t have to be that way. We can get clever with sound. We have the Web Audio API now, and it gives us a great deal of control over how we design sound to be used within our web applications.
In this article, we’ll experiment with just one simple example: a form.
What if, when you were filling out a form, it gave you auditory feedback as well as visual feedback? I can see your grimacing faces! But give me a moment.
We already have a lot of auditory feedback within the digital products we use. The keyboard on a phone produces a tapping sound. Even if you have “message received” sounds turned off, you’re more than likely able to hear your phone vibrate. My MacBook makes a sound when I restart it and so do games consoles. Auditory feedback is everywhere and pretty well integrated, to the point that we don’t really think about it. When was the last time you grimaced at the microwave when it pinged? I bet you’re glad you didn’t have to watch it to know when it was done.
As I was writing this article, my computer pinged: one of my open tabs sent me a useful notification. My point being: sound can be helpful. While we may not all need to know with our ears whether we’ve filled out a form incorrectly, there are likely plenty of people out there who would find it beneficial.
So I’m going to try it!
Why now? We have the capabilities at our fingertips. I already mentioned the Web Audio API; we can use it to create, load, and play sounds. Combine that with HTML’s form validation capabilities and we should be all set to go.
Here’s a simple sign up form.
With everything we learned from Chris Ferdinandi’s guide to form validation, here’s a version of that form with validation:
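The full wiring is in Chris’s guide, but the core of that approach leans on the browser’s built-in Constraint Validation API. Here’s a rough sketch of the idea (the messages and the blur-based wiring are my own placeholder choices, not lifted from his demo):

```javascript
// A pure helper: turn a ValidityState-like object into a message.
// (The message wording here is made up for this sketch.)
function errorMessageFor(validity) {
  if (validity.valueMissing) return "Please fill out this field.";
  if (validity.typeMismatch) return "Please use the correct input type.";
  if (validity.tooShort)     return "Please lengthen this text.";
  return ""; // empty string means the field is valid
}

// Browser-only wiring, guarded so the file also loads outside a browser.
if (typeof document !== "undefined") {
  const form = document.querySelector("form");
  form.setAttribute("novalidate", true); // we handle errors ourselves

  form.addEventListener("blur", (event) => {
    const field = event.target;
    if (!field.validity) return;
    // Show or clear the message however your markup expects.
    field.setCustomValidity(errorMessageFor(field.validity));
  }, true); // useCapture, because blur doesn't bubble
}
```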
We don’t want awful, obtrusive sounds, but we do want those sounds to represent success and failure. One simple way to do this is with higher, brighter sounds that rise in pitch for success, and lower, more distorted sounds that fall in pitch for failure. That still leaves us very broad options to choose from, but it’s a common sound design pattern.
With the Web Audio API, we can create sounds right in the browser. Here are examples of little functions that play positive and negative sounds:
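Something along these lines does the trick. To be clear, the specific frequencies, waveforms, and timings below are my own taste, not any kind of standard:

```javascript
// Rising, bright tone for success; falling, harsher tone for failure.
// The frequency and waveform choices are arbitrary-but-plausible.
function rampFor(success) {
  return success
    ? { start: 587.33, end: 880.0, type: "sine" }      // D5 up to A5
    : { start: 220.0,  end: 110.0, type: "sawtooth" }; // A3 down to A2
}

function playFeedback(success) {
  const { start, end, type } = rampFor(success);
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();

  osc.type = type;
  osc.frequency.setValueAtTime(start, ctx.currentTime);
  osc.frequency.exponentialRampToValueAtTime(end, ctx.currentTime + 0.2);

  // Fade out quickly so the tone doesn't click when it stops.
  gain.gain.setValueAtTime(0.3, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.3);

  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.3);
}
```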
Those are examples of creating sound with the oscillator, which is kinda cool because it doesn’t require any web requests. You’re literally coding the sounds. It’s a bit like the SVG of the sound world. It can be fun, but it can be a lot of work and a lot of code.
While I was playing around with this idea, Facebook released their SoundKit, which is:
To help designers explore how sound can impact their designs, Facebook Design created a collection of interaction sounds for prototypes.
Here’s an example of selecting a few sounds from that and playing them:
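In the simplest case, that’s just pointing an HTMLAudioElement at the files. The file paths below are placeholders for whichever SoundKit sounds you end up picking:

```javascript
// Map outcomes to sound files. These paths are placeholders; swap in
// whichever SoundKit files you've downloaded.
const SOUNDS = {
  success: "/sounds/notification-up.mp3",
  fail: "/sounds/notification-down.mp3",
};

function soundFor(outcome) {
  return SOUNDS[outcome] || null;
}

function playSound(outcome) {
  const src = soundFor(outcome);
  if (!src) return;
  // The simplest possible approach: a fresh HTMLAudioElement per play.
  new Audio(src).play();
}
```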
Another way would be to fetch the sound file and use an AudioBufferSourceNode. Since we’re using small files, there isn’t much overhead here, but the demo above fetches the file over the network every time it is played. If we put the sound in a buffer, we wouldn’t have to do that.
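Here’s a sketch of that buffered approach. The little caching helper is my own construction; the browser parts are the standard fetch / decodeAudioData / AudioBufferSourceNode dance:

```javascript
// Cache decoded AudioBuffers by URL so each file is fetched and decoded
// only once. The loader is injectable, which also keeps this testable.
function makeBufferCache(load) {
  const cache = new Map();
  return function get(url) {
    if (!cache.has(url)) cache.set(url, load(url));
    return cache.get(url); // a Promise resolving to the decoded buffer
  };
}

// Browser wiring (sketch): fetch + decodeAudioData as the loader.
function makePlayer(ctx) {
  const getBuffer = makeBufferCache((url) =>
    fetch(url)
      .then((res) => res.arrayBuffer())
      .then((data) => ctx.decodeAudioData(data))
  );
  return async function play(url) {
    const source = ctx.createBufferSource(); // sources are one-shot
    source.buffer = await getBuffer(url);    // cached after first play
    source.connect(ctx.destination);
    source.start();
  };
}
```

Note that an AudioBufferSourceNode can only be started once, so we create a fresh source per play and reuse only the decoded buffer.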
This experiment of adding sounds to a form brings up a lot of questions around the UX of using sound within an interface.
So far, we have two sounds, a positive/success sound and a negative/fail sound. It makes sense that we’d play these sounds to alert the user of these scenarios. But when exactly?
Here’s some food for thought:
- Do we play sound for everyone, or is it an opt-in scenario? Opt-out? Are there APIs or media queries we can use to inform the default?
- Do we play success and fail sounds upon form submission or is it at the individual input level? Or maybe even groups/fieldsets/pages?
- If we’re playing sounds for each input, when do we do that? On blur? As the user types?
- Do we play sounds on every blur? Is there different logic for success and fail sounds, like only one fail sound until it’s fixed?
There aren’t any firmly established best practices for this stuff. The best we can do is make tasteful choices and do user research. Which is to say, the examples in this post are ideas, not gospel.
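For instance, the “only one fail sound until it’s fixed” idea can be sketched as a little bit of per-field state. This is my own logic, one idea among many:

```javascript
// Track each field's last known validity. Play the fail sound only on
// the transition into invalid; play the success sound only when an
// invalid field becomes valid again. Everything else stays silent.
function makeSoundDecider() {
  const lastValid = new Map();
  return function decide(fieldId, isValid) {
    const was = lastValid.get(fieldId);
    lastValid.set(fieldId, isValid);
    if (!isValid && was !== false) return "fail";    // newly broken
    if (isValid && was === false)  return "success"; // newly fixed
    return null; // no meaningful change, no sound
  };
}
```

On every blur you’d call something like `decide(field.id, field.checkValidity())` and play whichever sound it returns, if any.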
And here’s a video, with sound, of it working:
Greg Genovese has an article all about form validation and screen readers. “Readers” being relevant here, as that’s all about audio! There is a lot to be done with ARIA roles and moving focus and such, so that errors are announced and it’s obvious how to fix them.
The Web Audio API could play a role here as well, or more likely, the Web Speech API. Audio feedback for form validation need not be limited to screen reader software. It certainly would be interesting to experiment with reading out actual error messages, perhaps in conjunction with other sounds like we’ve experimented with here.
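A minimal sketch of that with the Web Speech API’s speechSynthesis, guarded so it degrades harmlessly where the API isn’t available:

```javascript
// Speak a validation message out loud. Returns true if speech was
// actually queued, false if the API is unavailable or there's nothing
// to say. The rate tweak is purely a taste choice.
function speakError(message) {
  if (typeof speechSynthesis === "undefined" || !message) return false;
  const utterance = new SpeechSynthesisUtterance(message);
  utterance.rate = 1.1; // slightly brisk
  speechSynthesis.speak(utterance);
  return true;
}
```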
All of this is what I call Sound Design in Web Design. It’s not merely playing music and sounds; it’s giving the soundscape real thought, and undertaking planning and design like you would with any other aspect of what you design and build.
There is loads more to be said on this topic and absolutely more ways in which you can use sound in your designs. Let’s talk about it!