Accessibility Events

Mat Marquis - Apr 11, 2019

“There isn’t some way to know when—…?”

There is always a pause here. The client knows what they’re asking, and I know what they’re asking, but putting it into words—saying it out loud—proves unexpectedly difficult.

In the moments before the asking, it was a purely technical question—no different from “can we do this when a user is on their phone?” But there’s always a pause, because this question doesn’t come easy; not like all the other questions about browsers and connection speeds did. A phrase like “in an assisted browsing context” doesn’t spring to mind as readily as “on a phone,” “in Internet Explorer,” or “on a slow connection.” The former, well, that’s something I would say—a phrase squarely in the realm of accessibility consultants. The latter, the client can relate to. They have a phone, they’ve used other browsers, they’ve been stuck with slow internet connections.

“There isn’t some way to know when—… a user is… using something like a screen reader…?”

An easy question that begets a complicated answer is standard fare for almost any exchange with a web developer. This answer has, for a long time, been a refreshing deviation from that norm: “no, we can’t.”

It is, I’ll offer, technically impossible; computers, you see, can’t talk to each other that way. Often, there’s a palpable relief here: “no” to the technical part; “no” to the computers part. That is, of course, all they had meant to ask. I truly believe that.

Even if we could, I’ll explain, we wouldn’t really want to. Forking our codebase that way would put more burden on us maintainers, not less. There’s an easy parallel to the “when they’re on a phone” conversation, here; one we’ve surely had already. We can never know a user’s browsing context for certain, and making assumptions will only get us and our users into trouble. Whenever a feature, component, or new design treatment was added or changed, we’d be left having all the same conversations around how to translate it over to the “accessible” experience. If those features aren’t essential in the first place, well, are they worth having at all? If those features are essential—well, we’ll still need to find a way to make them work in both contexts.

It could seem like an enticing option for our users, at first glance: an enhanced, fully-featured website, on the one hand, a fully accessible alternative experience on the other. That unravels with even the slightest examination, though: if the fully-featured website isn’t accessible, the accessible website won’t be fully featured. By choosing to have the “accessible experience” deviate from the “real website,” we end up drawing a sharper line between those two definitions, and we nudge the “accessible experience” closer to an afterthought—limited and frustratingly out-of-sync with the “real” website, like so many dedicated mobile sites quickly became.

There’s never any disagreement, here. Again: this is all relatable. We’ve all found ourselves inescapably opted into using the “mobile” version of a website at some point. We’ve been here before as users; we’ve made these mistakes before as developers. We know better now.

But this isn’t a strictly technical question. This isn’t as simple as browser features and screen sizes—a question of one privileged browsing context or another. Technical questions come easy. Partway through the asking—in the hesitation, in the pause, in the word stumbled over—what was meant to be a mundane development question became something much more fraught. Because there was a word that fit.

“Is there a way we can know when a user has a disability?”

The easy “no” felt empowering, but it was a cop-out. “It doesn’t matter; it can’t be done” in response to a deeply fraught question was an unexpected balm for both the asker and the answerer. There was, again, that palpable relief—”no” to the technical part; “no” to the computers part. That was, of course, all they had meant to ask.

We no longer have that easy answer. In iOS 12.2 and macOS 10.14.4, a toggle switch has appeared in Apple’s VoiceOver preferences, innocuously labeled “accessibility events.” It was rolled out with no fanfare—apart from a brief mention in Apple’s iPhone User Guide—and we’re still not sure how it’s meant to be used. The most generous interpretation is that the feature was meant to serve as a “UA string”-style identifier for users browsing via VoiceOver.

We do know this much: when this setting is enabled—and it is, by default—your browser will identify you as using VoiceOver to help you browse the web. If you’re using Apple’s VoiceOver, both your phone and your computer will broadcast your assumed disability to the entire internet, unless and until you specifically tell them to stop.

Update, May 2019: They yanked it.

If you’re not furious at this change, you should be—not just for what it means for users, but for what it foists upon you. Apple has burdened you with the knowledge that, now, yes, you can know whether a user has a disability. We can use this information to serve up a limited alternative version of a website, into which we can very easily opt members of a protected class. And once we choose to start listening for “accessibility events,” well, we can capture that information, like anything else broadcast to the web. A user’s disability can and will be reduced to a single data point—a cold, impersonal true, inexorably bound to their name, stored in a database, perhaps destined to be sold, leaked, passed along to insurance providers, reduced to a targeted marketing opportunity. All under the auspices of inclusivity.

At some point, the developers responsible for the “accessibility events” feature were, I’m certain, asked whether such a feature was possible. Their answer was “yes.” I don’t doubt that they meant well. I’m just as certain that, in the moment, it felt like the right answer: a technical solution to a technical problem, and a simple matter of browsing context.

Someday—not far in the future, I trust—I’ll be asked a similar question. It will be asked hesitantly, haltingly. The pauses will ring all too familiar. I will no longer have the easy, familiar comfort of technical impossibility—no easy “no” to insulate me from the uncomfortable conversations I should have been having with clients all along. Now, there’s no technical reason that I can’t know whether a user is using “something like a screen reader.” I—my clients, their databases, their organizations, their parent companies, their partners, their VC funders, their advertisers, and so on unto infinity—can absolutely know when a user is disabled.

But I won’t play a part in helping to propagate the mistake Apple’s developers made. I’ll let my answer hang heavy and uncomfortable in the air: no. Not because we can’t—we can. Not because we shouldn’t, though, no, we still shouldn’t. No—now, I will allow the word to become as coarse as I had always wanted it to be, because I no longer have the cold comfort of “well, technically” to hide behind.

No.