When Technology Fails, Who’s Responsible?

We’re creating new technologies without giving thought to the danger they could cause if they fail

Amber Case
Nov 7, 2019 · 5 min read
Credit: Wikimedia Commons

Electricity is such an integral part of our everyday lives that we hardly even think of it as a technology. There's a very good reason for that: Decades of careful regulation and planning, across many industries and levels of government, have made electricity pervasive, rarely prone to outage, and as safe as humanly possible. But this approach is rarely followed in current technology development, even for products and services where failure can, like electricity, lead to extreme danger and even death.

A few years ago, for instance, I got a sneak preview of an upcoming luxury sedan from a major automotive brand. Sitting in the driver's seat, everything seemed elegant and sleek until I noticed the in-car entertainment system below the dashboard. It was a large flat screen, seemingly designed to resemble an iPad. All the controls were touch-based, so there was no way for a driver to navigate the system, even to perform a simple task like turning down the volume, without taking their eyes off the road. Still worse, the display was bright blue, a color known to cause temporary vision impairment at night.

Physically, this vehicle was built to the world’s highest safety standards. Digitally, it was a turbo-charged safety hazard.

“I can’t believe the company let this thing on the road with this horrible display!” I ranted at the older gentleman who was climbing into the passenger’s seat. “It’s as if this entire system was tested in a lab under ideal conditions, but never once on the road with a single confused driver!”

The man winced, chuckled, and handed me his business card. He was the company’s CEO.

He readily accepted responsibility for the decision to add a touchscreen despite its significant UX drawbacks; it was largely motivated, he explained, by a desire to make his automotive brand appear as forward-thinking as Tesla. Fortunately, his company reverted to analog dashboard controls in subsequent editions of the car. By then, however, hundreds of millions had already been spent on a model that was far less safe to drive than it could have been.

We see less extreme examples of this every day: products and services with failure scenarios that simply haven't been considered in their overall design, even when a breakdown can cause serious distress or worse. Typically, the failure stems from a variable that wasn't seriously considered, or even recognized, early in the planning process; or if it was, management overrode it as a marginal problem. And somehow, we seem to have resigned ourselves to this as an inevitability of high tech's ubiquitous "beta phase."

For instance, consider a popular “smart” pet feeder, designed to feed your animal through a smartphone app. In 2016, the company’s Twitter account was bombarded with panicked users, many of them on vacation, far from home, who suddenly realized their feeder was offline, leaving them unable to feed their cherished pets.

Why? The company had designed the feeder to depend on a third-party server run by Google and hadn't bothered to add a failsafe mode in case that server ever went offline. Somewhere in its planning process, the company simply declined to acknowledge that Google could ever suffer server downtime. It did, however, build the possibility of failure into its terms of service, with a clause stating that the company was not accountable for service outages. Such clauses are common among tech companies, a uniquely legalistic way of evading responsibility.
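A failsafe for cases like this doesn't have to be elaborate. As a rough illustration (the function names and schedule below are invented for the sketch, not the feeder's actual firmware), a device can cache its last-known feeding schedule locally and fall back to it whenever the cloud is unreachable:

```python
# Hypothetical sketch of a local-fallback design; none of these names
# come from the real product.
DEFAULT_SCHEDULE = ["07:00", "19:00"]  # last-known schedule, cached on the device

def fetch_cloud_schedule():
    """Stand-in for the network call; raises when the cloud is unreachable."""
    raise ConnectionError("cloud unreachable")

def current_schedule():
    # Prefer the fresh cloud schedule, but never let an outage stop feeding:
    # fall back to the locally cached schedule instead.
    try:
        return fetch_cloud_schedule()
    except ConnectionError:
        return DEFAULT_SCHEDULE

print(current_schedule())  # during an outage: ['07:00', '19:00']
```

Even a fallback this simple would have turned a nationwide server outage into a non-event for pet owners.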

The assumption that cloud services are 100% reliable persists to this day, especially in the design of Internet of Things devices, which often depend on them. But cloud services are not foolproof: According to Network World, during a recent 17-month period, Amazon Web Services, Google Cloud Platform, and Microsoft Azure experienced 338, 361, and 1,934 hours of downtime, respectively. During each of those hours, tens of millions of consumers may have been using cloud-dependent services that failed, causing anything from mild inconvenience to serious danger.
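Some back-of-envelope arithmetic puts those numbers in perspective. One caveat, and an assumption of this sketch: Network World's totals aggregate incidents across many separate services and regions, so a naive conversion understates any single service's real availability.

```python
# Naive conversion of reported downtime hours into availability percentages.
# Caveat: the reported totals aggregate incidents across many services and
# regions, so this understates the availability of any one service.
HOURS_IN_17_MONTHS = 17 * 30 * 24  # ~12,240 hours, assuming 30-day months

for provider, downtime in [("AWS", 338), ("GCP", 361), ("Azure", 1934)]:
    availability = 100 * (1 - downtime / HOURS_IN_17_MONTHS)
    print(f"{provider}: {availability:.1f}% available")
# AWS: 97.2%, GCP: 97.1%, Azure: 84.2%
```

Even the rosiest of those figures is a long way from the "five nines" (99.999%) that safety-critical infrastructure like the electrical grid is engineered toward.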

As is often the case, this is a new tech problem that was anticipated by earlier tech wisdom that’s been buried beneath our current era of “move fast and break things.” In the mid-1990s, Mark Weiser, principal scientist at Xerox PARC, spelled it out in his seminal essay, “The Technologist’s Responsibilities and Social Change.”

Weiser’s Principles of Inventing Socially Dangerous Technology:

1. Build it as safe as you can, and build into it all the safeguards to personal values that you can imagine.

2. Tell the world at large that you are doing something dangerous.

“Most engineers,” as Weiser explained, “will defend as strongly as possible the value of their work and leave it to others to find fault. But that is not enough if one is doing something that one knows has possibly dangerous consequences.”

It would be wonderful if tech companies would willingly embrace Weiser’s principles, but for the corporate culture of many companies, it may be too late. We are seeing what they’ve wrought: massive companies built on technology in which little thought has been put into fundamental safeguards and little transparency into the dangers the technology poses.

The tech press should not consider itself exempt from these principles, either, though all too often it does. In product reviews, for example, a potential failure point ("this service pauses if wifi connectivity is lost") is usually shrugged away in passing as a mild inconvenience instead of being red-flagged for what it really is: a potential deal-breaker for the product as a whole. Where are the warning labels that read, "these products could result in the death of your pet if there is a wifi outage while you're away"?

A new institution might be needed. In 1894, when electricity was still very much an unpredictable, dangerous technology, a company called Underwriters Laboratories introduced safety and testing standards. They were quickly embraced by the burgeoning industry and are still in use today. Perhaps it's time for Silicon Valley to create an Underwriters Laboratories of its own, placing its many products and services under serious, rigorous scrutiny.


Helping designers thrive.

Amber Case

Written by

Design advocate, speaker and author of Calm Technology + Designing With Sound. Research Fellow at Institute for the Future. Caseorganic.com


