You Can’t Fix Unethical Design by Yourself
Individual action isn’t enough to spur the paradigm shift we need
Nearly every tech conference right now has at least one, if not many, sessions about ethics: Ethics in artificial intelligence, introductions to data ethics, why letting the internet go to sleep is the ethical thing to do, or just plain integrating the basics of ethics into your design. We as a community are doing a great job raising questions about the implications of technology and spreading awareness of the potential for harm.
I should know — I’m one of the speakers doing it. My talk, “Designing Against Domestic Violence,” has been accepted to more conferences than I ever dreamed. I’ve been happily surprised to hear from conferences specifically themed around development languages that I never once mention. Conference organizers want talks like this. Overall, I’m thrilled with the interest in how our digital products can facilitate domestic abuse and how we can design against it.
Which is why I want you to trust me when I say that this kind of ethics education is not enough. Don’t get me wrong — it absolutely matters. I wouldn’t be giving up all of my free time, and I wouldn’t have quit my lucrative teaching side hustle, if I didn’t think it was worth it. People seek me out during conferences to ask for help with a specific feature that my talk prompted them to realize could be used for interpersonal abuse, and others have said it’s changed how they do their design work.
That said, this type of ethics education won’t fully solve the problem. It will make meaningful impacts on individuals, yes — but in an environment with enormous profits to be gained and few consequences for bad behavior, where government oversight is laughable, where a CEO like Facebook’s Mark Zuckerberg has shown over and over again that he can issue an apology and continue doing the exact same terrible things as before without losing power or money, how much can individual designers and developers and product managers really do?
The recent uproar about Superhuman is a perfect example, and I was happy to see Charlie Warzel, who writes my favorite email newsletter, The New York Times’ The Privacy Project, lay it out so concisely:
Call it the Five Stages of Privacy Erosion.
Tech Company builds popular product. Product is exposed in the press for doing something shady behind the scenes. Tech Company apologizes/clarifies/signals a fix. Brief phase of collective rejoicing and moving on. It’s revealed (usually by the same people) that Product was never really fixed.
If you’re unfamiliar with the Superhuman scandal, start with the original blog post by Mike Davidson about its problematic and dangerous methods.
Warzel did a breakdown of Superhuman’s CEO’s apology, calling attention to one line in particular:
“If one of us creates something new, and that innovation becomes popular, then market dynamics will pull us all in that direction,” [Superhuman CEO Rahul Vohra] wrote. It’s worth noting because it’s a line I’ve heard frequently from ad tech executives and tech companies in my reporting for The Privacy Project — this couldn’t be wrong because it’s the industry standard. But, as Davidson rightly notes, “just because technology is being used unethically by others does not mean you should use it unethically yourself.”
The desire of ethical tech workers to do right will always come up against this mindset. “It’s already happening, so it can’t be that bad, right? So then we can do it too, and it’s okay.” It’s the tech ethics equivalent of bystander syndrome, where a person faints on a crowded train surrounded by people who don’t intervene to break the fall because, well, no one else is doing it. This is the current paradigm: Ethics are murky, there are no real rules, and other companies who do problematic things more or less get away with it — why should we be any different?
It’s a terrible paradigm for anyone with morals to work within, and we desperately need to shift it to be truly ethical. We need an overhaul of tech, from education to the workplace. We need a set of regulations around digital security and safety, and it needs to be a legal policy with teeth. Tech companies that put their users in danger, or allow their products to be used for stalking, monitoring, and interpersonal harm, need to face massive fines, and the leadership responsible needs to face jail time.
I say this as someone working day in and day out to improve the ethics and education side of tech, someone who celebrates the marginal progress of individuals getting on board, but who also follows the endless cycle of tech companies creating products that cause harm — real harm — while facing few to no consequences. The ethics and education approach relies on individuals working during their off hours researching and writing and delivering talks, all to move the needle on a few dozen tech workers who have opted into hearing the talk in the first place. This approach is slow and piecemeal, and it will not bring the change we need quickly enough to prevent people from being psychologically tortured on social media sites, prevent kids from being bullied online until they commit suicide, or prevent vengeful abusers from tracking down their exes to murder them.
I take a lot of inspiration from the tenets of the Green New Deal, which says that our slow, broken, ineffective, and half-hearted approach to solving the climate emergency isn’t enough. It demands a paradigm shift at every level of society so children and young adults can have a habitable Earth. It also emphasizes the need for structural, systemic change beyond getting individuals to do things like recycle and use less electricity. Just as this individual action will not solve the climate emergency, individual action alone will not fix the toxic, dangerous tech industry. We need to shift the responsibility away from individual employees and onto the people who can actually make the big, necessary changes happen: those running the largest and most powerful companies.
We need every coding, design, data science, and product management boot camp, course, and online self-study curriculum to teach about the various ways that digital products are misused for both big, anonymous abuses like hacked personal information and for abuse on a personal, intimate level.
We need a standardized set of principles and regulations that every employee working on a digital product follows, just as carpenters and other trades have established standards. We need those standards to be protected by a legal policy that takes the decision to do the right thing (or not) away from individuals. We need unions for tech workers so that we can harness our individual power into something bigger that can make more impact. Instead of anxiously working to convince a client or stakeholder to spend more money on doing the right thing, that client or stakeholder should understand that failing to do the right thing comes with real consequences.
In a 2017 article in The New York Times, “Data Insecurity Is the New Normal,” Steven Weber and Betsy Cooper describe a shift they first identified in their 2015 research: from “the internet is basically safe unless I do something stupid” to “the internet is fundamentally insecure, a dangerous neighborhood in which my safety is always at risk.” They write that this shift has now happened, and that one of the reasons is that “consumer demand for digital devices and services keeps pushing companies to the limits of what is technically possible, and then pressing them to go even a little bit further, where security often becomes nice to have but not a necessity.”
This is the reality we live in: Security is indeed “nice to have but not a necessity.” When companies can get away with a mere slap on the wrist for massive security flaws, where is the motivation to change their behavior? Until we have a paradigm shift in how we think about privacy, security, and personal safety, this will continue to be a problem. As the authors say: “You can’t fix a broken foundation by simply building more stories atop the house that rests on it.” Putting the onus on rank-and-file tech workers to fight for fixing these problems as individuals, putting their jobs on the line, does have some impact, but is ultimately putting another story on top of a building sitting on a crumbling foundation.
At the moment, the fight for ethical tech is being fought at a more or less individual level, and while it certainly yields some results, asking every designer and developer to put their livelihood on the line to do what is right is unfair and ineffective. I’ve personally fought for my clients to do the right thing, to be safe and inclusive, and sometimes I’ve won, but sometimes I’ve lost. And when I’ve realized I’ve failed to convince someone to add a third option when asking for a user’s gender, or to not attempt to collect and store information about users’ status as trans, I’ve been faced with a decision: accept the loss and go back to work the next day to keep advocating for the right thing, or push my client further and risk my team getting fired by the client, and risk me getting fired from my company. What’s an ethical designer to do? My employer isn’t Google or Facebook; it’s overall a great, moral company where I want to keep working.
This is a thought experiment for some and a real decision for others, but it shouldn’t be. The various people who design and build cars, buildings, roads, and parks don’t need to worry about convincing their boss to make their product safe, because there are non-negotiable laws and regulations they have to follow to ensure safety. And while we do see these laws get violated, overall they work. No tech employee should have to go through the ethical dilemmas that we regularly go through right now. If we had an established set of standards that we as practitioners of this discipline had to follow, and if those standards came from a legal policy with teeth behind it, the burden would be lifted from us as individuals.
Imagine no longer appealing to your client with ethics and PR fallout, but instead meeting their objections or demands with, “Sorry, but that’s what the Digital Safety and Inclusion Guidelines say. It’s not my decision. I can’t lose my job, and I’m sure you don’t want the lawsuit that we know will inevitably follow if we push this feature when we know someone could use it for stalking.”
To make this happen, we need technologists to become involved with policy, and we need policymakers to work directly with technologists. We need to harness our collective power into unions and a set of standards to which we’re all held. We need to continue to fight our individual battles while simultaneously coming together as teams, workplaces, and an industry to say that we’ve had enough; we demand tech become an industry where ethical decisions are something we don’t just talk about, but something we do.