Why Practice Humane Design?
The work I share on this site focuses on implementing humane design — sometimes called benign design — and I mention upgrading humanity in several places. I'll define those terms here:
upgrade humanity: enhance our abilities and quality of life through tech
humane design: design that seeks to upgrade humanity
I adopted these terms from the Center for Humane Technology, which focuses on raising awareness of the ideas and principles behind humane technology, while Meerific focuses on implementing humane design.
Implementing humane design can be challenging for us designers. We need to ensure that both the high-level direction and the low-level details align with humanity's core values. Even a slight deviation from these values can quickly lead us down the wrong path and spawn technology that downgrades humanity.
downgrade humanity: inhibit our abilities or quality of life through tech
This challenge affects all technologists, whether we work at a small startup or a large corporation. Unfortunately, the largest companies tend to create technology which downgrades humanity the most. But we can all upgrade humanity by adopting humane design, as long as we adjust our mindset when building technology.
The rest of this article will discuss our relationship with technology, explore the "slippery slope" of technology development and its consequences, and offer guidelines for implementing humane design.
Throughout history, humans have combined technology with our adaptive minds to make our lives easier. This human-technology relationship grew rapidly once we invented computers. We began to imagine a future where we work hand-in-hand with intelligent machines to solve our biggest problems. We would explore the universe, end human suffering, and live fulfilling lives together.
UPGRADE, one of my favorite movies from 2018
A great example of this appears in the underrated 2018 film Upgrade. The film's main character, Grey, is paralyzed in a tragic shooting. Grey agrees to be implanted with an artificial intelligence called Stem to repair his broken body. The bond he forms with Stem turns Grey into a superhero: he gains a voice in his head feeding him useful information, and, when Stem takes over his body, heightened physical abilities. Stem can only take control, however, if Grey explicitly gives the order. This clear boundary makes the relationship work.
The agreement between Grey and Stem shows the ideal bond between humans and machines: technology should upgrade humanity, heighten our natural abilities, and let us flourish. If Stem had full control, it could turn Grey into a puppet: a downgrade for humanity.
The idea of technology upgrading humanity echoes throughout much of the science fiction I've explored beyond Upgrade. I'm sure similar material inspires many other technologists today. Yet I can't help but think we're forgetting that ideal when I look at the products we've built. We create too much technology that downgrades humanity; to upgrade humanity instead, we need to change the way we think about and design our products.
How can technology downgrade humanity? To find out, let's take a look at one of the most transformative technologies of the last decade: social media.
Social media started as a way to connect with friends, family, and colleagues. It allowed me to work with my classmates and meet interesting people. This value proposition, amplified by the network effect, enabled these social platforms to grow rapidly. Eventually, to sustain that growth, they needed to generate revenue. The source of this revenue would not be their users, but their users' personal data: who they liked and followed, their interests, their intimate details. Advertisers coveted this data because it gave them the power to target their markets with terrifying precision, and they offered the platforms huge sums of cash in exchange for it, a deal the platforms accepted.
Advertisers' demographic targeting only works as long as the data is fresh, and users generate fresh data every time they engage with a platform. In other words, more user engagement leads to more money for social media platforms.
We call this monetizing your attention. This is a tell-tale sign of technology that downgrades humanity.
Engineers building these habit-forming apps use a number of subtle psychological techniques to keep you coming back, hour after hour, day after day. Don't take my word for it, though. An industry veteran explains these tricks in Hooked: How to Build Habit-Forming Products. By now companies have likely created additional mechanisms on their platforms to get you "hooked".
Consequence: Loss of Focus
If several of your apps are competing for your attention at the same time, you'll find yourself switching between them constantly, unable to focus on important tasks. Without focus, your quality of work suffers immensely, and you cannot achieve your full potential in your line of work.
To learn more about the importance of focus, please consider reading Deep Work by Cal Newport.
Consequence: Cognitive Bias Takeover
ALL the cognitive biases.
The platforms' psychological tricks keep us distracted due to evolutionary quirks in our brains. We call them cognitive biases. They can help us make "good enough" decisions with limited information, but they can harm us if technology takes advantage of them. Here are a few examples and consequences:
- We prioritize immediate gains over long-term benefits. Video platforms take advantage of this with "auto-play" features. Since "one more video" is good in the short term, we binge-watch videos instead of moving on with our lives.
- We all want to be a part of a community. If we notice people in an interesting chat room engaging in toxic behavior, we might consider that behavior reasonable in that context. Thus, we adopt these toxic behaviors for social acceptance.
- We avoid material that conflicts with our world views. We tend to avoid challenging our opinions and talking with people who think differently than us. As a result, we could trap ourselves in a social "echo chamber".
To see more, check out the Center for Humane Technology's Ledger of Harms.
Consequence: Declining Health
Heavy use of attention-grabbing apps can damage your health. Again, don't take my word for it:
- Social Media and Mental Health
- Time Well Spent, which compares time spent in an app with how "happy" the app makes its users. Note that the study relies on self-reporting, and that the Moment app is designed to help reduce phone usage, so its users may be more self-aware than average.
- 6 Ways Social Media Affects Our Mental Health
- A Harvard study which shows moderate, mindful social media use can be healthy. But since when is addiction mindful or moderate?
Social media, in theory, is not a bad thing. You can see how social media platforms started out with good intentions (connecting people online) but made decisions which created problems at scale. They monetize our attention. They addict us through psychological mechanisms. They take our data and give us little, or even negative, value in return. They inhibit our ability to get our work done and decrease our quality of life. They downgrade humanity.
The blame for these failings does not fall on any one designer or engineer, but on the companies that followed this path for the sake of profit.
What can we do as technologists to avoid these traps and build tools which upgrade humanity? We will need to adjust our mindset while designing products. Below are some qualities I think we should emphasize.
Prioritize value over engagement
By "value", I mean value for users: enhancements to their abilities or quality of life.
In the beginning of your design process, write out the value you want to provide your users. Ask yourself: "what are our users' intentions when using our product?" Align your product with those intentions.
Your degree of alignment, i.e. your ability to generate value, should be reflected in the metrics you choose to collect. Keep Goodhart's Law in mind: when a measure becomes a target, it ceases to be a good measure.
Augment, don't replace, human capabilities
Humans can solve problems that technology currently cannot, such as forming genuine connections with one another.
Rather than trying to solve these problems for your users with your product, give your users the chance to solve them with your product's help.
Empathize with your users
Internalize the pain your users feel when dealing with the problems you're helping them solve.
Diversify your development team, and your features, in proportion to the population you serve.
Remember, your users are not your products. They are your customers. They can even be your collaborators! If possible, encourage your users to take part in the decision-making process for your products. One example: a public roadmap and "issues tracker" which users can contribute to.
Enable your users to make informed, wise choices
Consider how you frame the information you present: framing affects the choices your users make, so frame information in a relatable way that supports your users' values.
Value users' mindfulness over their attention
In the modern world, we are more distracted than ever. Instead of adding to these distractions, use your products to encourage mindfulness — general awareness of your thoughts, surroundings, and intentions.
To put this principle into practice, do not measure a piece of content's quality by the attention it receives.
Limit distractions within your product. Keep important content focused and visible. Brutally distinguish the important and unimportant, then hide the unimportant.
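As an illustrative sketch of that last point, a feed could rank content by explicit "this was valuable" feedback from users rather than by raw attention. The class and function names below are hypothetical, not taken from any real platform:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    views: int          # attention-based signal (deliberately unused)
    helpful_votes: int  # users who said the post was valuable
    total_votes: int    # all users who gave feedback

def quality_score(post: Post) -> float:
    """Score a post by the share of users who found it valuable,
    not by how much attention it captured."""
    if post.total_votes == 0:
        return 0.0
    return post.helpful_votes / post.total_votes

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort by value to users; view counts play no role in ranking.
    return sorted(posts, key=quality_score, reverse=True)
```

With this scheme, a post that went viral but helped almost nobody ranks below a modest post that most readers found valuable.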
Anticipate harm as you scale
As you expose your technology to more people, you may inadvertently cause harm:
- more people may try to use it for nefarious purposes
- your features may create unintended negative consequences, e.g. product overuse or idea "echo chambers"
Try to consider how your product's features might cause harm once your user base increases, then create mitigations for those harms. While brainstorming potential harms, include the Trust & Safety folks — your content moderators — in the discussion. They see your product's negative consequences every day.
This post is my case for humane design: what it is, and why it is important.
Unfortunately, benign technology cannot flourish in today's conditions. The market does not currently value humane design. In addition, designers rarely consider these concepts when building new products.
We need to change the culture around technology design, in our companies and with our clients, to upgrade humanity. We have a huge mountain to climb if we want to nurture the ideal relationship between humans and machines. I hope the arguments above have convinced you, dear reader, that the required change in mindset is worth your effort.
The best way to upgrade our future is to get informed and to inform others. Share these principles with your co-workers. We need to start implementing humane design now, and every bit of awareness helps!
Additionally, consider taking a technology-focused ethics course. FastAI has a data ethics video course, along with a shorter lesson as part of their deep learning video course.
Have you built a humanely-designed product? Let me know so I can explore and maybe write about it!