This book is not an argument that robots will take all of the jobs, some of the jobs, or none of the jobs. It’s not a rant about the horrors of technological capitalism or a rumination about how we’ll coexist with machine intelligence.
This is a book about how to be a human in a world that is increasingly arranged by and for machines. It’s an attempt to persuade you that the key to living a happy, rewarding life in the age of AI and automation is not competing with machines head-on—learning to code, optimizing your life, eliminating all forms of personal inefficiency and waste—but strengthening your uniquely human skills, so you’re better equipped to do the things machines can’t do.
Rule #1: Be Surprising, Social, and Scarce
Surprising
In general, AI is better than humans at operating in stable environments, with static, well-defined rules and consistent inputs. On the other hand, humans are much better than AI at handling surprises, filling in gaps, or operating in environments with poorly defined rules or incomplete information.
This is why, for example, a computer can beat a human grandmaster at chess, but would make for an extraordinarily bad kindergarten teacher. It’s why virtual assistants like Siri and Alexa respond well to simple, structured questions that draw from concrete data sets (“What’s the weather in New York next Tuesday?”) but freeze up when confronted with questions that require handling uncertainty or drawing inferences from incomplete data (“What’s the restaurant near Gramercy Park with the really good burger?”).
Social
While AI is really good at meeting many of our material needs, humans are much better at meeting our social needs.
There are some areas of life in which only outcomes matter. We don’t really care if our subway car is driven by a person or a computer, as long as it’s safe and efficient and gets us to our destination. Few people would object to a robot handling their packages in a warehouse, as long as the packages arrived on time and intact.
But many things in life are not bloodless exchanges of currency for goods and services.
Humans are social beings. We like feeling connected to one another and having meaningful interactions with the people around us. We care deeply about our social status, and what other people think of us. And many of the choices we make every day—even the seemingly mundane ones, like the food we eat or the clothes we wear—are actually deeply related to our identities, our values, and our need for human connection.
Scarce
AI is much better than humans at big work—work that involves large data sets, huge numbers of users, or global-scale systems. If you need to produce a million of something or spot the patterns in a hundred thousand data points, you’re probably looking at a job that is already done by a machine, or soon will be.
On the other hand, humans are much better than AI at work that involves unusual combinations of skills, high-stakes situations, or extraordinary talent.
Rule #2: Resist Machine Drift
Today, the world runs on recommendation engines. Facebook, Instagram, YouTube, Netflix, Spotify, and even The New York Times use recommenders to personalize users’ feeds, showing them what the machines believe will keep them engaged for as long as possible.
At their best, recommenders are a beautiful and empowering form of consumer leverage—a way to put powerful machines to work as our personal concierges, sorting through the vast expanse of the internet to create an experience tailored to our preferences.
At their worst, they’re more like pushy salespeople—shoving options we don’t want in front of us, playing psychological games, hoping we’ll relent. With recommenders, we’re still technically in control. (We are, after all, humans with agency and free will.) But the force these systems exert on us is not always the nudge of a friendly suggestion. Often, they coerce us in their preferred direction by arranging our choices for us—making desirable options easier or more prominent, while burying undesirable options several clicks deep in a menu. Many recommenders come attached to friction-reducing features like autoplay and one-tap checkout, all of which are designed to speed us to a decision before we can stop and consider whether the machine’s preferences actually match our own.
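To make that choice-arranging concrete, here is a minimal sketch of the loop the paragraph describes: rank everything by predicted engagement, then strip the friction away from the top result. It is purely hypothetical (the item names, scores, and functions are invented, and no real platform’s code looks like this), but the incentive it encodes is the one that matters.

```python
# A minimal, hypothetical sketch of an engagement-maximizing recommender.
# Real systems at Facebook, YouTube, etc. are vastly more complex; the
# items and scores here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # the model's guess at how long you'll engage

def rank_feed(items: list[Item]) -> list[Item]:
    # The machine's preference: whatever keeps you engaged longest goes first.
    return sorted(items, key=lambda i: i.predicted_watch_seconds, reverse=True)

def autoplay_next(ranked: list[Item]) -> Item:
    # Friction removal: the top item starts automatically, so the "choice"
    # is made unless the user actively intervenes.
    return ranked[0]

feed = rank_feed([
    Item("Documentary you said you wanted to watch", 300.0),
    Item("Outrage clip the model knows you can't resist", 1400.0),
])
print(autoplay_next(feed).title)  # -> the outrage clip plays first
```

Notice that nothing in this loop asks what the user would deliberately choose; the machine’s preference wins by default, which is exactly the drift this rule tells us to resist.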
Rule #3: Demote Your Devices
Steve Jobs famously described the personal computer as “a bicycle for the mind,” and for years, the metaphor fit. Like bicycles, computers could help us get places faster, and reduce the effort needed to move ideas and objects around the world. But these days, many of our devices (and the apps we install on them) are designed to function less like bicycles, and more like runaway trains. They lure us onboard, tempting us with the possibility of rewards—a new email, a Facebook like, a funny TikTok video. Then, once we’re in, they speed off to their chosen destination, whether it’s where we originally wanted to go or not.
That these forces are largely invisible doesn’t make them any less real. The algorithms that power platforms like Facebook and YouTube are many times more powerful than the technology that sent humans to the moon, or even the technology that allowed us to decode the human genome. They’re the products of billions of dollars of research and investment, exabytes of personal data, and the expertise of thousands of Ph.D.s from the top universities in the world. These AIs represent the kind of futuristic superintelligence we saw in sci-fi movies as kids, and they stare out at us from our screens every day—observing us, adapting to our preferences, figuring out what sequence of stimuli will get us to watch one more video, share one more post, click on one more ad.
All the evidence we have suggests that what matters is how we use our devices, not just how often we pick them up. Studies have found that certain types of device use are better for our mental well-being than others. For example, using Facebook passively (scrolling through our feeds, watching videos, absorbing news updates) has been shown to increase anxiety and decrease happiness, while using Facebook more actively (posting status updates, chatting with friends) tends to have more positive effects.
Rule #4: Leave Handprints
The idea that we can outwork machines is a seductive fantasy, going all the way back to the legend of John Henry and the steam engine. But many of today’s most powerful technologies operate at such a vast scale, with such enormous computing power behind them, that the idea of competing with them head-on isn’t even conceptually possible. What would it even mean for a human librarian to “compete” with Google at retrieving information from among billions of websites? Or for a human stock trader to “compete” with a high-frequency trading algorithm that can analyze millions of transactions per second? More to the point, why would they even want to try?
Instead of trying to hustle our way to safety, we should refuse to compete on the machines’ terms and focus instead on leaving our own, distinctively human mark on the things we’re creating. No matter what our job is, or how many hours a week we work, what will make us stand out is not how hard we labor, but how much of ourselves shows up in the final product.
In other words, elbow grease is out. Handprints are in.
Rule #5: Don’t Be an Endpoint
An endpoint is a human who serves mainly as a conduit for machines, relaying their information or carrying out their instructions. Once you start looking for endpoints, you see them everywhere. I’d see a security guard at an office building checking visitors into the building’s security system and pressing the buttons to let them through the turnstile, and I’d think: endpoint.
I’d go to the doctor’s office for my annual physical, and I’d see the nurse practitioner reading numbers off medical instruments and plugging them into an iPad loaded with my electronic health records, and I’d think: endpoint.
I’d see a Starbucks barista handing mobile delivery orders off to a Postmates courier—a human following one app’s instructions and handing the product over to another person following a different app’s instructions—and I’d think: two endpoints.
One group of people who have to be especially careful not to become endpoints are remote workers. Perhaps the biggest risk of remote work, when it comes to automation, is that it’s much harder to display your humanity in the absence of face-to-face interaction. In some sense, remote workers are already halfway automated. They are experienced as two-dimensional heads in a Zoom chat, or avatars in a Slack thread. Their output is most often measured in terms of tasks completed and metrics hit, and their ability to contribute to an organization in subtler, more human ways—cheering up a demotivated co-worker, organizing happy hours, showing an intern the ropes—is dramatically limited.
Because of this, it’s even more important for remote workers to go overboard in expressing their humanity and reminding others of their presence. And it’s important for organizations that employ remote workers to bring those workers in for regular, in-person get-togethers so they can be fully socialized and integrated into their teams.
Rule #6: Treat AI Like a Chimp Army
If an army of a thousand chimpanzees showed up at your office one day, looking for work, what would you do?
Realistically, you’d probably lock the door and call animal control, or tell yourself to lay off the magic mushrooms. But let’s suspend reality for a second and imagine that, instead of panicking, you actually tried to find a task for them to do.
After all, under the right circumstances, chimps could make great workers. They’re strong, agile, and fairly intelligent. They can be trained to recognize faces, pick up and carry items, and even respond to simple commands. You could imagine a group of well-trained office chimps loading and unloading warehouse shipments or restocking an empty laser printer.
Before you made any promises, of course, you’d want to know more about the chimps. How well-behaved were they? Did they have a history of aggression? How much training and supervision would they need? And ultimately, if you did decide to invite the chimp army into your office, you wouldn’t do it right away. You might conduct a Chimp Safety Audit or convene a Chimp Oversight Task Force. You might decide to put a small number of chimps in a room under close supervision, train them to do a simple task, and evaluate the results before giving them more important assignments.
But whatever your risk tolerance was, I’m fairly confident that you wouldn’t just invite the chimps in, give them badges and lanyards, and say “Okay, get to work!” And you sure as hell wouldn’t put them in charge.
Today, most AI is similar to an army of chimps. It’s smart, but not as smart as humans. It can follow directions if it has been properly trained and supervised, but it can be erratic and destructive if it hasn’t. With years of training and development, AI can do superhuman things—like filtering spam out of a billion email inboxes or creating a million personalized music playlists—but it’s not particularly good at being thrown into new, high-stakes situations.
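In software terms, treating AI like a chimp army means staged, supervised deployment. The sketch below is a hypothetical illustration (the model, thresholds, and function names are all invented): the system acts alone only on low-stakes cases where it is confident, and everything else gets routed to a human.

```python
# A hypothetical sketch of "chimp army" deployment: the model acts on its
# own only for low-stakes, high-confidence cases; everything else goes to
# a human supervisor. The model, thresholds, and names are illustrative.

import random

def toy_model(case: str) -> tuple[str, float]:
    # Stand-in for a trained classifier: returns a label and a confidence.
    return ("approve", random.uniform(0.5, 1.0))

def escalate_to_human(case: str, prediction: str, confidence: float) -> str:
    print(f"HUMAN REVIEW: {case!r} (model said {prediction}, conf={confidence:.2f})")
    return "pending_human_decision"

def handle_case(case: str, high_stakes: bool, threshold: float = 0.95) -> str:
    prediction, confidence = toy_model(case)
    # The chimp never gets the keys to anything important, and it only
    # acts alone when it is very sure of itself.
    if high_stakes or confidence < threshold:
        return escalate_to_human(case, prediction, confidence)
    return prediction

handle_case("routine expense report", high_stakes=False)
handle_case("million-dollar wire transfer", high_stakes=True)
```

The point is not the specific threshold but the posture: autonomy is earned case by case, after evaluation, never granted up front.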
Rule #7: Build Big Nets and Small Webs
Big nets are the large-scale programs and policies that soften the blow of sudden employment shocks. Small webs are the informal, local networks that support us during times of hardship.
Historically, big nets have made it easier for societies to adapt to technological change. In Japan, for instance, a widespread labor practice called shukko helped soften the blow of major layoffs in the 1980s, as the country introduced robots into many of its factories. Under shukko, workers who were slated to be laid off could instead be temporarily “loaned” to other companies for as long as several years while the original employer found new work for them to do.
In addition to big nets, we also need to think about the small webs we can create to support each other through this technological transition. Because in the absence of some fairly radical economic and policy changes, we’re going to have to do a lot of this ourselves.
Our response to the Covid-19 crisis is a useful guide here. When the pandemic hit, state and local governments made up for the Trump administration’s inept handling of the situation by gathering their own data, creating their own protocols, and building their own supply chains. Neighborhoods formed mutual aid networks to pool resources, arrange grocery deliveries and other assistance for needy and vulnerable residents, and help each other through financial distress. Donations flooded into food banks, worker relief funds, and small-business fundraising drives. People lent spare rooms to healthcare workers, and organized mask-sewing workshops.
Rule #8: Learn Machine-Age Humanities
Call them “machine-age humanities” because, while they’re not strictly technical skills, they’re not exactly classic humanities disciplines like philosophy or Russian literature, either. They’re practical skills that can help everyone—from young kids to adults—maximize their advantages over machines.
Attention Guarding
Guarding attention is typically thought of as a productivity hack—a way to get more done, with less distraction. But there are noneconomic reasons to practice keeping our attention away from the forces trying to capture and redirect it. Sustained focus is how we develop new skills and connect with other people. It’s how we learn about ourselves and construct a positive identity that can withstand influence from machines. After all, as the historian and author Yuval Noah Harari writes, “If the algorithms understand what’s happening within you better than you understand it, authority will shift to them.”
Room Reading
“The kind of skill that one gets from being in the closet—the ability to read a room—that’s not a skill that shows up anywhere in a skills inventory, but ends up being the kind of skill that can be valuable in all kinds of workplaces,” said Jed Kolko, the chief economist of Indeed.com.
Of course, it would be much better to live in a more equitable society, where women and minorities weren’t required to manage their self-presentation so carefully. But the machine age may present a silver lining for people who have gotten good at quickly assessing the biases and prejudices of others. And those of us who don’t bear the burden of code switching and room reading should try to cultivate these skills in other ways, because we’ll need them.
Resting
We generally stop incorporating naptime into education after early childhood. But resting—turning off our brains, recharging our bodies—is an increasingly useful skill for people of all ages. It helps prevent burnout and exhaustion, allows us to step back and look at the bigger picture, and helps us step off the hamster wheel of productivity and reconnect with the most human parts of ourselves. And many of us could use a refresher course.
Analog Ethics
Playing fair and apologizing never left the curriculum. But schools are now starting to explicitly design programs around cultivating kindness.
Older students, too, are revisiting analog ethics. At Stanford, for example, students can take a seminar called “Becoming Kinder,” which teaches them about the psychology of altruistic behavior. At NYU, an undergraduate course called “The Real World” teaches students a critical skill of the future—the ability to cope with change—by conducting simulated problem-solving drills. At Duke, Pittsburgh, and other top medical schools, oncology fellows can sign up for “Oncotalk,” a specialized communications course that teaches them how to have difficult conversations with their cancer patients.
These efforts are all a good start, and more analog ethics teaching is deeply necessary—not just to improve people’s personal lives, but to equip them for a future in which our social and emotional skills will be some of our most precious assets.
Consequentialism
Some of the most valuable skills in the future will involve thinking about the downstream consequences of AI and machine learning and understanding the effects these systems are likely to have when they’re unleashed into society.
Consequentialist thinking will be useful outside of tech, too, as AI moves into more industries and creates more opportunities for error. Doctors and nurses will need to understand the strengths and weaknesses in the tools used for diagnostic imaging and anticipate how they could produce faulty readings. Lawyers will need to be able to peer inside the algorithms used by courts and law enforcement agencies and see how they could result in biased decisions. Human rights activists will need to know how things like facial-recognition AI could be used to surveil and target vulnerable populations.
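To make “peering inside the algorithms” concrete, here is one minimal, hypothetical first-pass audit (the records are invented) that compares a model’s false positive rate across two groups, a standard check for the kind of biased decisions described above.

```python
# A minimal, hypothetical bias audit: compare a model's false-positive
# rate across groups. The records below are invented for illustration;
# a real audit would use actual case outcomes and model decisions.

from collections import defaultdict

# Each record: (group, model_flagged_as_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
]

false_positives = defaultdict(int)  # flagged high-risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.0%}")
```

A wide gap between the two rates is precisely the kind of downstream consequence a consequentialist reader of these systems learns to look for.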
One way to instill consequentialist thinking would be to formalize it as part of a standard STEM curriculum, or to turn it into a professional rite of passage. In Canada, when you graduate from engineering school, you’re invited to take part in a ceremony called the Ritual of the Calling of an Engineer.
During the ceremony, graduates are each presented with an iron ring, worn on the pinkie finger, that is supposed to remind them of their responsibilities to serve the public good. They then recite an oath, which begins with a pledge that they will “not henceforward suffer or pass, or be privy to the passing of, Bad Workmanship or Faulty Material.”
Imagine if software engineers at Facebook and YouTube were required to undergo a similar ceremony before shipping their first feature or training their first neural network. Would it solve all of society’s problems? Of course not. But could it remind them of the stakes of their work, and the need to be mindful of the vulnerabilities of their users? It’s certainly possible.
Rule #9: Arm the Rebels
In many ways, the world today looks a lot like it did in 1845. New, powerful machines have revolutionized industries, destabilized legacy institutions, and changed the fabric of civic life. Workers are worried about becoming obsolete, and parents are worried about what new technologies are doing to their children. Unregulated capitalism has created an extraordinary amount of new wealth, but workers’ lives aren’t necessarily getting better. Society is fractured along lines of race, class, and geography, and politicians are warning about the dangers of rising inequality and corporate corruption.
In the face of these challenges, we have two options. We can throw our hands up, unplug our devices, opt out of modernity and retreat into the wilderness. Or we can step into the conversation, learn the details of the power structures that are shaping technological adoption, and bend those structures toward a better, fairer future.
Personally, I think we have a moral duty to fight for people rather than simply fighting against machines. And for those of us who aren’t tech workers, that duty extends to supporting ethical technologists who are working to make AI and automation a liberating force, rather than just a vehicle for wealth creation.
Call this strategy “arming the rebels,” not because resisting technological exploitation should involve violence of any kind, but because it’s important to support the people fighting for ethics and transparency inside our most powerful tech institutions by giving them ammunition in the form of tools, data, and emotional support.
On a practical level, this strategy is likely to be more effective than trying to tear down these institutions altogether. History shows us that those who simply oppose technology, without offering a vision of how it could be made better and more equitable, generally lose.