“Most of us would sacrifice one person to save five. It’s a pretty straightforward bit of moral math. But if we have to actually kill that person ourselves, the math gets fuzzy.
“That’s the lesson of the classic Trolley Problem, a moral puzzle that fried our brains in an episode we did about 11 years ago. Luckily, the Trolley Problem has always been little more than a thought experiment, mostly confined to conversations at a certain kind of cocktail party. That is, until now. New technologies are forcing that moral quandary out of our philosophy departments and onto our streets. So today we revisit the Trolley Problem and wonder how a two-ton hunk of speeding metal will make moral calculations about life and death that we can’t even figure out ourselves.”
This is similar to the dilemma raised by the trolley problem: is it right to sacrifice one person to save many?
“Here’s the story: Hawkeye has gone insane and is spending time at a hospital. Throughout the episode, he tells this story about how they were able to go out to a beach and have a great day, just playing at the beach. They all pile onto a bus to head home. Suddenly, they realise that the enemy is nearby, so they shut off the engine, turn out all the lights, and everybody gets quiet — except a woman in the back with a chicken that won’t stay quiet. In this scene, BJ shows up to tell Hawkeye that he (BJ) is going home, but he can’t, because Hawkeye is getting very upset. So BJ calls in the doctor.”
“It’s tempting to hope that someone else will come along and solve the trolley problem. After all, finding a solution requires confronting some uncomfortable truths about one’s moral sensibilities. Imagine, for instance, that driverless cars are governed by a simple rule: minimize casualties. Occasionally, this rule may lead to objectionable results — e.g., mowing down a mother and her two children on the sidewalk rather than hitting four adults who have illegally run into the street. So, the rule might be augmented with a proviso: minimize casualties, unless one party has put itself in danger.”
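To make the passage above concrete, here is a minimal, purely illustrative sketch of the augmented rule as a selection function. The function name, the option fields, and the representation of "fault" are all assumptions invented for this example — no real driverless-car system is being described.

```python
def choose_path(options):
    """Illustrative sketch of the rule: minimize casualties,
    unless one party has put itself in danger.

    Each option is a hypothetical maneuver, described as a dict:
      casualties -- number of people harmed by this maneuver
      at_fault   -- True if those people put themselves in danger
                    (e.g., ran into the street illegally)
    """
    # Tuple key: prefer options whose victims are at fault
    # (not at_fault sorts False first), then fewer casualties.
    return min(options, key=lambda o: (not o["at_fault"], o["casualties"]))


# The example from the text: four at-fault adults vs. a family of three
# on the sidewalk. The augmented rule selects the at-fault party even
# though it means more casualties.
adults = {"casualties": 4, "at_fault": True}
family = {"casualties": 3, "at_fault": False}
print(choose_path([family, adults]))  # selects the at-fault adults
```

The sketch also makes the passage's discomfort visible: the proviso overrides the casualty count, which is exactly the kind of trade-off the author says we must confront rather than delegate.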
More ethical than humans?
Many ethicists and artificial intelligence developers want to ensure people are kept in the loop when lethal force is applied. At the moment, that’s a given, at least for those nations adhering to the law of war. Robots still struggle to differentiate between soldiers and civilians in complex battle settings.
But the day may come, some say, when robots are able to be more ethical than human troops, because their judgment wouldn’t be clouded by emotions such as vengefulness or self-preservation, which can shape human decisions.
“Unfortunately, humanity has a rather dismal record in ethical behavior in the battlefield,” Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, wrote in a guest blog for the IEEE, a technical professional organization. “Such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters.”
“It is morally problematic, because more people are thinking of pets as people … They consider them part of their family, they think of them as their best friend, they wouldn’t sell them for a million dollars,” says Dr Hal Herzog, a professor of psychology at Western Carolina University and one of the founders of the budding field of anthrozoology, which examines human-animal relations. At the same time, research is revealing that the emotional lives of animals, even relatively “simple” animals such as goldfish, are far more complex and rich than we once thought (“dogs are people, too”, according to a 2013 New York Times comment piece by the neuroscientist Gregory Berns). “The logical consequence is that the more we attribute them with these characteristics, the less right we have to control every single aspect of their lives,” says Herzog.
Does this mean that, in 50 years or 100 years, we won’t have pets? Institutions that exploit animals, such as the circus, are shutting down – animal rights activists claimed a significant victory this year with the closure of Ringling Bros circus – and there are calls to end, or at least rethink, zoos. Meanwhile, the number of Britons who profess to be vegan is on the rise, skyrocketing 350% between 2006 and 2016.
The show Thirteen Reasons Why raises some interesting issues regarding the ethical responsibility of content producers and the networks that broadcast content that may have “deleterious effects” on their viewers. It also raises interesting questions about the value and power of art.
A new study reveals that internet searches for suicide skyrocketed in the wake of the show’s release.
The question is whether this particular study, or any of the allegations that the show directly led to copycat suicides and suicide attempts, will be enough of an impetus for the show’s producers to respond. The study’s authors suggest that editing out the scene of Hannah Baker’s suicide from the show and adding information about suicide hotlines to episodes could immediately minimize some of 13 Reasons Why’s “deleterious effects.” Netflix’s response to the study, though, indicated no such moves would be forthcoming. “We always believed this show would increase discussion around this tough subject matter,” the company said in a statement. “This is an interesting quasi-experimental study that confirms this. We are looking forward to more research and taking everything we learn to heart as we prepare for Season 2.” Netflix declined interview requests from The Atlantic regarding the show.