The troves of data at our fingertips today have opened up a world of opportunities for innovative companies. But algorithms—the technology that crunches that data—can yield unintended consequences. “Things that are designed for beneficial market purposes can actually cause terrible problems,” says professor of legal studies and business ethics Kevin Werbach, who explores this issue in his course Big Data, Big Responsibilities: The Law and Ethics of Business Analytics.

Take, for example, the scrutiny YouTube has faced in recent years for the algorithm it uses to recommend videos. Though the algorithm was designed to keep viewers engaged, YouTube found it could achieve that goal by serving up misinformation and extremist opinions—a problem the company has yet to fix. But Werbach doesn’t focus only on the pitfalls of technology in his class. These select readings and videos from the course offer more examples—most real, some fictional—to help future business leaders grapple with both the possibilities and the limitations of employing big data for business.

“Inside China’s Vast New Experiment in Social Ranking”

In her Wired cover story, Pulitzer Prize finalist Mara Hvistendahl puts smartphone app Alipay in the spotlight to explain how social ranking systems in China work. In essence, the systems aggregate data from sources like shopping payments and personal contacts to come up with a number for a user that’s similar to a credit score, but with wider implications. “At some level, these are the natural outcome of many of the algorithmic techniques that are already being used in the U.S. and elsewhere,” says Werbach, who prompts his students to wrestle with the consequences.
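
To make the mechanics concrete, here is a minimal sketch in Python of how such a system might fold disparate behavioral signals into one number. The signal names, weights, and score range are invented for illustration; nothing here reflects Alipay’s actual model.

```python
# Hypothetical sketch of a social ranking score. Every signal name and
# weight here is invented for illustration; this is not Alipay's model.

def social_score(user: dict) -> float:
    """Fold weighted behavioral signals (each normalized to 0-1) into one number."""
    weights = {
        "on_time_payments": 140.0,   # shopping and payment history
        "verified_identity": 60.0,   # government ID linked to the account
        "contact_scores": 100.0,     # average standing of the user's contacts
        "spending_profile": 50.0,    # how "responsible" purchases appear
    }
    base = 350.0  # floor, loosely echoing credit-score ranges
    return base + sum(w * user.get(signal, 0.0) for signal, w in weights.items())

print(social_score({
    "on_time_payments": 0.95,
    "verified_identity": 1.0,
    "contact_scores": 0.7,
    "spending_profile": 0.8,
}))  # a single number with implications far beyond lending
```

Note the third signal: in a system like this, the standing of the people you know feeds into your own score, which is part of what gives these rankings implications a conventional credit score lacks.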

“Fifteen Million Merits”

Werbach pairs each of his lessons with an episode of the Emmy-winning Netflix show Black Mirror, the sci-fi anthology series known for spotlighting technology’s dark side. He uses this one—in which Get Out star Daniel Kaluuya tries to earn his way out of a prison-like existence—to discuss how companies gamify shopping experiences. “We start with the idea that incentives are good and useful things,” says Werbach. “But when you push that to the extreme, the results are potentially dangerous and unfair.”

“Algorithms Need Managers, Too”

Business leaders looking for a quick guide to harnessing algorithms need look no further than this article from Harvard Business Review. Among the rules laid out in the piece: Because an algorithm doesn’t automatically weigh ethical considerations as it pursues its objectives, be “crystal clear about everything you want to achieve,” writes a trio of computer science and business professors. For example, an algorithm maximizing profits won’t also protect the company’s reputation unless you explicitly tell it to.
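
A toy sketch illustrates the professors’ point: an optimizer pursues exactly the objective it is given, and a concern like reputation only shapes the outcome once someone encodes it. The strategies, numbers, and penalty weight below are invented for illustration.

```python
# Toy illustration: an optimizer pursues exactly the objective it is given.
# Strategies, profits, and the penalty weight are invented.

candidates = [
    # (strategy, expected_profit, reputational_harm on a 0-1 scale)
    ("aggressive upselling", 120, 0.9),
    ("targeted discounts", 95, 0.2),
    ("standard pricing", 80, 0.0),
]

# Objective 1: profit alone. The optimizer happily picks the risky strategy.
best_naive = max(candidates, key=lambda c: c[1])

# Objective 2: profit minus a penalty for reputational harm. The penalty
# weight is a value judgment that humans must make explicit.
REPUTATION_PENALTY = 50
best_guarded = max(candidates, key=lambda c: c[1] - REPUTATION_PENALTY * c[2])

print(best_naive[0])    # "aggressive upselling"
print(best_guarded[0])  # "targeted discounts"
```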

“Can A.I. Be Taught to Explain Itself?”

To boost humans’ trust in artificial intelligence programs, developers are trying to get machines to do something very complicated: explain how they reach their conclusions. This New York Times article by tech journalist Cliff Kuang details those efforts and some of the legal pressures developers face in this area. It may seem baffling that developers don’t know the criteria their systems use to make decisions, but you can thank machine learning—a system’s capacity to teach itself new skills—for that. A machine’s ability to explain its reasoning matters most when we, as skeptical humans, suspect an algorithm is flawed.
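
One way to see what “explaining a decision” can mean: for a simple linear model, each feature’s contribution to the score can be reported alongside the prediction. The loan-approval framing and weights below are invented, and the research Kuang describes targets far more opaque models than this, but the sketch shows the shape of the output researchers are after.

```python
# What an "explanation" can look like: for a linear model, report how much
# each input pushed the decision. The loan framing and weights are invented.

weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.4}
bias = -0.1

def predict_with_explanation(applicant: dict):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approve" if score > 0 else "deny"
    # The per-feature contributions are the explanation: they show which
    # inputs drove the outcome, and in which direction.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, explanation = predict_with_explanation(
    {"income": 0.6, "debt": 0.9, "years_employed": 0.5}
)
print(decision)     # "deny"
print(explanation)  # debt is the largest (negative) contributor
```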

“How a Company You’ve Never Heard of Sends You Letters About Your Medical Condition”

When Alexandra Franco received a letter from AcurianHealth, she was perplexed about how the provider of clinical trial services had identified her as a candidate for a psoriasis study. Although she didn’t have the condition, she remembered she had looked it up online. This Gizmodo article explores many of the ways companies acquire identifying data. “It shows just how pervasive the exploitation of data has become,” Werbach says.

“Machine Bias”

This exposé by ProPublica sheds light on the racial biases demonstrated by an algorithm used within the criminal justice system to predict the likelihood of someone becoming a repeat offender. Analyzing thousands of the algorithm’s risk scores, the news outlet found the technology was far more likely to wrongly label black defendants as high-risk, while misclassifying white defendants as low-risk more often than black defendants. “It could seem like an algorithm making predictions based on quantifiable data isn’t going to be biased in the way a judge might be,” says Werbach. “But actually, it might bring new kinds of discrimination.”
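
The kind of audit ProPublica performed can be sketched in a few lines: compare error rates across groups. The handful of records below is fabricated for illustration; ProPublica analyzed thousands of real risk scores and outcomes.

```python
# Sketch of a disparity audit: compare false positive rates across groups.
# These eight records are fabricated; ProPublica analyzed thousands.

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, True), ("B", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of people in a group who did NOT reoffend but were flagged high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(group))
# Group A: ~0.67, Group B: 0.0 -- a gap like this, at scale, is the
# signature of the bias the article describes.
```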


Published as “Who Will Manage The Machines?” in the Fall/Winter 2019 issue of Wharton Magazine.