Wharton management professor John Paul MacDuffie first became fascinated by technology’s influence on employees during a summer job at a paper mill in his youth: “It’s where I first encountered the combination of super-sophisticated, expensive capital equipment and a really complicated human system,” he says. Years later, as he was thinking about a course that would explore dynamics between workers and technology, MacDuffie connected with Adam Seth Litwin C98 W98, a professor of industrial and labor relations at Cornell University. Litwin, who was already teaching about technological change at work, shared his syllabus. MacDuffie modeled some assignments on Litwin’s course, and the two remain in touch as both courses evolve.

Today, MacDuffie teaches Work and Technology: Choices and Outcomes to undergraduate and MBA students, focusing on options available to managers and engineers as they implement or design new technologies, respectively. “I want students to hear that there’s no reason to assume technology determines certain outcomes,” MacDuffie says. Readings, videos, and other materials offer insights into key course concepts, including the historical impacts of innovation and how humans can leverage — rather than be replaced by — technology.

“What a Japanese AI Unicorn Can Teach Silicon Valley”

Large language models — the systems powering generative AI — use immense energy. Seeking growth, many AI firms are rushing to make their models larger. Japanese startup Sakana AI has adopted a different strategy, as described in this Japan Times article. In contrast to peer startups, the firm is deploying AI in intentionally energy-conscious ways rather than pursuing unbounded growth. (Do we really need another chatbot?) “It’s making a slightly different set of choices about AI, and it’s a unicorn, so not an untried startup,” says MacDuffie. (The recent release of DeepSeek, from an AI startup in China, has revealed an alternative development path that economizes on resources.)

“When Robots Take All of Our Jobs, Remember the Luddites”

To anticipate reactions to automation today, this Smithsonian Magazine article looks to the Luddites, textile workers in 1800s England who destroyed new factory machines to protest lost jobs and pay. “There were some factory owners that kept pay for workers higher,” says MacDuffie. “The Luddites would spare those factories.” Positing that new machines have eventually led to net job creation, the article points to education, antitrust laws, and progressive taxation as possibilities for guiding society through such transitions.

“Ways to Think About Machine Learning”

What exactly is machine learning, anyway? Influential tech analyst Benedict Evans explains what the term really means to help us get a grasp of the technological change at hand. “This blog is very good at showing how machine learning differs from classical programming approaches and thus started to be able to do things that a lot of other automation attempts hadn’t been able to,” MacDuffie says. “Not all of them are razzle-dazzle exciting, but they are nonetheless breakthroughs.”

“Now Is the Time for Grimoires”

Wharton’s Ethan Mollick — the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, co-director of the Wharton Generative AI Labs, and an associate professor of management — stresses one way we can make AI more accessible. Mollick envisions building a “spellbook” of the best generative AI prompts that anyone can employ in their work. “There’s no shame in using somebody else’s great prompt,” MacDuffie says.

“Can We Have Pro-Worker AI?”

“The real promise of generative AI is intelligent, flexible, and easily usable tools that are complementary to human decision-making, problem-solving, [and] creative tasks,” posits Daron Acemoglu in this United Nations webinar. Acemoglu — a recipient last year of the Nobel Prize in Economic Sciences — describes roadblocks to this vision, such as “excessive automation” and “monopolized control over information.”

“Why Robots Will Always Need Us”

For all the talk of a robot takeover, this New York Times opinion piece offers a brighter perspective on the value of human judgment. Consider self-driving cars, in which people will likely always want to be on standby. “It may be that human societies, to feel comfortable that our automated systems are safe, need to know there is a ‘human in the loop,’” MacDuffie says. “There are so many reasons why something could be automated but isn’t. That’s where a lot of choice around how we use technology enters — and what I dig into deeply with students.”

Published as “Is a Tech Takeover Inevitable?” in the Spring/Summer 2025 issue of Wharton Magazine.