You could hear a pin drop in Ambani Auditorium on the morning of MBA Reunion in May. Ethan Mollick, associate management professor and co-director of the Generative AI Lab at Wharton, had just delivered his keynote, and a crowd of alumni had stormed the podium for a chance to speak with him. Some wondered if he remembered them from back before he caught the attention of the Wall Street Journal and Time. Others peppered him with updates about the companies they’d launched. But one alum’s question silenced them all: What jobs are immune from artificial intelligence?
Wharton professors have become a sounding board for such concerns since generative AI took off with the introduction of ChatGPT in 2022. The School met the moment in May, launching the Wharton AI & Analytics Initiative, a Penn-wide endeavor led by vice dean Eric Bradlow. But many faculty members have been immersed in AI for years, tapping into their areas of expertise to adjust for its impact on the business world. Mollick’s new book, Co-Intelligence: Living and Working with AI, and Legal Studies and Business Ethics professor and chairperson Kevin Werbach’s podcast, The Road to Accountable AI, are the latest examples of Wharton’s AI thought leadership. Mollick explores how the technology will affect education and everyday creativity, while Werbach delves into the governance challenges of AI, interviewing leaders from a variety of sectors on the technology’s potential and pitfalls. Together, their complementary expertise creates a holistic view of AI, drawing on the School’s vast network of knowledge along the way.
Although Mollick doesn’t have a computer science background, it’s as if he’s been prepping to become the “oracle” of generative AI (as Angela Duckworth, Penn psychology professor and best-selling author of Grit, puts it) his whole life — even down to his love of gaming. When boxes of strategy board games such as Breakthrough and Dixit appeared behind Mollick at a recent “AI Horizons” Wharton webinar, one attendee joked that it was very on-brand. In fact, Dungeons & Dragons was the inspiration behind a startup simulator Mollick created when he co-founded Wharton Interactive in 2020. Finding new ways to make learning fun is part of Mollick’s mission to “democratize education” — an overarching theme in Co-Intelligence. Just as games do, he says, AI will help people process new information. Mollick’s background in game-based pedagogy, combined with his PhD in Innovation and Entrepreneurship, turned out to be the perfect formula to catapult him into a role as an AI authority.
“This is a very entrepreneurial school for professors, too,” Mollick says as he sits cross-legged on a leather chair in Huntsman Hall, his foot twitching with boundless energy. “The university is standing behind us and helping us. We’re already heading toward the world of active learning, so we’ll figure out how to teach in a world of AI.”
By integrating hands-on simulations and exercises into his classes, Mollick uses AI to flip the traditional Socratic method on its head. In the New York Times best-selling Co-Intelligence, he writes that now is the time for teachers to directly influence education technology. When ChatGPT was first released, Mollick was an early adopter, allowing students to use the chatbot for assignments. He continued his unorthodox practices; in the book, he recounts tasking students in his entrepreneurship class with using AI to critique their venture ideas in the voices of public figures such as Rihanna and Steve Jobs. Mollick even had ChatGPT write portions of the book for him, sometimes not revealing that authorship until after the passage, as a meta example of “co-intelligence.” Contrary to the mainstream belief that AI will compete with artists, Mollick sees it as a tool for creativity. But even he has his limits: He won’t use AI to write letters of recommendation for students. “The fact that it is time-consuming,” he writes, “is somewhat the point.”
Mollick’s loyalty to his students and alumni was evident at a sold-out conference in May hosted by AI at Wharton, titled “AI and the Future of Work.” The two-day campus event brought together professionals from varying backgrounds and offered breakout sessions on topics such as “Challenges with LLM Adoption in the Organization” and “Managing AI Workflows.” A few minutes before the event — his second keynote in a week — Mollick walked in with an open laptop. His height and aura of approachability give off “kind grocery-store clerk” vibes. But rather than asking him to snag an item off the top shelf, students want him to future-proof their careers.
Audiences often treat Mollick as a celebrity, perhaps in part because he makes his research accessible to anyone. His keynote included practical tips for using AI, with slides on “How to Write and Publish” and “How to Research,” raising the question: How do we deal with the flood of AI-assisted prose in academic publishing? His wife, Lilach Mollick, with whom he co-directs Wharton’s Generative AI Lab, brings her own unique perspective to this topic. Her master’s degree in education and Ethan’s coding skills have made the two experts at prompting — an essential skill for using the technology. Mollick encouraged audience members to put themselves in AI’s shoes when crafting prompts. “If it’s not doing your homework, you’re not doing your job teaching it,” he said.
Still, some audience members approached “AI and the Future of Work” — and the technology itself — with healthy skepticism. “The hype around artificial intelligence can often mask some of the difficulties around implementing it that we as a society don’t have good answers for,” attendee Michael Dea C16, senior growth marketer at Weidenhammer Systems, said before the event. “However, Professor Mollick made valid points in raising questions about the ways the process of research and finding meaning must change. Even if today’s models aren’t ready to replace all workers, they present as much risk as possibility to what we envision as the future of society.”
Throughout the presentation, Mollick’s rapid-fire advice was interspersed with the funny yet sometimes ominous content he’s become known for, such as a video of him speaking Hindi that’s completely generated by AI. The question-and-answer portion felt more like a press conference than an academic Q&A, with Mollick fielding inquiries on AI in the workplace from journalists, Penn community members, and alumni. His ability to distill the powerful capabilities of AI into actionable steps has made his entrepreneurship and innovation classes surge in popularity.
“Professor Mollick’s most influential suggestion has been, ‘Think of AI tools like an intern,’” said Ankul Daga WG24, an investment strategist at Vanguard who learned from Mollick through Executive Education coursework and attended the conference. “This intern needs guidance and supervision. You have to watch every step. That has influenced how I use AI.”
Mollick predicts that AI’s role in the workplace will allow humans to concentrate on complex, strategic matters: Let AI do the menial tasks, he argues, and have workers focus on the systems of stakeholders and colleagues that define our jobs. As a training ground for future managers and leaders, Mollick says, Wharton is ahead of the curve in that regard.
“AI is great, but it’s not as good at the job as any Wharton alumnus who’s reading this piece,” he says. “We’re in a moment where leadership really matters, because there’s no one who’s going to tell you how AI should be used in your industry. People have to shape that themselves.”
While Mollick’s book is written for the public at large, Werbach’s podcast speaks directly to business leaders. Even after taking the stairs at Huntsman Hall for an interview on the first day of summer break, Werbach retained the composure of a seasoned scholar. It immediately became evident why he’s well-suited to hosting a podcast: Like that of a judge in a courtroom drama, his booming voice echoed off the walls.
Werbach’s expertise is a true melding of the legal and business fields, and he has influenced some of the most transformative technology policy of the 21st century. In the 1990s, he worked with the White House on internet regulation, creating the first governmental policy framework on e-commerce. When blockchain and cryptocurrency emerged and needed regulation, he testified before the U.S. Senate and the House of Representatives on the role of government in crypto. He founded Wharton’s Blockchain and Digital Asset Project and started what’s believed to be the first ethical data science class at a business school. Now, he’s responding to demand yet again: Werbach’s podcast, The Road to Accountable AI, doubled as research for his new online Executive Education program, Strategies for Accountable AI, which launches in October.
“With the explosive growth of generative AI, I wanted to do more for my own educational understanding,” says Werbach, his silver hair reflecting the sunlight pouring through the full-length window. “Finding people and talking to them is something I’ve done a lot of, and I realized it was not that hard to translate that into a podcast.” Werbach has seen a range of issues take shape in the evolving field, from corporate executives struggling to enforce AI audits to state officials encouraging public-sector employees to use the technology. As for solutions to real-time challenges, he says, “I don’t think there’s enough discussion about what’s actually happening on the ground.”
Werbach aims to remedy that problem in his podcast. He interviews leaders in AI ethics from around the world, revealing both the technology’s omnipresence and some of the cultural differences that will dictate governance. “We use the term ‘accountable AI’ intentionally because it focuses on the accountability and not just the acknowledgement of responsibility,” he says. Among his guests so far was Jean Enno Charton, director of digital ethics and bioethics for Merck, who spoke about the differences between health-care data ethics in Germany and the United States.
Another participant, Romanian politician Dragos Tudorache, spoke on the lessons learned from Europe’s General Data Protection Regulation (GDPR). On that episode, Werbach asked about the importance of consensus between countries. “It’s a must,” Tudorache responded. “It’s not a nice-to-have; it’s not a luxury. It is a priority, because AI is the same for everyone, and AI will be driving the future. There has to be a global conversation on AI.” When Werbach was shaping internet policy in the 1990s, the majority of internet users were in the United States, so getting the U.S. government aligned was crucial. With AI, it’s a whole new frontier.
Werbach’s former students, such as Swiss native and technology venture firm co-founder Philipp Stauffer WG04, see the value in international diplomacy as it relates to AI. “The U.S. and Switzerland are the oldest democracies on the planet that borrowed from one another,” Stauffer says. “You see similar efforts in terms of how to regulate AI. Wharton, with its global community, is incredibly positioned for this.”
Werbach has tapped into that network himself, involving alumni and students in his AI projects. When he noticed Julia Shelanski W24 excelling in class, he brought her on for research and pre-production roles on the podcast. “A lot of the podcasts and literature in this field are just very philosophical,” says Shelanski. “But this is happening. It’s a problem but also a possibility. What are we going to do about it, and how should people think about it? So it was helpful to think about questions that are really relevant for listeners.”
Werbach has been dealing with the challenge of teaching an ever-evolving subject since 2016, when he created a course titled Big Data, Big Responsibilities. “Students would say, ‘Who are the companies we can look to as models and examples?’” he recalls. “And I had a very hard time giving them an answer.” But Shelanski says both the podcast, which was recently featured in the Philadelphia Inquirer, and the Big Data, Big Responsibilities class prepared her for a role she just accepted at “the intersection of AI and finance,” working on data science for a bank.
As Werbach prepares for his Executive Education course in the fall, he reflects on how widely applicable it can be. “It’s not just limited to those who are going to be in some kind of formal role dealing with AI accountability,” he says. “In fact, the vast majority won’t be.” Much like the podcast, he sees the course reaching a wide audience and argues that anyone who is responsible for putting out a product to customers can benefit: “My guess is we’ll get a mixture of people with different kinds of roles. [There will be] some people for whom this is their job, and they want to really better understand and benchmark what they’re doing on responsible AI. But my bet is that most of them are people who don’t necessarily have that in their title but see the relevance of it.”
The interdisciplinary nature of the Wharton School has made it a force in the world of AI, as Dean Erika James wrote in her announcement of the new Wharton AI & Analytics Initiative. “The scope of our expertise and ability to break down the difficult and make it accessible to the world is unmatched,” she said. Whether through the governance lessons of the leaders Werbach interviews or the creative learning Mollick has sparked, Wharton’s research and insights will be critical to understanding and implementing this transformative technology. If the future of AI is a chessboard, Mollick and Werbach aim to stay several moves ahead. But maybe it’s best to leave the game analogies to Mollick. As he writes in Co-Intelligence, “We are playing Pac-Man in a world that will soon have PlayStation 6.”