After decades of slow development, the growth of artificial intelligence is now in hyperdrive and changing human history, whether we’re ready or not. The emergence of ChatGPT, Bard, and other generative artificial intelligence tools seems to have been the starting point for much greater change to come. Such monumental advancement has kick-started important conversations among leaders far and wide, recently bringing together Washington lawmakers and tech titans — from Microsoft co-founder Bill Gates to Meta’s Mark Zuckerberg — to discuss AI’s vast potential and the need for regulation.

Such quick progress has also raised pressing existential questions: Will we seize this moment in history? Will innovations help us become more efficient, or will they make our jobs obsolete? How do we begin to understand it all? And what, exactly, makes us fear technological change so much, anyway?

Wharton professors Kartik Hosanagar, Lynn Wu, Rahul Kapoor, and Stefano Puntoni joined host Dan Loney on Knowledge at Wharton’s new Ripple Effect podcast to discuss these questions and more during a monthlong examination of what the advancements may mean for business and society at large. The following excerpts from those interviews have been condensed and edited for clarity. To listen to the full episodes, visit the Knowledge at Wharton website.

 

How Do We Coexist With Algorithms?

How AI Is Reshaping the World as We Know It and How We Can Keep Up

Kartik Hosanagar, AI at Wharton faculty co-director; John C. Hower Professor; professor of operations, information, and decisions

Dan Loney: There is so much conversation right now about how AI is going to impact business. AI and business are not new to each other; they’ve been connected for some time. But it feels like the conversation has reached a different level. How do you view that combination and how those two will interact in the future?

Kartik Hosanagar, AI at Wharton faculty co-director; John C. Hower Professor; professor of operations, information, and decisions: I’m going to make a big, bold claim, which is that I think AI is going to be like electricity or the steam engine or computers, meaning the kinds of technology that change humanity forever. And this is not a statement I’m making based only on my gut feeling — there is some gut feeling in it, by the way — but on real evidence.

Economists and other researchers have studied these kinds of technologies, which we refer to as general-purpose technologies. These are technologies like electricity and computers that are different from other technologies in a few ways. At a macro level, they stimulate a lot of innovation and a huge amount of economic growth. At a micro level, meaning individual firms, they end up changing the winners and losers in individual markets because of how companies adopt the technology.

“AI is the kind of technology that changes humanity forever,” says professor Kartik Hosanagar.

Take the internet, for example. The largest retailer isn’t Walmart. It’s Amazon. The internet changed competitive dynamics fundamentally. And researchers have looked at the properties of technologies that go on to become general-purpose technologies. All the early data suggests that AI looks like a general-purpose technology, if you look at hiring patterns, AI-related patent filings, and a number of other things. In fact, there was a recent study by my colleague, [professor] Dan Rock, where he looked specifically at large language models like ChatGPT, and his study finds that even those models have some of the properties of general-purpose technologies.

You started by asking what the connection to business is, and I think my answer is that it is going to be fundamentally transformative for business.

DL: Then you’re talking about a pivot moment. We’ve used the word “pivot” a lot over the past three or four years because of the pandemic and how businesses had to pivot to survive. This is a pivot, but on a much larger scale.

KH: Absolutely. Imagine the pandemic without the internet. We were able to continue to work because of Zoom and other things. The internet is a general-purpose technology that has changed our lives, and it has had a huge impact over the past 20 years and certainly the past two or three years. AI will be similar.

We’re starting to see the early things like ChatGPT, but this is just the start. It’s going to change everything. And companies that don’t wake up to that reality — that want to follow rather than lead; that want to say, “This could be just the next buzzword. We will play it safe”; that see an early failure, backtrack, and say there’s no ROI in this, like the companies that did that when the dot-com bust happened — will pay a big price.

DL: What do you say about the calls to slow down the development and take more time and really think this out?

KH: The concerns are legitimate. It is moving very fast. This is a technology that is unlike other technologies we’ve seen in terms of the rate of change and progress, especially given its implications for simple things like employment all the way to things like the use of AI in warfare. I’m not sold on whether a pause in AI work would change anything. What needs to happen is investments in education at school levels, where people are trained to understand AI — things like deep fakes and issues around ethics when building technology. You need to retrain engineers and managers. You also need to retrain your Congressmen and senators, all of the politicians and lawmakers. This is something you solve over 10 years as you change the curriculum.

 

Will Robots Take Our Jobs?

New Advancements Require Firms to Rethink the Traditional Career Ladder

Lynn Wu, associate professor of operations, information, and decisions

DL: What sparked your interest in studying how robots are starting to change employment?

Lynn Wu, associate professor of operations, information, and decisions: At the time — and it’s still going on now — there were so many articles in the press and academia about how robots are going to take over all our jobs and the impending robocalypse. I think it’s really important that we understand what’s going on at the firm level, to see whether firms actually do lay off people en masse after robot adoption. There was only industry- and country-level evidence at the time. And it is really important to study at the firm level, because countries and industries do not adopt robots; firms do. The positive or negative effects that you find depend on whether the firms that adopt actually lay off people, or whether the effect comes from firms that do not adopt. These effects are very difficult to observe when we are looking at the macro level.

DL: Tell us a little bit about the research that you’re doing in this area to try to get a better grasp on what’s been taking place.

LW: Our work, first, studies robot adoption and its effect on employment at the firm level. We used data from Statistics Canada, which has comprehensive data on robot imports and exports — a very good measure of robot adoption at firms. We also have very comprehensive data about firms’ financial performance from their tax filings, as well as government-mandated surveys on various firm practices.

“Robot-adopting firms are not hurting employment. Firms that did not adopt robots are losing the competition,” says professor Lynn Wu.

What we found is exactly the opposite of what people were expecting. Robots did not replace human workers. In fact, the firms that adopted robots hired more people than they did before. So, how do we reconcile this with the evidence we sometimes see at the industry and country level that robots have a negative effect on employment? It turns out it’s not the robot-adopting firms that are hurting employment. It is the firms that did not adopt robots that are losing the competition. They’re not as competitive as before, and they have to lay off people because they’re losing market share.

We also found other effects on employment, and the story is not as rosy. Robot-adopting firms hired more high-skill workers and many more low-skill workers at the expense of middle-skill workers. I define high-skill workers as those with a college education. Low-skill workers are the people who barely finished high school. And middle-skill workers are people with high-school or associate degrees who have some kind of advanced work-related training. It’s these middle-skill workers who are being decimated by robots, and that is a big problem. We also show that managerial work has been decimated by robots. Hollowing out middle-skill work and supervisory work is a big problem, because now the career ladder is broken.

DL: Does the longer-term outlook for middle-skill jobs look very bleak at this point?

LW: I think existing middle-skill work is in trouble. But new middle-skill work will be created. For example, there’s prompt engineering, something you’d never heard of until maybe a few months ago. These engineers are literally trying to make ChatGPT do what it’s supposed to do. We’ll need robot technicians to fix the robots, and process engineers to observe processes and see where robots can be used in production. Over time, these new tasks will evolve into new career opportunities. But the important problem is not necessarily the new jobs that will be created. I guarantee you that new jobs and new tasks will be created. It’s the speed at which we can retrain the existing workforce to take advantage of them. We’re looking at a five-to-10-year horizon. How you retrain that existing workforce is going to be a huge challenge for everyone.

 

Is Your Company Prepared for Generative AI?

Why Now Is the Time for Leaders to Develop New Frameworks Around AI Like ChatGPT

Rahul Kapoor, Management Department chairperson; David W. Hauck Professor; professor of management

DL: What are the challenges associated with these massive technological changes? How does management — and how do corporations — react?

Rahul Kapoor, Management Department chairperson; David W. Hauck Professor; professor of management: There is a critical need to understand what generative AI or ChatGPT can do in terms of your business. This is where most of my research and teaching are focused: on a framework that leaders and managers can use. If you are a leader or an entrepreneur in an emerging business, you have to think about the assets in the business market that your firm is engaging with. And then you have to think about these shifts taking place in technology. For example, what aspects of your assets and business models could complement what ChatGPT could do?

If you are in health-care services, maybe there are certain things that would continue to be done the way they are done. But then ChatGPT or generative AI becomes a value-added service for the patients, for your clients. I think that’s a starting point — how your existing assets and business model are going to be shaped. And not framing it as a negative threat, but thinking about where it can complement and where it might replace.

A parallel thought process applies on the customer side as well. You have existing customers, and then you’re looking for growth. New customers may require you to build a new ecosystem. Once you start thinking about ecosystems, you think about how a new technology like generative AI affects your existing ecosystem. Are your partners and customers better off? Do you need to invest in the ecosystem to create higher value for your customers? Do you need to create new ecosystems, as with electric cars? What Tesla has done and the success story there is not just about the electric car. It’s about building the whole ecosystem.

“The best time to invest in a new disruptive solution or opportunity is when you are running a healthy business,” says professor Rahul Kapoor.

The organizations and leaders who are going to do well are the ones who are going to have a very thoughtful approach to how this shift is affecting their business, both in terms of the positives but also in terms of the challenges. Thinking about it not in terms of just existing customers, but new market opportunities. And not just focusing on the existing value chain, but the broader ecosystem of how these companies create value.

DL: The number of companies and industries that could be impacted by this is extremely large. And it will be on the companies themselves, or maybe the industry, to understand how ChatGPT will be incorporated within the structure of their firms.

RK: The best time to explore something that’s uncertain and disruptive is when you’re not desperate for it to be adopted. The best time to invest in a new disruptive solution or opportunity is when you are running a healthy business. And that’s where the pressure is going to be the lowest for you to take more risk and explore more broadly.

Also, think about what sort of organizational structure would allow companies to take advantage of these disruptive opportunities. It’s often said that these need to be managed as separate businesses or units. You see what Google is doing with Alphabet. There’s a Google business, and then there is a Waymo business, which is its self-driving technology. I think that’s one way to do it. Forming partnerships and alliances with many companies — to share risk, to share cost, to learn from each other — is a model that I think is very effective, especially when the technology is still uncertain. Third, I think we need to be disciplined in terms of how we experiment and innovate as well.

It’s a combination of questions: How do we balance the existing with the new? How do we find an organizational configuration that allows us to maintain that balance? And how do we take a disciplined approach to experimenting with these new technologies and business models while giving them enough of a runway to take off?

 

How Is AI Shaping Human Identity?

The Psychology Behind Our Fear of New Technologies and the Barriers to Widespread Adoption

Stefano Puntoni, AI at Wharton faculty co-director; Wharton Human-Centered Technology Initiative co-director; Sebastian S. Kresge Professor of Marketing

DL: AI is a polarizing topic. But the hope is that this is going to become a normal part of life as we move forward, correct?

Stefano Puntoni, AI at Wharton faculty co-director; Wharton Human-Centered Technology Initiative co-director; Sebastian S. Kresge Professor of Marketing: This is the kind of topic that leaves no one indifferent. I never speak to anyone who says, “No, it’s not interesting, it’s not relevant, it’s not important.” Everybody recognizes it’s a momentous change. It will have implications for all kinds of stuff in life, from our ability to have an inclusive society to our ability to sustain democratic processes. On the positive side, there is incredible potential for improving welfare, well-being, and the economy.

I think there is another element informed by popular culture. We grew up with movies and books that highlight the dangers of technology: The Matrix, The Terminator, 2001: A Space Odyssey, Blade Runner. Those kinds of stories inform how we think about technology, especially today, where we almost seem to be living in a sci-fi movie. We’re at that moment of upheaval and change. And I think some of the fears are bubbling up because of that.

DL: When you think about how our identities are potentially impacted by AI, I would imagine it’s a broad scope of study.

SP: I think of the general question, which is not just to ask, “What do people think of AI, and how would we improve consumer beliefs or acceptance of technology?” Maybe more interesting is to think about how AI changes the way we think about ourselves. It’s a link to identity — our human identity and our identity in specific domains. There are a lot of things that we do in life. We don’t do them only for instrumental reasons — to get a job done. We do them partly because that’s who we are. We have hobbies, passions, ways in which we construe our personas to ourselves and to other people. Technology and automation can be a threat to those personas as more and more activities can be done by machines.

“It’s not about replacing the human. It’s about making the human more effective, inspiring, and productive,” says professor Stefano Puntoni.

A potential stumbling block for a lot of tech deployments in organizations is that people feel threatened by them. They may not want to adopt a technology because they feel they can do the job better, or because they’re afraid that now they’re irrelevant, or because they are worried about what’s coming next. Some of the resistance is due to this perceived threat, so communicating properly about technology is important, as is understanding how a technology might affect people’s sense of identity. Imagine that I am passionately into cooking. There are certain things that I do in cooking that I don’t want replaced by a machine. Maybe I’m baking bread, and I’m okay with using a machine that automates the physical labor of kneading the dough, which is hard work and boring. But what I don’t want is a machine that automates my cognitive skills: my ability to understand what ingredients I need and how to actually bake the bread. So something like a bread-baking machine might feel highly threatening to me.

DL: Is it important to frame the use of AI as complementary to what we already have in society, instead of relying upon it as the next step in how to do everything in our lives?

SP: I think too much of the discussion around AI over the past few years has been “human or machine,” meaning we’ve been thinking about how we can take the human out of the equation. I think we need to change gears now and go into the mind-set I call “human and machine.” Not just understand how we can mimic or replace a human decision process, but focus more on: How do we leverage the unique capabilities of algorithms and of humans, which are different and complementary, to improve business? It’s not about replacing the human. It’s about making the human more effective, inspiring, and productive.

 

Published as “The Rise of AI” in the Fall/Winter 2023 issue of Wharton Magazine.