“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity.” — Sundar Pichai WG02, Alphabet CEO, at the 2020 World Economic Forum
Artificial intelligence, arguably the most transformative technology to arrive since the PC era, is enabling businesses to achieve aspirations that would have seemed pie-in-the-sky not so long ago. From foundations first laid in the 1950s and a resurgence decades later as technology accelerated, AI is now equipping businesses with skills that leapfrog their prior capabilities — and do so by an order of magnitude. Illustrating the power of today’s AI are three companies that represent a cross-section of business: a digital-payments firm, a health-care startup, and traditional banks.
In 2020, Visa’s global payment management platform, Cybersource, processed 21 billion transactions, of which two billion were screened for fraud. Each transaction took half a second. “You can only do that if you’ve got AI,” says Carleigh Jaques WG95, global head of Cybersource. “The utility of these tools is incredible.”
KenSci is helping hospitals, health plans, and other health-care companies improve health outcomes through AI-powered insights that lead to better planning and data-driven decision-making. For example, KenSci’s AI platform can predict a patient’s length of hospital stay 40 percent more accurately than the rules-based systems hospitals typically use. This leads to better patient discharge planning, which aids in preparation for staffing levels and bed availability. During COVID, the company also launched real-time analytics to speed up projections, says Sachin Vora WG06, KenSci’s executive vice president.
And thanks to AI, the world’s largest banks may one day essentially become personalized bankers to the masses — ones that “only work for you, and know you, and help you in your time of need,” says Apoorv Saxena WG08, former global head of AI for JPMorgan Chase, who recently joined private equity firm Silver Lake as chief data scientist. “In the world to come,” Saxena predicts, “AI will be embedded in every piece of your business.”
There’s good reason for all of the hype surrounding artificial intelligence. “AI” refers to the capability of computers to mimic human reasoning and interpret visual and linguistic inputs. Machine learning is a subset of AI in which computers are trained using data sets to spot and learn from patterns and make recommendations, decisions, and predictions. Computers follow algorithms — or sets of instructions — to produce the desired results.
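The machine-learning loop described above — train on a data set, learn a pattern, then predict — can be sketched in a few lines. This is a minimal, hypothetical illustration only: the least-squares fit, the transaction amounts, and the risk scores are all invented for the example.

```python
# Minimal sketch of the machine-learning loop: train on examples,
# learn a pattern, then predict on unseen input. Data is invented.

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b — the 'learning' step."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Apply the learned pattern to a new data point."""
    a, b = model
    return a * x + b

# Hypothetical training data: transaction amount -> fraud-risk score.
amounts = [10, 20, 30, 40, 50]
risk = [1.1, 2.0, 2.9, 4.2, 5.0]

model = fit_line(amounts, risk)
print(round(predict(model, 60), 2))  # → 6.04, a prediction for an unseen amount
```

Real systems use far richer models and millions of examples, but the shape — data in, learned pattern out, predictions on new inputs — is the same.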
“Even though the field has been around for 50 years, in many ways, it’s only just getting started, because finally, you have the data and computing power wherein AI can thrive,” says Kartik Hosanagar, who heads Wharton’s Artificial Intelligence for Business program within Analytics at Wharton and is the John C. Hower Professor of Technology and Digital Business and a professor of marketing.
But AI deployed without strong governance guardrails can lead to great harm — not just to business, but to society at large. Wharton recognized the implications of this transformative technological shift and launched AI for Business in 2020. “AI is the next phase of digital transformation” after the internet, mobile, and cloud revolutions, says Hosanagar: “We’re trying to see how we can help the next generation of managers, students, and practitioners in the industry navigate this transition.”
A Practical Approach to AI
AI for Business is an interdisciplinary effort that pairs Wharton’s business and quantitative expertise in a program that readies executives and students for an AI world. What makes it different from other courses in the marketplace? As one executive going through Wharton’s Executive MBA program told Hosanagar: “I’ve taken a bunch of these AI courses elsewhere and never quite understood it, because at the end of the day, they’re really for engineers.”
Wharton’s approach tackles AI through a business lens that leads to actionable insights, fueled by deep expertise in data analytics. “Wharton is probably the most quantitative business school out there,” says Hosanagar, who’s also the author of A Human’s Guide to Machine Intelligence. “There is a role for informed decision-making, and that’s at the heart of who we are at Wharton.”
Hosanagar says the program will help executives answer such business questions as: How do we rebuild around AI? Which initiatives should we chase? Should we pursue one big moonshot AI project, or should we do a portfolio of things? Is AI meant to be a centralized function in the organization, or should it be distributed across different divisions?
Beyond classes, AI for Business also offers research, provides training, supports startups, holds conferences, develops AI tools, and sponsors the AI@Penn student club, among other activities. “Our mission at AI for Business is to provide tomorrow’s leaders with knowledge about where and how to apply AI in the enterprise,” says Mary Purk, executive director for both AI for Business and Wharton Customer Analytics.
One area that’s particularly helpful to companies is partnerships with student teams brought on to solve real-world analytics problems. “Students are well versed in the latest machine-learning and analytics applications and also aren’t encumbered by corporate cultures,” says Purk. “So they typically come up with solutions that are innovative.”
Through these projects, companies can also tap a supply of sharp analytics minds among the students. “There’s a talent drought out there,” says Purk, “and through our corporate partnership programs, companies can collaborate alongside our students and observe what future talent can deliver.”
Silver Lake’s Saxena, who’s been a guest speaker for AI for Business, believes this marriage of AI and business is critical. Consider the e-commerce battle between Walmart and Amazon. “For Walmart to fully embrace the power of the web, it needed to hire more than just engineers who could build a high-performance and usable e-commerce website,” Saxena says. “They also needed to understand and embrace the new business innovations that customer interactions on the web enable — for example, shopping personalization and same-day delivery of online purchases — to successfully compete with Amazon.”
The emphasis on business in AI for Business is a smart one, says Saxena: “Does it matter to CEOs how a mobile phone works? How the cloud, the internet, or the PC works? It doesn’t. What should matter to them deeply is what it enables and how it transforms the way they do business.”
The Importance of an AI Playbook
Companies big and small can’t afford to ignore AI, especially since it’s already touching so many aspects of life: Gmail, the Facebook newsfeed, Netflix movie recommendations, digital assistants such as Siri and Alexa, voice recognition systems, Wall Street trading, self-driving cars, manufacturing robots, customer service chatbots, and much more.
When companies are considering the use of AI, it’s helpful for them to understand the context. “AI and machine-learning and their different models aren’t one-size-fits-all,” says Cybersource’s Jaques. “They need to be relevant and aligned with the business outcome.” For example, Cybersource uses AI to determine the proper fraud-management level for clients by balancing a merchant’s desire to increase sales with minimizing the number of fraudulent transactions. If a client has a business model in which the cost to produce a product or service is low, it may want to accept as many sales transactions as possible. Cybersource would adjust its fraud management system to reflect a moderate tolerance for fraud. For a client with a high “cost of goods sold,” however, the company would apply a lower fraud tolerance level.
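The margin-aware screening Jaques describes amounts to tuning a decision threshold to the merchant’s economics. The sketch below is purely illustrative — the score scale, thresholds, and cost-of-goods cutoff are invented, not Cybersource’s actual rules.

```python
# Hypothetical sketch of margin-aware fraud screening: the threshold
# for flagging a transaction depends on the merchant's cost of goods
# sold, as described above. All numbers are invented.

def fraud_threshold(cogs_ratio):
    """Low-cost merchants tolerate more fraud to capture more sales;
    high-COGS merchants get a stricter (lower) threshold."""
    return 0.9 if cogs_ratio < 0.3 else 0.6

def screen(transaction_score, cogs_ratio):
    """Accept the transaction if its fraud score is under the
    merchant-specific threshold; otherwise send it to review."""
    if transaction_score < fraud_threshold(cogs_ratio):
        return "accept"
    return "review"

# The same risk score yields different outcomes for different merchants:
print(screen(0.7, cogs_ratio=0.1))  # digital-goods merchant → "accept"
print(screen(0.7, cogs_ratio=0.8))  # high-COGS merchant → "review"
```

The point is that the model’s output alone doesn’t make the decision; the business context sets the cutoff.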
Another tip from Saxena: Make sure AI is enhancing the core of the business. For example, a big cost for a logistics company is transport. “There’s an amazing opportunity to use AI to optimize your operations and have a deeper engagement with customers in terms of telling them where their shipment is along the process and proactively identifying obstacles and rerouting it,” he says.
Instead, Saxena has observed leaders who overlook this critical consideration: “I see the CEO get excited about AI, and the innovation team comes in with an Alexa-like interaction model that answers the question, ‘What is the status of my shipment?’ That’s valuable, but it’s not hitting the core of your business.”
Before management greenlights an AI project, it often asks about the return on investment. Saxena says fixating on ROI largely misses the point. Imagine if a BlackBerry executive had the opportunity to develop an iPhone-like mobile phone before Apple. If ROI were the main criterion, it would be very difficult to justify the project, he says: “ROI is easy to quantify but a poor representation of value.” The iPhone went on to crush BlackBerry, which had long been the mobile phone of choice in business.
Companies should view AI’s value similarly — not just in terms of dollars and cents, but for its contribution to long-term value. Consider Netflix. “What is the ROI of personalization on Netflix?” Saxena asks. “The value created might be stickiness, better engagement from a customer, better click-through, more happiness, and a better NPS,” or Net Promoter Score, a metric measuring the likelihood a customer would recommend a company, product, or service.
“Early in the process, if you make ROI the key factor in getting yourself up to speed on AI, you will fail,” Saxena says. While businesses can’t entirely ignore budget questions, a better strategy is to start with a portfolio of smaller AI projects. “There’s a huge benefit in just learning the technology and getting familiar with it,” Saxena says. “Then you start building this portfolio-based approach. Some projects will create amazing value, which will pay for the rest.” Hosanagar adds, “Early failures should not discourage companies from continuing to invest in AI. The first few initiatives should be more focused on organizational learning than generating ROI in the short run. Once data and AI are in the DNA of the organization, business transformation and ROI will follow.”
Once AI is deployed, will the humans who work with it accept its judgments? This is an issue that KenSci encounters in the health-care sector. Gaining trust is “a very important thing in health care, because you’re playing with a very critical aspect — people’s health,” Vora says. That’s why KenSci deploys “explainable AI,” in which results are explained to humans, who then decide whether or not to accept them. “The expert at the end of the day is the doctor or health-care provider — not a machine, not a model, not a scientific approach just by itself.”
Such transparency engenders trust, Vora explains. Let’s say a patient consults a doctor about shoulder replacement surgery. Typically, the doctor would consider half a dozen factors to project the post-surgery recovery of the patient. With KenSci’s system, the AI model would look at more than 100 factors — including anonymized data of other patients with similar demographics who went through similar surgeries. The AI model would provide the surgeon with a prediction of a patient’s postoperative parameters and an explanation of which factors would affect a patient’s full recovery. “It’s an order-of-magnitude difference that can impact the way a data set can define what the outcomes could be,” Vora says.
To gain acceptance from health-care providers, KenSci decided to take an “assistive intelligence” rather than a “prescriptive” approach, Vora says. Its results are meant to assist humans, who maintain final veto power, rather than to tell them what to do. The AI system discloses which variables play the strongest part in reaching its results, so the provider is better able to decide whether to accept them or not.
This type of “explainable AI” is key to gaining acceptance among humans, says Vora: “The scientific approach is to really highlight things that wouldn’t be possible for a human to take into account at the point of decision-making, and it gives a comprehensive overview of what the decision should be — then allows the human to make the determination.”
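The “explainable AI” approach Vora outlines pairs every prediction with a ranking of the factors that drove it, leaving the final call to the clinician. The sketch below is a hypothetical toy: the weights, baseline, and patient fields are invented, and real clinical models are far more sophisticated than a weighted sum.

```python
# Hypothetical sketch of explainable AI: report each prediction along
# with how much each input factor contributed, so the human expert can
# decide whether to accept it. Weights and data are invented.

# Invented model weights for a recovery-time prediction (in days).
WEIGHTS = {"age": 0.3, "prior_surgeries": 2.0, "mobility_score": -1.5}
BASELINE = 20.0  # invented baseline recovery time in days

def predict_with_explanation(patient):
    """Return a prediction plus per-factor contributions, ranked by
    the size of their influence on this particular patient."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    prediction = BASELINE + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return prediction, ranked

patient = {"age": 60, "prior_surgeries": 1, "mobility_score": 8}
days, factors = predict_with_explanation(patient)
print(days)     # predicted recovery time in days
print(factors)  # contributing factors, most influential first
```

The clinician sees not just “28 days” but which factors pushed the estimate up or down, and can override the model where it conflicts with clinical judgment.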
As powerful a tool as AI is, its ability to do harm also seems unprecedented. Kevin Werbach, Wharton professor and department chairperson of legal studies and business ethics, has been teaching Big Data, Big Responsibilities, a course on the challenges surrounding AI, for the past five years. He says Wharton likely was the first U.S. business school to offer a class that focuses on these challenges, including algorithmic bias, discrimination, privacy violations, manipulations, and the risk of inaccurate or problematic AI decisions.
Bias and discriminatory decisions from AI systems can arise when the data that trains the AI model has embedded biases. For example, an AI system may be more inclined to reject a loan application from minority borrowers if its training data includes fewer approvals for those groups. Even if the borrower doesn’t disclose his or her race, the AI model could still infer the racial group from such factors as zip code and income.
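This proxy effect is easy to reproduce in miniature. In the hypothetical sketch below — zip codes, incomes, and approval history are all invented — race never appears as a feature, yet a naive model trained on skewed historical decisions still rejects applicants by zip code.

```python
# Sketch of proxy bias: even with race excluded from the features, a
# model trained on historically skewed approvals can learn to reject
# applicants by zip code, which acts as a proxy. Data is invented.

from collections import defaultdict

# (zip_code, income, approved) — historical decisions with embedded bias.
history = [
    ("19104", 55, 0), ("19104", 60, 0), ("19104", 70, 1),
    ("19103", 55, 1), ("19103", 60, 1), ("19103", 50, 1),
]

# A naive "model": the approval rate per zip code, learned from history.
totals = defaultdict(lambda: [0, 0])  # zip -> [approvals, count]
for zip_code, _income, approved in history:
    totals[zip_code][0] += approved
    totals[zip_code][1] += 1

def predict(zip_code, income):
    """Approve if the zip code's historical approval rate is >= 50%."""
    approvals, count = totals[zip_code]
    return approvals / count >= 0.5

# Two applicants with identical incomes get different outcomes:
print(predict("19104", 60))  # False — rejected via the zip-code proxy
print(predict("19103", 60))  # True
```

Auditing for this kind of disparate outcome, not just inspecting the feature list, is what the governance practices described below are meant to catch.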
“These are topics that have become of great significance and interest in computer science and data science, but the business world lagged behind,” Werbach says. “The course is designed to help students identify, for all the benefits and potential, what are situations where AI can violate the law, result in ethical quandaries, or simply produce unexpected and harmful results.”
The first guardrail is to have a core set of principles around which to evaluate AI. Next, companies must develop a set of major concern areas that are well understood. “Firms should have a checklist to make sure that at a high level, they’re asking about those kinds of well-identified harms every time they’re considering deploying a new system,” Werbach says. Third, there need to be clear lines of communication between legal and “responsible AI” compliance executives and the front lines, so issues can be identified and addressed early and systematically. Werbach says data scientists are developing sophisticated new models to weed out problematic issues such as bias and privacy violations.
Cybersource has codified its data governance into five pillars: security, control (how data is being used), value (data used has a clear benefit to the ecosystem), fairness, and accountability. Says Jaques: “We wanted to have these guideposts and have them be specific in articulating these values, making sure they’re known within the company so they’re incorporated into the business.”
Hosanagar proposes creating a quality assurance framework for data similar to what already exists for software. “Almost every company that has software engineers designing software for them will also have test engineers to do software testing,” he says. “In data, companies have data scientists build systems, but they don’t have the equivalent of test engineers or quality assurance — and that’s needed.” In his book A Human’s Guide to Machine Intelligence, Hosanagar discusses this and other governance frameworks that companies can roll out to manage the risks associated with automated decisions.
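In practice, the data quality assurance Hosanagar describes often looks like automated checks run against a data set before it is used for training, much as unit tests run against software before release. The sketch below is a hypothetical illustration; the column names and validity thresholds are invented.

```python
# Sketch of "test engineers for data": simple automated checks run on
# a data set before it trains a model, analogous to software testing.
# Column names and thresholds are invented for illustration.

def check_dataset(rows):
    """Return a list of data-quality errors found in the rows."""
    errors = []
    for i, row in enumerate(rows):
        if row.get("age") is None:
            errors.append(f"row {i}: missing age")
        elif not 0 <= row["age"] <= 120:
            errors.append(f"row {i}: implausible age {row['age']}")
        if row.get("income", 0) < 0:
            errors.append(f"row {i}: negative income")
    return errors

rows = [{"age": 34, "income": 70}, {"age": 250, "income": 55}]
print(check_dataset(rows))  # flags the implausible age in row 1
```

A failing check blocks the data from reaching the model, just as a failing unit test blocks a software release.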
Regulators are also still figuring out how to manage the complexities of AI. Europe has proposed rules that would layer on top of its GDPR privacy regulations. “That’s under debate in Europe and hasn’t been finalized or adopted,” Werbach says. “There have been a number of narrower sets of AI regulations proposed but not adopted in the U.S. There isn’t any major jurisdiction in the world today with a comprehensive set of AI regulations.”
Beyond the risks to business, there’s a long-running concern that AI, in a dystopian twist, will destroy jobs — a fear that has surfaced in many forms since the Industrial Revolution. But Saxena challenges that; his view is that AI won’t replace human employment en masse. Instead, jobs held by humans will become smarter and more personalized. For example, with the advent of AI-powered digital assistants, administrative assistants might seem redundant. Not necessarily: “You’ll still be using your personal assistant five years from now,” Saxena says. “Rather than setting up the calendar, which can be done by AI bots, your assistant will be involved in higher-value engagements than what they are doing today.”
Whatever the outlook for jobs, the AI wave looks to be unstoppable. According to a 2021 KPMG survey, the rate of AI adoption surged during the pandemic, with the highest gains seen in financial services (84 percent vs. 47 percent in the prior year), followed by tech (83 percent vs. 63 percent) and retail (81 percent vs. 52 percent). Separately, in a PwC survey of 1,000 executives — including more than 200 CEOs — 86 percent said that AI would be a “mainstream technology” at their companies in 2021, bringing such widespread benefits as creating better customer experiences, improving decision-making, and enhancing innovation, as well as increasing cost savings, operational efficiency, and productivity.
With so much to gain from AI, companies must learn how to use it effectively or risk being left behind. “The CEOs have to decide whether they want to be the disrupter or the disrupted,” Saxena says. “Just like the mobile wave, just like the web, just like the cloud, AI fundamentally transforms every piece of business in a new way. That’s how CEOs should think about it.”
Deborah Yao is a finance and technology writer and editor who has held editorial positions at Knowledge@Wharton, Amazon, and the Associated Press. She currently works in marketing for Milliman FRM, a financial risk management firm.
Published as “The Future of Business” in the Fall/Winter 2021 issue of Wharton Magazine.