Every three months, thousands of executives, analysts and money managers around the world wait anxiously for the hard numbers that will move markets, slash or increase payrolls and drive expansion. Although traditional performance measures such as return on investment, stock price and quarterly earnings have long been paramount, things may be about to change.

In a decade characterized by exploding global competition, a shift from manufacturing-oriented firms to service-oriented firms and almost perpetual reengineering, more and more companies are paying far greater attention to the measurement of “soft” or “intangible” assets — intellectual capital, training, human resources, brand image and, most important, customer satisfaction. As Steven Wallman, a commissioner at the U.S. Securities and Exchange Commission, says, “assets like customer satisfaction and employee loyalty are increasingly viewed as drivers of wealth production and earnings. If measured reliably, they can provide critically important information to the financial markets.”

Reliable measurements are half the battle. The real challenge, says David Larcker, Ernst & Young Professor of Accounting, is actually linking quality and customer-focused initiatives to accounting and stock market returns.

Larcker and colleague Christopher Ittner, KPMG Peat Marwick Term Assistant Professor of Accounting, are at the forefront of a movement within a growing number of companies to predict future economic performance using soft asset measurements. They, along with University of Michigan Professor Claes Fornell, have developed a sophisticated approach for measuring the economic value of soft assets with a mathematical model based on the American Customer Satisfaction Index and the Swedish Customer Satisfaction Barometer.

“The choice of performance measures is important for everyone in an organization,” says Larcker. “For example, more and more of these measures — like customer satisfaction, employee satisfaction and quality — are being folded directly into bonus contracts for executives. The idea is that traditional financial measures don’t capture everything about managerial performance. There are certain steps a manager can take to improve customer satisfaction that don’t show up immediately in the bottom line, but will show up in the future. These are forward-looking indicators of the company’s economic value.”

Neel Foster, a board member at the Financial Accounting Standards Board, agrees. “As we move into more of an information age and service-based economy, soft assets are becoming more relevant to valuing some companies than brick and mortar. A lot of companies don’t even have brick and mortar.”

The distinction, Foster says, is an important one for analysts whose job is to “figure out what future cash flows are and estimate the value of a business. If they don’t have any information regarding soft assets and the kinds of cash flows that they’re going to generate, they have an incomplete picture.”

Drastic changes in industries such as telecommunications, banking and utilities have fueled the interest in putting a value on soft assets. “The service sector is under tremendous competitive pressure due to deregulation,” Larcker says. “A customer now has a choice of alternative service providers. If a company doesn’t satisfy them, they’ll just go to another company that can.”

The same might be said about employees. “We’re taking a lot of technology from the customer side and applying it to the employee side,” says Jim Brown, WG’92, vice president of quality and business development at GE Capital. “We’re looking at employee retention and employee satisfaction and relating this back to things we can do differently as a company. Previously we had much more generic ‘feel good’ surveys for our employees, asking them how they like it here, how they like their benefits and so forth. Now our questions relate to what makes an employee leave or stay. For example, what is the likelihood he or she will be with us five years from now? That is more of an outcome measure. Then we drill down to the areas that would affect that decision.”

While many companies have, in fact, made progress in improving their measurement of soft assets, the U.S. corporate landscape is littered with firms, even entire industries, that followed narrow financial measures, says Marshall Meyer, professor of management and sociology.

In his forthcoming book, Finding Performance (Harvard Business School Press, 1997), Meyer points to several industries that lost significant market share to competition in the early 1980s because they were obsessed with production and inattentive to quality. The auto industry is one. Another is the mainframe computer industry, whose preoccupation with profit margins in static commercial markets blinded it to growth opportunities in consumer markets and opened the way for clone makers to seize the market for desktop computers. A third is the banking industry, which found itself holding billions of dollars of distressed assets, mainly real estate, in the early 1990s because it had been booking profits while ignoring risk.

“The common element in these stories is that financial disasters might have been avoided had the right measures been in place,” says Meyer. “The stories have taught business people that no single measure can capture the totality of performance and that obsession with any single measure can be very risky. The managers I’ve spoken with, as a consequence, now accept the need for multifaceted performance measurements to capture financial and nonfinancial dimensions of performance simultaneously.”

The usefulness of information provided by nonfinancial measurements is not limited to companies for their internal decision making. As Foster has pointed out, analysts and investors would find the information useful as well. Yet current accounting rules do not allow intangible assets to appear on the balance sheet the way equipment or inventory do, notes Ittner. “Say a company invests in intangible assets such as employee training or research and development. The payoff from that may take some time to show up in a typical financial statement.”

For those analysts who do obtain such information about intangible assets, the results can be powerful, says GE Capital’s Brown. “Morgan Stanley upped its estimate of GE’s stock price almost 20 percent based on the information GE reported to them about what they were doing with their quality initiative and where they thought it was leading,” Brown says.

Ideally, adds the SEC’s Wallman, “analysts ought to be saying that they desperately want this information, and that it must be generated from a model that’s comparable across businesses and is fair, reasonable and verified by a third party such as an auditing firm. As it becomes clearer that this information is both reliable and useful, and that it can measure numbers that provide cross-company and cross-industry comparisons, there will be a growing demand for the information. It will start with sophisticated investors and lenders of capital, who will obtain the information either directly from the company (confidentially if need be) or commission their own studies. Or, you’ll find money managers commissioning studies. Finally, you’ll see information about intangible assets made public.”

II. Tackling the Problem of Measurement

While companies have mastered the ability to pinpoint cycle times and defect rates, measuring intangible assets like customer satisfaction and employee satisfaction has proven more difficult.

According to Ittner, a major pitfall in trying to develop reliable measurement systems is that measures move around too much. For example, a study by Ittner, Larcker and Marshall Meyer of a major financial services firm found that according to its measures, 80 percent of its customers were satisfied in a given quarter. But in the subsequent quarter only 55 percent said they were satisfied. A measurement system like that has too much error, Ittner says. And the implications can be damaging. “If you use the wrong measure, or it doesn’t map into economic performance, not only have you wasted a lot of money creating a customer satisfaction index that’s not very good, but you’ve potentially made disastrous product and service design decisions.”
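
To make the point concrete, here is a rough back-of-the-envelope check in Python (the quarterly series and sample size are invented for illustration, since the article reports neither). It compares the observed swings in “percent satisfied” with the variation that survey sampling alone would produce; when the observed variation is many times larger and the business itself has not changed, the excess points to an unreliable measure rather than real shifts in satisfaction.

```python
# Rough reliability check (illustrative): compare observed quarter-to-quarter
# variance in "percent satisfied" with the variance sampling error alone would
# produce. The series and the sample size are assumptions, not reported data.
import statistics

quarterly_pct_satisfied = [0.80, 0.55, 0.72, 0.61]  # hypothetical quarterly results
n_respondents_per_quarter = 400                     # assumed survey sample size

observed_var = statistics.variance(quarterly_pct_satisfied)
p_bar = statistics.mean(quarterly_pct_satisfied)
sampling_var = p_bar * (1 - p_bar) / n_respondents_per_quarter

# A ratio far above 1, with no real change in the business, suggests the
# instrument itself (question wording, sampling frame, timing) is unstable.
print(f"observed vs. sampling variance ratio: {observed_var / sampling_var:.0f}")
```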

“Many times companies just come up with a list of attributes and ask people to rate them without having any real knowledge of what’s important,” says William Madway, W’79, WG’85, director of research at Charlotte, N.C.-based American City Business Journals, a publisher of 35 business newspapers nationwide. “One could miss the boat entirely in defining what a satisfactory experience is for a customer.”

“A critical factor in developing accurate measurements is making sure that they’re part of a broader-based measurement system that integrates data about a customer’s behavior and the competitor’s customer behavior,” says Denis Hamilton, WEMBA’87, director of quality management at Johnson & Johnson. “Many companies have mastered techniques that measure attributes such as price, image, service and product factors, but the secret is in designing surveys around behavioral segments of the market. The learning comes from contrasting loyal customers against non-loyal customers.”

A large chemical company studied by Larcker is a case in point. After learning that its customer satisfaction levels (and revenue growth rates) were lower than its competitors’, company analysts found that customers wanted the company’s salespeople to be not just order takers, but partners who would interact with them to suggest value-added initiatives, such as how to use technology more efficiently. Once its salespeople made that shift, Larcker says, the company saw noticeable improvements in customer satisfaction and a resulting rise in sales revenue and customer profitability.

Some companies are looking to each other for answers. Hamilton, a point person in J&J’s customer satisfaction initiatives, recently chaired a consortium of nine blue-chip U.S. companies that meets three times per year to discuss best practices in customer satisfaction. One of the main issues is how to standardize surveys, especially when business units may be spread around the world.

A big challenge for the global business entity is how to fold cultural differences into measurement systems, particularly in businesses where service is important to decision making. J&J, for example, is very decentralized and composed of more than 160 operating units worldwide. One of the first things to do, says Hamilton, is determine what kind of measurement scales to use.

“In Germany, scales must be reversed because in the German school system one is considered high and five is considered low, whereas in the U.S., five is typically considered a favorable score and one is lower,” Hamilton says. “In the Far East, questions that require negative responses are typically not desirable because of the cultural orientation against using the word ‘no’.”
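
As a simple illustration of what such scale handling might look like in practice, the sketch below maps locale-specific 1-to-5 responses onto a common 0-to-100 scale before aggregation. The locale conventions follow Hamilton’s description; the function and dictionary names are assumptions made for this example, not anything J&J actually uses.

```python
# Illustrative only: map locale-specific 1-to-5 survey responses onto a common
# 0-100 scale before aggregating across countries. Scale conventions follow the
# article (U.S.: 5 is best; Germany: 1 is best); names here are assumptions.
SCALE_DIRECTION = {
    "US": "ascending",   # 5 = most favorable
    "DE": "descending",  # 1 = most favorable, as in German school grades
}

def to_common_scale(raw_score: int, country: str, scale_max: int = 5) -> float:
    """Convert a raw 1..scale_max response to 0-100, where 100 is most favorable."""
    if SCALE_DIRECTION[country] == "descending":
        raw_score = scale_max + 1 - raw_score  # reverse the scale
    return (raw_score - 1) / (scale_max - 1) * 100

print(to_common_scale(5, "US"))  # 100.0
print(to_common_scale(1, "DE"))  # 100.0 after reversal
```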

Another challenge for far-flung companies is instituting measures across various business units. “You have to get a firm-wide measure, but it’s so dependent on what’s happening in local markets,” says Madway. “In my company, these business units tend to be very autonomous. It has varied from city to city as to whether the individual office did any type of customer satisfaction study. And, if they did, it wasn’t uniform. One of my goals is to standardize the measurement process.”

Equally difficult for managers like Madway is restating questions in a more detailed and comprehensive way to get the most useful information.

“Sophisticated performance measures are very new to many industries, especially newspaper publishing,” Madway says. “In the past, newspaper executives would test readership behavior by asking their readers about a few major attributes such as appearance, reporter quality and quality of coverage, using simple scales ranging from excellent to poor.”

Today, Madway notes, there are much better gauges for predicting readership and loyalty, such as the degree to which the paper contains information or articles that help readers with their business, or the degree to which interesting articles are easy to find. “We’ve taken the same types of questions and changed the way they’re asked.”

III. Linking Customer Satisfaction Measurements to Quality

Larcker and Ittner recently surveyed senior managers responsible for managing quality programs at major U.S. firms and found that 75 percent felt considerable pressure to demonstrate the financial consequences of their quality initiatives. Yet only 29 percent thought they could link quality to accounting returns such as return on assets, and only 12 percent could link quality to stock price returns. Similarly, only 27 percent could link customer satisfaction measures to accounting or stock price returns.

According to Ittner, a common stumbling block is most firms’ inability to link internal and external quality data. For example, defect rates and cost of poor quality are typically tracked by the quality or operations department, whereas customer satisfaction is tracked by those in marketing research. “If you really want to find out the returns of these quality programs, or which projects are going to give the biggest payback, you’d better look at both internal and external effects simultaneously,” Ittner says.
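
A minimal sketch of the kind of linkage Ittner describes appears below: internal defect data from operations and external satisfaction scores from marketing research are joined by product and period so the two can be examined side by side. The column names and figures are invented for illustration.

```python
# Sketch: join internal quality data (operations) with external satisfaction
# data (marketing research) by product and period. All values are hypothetical;
# only the idea of linking the two data sources comes from the article.
import pandas as pd

internal = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "defect_rate": [0.031, 0.024, 0.018, 0.017],
})
external = pd.DataFrame({
    "product": ["A", "A", "B", "B"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "satisfaction": [71, 78, 84, 85],
})

merged = internal.merge(external, on=["product", "quarter"])
# If the internal-external link is real, higher defect rates should line up
# with lower satisfaction in the combined data.
print(merged[["defect_rate", "satisfaction"]].corr().loc["defect_rate", "satisfaction"])
```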

“Putting more emphasis on nonfinancial measures allows a much broader understanding of what quality means,” Ittner adds. “For example, does the product or service meet customer needs? Measuring defects is one component of that, but it’s a very minor component compared to things like ‘Is it aesthetically pleasing?’ ‘Can it be serviced?’ ‘Is it reliable?’ Incorporating nonfinancial measures gives a more encompassing definition of quality.”

The difficulty, however, is that managers do not always know what to do with reams of data, says Patrick Harker, UPS Transportation Professor for the Private Sector and professor of operations and information management. “There was a lot of frustration in the early quality movement because organizations wouldn’t do anything about the information they were collecting about customers. It would get lost in a bureaucratic abyss.”

Indeed, several recent studies indicate that a majority of quality programs have failed to produce significant economic returns. McKinsey & Company, for example, found that nearly two-thirds of the quality programs that it examined had either stalled or fallen short of delivering real improvements. A survey by A.T. Kearney found that 80 percent of more than 100 British firms reported “no significant impact as a result of TQM,” and more than 300 U.S. companies surveyed by Arthur D. Little found “zero competitive gain.”

“Everybody assumes that customer satisfaction translates into better performance in the future,” says Larcker. “But so far there has been relatively little evidence to back this up …

“From a business perspective, there are two considerations: First, can you develop measures of satisfaction — relating to customers, employees, whatever — that actually tell you something about future economic consequences? Second, what components, or drivers, can you change to cause this measure to go up? If you can answer these two questions, this will tell you where you want to focus your quality initiatives.”

The methodology used by Ittner and Larcker — which is based on several recent statistical advances and innovations in the analysis of qualitative and quantitative data — provides customer satisfaction measures that are more predictive of the ultimate economic goals of the quality program. In addition, the model explicitly includes the elements of service, quality and so on that are leading to customer satisfaction, thereby providing insight into those quality improvement opportunities offering the highest potential economic payback.

To test their model, Larcker and Ittner have used data from several manufacturing and service organizations. Ittner says the first step in estimating this model is conducting qualitative research through techniques like one-on-one customer interviews and focus groups to determine the primary quality components, such as product price, packaging, cleanliness of facilities, courteous service and breadth of promotion programs.

“You have to figure out what people really mean by quality,” Ittner says. “For example, in the fast food industry, initiatives like offering the latest promotions (the newest Walt Disney toy, for example) can have a big impact on customer satisfaction. Quality is a lot more than just the product, especially in service businesses. Ultimately you don’t just want customers to be happy; you want them to come back and buy your product or service again.”

After collecting the initial qualitative data and identifying the potential drivers of customer satisfaction, the researchers use a sophisticated statistical technique that “maximizes the ability of the customer satisfaction measures to predict economic outcomes (retention, customer profitability, the likelihood of the customer providing positive word-of-mouth advertising) and ultimately economic outcomes such as stock price or return on assets,” Ittner says. “You don’t just want to maximize customer satisfaction, but those aspects of customer satisfaction that have the biggest impact on economic performance.”

After making that assessment, a company has a ranking of quality components in terms of their impact on customer satisfaction and the creation of economic value. This is a key input in deciding how a company allocates its budget for quality improvement.
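
The researchers’ actual technique is more sophisticated and relies on recent statistical advances, but the sketch below conveys the underlying idea with plain least squares: regress an economic outcome on candidate satisfaction drivers, then rank the drivers by estimated impact. All data, driver names and coefficients here are simulated assumptions, not results from the study.

```python
# Simplified stand-in for the driver-weighting step: estimate how much each
# satisfaction driver moves an economic outcome (retention), then rank drivers.
# Data and driver names are simulated; plain least squares is used for clarity.
import numpy as np

rng = np.random.default_rng(0)
n = 500
drivers = {  # hypothetical driver scores on a 0-100 scale
    "price": rng.uniform(0, 100, n),
    "service": rng.uniform(0, 100, n),
    "promotions": rng.uniform(0, 100, n),
}
X = np.column_stack([np.ones(n)] + list(drivers.values()))
# Simulated outcome: retention driven mostly by service, a little by price.
retention = (0.2 + 0.006 * drivers["service"] + 0.001 * drivers["price"]
             + rng.normal(0, 0.2, n))

coefs, *_ = np.linalg.lstsq(X, retention, rcond=None)
ranking = sorted(zip(drivers, coefs[1:]), key=lambda kv: abs(kv[1]), reverse=True)
for name, weight in ranking:
    print(f"{name:>10s}: {weight:+.4f}")  # larger weight = bigger payback candidate
```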

To test whether the link has been made, sales figures should be checked against satisfaction scores over some time period (generally six to twelve months). Controlling for things such as layoffs, strikes, or unexpected downturns in the economy, a positive relation between customer satisfaction and financial performance is typically observed.
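
A stripped-down version of that validation step might look like the following, where later sales are related to earlier satisfaction scores with a single control for a one-off disruption. The panel and the control variable are hypothetical; real analyses would draw on the firm’s own data and richer controls.

```python
# Illustrative validation: do earlier satisfaction scores line up with later
# sales once a one-off shock is controlled for? All figures are invented.
import numpy as np

satisfaction = np.array([62.0, 70, 68, 75, 73, 80])      # score in quarter t
sales_later  = np.array([4.1, 4.6, 4.4, 4.9, 4.3, 5.2])  # $M two quarters later
disruption   = np.array([0.0, 0, 0, 0, 1, 0])            # e.g. a strike that quarter

X = np.column_stack([np.ones_like(satisfaction), satisfaction, disruption])
coefs, *_ = np.linalg.lstsq(X, sales_later, rcond=None)
print(f"estimated effect of one satisfaction point on later sales: {coefs[1]:.3f} $M")
```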

“Information like this is invaluable for companies in industries with short customer cycles, like banks or Yellow Pages directories, or in industries like telecommunications and health care where customers can easily switch allegiance,” says Ittner. “You can see fairly quickly if people aren’t happy, and this allows you to make adjustments where needed.”

However, this analysis is not limited to companies with short customer repurchase cycles. In industries with long repurchase cycles, nonfinancial customer satisfaction measures can provide early warning of problems long before financial results turn down.

For those companies that are able to measure soft assets more accurately, say Larcker and Ittner, there is a payoff. A study they completed in 1996 found that companies ranking highest in the American Customer Satisfaction Index, a leading tracker of customer satisfaction data for major U.S. companies that is discussed annually in Fortune, significantly outpaced lower-ranked companies in the stock market.

IV. Looking Ahead

“The cutting-edge work now being done links different parts of a company into a system that is essentially mathematically based, but feeds on data collected within the firm and from its customers,” says Claes Fornell, the Michigan professor who is also founder of The CFI Group, an Ann Arbor-based consulting firm specializing in customer satisfaction measurement.

“For example, if we change this aspect of an employee’s working environment, we can trace the effects of that all the way through to shareholder value. We can see how employee satisfaction impacts the customer’s evaluation of the firm, how that affects return on investment and how return on investment impacts shareholder value.”

Fornell, a research collaborator of Larcker’s, says that one of the outcomes of the work being done by Larcker, Ittner and others, is that companies will not be as captive of short-term financial performance. “Quarterly earnings reports will still be significant, but not as important as they are today. Intangible assets such as customer satisfaction are much more long-term in nature.”

Moreover, customer satisfaction and quality are just the tip of the nonfinancial asset iceberg. “There are other assets — alliances, supplier relationships, intellectual capital, employee diversity, channels of distribution — that are equally, or even more, valuable,” says Larcker.

The work in this area may also mark a resurgence in the quality movement, according to some long-time experts in the quality field.

“Quality is not passé,” says Paul Kleindorfer, Wharton’s Universal Furniture Professor of Operations and Information Management. “It’s the very core of an analytically oriented, customer-focused person who wants knowledge about the market and how his or her organization can create value for that market. In the hands of such a person, quality is a very valuable perspective and a real motor for change and growth.”

 

GE Capital

General Electric, long at the forefront of quality initiatives, is also taking a lead in linking those initiatives directly to financial performance. One of GE’s top subsidiaries, GE Capital, is in the midst of rolling out a customer loyalty survey and customer service program to each of its 26 different businesses, which range from consumer credit cards to highly specialized financing.

David Larcker, Ernst & Young Professor of Accounting, has been directly involved with GE Capital’s initiatives in conjunction with Pamela Cohen of CFI Group, the Ann Arbor-based consulting firm.

The first phase of GE Capital’s initiative involves the collection of customer information and the establishment of extensive databases. Research firms, such as CFI, then work directly with senior managers from the individual businesses to identify target customers and determine what GE Capital employees do day-to-day to affect customer loyalty and retention. One major problem that was uncovered in some of the businesses was a slow and confusing billing process.

Little time was wasted as project teams were deployed from across various GE Capital businesses to redesign the billing process. But, says Jim Brown, WG’92, vice president of quality and business development at GE Capital, “it is not sufficient to look just at the billing process. That might be where the problem manifests itself with the customer, but a lot of times the problem may actually start with sales people not understanding what they’re selling customers and customers not understanding what they’re buying.”

According to Brown, a team’s size depends on the business, typically ranging from six to eight employees to, in some of the larger businesses, as many as 30 employees spread across a number of teams and geographies. It is essential, he says, to pull together teams from across a number of internal functions to solve the problems.

“The most difficult thing in this whole initiative is to take cross-functional teams and have them work on problems spanning across functions. Then, you actually have to get the improvements implemented and make them stick. We’ve found that we need to continually measure these processes and control them through reporting and constant feedback. We currently use a quarterly reporting system, but we’re probably heading to a monthly system.”

GE Capital demonstrates its commitment to improving quality by pulling employees from their regular jobs and dedicating them to quality improvement projects on a rotation that can last up to two or three years. In addition, Brown says, the company has instituted several financial incentives at both upper and lower levels of the organization to be more team-based and to focus specifically on these project results.

“The cutting edge is focusing everything around the customer and linking it to actual behavior, which is probably the next phase we’re going into,” Brown says. “Now, we’re relying a lot on interview data and what customers tell us. Next, we want to link it to actual behavior and finally, link it to financial performance. That doesn’t mean that we’re going to have perfect data, but it’s something we can begin estimating over the next several months, then refine over the next few years. It’s difficult, but there’s no question it can be done.”

Common Pitfalls in Customer Satisfaction Measurement

Although there is no Holy Grail of customer satisfaction measurement, David Larcker, Ernst & Young Professor of Accounting, and Christopher Ittner, KPMG Peat Marwick Term Assistant Professor of Accounting, offer their observations on limitations in typical customer satisfaction measurement programs.

Customer satisfaction measures are unstable over time. In many companies, customer satisfaction measures exhibit considerable variation from one period to the next. However, unless the company (or its competitors) has made a significant change in its operations, this variation is likely to be due to measurement error, rather than to actual changes in customer satisfaction.

Customer satisfaction is measured using a single overall question. Customer satisfaction is too complicated to be measured with one survey item. However, many companies use a single question and simply tabulate the percentage of respondents who provide the highest rating (i.e., a “top-box” measure). Research demonstrates that these methods have dubious reliability.

Customer satisfaction measures cannot be linked to desired economic outcomes. Many firms are frustrated in their efforts to demonstrate that changes in customer satisfaction lead to future changes in customer purchase behavior, sales growth, accounting returns and shareholder value creation. Unless customer satisfaction can be linked to either current or future economic performance, customer satisfaction measures are unlikely to be providing any useful information for decision making and performance evaluation.

The drivers included in the customer satisfaction measurement are not actionable. Even if reliable and valid measures of customer satisfaction can be obtained, they are of little use unless the analysis also identifies the factors that drive customer behavior. Most customer satisfaction analyses include some examination of factors that are assumed to drive customer behavior. However, unless these factors can be translated into action plans, the customer satisfaction measures alone are unable to guide improvement initiatives. For example, customers may indicate that “professionalism” is an important determinant of their satisfaction rating. However, without some additional detailed qualitative analysis of what “professionalism” means to the customer, it is almost impossible to develop action plans for increasing customer satisfaction.

Customer satisfaction measures are not linked to customer profitability analysis. Sophisticated customer satisfaction measures provide valuable information about future customer purchase behavior and the revenue consequences of improvement initiatives. However, it is still necessary to determine the cost required to increase customer satisfaction. In some cases the cost of increasing customer satisfaction can exceed the associated revenue gains. In other instances, the most highly satisfied customers may be the least profitable because of the higher costs of serving them. Without supporting internal accounting systems to capture customer-level profitability data, customer satisfaction measures alone provide limited guidance for improving profitability.
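
A small worked example of this last pitfall, using invented figures, is sketched below: a highly satisfied account can still be unprofitable once cost-to-serve is counted.

```python
# Sketch: a customer can be highly satisfied yet unprofitable once the cost of
# serving the account is counted. All figures are invented for illustration.
customers = [
    # (account, satisfaction 0-100, annual revenue, annual cost to serve)
    ("Acme",  95, 120_000, 115_000),
    ("Birch", 72,  80_000,  40_000),
    ("Cedar", 88,  60_000,  25_000),
]

for account, sat, revenue, cost in customers:
    print(f"{account:<6s} satisfaction={sat:3d}  profit=${revenue - cost:,}")
# Here the most satisfied account (Acme) is also the least profitable, the
# pattern that customer-level accounting data is needed to surface.
```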

Using Nonfinancial Measures to Determine Compensation

Although companies have traditionally relied on financial measures to reward executives, David Larcker, Ernst & Young Professor of Accounting, says that an increasing number of nonfinancial measures like customer satisfaction and employee satisfaction are being used in the bonus contracts of executives. For example, the Chrysler Corporation ties a large portion of its executive bonuses to the J.D. Power and Associates quality survey.

Larcker, Wharton’s Christopher Ittner, KPMG Peat Marwick Term Assistant Professor of Accounting, and Madhav V. Rajan, associate professor of accounting, collected data from 317 companies across many industrial sectors and found that 36 percent of the companies sampled used nonfinancial measures to determine executive rewards.

They also uncovered several key factors that influence which type of measure a company preferred. The primary insight is that companies that pursued strategies founded on innovation and new product development (strategies that are fundamentally long-term in scope) tended to favor nonfinancial measures. In addition, nonfinancial measures are used more frequently in companies with a well-developed quality program, in utilities and telecommunications firms that face regulatory and competitive pressures to improve nonfinancial dimensions such as safety and customer satisfaction, and in firms where financial performance measures are heavily influenced by factors outside the executives’ control.

Finally, they found no evidence that executives manipulated boards of directors into adopting nonfinancial measures to pad bonuses (an important finding because it contradicts a common fear that greed may motivate the adoption of such measures).

Although many companies have embraced the tying of compensation to nonfinancial performance measures, some experts say that the results have been mixed and that success often depends on the organization. In some companies, this approach is applied not just to senior executives, but taken down to the sales level, says Denis Hamilton, WEMBA’87, director of quality management at Johnson & Johnson. But, says Hamilton, that may not necessarily be positive. “The general view I’ve heard is that there’s too much manipulation of the results when you take it down to that level.”

However, Bill Madway, W’79, WG’85, director of research at American City Business Journals, says that companies can always do a better job of satisfying customers, and one way is to tie employee bonuses to nonfinancial performance measures like customer satisfaction. “This applies to everyone, especially the lower-level employees because they’re on the front lines with the customers,” Madway says. “If employees are able to drive up customer satisfaction and it helps the company’s bottom line, they should be rewarded for that.”