When I studied economics in the early 1970s, the totally dominant point of view in academic circles was that differences in profit rates were industry effects. High profits were due, it was argued, to muted competition taking place behind high barriers to entry. Economic studies compared industry profitability to overall industry concentration and found a relationship. A done deal.
This dominant view had serious consequences. It shaped the Federal Trade Commission’s guidelines on concentration and mergers and served as the background to many antitrust battles in the courts. At this time, the strategy field was just beginning to find its legs as a legitimate discipline and I began to realize that our basic proposition was that this “industry” view was terribly incomplete. Case studies of a number of industries had revealed that competing businesses did not all approach the market in the same way, and often achieved sharply different levels of performance. We called these differences in approach strategies. Our field data indicated that many industries were far from homogeneous—within each industry, firms differed greatly in their approaches and their results.
It was 1974 when I first had the insight that the traditional measures of industry profitability might be spurious. Imagine a world in which school students differ in their ability to do math. Now, line those students up randomly and put the first ten in one classroom, the second ten in the next classroom, and so forth. Because students differ, and because the classes aren’t that big, the average math scores in each classroom will differ from one another. Along comes a bleary-eyed economist who says “This is clearly a classroom effect. Some classrooms produce higher math scores than others.” His conclusion is incorrect because the differences in classroom averages are just reflections of individual differences, not the causes of the differences in average scores.
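The classroom thought experiment is easy to run as a quick simulation. Everything here except the class size of ten is my own illustrative choice (number of classes, score mean and spread), but the point carries: classroom averages differ by exactly the amount sampling noise predicts, with no classroom effect at all.

```python
import random
import statistics

random.seed(0)

N_CLASSES, CLASS_SIZE = 500, 10

# Students differ individually in math ability; there is NO classroom effect.
scores = [random.gauss(70, 10) for _ in range(N_CLASSES * CLASS_SIZE)]

# Line the students up randomly and slice them into classrooms of ten.
classes = [scores[i * CLASS_SIZE:(i + 1) * CLASS_SIZE] for i in range(N_CLASSES)]
class_means = [statistics.mean(c) for c in classes]

# The classroom averages do differ from one another...
spread = statistics.stdev(class_means)

# ...but only by about sigma / sqrt(n) = 10 / sqrt(10) ~ 3.2, which is what
# pure sampling noise produces when individual differences are the whole story.
print(f"std dev of classroom means: {spread:.2f}")
```

The "bleary-eyed economist" sees the dispersion in `class_means` and infers a classroom effect; the simulation shows that dispersion is fully explained by individual differences plus small class sizes.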
Similarly, the traditional measures of “industry” performance might show industry differences even if there are no industry effects. The computer industry might appear profitable (in 1974) because it contained IBM, then a world-class stellar performer. And, the farm equipment industry might appear unprofitable because it contained International Harvester, a troubled company.
If you measure the profitability (return on capital) of each large company in the United States, the resulting pile of numbers has a mean and a variance (a measure of dispersion). How much of this variance, I wondered, was due to differences among real industry factors and how much was due to especially high and low performing firms sprinkled among the industries? And, of this dispersion, how much was stable over time? What if, I wondered, all the studies of industry performance were wrong-headed? What if the differences in industry performance being so carefully analyzed were mostly an illusion, being noisy reflections of profit differences among companies?
The question I was asking was unfamiliar in economics and business research. Most empirical research on industry and business performance used regression analysis to search for the “causes” of performance differences and used “statistical significance” as the indicator that a valid cause had been found. There were two problems with this way of working. First, industries are made up of businesses—technically, businesses are nested within industries. Consequently, one industry can appear more profitable than another simply because it contains one or two highly profitable companies. Disentangling this kind of mess is a problem in variance components analysis, not regression. Second, I was not interested in statistical significance, but in importance, and they are far from being the same thing. To say an effect is statistically significant is to say that it isn’t zero. I was willing to grant that pure industry effects were not zero. But were they important compared to business effects?
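A toy version of the nesting problem shows what variance components analysis does that a regression on industry averages cannot. The industry and business-unit variances below are invented for illustration (business effects deliberately dwarf industry effects); the one-way method-of-moments estimator is a standard, minimal sketch of the idea, not the chapter's actual procedure.

```python
import random
import statistics

random.seed(1)

N_IND, N_BUS = 200, 20
SIGMA_IND, SIGMA_BUS = 2.0, 6.0  # assumed: business effects dwarf industry effects

# Businesses are nested within industries: each business's ROI is a common
# industry effect plus a business-specific effect.
industries = []
for _ in range(N_IND):
    ind_effect = random.gauss(0, SIGMA_IND)
    industries.append([10 + ind_effect + random.gauss(0, SIGMA_BUS)
                       for _ in range(N_BUS)])

# A naive "industry study" would stop at industry averages, which mix the
# true industry effect with noisy reflections of business differences.
ind_means = [statistics.mean(ind) for ind in industries]

# Variance components (one-way ANOVA, method of moments) separates the two:
within = statistics.mean(statistics.variance(ind) for ind in industries)
between = statistics.variance(ind_means)
ind_var_hat = max(between - within / N_BUS, 0.0)  # pure industry component
bus_var_hat = within                              # business component

print(f"estimated industry variance: {ind_var_hat:.1f}")  # near SIGMA_IND**2 = 4
print(f"estimated business variance: {bus_var_hat:.1f}")  # near SIGMA_BUS**2 = 36
```

The correction term `within / N_BUS` is the whole point: the raw variance of industry means overstates the industry effect by exactly the sampling noise that business-to-business differences inject, just as in the classroom example.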
My early results showed that industry factors were there, but that stable business-to-business differences in performance were very much larger. It was an exciting finding and I sent the paper to the American Economic Review. It received a speedy rejection. The editor questioned the cleanliness of corporate accounting data and, more depressing, had found the topic of little interest. Inexperienced at rejection, I let the matter rest.
A decade later I was surprised to see the American Economic Review carry an article on this very subject. Richard Schmalensee,1 a noted MIT economist (who later served on Clinton’s Council of Economic Advisors and who became Dean of MIT’s Sloan School in 1999) had used a newer, better data set, collected by the Federal Trade Commission (FTC). Schmalensee reported that most of the variance was industry related—a direct contradiction to what I thought I had observed. In fact, he concluded that industry effects accounted for about 20 percent of the variance in business returns, that less than 1 percent was related to market share, and that the corporate parent effect was zero.
Reading his piece brought on a sharp pang of regret. Schmalensee’s work was both innovative and careful. But this had been my private topic—I should have kept working the data, improving my methods. I should have seen the usefulness of the new FTC data. This lead article in an important journal could have been mine. But there was no one to blame but myself. I just hadn’t tried hard enough. Near the end of his life, Justice Thurgood Marshall said “I did the best I could with what I had.” Unfortunately, I hadn’t done all I could and it hurt.
After loss came puzzlement. How could he and I have obtained such contrary results? Was it the new data? On closer reading, I saw the problem. Just as I had done, Schmalensee used variance components methods to analyze the industry data. But he assumed that all meaningful differences among businesses, apart from industry and corporate effects, were due to market share differences. This approach reflected the dominant opinion—that firms within industries were identical but for scale. Were this true, Schmalensee’s market-share measure would have captured the only source of business-to-business variation. But strategy research had shown that there were many ways other than scale in which businesses differed within industries. There was systematic variation his method did not capture. In my opinion, the right approach was to use variance components methods for both industry and business unit performance. My hypothesis still had legs and the game was not over.
Three years afterward I found myself crouched in a small dark room at the Federal Trade Commission, trying to learn how to use their antiquated computer to perform the analyses I wanted. After months of fussing, the results appeared on a print-out—business unit effects far outweighed industry effects. Schmalensee’s conclusion was not correct. The within-industry differences his analysis had rejected as “noise” actually contained the majority of the story. The dominant opinion was wrong.
Of the total variance in ROI, 46% was due to stable business-unit effects and a much smaller 8% was due to stable industry effects. (In addition, 37% was due to annual fluctuations in business-unit ROI and 8% was due to annual fluctuations in industry ROI.)
The result that was to be the most controversial was that only 1% of the variance was due to corporate effects. Despite the extensive literature on corporate strategy and the focus on the senior leaders of large diversified firms, these data did not support the idea that corporate management had much independent effect on the performances of the businesses in their portfolios. Yes, corporations differed from one another in performance, but that appeared to be almost completely due to having a few high-performing or low-performing businesses.
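Laying the reported percentages side by side makes the decomposition easy to take in at a glance. The labels and figures are the ones given above; the check that the components account for the full variance is my own bookkeeping.

```python
# Percent of total variance in business-unit ROI, as reported above.
components = {
    "stable business-unit effects": 46,
    "stable industry effects": 8,
    "annual fluctuations in business-unit ROI": 37,
    "annual fluctuations in industry ROI": 8,
    "corporate-parent effects": 1,
}

# The five components together account for the total variance.
total = sum(components.values())
for name, pct in components.items():
    print(f"{name:42s} {pct:3d}%")
print(f"{'total':42s} {total:3d}%")
```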
Writing up my findings, I sent the paper to the American Economic Review. Once again, a rejection letter came back, the editor opining that “The subject is not interesting to our readers.” It seemed to me that if one of the journal’s lead articles was mistaken, interest was automatically achieved. But, the social “sciences” are not like physics or medicine, where journals print many short articles simply reporting research results. In the social sciences, people write long-winded articles and the resulting space constraints allow journal editors to make all sorts of judgments about what is interesting and what is not interesting. In any event, I resubmitted the piece to the Strategic Management Journal.2 The paper was published in 1991. Six years later, it won the journal’s Best Paper award.
Some industries are more profitable than others—the FTC data showed that these “industry” effects accounted for about 8% of the total variance in business-unit profitability. On the other hand, a whopping 46% of the variance was due to stable sources of profitability that were specific to each individual business unit. Finally, the idea that some diversified corporations are much better than others at managing the various lines of business in their portfolio received no support. Only 1% of the total variance had anything to do with corporate management.
The overall summary conclusion I drew in that paper was this:
To the extent that accounting returns measure the presence of economic rents, the results obtained here imply that by far the most important sources of rents in U.S. manufacturing businesses are due to resources or market positions that are specific to particular business units rather than to corporate resources or to membership in an industry. Put simply, business units within industries differ from one another a great deal more than industries differ from one another.
Yes, industry matters. But strategy matters more. Within most industries, some businesses do much better than others.