Under radically new conditions, some of the old insights need modifying.
A nation’s vision is more than the sum total of its citizens’ desires.
Inez Stepman observes that Milton Friedman failed to foresee the emergence of tech monopolies with powers to suppress free speech rivaling those of the world’s most oppressive states. Nor, indeed, did anyone else. Mark Zuckerberg and Jack Dorsey became the gatekeepers of public communication not through conspiracies to suppress trade, but rather through the emergence of natural monopolies driven by technology. One social media site to post political comments or cat pictures is better than two. Life is easier with Facebook than with Facebook and Myspace. Two pyramids, two masses for the dead, are twice as good as one, quipped Keynes, but not so two railways from London to York.
Nor did the tech entrepreneurs anticipate their monopoly position. Bill Gates tried to sell his PC operating system to IBM, and became for a time the richest man in the world because IBM’s penny-pinchers offered him royalties instead. Once Microsoft controlled the operating system, it controlled the applications as well. Excel and MS Word may be inferior to Lotus 1-2-3 or WordPerfect, but the seamless communication among apps outweighs any minor imperfections.
Government had little to do with this. The free choice of consumers built the Big Tech monopolies. Their technology itself fostered centralization, and as private firms, Facebook and Twitter were free to include or exclude such discourse as they deemed fit. It is conservatives rather than liberals who now appeal to the power of the state to break the information stranglehold of private monopolies.
Free choice among competitive products surely is a good thing when it applies to the array of existing products. Competition requires providers of goods and services to improve their products and charge reasonable prices. But the greatest economic changes—for better or worse—occur when entrepreneurs invest in products that consumers do not yet know they want. When Friedman received the Nobel Prize in 1976, no American wanted a personal computer, a CD player, a microwave oven, cable television, or a cell phone. They couldn’t have because such things didn’t yet exist.
And so the problem of tech monopolies points to a broader failing in economics. What is the price of a product that doesn’t yet exist except in the twinkle of an entrepreneur’s eye? And what is the expected return on investment in a technology that has never been employed? Friedman advanced a one-period model of the market in which the array of available goods and services was more or less fixed, along with the investment opportunity set.
Under conditions of technological change, the economic models go haywire. At the center of modern finance theory is the Capital Asset Pricing Model (CAPM), which concludes that a “market portfolio” consisting of all available investments is the best portfolio to own—unless technological change threatens to transform the set of available investment opportunities. What would you have bought in 1997, anticipating the great changes in valuation due to information technology? The models spit out arbitrarily high valuations for hedges against technological change.
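For reference, the CAPM’s conclusion can be stated compactly. In standard textbook notation (not drawn from the essay itself), the model prices an asset’s expected return as:

```latex
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
```

Here $R_f$ is the risk-free rate and $R_m$ the return on the market portfolio. Every term is estimated from the existing set of investments; a technological shock that transforms that set leaves the historical betas with nothing to price, which is why the model’s valuations of hedges against such change come out arbitrary.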
The Internet oligarchs stumbled onto their monopolies only after a technological revolution realized the potential of quantum mechanics in the form of semiconductors and their applications. And this occurred because the urgent needs of American national security forced the development of these technologies before entrepreneurs began to imagine them.
Cheap, fast, and energy-efficient chips capable of powering personal computers first became available in 1974 when RCA commercialized the CMOS chip manufacturing process and other producers followed suit. But PCs were an afterthought: funding for the new chip processes came from the Defense Department, and the first application of the new hardware put lookdown radar in F-15s, starting the revolution in defense avionics. Defense funding also made possible the semiconductor laser, the basis of optical networks, before anyone knew that the public wanted cable television.
Once the Defense Department funded basic research on these and many other items (the graphical user interface, LED and plasma displays, the Internet itself), and private entrepreneurs invested in them, consumers had a wide variety of competing products from which to choose. But consumer choice played no role whatever in their development. Rather, the state set national objectives—for space travel and national defense—that indirectly led to the development of new products.
Freedom Isn’t Enough
Markets do not do a good job of forming long-term expectations. Although Ronald Reagan spoke of Friedman with genuine admiration, his supply-side economics came from another Nobel Prize winner whom Friedman opposed bitterly: Robert Mundell. Friedman wanted to cut taxes to starve the beast of federal spending; a dollar taxed or borrowed by the government was a dollar less in private hands, and less likely to be spent well. But Mundell argued that the government could do things that the private sector could not. Suppose the government cut taxes to promote growth, but as a result had to borrow money to cover the revenue shortfall, he wrote. If tax revenues rose sufficiently to pay the interest on the newly-issued government bonds, the new government debt constituted an increase in wealth. Mundell’s argument reproduced Alexander Hamilton’s 1790 Report on the Public Credit.
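Mundell’s argument rests on a piece of arithmetic that can be sketched in a few lines. The figures below are illustrative assumptions, not numbers from the essay:

```python
# A minimal sketch of Mundell's claim: a tax cut financed by new
# government debt is self-servicing if the growth it induces raises
# tax revenue by at least the interest due on the new bonds. In that
# case, the new debt constitutes an increase in wealth.

def debt_is_self_servicing(new_debt: float,
                           interest_rate: float,
                           added_revenue: float) -> bool:
    """True if growth-induced revenue covers interest on the new debt."""
    interest_due = new_debt * interest_rate
    return added_revenue >= interest_due

# Assumed example: the government borrows $100B at 4% to fund a tax
# cut, and faster growth raises annual tax revenue by $5B. Interest
# due is $4B, so the debt pays for itself.
print(debt_is_self_servicing(100e9, 0.04, 5e9))  # True
```

The same function shows the limit of the argument: had growth raised revenue by only $3B, the condition would fail and the debt would be a net burden rather than new wealth.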
Even more, Mundell argued, private capital markets do not do a good job of discounting future household income flows. An increase in government debt serviced by future household tax payments effectively discounts those income flows, and so represents an increase in market efficiency. Of course, the market could instead provide home equity loans to households. That practice nearly blew up the world financial system in 2008 and required a trillion-dollar bailout of the banking system.
Friedman thought in terms of short-term exchange; Mundell thought about long-term investment. Friedman advocated floating exchange rates and the issuance of money by private banks. Consumers might pay more for a J.P. Morgan dollar than a Citibank dollar, depending on their perception of credit quality. Mundell instead proposed exchange rates fixed to tradable prices such as gold and other commodities, because long-term investment requires a stable unit of account.
One might extend Mundell’s “Hamiltonian” argument to another dimension of Hamilton’s thinking (although Mundell never did), namely: “public improvements.” It is simply not true that the private sector always will spend a dollar more efficiently than the government. All of the great technological changes of the past three generations—air and space travel, computation, communications, and nuclear energy—began with military needs. No corporate board of directors can justify spending on technologies that have no known commercial value. But the exigencies of war-fighting push the boundaries of physics and justify fundamental research in areas that do not yet have commercial application. Without exception, every element of the Digital Age began with Defense Department investment in new weapons systems.
Irving Kristol’s “two cheers” critique of capitalism, as cited by Stepman, is part of the same issue with our vision of the future (as it happens, Kristol first brought Mundell’s work to the attention of a lay audience in 1975, when he published Jude Wanniski’s essay “The Mundell-Laffer Hypothesis” in The Public Interest). There is something of a higher order than the whims of consumers on any given day at work in the marketplace. America’s brightest minds and most aggressive investors have devoted the past 20 years to consumer-oriented software: apps, games, social media, and so forth. This has created hitherto unimaginable valuations for successful companies, but it has also produced a generation of young people (“iGen,” in Jean Twenge’s bon mot) that is fearful, lonely, risk-averse, and short on attention.
A generation earlier, America’s leaders rallied the public around a vision of unlimited progress whose most compelling expression was manned flight to the Moon. Nothing less than a national commitment to grand technological goals is required to maintain America’s preeminence against the challenge from China. The ambient culture is not an entity that evolves sui generis according to its own internal logic. The challenge of the moonshot transformed the culture. In the early 1960s, teenagers kept models of the Apollo spacecraft on their desks and dreamed of becoming engineers. John F. Kennedy’s summons to accomplishment and sacrifice thrilled my generation. There is something more to a nation than the sum of the desires of its citizens. We do not need to sink into an atavistic national will in order to respond to a higher national purpose.
Instead, we are repeating today the experience of late Hellenism, when Hero of Alexandria (died c. 70 C.E.) invented the steam engine and the programmable automaton, but used the steam engine for magic tricks and his programming to run a puppet theater. In this reiteration of imperial decline, the bread comes from fast food outlets and the circuses are digital.
Milton Friedman’s eloquent defense of personal freedom will resonate for years to come, as a corrective against the unwarranted intrusions of government into private decision-making. But it isn’t enough to guide us through the challenging years ahead.
The American Mind presents a range of perspectives. Views are writers’ own and do not necessarily represent those of The Claremont Institute.
The American Mind is a publication of the Claremont Institute, a non-profit 501(c)(3) organization, dedicated to restoring the principles of the American Founding to their rightful, preeminent authority in our national life.