
Thomas Edsall is something of a hype man for doomerism of all kinds, especially when the doom can be connected to conservatives in general or President Trump in particular. Still, he's often interesting to read. That's the case today, when he has written a typical column about how AI could in fact go horribly wrong in the hands of bad people.
He, of course, thinks the bad people are billionaires in Silicon Valley, but his points are more widely applicable. The most interesting thing in the column is a point he makes about central planning and Hayek's insight into its limitations.
“If we stay on the current path, the risk of extreme concentration — both economic and political — is very real,” Erik Brynjolfsson, a professor of economics and director of the Digital Economy Lab at Stanford, wrote by email…
I found a 2025 paper by Brynjolfsson and Zoë Hitzig, a junior fellow at Harvard, “A.I.’s Use of Knowledge in Society,” to be exceptionally informative.
Brynjolfsson and Hitzig showed how the ability of A.I. to collect, manage, gain access to and store information upended Friedrich Hayek’s classic economic argument that free markets are inherently superior to the central planning of socialism…
“Hayek’s famous insight,” they wrote, “was that central planning — even if economically efficient — is not feasible because the necessary knowledge is inherently dispersed throughout the economy.”
Hayek wrote "The Use of Knowledge in Society" in 1945. It was a response to earlier writings that suggested an efficient economic system could be organized through central planning. Hayek argued that, in fact, there can never be a single central planner because all of the information necessary to make efficient use of resources is highly distributed. Here's a bit of what he wrote:
…the “data” from which the economic calculus starts are never for the whole society “given” to a single mind which could work out the implications and can never be so given.
The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess.
The core of his argument is that decentralization is necessary and inevitable if good decisions are to be made. The results of those decisions are then transmitted to other economic actors through prices, which allow them to make their own decentralized decisions. There's a famous popularization of this idea called "I, Pencil," written by Leonard Read in 1958. It's literally written from the point of view of a pencil telling its family history:
I am a lead pencil—the ordinary wooden pencil familiar to all boys and girls and adults who can read and write…
…if you can become aware of the miraculousness which I symbolize, you can help save the freedom mankind is so unhappily losing. I have a profound lesson to teach. And I can teach this lesson better than can an automobile or an airplane or a mechanical dishwasher because—well, because I am seemingly so simple.
Simple? Yet, not a single person on the face of this earth knows how to make me. This sounds fantastic, doesn’t it? Especially when it is realized that there are about one and one-half billion of my kind produced in the U.S.A. each year.
The author then goes into all the specialized work and machinery necessary to make pencils:
My family tree begins with what in fact is a tree, a cedar of straight grain that grows in Northern California and Oregon. Now contemplate all the saws and trucks and rope and the countless other gear used in harvesting and carting the cedar logs to the railroad siding. Think of all the persons and the numberless skills that went into their fabrication: the mining of ore, the making of steel and its refinement into saws, axes, motors; the growing of hemp and bringing it through all the stages to heavy and strong rope; the logging camps with their beds and mess halls, the cookery and the raising of all the foods. Why, untold thousands of persons had a hand in every cup of coffee the loggers drink!
The logs are shipped to a mill in San Leandro, California. Can you imagine the individuals who make flat cars and rails and railroad engines and who construct and install the communication systems incidental thereto? These legions are among my antecedents.
Consider the millwork in San Leandro. The cedar logs are cut into small, pencil-length slats less than one-fourth of an inch in thickness. These are kiln dried and then tinted for the same reason women put rouge on their faces. People prefer that I look pretty, not a pallid white. The slats are waxed and kiln dried again. How many skills went into the making of the tint and the kilns, into supplying the heat, the light and power, the belts, motors, and all the other things a mill requires? Sweepers in the mill among my ancestors? Yes, and included are the men who poured the concrete for the dam of a Pacific Gas & Electric Company hydroplant which supplies the mill’s power!
You get the idea. It's a great take on Hayek's concept that knowledge is highly distributed. The guy who poured the concrete for the dam doesn't know anything about how to run a lumber mill, and the guy who runs the mill probably doesn't know how to mine iron ore to make saw blades. And so on and so on. There is no one mind, as Hayek says, that ever contains all the needed knowledge, and so central planning will never work.
But what if there were a single mind that could understand all of it? Not a human mind, but a machine mind. That's the point Erik Brynjolfsson at Stanford is trying to make. Maybe things have changed.
The rise of A.I., however, blasts a gaping hole in Hayek’s thesis by opening the door to a 21st-century form of central planning, in this case by government or more likely by private-sector corporations and their chief executives. “Powerful A.I. can shift the optimal locus of control through two channels: (1) by codifying local knowledge that was previously tacit and inalienable and (2) by expanding information processing capacity to aggregate, interpret and act on data,” the authors said.
These forces, Brynjolfsson and Hitzig contended, make “centralized coordination and control more feasible and more efficient,” creating incentives for “larger average firm size, greater industry concentration and reduced local managerial autonomy.”
The implications, they continued, extend “beyond economic considerations: Centralization of economic power can lead to centralization of political power and dampen incentives to invest in human capital.”
Does he have a point? I don't know. It's certainly interesting to consider. There were no computers when Hayek wrote "The Use of Knowledge in Society" in 1945. The possibility of an electronic brain that could know more than any person and think faster than any person wasn't even an idea until 1946, when author Murray Leinster wrote a short story called "A Logic Named Joe" describing something like a smart, networked home computer.
Hayek's point was about more than just knowing how things are done in some bookish sense. He was talking about local experts responding to the minute and constant changes in conditions, the kind of knowledge that only comes from years of doing something, whether it's pouring concrete or cutting down trees. I don't think we're at the point yet where even the best AI could know all of those fine details about dozens of diverse topics.
But is there some future where AI can know enough to help governments or big corporations centralize their decisions? In a time when big machines are trained by essentially reading the entire internet, the limits of what a single mind can know really have changed a lot. And that's only what's happening now. What will they be capable of doing in five or ten more years?
Where I disagree with Edsall is that he always seems to see the looming dark cloud over everything. I think that's probably realistic if we're talking about how autocratic governments like China and Russia might try to use AI to centralize their control over their own people, i.e., to turn the knowledge of everything into a cage. But I don't think the same threat will be coming from Silicon Valley, because I don't think the people there are constitutionally the same as Putin or Xi Jinping. Still, it will be interesting to see how Hayek's insight holds up in the 21st century and the age of thinking machines.
Editor’s Note: Do you enjoy Hot Air’s conservative reporting that takes on the radical left and woke media? Support our work so that we can continue to bring you the truth.