The Core Thesis

Throughout human history, the difference in earning potential between two people has always been determined by a single, dominant arbitrage — some scarce human advantage that one person could leverage over another. Technology has repeatedly destroyed these arbitrages, one by one, each time democratizing what was previously rare and shifting the axis of economic competition to something new.

We are now approaching a moment where artificial intelligence threatens to collapse the most recent — and perhaps most powerful — arbitrage of all: cognitive ability. What comes after is unknown and unknowable, but understanding the pattern of arbitrage decay is the best preparation we have.


A History of Arbitrage Decay

Strength: The Original Differentiator

For most of human civilization, physical strength was the primary determinant of economic value. The stronger you were, the more you could earn. This was not metaphorical — it was direct and literal.

A strong person could farm more land, haul more goods, build larger structures, forge better weapons, and survive wars. Armies were won by the side with more capable bodies. Cities were built by those who could move stone. The pyramids of Egypt, the Colosseum of Rome, the Great Wall of China — these were monuments to organized physical labor.

If you were born weak or small or in a body that couldn’t do heavy work, your economic ceiling was low. Strength was the arbitrage, and there was almost no way around it.

What killed it: The machine. The steam engine, the cotton gin, the tractor, the crane. Once mechanical power replaced muscle power, a 60-kilogram person operating a machine could outproduce the strongest human alive. The arbitrage of strength collapsed almost entirely. Today, being able to deadlift 300 kilograms is a hobby, not an economic strategy. Outside of niche athletics, no one gets rich by being physically strong.

Knowledge: The Post-Industrial Arbitrage

With machines handling physical labor, the new differentiator became knowledge — understanding how the world worked, how to build things, how to organize systems, how to apply science.

But knowledge was scarce and gatekept. To learn astronomy, you needed access to an astronomer. To learn medicine, you needed apprenticeship with a physician. Knowledge was transmitted person-to-person, master-to-student. To become a Kepler, you needed a Tycho Brahe. To become a great engineer, you needed proximity to other great engineers. Geography and social class determined access to knowledge, and knowledge determined earning potential.

A brilliant mind born in a remote village with no teachers had almost no way to realize that potential. Meanwhile, a mediocre mind born in a university town could absorb enough knowledge to build a comfortable career.

What killed it: The printing press. Gutenberg’s invention in the 15th century began the slow democratization of knowledge. Books could be copied at scale. Ideas could travel without a human carrier. Over centuries — accelerated by public libraries, universal education, and eventually the internet — knowledge became abundant. You no longer needed a master. You needed a library card.

Mental Computation: The Calculator Arbitrage

Even with widespread access to knowledge, some people had natural advantages in mental arithmetic, spatial reasoning, and rapid calculation. Accountants who could compute faster, engineers who could run numbers in their heads, traders who could calculate risk on the fly — these people commanded premiums.

Being good with numbers was a genuine economic edge. Banks, trading floors, engineering firms, and scientific institutions all rewarded computational speed.

What killed it: The calculator, and then the computer. Once a $5 device could outperform the fastest human mind at arithmetic, there was no premium for mental math. Computers extended this to complex modeling, simulation, and data processing. The person who could do differential equations in their head was no more valuable than the person who could type the equation into MATLAB.

Information Access: The Memory and Research Arbitrage

Even after computers handled computation, there remained an arbitrage in information access and memory. The person who had read widely, who remembered obscure facts, who knew where to find specific data — they had an edge. Lawyers who remembered case precedents, doctors who recalled rare diagnoses, consultants who could pull the right framework from memory — all commanded premiums for what was essentially superior information retrieval.

What killed it: The search engine. Google made the world’s information accessible in milliseconds. It no longer mattered whether you had memorized a fact — what mattered was whether you could use it. The premium for encyclopedic memory collapsed. A junior employee with Google was nearly as informed as a senior expert with 30 years of accumulated reading.

Geographic Reach: The Location Arbitrage

For centuries, commerce was local. A craftsman in a remote village could only sell to the people within walking distance. A merchant in a dense city had access to thousands of customers. Location was destiny — economically and commercially.

This extended to every scale of business. A restaurant on a busy Main Street with heavy foot traffic would outperform an identical restaurant in a back alley. A shop in Manhattan had more economic potential than the same shop in rural Kansas. The arbitrage was simply: proximity to people.

What killed it: Globalization and the internet — in waves. First, shipping and trade routes expanded commercial reach. Then the internet obliterated geographic constraints for digital goods entirely. A developer in Nairobi could sell software to a customer in New York. Platforms like Uber Eats and DoorDash killed the location arbitrage even for physical businesses — a restaurant in a back alley paying low rent could suddenly reach the same customers as the Main Street location paying five times more. The back-alley restaurant, freed from high rent, could actually become more profitable.

And it kept going. Cloud kitchens — facilities with no dine-in area, running ten brands out of one industrial kitchen — disrupted even the delivery-optimized restaurants. Dark stores disrupted supermarkets. Each wave killed the previous arbitrage and created a new one.

Pattern Recognition and Synthesis: The Current Arbitrage

After the search engine leveled information access, the remaining edge was what you could do with the information. Two people with the same computer, same internet connection, same search engine, and same data could produce wildly different outcomes based on their ability to recognize patterns, connect disparate ideas, interpret ambiguous data, and synthesize conclusions.

This is the world of the last 15–20 years. The data analyst who spots the anomaly. The strategist who connects a demographic trend to a product opportunity. The investor who reads the same earnings report as everyone else but sees something others miss. The premium has been for cognitive synthesis — the ability to turn information into insight and insight into decisions.

This is the arbitrage most knowledge workers are currently optimizing for. It is also the one that is about to be destroyed.


The AI Disruption: Cognition Gets Commoditized

Artificial intelligence — specifically large language models and their successors — is now commoditizing the very cognitive abilities that defined the last era’s arbitrage. Pattern recognition, data interpretation, logical reasoning, synthesis across domains, and even strategic analysis are increasingly being performed at superhuman levels by AI systems.

This is not a gradual shift. The capabilities are improving on a timeline of months, not decades. Tasks that required a senior analyst in 2023 can be performed by a well-prompted AI in 2025. The trajectory suggests that within 5–10 years, the gap between AI and human cognition for most professional tasks will be similar to the gap between a crane and a human bicep.

When that happens, optimizing for analytical thinking, decision-making frameworks, or data interpretation will be equivalent to the person in 1850 who spent their time getting stronger while factories were being built around them. Not wrong — just irrelevant.

The “Taste” Objection — and Why It Fails

A common response to this thesis is that taste — aesthetic judgment, product vision, creative direction — will be the next human advantage. The argument goes: AI can execute, but someone still needs to decide what’s worth building. This is the “Steve Jobs function” — the visionary who knows what people want before they know it themselves.

This argument collapses under the same logic that killed previous arbitrages. Taste was only economically valuable because each iteration was expensive. When producing an advertisement in the 1960s cost months of work and millions of dollars, you needed someone with extraordinary judgment to get it right on the first or second attempt. The legendary Apple “1984” ad — a woman hurling a sledgehammer at a screen — was a product of an era when you got one shot and it had to be perfect.

Today, digital advertising runs on A/B testing. You don’t need a visionary copywriter. You create 10 variants, measure click-through rates and conversion, and scale the winner. The creative genius of the 1960s ad world has been replaced by iterative testing and data feedback loops. The output is often less romantic, but it’s more effective and far cheaper.
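The iterate-and-measure loop described above is simple enough to sketch in code. This is a toy simulation, not any ad platform's real API: the variant click-through rates, the impression budget, and the function names are all invented for illustration.

```python
import random

def measure_ctr(variant_ctr, impressions, rng):
    """Simulate serving one ad variant and measuring its click-through rate."""
    clicks = sum(1 for _ in range(impressions) if rng.random() < variant_ctr)
    return clicks / impressions

def pick_winner(true_ctrs, impressions_per_variant=10_000, seed=0):
    """Run every variant for a fixed impression budget and return the index
    of the best-measured variant, i.e. the one to scale."""
    rng = random.Random(seed)
    measured = [measure_ctr(ctr, impressions_per_variant, rng) for ctr in true_ctrs]
    return max(range(len(measured)), key=lambda i: measured[i])

# Ten hypothetical variants; the "true" rates are unknown to the tester.
ctrs = [0.010, 0.012, 0.009, 0.031, 0.011, 0.015, 0.008, 0.013, 0.010, 0.012]
winner = pick_winner(ctrs)
print(f"scale variant {winner}")
```

No judgment enters the loop: the budget is spent uniformly, the measurement selects the winner, and a clearly better variant (here 3.1% versus at most 1.5%) is found with near certainty despite sampling noise.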

AI accelerates this to its logical extreme. When the cost of producing a variant — of an ad, a product, a design, an app — drops to near zero, taste becomes unnecessary. You simply generate many options and let real-world feedback select the winner. Vision is replaced by volume and measurement.

The “Product Visionary” Objection — and Why It Fails

A related argument is that great product managers, designers, and founders will always be needed to build products that satisfy human needs. Someone has to understand what people want and shape the experience.

But the entire “product person” archetype exists because of a market inefficiency: building products is expensive. You need someone to correctly guess what millions of people want because you can’t afford to build 50 versions and see which one works.

As AI makes building radically cheap, two things happen. First, the iterate-and-test approach replaces the guess-correctly approach, just as it did in advertising. Second — and more fundamentally — people will increasingly build hyper-personalized tools for themselves. Why settle for an app that satisfies 80% of your needs when you can prompt an AI to build one that satisfies 99%? The role of the product visionary was to aggregate demand across millions of users. When each user can have their own version, there is no demand to aggregate.

The Capital Objection — and Why It’s More Complex

Another natural response is that capital ownership will be the durable advantage. Whoever owns the AI, the compute infrastructure, the robots, and the data will capture the returns — just as factory owners captured the returns when machines replaced muscles.

This argument has surface plausibility but fails a historical test. Capital has not persisted across technological waves. It has been recreated by new entrants who rode the wave, while incumbents who held the old capital often lost.

Kings held all the wealth in feudal societies. Merchants displaced them. Bankers displaced merchants. Industrialists displaced bankers, and each new wave of industrialists displaced the last. Dhirubhai Ambani, one of the wealthiest people in Indian history, started with borrowed money and no generational wealth. Elon Musk built his fortune as an immigrant, not as an heir to the prior era's capital. Bill Gates, Mark Zuckerberg, Larry Page, Sergey Brin — none of them inherited the previous era's capital. They created new capital by recognizing and exploiting a new arbitrage before others did.

The pattern is clear: each technological wave doesn’t enrich the existing capital holders. It creates new ones. The person who will accumulate capital in the AI era is probably not today’s billionaire — it’s someone who will see the next arbitrage first.


Can the Next Arbitrage Be Predicted?

Earlier in this framework, we argued that the next arbitrage is unknowable — that a Roman soldier couldn’t have predicted software, and so we can’t predict what follows AI. There is truth in the general case: most people at any given transition cannot see what comes next. A Roman soldier in 100 AD had no conceptual vocabulary for transistors or search engines.

But the claim that nobody can see it is historically false. The people closest to the frontier of each new technology often saw exactly what was coming.

Thomas Edison and Nikola Tesla understood what electrification would do to the world decades before it happened. Bill Gates famously declared “a computer on every desk and in every home” when computers were room-sized machines owned by corporations and universities. He didn’t just see that compute would improve — he saw that it would become a commodity, that it would move from institutional to personal, and that the real value would be in the software layer on top. Steve Jobs saw that the phone would become a general-purpose computer. Jeff Bezos saw that the internet would reshape retail before most retailers took the web seriously.

The pattern isn’t that the next arbitrage is unknowable. It’s that it’s unknowable to people who aren’t paying close attention to the frontier. The Roman soldier couldn’t predict software — but if he had been an engineer working on early mechanical devices, he might have seen further than his peers. Each technological wave is visible in advance to the people standing closest to it.

This means the correct question isn’t “what will matter after AI?” asked abstractly. It’s: if you look at AI’s current trajectory and its unsolved problems, what capability gap is most likely to become the next arena of competition?


The Next Arbitrage: Spatial Intelligence

The entire world right now is focused on one type of artificial intelligence: knowledge-based intelligence. Logical reasoning, data interpretation, language understanding, synthesis, and decision-making. This is the domain of chatbots, coding assistants, AI analysts, and autonomous agents. It is impressive, it is advancing rapidly, and — critically — it is approaching saturation.

In five years, knowledge-based AI will be commodity infrastructure. Dozens of companies are racing toward the same capability. The marginal improvement between the best and tenth-best language model will be negligible for most practical purposes. Trying to build a career or company around knowledge AI today is like trying to enter the search engine market in 2005. The puck is already there. The question is where it’s going.

The puck is going to spatial intelligence.

Here is the logic. Knowledge AI tells you what to do. But knowing what to do is only half the problem. The other half is altering the physical world — actually doing the thing. That requires robots. And robots require a fundamentally different type of intelligence: spatial reasoning, physical manipulation, real-time adaptation to unpredictable environments.

The Body Is an Operating System. The Hand Is the Unsolved Problem.

Consider what a humanoid robot actually is. Strip away the science-fiction aesthetics and what remains is surprisingly simple in structure: legs and hands. These are the two interfaces between any agent and the physical world. Ninety-nine percent of human interaction with the physical environment happens through feet and hands. No one opens a door with their chest, pulls a chair with their thigh, or picks up an object with their stomach.

Of these two interfaces, legs are largely a solved problem, or at minimum one that is being solved rapidly. Bipedal locomotion, balancing, navigating terrain: Boston Dynamics, Tesla's Optimus, and numerous Chinese robotics companies have demonstrated increasingly capable solutions. Walking, running, dancing, even recovering from pushes — the balancing problem is well on its way to commodity status.

Hands are not solved. And this is where the real frontier lies.

Robotic hands have been built for specific, constrained use cases. A factory robot can weld the same joint thousands of times. A cooking robot can execute predefined recipes in a controlled kitchen. But these are narrow solutions — the equivalent of a calculator that can add but cannot run a spreadsheet.

The human hand is extraordinarily complex. It has dozens of degrees of freedom. It dynamically adjusts grip strength, orientation, and finger placement based on the object, the task, and real-time feedback. Consider something as simple as picking things up: when you lift a pen, your palm faces downward. When you pick up a frying pan by its handle — which is cylindrical, like a pen — your palm faces upward, and you apply completely different force. You do this without thinking. A toddler who has never seen a frying pan will initially try to grip it like a pen, feel it slipping, and spontaneously adjust — changing grip style, wrist orientation, and force simultaneously. This adaptive, generalizable dexterity is the unsolved grand challenge of robotics.
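The toddler's adjustment loop can be caricatured in a few lines. This is a deliberately crude sketch under an assumed Coulomb friction model (slip occurs while friction times grip force is below the object's weight); real dexterous manipulation involves many coupled degrees of freedom, and every name and number here is invented for illustration.

```python
def adjust_grip(initial_force, object_weight, friction, max_force=50.0, step=1.0):
    """Closed-loop grip sketch: tighten until the slip signal stops.

    Crude Coulomb model: the object slips while friction * force < weight.
    """
    force = initial_force
    while friction * force < object_weight:  # tactile "slipping" feedback
        force += step                        # tighten, like the toddler
        if force > max_force:
            raise RuntimeError("object too heavy for this gripper")
    return force

# The same initial pen-like grip works on a pen but must adapt for a pan.
pen_force = adjust_grip(initial_force=2.0, object_weight=1.0, friction=0.8)
pan_force = adjust_grip(initial_force=2.0, object_weight=12.0, friction=0.5)
```

The point of the sketch is the feedback structure, not the physics: the controller never needs to recognize "frying pan" in advance, it only needs a slip signal and a way to respond, which is exactly the generalization that current robotic hands lack across grip style, wrist orientation, and force simultaneously.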

Why Humanoid? Because the World Is Already Designed for Humans.

A reasonable objection is that humanoid robots are an inefficient form factor. For many industrial tasks, that’s true — a conveyor belt moves boxes better than a bipedal robot carrying them. But for domestic and shared-space applications, the humanoid form is not a stylistic choice. It is an infrastructural necessity.

Every human environment on earth is designed for the human body. Door handles sit at hand height. Kitchen counters are at waist height. Stairs are sized for a human stride. Cabinet knobs are shaped for fingers. Light switches, faucets, drawers, hangers, stovetop knobs, refrigerator shelves — trillions of dollars of built infrastructure, globally, is optimized for a bipedal creature roughly 150–190 centimeters tall with two arms and ten fingers.

Redesigning homes, offices, and hospitals to accommodate non-humanoid robots would be astronomically expensive and practically impossible. The far cheaper and more logical path is to build robots that fit the world as it already exists.

Consider what doing laundry actually involves: walking to a bedroom, picking up clothes from a hamper, carrying them to a washing machine, loading it, later unloading wet clothes, carrying them to a balcony, hanging each garment on a line or rack, collecting them when dry, folding them with care for different garment types, and storing them in a wardrobe. This is a sequence of tasks across multiple rooms, requiring navigation through doorways, manipulation of dozens of differently shaped objects, and coexistence with human residents in narrow hallways and shared spaces.

You cannot solve this with conveyor belts and specialized machinery. There would be no room left for the people who live there. The robot must move like a person, fit in the same spaces, and use the same objects — because the entire environment was built for a person.

The same is true for cooking: opening the fridge, selecting ingredients, washing vegetables, retrieving utensils, adjusting stove controls, chopping, stirring, plating, cleaning, drying, and storing — all within a kitchen designed for human reach, human grip, and human-scale movement.

For industrial settings, specialized robots will continue to dominate. But for the home, for hospitals, for retail spaces, for offices — anywhere robots must share space with humans — the humanoid form factor is the only one that works without rebuilding the world.

Spatial Intelligence: The Domain, Not Just the Product

The exact commercial form that spatial intelligence takes — whether it’s modular applications, integrated platforms, or something else entirely — is secondary. What matters is that spatial intelligence as a domain is where the next wave of value creation will happen, just as software was the domain that defined the PC era regardless of whether any individual predicted that spreadsheets specifically would be a killer app.

The core problems of spatial AI are deep and largely unsolved: generalizable manipulation across novel objects, real-time adaptation to unstructured environments, sim-to-real transfer of learned dexterity, haptic feedback integration, and the combinatorial complexity of hand movements with dozens of degrees of freedom. These problems represent the frontier — the equivalent of operating systems and compilers in the early days of computing. They are hard, early-stage, and foundational.

The people who invest now in developing expertise and technology in this domain — training methods for robotic dexterity, simulation environments for physical manipulation, novel approaches to spatial reasoning — are building the knowledge base that will be scarce and high-value when the robotic hardware matures. They are the equivalent of the person who learned to code in 1985: not yet riding the wave, but learning to surf on a beach where the wave is visibly forming.


Two Ways to Make Money: Winning the Arbitrage vs. Killing It

There is a distinction hidden in this entire history that changes the strategic calculus significantly. At every stage, there have been two types of people making outsized money — and they are playing fundamentally different games.

The first type wins the current arbitrage. They are the strongest warrior, the most knowledgeable scholar, the restaurant on Main Street, the analyst with the sharpest pattern recognition. They succeed by being better than others along the axis that currently matters. Their wealth lasts as long as the arbitrage does — and collapses when it’s destroyed.

The second type kills the arbitrage. They are the ones who build the machine that makes strength irrelevant, the printing press that makes knowledge abundant, the search engine that makes memory worthless, the delivery platform that makes location meaningless. They don’t compete within the existing game — they change the game entirely.

Look at who actually accumulated the most durable wealth in each era. It was rarely the person who was best at the old game. It was the person who made the old game obsolete:

  • Gutenberg didn’t win the knowledge arbitrage — he killed it by making books accessible, and created an industry in the process.
  • Google’s founders didn’t have the best memories or the largest personal libraries — they built the tool that made memory and personal libraries irrelevant.
  • DoorDash and Uber Eats founders didn’t own the best restaurant locations — they built the platform that made location irrelevant.
  • WordPress, YouTube, Reddit, and Facebook didn’t secure better publishing deals — they made publishers unnecessary, allowing anyone to reach millions without gatekeepers.

This distinction matters enormously for strategy. The first path — winning the next arbitrage — requires predicting the future. You have to guess what the next scarce advantage will be and build it before others do. As argued above, this is possible only for the few standing close to the technological frontier; for everyone else, the next arbitrage depends on concepts that do not yet exist.

The second path — killing the current arbitrage — requires no prediction at all. The current arbitrage is visible and obvious: cognitive work, data interpretation, pattern recognition, decision-making. The tools to destroy it are being built right now. AI, robotics, autonomous agents — these are the printing presses and steam engines of this moment.

The person who participates in building, deploying, or commercializing the tools that commoditize today’s cognitive arbitrage is playing the historically proven wealth-creation game. They don’t need to know what comes next. They just need to be part of what’s dismantling what exists now.


What You Can Do: Go Where the Puck Is Going

Wayne Gretzky’s famous advice — “skate to where the puck is going to be, not where it has been” — is the operating principle here.

Right now, the puck is at knowledge-based AI. Millions of people are learning prompt engineering, building chatbot wrappers, training to be AI-augmented analysts. They are skating to where the puck is. By the time they arrive, it will be crowded and commoditized.

The puck is going to spatial intelligence. The people who start working on robotic dexterity, manipulation learning, haptic feedback systems, and task-specific spatial AI today are positioning themselves for the wave that will define the next decade of wealth creation.

There are two paths available, mirroring the two types of money-makers identified earlier:

Path one: Kill the current arbitrage. Participate in building the AI tools that commoditize today’s cognitive work. This is the proven wealth-creation game — be part of the technology that makes the old advantage irrelevant.

Path two: Position for the next arbitrage. Start building expertise, companies, and technology in spatial intelligence — robotic dexterity, manipulation learning, sim-to-real transfer, physical reasoning. This is the equivalent of learning to write software in 1985: the hardware isn’t quite ready for mass adoption yet, but it will be soon, and the people who are already deep in the domain when it matures will have an insurmountable head start.

The worst position is the one most people are in: optimizing skills that are about to be commoditized, skating to where the puck already is, or, worst of all, still perfecting the game of an era that has already ended. Sharpening your data analysis skills today is lifting weights in 1850. It is not wrong. It is just irrelevant to where value is going.

The next decade belongs to the people who understand that intelligence is moving from the screen into the physical world — and who are building the software that makes that transition possible.
