
What the Industrial Revolution Teaches Us About AI
A press photographer's perspective on technological disruption

Fei-Fei Li by Ian Davidson

Standing in Downing Street at dawn, waiting for an arrival that may or may not happen, I've had time to think about what's coming. As a press photographer with Westminster credentials, I operate in a world that AI cannot easily enter — you need a body present, at that place, at that moment. Yet I watch the stock photography market I also work in being transformed by AI-generated imagery. I find myself living across two realities: one resistant to automation, one being consumed by it.

This dual position, combined with my reading of Nassim Nicholas Taleb, has led me to look backwards rather than forwards. Taleb argues that the past is a better guide to the future than forecasting. The track record of technological prediction is dismal, but the patterns of how societies absorb disruption are remarkably consistent. If we want to understand what AI will do to us, we should study what the steam engine and the power loom did to our ancestors.

The Arc of Industrial Transformation

The standard timeline gives us 1760 to 1840 for the first industrial revolution, with a second wave from 1870 to 1914. What strikes me is the length of this transition — roughly 80 years for the first phase alone. The breathless commentary about AI assumes transformation in a decade. History suggests otherwise.

More sobering still is the distribution of gains. The famous "Engels' Pause" describes how real wages stagnated or even declined for perhaps 40 to 60 years while productivity rose. The benefits accrued first to capital owners and entrepreneurs. It wasn't until the mid-Victorian period that working-class living standards clearly improved. Two generations lived through the disruption before the broad uplift materialised.

Recent coverage in The Economist suggests we're already seeing something similar: gains flowing to the owners of capital and some entrepreneurs, while wages for most workers stagnate. The parallel is uncomfortable but hard to dismiss.

The Middle Class as the New Handloom Weavers

The handloom weavers weren't unskilled labourers. They were artisans with genuine expertise, relatively well-paid, possessing autonomy and status. The power loom didn't just replace them; it deskilled their work, turning craft into tending machinery. What took years to master became something anyone could learn in weeks. Their numbers peaked around 1820 and then collapsed over two decades.

The graduate professional class today faces something analogous. Paralegals, junior analysts, copywriters, mid-level administrators — their value proposition was mastery of cognitive routines: research synthesis, document preparation, standard analysis, competent prose. These are precisely the tasks where current AI excels. The deskilling dynamic applies: what required years of education and experience becomes a prompt.

What happened to the working class in the 19th century may happen to the middle class in the 21st. The logical extension is a rapidly growing divide between those who own or direct AI systems and those displaced by them — perhaps an 80/20 split, with 80 per cent experiencing stagnant or declining prospects while wealth concentrates at the top.

Why Disruption Proceeds Slower Than Expected

Yet I'm sceptical of the compressed timelines many predict, and that scepticism rests on two observations.

First, it's the application of technology that matters, and application takes time. Large organisations move slowly. They're heavily invested in the status quo — culturally, technologically, and in terms of existing power structures, both obvious and hidden. The professions are guilds with statutory protections, educational gatekeeping, and regulatory capture. The Law Society, the BMA, the accounting bodies — they exist partly to maintain quality but substantially to restrict supply and protect incumbents. They will resist.

Consider the gap between technological capability and deployment with electricity. The dynamo was developed in the 1870s, but productivity gains didn't appear until the 1920s — a lag of 50 years. Capturing the benefits required not just the technology but the complete reorganisation of factory design, workflow, training, and management practice. Banks still run COBOL. The NHS still uses fax machines. Organisational inertia is a genuine force.

Second, as Taleb points out, many of the breakthroughs of the industrial revolution happened almost by accident and from unexpected directions. The steam engine wasn't invented by natural philosophers reasoning from first principles but by practical men solving immediate problems — pumping water from mines. The transformative AI applications may similarly emerge from directions no one is currently watching.

India: The Crucible of AI Application

This brings me to where I expect the unexpected to emerge. I've been watching India's rapid development as an AI hub with great interest.

The conditions there are precisely those that historically produce rapid technological adoption: a large educated population with strong quantitative training, English language facility enabling direct access to the global knowledge base, and intense economic pressure creating motivation that comfortable Western workers lack. Add relatively weak institutional barriers compared to Europe's regulatory ossification, a massive diaspora creating knowledge transfer channels with Silicon Valley and London, and an existing IT services industry that has spent decades learning to deliver cognitive work at scale.

The parallel might be Japan in the 1950s to 1970s or China in the 1990s to 2010s: a population hungry for advancement, with just enough infrastructure to participate, and powerful incentives to move fast. Unlike manufacturing, AI doesn't require massive capital investment. A bright graduate in Bangalore with a laptop can contribute at the frontier.

Contrast this with China and Russia. Central control produces what the Soviets discovered: impressive concentration of resources on visible priorities, catastrophic misallocation everywhere else. Command economies can copy and scale but struggle to innovate, because innovation requires freedom to fail in unexpected directions, to pursue hunches that don't fit the plan. China's AI is impressive in state-prioritised domains — surveillance, military applications — but the dynamism that produces unexpected breakthroughs tends to emerge from environments the CCP finds threatening.

The Social Absorption Question

The industrial revolution produced widespread social pathology during the transition: alcoholism, family breakdown, crime, despair. This came not merely from poverty but from the destruction of meaningful social roles. The handloom weaver lost his income, but he also lost his identity — his standing as a skilled craftsman in his community.

The professional middle class derives enormous identity from work. The doctor, lawyer, accountant, analyst — these aren't just jobs but social positions, sources of meaning, answers to the question "what do you do?" The loss isn't merely financial.

Yet I'm struck by how limited organised resistance was during the industrial revolution, despite genuine immiseration affecting millions. The Luddites involved perhaps 15,000 active participants at their peak, in a country of roughly ten million people. Several factors explain this: the disruption was geographically and temporally staggered, alternative employment existed even if it was worse, the state was willing to use coercion, and there was no coherent alternative vision that commanded broad support.

I expect the same pattern. Gradual change is tolerable when each step is only marginally worse than the last. The causation is diffuse — who does a displaced legal researcher blame? And when you're struggling to meet immediate needs, strategic resistance becomes a luxury. The vast majority will accept, as they have always done, the status quo. This isn't cynicism; it's the historical record.

Positioning for Uncertainty

My photography operates under what Taleb would call an antifragile model. I maintain consistent presence rather than selective shooting, accepting bounded losses for uncapped gains. The income is modest; the costs roughly match it. But the cognitive engagement, the professional community, the purpose of being present when history happens — these provide value that doesn't appear on any balance sheet.

This may be the template for navigating what's coming: activities that provide meaning and identity independent of whether they're economically optimal. Separating what sustains you financially from what sustains you psychologically. Building optionality rather than betting on predictions.

The AI can generate a photorealistic image of a politician at a podium. It cannot generate that politician at that podium on that day saying that thing. As AI-generated imagery floods the zone, authenticated human capture may become the scarce resource. Whether the market will pay for this scarcity is uncertain. But the structural logic suggests it's possible.

The industrial revolution took 80 years to unfold. Along the way, it destroyed livelihoods, created new ones, concentrated wealth, eventually spread prosperity, and fundamentally reorganised society. We got through it — not without suffering, not without loss, but through. That's not a prediction about AI. It's a reminder that we've navigated transformations of this magnitude before, and we absorbed them not through foresight but through adaptation.

The past doesn't tell us what will happen. But it tells us how these things tend to go.
