I am an optimist who believes in human resilience.
I think we underestimate how adaptable and robust we are as a race.
Therefore, whenever a new technology paradigm drops, I ask: how is this any different from what we have already withstood as a race?
I love to go back to the example of the camera.
If your value as a painter was your ability to capture likeness, perspective, realism, then overnight you were competing with a box. Something that did not misjudge proportions.
People must have thought art was done. That soul was on the verge of being automated.
Instead, painting did not die. It mutated into Impressionism, abstraction and Surrealism. The camera eliminated a narrow definition of art rather than art itself. We humans were forced upward into interpretation.
Portrait painters, darkroom technicians and realist illustrators became art directors, photographers and visual storytellers. They didn't die.
Let’s talk about steam engines and, more broadly, the Industrial Revolution.
Machines could create machines. That was **recursive**, just like what we are seeing with large language models now. Once industrial tooling existed, factories produced more factories. Output scaled and entire professions disappeared. Industrialisation:
- destroyed entire professions
- urbanised millions
- created harsh working conditions
- triggered political unrest
- radically widened economic and class inequality
And yet over 80 years, it also:
- increased life expectancy
- reduced extreme poverty
- expanded education
- created a prosperous middle class
There is absolutely no denying that there was upheaval. Yes, there was displacement. But the arc bent toward abundance. Eventually. (Any guesses who gets credit for that line?)
So when I think about AI, I wonder how it is any different.
**AI can assist in building better AI.** That is recursive too. It spreads globally in days, not decades. It does not require railways or factories to deploy. It is far more easily reproducible. So what?
(It is worth pausing on the word “recursive.” It sounds like runaway self-improvement, but today’s loop is still heavily scaffolded. Models help with code, experimentation, and architecture search, but humans still define objectives, evaluate outputs, allocate compute, and ship systems. There are huge bottlenecks in data quality (especially with tribal knowledge of complex systems), energy, hardware fabrication, and capital allocation. So yes, there is a feedback loop, but it is not an unconstrained exponential spiral detached from physical and institutional limits.)
One critique is that AI touches white-collar work, not just manual labour. Analysts, designers, coders, lawyers. These were supposed to be safe.
But perhaps this only reveals that parts of knowledge work were procedural all along. Every era has its version of “skilled” labor that turns out to be pattern based once the right tool appears.
Yes, everyone who is scared today is most likely this era's manual labour.
Wow, okay: this reminds me of my dad's era, circa Y2K. Every run-of-the-mill engineer, clerk and accountant was rushing to buy "How to learn computer programming" books and enrol in IT classes.
Not everyone became a programmer. Most did not. Instead, computers dissolved into every profession. Accountants became spreadsheet-fluent; they did not disappear. Engineers became CAD-native. Clerks became system operators. The baseline skill level of the economy shifted upward.
Y2K felt like a cliff, but it was actually a ramp.
Oh, but coming back: people say...
- The transition is too fast
- Entry level roles will collapse
- Wealth concentrates around AI capital owners
- Social systems cannot adapt quickly enough
Those are distribution and coordination problems. Regulation will follow. We, again, as a race, will adapt.
And then there is demand.
When productivity rises, humans do not stop wanting. They redirect desire.
- When food became cheap, we spent more on travel and entertainment.
- When computation became cheap, we created entire digital economies.
If intelligence becomes cheaper, what new markets emerge? Personalized education? Hyper customized media? AI assisted entrepreneurship at scale? Entirely new categories we cannot yet name?
The fear assumes supply increases while demand stays static. History suggests demand expands.
So maybe the real question is not: are we out of jobs?
Maybe the question is: Who moves up fast enough, and who gets stuck on the layer that just became automatable?
Because if 80 years of steam engines produced a world with higher living standards despite immense disruption, then 5 years of AI acceleration does not logically imply collapse.
I remain optimistic not because disruption is painless, but because we, as a race, consistently redefine value whenever tools redefine capability.
