Humanity in Crisis When 'Encountered' by Constant Change

When humans are unable to keep up with change, they suffer


Humans have a limited capacity and speed to adapt to changes happening in the physical world (nature) and the cultural world (society).

Agree?

Our bodies take time to adapt. So do our brains. It is a very slow and gradual process.

Humanity adapts to cultural changes faster than our bodies do, but it still takes time. For example, social media became prominent in the early 2010s, and its disturbing impact emerged by the mid-2010s.

Common people began noticing the harms in the 2020s. Policymakers moved even slower—Australia and Spain only began restricting youth access around 2024-2025, a full decade after the problems became visible.

So, humans took around 10-15 years to respond appropriately to social media.

What's different with AI

AI will evolve more rapidly than we can catch up. Therein lies the problem.

We won't have time to adapt. Think of the Covid virus mutating every few months instead of every few decades.

You take a year to prepare for Scenario X and by the time you try to implement it we are already riddled with Scenario Y.

Preparing and implementing a strategy takes much longer than AI takes to switch scenarios.

Humans figure out later

Humans have always figured out later how to tackle the 'technology' they invented.

It's like building a nuclear bomb first and then figuring out when and why to use it.

We're in that phase where people have huddled in the Los Alamos village of San Francisco, trying to gather fissionable plutonium and uranium in the AI labs.

In five years, an Oppenheimer from Silicon Valley will regret his invention.

The other side of the story

Yes, AI might find a cancer cure or reverse aging. But mostly that's the carrot dangled to manipulate the gullible masses into giving the labs what they want.

Yes, nuclear fission gave us an alternative energy source, but it also pushed the Doomsday Clock closer to Midnight. We now sit mere seconds away.

Yes, social media blessed us with birthday and anniversary reminders but also beset teenagers and adults alike with mental illnesses (read The Anxious Generation by Jonathan Haidt).

What it boils down to

The more powerful the tool or technology is, the more intense the consequences on either side of the spectrum.

Both the Good people and the Bad people can use it to make the world a better or a worse place.

Accepted.

Now ask yourself this one simple question:

Are the Good people using Social Media anymore?

None of my friends, either from school or college, are on social media anymore.

No, they have long run away.

Do Zuck's or Musk's kids use it? No, they know the dangers.

In AI, the pattern repeats. Researchers focused on safety and alignment face institutional pressure—some leave labs, others pivot to adjacent fields. But here's what concerns me: this exodus concentrates power among those less cautious about consequences. It's not that good people lack conviction; it's that the system rewards speed and profit over caution.

It's mostly the Bad people using powerful tools and technology for nefarious ends.

So a highly intelligent machine is more likely to be exploited by a tyrant than wielded for good.

My dilemma with incentive (mis)alignment

Here's my real concern: the people with the best intentions toward AI are vastly outnumbered by those chasing profit and power. It's not about spiritual experience versus truth—it's simpler and grimmer. When a powerful tool attracts both altruists and exploiters, the exploiters win because they're willing to move faster and compromise less. By the time the careful builders have assembled safeguards, the reckless have already deployed the technology.

The common man won't be able to keep up with the rapid and massive change. Someone rightly said, "There will be blood on the streets."

My hope

Yet I can't shake the hope that informed resistance matters. Not because good people will suddenly dominate—they won't. But because public awareness constrains the worst impulses. Tyrants prefer darkness; they move slower under scrutiny.

What can we actually do? Read carefully about AI's real risks, not hype. Talk to people outside tech bubbles. Write and speak with precision, not panic. The revolution isn't techno-utopian; it's just widespread understanding that might force better choices.