Thursday, March 19, 2026

I Watched the Same Computing Revolution Happen Four Times

I didn’t set out to study technological cycles. I was just trying to do my job.

Over the course of my career, I worked in environments that were considered “high-end” at the time—places where the biggest, fastest, and most expensive systems lived. What I didn’t realize at first was that I was repeatedly standing at the same inflection point in computing history.

I saw the same pattern unfold again and again. Different technologies. Different companies. Same outcome.


The First Time: Time-Sharing vs Personal Systems

In the 1970s, I worked in the time-sharing business. Computing was centralized. Large systems served many users. Access was controlled and expensive.

Then something started to happen.

Some customers realized that their workloads—at least part of them—could be run on smaller systems. Early personal computers and workstations weren’t as powerful, but they were:

  • cheaper

  • local

  • under the user’s control

At first, they weren’t taken seriously. But over time:

  • non-critical workloads moved off

  • then routine workloads

  • then more and more of the total workload

The large systems didn’t disappear. But their role shrank.


The Second Time: Supercomputers vs Workstations

Later, I worked around high-end systems at Cray.

These were the pinnacle of computing—specialized, incredibly powerful, and incredibly expensive.

Then along came systems from companies like Sun, DEC, and SGI. They weren’t as powerful. Not even close.

But they were:

  • cheaper

  • more flexible

  • improving quickly

At first, they couldn’t compete. Then they became “good enough” for more tasks. Then they became the default for most tasks.

Again, the pattern repeated:

  • edge workloads moved first

  • then mainstream workloads

  • then the center shifted

The supercomputers remained—but only for the highest-end problems.


The Third Time: SGI vs Commodity Hardware

At SGI, I saw the same thing from the other side.

SGI systems were state-of-the-art for graphics and visualization. Powerful workstations. Specialized hardware.

But commodity PCs were improving rapidly. GPUs were evolving. Linux was gaining traction.

The PCs weren’t as good—until they were.

Engineers began to realize they could do much of their work on:

  • cheaper machines

  • widely available hardware

  • systems they controlled directly

And once again:

  • work began to move off

  • then more work

  • then most work

SGI didn’t disappear overnight. But the direction was clear.


A Side Story: Search That Actually Worked

While at SGI, I was responsible for maintaining a commercial search engine used by the support organization. It indexed dozens of data sources—documents, databases, tickets, engineering notes.

It was large. It was expensive. And it didn’t work particularly well for the kinds of queries engineers actually made.

Engineers searched for things like:

  • error messages

  • part numbers

  • hex codes

  • version strings

The commercial system was tuned for natural-language text. Its tokenizer mangled those “odd” tokens—splitting hex codes apart, stripping the punctuation out of version strings—so the exact-match queries engineers needed simply failed.

So I built a small alternative using Xapian, running on my own desktop PC. I didn’t have the storage to index everything, so I chose the highest-value sources—support cases and OS data.

I made one key decision: use the same parsing rules for indexing and searching.

That was it.
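The idea is easy to sketch. Here is a toy illustration in Python—not the actual Xapian-based system, and the documents and identifiers are invented—showing a single tokenizer that keeps hex codes and version strings intact, applied identically on the indexing path and the query path:

```python
import re
from collections import defaultdict

# One tokenizer shared by indexing and querying. It deliberately keeps
# "odd" tokens whole: hex codes, version strings, and part numbers
# survive as single terms instead of being split on punctuation.
TOKEN_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._\-]*")

def tokenize(text: str) -> list[str]:
    return [t.lower() for t in TOKEN_RE.findall(text)]

class TinyIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.docs = {}

    def add(self, doc_id: str, text: str):
        self.docs[doc_id] = text
        for term in tokenize(text):           # indexing path
            self.postings[term].add(doc_id)

    def search(self, query: str) -> set[str]:
        terms = tokenize(query)               # identical query path
        if not terms:
            return set()
        results = self.postings[terms[0]].copy()
        for term in terms[1:]:
            results &= self.postings[term]    # AND semantics
        return results

# Hypothetical support cases for illustration only.
idx = TinyIndex()
idx.add("case-101", "Panic at 0xfffc0a1b in release 6.5.22 on module XT-4012")
idx.add("case-102", "User cannot print after upgrade to 6.5.22")

print(sorted(idx.search("0xfffc0a1b")))  # → ['case-101']
print(sorted(idx.search("6.5.22")))      # → ['case-101', 'case-102']
```

Because `tokenize` is the only place tokens are defined, a query for `0xfffc0a1b` can only ever be compared against terms produced by the same rules—so the signal survives by construction rather than by tuning.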

The result?

Some engineers preferred my system over the official one.

Not because it was bigger. Not because it was more sophisticated.

Because it preserved the signal they needed.


The Pattern

Once you’ve seen this a few times, the pattern becomes obvious:

  1. A centralized, expensive, high-end system dominates

  2. A smaller, cheaper, less capable alternative appears

  3. The alternative improves rapidly

  4. It becomes “good enough”

  5. Work begins to migrate

  6. The center of gravity shifts

It’s not a sudden revolution. It’s a gradual migration.

centralized → partially distributed → mostly distributed → specialized central core

The high-end systems don’t disappear. They become niche.


Why It Keeps Happening

The drivers are consistent:

  • Cost: cheaper systems are accessible

  • Control: users prefer systems they own

  • Improvement rate: commodity systems evolve faster

  • Scale: larger ecosystems drive faster innovation

The key insight is simple:

The new system doesn’t have to be better. It just has to be good enough—and improving.


Where We Are Now: AI

We’re seeing the same pattern again.

Today:

  • large AI models run in massive data centers

  • access is centralized

  • hardware is constrained

But:

  • smaller models are improving

  • local hardware is getting better

  • tooling is evolving rapidly

Right now, we’re in the early stages:

not quite practical locally → barely practical → good enough → dominant for many tasks

There are bottlenecks—GPU availability, memory cost, infrastructure—but those are temporary. They’ve always been temporary.

There will be breakthroughs:

  • in hardware

  • in model efficiency

  • in system design

And when they happen, the shift will accelerate.


What Doesn’t Change

One thing I’ve learned:

This isn’t about any specific technology.

It’s about a recurring dynamic:

capability increases
cost decreases
control shifts
work migrates

And it happens over and over again.


Final Thought

I didn’t set out to study this pattern. I just happened to be in the right places to see it happen multiple times.

But once you’ve seen it enough, you start to recognize it early.

And when you do, the future doesn’t look random anymore.

It looks familiar.