Is the technological dead end near?

raver119
3 min read · Mar 20, 2021

We use modern technologies every day. For better, mostly.

They have become essential for us: we communicate, commute, and collaborate using cutting-edge technologies, consuming computing power that was unthinkable 25 years ago. A modern smartphone's peak performance is roughly ten times that of a supercomputer from the 1990s!

Additional computational power makes further advances possible. Think of the explosion in convolutional neural network adoption: it happened largely thanks to the parallel computing framework and devices provided by nVidia.

That, in turn, sparked widespread adoption of other machine learning algorithms, enabling products that were never possible before: real-time speech-to-text, real-time translation, contextual search, and so on. So yeah, modern technology is everywhere now.

With all that said, this situation has a significant drawback, which is typically ignored: as complexity grows, reliability suffers. And this trend has been worsening for quite some time now.

Think about the “tech interactions” everyone has every day:

I ask Siri to start playing music. Can anyone guarantee that my voice command will be recognized properly? Nope. It might be, or it might not.

Some time ago I managed to hit the ground hard. Do you think the fall detection on my shiny smartwatch triggered properly? Nope, it didn't. But sometimes it does trigger when I pet my dog.

Don't get me wrong, I'm not saying neural networks are bad. The problem goes way deeper and has been with us for a long time!

When I play an online game, can anyone guarantee that my next TCP packet will be delivered within a certain timeframe? No way; we have a word for that: “jitter”.

And if I store a bit in your RAM, is there a guarantee I'll read back the same value later? Haha. Bit flips are a thing.

Our technologies these days are probabilistic at best: probabilistic by nature sometimes, and quite often probabilistic by design. Maybe things will work properly on the first attempt, but if not, well, try again and hope for success. If you want more reliability, say, because you're building aircraft or space systems, you use complex ensembles of systems coupled with sophisticated consensus algorithms, and hope for an increased chance of success. And that's a problem waiting for a solution: you can't keep stacking probabilistic tech on top of probabilistic tech; eventually, this sandcastle will fall apart!
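To make the stacking argument concrete, here is a back-of-the-envelope sketch. The 0.99 per-component success rate and the layer counts are made-up illustrative numbers, not measurements: a chain of independent components succeeds only if every single one of them does, so its reliability decays as p^n, while a 2-of-3 majority vote (the simplest flavor of the redundancy used in aircraft and space systems) buys back some reliability at the cost of triplicating everything.

```python
# Back-of-the-envelope reliability math (illustrative numbers only).

def stacked(p: float, n: int) -> float:
    """Probability that a chain of n independent layers all succeed,
    when each layer succeeds with probability p."""
    return p ** n

def two_out_of_three(p: float) -> float:
    """Probability that a 2-of-3 majority vote over independent replicas
    gives the right answer, when each replica succeeds with probability p."""
    return 3 * p**2 * (1 - p) + p**3

if __name__ == "__main__":
    p = 0.99  # assumed per-component success rate, not a real measurement
    print(f"single component:        {stacked(p, 1):.4f}")
    print(f"10 stacked components:   {stacked(p, 10):.4f}")              # ~0.9044
    print(f"50 stacked components:   {stacked(p, 50):.4f}")              # ~0.6050
    print(f"2-of-3 vote, one layer:  {two_out_of_three(p):.6f}")         # ~0.999702
    print(f"50 voted layers stacked: {two_out_of_three(p) ** 50:.4f}")   # ~0.9852
```

Even with voting at every layer, a deep enough stack still drifts away from “it just works”; redundancy softens the curve, it doesn't change its shape.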

Human nature doesn't help here either: every developer knows what “tech debt” means. Every developer knows that “bugs happen” and that “edge cases” are real. None of these idioms improve reliability, for sure.

Right now probabilistic tech causes some annoyance and a few jokes at most, but I'm afraid it will become a blocker for our progress in the foreseeable future: the unreliable foundation of our technologies will make further improvements too complex or way too expensive, which leads to the same outcome anyway: a progress slowdown. I suppose we can see this happening right now if we look at the silicon manufacturing process: every subsequent technological improvement significantly increases complexity and price, leaving virtually no room for competition. Without competition, progress will be crippled to death.

It's time to shift the paradigm before we hit the dead end. Reliability must become front and center in engineering. Whatever your product is, make sure it works in all possible use cases.
