
Terrified of AI? Don’t Be. It May Be the Key to Tomorrow’s Survival

AI has been getting a lot of attention lately. Much of it is fueled by fear: we are drawn to doomsday scenarios. It’s in our nature. In many ways the history of civilization has been one of fearing and resisting the same technological advances that somehow help us beat the odds and propel us to the next level of progress. AI is no different.

Still, trying to separate the hyperbole from the facts is not always easy.

I got wrapped up in it myself earlier this week when I wrote about an erroneous online report that Facebook’s engineers had pulled the plug on chatbots that had developed their own language in a simulation the company was running. The attention that incident received, and all of the hype surrounding it, speaks to how incredibly sensitized we are to the threat of AI.

However, somewhere between indifference and the fear of AI overlords lies the truth about AI. While we may not yet know exactly where to draw that line, it’s worthwhile to take a step back in the hope that we can gain some perspective.

First, we always see the threat before we see the opportunity.

Human beings are wired to first see the threat and then to magnify it through a social lens. It’s how we survived in an evolutionary poker game where the odds were clearly stacked against us. It’s the reason innovation is so hard, and it’s why we gravitate towards gloom and doom so easily.

I’m not advocating that we ignore the risks of AI, but rather that we understand how easily our hyperconnected world amplifies those risks through traditional and non-traditional media. AI will evolve, and it will find its way into how we live, work, and play. In a free market, innovation will happen wherever it can. Seeing AI accurately through experience rather than speculation is the only way to develop an understanding of the role it will play, its risks, and its benefits.

In short, rather than fear it, we’d be much better off preparing for it.

Second, AI requires a new way of thinking about computers.

In my last column I talked about how we are moving from the traditional model of computing, where all actions are programmed in anticipation of an event, to the AI model in which the technology actually develops its own rules as it encounters new and unanticipated situations.

We fear this because AI develops rules and takes actions that we cannot decipher or fully understand. That’s why so much attention was paid to the story about Facebook’s bots creating their own language.

Yet, we are measuring AI against a nonexistent standard of perfection. Robots do kill people today. Google “robots” and “death” and “killing” and you will find a slew of incidents in which robots have been responsible for human deaths, and countless more injuries. When we talk about the threat of self-driving cars, we discount the fact that humans have a far worse record behind the wheel. Holding any technology to a standard of perfection is not only irrational, it also delays progress that could move us significantly closer to perfection than we are today.

Third, big challenges require big bets.

There are many things we do, as individuals, as nations, and as a society, that come at great cost, but we do them anyway because they further our collective values and our fundamental desire to survive.

Learning how to split atoms gave us humanity’s deadliest weapons but it also gave us nuclear energy, and our greatest insights yet into how the universe operates; insights that may ultimately enable us to become an interplanetary species.

Closer to home, the complexity of the world, the growth in population as we head towards 10 billion inhabitants, the rate at which we are transitioning people from the developing world into connected and productive participants and consumers in the global economy (about one billion in the last decade), and the global challenges we face, from climate change to terrorism to pandemics, are all far exceeding our ability to cope and threatening our survival.

This realization that we are in trouble isn’t a sudden one. It’s been building for some time.

A study published as the book Limits to Growth in 1972 sounded the alarm. The study was derided at the time for its assumption that growth and consumption were increasing exponentially. Yet its projections have been frighteningly accurate.

The authors were blunt about humanity’s ultimate destination, a collapse of our cornerstone economic and social institutions if we did not somehow change the dynamics of growth and consumption.

Given that trajectory, you might believe that our future is locked into a rather bleak scenario. Not so. There is a counterforce at play that also rises exponentially while requiring relatively few additional resources; its capacity is infinite, and yet we’ve barely tapped its potential.

In my mind, AI is that counterforce; it is a critical, if not the central, player in increasing our chances not just to survive but to continue to thrive in an exponentially challenging world.

The bottom line is that AI is not alone when it comes to advances in technology that stir our anxiety and feed our instinctive aversion to big change. But we find ourselves at an especially precarious moment in time, with challenges on a vast scale that will require a new way of thinking about how we go about addressing them.

In that scenario AI is simply one more tool in humanity’s race to survive and to continue beating the odds.