Source: LinkedIn | By John Battelle
One of the most intriguing public discussions to emerge over the past year is humanity’s wrestling match with the threat and promise of artificial intelligence. AI has long lurked in our collective consciousness — negatively so, if we’re to take Hollywood movie plots as our guide — but its recent and very real advances are driving critical conversations about the future not only of our economy, but of humanity’s very existence.
In May 2014, the world received a wake-up call from famed physicist Stephen Hawking. Together with three respected AI researchers, the world’s most renowned scientist warned that the commercially driven creation of intelligent machines could be “potentially our worst mistake in history.” Comparing the impact of AI on humanity to the arrival of “a superior alien species,” Hawking and his co-authors found humanity’s current state of preparedness deeply wanting. “Although we are facing potentially the best or worst thing ever to happen to humanity,” they wrote, “little serious research is devoted to these issues outside small nonprofit institutes.”
That was two years ago. So where are we now?
Insofar as the tech industry is concerned, AI is already here — it’s just not evenly distributed. Which is to say, the titans of tech control most of it. Google has completely reorganized itself around AI and machine learning. IBM has done the same, declaring itself the leader in “cognitive computing.” Facebook is all in as well. The major tech players are locked in an escalating race for talent, paying as much for top AI researchers as NFL teams do for star quarterbacks.
Let’s review. Two years ago, the world’s smartest man said that ungoverned AI could well end humanity. Since then, most of the work in the field has been limited to a handful of extremely powerful for-profit companies locked in a competitive arms race. And that call for governance? A work in progress, to put it charitably. Not exactly the early plot lines we’d want, should we care to see things work out for humanity.
Which raises the question: When it comes to managing the birth of a technology generally understood to be the most powerful force humanity has ever invented, exactly what kind of regulatory regime should prevail?
Predictably, The Economist argued last week that we shouldn’t worry too much about it, because we’ve seen this movie before, in the transition to industrial society — and despite a couple of World Wars, that turned out all right. Move along, nothing to see here. But many of us have an uneasy sense that this time is different. It’s one thing to replace manual labor with machines and move up the ladder to a service and intellectual property-based economy. But what does an economy look like that’s based on the automation of service and intellect? The Economist’s extensive review of the field is worthy reading. But it left me unsettled.