Source: LinkedIn | By Ravi Venkatesan
For many decades now, regulators have lagged behind innovators. This is no bad thing; had regulation led, many good things would never have been commercialized. But the rate of innovation, especially of the game-changing, new-to-the-world sort, has accelerated sharply, opening a vast and widening gap between innovators and the ability of regulators and policymakers to make sense of the social and moral implications of new technologies. Driverless cars, autonomous drones, gene editing, 3D printing, artificial intelligence (AI), smart robots and synthetic biology are some of the developments that have caught our imagination. Each has staggering potential for both good and ill: they can liberate humankind, but they can equally become the stuff of dystopian movies. The risks of overzealous regulation are significant, as are the consequences of inertia.

Especially in a country like India, new technologies offer enormous hope of ending scarcity and enabling affordable access for hundreds of millions of people. Think of the shortage of doctors, nurses, teachers, policemen or judges. Smart bots can, at the very least, enormously increase the productivity of a doctor, judge or teacher; over time, they may even surpass their human counterparts. Driverless cars can ease congestion and make public transport convenient even for the affluent. Gene editing can help engineer crops able to withstand the extremes of climate change. The possibilities are limitless.
India must not be slow or hesitant in embracing the benefits these innovations offer. Indeed, we must see them as opportunities to leapfrog other nations and our own linear development, which is exactly what China is doing. However, standing by passively carries another set of risks. We do not want to court a Terminator- or Frankenstein-type scenario, for instance, where we unleash something we cannot control. The leaders in most of these innovations are currently companies outside India; what policies are needed to ensure that India isn't colonized again, this time by companies? In a country with a huge labour surplus and a serious employment challenge, is it okay for companies to automate everything, or should some boundaries exist? In an age of smart machines, do businesses have a social responsibility when it comes to employment? In a country where literacy levels are still relatively low, gullibility is high, fraud is rampant and justice is slow, what guardrails are needed to protect people as the country goes digital? Clearly, there are many more questions than answers.
No country has found the perfect balance on such matters. But that is no excuse to avoid exploring models of responsible innovation that can ameliorate the perils of going too far. Other countries, individuals and organizations have already begun. The Virtual Institute for Responsible Innovation, supported by the National Science Foundation and housed at the Center for Nanotechnology in Society at Arizona State University, brings together a global community of scholars and practitioners with a common conception of responsible innovation for purposes of research, training and outreach. This is in keeping with the spirit of responsible innovation, which seeks to address the ethical and societal concerns raised by highly promising innovations in parallel with the research advances themselves.
The recently launched Leverhulme Centre for the Future of Intelligence brings together four of the world’s leading universities (Cambridge, Oxford, Berkeley and Imperial College London) to explore the implications of AI for human civilization. Stephen Hawking, speaking at the centre’s launch, remarked that AI could “either be the best, or the worst thing, ever to happen to humanity”. Elon Musk, better known for his revolutionary ideas and projects, has publicly stated that AI is “potentially more dangerous than nukes” and has followed up on his concerns by backing OpenAI, a non-profit artificial intelligence research company whose stated mission is to build safe AI and to ensure that its benefits are widely and evenly distributed.