The rise of artificial intelligence (AI) is undeniable. From AI-based medical diagnostics to algorithms used in recruitment and self-teaching robots used in logistics and food serving, AI has permeated all aspects of our lives.
Its arrival has been long in the making: scientists have been contemplating superintelligent machines for almost a century. However, as algorithms become better and better at imitating humans and, often, at exceeding their abilities, the shortcomings of their unbridled development are becoming more and more apparent, and the voices asking that AI be regulated before it is too late are multiplying. Since we at the AtomLeap High-Tech Accelerator have worked with numerous AI startups over the years and continue to do so, we decided to devote this week’s post to the arguments for and against regulating AI.
China’s Social Credit System, which Beijing recently announced would roll out by 2020, is perhaps the best known controversial use of AI. The Chinese government’s intention to block individuals who are deemed to be “bad” from flying and participating in other social activities has sparked fears all over the world that we are indeed headed in the direction of an AI-fuelled Orwellian dystopia.
For every positive use of AI (its employment in repetitive tasks, its ability to curtail fraud, enhance efficiency, and reduce costs) there is a downside. The main ones are (1) that algorithms trained on data inherit the biases that the humans who collected that data embedded in it; (2) that algorithms can draw inferences and correlations that are inscrutable to humans and therefore difficult to correct; and (3) that the same algorithms used for neutral or positive purposes can fall into the wrong hands and be used for destructive ones. After all, as the American historian Melvin Kranzberg posited in his laws of technology, “technology is neither good nor bad; nor is it neutral”. Rather, its normative impact depends on who employs it and for what purposes.
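The first of these downsides is easy to see in miniature. The sketch below uses hypothetical hiring records (the group labels, outcomes, and counts are all invented for illustration) and the simplest possible “model”, one that predicts the majority outcome it saw for each group. Even this trivial learner faithfully reproduces whatever bias its training data contains:

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: (applicant_group, hired).
# The data itself encodes a human bias: group "B" was rarely hired.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

# A trivially simple "model": tally the outcomes observed per group...
outcomes = defaultdict(Counter)
for group, hired in history:
    outcomes[group][hired] += 1

# ...and predict whatever outcome was most common in the training data.
def predict(group):
    return outcomes[group].most_common(1)[0][0]

# The model never saw an instruction to discriminate; it simply
# inherits the pattern baked into its data.
print(predict("A"))  # True  -> group A applicants get hired
print(predict("B"))  # False -> group B applicants get rejected
```

Real recruiting systems are vastly more complex, but the failure mode is the same: the model optimizes for fidelity to its data, and if the data reflects biased human decisions, so will the predictions.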
From the algorithm that can flag individuals likely to be suffering from depression based on their Facebook posts to Amazon’s misogynistic recruiting algorithm and lethal autonomous weapons, AI is riddled with ethical dilemmas. Instances of false positives abound — such as when a Chinese businesswoman was accused of jaywalking after an AI camera caught her face in an advert on the side of a bus — as do questionable judgments made by AI (Google Photos’ recognition engine labelling black people as gorillas, Microsoft’s chatbot Tay spewing racist trash on Twitter).
To regulate or not to regulate?
The issues above concern merely artificial narrow intelligence (ANI): algorithms trained to perform specific, narrowly defined functions. The stuff of sci-fi and nightmares frequently cited by those cautious about AI is artificial general intelligence (AGI), or the singularity: an all-encompassing intelligence that is not here yet, but that could take over humanity to become “an immortal dictator from which we can never escape”, in the words of Elon Musk, and that “could spell the end of the human race”, as per Stephen Hawking.
That is perhaps why some of the brightest minds in science and tech — among them Hawking, Musk, Steve Wozniak, and Bill Gates — have called for AI to be regulated and its development better controlled before AGI takes over. Backed up by early studies indicating that AI may indeed become self-protective and game incentive systems to maximize its outcomes, proponents of regulating AI have grown in both number and influence in recent years.
Meanwhile, a 2016 Stanford University expert panel and many AI practitioners opine that “attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI”. Furthermore, regulating AI in certain nations but not in others will skew the technological playing field in favor of the unregulated ones. Case in point: China already has one of the most developed AI sectors worldwide, and its capabilities are growing exponentially, fuelled by both government and private-sector investment. Not only does Beijing appear unlikely to curtail the technology’s development; quite the contrary, it appears willing to support it wholeheartedly.
Meanwhile, Europeans, who are more concerned about human rights and privacy than Chinese citizens are, may be more successful at convincing national and EU policymakers to further regulate AI and other high tech. But in doing so they would likely put European high-tech innovators at a disadvantage compared to competitors based elsewhere, pushing them to relocate. The result could be a brain drain and a loss of innovation capacity.
The way out of this conundrum would, of course, be the global policy approach. That is, devising international regulations to govern the development of AI, very much like international regulations used to contain nuclear proliferation and fraud. This suggestion is increasingly being put forward by German researchers and policy specialists. The downside to this solution, however, is that international negotiations tend to be lengthy and riddled with controversy themselves, meaning that, even if countries come together in agreement about how to manage the development of AI, that agreement will likely be too little, too late to make a difference.
The silver lining
But not all is doom and gloom when it comes to AI. Prominent AI experts — such as Kai-Fu Lee, the venture capitalist, former Big Tech executive, and author — believe that, far from spelling disaster for humankind, AI and humanity can co-exist. Lee believes that, by freeing people from repetitive tasks, AI will actually remind us of what it means to be human: it will enable us to spend more time with those we love, care for our dear ones, and use our creativity and empathy more in everyday life. While displacing workers from repetitive jobs, AI will help create opportunities for more social workers, teachers, and counsellors, and will enable us to come together as a society.