Many technologists, scientists, and researchers believe that unregulated advances in AI could lead to a human extinction event. Some of them signed an open letter in Jan 2015 calling for research on making AI more beneficial while reducing its risks.
A subset of these people (and others who have not signed the letter) have spoken openly about the dangers of AI and about the need for some regulation.
We have collected a list of these notable people and linked to their comments where possible.
Feel free to add your own people, with links or commentary on their views, using the form at the bottom.
Elon Musk is one of the most outspoken critics of AI development without boundaries. Here are some of his comments:
I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen..." - Nov 2014
Bill Joy wrote an epic essay, "Why the Future Doesn't Need Us," in April 2000:
It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw - one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.
While Sam Altman believes that AI is going to go mainstream, he has also argued for regulating it. He made this case in an essay on Jan 15, 2015:
I think some regulation is a good thing. In certain areas (like development of AI) I'd like to see a lot more of it.
He continued on Feb 16, 2015:
Two of the biggest risks I see emerging from the software revolution, AI and synthetic biology, may put tremendous capability to cause harm in the hands of small groups, or even individuals. It is probably already possible to design and produce a terrible disease in a small lab; development of an AI that could end human life may only require a few hundred people in an office building anywhere in the world, with no equipment other than laptops. The fact that we don't have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.
DeepMind, a deep-learning startup acquired by Google for $400 million, was founded by Shane Legg and Demis Hassabis.
Here is Peter Thiel on the dangers of AI in the Financial Times on Oct 31, 2014: