Be Knowledgeable

A lot of technologists, scientists & researchers believe that unregulated advances in AI could lead to a human extinction event. Some of these people signed an open letter in January 2015 about making AI more beneficial while reducing its risks.

A subset of these people (and others who have not signed the letter) have spoken openly about the dangers of AI and about the need for some regulation.

We have collected a list of these well-known people and, where possible, linked to their comments.

Feel free to add people of your own, with links or commentary on their beliefs, using the form at the bottom.

  1. Elon Musk - Founder of Tesla & SpaceX

    Musk is one of the most outspoken critics of AI without boundaries. Here are some comments from him:

    "The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast - it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

    I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen..." - Nov 2014

  2. Bill Joy - Co-Founder of Sun Microsystems

    Bill Joy wrote an epic essay in April 2000 titled "Why The Future Doesn't Need Us".

    His essay begins by outlining this concern:

    It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines, which outlined a utopia he foresaw - one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.

  3. Sam Altman - President of Y Combinator

    Altman believes that AI is going to go mainstream.

    He argued in an essay on January 15, 2015:

    I think some regulation is a good thing. In certain areas (like development of AI) I'd like to see a lot more of it.

    He continued on February 16, 2015:

    Two of the biggest risks I see emerging from the software revolution - AI and synthetic biology - may put tremendous capability to cause harm in the hands of small groups, or even individuals. It is probably already possible to design and produce a terrible disease in a small lab; development of an AI that could end human life may only require a few hundred people in an office building anywhere in the world, with no equipment other than laptops. The fact that we don't have serious efforts underway to combat threats from synthetic biology and AI development is astonishing.

  4. Shane Legg & Demis Hassabis - DeepMind Founders

    DeepMind, a deep learning startup acquired by Google for $400 million, was founded by Shane Legg & Demis Hassabis.

    Both of them are on the record outlining their concerns about AI. Shane Legg has said:

    I suspect that once we have a human level AGI, it's more likely that it will be the team of humans who understand how it works that will scale it up to something significantly super human, rather than the machine itself. Then the machine would be likely to self improve.

    How fast would that then proceed? Could be very fast, could be impossible -- there could be non-linear complexity constraints meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power. We just don't know.

    It's my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).

    Demis Hassabis has joined an ethics group to monitor the progress of AI.

  5. Stephen Hawking

    "The development of full artificial intelligence could spell the end of the human race."

    "It would take off on its own, and re-design itself at an ever increasing rate,"

    "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

  6. Peter Thiel - PayPal Co-Founder & Investor

    Here is Peter Thiel on the dangers of AI, from the Financial Times, October 31, 2014:

    "People are spending way too much time thinking about climate change, way too little thinking about AI." Behind all the warnings is a growing belief among computer scientists that machines will, within decades, reach the condition of artificial general intelligence and match humans in their intellectual capacity. That moment, Thiel says, will be as momentous an event as extraterrestrials landing on this planet. It will mark the birth of an intellect that is as capable as that of humans but is entirely inhuman, with unpredictable results.

    Comparing AI to aliens, he said:
    The first question we would ask if aliens landed on this planet is not, what does this mean for the economy or jobs, says Thiel. It would be: are they friendly or unfriendly?

  7. Bill Gates

    I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

  8. James Barrat - Author of Our Final Invention: Artificial Intelligence and the End of the Human Era

    Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we’ll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic.

  9. Nick Bostrom - Author of Superintelligence
Add a Resource to this List