I really love movies such as "Her", "Transcendence" or "Ex Machina" and am fascinated by the mysterious qualities of (human?) consciousness and cognition. But there are some really serious concerns one has to have regarding the development of artificial intelligence (AI). Some of them are topics of the named movies; others may still be unforeseen, or they slumber in the far too rarely read stories by Philip K. Dick, Stanisław Lem or Isaac Asimov ... Those risks range from the application of weak AI techniques such as machine translation and face recognition in illiberal societies up to the development of strong AIs (whatever that may be, and however one would know) and the questions raised by artificial beings. No wonder that Elon Musk wrote the following on Twitter:
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
Some weeks ago, the who's who of computer scientists and tech pioneers signed an Open Letter with the title "Research Priorities for Robust and Beneficial Artificial Intelligence".
In addition to this letter, I present to you my top 5 voices warning of the dangers of artificial intelligence, each with a link to a relevant article by or about them:
But, as I said, this leads to the crossing of a very subtle line, and after running over that line during programming, the first impression many people get is that the person is inferior to the computer — that the programmer is in some way a defective imitation. And in certain ways the computer is better than human beings.
Computers will be smarter than humans (as in Space Odyssey) when they learn to cheat and lie. That is the highest form of intelligence because it requires a “theory of mind”. I have to put myself in the shoes of the other people in order to lie effectively. Monkeys do not have a theory of mind, as has been proved with experiments. They are terrible liars. When computers could lie to us as effectively as HAL, those computers would pass the Turing Test and would be intelligent.
3. Elon Musk (Tesla Motors, SpaceX): Elon Musk Compares Building Artificial Intelligence To “Summoning The Demon”
With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.
4. Bill Gates (richest man on earth, Microsoft founder, Bill and Melinda Gates Foundation): Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’
First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.
5. Stephen Hawking (theoretical physicist): Stephen Hawking warns artificial intelligence could end mankind
Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.
There are other perspectives as well: take a look at The Myth of AI by computer scientist Jaron Lanier. He argues that AI is a fraud and should not be hyped as a quasi-religious topic (and here is a reply by io9: The Myth of AI Is More Harmful Than AI Itself).