AI

DeepMind can play it hard as well

Google's AI DeepMind is an impressive system that has recently mastered hard tasks such as playing Go or generating human language. In a new article, Google scientists describe how DeepMind reacts to challenges such as competing or cooperating with another player for success in computer games. DeepMind plays those games and learns by doing so. In the end, it plays aggressively when competing and cooperatively when cooperating. This is no surprise, and the term "aggressive" is only used because the game consists of colored pixels competing for green pixels (apples) that appear randomly on the screen. In order to prevent the other player from reaching an apple, the agents are able to "fire" "laser beams" that make the opponent skip a round - which could just as well be dubbed "sending" "love messages", but whatever. I am still in awe of DeepMind and its abilities and long for every new detail - but after a game of apple gathering, it is a bit early for the Terminator references.
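To see how little "aggression" is actually involved, here is a toy sketch of those game dynamics in Python, with invented rules and parameters (this is not DeepMind's actual environment, which is a 2D video game played by deep reinforcement learning agents):

import random

SIZE = 11      # toy one-dimensional world
TIMEOUT = 3    # rounds an agent skips after being hit (assumed value)

class Agent:
    def __init__(self, pos):
        self.pos, self.apples, self.frozen = pos, 0, 0

def step(agents, apple):
    """One round: apples appear randomly; agents grab, shoot or wander."""
    if apple is None and random.random() < 0.3:
        apple = random.randrange(SIZE)
    for me, other in ((agents[0], agents[1]), (agents[1], agents[0])):
        if me.frozen:                       # hit agents skip this round
            me.frozen -= 1
        elif apple is not None and abs(me.pos - apple) <= 1:
            me.pos, me.apples, apple = apple, me.apples + 1, None
        elif abs(me.pos - other.pos) <= 2:  # "fire" the "laser beam"
            other.frozen = TIMEOUT
        else:                               # otherwise wander randomly
            me.pos = max(0, min(SIZE - 1, me.pos + random.choice((-1, 1))))
    return apple

agents, apple = [Agent(2), Agent(8)], None
for _ in range(100):
    apple = step(agents, apple)
print("apples gathered:", [a.apples for a in agents])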

Business Insider: Google's new AI has learned to become 'highly aggressive' in stressful situations

Interlingua in Google Translate

Machine translation is the master discipline of computational linguistics; it was one of the first major tasks defined for computers in the years after World War II. Warren Weaver, an American science administrator, stated in a famous 1949 memorandum called "Translation": "It is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the 'Chinese code'. If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?"

After many ups and downs in the following decades, the first real breakthrough came with fast PCs, fast web connections and the possibility to compile and process immense language data sets. But instead of compiling grammar sets to define one language, then another, and their relationships, the use of statistical models became en vogue: instead of years of linguistic work, a few weeks of processing yielded similar results. While rule-based systems created nice-looking sentences with often stupid word choices, statistics-based systems created stupid-looking sentences with good phrase quality. One thing linguists as well as statisticians were always dreaming about was the so-called Interlingua: a kind of neutral language in between, which would allow translating the pure meaning of a sentence into this Interlingua and then constructing a sentence in the target language that bears the same meaning. There is a common three-step pyramid describing the rising quality of machine translation (a toy sketch follows the list):
First level: Direct translation from one language to another
Second level: Transfer, using an elaborated analysis of one kind or another, e.g. rules, statistics, etc.
Third level: Using an Interlingua.
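To make the three levels more tangible, here is a toy sketch in Python, with an invented mini-vocabulary and deliberately naive rules (nothing close to a real MT system):

# The same German sentence translated at the three pyramid levels.
DE_EN = {"der": "the", "hund": "dog", "schläft": "sleeps"}

def direct(sentence):
    """Level 1: word-by-word lookup, no analysis at all."""
    return " ".join(DE_EN.get(w, w) for w in sentence.lower().split())

def transfer(sentence):
    """Level 2: a naive structural rule (subject + verb), then lookup."""
    subject, verb = sentence.rsplit(" ", 1)
    return f"{direct(subject)} {direct(verb)}"

def via_interlingua(sentence):
    """Level 3: encode language-neutral meaning, then generate.
    The analysis step is faked here; a real system would parse."""
    meaning = {"event": "sleep", "agent": "dog", "definite": True}
    article = "the" if meaning["definite"] else "a"
    return f"{article} {meaning['agent']} {meaning['event']}s"

print(direct("Der Hund schläft"))           # -> the dog sleeps
print(via_interlingua("Der Hund schläft"))  # -> the dog sleeps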

There were many attempts, from planned languages such as Esperanto up to semantic primes and lexical functions - the result was always the same: there is no Interlingua. "Meaning" is too complex a concept to model in a static way.

In 2006, Google released Google Translate, a nowadays very popular machine translation system that was originally statistics-based, created by the German computer scientist Franz Josef Och (now at Human Longevity). This event inspired me in a very personal way to focus my linguistics career on computational linguistics and to write my Magister thesis, "Linguistic Approaches to Improve Statistical Machine Translation" (Linguistische Ansätze zur Verbesserung von statistischer maschineller Übersetzung), at the University of Kassel. That was ten years ago. Recently, I talked to a friend about the success of Google's AI beating Go master Lee Sedol using a neural network. Could this change machine translation as well?

In September, Google announced on their research blog that they are switching their translation system from the statistics-based approach to Google Neural Machine Translation (GNMT), "an end-to-end learning framework that learns from millions of examples, and provided significant improvements in translation quality". This system is able to do zero-shot translation, as they write in an article published three days ago, on November 22nd. A zero-shot translation is a translation between two languages for which the system has seen no translation examples: e.g., it is trained on examples to translate between English and Japanese and between English and Korean; a zero-shot translation would then be between Japanese and Korean, without any direct training data. As Google states in their blog:

To the best of our knowledge, this is the first time this type of transfer learning has worked in Machine Translation. 
The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?

This is indeed hard to tell: neural networks are closed systems. The computer learns something from a data set in an intelligent but incomprehensible, obscure way. But Google is able to visualize the produced data, and you've got to take a look at the blog post to understand this in detail, but:

Within a single group, we see a sentence with the same meaning but from three different languages. This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. 

Google, this is awesome! Thank you so much for sharing!
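For the technically curious: the mechanism behind zero-shot translation, as described in the paper, is surprisingly simple. One shared model is trained on many language pairs at once, and every source sentence is prefixed with an artificial token naming the desired target language; a zero-shot translation is then just a request for a target language the source language was never directly paired with. A minimal sketch with toy data (the helper function is hypothetical, not Google's code):

def make_example(src, tgt, tgt_lang):
    """Prefix the source with the artificial target-language token."""
    return (f"<2{tgt_lang}> {src}", tgt)

# Training covers English<->Japanese and English<->Korean only:
train = [
    make_example("How are you?", "お元気ですか", "ja"),
    make_example("お元気ですか", "How are you?", "en"),
    make_example("How are you?", "잘 지내세요?", "ko"),
    make_example("잘 지내세요?", "How are you?", "en"),
]
# model = train_shared_seq2seq(train)   # one encoder/decoder for all pairs

# Zero-shot request: Japanese -> Korean, a direction with no training
# data at all; only the target token changes:
source, _ = make_example("お元気ですか", "", "ko")
print(source)  # <2ko> お元気ですか  -- fed to the very same model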

Image: Mihkelkohava Üleslaadija 

Top 5 Critical Voices against Artificial Intelligence

Danger - Artificial Intelligence Ahead

I really love movies such as "Her", "Transcendence" or "Ex Machina" and am fascinated by the mysterious qualities of (human?) consciousness and cognition. But there are some really serious concerns one has to have regarding the development of artificial intelligence (AI). Some of them are topics of the movies named above, others may still be unforeseen, or they slumber in the far too rarely read stories of Philip K. Dick, Stanisław Lem or Isaac Asimov ... Those risks range from the application of weak AI techniques such as machine translation and face recognition in illiberal societies up to the development of strong AIs (whatever that may be, and however one should know) and the questions raised by artificial beings. No wonder that Elon Musk wrote the following on Twitter:

Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.

— Elon Musk (@elonmusk) August 3, 2014

Some weeks ago, the who's who of computer scientists and tech pioneers signed an open letter with the title "Research Priorities for Robust and Beneficial Artificial Intelligence".

In addition to this letter, I present to you the top 5 voices warning of the dangers of artificial intelligence, each with a link to a relevant article by or about them:

1. Joseph Weizenbaum (computer science philosopher): Computerized Gods and the Age of Information

But, as I said, this leads to the crossing of a very subtle line, and after running over that line during programming, the first impression many people get is that the person is inferior to the computer — that the programmer is in some way a defective imitation. And in certain ways the computer is better than human beings.

2. Raul Rojas (AI computer scientist): We should try to move slower

Computers will be smarter than humans (as in Space Odyssey) when they learn to cheat and lie. That is the highest form of intelligence because it requires a “theory of mind”. I have to put myself in the shoes of the other people in order to lie effectively. Monkeys do not have a theory of mind, as has been proved with experiments. They are terrible liars. When computers could lie to us as effectively as HAL, those computers would pass the Turing Test and would be intelligent.

3. Elon Musk (Tesla Motors, SpaceX): Elon Musk Compares Building Artificial Intelligence To “Summoning The Demon”

With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.

4. Bill Gates (richest man on earth, Microsoft founder, Bill and Melinda Gates Foundation): Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned'

First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.

5. Stephen Hawking (theoretical physicist): Stephen Hawking warns artificial intelligence could end mankind

Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.

There are other perspectives as well: take a look at The Myth of AI by computer scientist Jaron Lanier. He argues that AI is a fraud and should not be hyped as a quasi-religious topic (and here is a reply by io9: The Myth of AI Is More Harmful Than AI Itself).

Quantum Power Handwriting Recognition


We expect quantum computers to outperform binary computers by orders of magnitude on certain problems. But today, they are still quite experimental and only work at small scales. So this article (dating back to October) is amazing news: physicist Zhaokai Li and his team from the University of Science and Technology of China in Hefei realized handwriting OCR on a quantum computer, which means they created the first demonstration of artificial intelligence (in the sense of machine learning) on a quantum computer:

That’s an interesting result for artificial intelligence and more broadly for quantum computing. It demonstrates the potential for quantum computation, not just for character recognition, but for other kinds of big data challenges. “This work paves the way to a bright future where the Big Data is processed efficiently in a parallel way provided by quantum mechanics,” say the team.

Physics arXiv Blog: First Demonstration Of Artificial Intelligence On A Quantum Computer
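The experiment implements a quantum version of the support vector machine: training boils down to solving a linear system over the kernel matrix, which is exactly the step a quantum computer can in principle accelerate. Here is a classical sketch of that least-squares SVM on toy data (the two features per character image and all numbers are assumed for illustration; the quantum part is the hardware, not this logic):

import numpy as np

# Toy training data: two assumed features per character image (say,
# horizontal and vertical ink ratios), two classes "6" / "9".
X = np.array([[0.2, 0.8], [0.3, 0.9],    # class "6" -> +1
              [0.8, 0.2], [0.9, 0.3]])   # class "9" -> -1
y = np.array([1.0, 1.0, -1.0, -1.0])

# Least-squares SVM: training is one linear system over the kernel
# matrix -- the step the quantum algorithm speeds up.
gamma = 1.0                               # regularization parameter
K = X @ X.T                               # linear kernel matrix
n = len(y)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)),  K + np.eye(n) / gamma]])
solution = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alphas = solution[0], solution[1:]

def classify(x):
    """Sign of the learned decision function."""
    return int(np.sign(alphas @ (X @ x) + b))

print(classify(np.array([0.25, 0.85])))   # -> 1, i.e. looks like a "6"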

 

Image: "DWave 128chip" by D-Wave Systems, Inc. - D-Wave Systems, Inc.. Licensed under CC BY 3.0 via Wikimedia Commons.

Quantum Computer reads handwritten characters

There is a quote often attributed to Richard Feynman and it goes like this:

"If you think you understand quantum mechanics, you don't understand quantum mechanics."

So, here is the first application of quantum computing in the area of artificial intelligence:

A Chinese team of physicists have trained a quantum computer to recognise handwritten characters, the first demonstration of “quantum artificial intelligence”

Physics arXiv: First Demonstration Of Artificial Intelligence On A Quantum Computer