Google

DeepMind can play it hard as well

Google's AI DeepMind is an impressive system that has recently mastered hard tasks such as playing Go or generating human language. In a new article, Google scientists describe how DeepMind reacts to challenges such as competing or cooperating with another player for success in computer games. DeepMind plays those games and learns by doing so. In the end, it plays aggressively when competing and cooperatively when cooperating. This is no surprise, and the term "aggressive" is only used because the game consists of colored pixels competing for green pixels (apples) that appear randomly on the screen. To prevent the other player from reaching an apple, they can "fire" "laser beams" that make the opponent skip a round - which could just as well be dubbed "sending" "love messages", but whatever. I am still in awe of DeepMind and its abilities and long for every new detail - but after a game of apple gathering it is a bit early for Terminator references.

Business Insider: Google's new AI has learned to become 'highly aggressive' in stressful situations

Interlingua in Google Translate

Machine translation is the master discipline of computational linguistics; it was one of the first major tasks defined for computers in the years after World War II. Warren Weaver, an American science administrator, stated in a famous 1949 memorandum called "Translation": "It is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the 'Chinese code'. If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?"

After many ups and downs in the following decades, the first real breakthrough came with fast PCs, fast web connections and the possibility to compile and process immense language data sets. But instead of compiling grammar sets to define one language, then another, and their relationships, the use of statistical models became en vogue: instead of years of linguistic work, a few weeks of processing yielded similar results. While rule-based systems created nice-looking sentences with often stupid word choices, statistics-based systems created stupid-looking sentences with good phrase quality. One thing linguists as well as statisticians were always dreaming about was the so-called Interlingua: a kind of neutral language in between, which would allow translating the pure meaning of a sentence into this Interlingua and afterwards constructing a sentence in the target language that bears the same meaning. There is a common three-step pyramid to describe the rising quality of machine translation:
First level: Direct translation from one language to another.
Second level: Transfer, via one elaborated representation or another, e.g. rules or statistics.
Third level: Translation via an Interlingua.

There were many attempts, from planned languages such as Esperanto up to semantic primes and lexical functions - the result was always the same: there is no Interlingua. "Meaning" is too complex a concept to model in a static way.

In 2006, Google released Google Translate, a nowadays very popular MT system that was originally statistics-based, created by the German computer scientist Franz Josef Och (now at Human Longevity). This event inspired me in a very personal way to focus my linguistics career on computational linguistics and to write my Magister thesis titled "Linguistic Approaches to Improve Statistical Machine Translation" (Linguistische Ansätze zur Verbesserung von statistischer maschineller Übersetzung) at the University of Kassel. That was 10 years ago. Recently, I talked to a friend about the success of Google's AI in beating Go master Lee Sedol using a neural network. Would this be able to change machine translation as well?

In September, Google announced in their research blog that they are switching their translation system from the statistics-based approach to Google Neural Machine Translation (GNMT), "an end-to-end learning framework that learns from millions of examples, and provided significant improvements in translation quality". This system is able to do zero-shot translation, as they write in an article published three days ago, on November 22nd. A zero-shot translation is a translation between two languages for which the system has seen no translation examples: e.g., if it is trained with examples to translate between English and Japanese and between English and Korean, a zero-shot translation would be one directly between Japanese and Korean, for which it has no training data. As Google states in their blog:

To the best of our knowledge, this is the first time this type of transfer learning has worked in Machine Translation. 
The success of the zero-shot translation raises another important question: Is the system learning a common representation in which sentences with the same meaning are represented in similar ways regardless of language — i.e. an “interlingua”?

This is indeed hard to tell: neural networks are closed systems. The computer learns something from a data set in an intelligent but incomprehensible and opaque way. But Google is able to visualize the produced data (take a look at the blog post to understand this in detail):

Within a single group, we see a sentence with the same meaning but from three different languages. This means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of existence of an interlingua in the network. 
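
To make the zero-shot setup concrete: the announcement describes keeping the standard GNMT architecture and simply adding an artificial token at the beginning of each input sentence that names the desired target language. Here is a minimal sketch of that data layout (the token spelling and the example sentences are my own illustration, not Google's actual format):

```python
# Sketch of the multilingual trick behind zero-shot translation: one model
# is trained on several language pairs, and an artificial token prepended
# to each source sentence tells it which target language to emit.
# (Token spelling "<2xx>" and the sentences are illustrative assumptions.)

training_examples = [
    ("<2ja> How are you?", "お元気ですか?"),  # English -> Japanese
    ("<2en> お元気ですか?", "How are you?"),  # Japanese -> English
    ("<2ko> How are you?", "잘 지내요?"),     # English -> Korean
    ("<2en> 잘 지내요?", "How are you?"),     # Korean -> English
]

# A zero-shot request is just an unseen token/source combination:
# Japanese -> Korean never occurs in the training data, yet the shared
# internal representation lets the model produce a translation anyway.
zero_shot_input = "<2ko> お元気ですか?"

for source, target in training_examples:
    print(f"{source!r:32} -> {target!r}")
print(f"zero-shot: {zero_shot_input!r} -> ???")
```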

Google, this is awesome! Thank you so much for sharing!

Image: Mihkelkohava Üleslaadija 

Incredible WaveNet Speech Synthesis

Yaaaay, there is certainly some magic in deep neural networks - after mastering Go and making huge progress in the field of speech recognition, Google now presents WaveNet, a deep neural network-based approach to speech synthesis. It sounds astoundingly real and can even compose music or fictional language-like sounds. Amazing. And spooky.

WaveNet changes this paradigm by directly modelling the raw waveform of the audio signal, one sample at a time. As well as yielding more natural-sounding speech, using raw waveforms means that WaveNet can model any kind of audio, including music.
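
The "one sample at a time" idea rests on stacks of causal, dilated convolutions: each output sample may only depend on past samples, and doubling the dilation per layer grows the receptive field exponentially. A toy numpy sketch of that building block (weights and sizes invented for illustration; the real WaveNet adds gated activations, residual connections and a softmax over quantized amplitudes):

```python
import numpy as np

def causal_dilated_conv(signal, kernel, dilation):
    """1-D convolution where each output sample depends only on the
    current and past input samples, spaced `dilation` steps apart."""
    pad = dilation * (len(kernel) - 1)
    padded = np.concatenate([np.zeros(pad), signal])
    out = np.zeros(len(signal))
    for t in range(len(signal)):
        for k, w in enumerate(kernel):
            out[t] += w * padded[pad + t - k * dilation]
    return out

# Stacking layers with doubling dilations grows the receptive field
# exponentially: 4 layers of kernel size 2 already cover 16 past samples.
x = np.random.randn(16)
for dilation in [1, 2, 4, 8]:
    x = np.tanh(causal_dilated_conv(x, kernel=[0.5, 0.5], dilation=dilation))
print(x.shape)  # (16,) - same length; each sample now "sees" 16 inputs
```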

Heise: http://www.heise.de/newsticker/meldung/Google-DeepMind-Sprachsynthese-so...

DeepMind: https://deepmind.com/blog/wavenet-generative-model-raw-audio/

Google Invincible

The German newspaper Frankfurter Allgemeine Zeitung talked with the Bavarian SEO specialist Marcus Tandler from OnPage. Topics ranged from the rise and fall of platforms such as Yatego and Google's influence on this, to the rise of Google, the fall of Altavista, and Tandler's prognosis that Google seems to be invincible (although he once thought the same of Altavista).

FAZ.NET: Eine Plattform für alles

 

Google in the FAZ

Several very interesting articles on Google at FAZ.net; it is hard to keep up with the reading.
First, here is a conversation with Google:

April 3 - Robert M. Maier: Von der Suchmaschine zur Weltmacht - Angst vor Google
April 9 - Eric Schmidt: About the good things Google does - A chance for growth
April 16 - Mathias Döpfner: Offener Brief an Eric Schmidt - Warum wir Google fürchten
April 17 - Thomas Thiel: Reaktionen auf Döpfners Google-Kritik - Ein Goliath macht sich ganz klein

And here are some further articles on the topic:
April 7 - Self-censorship in the digital age - We won't be able to recognize ourselves
April 10 - Einsatz im Krankenhaus - Googles wundertätige Datenbrille
April 16 - Gegen Googles „Library Project“ - Unterschätzt die Absichten nicht!

Google Research: Relation Corpus

One of the most difficult tasks in NLP is called relation extraction. It’s an example of information extraction, one of the goals of natural language understanding. A relation is a semantic connection between (at least) two entities.

And because it is such a hard task, Google has published a data set meant to help other researchers train information retrieval and relation extraction systems. It contains 10,000 "place of birth" and more than 40,000 "attended or graduated from an institution" relations, extracted from Wikipedia and each judged correct by at least five human raters. The data comes as "predicate subject object" triples, together with plenty of additional data such as links and judgment details. Further relations are announced to follow. All details in the Google Research Blog:

50,000 Lessons on How to Read: a Relation Extraction Corpus
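
To picture what such a triple plus its rater judgments might look like, here is a hypothetical record and a simple acceptance check (field names, values and the majority rule are invented for illustration; the real schema is documented in the blog post and the data files):

```python
import json

# Hypothetical record illustrating a "predicate subject object" triple
# with human rater judgments (invented schema, not the corpus's actual one).
record = {
    "predicate": "place_of_birth",
    "subject": "Frank Zappa",
    "object": "Baltimore",
    "judgments": ["yes", "yes", "yes", "yes", "no"],  # five human raters
}

def accepted(rec):
    """Illustrative rule: a triple counts as correct when a majority
    of raters say yes."""
    votes = rec["judgments"]
    return votes.count("yes") > len(votes) / 2

print(json.dumps(record, indent=2))
print("accepted:", accepted(record))  # accepted: True
```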

English Word and Letter Frequency

Mark Mayzner, who compiled English letter-frequency tables in the 1960s, wrote to Google's Peter Norvig: "perhaps your group at Google might be interested in using the computing power that is now available to significantly expand and produce such tables as I constructed some 50 years ago, but now using the Google Corpus Data, not the tiny 20,000 word sample that I used."

English Letter Frequency Counts: Mayzner Revisited or ETAOIN SRHLDCU
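
The core computation behind such tables is a simple count, just run at enormous scale. A toy version for intuition (Norvig's actual analysis runs over the Google Books n-gram corpus, not a sample string):

```python
from collections import Counter

def letter_frequencies(text):
    """Rank letters by relative frequency, ignoring case and non-letters."""
    letters = [c for c in text.upper() if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return [(letter, n / total) for letter, n in counts.most_common()]

sample = "Peter Norvig revisited Mayzner's tables with Google's corpus data."
for letter, freq in letter_frequencies(sample)[:8]:
    print(f"{letter}: {freq:.3f}")
```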
