Google’s latest update to its translation service represents a technological leap greater than any the field has seen in the last ten years combined. Google Translate now uses Google Neural Machine Translation (GNMT) to translate whole sentences at a time rather than word by word. The result, researchers say, is much closer to a human translation, and the final output reads far more smoothly.

In September 2016, Google announced that neural networks would begin powering Google Translate, and in mid-November it released the technology for eight language pairs: English to and from French, German, Spanish, Portuguese, Chinese, Japanese, Korean, and Turkish. The system is not flawless and still makes mistakes (e.g., it drops words or mishandles people’s names), but errors have been cut by 55 to 85% in a number of language pairs.

Will this mark the beginning of a new era for the translation industry? Anyone familiar with translation knows how hard it is to carry a text from one language to another while conveying all of its nuances and rendering culture-specific values and context. So how could a machine ever perform such a complicated task?

Phrase-based Deep Learning and AI

Neural Machine Translation (NMT) employs state-of-the-art technology to deliver more context-accurate translations instead of broken sentences translated word by word, significantly improving translation quality. It relies on long short-term memory recurrent neural networks (LSTM-RNNs) trained on graphics processing units (GPUs) and tensor processing units (TPUs). Knowledge transfer is the key ingredient in this new recipe: Google calls it “Zero-Shot Translation,” and it allows the machine to translate between two languages without having been explicitly trained on that pair. To put it simply, using Google’s own example, the new system can translate between Korean and Japanese without ever having seen Korean-Japanese training data. Google researchers trained the multilingual system on the pairs they did have data for, such as Japanese-English and Korean-English. Because the GNMT system shares its parameters across these four translation directions, the researchers found that it could then translate between Korean and Japanese without any direct training. But how?
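To make the idea concrete, here is a minimal sketch (not Google’s actual code) of how one shared multilingual model can serve a zero-shot language pair. It assumes the target-language-token mechanism described in Google’s multilingual NMT research, in which an artificial token such as <2ja> is prepended to the source sentence; the function names and the placeholder decode step are purely illustrative.

```python
# Minimal sketch of multilingual, zero-shot translation requests.
# Everything below is illustrative; it is not Google's implementation.

# Directions for which parallel training data actually exists:
TRAINED_PAIRS = {
    ("ja", "en"), ("en", "ja"),   # Japanese <-> English
    ("ko", "en"), ("en", "ko"),   # Korean   <-> English
}

def make_model_input(source_sentence: str, target_lang: str) -> str:
    """Prepend the target-language token so one shared model handles all pairs."""
    return f"<2{target_lang}> {source_sentence}"

def translate(source_sentence: str, source_lang: str, target_lang: str) -> str:
    """Stand-in for the shared encoder-decoder network (hypothetical)."""
    model_input = make_model_input(source_sentence, target_lang)
    kind = "supervised" if (source_lang, target_lang) in TRAINED_PAIRS else "zero-shot"
    # A real system would run the shared LSTM encoder-decoder here; this
    # placeholder just reports what kind of request it would be.
    return f"[{kind}] decode({model_input!r})"

print(translate("안녕하세요", "ko", "en"))  # Korean -> English: seen in training
print(translate("안녕하세요", "ko", "ja"))  # Korean -> Japanese: never seen, zero-shot
```

The point of the sketch is that nothing changes at inference time for the unseen pair: the same shared parameters and the same target-token convention are used, which is what makes the zero-shot behavior possible.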

The new system, Google researchers found, transfers “translation knowledge” from one language pair to the others. In their words, “this means the network must be encoding something about the semantics of the sentence rather than simply memorizing phrase-to-phrase translations. We interpret this as a sign of the existence of an interlingua in the network.” The details of this breakthrough are still opaque, but it is nonetheless a pioneering result, as this is the first time this kind of transfer learning has worked in machine translation.
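As a rough illustration of how such a claim can be probed, the sketch below compares encoder representations of the same sentence written in different languages using cosine similarity. The vectors are hypothetical placeholders; the actual analysis used the trained GNMT encoder’s activations and dimensionality-reduction visualizations, so this only shows the shape of the measurement, not real results.

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical encoder outputs for "The weather is nice today." in three
# languages (illustrative numbers only, not real model activations).
embedding_en = [0.91, 0.12, -0.33, 0.48]
embedding_ja = [0.88, 0.15, -0.30, 0.51]
embedding_ko = [0.90, 0.10, -0.35, 0.45]

# If the network really encodes meaning in a shared, language-neutral space,
# equivalent sentences should score highly regardless of language.
print("en~ja:", round(cosine_similarity(embedding_en, embedding_ja), 3))
print("en~ko:", round(cosine_similarity(embedding_en, embedding_ko), 3))
```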

Conclusion

So far, GNMT has proved to be the most effective translation system because it treats a sentence as a single segment to translate and thus attempts to capture the nuance behind individual words. It still makes mistakes, however, particularly when it encounters rare words or proper names, in which case it falls back on word-by-word translation. A gap certainly remains between human and machine translation: machine output still needs to be proofread and often has to be rewritten. But Google’s latest accomplishment in pioneering new translation technology marks the beginning of a new era of advanced machine translation, and it has the potential to revolutionize global communication. It takes forward-thinkers to change the world, and Google may once again be on the cusp of doing so.

This article was written by professional writer Ilaria Ghelardoni, in association with Ulatus.