In a borderless business and cultural environment, translating between languages has become an increasingly urgent, ongoing need – even for small businesses. Globalisation is not the only force pushing this industry; digitalisation is too. The international translation market is huge: in 2017, it was projected to generate roughly $34 billion in business worldwide. Translation, however, is a time-consuming task. Even when it is performed by people, no two results are exactly the same. Why? Because cultural nuances abound, and there is frequently no one “right” way to translate something. Interpretation is highly subjective and contextual.
Machine translation is an even trickier proposition, because computers cannot interpret language in the same way that humans do. Computer “languages” are essentially command codes that ultimately come down to one thing: a binary yes/no, go/no-go understanding of the world. Computer language, in other words, is not subjective. Artificial intelligence, or AI, is the field of study that examines how to broaden that functionality, and MT – Machine Translation – is part of that discussion. Here is the basic problem in a nutshell: computers are literal and cannot understand context. This opens up a range of issues, particularly around translation and localisation. That said, new advances in computer algorithms and artificial intelligence are closing the gap. So what is new in the world of machine translation?
WHAT IS MACHINE TRANSLATION?
Machine translation is a field of computational linguistics that uses software to interpret and translate between languages. On one level, translation is actually very simple: a dog is a dog in every language. However, what that dog does, or how it acts, is a different matter entirely. MT cannot necessarily distinguish the cultural or local context of language, so its interpretation of phrases and idioms, for example, is hit or miss. It is difficult enough when human beings interpret each other’s languages; when computers get into the game, the conversation becomes even more complicated.
Professional MT software usually offers customisation by profession. It also tends to improve output by limiting the scope of input: the more formulaic the input, the better MT tends to work, because there is less ambiguity. MT has also improved dramatically since the 1950s, when it first became a topic of scientific interest. There are still those who argue that artificial intelligence will never match the human brain, and that MT will therefore never rival “real humans” when it comes to recognising meaning. Recent developments chart a course that is still somewhere in the middle.
NEW LINGUISTIC ALGORITHMS CHANGE THE GAME
For all the naysayers, the field continues to expand and develop new tools. Researchers from the University of Liverpool, for example, introduced a groundbreaking tool last year: a set of algorithms that help computers contextualise speech, much as a human would. The algorithms look up a word and then help the computer guess what should appear next to it, and the researchers can then score and compare how well the algorithms perform. So far, the results have been interesting, if not outright encouraging.
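The core idea – guessing what should appear next to a word – can be illustrated with a toy model. The sketch below is not the Liverpool researchers’ actual tool; it is a minimal bigram predictor in Python, built on an invented mini-corpus, showing how simply counting which words follow which lets a program make context-based guesses:

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the dog runs . the dog runs . the dog barks . the cat sleeps .".split()

# For each word, count which words follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Guess the most likely word to appear after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("dog"))  # → 'runs', the most frequent follower of "dog" here
```

Real systems use far richer statistics and far larger corpora, but the principle is the same: context is recovered from patterns of co-occurrence rather than from any human-like understanding of meaning.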
In the future, computers will become more adept at translating, summarising, and contextualising language. While the technology is clearly not fully baked yet, it continues to progress – though whether it will ever progress enough to match the insights of a human is still very much in doubt.