The birth and history of machine translation

Machine translation is no longer a science fiction fantasy. Computer systems are improving dramatically at understanding the complex nature of language. But are these systems sophisticated enough to take on human translators?

Machine translation has been in the works for decades, and every day it becomes less of a science fiction hope and more of a reality. The nuances of language are difficult even for humans to pick up, and it is now clear that this is the very reason machine translation has only come so far.

EARLY HISTORY

Developers have dreamed of computers that can quickly understand and translate language ever since the potential of such a device was first realized. One of the most important outcomes of creating and improving translation technology is that it pushes computers beyond purely mathematical and logical functions and into the more complex relationship between words and meaning.
The early history of machine translation dates back to the late 1940s. Warren Weaver of the Rockefeller Foundation brought together machine-based code breaking and natural language processing, pioneering the concept of computer translation as early as 1949. These proposals can be found in his “Memorandum on Translation.”
Fascinatingly enough, it did not take long before computer translation projects were well underway. In 1954, the research team behind the Georgetown-IBM experiment demonstrated a machine that translated more than 60 Russian sentences into English using a vocabulary of just 250 words.

CURRENT DEVELOPMENTS

People thought machine translation was on the fast track to breaking down communication barriers, and many translators began to fear for their jobs. However, progress stalled before it ever hit its stride, held back by subtle nuances of language that computers simply could not pick up on.
No matter the language, words often have multiple meanings or connotations. Human brains are simply better equipped than computers to access the complex framework of meaning and syntax. In 1966, the US Automatic Language Processing Advisory Committee (ALPAC), formed two years earlier, reported that machine translation was not worth the effort or resources being used to develop it.

1970-1990

Not all countries shared ALPAC's view. In the 1970s, Canada developed the METEO system, which translated weather reports from English into French. It was a simple program capable of translating 80,000 words per day, and it was successful enough to remain in use into the 2000s before requiring a system update.
The French Textile Institute used machine translation to convert abstracts from French into English, German, and Spanish. Around the same time, Xerox used its own system to translate technical manuals. Both were working effectively as early as the 1970s, but machine translation was still only scratching the surface, confined largely to technical documents.
By the 1980s, researchers were diving into translation memory technology, the first real step toward overcoming the challenges posed by nuanced communication. But systems continued to stumble over the same pitfalls when trying to convert text into a new language without losing meaning.

2000s

The creation of the Internet, and all the opportunity it offered, gave machine translation new momentum. In 2003, Franz-Josef Och won a machine translation speed competition, and he went on to become head of Translation Development at Google. By 2012, Google announced that Google Translate was handling enough text to fill one million books every day.
Japan, too, has led the machine translation revolution, creating speech-to-speech translation for mobile phones that works across English, Japanese, and Chinese. These advances are the result of investing time and money into computer systems modeled on neural networks instead of memory-based functions.

Along the same lines, Google announced in 2016 that implementing a neural network approach had improved clarity across Google Translate, eliminating much of its earlier clumsiness. They called it the Google Neural Machine Translation (GNMT) system. Remarkably, the system began translating language pairings it had never been taught: after learning to translate between English and Portuguese, and between English and Spanish, it could translate between Portuguese and Spanish directly, even though it had never been assigned that pairing.

FUTURE ENDEAVORS

It was once believed that the time had finally come for machine translation to outperform its human counterparts. In 2017, Sejong Cyber University and the International Interpretation and Translation Association of Korea staged a competition between four human translators and leading machine translation systems. The machines translated the text faster than the humans, without a doubt, but they still could not compete with the human mind when it came to nuance and accuracy.
People have been dreaming about the swiftness and ease promised by accurate, reliable machine translation since before the 1950s. The fanciful idea of a shared way to communicate worldwide still has a long way to go. Creating a computer that thinks more like a human will open the world to possibilities far beyond simple communication. Technology has advanced well past using a machine to crunch numbers; it brings the world closer together with each passing year. But for now, you are better off sticking with human translators for the texts that matter.