
The Surprising History of Machine Translation

The only type of machine translation many people are familiar with is Google Translate. However, machine translation has been around for almost 70 years; in fact, it got its start with the very first computers. Language barriers have existed as long as humanity, so it only makes sense that one of the first problems computers were asked to solve involved language. Machine translation has come a long way in 70 years, and it is now a valuable asset for translation companies and their clients. Let’s look at the history of machine translation and see where it is going.

1950s-1980s: Rules-Based Machine Translation (RBMT)

Pioneers like Alan Turing, J.V. Atanasoff, and Clifford Berry did the theoretical and engineering work in the 1930s and 1940s that paved the way for modern computers. By the early 1950s, the first stored-program electronic computers were available, and machine translation quickly became one of the first tasks researchers set for them.

In 1954, a joint project between IBM and Georgetown University gave the first public demonstration of machine translation: an IBM 701 computer automatically translated more than 60 sentences from Russian into English. The experiment proved that machine translation was possible, but it had real limitations. The system knew only about 250 words and a handful of grammar rules, and the test sentences were carefully chosen to avoid ambiguity. Programming the computers was also expensive, time-consuming work. After the 1966 ALPAC report concluded that machine translation was slower, less accurate, and more expensive than human translation, most government and corporate funding dried up.

These first experiments used rules-based machine translation (RBMT). RBMT translates using a bilingual dictionary and a series of hand-programmed language rules, and the approach has a few fundamental weaknesses. First, you need to teach the computer the full vocabulary of each language, which takes a lot of time. Second, you need to encode the grammar of both the source and target languages. And third, RBMT’s word-for-word substitution produces stilted, often low-quality translations.
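
To make this concrete, here is a minimal sketch of the RBMT idea in Python. The tiny dictionary and the single adjective-noun reordering rule are invented for illustration; a real system needed thousands of dictionary entries and hundreds of rules.

```python
# A toy rules-based translator, English -> Spanish (illustrative only).

# Hypothetical miniature bilingual dictionary.
DICTIONARY = {
    "the": "el",
    "red": "rojo",
    "car": "coche",
    "is": "es",
    "fast": "rápido",
}

ADJECTIVES = {"rojo", "rápido"}  # adjectives the reordering rule knows about

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    # Rule 1: substitute each word using the dictionary.
    out = [DICTIONARY.get(w, w) for w in words]
    # Rule 2: Spanish usually places adjectives after nouns,
    # so swap adjective-noun pairs ("rojo coche" -> "coche rojo").
    i = 0
    while i < len(out) - 1:
        if out[i] in ADJECTIVES and out[i + 1] not in ADJECTIVES:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return " ".join(out)

print(translate("the red car is fast"))  # -> el coche rojo es rápido
```

Even this tiny example shows the problem: every new word and every grammatical exception needs another hand-written rule.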

1980s: Example-Based Machine Translation (EBMT)

In the 1980s, Japanese researchers pioneered the next phase of machine translation. Example-based machine translation (EBMT) reuses previously translated phrases, altering them slightly to fit new situations. For example, say you need to translate “I went to the cinema.” In RBMT, you would build the translation word by word from dictionary entries and rules. In EBMT, you could start from a similar sentence you have already translated, “I went to the park”, and simply swap “park” for the translation of “cinema”. This cuts down on the time it takes to translate.
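
Below is a minimal sketch of that idea in Python. The one-sentence example bank and the word-level patching are invented for illustration; real EBMT systems matched and recombined phrases far more carefully.

```python
# A toy example-based translator (illustrative only).
# It finds a stored example that differs from the input by one word,
# then patches that word using a small dictionary.

# Hypothetical translation memory: previously translated sentence pairs.
EXAMPLES = {
    "i went to the park": "fui al parque",
}

# Hypothetical word-level dictionary used to patch the near-match.
WORDS = {"cinema": "cine", "park": "parque"}

def translate(sentence: str) -> str | None:
    target = sentence.lower().split()
    for source, translation in EXAMPLES.items():
        src = source.split()
        if len(src) != len(target):
            continue
        # Positions where the new sentence differs from the stored example.
        diffs = [i for i, (a, b) in enumerate(zip(src, target)) if a != b]
        if len(diffs) == 1:  # exactly one word differs: reuse the example
            old, new = src[diffs[0]], target[diffs[0]]
            if old in WORDS and new in WORDS:
                return translation.replace(WORDS[old], WORDS[new])
    return None  # no close-enough example found

print(translate("I went to the cinema"))  # -> fui al cine
```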

Example-based machine translation was a big breakthrough. It showed that you could teach a machine from previous examples, so you no longer needed to spend huge amounts of time programming grammar and vocabulary rules into the computer.

1990s-2015: Statistical Machine Translation (SMT)

In 1990, IBM researchers published the first statistical machine translation (SMT) approach. Statistical machine translation uses parallel texts, the same documents in two languages, to build statistical models. The researchers theorized that if a computer looked at enough text, it could find patterns in how words and phrases are translated, and then use those patterns to translate new sentences more quickly and naturally. SMT was a huge breakthrough.
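
Here is a minimal sketch of the counting idea at the heart of SMT, using an invented four-sentence parallel corpus. Real systems used far larger corpora and more sophisticated alignment models, but the core is the same: translation probabilities estimated from data rather than hand-written rules.

```python
# Toy illustration of the statistics behind SMT: estimate
# P(spanish_word | english_word) by counting co-occurrences
# in a tiny, invented parallel corpus.
from collections import Counter, defaultdict

PARALLEL_CORPUS = [
    ("the house", "la casa"),
    ("the car", "el coche"),
    ("the green house", "la casa verde"),
    ("green house", "casa verde"),
]

cooccur = defaultdict(Counter)
for english, spanish in PARALLEL_CORPUS:
    for e in english.split():
        for s in spanish.split():
            cooccur[e][s] += 1  # count every pairing within a sentence pair

def translation_probs(english_word: str) -> dict[str, float]:
    counts = cooccur[english_word]
    total = sum(counts.values())
    return {s: round(c / total, 2) for s, c in counts.items()}

# "casa" co-occurs with "house" more often than any other Spanish word,
# so it comes out with the highest probability.
print(translation_probs("house"))  # {'la': 0.29, 'casa': 0.43, 'verde': 0.29}
```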

Statistical machine translation’s rise coincided with the rise of the internet. Services like Systran and AltaVista’s Babel Fish began offering free machine translation online, and for about 15 years smaller companies offered machine translation as a value-added service for clients. Then, in 2006, Google launched Google Translate, and it was a huge hit. Google Translate started out using statistical machine translation. Most people knew the translations weren’t very good, but they were convenient. By 2012, Google said the service translated enough text every day to fill 1 million books.

2015 and On: Neural Machine Translation (NMT)

Then, around 2015, machine translation changed forever as researchers turned to neural networks; by 2016, Google had switched Google Translate to neural machine translation. Neural machine translation (NMT) uses artificial neural networks, a form of artificial intelligence, to translate, and it can quickly produce accurate translations for huge bodies of text. NMT relies on a technique called “deep learning”: by training networks loosely modeled on how the brain works, researchers can teach machines to translate in a way that is closer to how humans do. This means NMT translations are more fluent and easier to read. Neural machine translation can also learn from its mistakes, and this is one of its beauties: when a mistake is corrected, the system can learn to translate that passage correctly next time.
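
You can even try a modern neural translation model yourself. The short Python sketch below uses the open-source Hugging Face transformers library with a pretrained Helsinki-NLP model; it assumes the library is installed (pip install transformers) and that the model can be downloaded on first use.

```python
# Run a pretrained neural machine translation model locally.
# Assumes: pip install transformers torch (model downloads on first run).
from transformers import pipeline

# Load an open-source English -> Spanish NMT model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

result = translator("Machine translation has come a long way in 70 years.")
print(result[0]["translation_text"])
```

Under the hood, the network encodes the whole sentence into numeric vectors and decodes the translation token by token, which is why NMT handles context so much better than word-for-word rules.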

While neural machine translation can be quite good, it still isn’t always accurate. Humans remain better at judging style and context when translating. The next wave of machine translation will likely lean even more heavily on artificial intelligence, and researchers around the world are training AI to build better NMT solutions. That said, human translators won’t be replaced any time soon; they still far outperform machines where nuance matters.

 

Acutrans is at the forefront of this technology. We are proud to offer both machine translation and machine translation with post-editing to our clients. These services are ideal for clients with large volumes of text in non-critical applications. Machine translation is fast and cost-effective; even with post-editing, it costs a fraction of conventional human translation. Reach out today for your free machine translation quote.
