Taking New Directions in Human-Centered Machine Translation

It is a truth universally acknowledged that machine translation can be bad at times. And when it happens, we ask ourselves why it has to be so inaccurate. What exactly are the problems that plague machine translation, especially consumer-facing engines such as Google Translate, and what are researchers and engineers doing to fix them?

Douglas Hofstadter, a professor of cognitive science and comparative literature at Indiana University Bloomington, describes his distrust of Google Translate in an article for The Atlantic aptly titled “The Shallowness of Google Translate.” To illustrate the shortcomings of this beloved translation engine, Hofstadter takes a passage from a book by Austrian mathematician Karl Sigmund, originally written in German:

Nach dem verlorenen Krieg sahen es viele deutschnationale Professoren, inzwischen die Mehrheit in der Fakultät, gewissermaßen als ihre Pflicht an, die Hochschulen vor den “Ungeraden” zu bewahren; am schutzlosesten waren junge Wissenschaftler vor ihrer Habilitation. Und Wissenschaftlerinnen kamen sowieso nicht in frage; über wenig war man sich einiger.

Here is Hofstadter’s eloquent translation into English:

After the defeat, many professors with Pan-Germanistic leanings, who by that time constituted the majority of the faculty, considered it pretty much their duty to protect the institutions of higher learning from “undesirables.” The most likely to be dismissed were young scholars who had not yet earned the right to teach university classes. As for female scholars, well, they had no place in the system at all; nothing was clearer than that.

And here is the same passage as rendered by Google Translate’s neural machine translation engine:

After the lost war, many German-National professors, meanwhile the majority in the faculty, saw themselves as their duty to keep the universities from the “odd”; Young scientists were most vulnerable before their habilitation. And scientists did not question anyway; There were few of them.

Hofstadter points out numerous errors, some of them inconsequential, others critical. For starters, Hofstadter translates the original German word Ungeraden as “undesirables,” in line with his historical, contextual understanding of the term; Google, on the other hand, takes the word at its literal meaning (“un-straight” or “uneven”) and renders it as “odd,” since that is how the word was almost always translated in its training data.

The long German word Wissenschaftlerinnen in the last sentence is of particular note. Hofstadter recognizes the feminine suffix “-innen” and translates the word accordingly as “female scholars”; Google misses that morphological marker entirely and renders it as plain “scientists,” which not only drops the grammatical gender but ends up misrepresenting the entire point of the paragraph.
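To make the failure mode concrete, here is a toy sketch in Python. This is not Google’s actual pipeline, and the two-entry gloss table is invented for illustration; it simply contrasts picking a word’s statistically dominant translation with checking its morphology first:

```python
# Toy illustration only: invented glosses showing why choosing a word's
# most frequent translation loses the feminine suffix Hofstadter noticed.

NAIVE_GLOSSES = {
    "wissenschaftler": "scientists",
    "wissenschaftlerinnen": "scientists",  # dominant gloss in the data; gender lost
}

def naive_lookup(word: str) -> str:
    """Pick the statistically dominant gloss, ignoring morphology."""
    return NAIVE_GLOSSES.get(word.lower(), word)

def morphology_aware(word: str) -> str:
    """Check for the feminine plural suffix '-innen' before glossing."""
    stem = word.lower()
    if stem.endswith("innen"):
        return "female " + naive_lookup(stem[: -len("innen")])
    return naive_lookup(stem)

print(naive_lookup("Wissenschaftlerinnen"))      # scientists (gender lost)
print(morphology_aware("Wissenschaftlerinnen"))  # female scientists
```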

And with this translation—can we even call it that?—Hofstadter expresses his distrust of mainstream translation engines. “[The translation] doesn’t mean what the original means—it’s not even in the same ballpark. It just consists of English words haphazardly triggered by the German words. Is that all it takes for a piece of output to deserve the label translation?” The main problem, as Hofstadter points out, is that machines are not yet capable of understanding context, which resides primarily in the human mind—unconsciously. There is a wide-reaching net of interconnected ideas and related concepts that humans employ to synthesize whole, structured pieces of writing, and this is what’s missing from machine translation. 

It’s easy to think, given the sound structure of Google’s neatly packaged translations, that machine translation has reached human parity, but scientists beg to differ. While language processing and machine translation are heralded by the media as the next big thing, the field is still in its infancy; current machine translation models are nowhere near the level the media purport them to be. Here is an infographic from The Economist highlighting the surprisingly brief and recent development of machine translation:

Image Credits: The Economist, “Finding a Voice”

As evidenced by the chart above, the history of language technology is surprisingly short. It has only been half a decade since Google released the modern neural version of Google Translate (supporting only eight languages at the time). Hofstadter believes that people think of machine translation as more capable and developed than it is, due to the ELIZA effect—the tendency to assume humanness and consciousness in artificial intelligence. The more fluent and natural a translation sounds, the more people believe it to be human-like in its accuracy; the content and ideas of the original text, however, are largely distorted and destroyed along the way. “The [machine translation] engine isn’t reading anything,” Hofstadter clarifies, “not in the normal human sense of the verb ‘to read.’ It’s processing text.”

The challenge now is not to perfect an already capable machine, but rather to steer current research in new directions that will bridge the gap between what people deem machine translation to be and what it actually is. In 2021, researchers from the University of California, Berkeley, Black in AI, and Google teamed up to examine how machine translation could better serve its users, pointing out a few concrete directions for improvement in the process.

Given the near impossibility of a successful, completely AI-powered translation model (at least in the near future), the researchers posit that some level of human supervision is necessary for quality machine translation. Until machine translation is imbued with some sense of agency and conscious contextualization, human-centered machine translation is the best choice for users who need a document or piece of writing translated into a language they do not know, and vice versa.

In their 2021 paper, “Three Directions for the Design of Human-Centered Machine Translation,” the aforementioned researchers introduce innovative directions for further improvement in machine translation. The first is implementing methods to help users craft good inputs. The best way to overcome MT shortcomings is to make the source text as machine-friendly as possible, so that it is easier for engines to translate. Quantitative and qualitative evidence has revealed that “MT models perform best on simple, succinct, unambiguous text,” so having the machine first determine whether an input text is easy to translate—and offer advice on how to better phrase it—would help tremendously with the quality of output translations. Of course, there are open questions as to how such a system would be programmed and implemented.
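As a rough sketch of what such an input check could look like, here is a minimal Python heuristic. The thresholds, word lists, and advice strings are all invented for illustration and are not taken from the paper:

```python
import re

MAX_WORDS = 20  # assumption: long sentences tend to translate worse
VAGUE_PRONOUNS = {"it", "this", "that", "they"}  # referents often get lost in MT

def input_advice(sentence: str) -> list[str]:
    """Return suggestions for making a sentence easier to translate,
    following the paper's finding that MT models perform best on
    simple, succinct, unambiguous text."""
    words = sentence.split()
    advice = []
    if len(words) > MAX_WORDS:
        advice.append("Consider splitting this into shorter sentences.")
    if any(w.lower().strip(".,;") in VAGUE_PRONOUNS for w in words):
        advice.append("Consider replacing vague pronouns with explicit nouns.")
    if re.search(r"[();:]", sentence):
        advice.append("Consider removing nested clauses and parentheticals.")
    return advice

for tip in input_advice(
    "After the defeat, many professors, who by that time constituted the "
    "majority of the faculty, considered it their duty to protect the "
    "institutions of higher learning."
):
    print(tip)
```

A real system would presumably use a learned difficulty model rather than hand-written rules, but the interaction pattern—check the input, then advise the writer—is the same.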

Another direction is to have MT systems “identify errors and initiate repairs without assuming proficiency in the target language.” Given access to back-translation or a bilingual dictionary, users can take part in the translation process, editing the final translation to better fit their original sentiments and intentions. Finally, users will have a much better time with a machine that adjusts its level of formality and literalness according to the user’s needs. For example, someone might “prioritize accurate translation of domain-specific terms over fluency when using MT at a doctor’s office.” Back-translation in particular lends itself to a simple sketch, shown below.
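Assuming a hypothetical translate(text, source, target) helper standing in for any real MT backend (this is not the paper’s implementation), a round-trip comparison can flag suspect output even for users who cannot read the target language:

```python
import difflib

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a real MT call; plug in an actual backend here."""
    raise NotImplementedError

def round_trip_score(text: str, source: str = "en", target: str = "de") -> float:
    """Translate into the target language and back, then measure how
    similar the round trip is to the original. A low ratio hints that
    meaning was lost or distorted somewhere along the way."""
    forward = translate(text, source, target)
    back = translate(forward, target, source)
    return difflib.SequenceMatcher(None, text.lower(), back.lower()).ratio()

# Usage idea: flag any translation whose round-trip similarity falls below
# a chosen threshold (say 0.7) and invite the user to review or rephrase.
```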

While none of these directions is particularly groundbreaking or ingenious, they are realistic, human-centered factors for developers and researchers to consider as they design new models. They represent promising improvements that could help users get more out of the machine translation currently available to the public, and we hope to see them implemented in translation engines in the near future.

However, some skepticism remains; for example, how much freedom do translators have with source texts? It is not very feasible to assume that translators—or even the writers themselves—can easily alter the phrasing and word choice of the original document. Furthermore, there is always a limitation of knowledge: the average person is usually not privy to the inner workings of machine translation. These human-centered improvements are predicated on human ability and capacity, neither of which is always guaranteed. The authors sum this problem up with the question: “What affordances might help users to make informed judgments about when to rely on a machine translation and when to seek alternatives?”

The authors are hopeful, however, as they apply their findings to their project TranslatorBot. With interactive translation suggestions and messages, TranslatorBot drastically improves the user interface; the researchers hope that this, too, will lead to improvements in translation quality.

An example of an MT system mediating communication by providing extra support for users, for example, by suggesting simpler input text. Image Credits: Robertson et al., “Three Directions for the Design of Human-Centered Machine Translation”
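The figure describes exactly this kind of mediation loop: check the input, suggest a simpler phrasing, then translate. A minimal, self-contained sketch of that flow might look like the following, where translate() is again a hypothetical stand-in and the difficulty check is a crude placeholder; none of this is TranslatorBot’s actual code:

```python
def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a real MT call; plug in an actual backend here."""
    raise NotImplementedError

def looks_hard_to_translate(message: str) -> bool:
    """Crude stand-in for a real difficulty model: treat long or
    clause-heavy messages as risky."""
    return len(message.split()) > 20 or ";" in message

def mediate_message(message: str, source: str = "en", target: str = "es") -> str:
    """Suggest a simpler phrasing before translating, as in the figure."""
    if looks_hard_to_translate(message):
        print("This message may be hard to translate. Try a shorter version?")
        revised = input("Revised message (press Enter to keep it as-is): ")
        if revised.strip():
            message = revised
    return translate(message, source, target)
```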

To sum up, the researchers lay out three directions through which machine translation can offer more reliable translations to the average user:

  1. helping users craft better inputs
  2. helping users improve translated outputs, and
  3. expanding interactivity and adaptivity.  

It’s important to note that these directions are targeted not at the professional translator, but at people who use MT on their own terms. The three directions pertain to a translation paradigm in which the original writer is at once the translator and the editor; under these new suggestions, the writer crafts better inputs, improves translated outputs, and receives more nuanced translations.

What does this mean for professional translators? As intermediary agents, we’re always on the lookout for developments in AI that might put us out of a job, and these three directions sound suspiciously like something that will effectively take our role away from us. But for some reason, it’s hard to think of them as such. 

What the three directions aim to do—in reality—is to cultivate a writing and translation environment and foster a certain style and format of writing that will work better with translation engines. This means increased literacy in the art and function of translation, as well as more intimacy between humans and machine translation engines. A more integrated human-MT paradigm, if you will. 

Insofar as translators are concerned, we will be here until the day machine translation achieves complete human parity. Even with the best inputs from the writer, translation issues are bound to arise with modern translation engines, which means translators, with our specialized knowledge, will stick around to offer the best translations and editing we can give.

Until then, Sprok DTS is here to help you with your translation needs. We offer top-quality translations in 72 languages, fully utilizing the powers of machine translation and computer-assisted translation to ensure the most accurate, most nuanced translations available. With our top-notch private data handling policy, we make sure your translations are safe and usable for your everyday business needs. Ask for a quote today and start your journey with Sprok DTS. 


References
The Economist, “Finding a Voice,” Technology Quarterly (2017). https://www.economist.com/technology-quarterly/2017-05-01/language
Douglas Hofstadter, “The Shallowness of Google Translate,” The Atlantic (2018). https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/
Robertson et al., “Three Directions for the Design of Human-Centered Machine Translation” (2021). https://storage.googleapis.com/pub-tools-public-publication-data/pdf/1bb66ff36a5eb4650a76a3d05ea57e09c0203366.pdf