Hamilton auf Deutsch

The German cast of Hamilton

Translating always means approximating the meaning of the original text. That is especially the case for literary texts, where it’s not just a matter of finding equivalent meanings but also of conveying the feel and style of the original. When the texts are poetry, the process becomes even more complicated, as decisions must be made about rhyme, alliteration, meter, rhythm, and the like. Then there is the cultural component. Literary texts are embedded in socio-historical contexts that may be familiar only to the intended readers of the original. Translators need to decide whether to simply carry over the cultural context as in the original or to add explanations so that it is understandable to the readers of the translation. Then there are humor and wordplay. Music complicates things further, as the translated language must fit into the time constraints built into song lyrics.

Translating the celebrated musical Hamilton into another language involves all those complications and more. The story told in the musical is deeply enmeshed in the history and mythology of the founding of the United States. The story and its central figures are well known to anyone who has attended school in the US. For those without that background, the dynamics of the exchanges among characters, drawn from historical accounts, will be unfamiliar. Then there is the kind of music in Hamilton, namely hip-hop or rap. While that style of music originated in the US, it has spread across the world, so the musical form will likely be familiar, at least to young theatergoers. In the US, however, the cultural context of rap is tied closely to African-Americans, and that is reflected in the musical, at least in its original stage version and the movie, in which the main characters are Black.

So, translating Hamilton into German was no easy task, as a recent piece in the New York Times pointed out: “Hamilton is a mouthful, even in English. Forty-seven songs; more than 20,000 words; fast-paced lyrics, abundant wordplay, complex rhyming patterns, plus allusions not only to hip-hop and musical theater but also to arcane aspects of early American history.” It wasn’t just the challenge of keeping the musical character as close as possible to the original; there was also the linguistic problem of going from English to German, which the piece describes as “a language characterized by multisyllabic compound nouns and sentences that often end with verbs”. Translations from English to German often end up considerably longer than the original, which was going to be a problem here. So the translators had to be flexible and creative, finding ways to keep the word count down while maintaining the essentials of the content and preserving as many of the artistic effects in the lyrics as possible. The latter included the internal rhyming that is characteristic of rapping and is used extensively in Hamilton. The translators were able to work with the musical’s creator, Lin-Manuel Miranda, who monitored the translations to make sure the German lyrics fit the spirit of the original. The New York Times article and a piece from National Public Radio provide examples of the German wording. The clip below shows the results.

The German Hamilton started playing this month in Hamburg; it will be interesting to see how German theatergoers react. One notable aspect of the German production is that the makeup of the cast mirrors that of the New York production, with actors of color playing the main roles. That this was possible in Germany demonstrates the perhaps surprising diversity of contemporary Germany, where waves of immigration have significantly changed the homogeneity of the population. In fact, many in the cast are immigrants or the children of immigrants. Not all are German citizens, but they all speak fluent German, mostly as their first language. For my money, the songs sound very good (and natural!) in German.

Advanced tech: No need to learn a language?

From Ciklopea (Juraj Močilac)

I’m currently in Belgium, attending a conference on language learning and technology (EuroCALL 2019). Many topics are presented and discussed at such conferences, but one that came up repeatedly at this one is the use of smart digital services and devices that incorporate voice recognition and voice synthesis in multiple languages. Those include Apple’s Siri, Amazon’s Alexa, and Google Assistant, available on mobile phones and watches, dedicated devices, and smart speakers. In addition, machine translation such as Google Translate is constantly improving as artificial intelligence advances (especially through neural networks) and as large collections of language data (corpora) are compiled and tagged. There are also dedicated translation devices being marketed, such as Pocketalk and Illi.

I presented a paper on this topic at a conference earlier this summer in Taiwan (PPTell 2019), where I summarized current developments in this way:

All these projects and devices have been continuously expanding the number of languages supported, as well as the language variations included, such as Australian English alongside British and North American varieties. Amazon has begun an intriguing project to add additional languages to Alexa: an Alexa skill, Cleo, uses crowdsourcing, inviting users to contribute data to support the incorporation of additional languages. Speech recognition and synthesis continue to show significant advances from year to year. Synthesized voices, in particular, have improved tremendously, sounding much less robotic. Google, for example, has rolled out Duplex, a service now available on both Android and iOS devices that allows users to ask Google Assistant to book a dinner reservation at a restaurant. The user specifies the restaurant, the date and time, and the size of the party; Google Assistant then places a call to the restaurant and engages in an interaction with the reservation desk. Google has released audio recordings of such calls, in which the artificial voice sounds remarkably human.

Advances in natural language processing (NLP) will impact all digital language services: making machine translations more reliable, improving the accuracy of speech recognition, enhancing the quality of speech synthesis, and, finally, rendering conversational abilities more human-like. At the same time, advances in chip design, miniaturization, and batteries will allow sophisticated language services to be made available on mobile, wearable, and implantable devices. We are already seeing devices on the market that move in this direction, including Google’s Pixel earbuds, which recognize and translate the user’s speech into a target language and translate the partner’s speech back into the user’s language.
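To make that last pipeline concrete, here is a minimal sketch of the round trip such earbuds perform: recognize the speaker’s audio, translate the text, and synthesize audio in the listener’s language. The function names and the toy dictionary below are illustrative stand-ins of my own, not any vendor’s actual API.

```python
# Illustrative stand-ins for the three pipeline stages; none of these
# functions corresponds to a real vendor API.

def recognize_speech(audio: bytes, language: str) -> str:
    """Stand-in for speech recognition (ASR): audio in, text out."""
    return audio.decode("utf-8")  # pretend the audio is already text

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for machine translation, backed by a toy dictionary."""
    toy_dictionary = {
        ("en", "de"): {"hello": "hallo"},
        ("de", "en"): {"hallo": "hello"},
    }
    return toy_dictionary.get((source, target), {}).get(text.lower(), text)

def synthesize_speech(text: str, language: str) -> bytes:
    """Stand-in for speech synthesis (TTS): text in, audio out."""
    return text.encode("utf-8")

def relay(audio: bytes, speaker_lang: str, listener_lang: str) -> bytes:
    """One leg of the earbud round trip: hear, translate, speak."""
    heard = recognize_speech(audio, speaker_lang)
    translated = translate(heard, speaker_lang, listener_lang)
    return synthesize_speech(translated, listener_lang)

# The user speaks English and the partner hears German, and vice versa.
assert relay(b"hello", "en", "de") == b"hallo"
assert relay(b"hallo", "de", "en") == b"hello"
```

Even in this toy form, the design makes the limits discussed below easy to see: each stage can introduce errors, and the whole exchange is mediated by a device standing between the two speakers.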

Conference participant Mark Pegrum kindly summarized some of the other information presented in his blog.

The question I addressed at the conference was, given this scenario, will there still be a need for language learning in the future? Can’t we all just use smart devices instead? My conclusion was no, we can’t:

Even as language assistants become more sophisticated and capable, few would argue that they represent a satisfactory communication scenario. Holding a phone or device, or using earbuds, creates an awkward barrier, an electronic intermediary. That might work satisfactorily for quick information-seeking questions, but it is hardly inviting for an extended conversation, even assuming the battery held out long enough. Furthermore, in order to support socially and emotionally fulfilling conversations with a fellow human, a device would need capabilities far beyond transactional language situations. Real language use is not primarily transactional but social, more about building relationships than achieving a goal. Although language consists of repeating patterns, the direction in which a conversation evolves is infinitely variable. Language support therefore needs to be very robust to handle all the twists and turns of conversational exchanges. Real language use is varied, colorful, and creative, and therefore difficult to anticipate. Conversations also don’t develop logically; they progress by stops and starts, including pauses and silences. Verbal language is richly supplemented semantically by paralanguage, facial expressions, and body language, a reality that makes NLP all the more difficult. Humans can hear irony and sarcasm in a speaker’s tone of voice and interpret the message accordingly; we understand the clues that nonverbals and the context of the conversation provide for interpreting meaning.

It remains to be seen how technology will evolve to offer language support and instant translation, but despite these advances it is hard to imagine a future in which learning a second language is not needed, if only for the insights it provides into other cultures. Smart technology will continue to improve, offering growing convenience and efficiency in language services, but it is not likely to replace the human process of person-to-person communication and the essentially social nature of language learning.

Translator? No

This past week, a number of world leaders spoke at the United Nations, including the President of Iran, Mahmoud Ahmadinejad. In reporting his speech, NPR cited parts of it, indicating that he was “speaking through a translator”. Actually, NPR was the one using and needing a translator; Ahmadinejad was perfectly comfortable speaking Persian. Yet the reporting (not only at NPR), through that frequently used wording, implied that his speech could only be understood as rendered “through a translator”. Is this a picky point that doesn’t actually matter? I don’t think so: it reinforces the unfortunate American sense that to be understood and taken seriously, people need to be using English. Couldn’t we report Ahmadinejad’s hateful speech as “translated here from the original Persian” or something else along those lines? It would be great to show that it’s possible to use other languages to communicate.