For anyone familiar with the ethnic and linguistic make-up of Florida, it will come as no surprise that in the Miami area (and elsewhere in the state), everyday language use is characterized by a mix of English and Spanish. “Spanglish” is a term used to describe the influence of English on spoken Spanish. Now a linguist at Florida International University (Phillip Carter) has published findings indicating that a new variety of English, heavily influenced by Spanish, is emerging in the Miami area. An article on the research in Scientific American provides some examples:
“We got down from the car and went inside.”
“I made the line to pay for groceries.”
“He made a party to celebrate his son’s birthday.”
For most speakers of North American English, those sentences sound strange. In fact, as Carter points out, these are “literal lexical calques,” i.e., direct, word-for-word translations from Spanish.
“Get down from the car” instead of “get out of the car” is based on the Spanish phrase “bajar del carro,” which translates, for speakers outside of Miami, as “get out of the car.” But “bajar” means “to get down,” so it makes sense that many Miamians think of “exiting” a car in terms of “getting down” and not “getting out.” Locals often say “married with,” as in “Alex got married with José,” based on the Spanish “casarse con” – literally translated as “married with.” They’ll also say “make a party,” a literal translation of the Spanish “hacer una fiesta.”
Carter provides an additional example based on phonetic transfer: “‘Thanks God,’ a type of loan translation from ‘gracias a Dios,’ is common in Miami. In this case, speakers analogize the ‘s’ sound at the end of ‘gracias’ and apply it to the English form.”
A YouTube video provides further examples from Carter’s research.
I’m currently in Belgium, attending a conference on language learning and technology (EuroCALL 2019). Many topics are presented and discussed at such conferences, but one which came up repeatedly at this one is the use of smart digital services and devices that incorporate voice recognition and voice synthesis, available in multiple languages. Those include Apple’s Siri, Amazon’s Alexa, and Google Assistant, available on mobile phones/watches, dedicated devices, and smart speakers. In addition, machine translation such as Google Translate is constantly improving, as artificial intelligence advances (especially through neural networks) and as large collections of language data (corpora) are compiled and tagged. There are also dedicated translation devices being marketed, such as Pocketalk and Illi.
I presented a paper on this topic at a previous conference this summer in Taiwan (PPTell 2019). I summarized current developments in this way:
All these projects and devices have been continuously expanding the number of languages supported, including regional variations such as Australian English alongside British and North American varieties. Amazon has begun an intriguing project to add further languages to Alexa: an Alexa skill, Cleo, uses crowdsourcing, inviting users to contribute data to support the incorporation of additional languages. Speech recognition and synthesis continue to show significant advances from year to year. Synthesized voices, in particular, have improved tremendously, sounding much less robotic. Google, for example, has rolled out Duplex, a service now available on both Android and iOS devices that allows users to ask Google Assistant to book a dinner reservation at a restaurant. The user specifies the restaurant, the date and time, and the size of the party; Google Assistant then places a call to the restaurant and interacts with the reservation desk. Google has released audio recordings of such calls, in which the artificial voice sounds remarkably human.
Advances in natural language processing (NLP) will impact all digital language services – making machine translation more reliable, improving the accuracy of speech recognition, enhancing the quality of speech synthesis, and, finally, rendering conversational abilities more human-like. At the same time, advances in chip design, miniaturization, and batteries will allow sophisticated language services to be made available on mobile, wearable, and implantable devices. We are already seeing devices on the market that move in this direction, including Google’s Pixel Buds, which recognize and translate the user’s speech into a target language and translate the partner’s speech back into the user’s language.
The question I addressed at the conference was, given this scenario, will there still be a need for language learning in the future? Can’t we all just use smart devices instead? My conclusion was that we can’t:
Even as language assistants become more sophisticated and capable, few would argue that they represent a satisfactory communication scenario. Holding a phone or device, or using earbuds, creates an awkward barrier, an electronic intermediary. That might work satisfactorily for quick information-seeking questions, but it is hardly inviting for an extended conversation, assuming the battery even held out that long. Furthermore, in order to support socially and emotionally fulfilling conversations with a fellow human, a device would need to go far beyond transactional language situations. Real language use is not primarily transactional but social, more about building relationships than achieving a goal. Although language consists of repeating patterns, the direction in which a conversation evolves is infinitely variable. Language support therefore needs to be very robust, accommodating all the twists and turns of conversational exchanges. Real language use is varied, colorful, and creative, and therefore difficult to anticipate. Conversations also don’t develop logically — they progress by stops and starts, including pauses and silences. The verbal language is richly supplemented semantically by paralanguage, facial expressions, and body language. This reality makes NLP all the more difficult. Humans can hear irony and sarcasm in a tone of voice and interpret messages accordingly. We understand the clues that nonverbals and the context of the conversation provide for interpreting meaning.
It remains to be seen how technology will evolve to offer language support and instant translation, but despite these advances it is hard to imagine a future in which learning a second language is not needed, if only for the insights it provides into other cultures. Smart technology will continue to improve, offering growing convenience and efficiency in language services, but it is not likely to replace the human process of person-to-person communication or the essentially social nature of language learning.
As a further example of the growth in support for the STEM fields (science, technology, engineering, math) in US education, and the concurrent drop in support for the humanities, the state legislature in my state of Virginia this year changed the requirements for the academic high school diploma to allow computer science to substitute for foreign language study. This aligns with the perception that the employment opportunities of the future will call for skills in coding, not in speaking another language. In the US, the trend toward emphasizing vocational training in education is often linked to the economic downturn following the 2008 financial collapse. The result has been a decade of gradual decline in enrollment in foreign language classes at high schools and universities. As a consequence, fewer US students are learning a second language (at least formally, in school). This has unfortunate consequences both individually and society-wide. On a personal level, students lose the opportunity to develop a new identity by experiencing the world through the different cultural lens that a second language provides. That has social ramifications as well: monolinguals tend to cling to the one culture they know and are therefore less receptive to other ways of life.
These thoughts were prompted by a recent piece by linguistic anthropologist Daniel Everett, “Learning another language should be compulsory in every school”. Everett is Dean of Arts and Sciences at Bentley University in Massachusetts. He is best known for his work as a field linguist in the Amazon, learning Pirahã, which he described in the wonderful book Don’t Sleep, There Are Snakes: Life and Language in the Amazonian Jungle (2008). He’s also famous for his spat with the most famous linguist on earth, Noam Chomsky, over the concept of universal grammar, aspects of which Everett found to be contradicted by characteristics of the Pirahã language.
In his essay, Everett talks about his own language experiences growing up in California along the border with Mexico, learning Spanish so well that he joined a Mexican rock ’n’ roll band based in Tijuana:
Learning Spanish changed my life. It taught me more about English, it gave me friendships and connections and respect I never could have otherwise received. Just as learning Portuguese, Pirahã and smatterings of other Amazonian languages continued to transform me during my entire life. Now, after spending most of my adult life in higher education, researching languages, cultures and cognition, I have become more convinced than ever that nothing teaches us about the world and how to think more effectively better than learning new languages. That is why I advocate for fluency in foreign languages. But for this to happen, language-learning needs to make a comeback as a requirement of both primary and secondary education in the United States. Learning another language benefits each learner in at least three ways – pragmatically, neurologically and culturally.
He describes the practical and potential employment advantages of knowing another language. Then he provides an example of how learning a language provides insight into how other cultures see the world:
Beyond the pragmatic benefits to learning languages are humanistic, cultural benefits. It is precisely because not all languages are the same that learning them can expand our understanding of the world. Every language has evolved in a specific geocultural niche, and thus has different ways of talking and codifying the world. But this is precisely why learning them is so beneficial. Take, for example:
John borrowed $10,000 from the bank of Mom and Dad to pay down his college loan.
The Pirahã language of the Amazon has no words for numbers or for borrow, dollars, bank, mom, dad, college or loan. So this sentence cannot be translated into Pirahã (it could be if their culture were to change and they learned about the modern economy). On the other hand, consider a common Pirahã phrase:
This phrase means ‘go upriver’ in Pirahã. Innocuous enough, until you realise that Pirahãs use this phrase instead of the less precise ‘turn left’ or ‘turn right’ (which depends on where the speaker and hearer are facing) – this is uttered from deep within the jungle or at the river’s edge – all Pirahãs carry a mental map of tens of thousands of acres of jungle in their heads, and thus know where all points of reference are, whether a river or a specific region of the jungle. They use absolute directions, not relative directions as, say, the average American does when he says ‘turn left’ (vs the absolute direction, ‘turn north’). So to use Pirahã phrases intelligibly requires learning about their local geography.
Everett points to the broadening horizons language learning brings with it:
Language-learning induces reflection both on how we ourselves think and communicate, and how others think. Thus it teaches culture implicitly. Languages should be at the very heart of our educational systems. Learning languages disables our easy and common habit of glossing over differences and failing to understand others and ourselves. You cannot achieve fluency in another language without learning its speakers’ perspectives on the world, and thereby enriching your own conceptual arsenal.
Everett ends with a reference to a video presentation of the method he uses to learn indigenous languages without a common language. In the video, Everett uses Pirahã to elicit language lessons from a speaker of Hmong. After the demonstration, Everett describes some of the fascinating aspects of Pirahã, including its variety of communication forms: humming, whistle speech, musical speech, and yelling. Speech acts are radically different from Western norms, with no greeting or leave-taking rituals. Men and women speaking Pirahã use different consonants.
Speaking Pirahã involves a very different human experience than speaking English. This is true of all languages: ways of life are deeply embedded in how languages work. Learning to program computers is a highly valuable skill, especially in today’s world, but it shouldn’t substitute for the invaluable human experience of learning a second language.
Yesterday I attended the official swearing-in of a new judge on the Virginia Supreme Court, Steve McCullough. The process was an interesting example of a powerful “speech act” (an utterance that causes something to happen): the Chief Justice of the Supreme Court transforms an ordinary citizen into a fellow Supreme Court Justice through a few simple words spoken as an oath of office. The magical transformation only happens, however, if the oath is spoken correctly. In 2009, President Obama had to retake the oath of office later because Chief Justice John Roberts did not prompt him with the exact language, so that Obama did not say the line correctly. It’s not likely in that particular case that anyone would have questioned whether Obama was really the President of the United States (although, given the birther controversy, that might have happened had he not retaken the oath). In the case of marriage vows, a similar transformational speech act, the absence of the concluding statement, “I now pronounce you husband and wife,” is often played up in fiction and film as meaning that the couple is in fact not legally wed (e.g., The Graduate, The Princess Bride).
The oath of office was actually made more interesting by the fact that the new Justice held his two-year-old son, Andrew, as the oath was being administered, while Andrew’s mother held the Bible on which Steve swore the oath (and tried to quiet her son down). Speech acts may be accompanied by nonverbal signals such as the presence of an official document of some sort or the wearing of prescribed clothing, as, in this case, the black robes, which visually transformed Steve from citizen into Justice (the “robing” was also done ceremoniously). Part of the official act was also that it took place in a prescribed location, namely the chambers of the Virginia Supreme Court, a place set apart from normal everyday life (no cell phones allowed, sitting and standing at prescribed times, always beginning one’s address to the Justices with the formula “may it please the court”). In linguistics, we would say that the “felicity conditions” (in this instance, all the outer trappings) for this “commissive” speech act (it commits Steve to fulfilling the oath of office) were fulfilled.
Into this august environment came Andrew, who chimed in merrily while his father took the oath. It struck me at the time that the kind of informality represented by a new Supreme Court Justice holding a babbling two-year-old during such an important official government ceremony, in the presence not only of the other Justices but also of the Governor of Virginia, members of the Virginia legislature, and other high officials, was something that might not be acceptable in all cultures. In the US, we like to see our government officials as being no different from ourselves, often choosing a political candidate on the basis that he’s “just like us” (an open question how well that works out). Being a doting father is part of that image projection, and having children close by on such occasions is both accepted and valued. When you’re the US President on an official visit to Vietnam, having noodles and cold beer at a neighborhood joint in Hanoi is just the kind of “ordinary Joe” behavior we like to see.
I don’t usually hobnob with the Governor and the Supreme Court, but Steve is a former VCU student whom I hired as a lab assistant back in the early 1990s, when I ran the Language Lab. After graduating from law school, he worked for a while at my wife’s law firm, so we’ve known Steve and his family for a long time. It’s great to see a former student have this degree of success. I asked Steve after the ceremony whether he expects being a Supreme Court Justice to be an interesting line of work. He answered that engrossing cases are likely to come up, but that a lot of it will be routine and less than exciting – just like all jobs.
One of the trickier issues that may arise in encounters across cultures is the difference in attitudes toward particular smells, especially body odor. Cultural associations with smell vary considerably and can cause friction in cross-cultural encounters. Another aspect of smell that is interesting from an intercultural perspective is its connection to language, in particular the extent to which languages have vocabularies for describing and distinguishing different smells. An article in the current issue of The Economist describes a presentation recently given by a well-known researcher in the field, Asifa Majid of Radboud University in the Netherlands. She studies speakers of Maniq and Jahai (languages in the Austroasiatic family) who live on the Malay Peninsula. She and her colleagues asked speakers to identify a variety of smells and discovered that they could do so much more quickly and with greater confidence than a comparison group of Westerners. They also had many distinct terms for the smells, which, in contrast to Western languages, were not based on a smell’s source (e.g., lemony) or its properties (stinky) but were abstract terms used only for that smell. The article concludes:
This finding challenges the long-standing idea that something about the way human brains are wired limits their ability to put words to smells swiftly. Dr Majid says that in the West, “nothing should have a smell unless you put it there,” and that there are many taboos surrounding talking about odours. The smell-rich environment, scent-centric cultural practices and evident lack of taboos enjoyed by the Jahai and Maniq are, she believes, correlated with their recall and use of smell-related words. It may simply be that having more smells around, and talking about them, ensures that the varieties of smell all have names.
Anyone familiar with the Sapir-Whorf hypothesis in linguistic anthropology will find this familiar: the idea that cultures develop an extensive vocabulary for objects and phenomena that are important in their way of life. Survival and success for hunter-gatherers in the bush is likely much more tied to smell than is the case for stockbrokers on Wall Street. In an interview, Professor Majid describes the extensively organized structure the Jahai and Maniq have for smell terms, akin to color terms in other languages.
Having just completed a book chapter on culture, language learning, and technology, I was intrigued by this past week’s episode of On the Media, which replayed a story I had missed from the spring on how using a second language can affect decision making and moral choices. The story was based on an article in PLoS ONE entitled “Your Morals Depend on Language,” by researchers from Spain and the US. The authors set out to examine the widespread assumption that moral judgments about right and wrong are the “result of deep, thoughtful principles and should therefore be consistent and unaffected by irrelevant aspects of a moral dilemma. For instance, as long as one understands a moral dilemma, its resolution should not depend on whether it is presented in a native language or in a foreign language.” What they found was quite different: across several experiments, using a second language resulted in more dispassionate, utilitarian decision making. They used, for example, the well-known “trolley problem”: if you could save five people by sacrificing one (throwing a fat man off a bridge to stop a runaway trolley), would you do it? Typically only 18% of those presented with the dilemma sacrifice the man, but among those considering the problem in a foreign language, 44% were ready to throw the man off the bridge. Through this and other experiments, the researchers concluded that using a second language provides a helpful psychological and emotional distance from a difficult decision such as a moral dilemma.
An article in Scientific American reporting on the PLoS ONE piece speculates that there may be cultural ramifications to the results: “Using a native language plausibly induces the feeling that one is reasoning about in-group members. Conversely, a foreign language could signal that the scenario is relevant to strangers, foreigners, or out-group members.” In fact, studies have shown different responses when it is implied that the people in danger are of a different ethnicity. Another question the study raises concerns bilinguals: do they decide differently depending on the language used? This question was discussed in the On the Media interview; the researchers found that for true bilinguals there was no distinction, the assumption being that the emotional resonance of the trolley dilemma is the same in both languages.
The idea that different languages can call up different emotional responses is consistent with what second language teachers experience in the classroom. Learning and using swear words in a foreign language, for example, can be tricky from a cultural perspective. The visceral reaction to certain naughty words does not normally carry over to a new language, making learners likely to use them inappropriately. Linguists know that playing with language, experimenting with how to use new vocabulary or constructing nonsense sentences, can be quite helpful in acquiring a second language. Another finding of the study was that users of a foreign language showed an increased willingness to take risks. This can in itself be helpful in language learning, e.g., trying out language combinations not used before. The study showed, however, that it can also have a less positive result: an inclination to gamble.
One of the more interesting experiences I have had during my time here in India has been the reality of its multilingual environment. When Americans think of India and languages (which they rarely do), they likely assume that English is all you need to communicate effectively in the country. In fact, I have encountered that view when discussing with colleagues at VCU the decision to start teaching Hindi as part of our language program. The reality on the ground is quite different. I had a conversation yesterday with a colleague from the Indian Institute of Technology (Kharagpur), where I’m giving a series of lectures this week. She is from the north of India and grew up with Urdu, Hindi, and English (at a school run by an Irish nun). Now living in Kharagpur, she told me she had to learn Bengali in order to live here. In fact, she said that in her spare time she is also learning to read Bengali (which uses a different script from Hindi). She described speaking Bengali as feeling like speaking Sanskrit with your mouth full. Bengali is widely spoken in India and Bangladesh; in fact, it is the seventh most widely spoken language in the world (ahead of Russian, Japanese, German, and French). When I was in Ahmedabad last week, I experienced a similar phenomenon: most people were speaking neither English nor Hindi, India’s two lingua francas, but rather Gujarati, the mother tongue of Mahatma Gandhi. Both Bengali and Gujarati are Indo-European languages (Indo-Aryan branch), but other language families are represented as well among the 22 official (indigenous) languages of India: Dravidian, Austroasiatic, Tibeto-Burman, and a few minor language families and isolates.
In my experience here, English does in fact let you navigate everyday life, but it doesn’t give you full access to Indian culture. That comes only through the local language and, to some extent, through Hindi. It’s also not the case that everyone in India actually speaks English. Even those who have learned it in school may not be proficient in spoken English, including Indians in service industries that cater to tourists. My difficulties undoubtedly derive not just from a lack of proficiency on the part of the Indians, but even more from my lack of knowledge of the Indian cultural context. Not knowing how many things work in India, or what the expectations for holding conversations are, means that we are often approaching a topic from widely different perspectives. Of course, sometimes it comes down to differences in the meanings of common words in American and Indian English. When I arrived at the Ahmedabad airport, there was no taxi stand in front of the airport. After fruitlessly asking a number of bystanders (including a policeman) about taxis, I asked a young man running a small drink kiosk. He asked if I wanted him to get me an “auto,” which I was happy to accept. Forty-five minutes later (not the ten minutes he had promised), what pulled up was in fact not a car but a motorized rickshaw: in Indian English, an “autorickshaw,” or “auto” for short.
I am in my last day in Johannesburg, South Africa, having attended a linguistics conference here for the past week. It’s been a fascinating experience, both attending presentations at the conference and experiencing South Africa for the first time. The language situation in South Africa is complex. There are 11 official languages (versus just two in the Apartheid days: English and Afrikaans, which is derived from Dutch). The fascinating fact for me is that virtually everyone here speaks English, yet it is rarely anyone’s native tongue. More likely it is a second language (for white Afrikaans speakers) or a second or third language (for black South Africans). Many South Africans speak more than two languages, especially Blacks, who often speak their home language (such as Xhosa or Sotho) plus English, and possibly also one or both of the other two languages that serve as lingua francas here, Afrikaans and Zulu. In one presentation today it was mentioned that Black adults do not necessarily view their native language as their best language – that might be English or Afrikaans, languages which are vital for success in higher education and in the professional world.
There was a presentation today by a Swiss linguist, who reported on language issues in health care in Switzerland. She started off by mentioning that her country also has multiple official languages (German, French, Italian, Romansh). At the end of her talk there was a question from a South African in the audience, asking why, if Switzerland were multilingual like South Africa, there was any need for interpreters or other assistance in health care. The questioner was assuming that the situation was analogous to multilingual South Africa, where it’s the norm to speak multiple languages and, if possible, to learn to speak (though not necessarily to read or write) all the languages used widely in your region. The Swiss linguist was somewhat taken aback by the question and responded that while there are multiple official languages, they are not all spoken throughout the country but are limited to particular geographic regions. Many Swiss within those regions speak only their native tongue (and often school English).
Maybe South Africans learn more languages because they’re more open and approachable. One South African linguist at the conference told an anecdote about a woman in a township who was asked why she was learning an additional African language (her fourth or fifth). She responded that it would be rude not to be able to speak to her new neighbor, who spoke that language.
I’m just returning from the annual CALICO (Computer-Assisted Language Instruction Consortium) conference, where I gave a presentation on the creation and use of e-books in language learning. The most interesting presentation I attended was a study on the use of Rosetta Stone in learning Spanish at the elementary level. The session, “Online and Massive, but NOT the Future of Language Learning: Further Evidence in the Case Against Rosetta Stone,” by Gillian Lord of the University of Florida, presented findings from a study comparing the results of three different groups of beginning Spanish students. One group used Rosetta Stone exclusively, the second used the software but also had some class meetings, and the third was a traditional face-to-face class. Lord has not yet completed the analysis of the data, and the sample size was small, but the preliminary findings are revealing. In some areas, student outcomes were comparable, particularly vocabulary acquisition. Where the outcomes differed significantly was in the area most touted in Rosetta Stone’s massive marketing: the ability to conduct a conversation in the target language. Lord showed transcripts of conversations in Spanish she had with students from each group several times during the semester. They showed that the students in the Rosetta Stone groups had acquired a good amount of vocabulary and had gained some proficiency in listening comprehension, but had great difficulty coming up with anything to say in Spanish, often falling back on English. They were particularly weak in strategic competence in Spanish, that is, the ability to express a lack of understanding, to ask for assistance, or to find work-arounds for missing vocabulary or structures.
Using language requires the ability to go beyond learned words and phrases, to negotiate meaning with your conversation partner by asking for help or restating in another way what you meant to say. Rosetta Stone’s software does not provide practice in that area.
Lord’s study did not address what I find to be an additional shortcoming of Rosetta Stone: the lack of cultural context. I experienced this myself several years ago when I was taking courses in both intermediate Russian and intermediate Chinese. As part of the Russian course, we were assigned to use Rosetta Stone in our language lab. I was curious how the program differed from language to language, so I also used the Chinese version, at the same proficiency level. I was surprised to find that the images, situations, sentences, and even vocabulary were exactly the same in the two languages. Language in Rosetta Stone is decontextualized, disembodied from the culture it represents. This not only provides little insight into the target culture; it also suggests that language can be divorced from culture and that learning a language is a simple process of substituting words and phrases in the target language for those in your mother tongue. No need to adjust culturally. This may, in fact, be the key to Rosetta Stone’s popularity: it takes away the messy complexity of language learning. The linear approach, along with the feel-good positive feedback the program provides as you progress from level to level, gives users the impression that they are indeed becoming proficient in the language. In that sense, the Rosetta Stone ad I just read in my in-flight magazine is right on target: “The success you feel when you learn the Rosetta Stone way can change the way you feel about yourself.” That’s a much more accurate statement about the program than the company’s tag line, “Language learning that works.” Anyone who has struggled to become proficient in a second language knows that language learning is not the simple, linear process suggested by Rosetta Stone’s approach and marketing, but rather a lurching experience, with lots of frustration punctuated by occasional triumphs.
The image that comes to mind is not a straight line, but at best a spiral, in which we go round and round, re-learning and perfecting material already encountered.
Francesca Marciano, the author of ‘The Other Language’
Interesting piece this weekend in the NY Times about authors writing in adopted languages, such as Francesca Marciano, the author of a collection of short stories, The Other Language. It’s hard enough to be a writer in your native tongue, but imagine writing publishable stories in a second language. Creative writing is quite different from conveying information – that’s something that can be done even if grammar is faulty and word choice seems strange. To write well necessitates having a “feel” for the language, including the use of idiomatic expressions and of valid collocations – those chunks of language that go together. Native speakers have internalized this kind of pragmatic language use through extensive exposure to the language over time. It’s much harder for non-native speakers to capture believably the tone and nuance of a language, including hitting the right registers – for example, what level of informality or slang to use. It’s notoriously difficult, for instance, to use profanity correctly in a foreign language. The popular view of language learning is that it involves learning new words and new rules, but anyone who has tried to function in another language/culture has experienced the reality that such knowledge is necessary but insufficient. Speaking rather than writing provides some help in the form of non-verbals – facial expressions, tone, etc. – but in writing it’s just you and the blank page.
There are famous examples of writers of English who did not grow up speaking the language, such as Joseph Conrad or Vladimir Nabokov (although he learned English and other languages at a young age). What’s remarkable about those two writers is that they are known as great stylists, writing in a learned language. In fact, non-native speakers (like non-native language teachers) have a critical perspective on the language that may offer new insights. The Times article quotes Chinese writer Yiyun Li, who has just published her third novel in English: “If you are a native speaker, things are automatic…For me, every time I say or write something, I have to go back and ask, ‘Is this what I want to say?’ ”. Non-native writers may feel freer to play and experiment with the new language, more so than when writing in their native language.
An interview this week on NPR’s Fresh Air brought back to light the case of “Clark Rockefeller”, the con man who was in reality Christian Gerhartsreiter from Germany, not, as he claimed, a member of the Rockefeller clan. That claim came after previous identities adopted over the years, including a British baronet, a cardiologist in Las Vegas, a Hollywood producer, and a bond broker in New York. The interview was with author Walter Kirn, who was a friend of “Rockefeller” for years. Listening to the interview, it seems hardly credible that Kirn would accept the wild stories he was being told: that his friend was a “freelance central banker”, that he had never eaten in a restaurant, that he had attended Yale when he was 14, that he had a master key to Rockefeller Center, that on successive weekends he had as house guests Britney Spears and then Angela Merkel. It turns out that Gerhartsreiter was not only a con man, but he had brutally murdered a man in California in the 1980s, a crime for which he was convicted last year and sentenced to prison for 27 years to life.
How was the massive deception possible? Gerhartsreiter was skilled in reading his conversation partner and accurately gauging what stories would be believable. His verbal skills were impressive, given the fact that he was not a native speaker of English. The family with whom he originally stayed on coming to the U.S. as an exchange student (an exchange he fraudulently arranged himself) when he was 18 reported that he experimented with different accents. Eventually he settled, according to Kirn, on an accent that sounded like Katharine Hepburn’s cousin. But probably even more important than his language ability were Gerhartsreiter’s non-verbal skills. His bearing, dress, and general appearance seemed to confirm his identity. He acted out his roles with confidence and great self-assurance. His success calls to mind the TED talk by Amy Cuddy in which she discusses the importance of body language and the possibility of adjusting your posture and appearance to convince yourself and others of the identity you want to convey.
Gerhartsreiter’s case also brings to mind for me a concept in modern linguistics: performativity, originally conceived by J.L. Austin. The idea of “performance” is that communicative competence is not, as often conceived, a matter of mastering the whole grammar system of a language, but rather of being able to strategically choose what is needed in particular contexts. In this way, according to Suresh Canagarajah, performance “gives primacy to acts of creative and strategic communication motivated by the enigmatic purposes of complex individuals. It draws attention to playfulness, fabrication, strategic negotiations, situationally motivated shifts, multiple identities” (from the foreword to Clemente/Higgins, Performing English with a post-colonial accent: Ethnographic narratives from Mexico). Christian Gerhartsreiter was clearly a master of performance in a number of senses.
That title is used in a Washington Post article this week about a recent article in the field of historical linguistics published in the Proceedings of the National Academy of Sciences, which claims to trace a group of words back through seven different language families to a common ancestor language spoken some 15,000 years ago. Historical linguists use cognates (words with similar sounds and meanings) to trace language evolution, comparing cognates that appear to be related across a variety of languages in order to reconstruct common ancestral languages. The best-known example is Proto-Indo-European, an ancestor of a number of language families in contemporary Europe and India.
Mark Pagel (University of Reading) and collaborators built a sophisticated statistical model to try to identify specific words across language families that are similar enough, they believe, to have had a common origin. They found 23 “ultraconserved” words. Some one might expect, including hand, give, I, thou, old, and mother, but some may be surprising, such as spit, worm, and bark (of a tree). The original Washington Post article imagines what might have been said with these words:
You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!
It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying. That’s because all of the nouns, verbs, adjectives and adverbs in the four sentences are words that have descended largely unchanged from a language that died out as the glaciers retreated at the end of the last Ice Age.
As intriguing as this theory is, many linguists are skeptical, as nicely explained by Sally Thomason on the Language Log. So maybe don’t count on being able to communicate with a caveman if you encounter one.
When Pope Benedict announced this week that he was stepping down, he did so in Latin. The small group of reporters listening to the Pope on a live feed at the Vatican scrambled to understand what they quickly saw was a significant announcement. Italian reporter Giovanna Chirri confirmed for her colleagues (and the world, through a tweet) that the Pope had in fact announced he was resigning. My Latin teacher in high school would have jumped at the opportunity to point out – “You see how useful learning Latin can be! You could be the first to announce to the world something that hasn’t happened in 600 years!” The reporters’ scramble reminded me of the press conference where Watergate conspirator G. Gordon Liddy suddenly quoted Nietzsche in German, “Was mich nicht tötet, macht mich stärker” (What doesn’t kill me makes me stronger). Reporters back then were also struggling to understand.
The Pope’s announcement has led to a flurry of interest and news stories about Latin, not often in the news these days. Latin of course is still studied in schools and universities. There is a radio station in Finland that has been broadcasting a program in Latin for a number of years. But learning Latin is quite different in nature from learning Spanish or any other living language, and not just because it is no longer spoken. As Annalisa Quinn puts it, “Latin serves as a cultural signifier — if you are studying classics, you announce either your wealth or your devotion to the selfless pursuit of knowledge. In movies and books, knowledge of Latin or Greek is a little bit like glasses and knitwear — a kind of shorthand for intelligence.” Aside from the cultural messaging, learning Latin can actually be useful, even if you’re not a Vatican reporter. It is an essential enabler of classical studies, but it is also an excellent way to learn about the nature of language and linguistics, and to learn about the etymology of words in English and many other languages (as my Latin teacher hammered home at every opportunity). According to the Telegraph, Pope Benedict is one of the last truly fluent Latin speakers – apparently not all the cardinals present at the announcement actually understood the momentous news.
A possible solution to one of the thorny questions in historical linguistics – where the ancestor of the many European and Indian languages of the Indo-European language family was spoken – has been proposed using techniques normally enlisted in battling diseases. According to findings recently published, the parent “Proto-Indo-European” originated in the Anatolia region of present-day Turkey.
The techniques used actually go back to work in 2003 by Russell Gray and Quentin Atkinson at the University of Auckland in New Zealand: “Genes and words have several similarities, and language evolution has conventionally been mapped using a ‘family tree’ format. Gray and Atkinson theorized that the evolution of words was similar to the evolution of species, and that the ‘cognate’ of words — how closely their sounds and meanings are related to one another — could be modelled like DNA sequences and used to measure how languages evolved.
By extension, the rate at which words changed — or mutated — could be used to determine the age at which Indo-European languages diverged from one another.” (Nature) The new study adds geographical information to what was known about when the ancestor language was spoken, by using “the type of geography-based computer modelling normally used by epidemiologists to track the spread of disease” (Nature). Not all linguists are convinced, but as Atkinson comments in the article, it does point to a shift in acceptance of new methodologies in linguistic research, indicating a “shift in attitudes towards computational-modelling approaches in historical linguistics, from being just an odd sideshow to a clear focus of attention.”
Interesting story today on NPR about research done at the University of Arizona that used an audio recorder carried by volunteers, programmed to record for 30 seconds every 12 minutes. One of the things the researchers investigated with the data collected was the volume of speech of men versus women. Turns out it’s not the case that women speak dramatically more than men (as the urban myth has it): “Both men and women speak around 17,000 words a day, give or take a few hundred.” This is something that Deborah Cameron pointed out in her book, The Myth of Mars and Venus (2008).
Another finding (the main one reported on in the story) is that female scientists talk differently to male and female colleagues: the researchers had “male and female scientists at a research university wear the audio recorders and go about their work. When the scientists analyzed the audio samples, they found there was a pattern in the way the male and female professors talked to one another.” They found that the women scientists talked about their work in a quite different, and less confident, way to men than to other women. However, “when the male and female scientists weren’t talking about work, the women reported feeling more engaged.” The investigators concluded that the women were (likely unconsciously) responding (through hesitation, unassertiveness, self-doubt) to the widespread belief that women aren’t as competent as men in science.
This is a phenomenon known as “stereotype threat”, well known in social psychology. Experiments have shown that stereotype threat affects performance in a wide variety of domains. The lead scientist, a German, reported that he experienced it himself when going dancing with his wife (a Mexican) and other Latinos: everyone knows Germans can’t hold a candle to Mexicans in salsa dancing, and registering that thought in the act of dancing likely affected his performance negatively.