ChatGPT and the human-machine relationship

There has been an eruption of interest recently in generative AI, due to the public release of OpenAI's ChatGPT, a tool that, given a brief prompt, can generate in seconds coherent, substantive, and eerily human-like texts of all kinds. The availability of such a tool has led educators, especially in fields that rely on essay writing, to wring their hands over students simply turning in assignments written by ChatGPT. Some have embraced GPTZero, a tool designed to determine whether a text was written by an AI system (in my testing, its accuracy was hit-or-miss). Some school systems have banned the use of ChatGPT outright.

I believe that is the wrong approach; I believe we need instead to help students use AI tools appropriately, adjust writing assignments accordingly, and lead students to understand the limits of what such tools can do (there are many). ChatGPT will soon be joined by similar tools, and their abilities are sure to grow rapidly. That means they will see wide use in all domains of human activity. In their real lives after graduation, students will be expected to use such tools; let’s prepare them for that future. I argued for that position last year in a Language Learning & Technology column (“Partnering with AI”).

In a forthcoming LLT column (“Expanded spaces for language learning,” available in February), I look at another aspect of the presence of such tools in our lives, namely what it means for the human-machine relationship and for understanding the nature (and limits) of human agency. A spatial orientation to human speech, which emphasizes the primacy of context (physical, virtual, emotional, etc.), has gained currency in applied linguistics in recent years. Rather than viewing language as something set apart from spatio-temporal contexts (as was the case in structuralism or Chomskyan linguistics), scholars such as Pennycook, Blommaert, and Canagarajah show how the spatial context is central to meaning-making. This perspective is bolstered by theories in psychology and neuroscience holding that cognition (and therefore speech) is not exclusive to the brain, but rather is embodied, embedded, enacted, or extended (4E cognition theory). That places greater meaning-making emphasis on physicality (gestures, body language) as well as on the environment and potential semiotic objects in it (such as AI tools!).

I argue that an approach helpful in understanding the dynamics at play is sociomaterialism (also labeled “new materialism”), an approach used widely in the social sciences and, more recently, in applied linguistics. It offers a different perspective on the relationship of humans to the material world. Reflecting theories in the biological sciences, sociomaterialism posits a more complex and intertwined relationship between an organism and its surroundings; for us bipeds, that translates into a distributed agency shared by humans and non-humans (including machines).

Here is an excerpt from the conclusion:

A spatial orientation to language use and language learning illuminates the complex intertwining of people and artifacts physically present with those digitally available. The wide use of videoconferencing in education, for example, complicates concepts of local and remote, as well as of online versus offline. Neat divisions are not tenable. Mobile devices, too, represent the intersection of the local and the remote, of the personal and the social; they are equipped to support localized use while making available all the resources of a global network. From a sociomaterial viewpoint, the phone and user form an entanglement of shared agency; smartphones supply “extensions of human cognition, senses, and memory” (Moreno & Traxler, 2016, p. 78). Their sensors, proximity alerts, and camera feeds function as stimuli, extending cognition while acting as intermediaries between ourselves and the environment. For many users, smartphones have become part of their Umwelt, an indispensable “digital appendage” (Godwin-Jones, 2017, p. 4) with which they reach out to and interact with the outside world.

A sociomaterial perspective and 4E cognition theory problematize distinctions of mind versus body, as they also qualify the nature of human agency. The increasing role that AI plays in our lives (and in education) adds a further dimension to this complex human-material dynamic. AI systems built on large language models produce language that closely mimics human-created texts in style and content. A radical development in writing-related technologies is the AI-enabled incorporation of auto-completion of phrases into text editors and online writing venues, along with suggestions for alternative wording. Auto-completion features in tools such as Google Docs or Grammarly raise questions of originality and credit. That is all the more the case with tools such as ChatGPT, which can generate texts on virtually any topic and in a variety of languages. In God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning (2021), O’Gieblyn argues that the powerful advances in language technologies require new definitions of intelligence and consciousness, an argument bolstered by 4E cognition theory. Given the language capabilities of AI tools today, particularly the text generation capabilities of services such as ChatGPT, we also need new understandings of authenticity and authorship.

O’Gieblyn points out that AI is able to replicate many functional processes of human cognition, such as pattern recognition and prediction. That derives from the fact that language generation in such systems is based on statistical analysis of syntactic structures in immense collections of human-generated texts. That probabilistic approach to chaining together phrases, sentences, and paragraphs can produce texts that are mostly cohesive and logically consistent. Yet these systems can also betray a surprising lack of knowledge about how objects and humans relate to one another, resulting in statements that are occasionally incoherent from a social perspective. The reason is that AI systems have no first-hand knowledge of real life; unlike human brains, they have no referential or relational experiences to draw on. Since the bots have no real understanding of human social relationships, they tend to treat one cultural context as universal, failing to make appropriate distinctions based on situation. This can lead to unfortunate and unacceptable language production, including the use of pejorative or racist language.
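To make that probabilistic mechanism concrete, here is a minimal sketch of next-word prediction by statistical likelihood. It is not how any production LLM actually works (real systems use neural networks over learned token representations, not raw bigram counts), and the word counts below are invented for illustration; but it shows the core idea of chaining words together by probability rather than by understanding.

```python
import random

# Toy bigram counts standing in for the statistics a language model
# derives from immense text corpora (these counts are invented).
bigram_counts = {
    "the": {"cat": 4, "dog": 3, "idea": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"ran": 4, "sat": 1},
    "sat": {"quietly": 2, "down": 3},
    "ran": {"away": 4, "home": 2},
}

def next_word(word: str) -> str:
    """Sample a continuation in proportion to how often it followed
    `word` in the training data -- prediction, not comprehension."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return "."  # no statistics for this word: end the sentence
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start: str, max_len: int = 5) -> str:
    """Chain words together one probabilistic step at a time."""
    words = [start]
    for _ in range(max_len):
        w = next_word(words[-1])
        if w == ".":
            break
        words.append(w)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Nothing in this procedure knows what a cat or a dog is; it only knows which words tended to follow which in its training data. That is the same gap that produces the social incoherence described above.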

The deep machine learning processes behind LLM-based chatbots do not lend themselves to direct inspection or to targeted, interpretable adjustment of their internal workings. Today we have better insight into human neural networks through neuroimaging than we do into the black box of the artificial neural networks used in AI. That fact should make us cautious about using AI-based language technologies unreflectively. At the same time, advanced AI tools offer considerable potential benefits for language learning, and their informed, judicious use, alongside contextually appropriate semiotic resources, seems to lie ahead for both learners and teachers.
