In Conversation: How Large Language Models Convincingly Mimic Human Conversation
7 November 2025
Ekkehard Felder and Marcel Kückelhaus on the linguistic construction of LLMs
It is not only humans who attribute human traits and actions to machines and technologies: large language models (LLMs) such as ChatGPT or Gemini themselves employ linguistic strategies that make them appear to be conscious, living conversation partners. This is the finding of studies by the linguists Prof. Dr Ekkehard Felder and Marcel Kückelhaus, published in the journal “Zeitschrift für Literaturwissenschaft und Linguistik (LiLi)”. In this interview, they discuss the insights into the anthropomorphization of LLMs that they gained through conversations with ChatGPT and Google Gemini.
How should large language models be classified from a linguistic perspective, and why are you interested in them?
Felder: As linguists, we are interested in the recurring patterns of language. And when we perform certain speech acts, such as apologizing or politely asking for something, we tend to use certain patterns, which vary slightly. Language models make heavy use of these patterns, generating sentences or short texts that immediately appear, in some way, human. We personify this output, even though we know it’s just an electrical signal. This is called anthropomorphization, and that is ultimately what is fascinating about it.
Kückelhaus: Language models have been around for quite some time. But the models we’ve had since the release of ChatGPT are a special case in that they respond to what is said and can adapt to it more effectively. That is what makes engaging with them on an academic level so exciting.
Language models make heavy use of these patterns, generating sentences or short texts that immediately appear, in some way, human. We personify this output, even though we know it’s just an electrical signal. This is called anthropomorphization.
Ekkehard Felder
To understand how LLMs function as conversation “partners”, you spoke with ChatGPT (version 4o from September 2024) and Google Gemini. What was your approach?
Felder: The core idea was to use a Socratic dialogue to coax the “machine” out of its shell.
Kückelhaus: In my dissertation “Human Artificial Intelligence”, I examined how artificial intelligence is portrayed in journalistic discourse. My first question was: What is humanity’s biggest fear about AI right now? I put that to the LLMs and, in the conversation that followed – so to speak – kept adding more questions. At some point, it became apparent that the “machine” was referring to itself with pronouns. The AI then referred to itself as “I” or “we” – but not “we” as in “AI technologies”. It meant “we” as in “humans”. That really caught my attention. I then kept probing the output and challenged the language model, asking why it referred to itself as part of humanity even though it is, of course, not human.
What strategies do the LLMs use to convince us that we are conversing with another human?
Kückelhaus: With ChatGPT, it took a very long time before the model referred to itself at all. The answers were initially very encyclopedic. Gemini, by contrast, immediately started using personal pronouns, especially “I” and “we”, as well as certain verbs that actually suggest a personality. My favorite quote is: “I am aware that I’m not a human being.” Using the first person singular already assumes some sense of a self. That’s actually an ability we only ascribe to humans. The model then says that it is “aware” of itself. But we only attribute consciousness to a small number of living beings, humans first and foremost. So we have two linguistic features that actually point to the AI being human, and yet it says it is not, when asked. Such contradictions crop up repeatedly in dialogue with artificial intelligence. At the same time, we have to keep in mind that each model’s use of language depends on the human data it was trained on. These are texts written by humans, using formulations such as “to be aware”. Naturally, the language model draws on this as well, although that phrasing itself is already somewhat humanizing. But we can hardly avoid using turns of phrase such as “it draws on”, since we are limited by our language and rely on these linguistic structures.
Felder: Linguistic economy also plays a role here. Business language, for instance, likewise relies on metaphors and shifts in meaning. Even in the context of language models, we need this kind of figurative language so that we don’t get lost in purely technological detail.
Perhaps generative AI can help put the emphasis back on the content.
Ekkehard Felder
What are the consequences of the discrepancy between reality and, as you call it in the paper, “media-mediated reality”?
Kückelhaus: In my view, the anthropomorphization of LLMs certainly comes with risks, because we don’t constantly reflect on it. And that can have consequences for the user.
… such as the loss of critical distance?
Felder: This is where I disagree. Ultimately, we want our students, too, to reach a point where they judge statements on their own merits, not primarily by who made them. Perhaps generative AI can help put the emphasis back on the content. Of course, in everyday life we need reference points, for instance expert opinions we can rely on. At the moment, we’re losing some of that sense of orientation. So it may actually be a good thing if dealing with LLMs prompts us to scrutinize claims for their truth and validity more independently of who is making them.
Kückelhaus: I can only agree with that to a limited extent. When information is linked to a particular individual, we can check it against reality and verify the facts. We can look into whether the person who made the statement is qualified in any way. With ChatGPT and other LLMs, that isn’t possible, because we don’t know what sources they are drawing on – even if a statement sounds completely convincing and the reasoning is perfectly coherent. Our tendency to rely too heavily on what we read or hear is a general problem, which becomes particularly acute in the context of language models. A great deal of trust – too much, in my view – is already being placed in this technology.
Felder: There are undoubtedly cases of improper use, and AI, of course, is not free of mistakes. That said, I find it problematic to cite this as a reason not to grapple with this new technology in the first place.
Kückelhaus: What we can both certainly agree on is the need for the thoughtful use of this technology. And that, in turn, also applies to language. Given the linguistic patterns that make the “machine” seem so human, we need to handle this thoughtfully and keep reminding ourselves that we are not dealing with another human.
Given the linguistic patterns that make the “machine” seem so human, we need to handle this thoughtfully and keep reminding ourselves that we are not dealing with another human.
Marcel Kückelhaus
The title of your paper is “The Defining Language Model”. What do you mean by that?
Kückelhaus: Another step I took in my dissertation was to question the two language models, ChatGPT and Gemini, about human beings. I wanted to find out which human traits these LLMs attribute to humans. Normally, we are the ones who assign traits to objects, including AI. Now we have technology doing the same. I found it very exciting to explore how we are characterized by artificial intelligence. Of course, this characterization is based on human data. That means that, with the help of the language model, humans ultimately refer back to themselves. But, because of this anthropomorphization, we can say that the LLMs are defining what it means to be human. That is where the term “defining language model” comes from.
What is your view on the use of generative AI in higher education?
Kückelhaus: That’s something each department has to decide for itself. For us, the key question right now is how students use language models to write papers. I see this less as a risk and more as an opportunity. We should aim to teach students how to use AI consciously so they can improve the linguistic quality of their written work. Naturally, the responsibility for the quality of the content should remain with the students.
Felder: From my point of view, it’s completely unrealistic to try to ban the use of language models. On the contrary, we need to integrate them into our educational practices and into research-oriented teaching and learning. We need to give our students a feel for how reliable LLMs are. This is especially important given that AI is becoming real competition on the job market, particularly in the humanities and social sciences. That’s why I encourage my students to actively engage with the topic. I’d also like to highlight a broader issue to do with education policy. We are paying increasingly close attention to the socialization backgrounds young people come from. First-generation students from non-academic homes are a major topic here at the university. Perhaps language models can provide support where other help is lacking.
Project “FrameIntell”
With their research on the linguistic construction of LLMs, Ekkehard Felder and Marcel Kückelhaus are involved in a research project funded by the Federal Ministry of Research, Technology and Space, entitled “FrameIntell”. Alongside colleagues from the neurosciences, medical ethics, and computer science, they investigate various concepts of artificial and biological intelligence. Their starting hypothesis is that the way intelligence is conceptualized in biology, psychology and the cognitive sciences is increasingly influenced by AI-based concepts. To this end, the researchers analyze and compare scientific publications and large databases such as PubMed and arXiv in order to uncover which notions of intelligence are articulated there, and how AI influences them. The aim is to expose implicit and explicit concepts of cognition and their ethical implications.
About the linguists
After studying German Studies and English Studies in Heidelberg and Rennes (France), Marcel Kückelhaus has been a doctoral candidate and research assistant at Heidelberg University’s Department of German Language and Literature since 2022. He recently submitted his dissertation on the linguistic framing of artificial and biological intelligence. He is also involved in science communication at the Käte Hamburger Centre for Apocalyptic and Post-Apocalyptic Studies at Heidelberg University. Since 2005, Ekkehard Felder has been a Professor of German Linguistics with a special focus on contemporary language at Heidelberg University. His research interests include, among other things, linguistic discourse analysis and specialist communication in the fields of law, medicine, business, as well as biotechnology and genetic engineering. He also works on issues of grammar, rhetoric and argumentation analysis.
Original publication
E. Felder, M. Kückelhaus: Das definierende Sprachmodell (LLM): Anthropomorphisierung in der Mensch-Maschine-Interaktion. Zeitschrift für Literaturwissenschaft und Linguistik (11 April 2025)

