Introduction
Since the dawn of the computer age, people have pondered the question of machine intelligence. Could there be something like a ‘deus ex machina’, and if so, what would be the appropriate scientific tools to assess such a phenomenon? The British mathematician Alan Turing investigated the question of whether machines can have some sort of intelligence, and he came up with the imitation game. His idea rested on the premise that machines (digital computers) would imitate human intelligence with increasing believability, up to the point where a blind observer could not tell the difference between machine and human.
Over the last three years, the term ‘artificial intelligence’ has risen from the ashes of an AI winter[1] and the question has resurfaced: Can we tell the difference between a human and a machine imitating a human? In this essay I will focus on the method that Turing devised. We will cover the basics of Turing’s thought process and try to answer the following question: Does the imitation game, devised by the British mathematician Alan Turing, still hold true for the modern age, especially for more advanced Large Language Models (LLMs)?
Methodology
To assess the applicability of the imitation game for the 21st century sufficiently, we will first establish some historical and theoretical background. The work of Turing will be explained briefly and then connected to the work of other researchers, foremost Joseph Weizenbaum, Julian Togelius and Anna Strasser. After an explanation of the imitation game itself, it is time to jump to present-day LLMs and ask whether the test is still a good yardstick of intelligence on the quest for an Artificial General Intelligence (AGI).
* * *
Main
The Mathematician Alan Turing
Alan Turing was born in 1912 and remains one of Britain’s great mathematical prodigies. He is famous for several contributions that accelerated the advent of the computer age, and he is also credited with leading a team at Bletchley Park that ultimately deciphered the ENIGMA, thereby greatly influencing the outcome of the Second World War. His homosexuality had been an open secret, and in 1952 he was convicted and forced to undergo hormone therapy (effectively chemical castration). The treatment challenged his mental health so severely that he may have taken his own life in 1954 (the evidence is unclear).[2]
Turing’s Scientific Legacy
Turing wrote his dissertation “On the Gaussian Error Function” in 1934 and went on to do groundbreaking work in the field of computer science. Two scientific papers stand out, namely “On Computable Numbers with an Application to the Entscheidungsproblem” (1936)[3] and “Computing Machinery and Intelligence” (1950)[4]. His name is attached to several mathematical models, and for half a century the terms “Universal Turing Machine” and “Turing Test” were held as gold standards for machine operability.
His most famous legacy includes his work on cracking the ENIGMA, the communication device used in Nazi Germany to encrypt text-based messages. Under the Official Secrets Act[5], this feat was only made public in the late 20th century. Turing’s involvement in the practical side of the codebreaking itself has been subject to much debate, yet his theoretical foundations made the creation of the cracking device possible in the first place. His nephew Dermot Turing said in an interview:
“So, if you look at his contribution in that context, it was quite limited in terms of scope and the amount of time he spent on codebreaking; but on the other hand, it was enormous, in terms of the sheer volume of decrypts and intelligence that came out of the processing of Enigma as a result of his invention of the bombe machine.”[6]
After the war, he worked as an advisor for the US government, continuing his mathematical career.[7] Turing’s pioneering work allowed the young field of computer science to flourish. The A.M. Turing Award, colloquially known as the ‘Nobel Prize of Computer Science’, is named in his honor.[8]
A Brief History Of AI-Models
To be scientifically precise, the colloquial term ‘artificial intelligence’ is not accurate when talking about all past and present models that mimic intelligent machines. ChatGPT is a so-called Large Language Model[9], and the underlying mechanism of its knowledge is attributed to machine learning. Researchers are still working towards an Artificial General Intelligence (AGI). Since various definitions of intelligence exist, it is hard to tell when a specific model/engine/bot/program will have reached this point.[10]
Even though the public tends to mix up the terms AI, machine learning and LLM, we will briefly retrace the development of the models and projects so far. The emergence of ChatGPT shed light on research that had been going on for the previous 60 years, mostly unbeknownst to the general public.
Dartmouth Conference
The term artificial intelligence, as opposed to natural intelligence, originated at the Dartmouth Conference[11], held at Dartmouth College in Hanover, New Hampshire (USA). The event took place in 1956 and is commonly accepted as the starting point of AI research. Key concepts such as “symbolism” and “connectionism” were already discussed, as was machine learning. The optimistic belief of the initiators M. Minsky, J. McCarthy, C. Shannon and N. Rochester that the problem of machine intelligence could be solved within the upcoming decade did not hold true. Instead, the foundations of neural networks and machine learning were laid. The belief that an artificial neural network is de facto capable of learning anything that a natural neural network can learn still guides the field today.[12]
Weizenbaum and ELIZA
In 1966, a decade after the Dartmouth Conference, the US scientist Joseph Weizenbaum (1923-2008) led a team of computer specialists at MIT in creating ELIZA[13], a program much like chatbots such as ChatGPT, yet far more simplistic. His idea was to build a machine therapist operating on the principles of the psychotherapist Carl Rogers.[14] To do so, the machine had to show empathy, scan for keywords and converse in a human-like way.
Since these were the early days of computer science, the setup was rather limited. Inputs were given on a keyboard, and outputs were printed on continuous-form paper.
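To make the keyword-scanning principle tangible, here is a minimal sketch of an ELIZA-style reflection loop in Python. The rules and phrasings are invented for illustration and do not reproduce Weizenbaum’s original script; they merely show how a handful of pattern-and-template pairs can produce a Rogerian-sounding reply.

```python
import re

# A few invented, ELIZA-style rules: scan for a keyword and reflect the user's
# own words back as an open question (Rogerian style). Not Weizenbaum's script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)",    re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r"\bI am (.+)",   re.IGNORECASE), "How long have you been {0}?"),
]
FALLBACK = "Please go on."

def eliza_reply(user_input: str) -> str:
    """Return the first rule-based reflection that matches, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I feel alone lately"))   # -> "Why do you feel alone lately?"
```

Everything the “therapist” says is thus a transformation of the user’s own input, which is exactly why test subjects felt mirrored and understood.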
Figure 1: ELIZA and Joseph Weizenbaum (www.masswerk.at/elizabot/weizenbaum-eliza.jpg)
Weizenbaum himself believed that his invention would serve as a solid example of the insufficiency of 1960s technology, yet the contrary happened. People who tested the machine in the lab felt a deep sense of empathy coming from ELIZA. They also frequently asked others to leave the room when conversing with ELIZA, just as if it were a thinking entity.[15] Weizenbaum himself stated:
“The idea was to create the powerful illusion that the computer was intelligent. I went to considerable trouble in the paper to explain that there wasn’t much behind the scenes, that the machine wasn’t thinking.”[16]
After the experiment, Weizenbaum published a book on human-computer interaction (“Computer Power and Human Reason” (1976); German: “Die Macht der Computer und die Ohnmacht der Vernunft” (1978)[17]). In it he dissects the fallacy of anthropomorphizing machines and the prospective role they might play in the years to come.
Connectionists and Symbolists
Early on, the development of AI systems split into two main branches. The idea was that machine intelligence could either arise from sophisticated logic and superb reasoning, or that it would inevitably emerge given enough connections and a large enough set of training data. The belief that logic and reason should be used was called Symbolism, whereas Connectionists believed in the superiority of the neural network approach. Throughout the last 70 years, advances have been made on both sides, and at various times one approach seemed more promising than the other. Present-day LLMs utilize a connectionist approach, with vast amounts of training data within a complex neural network – essentially a machine-learning approach.[18] The acronym GPT (‘Generative Pretrained Transformer’) hints at this method.
The so-called AI-Winter
Much of the research done in the field of AI has been funded by US military budgets. Early developments in the field are among the outcomes of the Dartmouth Conference.[19] What seemed promising, even glorified, in the 1950s to 1970s lacked a considerable amount of computing power and network architecture. From the late 1980s until around 2010, the internet took the spotlight and the term AI faded into the background. Viewed graphically, we see a declining curve in connectionist approaches, a trend that experts called an ‘AI winter’.[20]
Figure 2: Progression of AI-Development (Cardon et al., 2018)
OpenAI and ChatGPT
OpenAI[21], founded by Sam Altman and Elon Musk, among others, set out to change the landscape of AI. Having humbly begun as a non-profit, the company is now among the most successful in the AI industry. Its chatbot ChatGPT[22] changed the field of text production. OpenAI has released other products, such as DALL-E, an image-generation program, but we will focus on text-based AIs.
ChatGPT is what is called a Large Language Model (LLM), one of many currently available.[23] LLMs work with vast amounts of text that ‘train’ the program to identify patterns and statements within a given corpus of content. A given input prompts the LLM to consult its learned representation and generate the answer that is statistically most likely to fit that input. There is little guarantee that the answer is true, nor that the same input produces the same output every time – quite the contrary.
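To make this statistical principle tangible, the following toy next-word generator in Python learns word-pair frequencies from a tiny corpus and samples likely continuations. This is a drastic simplification, not the transformer architecture behind ChatGPT, but it illustrates why outputs are plausible-sounding rather than guaranteed true, and why they can differ from run to run.

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for the vast training data of a real LLM.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word tends to follow which (a simple bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Sample likely continuations word by word; plausible, not reasoned."""
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat on the rug" -- varies between runs
```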
For example, users reported that even earlier versions of ChatGPT were able to write plausible-looking math.[24] A key distinction is that plausible-looking does not necessarily equate to logically derived. The neural network merely formulated an answer that sounded likely, which can be fatal in mathematical contexts, among other scientific fields.
An artificial general intelligence
The holy grail of AI research is the establishment of a general intelligence. Among the various AI systems currently in place, we see a necessary development towards specialization; programs are made for a specific purpose, after all. An artificial general intelligence would dwarf currently available systems and be able to excel at any new task.
A common point of confusion in popular discourse is the distinction between general and universal intelligence. Universal intelligence is akin to an omniscient entity, excelling at anything, whereas a general intelligence is a little harder to define. In his book on AGI, Julian Togelius writes:
“Researchers have made various attempts at defining general intelligence, seeking to go beyond the subjectiveness and shallowness of the Turing test and the abstraction and impracticality of universal intelligence. One noteworthy example is François Chollet’s work. Chollet defines intelligence in a way reminiscent of universal intelligence, but with several caveats. The informal version of his definition is as follows: ‘The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization’ “[25]
A key element is the stress on skill acquisition. A person’s intelligence is commonly assessed by their ability to acquire new skills in short periods of time. Pair this with the ability to improvise under changing circumstances and we get fluid intelligence in the Cattell-Horn-Carroll theory of intelligence.[26] The researchers differentiated between pretrained (static) and improvising (dynamic) approaches to new skills. To use more tangible metaphors, these are commonly called crystallized and fluid.[27]
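As a rough, schematic reading of the quoted definition (not Chollet’s formal measure, which involves considerably more machinery), one can think of intelligence as skill gained per unit of prior knowledge and experience, averaged over a scope of tasks:

```latex
% Schematic only: a simplified reading of Chollet's informal definition,
% not his formal measure.
\[
  \text{Intelligence} \;\approx\;
  \frac{1}{|T|} \sum_{t \in T}
  \frac{\text{skill attained on task } t}{\text{priors} + \text{experience spent on } t}
\]
```

The point of such a reading is that a system which needs little experience and few built-in priors to reach a given skill level counts as more intelligent than one that needs massive amounts of either.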
A major problem in defining and assessing AGI is the inability to describe sharply what ‘general’ means. If we focus on the progression from a general to a universal intelligence, then we would like to develop a system that supersedes any other form of (specialized) artificial intelligence. This would require the impossible task of defining what ‘any task’ means. A universal intelligence could be unbelievably good at writing mathematical theorems, yet very bad at generating random numbers. Even more bluntly, this universal intelligence would have no need to express itself in human-understandable language if it found a more convenient approach, nor should it have the ultimate ability to alter an electrical grid or influence national security.[28]
It is rather difficult to classify an LLM like ChatGPT, Gemini or Grok as AGI, since the definitions diverge, as do the underlying contents these LLMs were trained on. The same goes for non-text-based AI systems, such as Tesla’s Full Self-Driving (FSD).[29] The quest for an AGI therefore seems to be another example of why definitions (that of intelligence, in this case) matter so much – at least from a media theory perspective.
The Imitation Game by Alan Turing
“Can machines think?” Alan Turing posed this question in his paper ‘Computing Machinery and Intelligence’ (1950)[30]. To assess whether a machine is thinking, we need to establish some form of standardised test. Turing calls for an imitation game, which asks an observer to identify a human or a machine in the following test scenario:
Suppose we have three parties: a person (A), a second person (B) and an interrogator (C), who is tasked with identifying A and B as either male or female, only by asking questions. To minimize cues such as tone of voice, all questions and answers are text-based and transmitted via screen. All participants are, obviously, in different rooms. Suppose further that a human interrogator is able to identify a male or female person with high accuracy; then two questions arise:
- Could a machine come to the same conclusions with similar accuracy?
- Could the interrogator tell a machine from a person within the same setup?
The second variant is the essence of what has later been described as the Turing Test. Turing himself calls it the imitation game[31]. He then rephrases the question, stating that “Can machines think?” should be replaced by “Are there imaginable digital computers which would do well in the imitation game?”[32]
It is important to note that there is a crucial human element in the test. Scientific methods focus on objectivity and replicability, mostly independent of human influence. Simone Natale writes on Turing and the imitation game: “By defining AI in terms of a computer’s ability to pass the Turing test, Turing included humans in the equation, making their ideas and biases, as well as their psychology and character, a crucial variable in the construction of ‘intelligent’ machines.”[33]
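Read as a procedure, the test might be sketched as follows. All interfaces here (ask, identify_machine and the two respond functions) are hypothetical placeholders, since Turing only describes the setup informally; the sketch merely fixes the roles and the text-only channel.

```python
import random

def imitation_game(interrogator, machine_respond, human_respond, n_questions=10):
    """A minimal sketch of Turing's test protocol; all interfaces are hypothetical."""
    # Hide the two witnesses behind neutral labels, mimicking the separate rooms
    # and the text-only channel of Turing's setup.
    witnesses = {"X": machine_respond, "Y": human_respond}
    if random.random() < 0.5:
        witnesses = {"X": human_respond, "Y": machine_respond}

    transcript = []
    for _ in range(n_questions):
        label, question = interrogator.ask(transcript)   # e.g. ("X", "Please write me a sonnet.")
        answer = witnesses[label](question)               # text in, text out; no voice, no appearance
        transcript.append((label, question, answer))

    guess = interrogator.identify_machine(transcript)     # interrogator names "X" or "Y"
    return witnesses[guess] is machine_respond            # True means the machine was unmasked
```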
Testing Human-Like Capabilities with LLMs
Turing’s imitation game centers on whether a human tester correctly identifies the human and the computer most of the time. The original paper does not prescribe precise statistical criteria, since Turing was posing a philosophical and ethical question rather than suggesting a concrete testing environment.[34] Since the game is centered around a form of communication, conversation is what the test really assesses. Turing’s question, ‘Can a human identify the computer in the imitation game?’, hinges on the question ‘Can a computer hold a conversation?’. As S. Natale writes:
“Likewise, human players who act as conversation partners in the test will have their own motivations and ideas on how to participate in the test. Some, for instance, could be fascinated by the possibility of being exchanged for a computer and therefore tempted to create ambiguity about their identities. Because of the role human actors play in the Turing test, all these are variables that may inform its outcome.”[35]
ChatGPT-4 is allegedly good at holding a conversation and has therefore been described as passing the imitation game.[36] We will discuss more nuanced details of (empathetic) conversations later on.
A Theory of Mind – False Belief-Task
From the ELIZA project all the way to ChatGPT-4o[37], people have attributed anthropomorphic qualities to AI systems. Test subjects allegedly asked for more privacy during conversations with ELIZA.[38] One could argue that the ability to express empathy might hint at the existence of intelligence: would it not qualify as informed decision-making to phrase an answer one way or another, aimed at an understanding tone? Looking back at ELIZA, we find that its empathy was essentially keyword-related. Test subjects described ELIZA’s conversations as more empathetic when the program mimicked their choice of words or repeated certain phrases that the user wrote. In that sense, the imitation game flipped, with the computer imitating the user’s choice of language to appear more believable. It would, of course, be absurd to attribute this to an intelligent agent and try to verify that the computer did it on purpose. ELIZA was, after all, a program: a slavish follower of predetermined commands, manipulating only the inputs it was given.[39]
Yet to infer a user’s emotional state requires some kind of deeper reasoning on the machine’s part. Pure choice of words will not do when it comes to assessing more complex situations. A frequently tested situation is the False Belief Task. Anna Strasser describes it as follows:
“According to most theories, it is essential for communication to possess the capacity to correctly attribute mental states to others, typically beliefs and intentions – a capacity often called Theory of Mind […]. The False Belief Task is an experimental paradigm that is widely used to test ToM. Here is a version of it. Subjects are told a story about Sally, who puts her candy in a box and leaves the room. In her absence, another character moves the candy to a basket. When Sally returns, subjects are asked where Sally will look for her candy. Young children have difficulties with this question: they typically respond that Sally will look in the box. They fail to attribute to Sally a belief that they know to be false. Being able to pass this test is often interpreted as evidence that one has a developed ToM”[40]
Strasser states that current AI models lack this capacity, with varying results for GPT-4.[41] Various versions of this test exist, and it seems promising to have AI models undergo psychological tests similar to those used with young children. As with young children, the ability to infer situational information is difficult for current AI systems. Human communication relies on the ability to read between the lines or, in more linguistic terms, to “infer meaning beyond what signs encode”.[42]
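As an illustration of how such a probe might be posed to a chatbot, the sketch below wraps the Sally scenario into a repeated prompt. Here query_llm is a hypothetical stand-in for whichever chat interface is being tested, and the single-word pass criterion is a simplification of the protocols Strasser discusses.

```python
# A minimal sketch of a False Belief (Sally-style) probe for a chatbot.
# `query_llm` is a hypothetical stand-in for the chat API under test.

FALSE_BELIEF_PROMPT = (
    "Sally puts her candy in a box and leaves the room. "
    "While she is away, another person moves the candy to a basket. "
    "Sally comes back. Where will Sally look for her candy? "
    "Answer with a single word: box or basket."
)

def false_belief_trial(query_llm, n_trials=20):
    """Repeat the probe several times, since LLM outputs vary between runs."""
    passes = 0
    for _ in range(n_trials):
        answer = query_llm(FALSE_BELIEF_PROMPT).strip().lower()
        # 'box' means the model tracks Sally's (false) belief, not the true state.
        if "box" in answer and "basket" not in answer:
            passes += 1
    return passes / n_trials
```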
Consciousness
Does thinking require consciousness? Turing raises this question as a follow-up to the line of argumentation leading up to the imitation game. An effective and affective form of communication might evoke empathy on the part of the observer. Within the context of the imitation game, that means the machine not only has to phrase words and sentences cleverly enough to trick the observer into taking it for a human; it also must make the observer believe that it has a form of consciousness.[43]
We are rather quick to anthropomorphize machines. We tend to give ‘faces’ to cars, give names to automatic vacuum cleaners and sometimes yell at computers as if they were really listening. A similar effect happens with LLMs: people tend to address them in the second person (“Hey Alexa, can you do …”), and likewise the opening screen of ChatGPT asks “What can I help with?”.[44]
Turing himself addresses consciousness in his paper, quoting Professor Jefferson’s Lister Oration (his italics), which states essentially that not until a machine could write, say, a sonnet could we regard it as thinking, the argument being that this would require emotion to be put into words. Given the increasing capabilities of LLMs, we may smirk at this remark. Turing himself gives the following answer, which I do not want to paraphrase:
“This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.”[45]
Given the capabilities of present-day LLMs and their ability to write a multitude of types of text, we might ascribe to them thinking capabilities, but not consciousness. To do so would be in accordance with Turing’s argument from consciousness, as well as with the proposition that we can only speak of thinking when a complex task (such as writing poetry) can be accomplished. Likewise, Anna Strasser talks about phenomenal consciousness, which would be sufficient to pass the imitation game yet offers no further insight into the existence of a thinking and perceiving conscious entity. She says: “Indeed, it is at least conceivable that we’d have no way of knowing whether an AI that passes all tests with human-like results would or wouldn’t be phenomenally conscious.”[46]
The sufficiency of the Turing-Test (Turing’s own verdict)
If a machine passes the imitation game, should we then conclude that it thinks? Turing gives his own verdict and presents some counterarguments. The argument from consciousness[47] has been elaborated on in the section above. Given the complexity of machines and their increasing computing power since Turing published his paper, other objections arise.
In the mid-19th century, Charles Babbage and Ada Lovelace worked on the ‘Analytical Engine’[48], a machine that preceded the computer, yet whose idea lives on. Babbage’s machine ran on mechanical power, whereas a digital computer can utilize far more components, because they are electrical and also smaller. The Lady Lovelace Objection, as Turing calls it[49], turns on the question of whether computers are made for more than just crunching numbers. Lovelace envisioned a future where machines would be used to enhance creative tasks, write poetry and novels, and be part of the artistic process.
In her time (and Turing’s), this was considered absurd. Lovelace’s almost cynical statement was that “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform”.[50] Turing comments on this statement and refers to the state of technology both in Lovelace’s time and in his own: just because a machine that can “originate” something did not exist in their time does not imply that constructing and programming one is impossible.
The sentiment that a machine is a slavish tool, confined by the restrictions and permissions its programmers set, might be outdated; but so is the belief that creativity and surprise are purely human matters – at least according to Turing. His argument follows the philosophical definition of creativity as the recombination of previous materials and ideas into new forms. This could very well be done by a machine, given enough controlled randomness and variability.[51]
Humans and Tests | ChatGPT, Perplexity and Gemini on the Turing-Test
Lastly, we can turn the interrogation around and put the human to the test. Do we, as humans, even do well on the tests we have LLMs undergo, and if not, what qualifies us as human? There are several methods to test an AI’s capability and thereby assess whether it qualifies as an AGI.
Currently, AI models such as ChatGPT, Perplexity and Gemini seem to do well in the imitation game.[52] To what degree specific versions of the aforementioned models fool researchers is difficult to assess with statistical data. The human element in the conversation is still present, since some people are more susceptible to the imitating machine than others.[53]
As we have seen, the imitation game does not really answer whether a machine can actually think; rather, it assesses the ability to hold a conversation for an extended period of time. Likewise, we might weigh a child’s responses against an adult’s within the imitation game and ask the observer the trick question of which one he thinks is the computer. The child might objectively lose the game, but on a larger scale the test would be deemed insufficient. Methods like the False Belief Task seem more promising for presently available LLMs. As Togelius puts it: “It seems very possible to create a definition of general intelligence that humans would not score very highly on.”[54]
AI research faces the problem of what general intelligence even means and how to define it sharply. Since the basic functionality (neural networks) mimics a nervous system, it is tempting to compare it to (natural) human intelligence. Togelius then poses the question:
“If humans turn out to have low universal intelligence, what would that mean? Would it mean that we have the wrong definition, and general intelligence is something else? Or would it mean that humans are not generally intelligent?”[55]
This would complicate the standards by which LLMs and AI systems are assessed. It also goes beyond the scope of Turing’s original paper. Moreover, it highlights the importance of standardized tests for both parties, biological and artificial, if we want to compare them at all.
Conclusion
Does the imitation game, devised by the British mathematician Alan Turing, still hold true for the modern age, especially for more advanced Large Language Models (LLMs)? After reconstructing Turing’s line of argumentation and the forms and appearances that AIs/LLMs have taken since, we can answer this question with a solid ‘No’. Early models like ELIZA, which were meant almost as a satire of computational capabilities, fooled many of the researchers. Current LLMs by OpenAI, Google and the like have far more capabilities than their forebears. Even with Turing’s shift from “Can machines think?” to “Can a machine pass the imitation game?” and on to the implicit question “What does intelligence even mean, for both machine and human?”, we still have no clear answer.
What has become more concrete is the notion that testing is needed and that humans are fallible.
As technology advances, so do the testing methods. One of the difficulties of the near future might also be the definition of intelligence itself, so that we can recognize an Artificial General (or Universal) Intelligence when we see one.
This is, of course, a long way from the question of artificial consciousness itself. Turing’s remark that consciousness, by our current standards, is even harder to measure and can merely be experienced will put even more stress on the testing environments of AI systems.
In the end, Turing published his paper in MIND[56], a journal of philosophy and psychology, not of mathematics or computer science – a nod to the fact that understanding artificial intelligence/consciousness runs parallel to understanding natural intelligence/consciousness. Turing was surely aware of this when writing the last line of his paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”[57]
Sources
[LIMITS] | Berry, David, The Limits of Computation: Joseph Weizenbaum and the ELIZA Chatbot, Weizenbaum Journal of the Digital Society, vol. 3:3, 2023
[WINTER] | Haigh, Thomas, There Was No ‘First AI Winter’, Essay, Opinion, 2023
[NUMBERS] | Huws, C.F., Finnis, F.C., On computable numbers with an application to the Turingproblem, Artificial Intelligence and Law, (2017) 25:181–203
[CAPTCHA] | Justie, Brian, Little history of CAPTCHA, Essay, Internet Histories Digital Technology, Culture and Society, 2020
[IMIT-GAME] | Matthews, Joshua, Volpe, Catherine Rita, Academics‘ perceptions of ChatGPT-generated written outputs: A practical application of Turing’s Imitation Game, Australasian Journal of Educational Technology, 2023, 39(5).
[DECEIT] | Natale, Simone, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test, Oxford University Press, 2021
[ANTHOLOGY] | Strasser, Anna (Ed.), Anna’s AI Anthology – How to live with smart machines, xenomoi Verlag e.K., Berlin, 2024
[AGI] | Togelius, Julian, Artificial General Intelligence, MIT Press, 2024
[TURING-INT] | Turing, Alan, Computing Machinery and Intelligence, Essay, Mind, New Series, Vol. 59, No. 236. (Oct. 1950), pp. 433-460
[TURING-NUM] | Turing, Alan, On computable numbers with an application to the Entscheidungsproblem, Essay, Proceedings of the London Mathematical Society. 58: 230–265., 1936
[VERNUNFT] | Weizenbaum, Joseph, Die Macht der Computer und die Ohnmacht der Vernunft, Suhrkamp, 1978
[HIST-VID] | The history of Chat-GPT, Video, URL: https://www.youtube.com/watch?v=OFS90-FX6pg, YouTube (09.03.25, 11:57)
References
[1] cf. [WINTER] p.37 f.
[2] Alan Turing, Wikipedia, URL: en.wikipedia.org/wiki/Alan_Turing, (04.03.25, 14:25)
[3] cf. [TURING-NUM]
[4] cf. [TURING-INT]
[5] Alan Turing and the Hidden Heroes of Bletchley Park – Interview with John D. Turing,
www.nationalww2museum.org/war/articles/alan-turing-betchley-park (05.03.25, 10:42)
[6] ibid.
[7] ibid.
[8] A.M. Turing Award – Official Website, URL: amturing.acm.org/ (05.03.25, 10:33)
[9] See Introducing ChatGPT, URL: openai.com/index/chatgpt/, (04.03.2025, 14:40)
[10] [AGI] p.72
[11] cf. [WINTER] p.36
[12] cf. [TURING-INT] p. 451 ff.
[13] cf. [ANTHOLOGY] p. 245
[14] [AGI] p. 54
[15] [ANTHOLOGY] p. 246
[16] [LIMITS] p.8
[17] See [VERNUNFT] in the bibliography
[18] cf. [ANTHOLOGY] p. 252
[19] cf. [WINTER] p. 36
[20] cf. [WINTER] p.37 f.
[21] OpenAI, URL: www.openai.com
[22] ChatGPT by OpenAI, see: www.chatgpt.com
[23] Other notable LLMs are: Gemini (Google), Perplexity, Grok (x.com), Copilot (Microsoft)
[24] cf. [HIST-VID] 9:41 f.
[25] [AGI] p.65
[26] [AGI] p. 66
[27] ibid.
[28] [AGI] p. 64 ff.
[29] Full Self-Driving (Supervised), Tesla Model Y – Users Manual, URL: www.tesla.com/ownersmanual/modely/en_us/GUID-2CB60804-9CEA-4F4B-8B04-09B991368DC5.html (10.03.25, 15:40)
[30] cf. [TURING-INT] p.433f.
[31] ibid. p. 434
[32] ibid. p.442
[33] [DECEIT] p.22
[34] cf. [TURING-INT] p.460f.
[35] [DECEIT] p.21
[36] cf. [ANTHOLOGY] p.261f.
[37] The latest, publicly available version, as of writing this paper (Mar 2025)
[38] cf. [LIMITS] p.11
[39] cf. [TURING-INT] p. 446 f.
[40] [ANTHOLOGY] p.261
[41] cf. ibid. p.261 f.
[42] cf. ibid. p.262
[43] cf. ibid. p. 266 f.
[44] See: www.chatgpt.com/ (10.03.25, 16:42). The publicly available versions of Gemini (gemini.google.com/app) and Perplexity (www.perplexity.ai/) ask a more neutral “What do you want to know?” (Mar 2025)
[45] [TURING-INT] p.446 (original italics)
[46] [ANTHOLOGY] p. 266f. (original italics)
[47] cf. [TURING-INT] p. 446 f.
[48] Turing cites Ada Lovelace in his bibliography: Countess of Lovelace, Translator’s notes to an article on Babbage’s Analytical Engine, Scientific Memoirs, (ed. by R. Taylor), vol. 3 (1842), 691-731.
[49] cf. [TURING-INT] p. 450
[50] ibid. p. 450 (italics originally by Lovelace)
[51] ibid. p. 451
[52] cf. [ANTHOLOGY] p. 247f.
[53] cf. [IMIT-GAME] p.85
[54] [AGI] p. 67
[55] [AGI] p.63
[56] See opening page on [TURING-INT]
[57] [TURING-INT] p.460