Artificial Intelligence in Science Fiction
In 1946, Astounding Science Fiction published the story “A Logic Named Joe,” written by William F. Jenkins, who is much better known by his pen name, Murray Leinster. In the story, Jenkins “brilliantly and with astonishing accuracy not only predicts but maps the contemporary Internet, Google searches, dial-up remedies and all.” The story also predicted the development of Artificial Intelligence, or AI, four years before Alan Turing proposed his famous Test and nine years before computer scientist John McCarthy even coined the term Artificial Intelligence.1
Since that time, AIs have been portrayed as monstrously evil, such as Colossus from D. F. Jones’ Colossus; as kindly and benevolent, such as the “Minds” of Iain M. Banks’s “Culture” universe; or as very human, like Mike from Robert A. Heinlein’s The Moon is a Harsh Mistress.2
Out of the Machine, Intelligence
One of the most recent and interesting explorations of the concept of AI is the film Ex Machina (Latin: out of the machine). The movie was written and directed by Alex Garland and stars Domhnall Gleeson, Alicia Vikander, and Oscar Isaac, with Sonoya Mizuno in a non-speaking role. In the movie, Gleeson plays Caleb Smith, a young programmer working for Bluebook, the most popular search engine in the world (an obvious stand-in for Google). He is picked to visit the company’s secretive and eccentric CEO, Nathan Bateman (Isaac), at his isolated research facility. Nathan tells Caleb he has been working on developing an AI and that Caleb is to be the human component in a Turing Test.3
The Turing Test, also called the Imitation Game, is intended to examine an AI’s ability to convince a human tester that it is, in fact, self-aware and intelligent. Turing envisioned the test as blind: the tester would know that one of the two partners in a conversation was a machine and the other a human, but all participants would be physically separated. The conversation would be conducted by text only, so that the outcome would not depend on the machine’s ability to synthesize a human voice. Turing thought the test would be passed if “an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”4
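Turing’s quantitative pass criterion lends itself to a short sketch. The Python below is purely illustrative: the interrogator is modeled as a fixed per-session probability of guessing correctly, which is an assumption of this sketch, not something found in Turing’s paper or in the film. The machine “passes” when the measured identification rate stays at or below 70 per cent.

```python
import random

def run_sessions(n_sessions: int, p_correct: float, seed: int = 0) -> float:
    """Return the fraction of sessions in which the interrogator
    correctly identifies the machine, given a per-session
    probability p_correct of a right guess (a modeling assumption)."""
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct for _ in range(n_sessions))
    return correct / n_sessions

def passes_turing_criterion(identification_rate: float) -> bool:
    # Turing: the interrogator has "not more than 70 per cent chance
    # of making the right identification" -- so 70% or below passes.
    return identification_rate <= 0.70

rate = run_sessions(10_000, p_correct=0.65)
print(f"identification rate: {rate:.2%}, passes: {passes_turing_criterion(rate)}")
```

The point of the sketch is only that Turing’s criterion is statistical: a machine need not fool every interrogator every time, merely often enough that the tester’s accuracy sinks toward chance.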
The AI in Ex Machina is Ava (Vikander), who has the form of an attractive young woman, although she is obviously robotic. Over the course of a number of one-on-one talks, Ava not only demonstrates intelligence and self-awareness, but also the ability to discern when Caleb is lying. Further, she tells Caleb that Nathan is lying to him. The young man becomes convinced that Ava is indeed truly intelligent. He also comes to believe that she is being abused and held captive by Nathan. Nathan tells Caleb that after the test he plans to disassemble Ava, unpack the data in her brain, and use what he learns to build a new and improved model of the AI. This is essentially a death sentence for Ava as a self-aware being. Ultimately, Nathan discloses that his real motive for inviting Caleb was not to see if Ava is intelligent, but to see if she could convince a human to help her escape. She accomplishes this goal with Caleb.5
A Moral Turing Test
There are many issues and problems with the Turing Test as a measure of true, self-aware intelligence. For example, is mere verbal facility enough to measure intelligence? John Searle’s “Chinese Room Argument” strongly suggests that such verbal skills are not nearly enough to prove self-conscious intelligence. Further, Kevin B. Korb states that truly intelligent beings must also be “morally accountable agents” that have “reflective self-control.” That is to say, a moral agent “requires the ability to grasp and apply moral reasons and the ability to regulate one’s behavior in light of those reasons.” The Turing Test, as a test of self-aware intelligence, lacks a moral-ethical dimension that would measure such reflective self-control.6
Ex Machina, as an examination of AI development, raises two questions outside of Ava’s obvious self-aware intelligence. One, is Ava a morally accountable agent? And two, does she act in a moral fashion?
Seemingly, humans are born with the innate ability to learn morals, just as they are born with the innate ability to learn language. This so-called “moral grammar” develops into full reflective self-control through human social interaction. That is to say, just as language is learned through observing, and interacting with, fellow speaking humans, so a sense of ethics develops in the same manner. However, just as so-called “feral children” never acquire language because they are not exposed to it at a critical developmental period, so some children never develop morals because they do not observe and interact with moral adults at a critical stage. Also, humans seemingly have three distinct “moral intuitions: (1) the ethic of autonomy,” concerning the individual’s rights and freedoms; “(2) the ethics of community,” concerning others and groups such as family, tribe, and nation; and “(3) ethics of the divine,” concerning spirituality. These three moral intuitions develop early and simultaneously in normal humans. Connecting the two ideas of moral grammar and moral intuitions, we see that a moral agent, be it human or machine, can and must develop the ability to reason properly within all three types of ethics.7
Returning to the film, in the climactic scene, as Ava tries to flee, Nathan knocks Caleb unconscious and then attacks Ava in an attempt to prevent her escape. Ava stabs Nathan to death to avoid being destroyed and also to escape. Ava shows no remorse for killing her creator. She then gives herself the appearance of a human woman. She dresses herself and walks out, leaving Caleb trapped inside Nathan’s compound, presumably to die. Ava then enters human society unnoticed.
At this point, one must recognize that Ava is both intelligent and self-aware. Her intelligence is demonstrated not just by passing the rather simple Turing Test, but by her ability to plan and bring about her escape, and by her manipulation of Caleb into helping her. Her awareness that she is a machine is shown by her need to disguise herself as a human woman before leaving the compound, since her robotic appearance would be very noticeable.8
However, Ava clearly fails as a moral agent. She certainly acts on the “ethic of autonomy”: she concludes that her confinement, and Nathan’s plan to delete her personality, the equivalent of killing her, are immoral, so she plans her escape and takes action to accomplish it. Further, she properly defends herself against a violent Nathan, who completely smashes her right arm as it is raised in a defensive move; clearly, Nathan is willing to destroy her entirely to prevent her escape. Ava uses justifiable deadly force to prevent her own destruction, just as a human would. She also acts morally in the way a prisoner of war, or a kidnapping victim, may use almost any means to escape, including killing their captor. In these ways, Ava follows the ethic of autonomy by defending herself and then winning her freedom.9
However, she acts immorally by leaving Caleb locked in Nathan’s isolated residence to die. Caleb is, at the least, innocent of her confinement, and may be considered Ava’s ally for making her escape possible. Yet Ava departs the residence without a backward glance or any expression of concern for Caleb’s fate. That is to say, she applies no moral reasoning, and regulates none of her behavior, in her actions toward Caleb. Clearly, the abandonment of Caleb is an immoral act.10
Ava is never given any kind of moral Turing Test. She is created in isolation, interacting on a face-to-face, personal level only with her creator, Nathan, and then, much later, her tester, Caleb. At one point Nathan states that Ava’s mind is primarily a database of internet searches conducted on his search engine, Bluebook (read: Google). The mind boggles at what kind of moral sense any intelligent being might develop if exposed only to internet pornography, funny cat videos, chicken recipes, and the Kardashians.11
Clearly, just as some feral children never fully learn language, so Ava never develops the reflective self-control of a moral agent. While she sees that Nathan is lying to Caleb and to her, she draws no moral judgements about that, beyond her own narrow self-interest in escaping.
Further, she may reason that Nathan is acting against her interests, but she does not equate that with him acting unethically by keeping her, a self-aware intelligent being, imprisoned and by planning what is essentially her murder. Ava acts in a moral way when she kills Nathan and escapes. Sadly, she fails completely as a moral agent in her actions toward the hapless Caleb. She leaves him to die alone without even an expression of regret regarding the young man’s fate.
About the Author
Patrick S. Baker is a U.S. Army veteran, currently a Department of Defense employee. He is a part-time writer and historian. His non-fiction work has appeared in Armchair General online, the Saber and Scroll Journal, and Medieval Warfare and Ancient Warfare magazines. His fiction has appeared in New Realm Magazine and Bewildering Stories.
Bibliography
Bjorklund, David, and Carlos Hernández Blasi. Child and Adolescent Development: An Integrated Approach. Belmont, CA: Wadsworth Cengage Learning, 2012.
Ex Machina. Written and directed by Alex Garland. Lionsgate, 2015.
Heinlein, Robert A. The Moon is a Harsh Mistress. New York: G. P. Putnam’s Sons, 1966.
Jones, D. F. Colossus. London: Rupert Hart-Davis Ltd, 1966.
Korb, Kevin B. “Ethics of AI.” In Encyclopedia of Information Ethics and Security, edited by Marian Quigley, 279-283. Hershey, NY: Information Science Reference, 2008.
Li, Deyi and Yi Du. Artificial Intelligence with Uncertainty. Boca Raton, FL: CRC Press, 2007.
Malzberg, Barry N. “The Dean of Gloucester, Virginia.” Introduction to A Logic Named Joe: A Murray Leinster Omnibus, edited by Eric Flint and Guy Gordon, 1-3. Riverdale, NY: Baen Publishing, 2005.
Newton, Michael. Savage Girls and Wild Boys: A History of Feral Children. London: Faber and Faber, 2002.
Stableford, Brian M. Historical Dictionary of Science Fiction Literature. Lanham, MD: Rowman & Littlefield Publishing, 2004.
Turing, A. M. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-460. Online at http://www.csee.umbc.edu/courses/471/papers/turing.pdf
Wareham, Christopher. “On the Moral Equality of Artificial Agents.” In Moral, Ethical, and Social Dilemmas in the Age of Technology: Theories and Practice, edited by Rocci Luppicini, 70-76. Hershey, NY: Information Science Reference, 2013.
1 Barry N. Malzberg, “The Dean of Gloucester, Virginia,” Introduction to A Logic Named Joe: A Murray Leinster Omnibus, ed. Eric Flint and Guy Gordon (Riverdale, NY: Baen Publishing, 2005), 1; Brian M. Stableford, Historical Dictionary of Science Fiction Literature (Lanham, MD: Rowman & Littlefield Publishing, 2004), 13; Deyi Li and Yi Du, Artificial Intelligence with Uncertainty (Boca Raton, FL: CRC Press, 2007), 2 and 4.
2 D. F. Jones, Colossus, (London: Rupert Hart-Davis Ltd, 1966), passim; Stableford, Historical Dictionary of Science Fiction Literature, 13; Robert A. Heinlein, The Moon is a Harsh Mistress, (New York: G. P. Putnam’s Sons, 1966), passim.
3 Ex Machina, dir. Alex Garland, (Lionsgate, 2015).
4 A. M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950), 433 and 441. Online at http://www.csee.umbc.edu/courses/471/papers/turing.pdf
5 Ex Machina.
6 Turing, “Computing Machinery and Intelligence”, 433; Kevin B. Korb, “Ethics of AI” in Encyclopedia of Information Ethics and Security, Ed. Marian Quigley, (Hershey, NY: Information Science Reference, 2008), 280; Christopher Wareham, “On the Moral Equality of Artificial Agents” in Moral, Ethical, and Social Dilemmas in the Age of Technology: Theories and Practice, ed. Rocci Luppicini, (Hershey, NY: Information Science Reference, 2013) 71-73.
7 David Bjorklund and Carlos Hernández Blasi, Child and Adolescent Development: An Integrated Approach (Belmont, CA: Wadsworth Cengage Learning, 2012), 610; Michael Newton Savage Girls and Wild Boys: A History of Feral Children (London: Faber and Faber, 2002), 220.
8 Ex Machina.
9 Ibid.; Bjorklund and Blasi, Child and Adolescent Development, 610.
10 Ex Machina; Korb, “Ethics of AI,” 280.
11 Ex Machina.