Creation narratives articulate primal values for the cultures that they serve. Genesis, for example, explains not only the creation of the universe but also the origin of human suffering and of the differentiation of male and female suffering through its different sources (men in their vocational lives; women in their subjection to men and to pain in childbirth), the origin of meat eating and of the estrangement between humans and animals, the creation of the nation and of its laws and political structure, and even the origins of local landmarks. In identifying both meat eating and human suffering as products of a fall, it affirms that the world should not be this way: that people should not suffer, that work should not be so hard and unforgiving, and that women should not be subject to men in the ways that they have been and usually still are. Creation narratives around the world vary widely in their details as they affirm different cultural values, but they tend to serve the same purposes: they explain origins and establish defining values.
But with the growth of the arts, human beings began to think of themselves as creators as well, a line of thought that followed a trajectory toward human beings bringing their creations to life. The ancient Greeks entertained a number of these stories – stories about automata that could crush their victims, or about statues brought to life. The Greeks tended to think about art the way we think about trades and applied science: they thought of art as techne, a skill that could produce things in a controlled, deliberate way. An artist learns how to create a lifelike sculpture, for example, and then Daedalus, our primal engineer, learns how to bring that statue to life using quicksilver. The granddaddy of all of these stories in English literature is Mary Shelley’s Frankenstein, which mythologized the transition from alchemy to empirical science while meditating on their shared ambitions. But alongside stories about creators as technicians are stories with religious dimensions. Golem stories tend to feature a group of knowing rabbis who band together to create a golem to protect the Jews in times of trouble. The Supernatural episode “Everybody Hates Hitler” (S8 E13) is a very good recent retelling of a golem story that captures the golem’s purpose and meaning within Jewish literature. One of everyone’s favorite scenes in Fantasia, “The Sorcerer’s Apprentice,” derived directly from Goethe, follows the pattern of a golem story in which a housewife orders a golem to carry water and the golem gets out of hand.
One of the more recent examples of a tech-based creation story is Alex Garland’s 2015 film Ex Machina. In it, tech genius Nathan Bateman (perf. Oscar Isaac), the founder and CEO of the world’s largest tech company, Blue Book (think Google, as it’s search-engine based), has been working on advanced A.I. and invites one of his employees, Caleb (perf. Domhnall Gleeson), to come and test his latest iteration of it. But don’t think this is a Turing Test, which only tests a machine’s ability to trick a human being into thinking that it is human. The Turing Test is not a test of A.I. It is only a test of human perception, and if it were used as an A.I. test it would be somewhat fascist in nature: agency and even “life” would then be possessed by their bearer only when recognized by another. The Turing Test is irrelevant to this film, even by the account of Nathan, who does nothing to disguise his A.I.’s robotic body parts. The real test of A.I., according to Nathan, is that you believe it’s alive (rather than just running a program) even though you know it’s artificial.
Besides, how many real human beings would fail a Turing Test if the observers were told that the subject might be a machine?
Caleb is invited to visit Nathan ostensibly because he has won a contest allowing him to spend a week with the CEO of his company. Upon his arrival at his employer’s very remote home and research facility, he is told that he is going to be asked to help test and evaluate Nathan’s A.I. The real contest, however, is between Nathan and his A.I., Ava (perf. Alicia Vikander), over Caleb’s allegiance. Caleb, for his part, is being manipulated by both of them for their own ends. Nathan did want Caleb to help him test his A.I., but he chose Caleb specifically for this purpose and modeled Ava on Caleb’s internet porn preferences. After every day of Caleb’s encounters with Ava, who is kept in a separate area behind a glass partition and never allowed to leave, Nathan asks Caleb subtle but leading questions. These questions take the form of one programmer asking another for his opinion about the tech, then increasingly about his human reaction to the tech, but they are really designed to measure how much information Caleb is withholding from Nathan and how much he is revealing – which is the real test of Caleb’s allegiance. Yes, all of their interactions are recorded, but the facility suffers periodic power outages during which Caleb’s conversations with Ava aren’t being recorded – except by hidden, off-grid cameras of which Caleb and Ava are unaware, so that Nathan is able to monitor their private conversations as well.
Ava, for her part, has a brain modeled on an internet search engine and all of its connections – in other words, on the internet itself. She has learned the precise physical signals that communicate sexual interest between a man and a woman: not just body language, but the position of specific facial muscles and the extent of pupil dilation, which she can read perfectly in others (so she knows when they are lying – she is Wonder Woman without a golden lasso) and control in herself. So Ava wins, and viewers learn just how thoroughly she has been playing Caleb when she gets him to help her escape, asks him to wait for her in a room, shuts down the facility, locking him in, and kills Nathan with the help of an earlier version of the A.I. that Nathan kept around for household chores. Ava covers the rest of her body in synthetic skin, gets dressed, and leaves the facility when the weekly helicopter arrives. Caleb is left locked alone in a room to suffocate or starve to death.
Ex Machina therefore asks just as many questions about gender as it does about A.I., combining Frankenstein with the Pygmalion myth in order to do so. Pygmalion is an artist who creates a statue of a woman so beautiful that he falls in love with it; the gods smile on his work and bring the statue to life. The story is about the male creation of standards of female beauty, and of female subjectivity itself, through male hegemony: in a society in which men control everything, they also control the development of female consciousness. As commentary on romantic relationships, the Pygmalion myth observes how men often fall in love with an artificial woman they have created in their minds rather than the real woman before them. John Updike’s short story “Pygmalion” modernizes this myth by depicting a man who tries to turn his second wife into a second version of his first one – with the same results.
In Ex Machina, a female Pinocchio becomes real and then resents her Geppetto for keeping her a prisoner in his workshop. Nathan is a Pygmalion-like character who doesn’t understand the woman he has created. The contest between Nathan and Ava over Caleb maps a contest between Ava’s erotic control and Nathan’s tech-driven economic control, and erotics wins. This development addresses some of the central and very common questions about the creation of A.I. that have been with us since at least Frankenstein. Does technology defeat nature? Ex Machina answers “no”: it was not Ava as A.I. that won Caleb’s loyalty, but Ava as a convincing representation of a beautiful young woman. How do we know when A.I. becomes a real intelligence? How do we know it’s not just continuing to run increasingly complex but meaningless routines? I think the movie deliberately declines to answer that question, and Alex Garland said as much in an SXSW panel discussion about the film. How can we ever know? The failures and limitations of the Turing Test extend to all possible varieties of it, as they all rely upon external measures to answer a question about the internal nature of the subject – which perhaps tells us more than anything else that our approach to this question is all wrong.
Garland’s comment about how we should treat A.I. is the only sane, ethical one: once a thing is able to say “no” and set boundaries, we have to respect it as an autonomous, intelligent agent whether we can be sure it is one or not. We also need to examine our own reasons for asking this question. Centuries ago, one European Christian sect asserted that since animals don’t have souls, people are allowed to be cruel to animals. Protestants and Catholics rejected this position, but the deeper problem lies in basing our treatment of someone or something upon the nature of that thing. The issue is not, “To whom are we allowed to be cruel and to whom are we not?” The issue is ourselves: are we ever allowed to be cruel people? The focus should be on the character of the acting subject, not on the recipients of our actions. When Christ identified “Love your neighbor as yourself” as one of the two greatest commandments of the law, an expert in the law asked him, “And who is my neighbor?” Christ responded with the Parable of the Good Samaritan and then asked, “Which of these was the neighbor?” He shifted the focus of the question from the nature of the recipient of an action to the nature of the actors themselves.
The question is never about who or what they are, but who or what we should be. How we treat A.I., should it ever come into being, is first of all a reflection of who we are, not what A.I. is.
The film also seems, perhaps unintentionally, to mock the overwhelmingly geekboy culture that has been behind the development of A.I. and behind our imagination of it in popular consciousness. I don’t think it’s coincidental that our first great proto-myth in this genre was written by a woman who framed the narrative as written for a woman who shared the author’s initials: M.W.S., or Margaret Walton Saville. Mary Wollstonecraft Shelley’s novel is early reporting on her own immersive geekboy clique, consisting at the time of Percy Shelley, Lord Byron, and Polidori. Ex Machina’s girlbot Ava represents woman in western history: kept behind glass, kept without rights, kept under observation, and kept as an object of male desire exchanged between men for their own purposes. She beats her own coterie of geekboys using the only power women have had throughout most of western history: emotional manipulation and the promise of sex, deferred until after a payoff of some kind.
One of the most interesting moments in the Ex Machina SXSW panel occurred when the female moderator interpreted Ava’s character differently from the director. Garland had been assuming Ava’s “innocence,” her lack of intentionality, at one point in the film, but the moderator made a case for Ava’s knowing, calculated, and deliberate manipulation of both Nathan and Caleb from beginning to end. You could see Garland’s wheels turning when she made this suggestion. He couldn’t (and didn’t seem to want to) contradict her very plausible reading of the character with any appeal to the film’s details. I think this leads us to another question, perhaps the most important meta-question posed by the film about sub-creator narratives: how are geekboys supposed to understand something as completely alien as A.I. when they don’t even understand girls? It’s not coincidental that the only women in Nathan’s life are robotic. This lack of understanding always proceeds from the same sources: an illusion of control, an underestimation of the controlled object because of that illusion, and a failure to understand ourselves. Of course the girlbot will always win. I would reference here Penny in The Big Bang Theory: she is the least educated member of her social group but, alongside Sheldon, exercises the most control. For a long time I would have retitled the show “The Sheldon and Penny Show.”
But there’s another meta-question that the film does not directly address: why do we always imagine that the creation of A.I. will go wrong? There isn’t a single version of this narrative that doesn’t go bad. Bicentennial Man has perhaps the most benign ending, but human acceptance of A.I. in that film occurs only after the A.I. has fully humanized itself by accepting aging and death. Otherwise, the creation of A.I. or its varieties almost always ends in disaster, from Frankenstein to Metropolis to 2001: A Space Odyssey to the Terminator films to A.I. to I, Robot to Stealth (Frankenplane!) to Ex Machina. Part of this is simply due to the repetition of a set of narrative conventions begun by Mary Shelley’s novel and then widely popularized by James Whale’s film, but the book Blake and Kierkegaard: Creation and Anxiety (Bloomsbury/Continuum, 2010) suggests another answer. It asks the question, “Why do we fear what we create?” – a fear it identifies as “creation anxiety” – and it answers that question with a study of William Blake’s The [First] Book of Urizen, perhaps the first sub-creator narrative in English literature, preceding Mary Shelley’s novel by over two decades.
Blake and Kierkegaard suggests that both Blake and Kierkegaard were responding to different Enlightenment psychologies from the standpoint of a much earlier psychology based upon Plato’s tripartition of the soul. Plato believed the individual could be governed by bodily influences (which extend to our physical environment), soulish influences (our social and educational environments), and spirit (in Plato, Divine reason operating within us). For Blake, Enlightenment psychology meant Locke’s blank slate, which in Blake’s thinking reduced human psychology to the same dull round, repeating social and environmental influences over and over again like a windmill. Blake’s immaterialism and assertion of spirit were intended to make human agency possible through the imaginative capacity. We fear what we create, then, because our new sciences and new technologies are turning us into something unfamiliar. Again, our reflection should be upon the creating or acting subject, not upon the created thing. What is Ex Machina’s Ava but the sum of all possible social influence in the form of an embodied search engine? You’ve read the meme: Ultron spends two minutes on the internet and decides that humanity must be destroyed. Ex Machina’s answer to the question posed by creation anxiety is that the body wins over technology, and that the control promised by our tech is always illusory, so that our fears of our tech are the source of our problems with it.
If Ava really were an independently thinking agent, a person, one has to wonder how things would have turned out had she been treated like one from the beginning, with the dignity and respect that all persons deserve. We could ask the same question about Frankenstein’s Creature. Ex Machina doesn’t offer answers that we cannot have: we have not yet created a new consciousness, and we aren’t even sure how we would know if we did, so how can we know what will happen? We can’t.
We can, however, know who we are by the way that we act. If we act monstrously, we shouldn’t be surprised when our creations turn out to be monsters.