“Man,” I cried, “how ignorant art thou in thy pride of wisdom! Cease; you know not what it is you say.”
-Mary Shelley, Frankenstein.
As a child, did you ever find yourself staring up into the night sky, wondering about the vastness of the universe, the origins of human consciousness, and how a non-human sentience would comprehend reality?
Theories of the human mind, and of its uniqueness on our planet, have been proliferating since philosophers asked some of the first fundamental questions of our existence: ‘What makes us human?’ and, in conjunction with that, ‘How do our brains make us human?’
Over the last decade, the landscape of pop culture has been filled with a range of stories with these questions as their thematic core. Films like Ex Machina and Her, and TV series such as Black Mirror and Westworld, have speculated on these questions by contrasting our human minds with the workings of artificial intelligence: Will the machines that we’ve created ever be capable of free will? Will a computer ever desire, make art, behave at cross purposes to its maker?
Or, as Philip K. Dick – the science-fiction novelist behind the book that would become Blade Runner – once put it: ‘Do Androids Dream of Electric Sheep?’ It’s a question with no easy solution, for either answer would provoke something troubling: moral unease, or a sense of profound isolation.
Alan Turing, one of the fathers of modern computing, came up with a thought experiment that he believed would sidestep the thorny question of whether a machine can truly think. Turing’s test was based on the premise of a text-based, onscreen conversation in which an interrogator (C) exchanges messages with two unseen participants: a human (A) and a machine (B).
This thought experiment has been dubbed the imitation game, because it was designed not to measure the factual accuracy of the computer’s responses, but the ability of the interrogator to distinguish which participant was the human, and which was the machine.
If a machine’s sentience is indistinguishable from a human’s, does that vest it with a soul? What is our responsibility towards an intelligence that we’ve created, one so vastly different from our own?
At the other end of the spectrum stands Princeton University psychologist Julian Jaynes, who traced human consciousness back to its murky, primordial origins with his controversial theory of the bicameral mind. Jaynes speculated that as our brains evolved – and we began to hear the first stirrings of self-awareness in our heads – we did not initially realise that the voice originated within ourselves. We believed it was the voice of a higher being. And so we built temples, scribbled holy books, girded our souls with garb and ritual.
We spoke to God, not realising that we were only ever speaking to ourselves.
Therein, perhaps, lies our fixation with the potential of artificial intelligence, and its proliferation in our pop culture: our fascination with machine sentience is, at heart, an interrogation of our species’ unique self-awareness, and its unique solitude.
For if Jaynes’s theory of the past proves credible, we live in a world with neither gods nor masters. And if Turing’s test holds true, we may soon have to play our own imitation game, one that would have us mimicking divinity… one that our finite minds may not be equipped to handle.