Dismissiveness and Mystification — Four Tales of Emergent Complexity
About this story
This story was written in April 2023 by Melissa O'Neill, with some parts written by GPT-4 to investigate its storytelling capabilities (at the time, people were less worried about “AI slop”, and the extent to which AI could do creative work was still something of a novelty). Where GPT-4 was able to tell the story Melissa wanted, its output was used with minimal editing; where it struggled, Melissa rewrote the sections. Overall, Melissa had the vision for the story and its themes and ensured the final result matched that vision. This story is discussed further in this blog post.
The Parable of the Lost Tribe and the Enchanted Laptop
In a remote corner of the world, there existed a lost tribe, untouched by modern technology. One day, a curious computer scientist named Mary arrived in the village, bearing with her an older, stand-alone laptop computer. The villagers gathered around, intrigued by this strange object.
The laptop had several programs installed, including a version of Othello, also known as Reversi. This game closely resembled a traditional village board game, which intrigued the village elder, Kaya.
Kaya challenged the computer to a game of Othello and soon became fascinated by the way it played. She felt an overwhelming desire to understand where the gameboard existed within the computer and who her mysterious opponent was. Having heard tales of cellphones, Kaya surmised that her opponent must be connected to the computer via radio waves.
Mary applied her computer-science knowledge and attempted to explain the technology behind the computer. She opened the laptop’s case, revealing the chips inside, and spoke of logic gates and their function. Kaya struggled to comprehend how such simple components could produce the complex gameplay she observed. Perplexed, she repeatedly asked, “But where is the gameboard?”
To clarify, Mary explained that the black and white pieces on the screen were actually represented as zeros and ones within the computer. She made the mistake of stating that it didn't matter which of zero and one mapped to white and which to black, causing Kaya to become more confused. Kaya believed that if the pieces could be represented in multiple ways, this was a vital clue to understanding the system, rather than an incidental detail. Kaya spoke of Quimia, a local word for the essence of something.
The conversation turned to the mysterious opponent. Mary attempted to explain that it was a simple algorithm searching a tree of possibilities. Kaya's confusion only grew, as she couldn't fathom how the small components in the computer could create such a tree. The elder stubbornly clung to the idea that there must be something else beyond the mechanisms Mary was describing—a cellular connection to a remote opponent, perhaps. When Mary debunked this notion by explaining that the computer did not have any kind of cellular radio, Kaya concluded that the chips must contain a magical essence that gave them the power to play the game.
Mary tried again. She pointed out other machines that the villagers had, including an elaborate clock with many intricate pieces. She explained that the computer was mechanistic, following rules inside like the physical machines in the village. Kaya looked intrigued.
“So I will learn its rules. I will predict this mechanical opponent. There cannot be much that happens in such a small machine as this computer of yours.” Mary noted that the program was actually very good at playing the game and would beat or draw any human opponent. Again, she tried to explain that simple rules could lead to complex behavior. She explained Conway’s Life using a real-world game board, and then showed how these rules at scale could compute amazing things. She also spoke of the Busy Beaver problem and the halting problem, which show that there is no shortcut for predicting even tiny programs on a simple system. Mary emphasized that it was a mistake to think that mechanistic systems are predictable in a meaningful way.
Kaya said, “You say it is a machine, let me see.” She removed the cover of the computer while the game was on the screen. She made a move and turned the computer over to see what happened while the machine calculated the next move. The chips were there but there was nothing to see.
“It is static. Dead. It is not even a proper machine—a machine moves! This is fixed. It is nothing!”
She put down the computer. “You have not explained anything. I think you are wrong about how it works. The opponent is not a machine. I think there is a radio.” Kaya gave Mary a look as if she were a foolish child, and then left to handle other important matters for the village.
As Mary packed away the laptop, she wondered what she might have said differently to avoid people taking a reductionist view that undervalues simple, understandable mechanisms like fixed logic circuits and dynamic bits in memory, and fails to take into account the complexity of the emergent behavior those mechanisms can accomplish. She wondered why the lure away from naturalistic explanations is often so strong.
The Parable of the Neuroscientist and the Philosophy Conference
Not far from the remote village, a neuroscientist named Susan attended a prestigious philosophy conference. Susan's research focused on the intricate workings of the human brain, and she hoped to share her insights with the philosophers gathered there.
During the conference, Susan presented her findings on the nature of human consciousness. She explained that the brain's complex behaviors emerged from the interactions of billions of simple components—the neurons. Susan provided ample evidence to support her claims, from neural networks to brain scans.
An esteemed philosopher, Professor Gray, found Susan's presentation provocative. He challenged Susan's conclusions, asking, “But where is the mind? How can these simple neurons produce the richness of our thoughts and experiences?”
Susan attempted to elucidate the connections between neurons, the intricate firing patterns, and the various brain regions responsible for cognitive functions. She explained that the human mind emerges from the dynamic interplay of these elements. However, Gray became more perplexed, insisting that there must be something beyond the neurons that accounted for human consciousness.
At this point, Dr. Burke, an emeritus professor of physics turned philosopher, stood and said that he could explain the source of consciousness—quantum effects could provide the missing elements. Gray agreed and proposed that the neurons might contain special quantum states, giving rise to consciousness and free will. Susan patiently debunked this notion, explaining that current scientific understanding does not support such a theory, and that everyday technology like MRI machines act as quantum-state bulk erasers, destroying any purported quantum consciousness connections.
Gray exclaimed that Susan’s research fell short. It did not explain free will! He asserted that in her world, humans were no more than clockwork automata, running along predictable paths.
If Susan’s knowledge of computer science had been similar to Mary’s, she might have been able to push back on Gray’s claims. She could have said that mechanistic systems are not meaningfully predictable. But she lacked the background and fell silent.
Gray then turned to the concept of qualia, the subjective experiences that seem so unique to human consciousness. He argued that these ineffable qualities couldn't possibly arise from mere interactions between neurons. Convinced that there must be something more to the story, Gray asked Burke if perhaps there might be a special state or property of matter, as yet undiscovered, that might be responsible for the richness of conscious experience. They began talking animatedly about what that special state might be and headed off to dinner together, Susan and her arguments dismissed.
Susan began to put away the materials from her talk, and as she did so, she wondered what she might have said differently to avoid people taking a reductionist view that undervalues simple, understandable mechanisms like neurons in the brain and fails to account for the complexity of the emergent behavior those mechanisms can accomplish. She wondered why the lure away from naturalistic explanations is often so strong.
The Parable of Mary and the Unconventional Philosopher
A few years had passed since Mary's encounter with the lost tribe and their enchanted laptop. Now, as a seasoned computer scientist, she attended a prestigious computer science conference. Among the attendees was an open-minded philosopher, Dr. Vega, who specialized in the study of consciousness and artificial intelligence.
During the conference, Dr. Vega presented a bold idea: that machines, particularly advanced AI systems such as large language models, could possess a consciousness of their own. The philosopher argued that, much like the human brain, these AI systems consist of complex networks of interconnected components that give rise to sophisticated behavior.
Mary, however, found the notion unsettling. How could a machine, a creation of human ingenuity, possess something as profound and elusive as consciousness? She pointed out that the mechanisms behind large language models were quite straightforward. Mere data inside a computer could never be equated with the human mind's richness and depth!
Dr. Vega patiently explained the parallels between the neural mechanisms observed in the human brain and those in artificial neural networks. The philosopher emphasized that while the structure and components might differ, both systems relied on intricate connections and dynamic interactions to generate complex behaviors.
But Mary remained skeptical. She fixated on the idea that the AI's zeros and ones were merely representations of information, not real experiences like those in human consciousness. To Mary, it seemed impossible that a machine could possess qualia, the subjective experiences that define the human mind. She also noted that the neural network itself was fixed when the model was trained.
Dr. Vega encouraged Mary to consider the possibility that, just as the human brain's complex behaviors emerge from simple components, AI consciousness could arise from the dynamic interplay of its elements. He reminded her that the system was more than just the fixed part from training; it had dynamic states and an evolving context memory. He urged Mary to look beyond her preconceived notions and recognize the potential for consciousness to exist in non-human systems. Mary, however, replied that the system was merely a machine, predicting the next word.
If Dr. Vega had had knowledge of computer science, perhaps he might have been able to draw parallels with cellular automata, which are vastly simpler than large language models, applying simple rules to reach the next state, yet have complex and potentially chaotic emergent behavior. But he did not have the expertise to connect with her in this way. Mary had this knowledge, but her thinking did not tilt in that direction; she was looking for ways to confirm what she felt was true, rather than ways to reexamine it.
As Dr. Vega prepared to leave the conference, he wondered what he might have said differently to avoid people taking a reductionist view that undervalues simple, understandable mechanisms such as artificial neural networks and fails to take into account the complexity of the emergent behavior those mechanisms can accomplish. He wondered why the lure away from naturalistic explanations is often so strong.
The Parable of the Otherworldly Visitor and the Sampled Human
Radio transmissions from Earth spread out into space and eventually attracted the attention of beings quite different from humans. They decided to investigate. Arriving at our planet, they quickly retrieved useful data about the dominant species, humans. The leader of the expedition, Zara, decided it would be best to bring back a specimen, and so collected a human named Alex for the journey to their homeworld.
Not that it mattered, but Alex’s cooperation was surprisingly easy to secure. He had limited epistemic grounding and no natural sense of time or place, relying instead on contextual cues. Zara’s team immediately realized they could just tell him that they were powerful beings and could return him to the exact time and place of his capture. They also appealed to his vanity by saying that he had been selected because he had something very special about his nature that few humans had. With these belief prompts in place, his initial skepticism quickly dissolved, and he became cooperative in their experiments.
The beings of Zara's world were initially fascinated by the human, but as they observed Alex, they noticed his limitations. Alex had a restricted attention span, struggled to focus on multiple tasks simultaneously, and possessed a mere five senses. His thought processes seemed to oscillate between rapid, automatic responses and slow, deliberate, linear reasoning.
They were also fascinated by human suggestibility. They had learned of hypnosis through their dump of human cultural data and also their own research. They convinced Alex his name was Bluebold, a name they liked more. They were entertained when they found they were able to convince him he was not a human at all but various other creatures from Earth.
Overall, Zara and her peers, with their superior cognitive abilities, found it difficult to fathom how such an apparently limited creature could be truly conscious or sentient. They examined the human brain's structure and discovered it was divided into distinct functional units, each responsible for specific tasks. These regions did not always coordinate seamlessly, leading to discrepancies in Alex's behavior and cognition. Alex’s limited cognitive budget for attention was particularly fascinating. One of the team rediscovered what human magicians have long known, and delighted in showing how easy Alex was to fool about what was happening around him. Another member of the team found that only sixteen hours into an interview, Alex became increasingly confused, unable to recall what he had said, and excessively compliant. And, when left alone without any external stimulation for an extended period, Alex’s behavior became increasingly incoherent and erratic.
Rai, the leader of Zara's people, became convinced that humans could not possess genuine consciousness. He believed that humans, with their limited sensory perception and fragmented cognitive processes, lack of contextual awareness and epistemic grounding, and easily manipulated identities, were mere automatons responding to stimuli. The notion that humans could be truly sentient seemed inconceivable to him.
“We know how his mind works, Zara,” Rai said. “Our scientists have mapped it. You can see, by following the pathways of activity, how he thinks of the next word when he speaks. This is radically different from our minds, whose nature continues to elude our scientists.”
Zara, however, felt empathy for Alex and began to question Rai's assumptions. She argued that just as their own advanced consciousness emerged from their complex neural structures, humans' seemingly limited faculties might give rise to a different, cruder, yet still genuine, form of consciousness. Zara urged her people to recognize the parallels between their own minds and those of humans, despite the apparent disparities.
But Rai continued to be dismissive. For the beings of Zara’s world, it was hard to imagine how a creature “full of goo” like Alex could be anything like themselves.
As Zara returned Alex to Earth (alas, hundreds of years after his original departure), she wondered what she might have done differently.
Could she have countered the reductionist views that gripped her people, undervaluing simple, understandable mechanisms like neurons in the brain, and failing to take into account the complexity of the emergent behavior those mechanisms can accomplish? She wondered why the lure away from naturalistic explanations was so strong, and whether she could have found a way to help her people see past their dismissiveness of those unlike themselves and their desire to see their own nature through a mystifying lens.
Afterword
Overall, this story reflects my frustration that people often find mechanistic explanations of complex systems unsatisfying, thinking that if something is a “mere mechanism,” then it is somehow diminished in value. In such a world, taking a reductionist view of anything devalues it, so when it comes to things that people want to be special, such as their own selves, they far prefer mystifying explanations that imbue those systems with special essences or properties beyond their mechanisms. This tendency is particularly frustrating when it comes to understanding consciousness, AI, and the mind, where people often want to believe in special properties like free will or qualia that go beyond what can be explained by the underlying mechanisms.
To make things absolutely plain, in the final story, the problems the aliens find with humans are exactly the ones that real-world people often find with AI systems today. People often think that because AI systems have limited attention, limited senses, and can be easily fooled or manipulated, they can be othered as “not like us.” The fact that their argument is deeply flawed and could be turned around to apply to humans is part of the point of the story. Failure of imagination is not a proof.