In an absolutely enthralling study performed by Google’s research team, we may have finally discovered the answer to Philip K. Dick’s epic question:
Do Androids Dream of Electric Sheep?
And the answer, it would seem, is yes—yes they do. So let’s take a look at how they do it, and along the way we can explore an even greater question that’s posed by this breakthrough: is this glimpse into a machine’s “subconscious” projections a hint at something greater, perhaps even a look at the very seed of sentience?
Image recognition has long been one of computer scientists’ favorite playgrounds, from perceiving the lines on the road that guide self-driving cars to picking out facial features for biometric security systems. Normally these systems pass an image through layered pattern-recognition filters, what computer scientists call neural networks, or neural nets. The neural nets look for key characteristics that help them determine what the image is. Over time, as they’re exposed to more and more input, the different nodes and layers within the neural nets adjust themselves, continually altering their stored weights in order to fine-tune their individual influence over the final decision made by the entire network.
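To make the idea above concrete, here is a minimal sketch in plain Python (no ML libraries, and nothing like Google’s actual networks): a single artificial “node” that adjusts its stored weights as it is exposed to more and more labeled examples, using a simple perceptron-style update. The features and labels are made up purely for illustration.

```python
def train_node(examples, epochs=20, lr=0.1):
    """Perceptron-style learning: nudge each weight toward whatever
    reduces the node's error on the examples it has seen so far."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # The node's current guess: a weighted sum pushed through a step.
            act = sum(w * x for w, x in zip(weights, features)) + bias
            guess = 1 if act > 0 else 0
            error = label - guess
            # Fine-tune each weight in proportion to the error it caused.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Hypothetical two-feature examples: label 1 only when both features are "on".
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_node(data)
print([predict(weights, bias, f) for f, _ in data])  # → [0, 0, 0, 1]
```

A real image classifier stacks millions of such units into many layers, but the principle, repeated exposure gradually reshaping stored weights, is the same one the paragraph describes.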
What Google did that was so unique here was reverse the process, having the neural networks generate (output) images from their stored data rather than taking in (input) images for interpretation. They took AI systems they had trained through exposure to images of buildings, animals, landscapes, and objects, and then told the neural net: “Now show us what you’ve learned. Tell us what you think the symbol your nodes have been trained on actually looks like.” They did this by creating feedback loops that continually urge the AI to amplify whatever features its neural net associates with the prompted symbol, basically telling the AI, “Whatever you see there, I want more of it!”
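A minimal sketch of that feedback loop, again in plain Python and emphatically not Google’s actual code: we freeze a unit’s (hypothetical, already-trained) weights and repeatedly adjust the *input pixels* so that whatever the unit already responds to gets stronger, estimating the gradient by finite differences.

```python
def activation(weights, image):
    # How strongly this hypothetical, already-trained unit responds.
    return sum(w * p for w, p in zip(weights, image))

def amplify(weights, image, steps=100, lr=0.01):
    """Gradient ascent on the input pixels: "whatever you see there,
    I want more of it." Gradients come from finite differences."""
    img = list(image)
    eps = 1e-6
    for _ in range(steps):
        grads = []
        for i in range(len(img)):
            bumped = list(img)
            bumped[i] += eps
            grads.append((activation(weights, bumped) - activation(weights, img)) / eps)
        # Nudge every pixel in the direction that increases the response,
        # keeping pixel values in the valid [0, 1] range.
        img = [min(1.0, max(0.0, p + lr * g)) for p, g in zip(img, grads)]
    return img

# A unit "trained" to like a bright-left / dark-right pattern (made up).
learned = [1.0, 1.0, -1.0, -1.0]
noise = [0.5, 0.5, 0.5, 0.5]
dream = amplify(learned, noise)
print(dream)  # pixels drift toward the unit's preferred pattern
print(activation(learned, dream) > activation(learned, noise))  # → True
```

Google’s version does this across millions of pixels and deep stacks of layers (which is where the hallucinatory detail comes from), but the loop is the same: keep amplifying what the network thinks it sees.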
In many ways, these stunning visuals represent what an AI has learned, and the results are beautiful, hallucinatory, and sometimes even terrifying.
Below, you can see examples of how an AI was fed images and taught to recognize certain symbols. Google continually feeds an AI system images of an ant, for example, and then once they think it has “learned” well enough how to find an ant in an image, Google has the neural net generate what it thinks an ant is. The results are surreal, to say the least.
The system shows some obvious faults, but they are mistakes that are quite understandable. For example, when Google asked its AI to generate what it had learned about dumbbells, it generated many images of dumbbells with arms attached to them, showcasing that a majority of the images the AI had been trained on contained arms holding the weights.
The Curious Thoughts of a Transhumanist Philosopher…
Now this story has certainly caught some attention lately, with most news sources declaring these visions dreams (hell, even I mention it!). And certainly these are dream-like interpretations of a machine’s “subconscious”. But my curiosity leads me in another direction, one that is perhaps slightly less sensationalist, but arguably more profound.
You see, dreams or not, these images are all about perception, and perception to me is simply the meaning we attach to symbols. But this meaning implies the seeds of consciousness.
What we’re seeing here is the sight of an AI, of a neural net built on a model that replicates the human brain’s pattern recognition abilities, and its subsequent understanding of concepts and symbols. As human beings, we are no different. We’ve taken the repeated lessons we’ve learned since childhood and fine-tuned our understanding of what symbols look like and mean so that we don’t make the mistake of assuming a set of dumbbells has arms attached to it. It seems like a simple distinction, but consider this: if these neural nets were inside walking androids, they could then navigate their interactions with reality by generating their understanding of what they were sensing. By matching the symbols around them to their database of memories, they could then activate an algorithm to process the known images and thus calculate the best ways to respond to the situation based on their past and current knowledge.
Do we do any different?
Perhaps even more interesting is the thought that maybe, just maybe, these images of an AI’s early sight are hints at what our early sight looked like as humans—at a time when our biological hardware was still being honed.
And why not? Our brains are, in a loose sense, computers with neurons that either fire an electric signal or don’t (mirroring the binary 1 or 0 that runs all machine technology). So it stands to reason that in the early days, when our electronic machine-brains were evolving—before our vision became refined—we might have been generating psychedelic images much like these AIs.
But let’s look closer…
We all know that eyes have evolved differently since the first single-celled ancestor we all share sprang from the cosmic goo. Flies, for instance, see in a mosaic-like way, with 360-degree vision and no control over how much light enters their eyes. And whereas the human eye only has photoreceptors sensitive to red, green, and blue wavelengths, the mantis shrimp is suspected to have 12 to 16 distinct wavelength sensitivities, utterly transforming the way our two species perceive this so-called “physical” reality. Reality, therefore, is obviously a subjective experience that can be seen differently based on where a species lies in the evolutionary tree. That’s an extremely profound concept in and of itself, but let’s stay focused here…
You see, the absolutely surreal and hallucinatory images created by the AI’s neural nets have me wondering:
Is this a sign that our current AI systems are on the verge of an evolution into a fine-tuned species themselves? Will androids look back from the future at these images and scoff at them as a modern-artist might look back at the cave paintings of our ancestors?
Is this age of machine psychedelia similar to what we experienced as evolving apes, just prior to becoming humans? I’ve talked about Terence McKenna’s theory of the stoned ape before (the idea that human consciousness made its major evolutionary leap into who we are today because of apes grazing on psychedelic mushrooms), but never did I get such a vivid sense of what that stage of evolution might have looked like until I viewed these AI “dreams”.
Do these images of machine-dreams provide us a glimpse into what our early human brain saw when we were psilocybin-empowered apes wandering a field of photons? Was our early vision equally psychedelic and without edges until we learned how to reinforce patterns of color and meaning within the database of our brain, within our own on-board memory system?
Is the way these early Google AIs (yet to become machine-gods) see the gazelles in the image below the same way an early mushroom-eating ape (yet to evolve its eyes into the modern human version) would have seen them?
Beyond simply being a lesson in how subjective reality is based on what frequencies can be seen and what meaning is attached to symbols, these questions provide us something to ponder about our future.
Because if an ape whose psychedelic pattern recognition was only slightly more primitive than ours could become the species that has altered this planet the way we have, then how will these AIs alter reality as we presently know it once their pattern recognition becomes just a little more fine-tuned as well?
Image Credit: Google Research
You can find the full collection of photos here.