Can a machine dream?
Since it is the most high-profile example of purported machine dreaming, and since it inspired the name of this blog arc in the first place, I will use Google’s Deep Dream to discuss whether or not a machine can dream.
Deep Dream is a computer program developed by Google in 2014 and released in 2015. At the time, Google was experimenting with image recognition in its search engine, and one of the machine learning techniques it adopted was the neural network, convolutional neural networks (CNNs) in particular. You can create your own Deep Dream-influenced images here. Mine of the Statue of Liberty is below.

Basically, a neural network imitates the biological neural networks of humans and other animals by attributing one or more specific functions to individual neurons, or nodes, in a network. In the case of image recognition, some nodes focus on recognizing specific edges, others on specific shapes, others on colors, and so on. Ultimately, higher-level recognition of “dog” or “motorcycle” occurs with all those nodes working in concert, as they feed off each other’s activity and output and “learn” from researchers which responses are correct and which are incorrect.
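
To make that layered picture a bit more concrete, here is a minimal sketch, written in PyTorch rather than anything Google actually used, of a toy convolutional classifier: early layers respond to simple patterns like edges and color blobs, later layers combine them into parts, and a final layer makes the “dog or motorcycle” call. The layer sizes, number of classes, and the training snippet at the end are all illustrative, not taken from Deep Dream itself.

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Toy CNN illustrating low-level -> high-level feature recognition."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, color blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: simple shapes, textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher level: object parts
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # all nodes "in concert" vote on a label

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# The "learning from correct and incorrect responses" part is ordinary supervised training:
model = TinyImageClassifier()
images = torch.randn(4, 3, 64, 64)      # stand-in for a batch of photos
labels = torch.tensor([0, 3, 3, 7])     # stand-in for the correct answers
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                         # nudges every node toward better answers
```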

But beyond recognizing images and sorting them into categories (helpful for a search engine with massive amounts of data, or for facial recognition of the kind Facebook performs), the researchers discovered that they could run the program in “reverse,” or inside-out: the program could create a representation of what it thought (using the term loosely), or at least predicted, the thing it was recognizing would look like. In other words, after showing Deep Dream millions of pictures of cats, dogs and other things, researchers asked the program to look at an image and tell them what it saw, and of course it saw, and created images of, a bunch of animals in the image.
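
For the curious, here is a rough sketch of that “reverse” trick as it is generally understood: hold the network’s weights fixed and repeatedly nudge the image itself so that a chosen layer’s response gets stronger, which paints whatever that layer is tuned to detect back into the picture. This is not Google’s released DeepDream code; the pretrained GoogLeNet, the choice of the inception4c layer, the step size, and the iteration count are all assumptions for illustration (modern torchvision API).

```python
import torch
from torchvision import models

# Load a pretrained GoogLeNet (the network family DeepDream was built on) and freeze its weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one mid-level layer; the choice of inception4c is illustrative.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output)
)

# Start from an image (random noise here as a stand-in for a real photo).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(20):
    model(image)
    score = activations["target"].norm()   # how strongly does this layer respond?
    score.backward()                       # gradient of that response with respect to the pixels
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)  # gradient ascent on the image
        image.grad.zero_()
        image.clamp_(0, 1)

# `image` has now been nudged toward whatever patterns the chosen layer "expects" to see.
```

The released version adds refinements such as processing the image at several scales, but as I understand it, the core of the technique is this gradient ascent on the input rather than on the network’s weights.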

These pictures have been likened to dreams, although they’re really more akin to nightmares or psychedelic experiences. But if you look at the way Deep Dream works, from the ground-up shape recognition to the top-down, high-level recognition, it’s actually surprisingly similar to at least my conception of how dreams are formed (see my dream journal for more details). When you are in a dream, your lower-level shape and form perception clearly has to operate in order for you to make any sense of the images you are seeing, and those neural networks, if you will, have been trained by years of exposure to the outside world.
Then at a higher level there are, at least in my dreams, certain references to daily activities or plots or cultural references (mostly anime and computer games, go figure) that I have to interpret as such upon seeing them, and they in turn conform to my preconceived cultural or emotional knowledge. It’s a two-way street, much in the way Deep Dream interprets and creates images based on its “incepted” knowledge.
Not to mention the dreams I have about rooms or cityscapes filled with objects or buildings that conform to some emotional or physical memory of such experiences but do not exactly mimic the experiences that created them in the first place. This is similar to the repetitive, “variations on a theme” quality of Deep Dream’s recognition of objects within images. There is an internal alchemy producing these images, a specific set of conditions within the neural network, memories included, that gives rise to them.

But similarity to, or mimicry of, human and animal experiences and functions does not necessarily equate to actual experience and functioning. In the next post, I will explain the conditions that would have to be met for a machine to dream.
