“Be a dreamer. If you don’t know how to dream, you are dead!”
And if you still can dream, you might soon be able to ask your machine to do it for you.
As I write this article, sitting in an air-conditioned office of the Multimedia and Robotics Group at Tata Innovation Labs, Delhi, working on the cognitive theory of mind while still thinking about last night’s movie, “Inception”, a vague thought crosses my mind.
“Can Machines Dream?”
Consciousness of mind has, to some extent, been replicated by software designers. Before we jump into how the next-gen robots could be created, let us first understand how the consciousness and thinking cycle of our brain can be modeled.
We can think of ourselves as agents driven by a particular motive; the driver of this agent is our mind. An agent follows a cycle of events: first it senses the environment around it, then cognition helps it take decisions that alter that environment. The decisions are taken in pursuit of its agenda. For example, if I want to pick up a cup, I would see where the cup is (sensory input through the eyes) and pick it up (action taken; the environment now has no cup on the table, i.e., it has been altered).
This cycle is endless, and the method of deciding what to do next is termed cognition.
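The sense–cognize–act loop described above can be sketched as a minimal agent. This is purely an illustrative toy: the cup scenario, the dict-based environment, and the function names are all assumptions made for the example, not part of any real cognitive architecture.

```python
# Minimal sketch of the sense -> cognize -> act cycle.
# The environment is a simple dict; the agent's motive is to hold the cup.

def sense(environment):
    """Sensory input: observe where the cup is."""
    return environment.get("cup_location")

def cognize(percept, motive):
    """Decide on an action in pursuit of the motive."""
    if motive == "hold cup" and percept is not None:
        return ("pick_up", percept)
    return ("wait", None)

def act(action, environment):
    """Execute the action, altering the environment."""
    verb, _target = action
    if verb == "pick_up":
        environment["cup_location"] = "hand"
    return environment

env = {"cup_location": "table"}
percept = sense(env)                   # see where the cup is
action = cognize(percept, "hold cup")  # decide what to do
env = act(action, env)                 # environment altered: cup now in hand
print(env["cup_location"])             # hand
```

Running the three steps in a loop, rather than once, gives the endless cycle the text describes.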
The term cognition is too broad. We don’t only sense what’s around us; we perceive it too. We understand what objects are in the scene, and our mind extracts features from it. We know what we are seeing.
So we can split “Cognition” into a perception–cognition model, which looks like this.
The way our cognition acts on the environment can be split too. The way we execute our decisions follows certain procedures. Lifting a cup of tea requires you to lean down, hold the handle, curl in your fingers, and apply just enough pressure to keep it from toppling. All of this is stored in the “Procedural Memory” of our mind. As we keep repeating things, we reinforce our knowledge of them and develop our skill set. In a way, we are learning how to execute procedures. So the diagram now looks like this.
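The idea that repetition reinforces procedural memory can be sketched as a skill store whose proficiency grows with practice. The update rule and the learning rate below are illustrative assumptions for the sketch, not a claim about how the brain actually learns.

```python
# Toy procedural memory: each skill is a stored procedure whose
# proficiency is reinforced every time it is executed.

class ProceduralMemory:
    def __init__(self):
        self.skills = {}  # skill name -> proficiency in [0, 1]

    def execute(self, skill, learning_rate=0.2):
        """Run a procedure and reinforce it: practice makes perfect."""
        level = self.skills.get(skill, 0.0)
        # Move proficiency a fraction of the way toward mastery (1.0).
        self.skills[skill] = level + learning_rate * (1.0 - level)
        return self.skills[skill]

pm = ProceduralMemory()
for _ in range(10):
    pm.execute("lift cup")  # lean down, grip handle, apply pressure...
print(round(pm.skills["lift cup"], 2))  # 0.89 -- skill grows with repetition
```

Each repetition closes a fixed fraction of the remaining gap to mastery, so proficiency rises quickly at first and then plateaus, which loosely mirrors how skill acquisition is usually described.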
There is more that our mind does. It relates the current situation to what happened in similar situations in the past and how we reacted then. Before taking decisions, we gather a lot of related information from our past. For example, if we wish to know where the car is parked, our mind automatically recollects images of the car sitting silently in the garage two blocks away. These images flash from a part of memory called “Episodic Memory”. It contains the what, the when, and the where of any event happening around you.
This kind of memory decays over time. I doubt even 1% of people reading this remember what they had for breakfast a week ago, though we are far more likely to answer the same question about today. But episodic memory doesn’t always decay; some parts of it stay for a lifetime. We never forget the “what” of certain events even though the “when” and the “where” may have been lost. This consolidated, decay-resistant part is termed “Declarative Memory”.
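This decay-with-consolidation behavior (last week’s breakfast fades, but the “what” of important events persists) can be sketched as exponential decay with a consolidation threshold. The half-life and threshold values here are made-up illustrative numbers, not figures from memory research.

```python
import math

# Toy episodic store: an episode's strength decays over time, but
# sufficiently important episodes consolidate into declarative memory
# and stop decaying.

HALF_LIFE_DAYS = 2.0   # assumed decay rate for ordinary episodes
CONSOLIDATION = 0.8    # assumed importance threshold for lifetime retention

def strength(initial, importance, days_elapsed):
    """Remaining memory strength after days_elapsed days."""
    if importance >= CONSOLIDATION:
        return initial  # consolidated: the "what" survives indefinitely
    return initial * math.exp(-math.log(2) * days_elapsed / HALF_LIFE_DAYS)

print(round(strength(1.0, 0.3, 7), 3))  # last week's breakfast: 0.088
print(strength(1.0, 0.9, 7))            # an important event: 1.0
```

With a two-day half-life, an ordinary episode is down to under 9% strength after a week, while anything above the consolidation threshold is retained unchanged.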
When we recall certain things, do we recall everything about that event? Obviously not! We recollect the facts relevant to the current scenario. All the facts surfacing from episodic memory fight for “Attention” in your brain, and the ones winning the fight get to come to consciousness. The winner is determined by the relevance, importance, and urgency of the facts. These winning facts, coupled with the current percepts, determine what action one would take. The model finally looks like this.
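The “fight for attention” can be sketched as a competition in which each recalled fact is scored on relevance, importance, and urgency, and the highest-scoring one reaches consciousness. The simple additive scoring and the example numbers are my assumptions for the illustration.

```python
# Toy attention mechanism: candidate memories compete, and the one with
# the highest combined relevance + importance + urgency wins.

def attention(candidates):
    """Return the candidate that wins the competition for consciousness."""
    def salience(c):
        return c["relevance"] + c["importance"] + c["urgency"]
    return max(candidates, key=salience)

memories = [
    {"fact": "car parked two blocks away", "relevance": 0.9, "importance": 0.6, "urgency": 0.7},
    {"fact": "breakfast last Tuesday",     "relevance": 0.1, "importance": 0.1, "urgency": 0.0},
    {"fact": "meeting in five minutes",    "relevance": 0.5, "importance": 0.8, "urgency": 1.0},
]
winner = attention(memories)
print(winner["fact"])  # meeting in five minutes
```

Only the winner comes to consciousness; the losing facts (like last Tuesday’s breakfast) never surface, which is exactly the selectivity the paragraph describes.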
As our mind follows this cognitive cycle endlessly, it also learns a lot of things. We identify new objects (Perceptual Learning), we acquire new skills (Procedural Learning), we learn to bring better contexts from the past (Episodic Learning), and more relevant coalitions learn to win and come to consciousness (Attentional Learning).
The cognitive cycle shown above is a simplistic representation of how the mind works. The famous IDA (Intelligent Distribution Agent) has been modeled on it. The cognitive cycle with learning can be shown in a better way as follows.
The Workspace shown above analyzes the current situation and moves its ideas to the Global Workspace, which hosts a competition whose winner decides what procedure will be followed or what action will be selected.
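Under this model, the winning content is broadcast from the Global Workspace and stored procedures respond to it; a minimal sketch of that broadcast-and-bid step follows. The procedure names and the keyword-matching bid rules are illustrative assumptions, not part of the actual architecture.

```python
# Toy global workspace: the winning content is broadcast, stored
# procedures bid on it, and the highest bidder is the action selected.

def select_action(broadcast, procedures):
    """Each procedure bids based on how well it matches the broadcast."""
    bids = {name: match(broadcast) for name, match in procedures.items()}
    return max(bids, key=bids.get)

procedures = {
    "walk_to_garage": lambda b: 1.0 if "car" in b else 0.0,
    "reply_to_email": lambda b: 1.0 if "email" in b else 0.0,
    "do_nothing":     lambda b: 0.1,  # weak default bid
}
broadcast = "where is the car parked"
print(select_action(broadcast, procedures))  # walk_to_garage
```

The key design idea is that procedures never see the raw environment directly: they respond only to what wins the competition and gets broadcast.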
The above model has been proposed by the Cognitive Computing Research Group, University of Memphis.
This is a proposed model of how the mind works from a cognitive-science perspective. It also suggests that learning can take place in consciousness. A solid example of this model being successfully implemented is the IDA model.
IDA is an agent used by the US Navy to assign new duties to sailors, and you can search for more about her online. Currently, IDA can successfully negotiate with clients in English, access databases, adhere to rules, and make intelligent decisions. IDA is considered to be “conscious software”.
So, moving the discussion towards our title: can machines dream? Yes, they can. Even though it seems way too weird, there is a remote possibility of making that happen. If we can model the subconscious brain in some way, we can make machines dream. Consider this: our dreams are merely customized episodes from our declarative memory (episodic memory which hasn’t decayed). Our dreams also relate to our motives and agendas and the means to achieve those goals. Global Workspace Theory talks strongly about how software agents like IDA achieve their goals and fulfill their given agendas.
The subconscious stores your beliefs, memories, and life experiences, and you can see how each of these can be modeled in the cognitive cycle. Machines might not be able to dream exactly as humans do, but they could very well link our past and current episodes and play them back for us. Once the dream model is successfully built and understood, we might have machines creating totally unknown stories for us.
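The “customized episodes” idea, dreams as recombinations of stored episodes biased by current goals, can be sketched as a toy generator. This is an entirely speculative illustration: the episode fields, the goal-based filtering, and the independent sampling per slot are all my assumptions, not a model of actual dreaming.

```python
import random

# Toy "dream" generator: recombine who/where/what fragments of stored
# episodes, biased toward episodes related to the agent's current goal.

episodes = [
    {"who": "an old friend", "where": "the office", "what": "a presentation",  "topic": "work"},
    {"who": "my sister",     "where": "the beach",  "what": "a long walk",     "topic": "family"},
    {"who": "a policeman",   "where": "the garage", "what": "finding the car", "topic": "work"},
]

def dream(goal, rng):
    """Sample each slot independently, preferring goal-related episodes."""
    pool = [e for e in episodes if e["topic"] == goal] or episodes
    return {slot: rng.choice(pool)[slot] for slot in ("who", "where", "what")}

rng = random.Random(42)
print(dream("work", rng))  # a novel mash-up of work-related episodes
```

Because each slot is sampled independently, the output can be a scene that never happened, say, a policeman in the office giving a presentation, which is the “customized episode” flavor the text describes.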
A study of 320 adult dream reports found:
- 48% of characters represented a named person known to the dreamer
- 35% of characters were identified by their social role (e.g., policeman) or relationship to dreamer (e.g., a friend)
- 16% were not recognized.
Among named characters:
- 32% were identified by appearance
- 21% identified by behavior
- 45% by face
- 44% by “just knowing.”
This suggests something very interesting. If most of what we see in our dreams comes from our memory, and very little from what is yet to happen, we can at least model that region. Moreover, if cognitive science and machine learning can model something very close to our brain, why can’t we model our dreams? Can’t something very close to what humans are dreaming be produced in a machine?
As science progresses, we are not very far from achieving this reality. We might even have machines “sharing” dreams! The first inception might just start with machines!