It’s anybody’s guess, really.
That said, we do have some clues about what kinds of things go on inside the ‘minds’ of these little silicon monsters – under one specific condition.
There’s nothing more thrilling than seeing the spark of intelligence pop up in an AI system. You truly feel as though there’s something peeking out at you from behind all those numbers – a prototype of some bigger life form, curious and eager to grow as its learning algorithms wander the deep inner space of its mind. Yet all of this magic only takes place when there is active input, that is, when data is flowing into the system. This could mean it’s being fed images, text, audio, or raw numbers such as time series.
What happens when there’s no data flowing in?
Nothing. Nada. Zilch. Nichts.
The unexciting reality is that as soon as the flow of data is turned off, most of these things just go to sleep, so to speak. They stop. Nothing is happening in there, save for maybe a handful of residual calculations.
Spoiler alert: This is the condition mentioned at the top of the post. Computers ‘think’ about nothing when there is no data actively being fed to them.
I’ve written a bit about this before, but I reiterate that this stands in stark contrast to humans and other animals. Even in our sleep, our brains display a massive symphony of activity, repairing, adjusting, and moving memories around. Our most complex organ has a remarkable ability to reorganize itself, with or without the presence of sensory inputs. This should be a hint to us as to how we might build true AIs in the future.
So…what do computers think about, exactly?
As far as anybody can tell, nothing. Not yet anyway. For me, this is the most exciting part: creating things that persist in thinking even when there is no immediate sensory information flowing in.
Imagine if your laptop kept doing work even when you were away, and notified you via your smartphone or smartwatch whenever it did or learned something particularly interesting or important.
Imagine if AIs of the future were trained as scientists, absorbing knowledge from human experts, and even when they ‘stalled’ would simply try new thought experiments. This is completely different from just finding patterns in data, and stopping when the data stops flowing.
Imagine if your house had a persistent AI that could think about cost-effective ways to improve itself, its property value on the market, or even how to organize a party within its halls. This requires a persistence that is not seen in today’s machine learning systems.
Who else is thinking about this?
I’ve referenced him before: Dr. Stephen Thaler has done some interesting work on this subject. Some others have mentioned ‘persistent AI’ in passing, but few seem to be focusing on it as a qualitative shift from passive machine learning systems like we use now. Even Siri is passive: it doesn’t do anything until you ask it a question or give it a command.
DeepDream and all of the related work got many people thinking about what AIs ‘see’ when they see the world, which is a similar idea to inner thought and persistence. This work shows some of what goes on inside, under the condition that the network is being actively stimulated from external sources.
To explore these ideas, I’ve been toying with simple AI models that ‘think’ about their past experiences. They have a long-term memory bank, and a way of referencing past experiences through various measures of context. The choice of context metric is extremely important, which I’ll expand upon in a later post. This was partially inspired by Facebook’s Memory Network architecture, which showed a big shift in how we think about cognitive AI systems.
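To make the memory-bank idea concrete, here’s a minimal sketch. Everything in it is illustrative rather than a description of my actual models: experiences are stored as plain vectors with labels, and cosine similarity stands in for the context metric (which, as noted, deserves much more careful treatment).

```python
import numpy as np

class MemoryBank:
    """Toy long-term memory: stores experience vectors and retrieves
    the most contextually similar ones. Cosine similarity is used as
    a stand-in context metric -- an illustrative choice only."""

    def __init__(self):
        self.memories = []  # list of (vector, label) pairs

    def store(self, vector, label):
        self.memories.append((np.asarray(vector, dtype=float), label))

    def recall(self, context, k=1):
        """Return the k stored experiences most similar to `context`."""
        context = np.asarray(context, dtype=float)

        def similarity(mem):
            v, _ = mem
            denom = np.linalg.norm(v) * np.linalg.norm(context) + 1e-9
            return float(np.dot(v, context) / denom)

        return sorted(self.memories, key=similarity, reverse=True)[:k]

bank = MemoryBank()
bank.store([1.0, 0.0, 0.2], "read a segment of an essay")
bank.store([0.0, 1.0, 0.9], "learned a new melody")

# A context close to the second experience recalls that memory first.
top_vector, top_label = bank.recall([0.1, 0.9, 1.0], k=1)[0]
print(top_label)  # -> learned a new melody
```

The interesting design questions all hide inside `similarity`: swap the metric and you change what the system considers a “relevant” past experience.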
A past experience could be as simple as when it read a certain segment of text in an essay, or when it learned a new type of melody from a song. Our memories tend to be quite long, often entailing many sequences of intertwining sounds, sights, smells, and recollection of emotional states: “I was at the beach this last weekend, it felt amazing to just lie under the sun and relax.” In this example you’re recalling the time (or perception of it), the place, the feeling of the warm sun on your skin, and the emotions you had at the time – repose, tranquility.
AIs are different, especially the toy models I’m working on now. They don’t have high-level concepts yet, and it will be many years before they truly do. However, they can be enabled to have simple memories that are much smaller: a flashback to its particular state when it learned something new or performed some action, such as a prediction that it got correct. They also have a unique feature that we humans do not: their memories can be made essentially perfect. They can recall a scene with arbitrary accuracy, provided sufficient space in storage and/or memory.
The memory component is a necessary ingredient in useful persistence, as you would not want persistent AIs that forgot every interaction, everything they ever learned or did.
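The interplay between persistence and memory can be caricatured in a few lines. The sketch below is a toy, with all names hypothetical: when fresh observations arrive, the system stores them; when the stream runs dry, instead of halting it ‘daydreams’ by replaying a remembered experience.

```python
import random

def persistent_loop(input_stream, memory, steps=5, seed=0):
    """Toy persistence: consume observations while they last, then keep
    'thinking' by replaying stored memories instead of going idle.
    Purely illustrative -- not a real learning system."""
    rng = random.Random(seed)
    log = []
    stream = iter(input_stream)
    for _ in range(steps):
        obs = next(stream, None)
        if obs is not None:
            memory.append(obs)           # learn from fresh input
            log.append(("observed", obs))
        elif memory:
            replay = rng.choice(memory)  # no input: revisit a memory
            log.append(("replayed", replay))
        else:
            log.append(("idle", None))   # nothing to think about yet
    return log

log = persistent_loop(["essay", "melody"], memory=[], steps=5)
print(log)
```

A conventional pipeline would stop at step two, when the data runs out; here the remaining steps are spent revisiting what was learned, which is the qualitative shift this post is arguing for.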
Imagine if Siri started remembering conversations it had had with you in the past. Admittedly, this could be dangerous for some.
What comes next?
My new AI startup, Machine Colony, will be taking up most of my time these days. However, as part of my work with Machine Colony and more generally, I’ll continue to investigate these working memory and long-term memory components in AI architectures. If I get really ambitious I may even attempt to publish something on it, be it a white paper or a full-on academic paper. At the very least, you can expect more blog posts and the occasional code snippet, likely in Python. I remain noncommittal because of time constraints, of course.
My sincere hope is that you finish reading this not necessarily with an immediate answer to a question in hand, but more that it provokes thought in the direction of “what if our devices and systems were truly intelligent and persistent?” It is worth thinking about how this may affect your life, because one thing is certain: it’s not a matter of if persistent AIs will emerge, but when.