Context & Permutations

In the pursuit of Artificial General Intelligence, one of the challenges that comes up again and again is how to deal with context.  To illustrate: telling a robot to cross the street would seem simple enough.  But suppose that five minutes ago somebody else told this robot not to cross the street, because construction work was happening on the other side.  What does the robot decide to do?  Whose instruction takes priority?

A robot whose ‘brain’ did not account for context properly would naively cross the street as soon as you told it to, ignoring whatever had come before.  This example is simple enough, but you can easily imagine other situations in which the consequences would be catastrophic, as in the sketch below.
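To make the difference concrete, here is a minimal sketch (the names and the conflict-window heuristic are invented for illustration, not any particular robotics API): instead of obeying the latest command unconditionally, the agent keeps a short history of instructions and flags recent conflicts before acting.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Instruction:
    action: str          # e.g. "cross_street"
    allow: bool          # True = do it, False = don't
    timestamp: float = field(default_factory=time.time)

class ContextAwareAgent:
    """Keeps a short history of instructions and checks for conflicts
    before acting, instead of blindly obeying the newest command."""

    def __init__(self, conflict_window: float = 600.0):
        self.history: list[Instruction] = []
        self.conflict_window = conflict_window  # seconds of context to consider

    def receive(self, instr: Instruction) -> str:
        now = instr.timestamp
        # Look for recent instructions about the same action that disagree.
        conflicts = [
            prev for prev in self.history
            if prev.action == instr.action
            and prev.allow != instr.allow
            and now - prev.timestamp < self.conflict_window
        ]
        self.history.append(instr)
        if conflicts:
            # A naive agent would simply obey `instr`; this one defers.
            return f"conflict on '{instr.action}': asking for clarification"
        return f"executing '{instr.action}'" if instr.allow else f"avoiding '{instr.action}'"

agent = ContextAwareAgent()
print(agent.receive(Instruction("cross_street", allow=False)))  # avoiding
print(agent.receive(Instruction("cross_street", allow=True)))   # conflict detected
```

The point is not the specific heuristic, which is far too crude for a real system, but that the decision depends on accumulated context rather than on the latest command alone.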

The difficulty in modeling context in a mathematical sense is that the state space can quickly explode: the number of ways that things can occur, and the sequences they can occur in, is essentially infinite.  Reducing these effective infinities down to a manageable size is where the magic occurs.  The holy grail in this case is to have the compute cost of the main algorithm remain constant (or at least linear) even as the number of possible permutations of contextual state explodes.
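To get a feel for the explosion, consider just the orderings of a handful of contextual events: with n events there are n! possible sequences, before even accounting for which events occur at all.

```python
import math

# The number of possible orderings of n contextual events grows factorially.
for n in [3, 5, 10, 20]:
    print(f"{n:>2} events -> {math.factorial(n):,} possible orderings")

# 20 events already yield ~2.4 quintillion orderings: far too many to
# enumerate, which is why the representation must be kept sparse.
```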

How is this done?  Conceptually, one needs to represent things sparsely and have the algorithm that traverses this representation consider only a small subset of possibilities at a time.  In practice, this means representing the state space as transitions in a large graph, and traversing only short walks through the graph at any given time.  In this space-time tradeoff, space is favored heavily.
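As a rough illustration (the states, transitions, and walk length here are all invented for the example), the state space can be stored as a sparse adjacency list, and a decision only ever examines the few transitions reachable within a short walk of the current state:

```python
from collections import defaultdict

# Sparse adjacency list: each contextual state maps only to the few
# states actually reachable from it, not to the full combinatorial space.
transitions: dict[str, list[str]] = defaultdict(list)
transitions["idle"] = ["told_to_cross", "told_not_to_cross"]
transitions["told_not_to_cross"] = ["told_to_cross"]
transitions["told_to_cross"] = ["crossing", "asking_clarification"]

def reachable(start: str, max_steps: int) -> set[str]:
    """Collect the states reachable within a short walk of `start`.
    The work done is bounded by the local branching factor, not by
    the size of the full state space."""
    frontier, seen = {start}, {start}
    for _ in range(max_steps):
        frontier = {nxt for s in frontier for nxt in transitions[s]} - seen
        seen |= frontier
    return seen

# Only a handful of states are ever considered at once, even if the
# global graph of all possible contexts is astronomically large.
print(reachable("idle", max_steps=2))
```

Storing every edge explicitly is expensive in space, but it buys a traversal whose cost depends only on the local neighborhood, which is exactly the space-for-time trade described above.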

The ability to handle context adeptly is of utmost importance for current and future AIs, especially as they take on more responsibility in our world.  I hope that AI developers can form a common set of idioms for dealing with context in intelligent systems, so that those idioms can be collaboratively improved upon.