Short post today, but a few things occurred to me as I was reading the Bayesian Program Learning paper (Lake et al., 2015):
- This form of recursive program induction starts to look suspiciously like simulation – something we do in our minds all the time (see the sketch after this list).
- Simulation may be a better framing for concept formation than classification.
- Mapping the ‘inner world’ to the ‘outer world’ seems a more sensible approach to understanding what’s going on.

If you look at the paper, you also see some thought-provoking examples of new concept generation, such as the single-wheel motorbike example (in the images). This is the most exciting point of all.
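To make the simulation framing concrete, here’s a deliberately toy sketch of ‘concept as program’. None of it comes from the paper – the primitives, scoring, and vehicle names are my own stand-ins – but it shows the move: recognition is running a generative program forward and checking how well the simulation explains what you saw, and a brand-new concept is just a new recombination of old parts.

```python
import random

# Primitive 'parts' a concept-program can compose
# (toy stand-ins, not the paper's actual primitives).
PRIMITIVES = {
    "wheel": lambda: random.gauss(1.0, 0.1),
    "frame": lambda: random.gauss(2.0, 0.2),
    "seat":  lambda: random.gauss(0.5, 0.05),
}

def make_concept(parts):
    """A concept is a program: run it to simulate one observation."""
    def simulate():
        return sum(PRIMITIVES[p]() for p in parts)
    return simulate

def score(concept, observation, n_samples=1000):
    """Classify by simulation: how well does running the
    program forward reproduce what we saw?"""
    sims = [concept() for _ in range(n_samples)]
    return -min(abs(s - observation) for s in sims)

motorbike = make_concept(["wheel", "wheel", "frame", "seat"])
unicycle = make_concept(["wheel", "frame", "seat"])

observation = 4.4  # stand-in for a parsed image
candidates = {"motorbike": motorbike, "unicycle": unicycle}
best = max(candidates, key=lambda name: score(candidates[name], observation))
print("best explanation:", best)

# Generating a new concept is just recombining parts into a new
# program and simulating it: the 'single-wheel motorbike' move.
one_wheel_motorbike = make_concept(["wheel", "frame", "frame", "seat"])
print("imagined sample:", one_wheel_motorbike())
```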
A final design?
Combine all these elements, along with ideas from my last post, and you get something that (sketched in code after the list):
- Simulates an internal version of the world
- Synthesizes new concepts and simulates – literally ‘imagines’ – the results, much as we do
- Learns concepts from only a few examples
- Keeps memories of events in its lifetime / runtime, and can reference those events to recall the specific context of what else was happening at the time – that is, memories are deeply linked to one another
- Acts of its own volition, i.e. in the absence of external stimulus. In its downtime it may kick off imagination routines – ‘dreaming’, if you will – optimize its internal connections, or do other maintenance work. Again, similar to what our brains do while we sleep.
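To make the list less hand-wavy, here’s a minimal sketch of how the loop might hang together. Every name in it (WorldModel, EpisodicMemory, run) is my own hypothetical scaffolding, not an existing system’s API; the interesting part is the downtime branch, where the agent gets no stimulus and turns inward.

```python
import random
import time

class WorldModel:
    """An internal simulation of the outside world."""
    def __init__(self):
        self.concepts = {}

    def learn(self, name, examples):
        # Few-shot learning stand-in: build a prototype from a handful of examples.
        self.concepts[name] = sum(examples) / len(examples)

    def imagine(self, name):
        # 'Imagination': simulate a fresh instance of a known concept.
        return self.concepts[name] + random.gauss(0, 0.1)

class EpisodicMemory:
    """Events stored with links to whatever else was happening at the time."""
    def __init__(self):
        self.events = []

    def store(self, event, context):
        self.events.append({"event": event, "context": context, "t": time.time()})

    def recall(self, event):
        # Pull back not just the event but its full surrounding context.
        return [e for e in self.events if e["event"] == event]

def run(steps=5):
    world, memory = WorldModel(), EpisodicMemory()
    world.learn("motorbike", [4.2, 4.5, 4.4])  # learned from few examples

    for step in range(steps):
        stimulus = random.choice([None, "motorbike"])
        if stimulus is not None:
            sample = world.imagine(stimulus)
            memory.store(stimulus, context={"step": step, "sample": sample})
        else:
            # No external stimulus: act of its own volition. 'Dream' by
            # replaying stored events through the world model as maintenance.
            for e in memory.events:
                world.imagine(e["event"])

    print(memory.recall("motorbike"))

run()
```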
This starts to look like a pretty solid recipe for a complete cognitive architecture. Every requirement has been covered in one way or another, though across different models and in different settings. Putting the pieces together into a robust architecture will take many years of work, but multi-model cognitive approaches are worth exploring.
If it results in a useful AI, then I’m all in.