A slight misunderstanding


Often in discussions of artificial intelligence I see and hear the claim, “The brain does around X calculations per second.”  Usually the number quoted is around 100 trillion.  Why?

The figure comes from the estimate that the brain has about 100 trillion synapses between all of its neurons.  If each synapse is treated as a computational element capable of performing an action in response to a stimulus, the brain can then be modeled as “something doing 100 trillion calculations per second.”

There are several problems with this:

1. What is the nature of these “calculations” we’re talking about?  Are they simple additions, probability-table lookups, or solutions of differential equations?

2. The language of “per second” could wrongly imply that the brain somehow runs on a constant master clock.

3. Are there additional layers of important information being exchanged beyond the synapses themselves?

In more detail,

#1: Given the tendency to want to measure things in FLOPS (Floating Point Operations Per Second), I can see why this approach is appealing.  It’s as simple as counting how many “computational elements” are in the system and then saying it can do that many calculations per second, right?  I’m led to think, well, no.  A FLOP is an extremely simple operation such as an addition or a multiplication.  Something more complex, such as a linear algebra routine or a probability function, requires sophisticated code and hence numerous instructions, i.e. many FLOPs, to execute.  The argument that “the brain does 100 trillion calculations per second and therefore we will have true AI when computers can do 100 trillion calculations per second” is then about as useful as saying “a human is made of 150 pounds of matter, so when you have 150 pounds of matter you’ll have a human,” or something equally ridiculous.  The number of calculations is not completely unimportant, but it is secondary to what kind of calculations are being done.  In the case of classical computers, as stated, complex routines consume many simple operations to do their work, so a raw count of simple calculations says little about the system’s overall capability.
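To make this concrete, here is a minimal back-of-the-envelope sketch in Python.  The FLOP-counting formulas are standard textbook estimates, not measurements of any real machine or brain; the point is just how differently a fixed FLOP budget stretches depending on what each “calculation” actually is:

```python
# How far does a fixed FLOP budget go when the "calculation" is a single
# addition versus a naive matrix multiply?  Illustrative numbers only.

def flops_naive_matmul(n: int) -> int:
    """Naive n x n matrix multiply: each of n*n outputs needs
    n multiplies and n - 1 additions."""
    return n * n * (2 * n - 1)

def flops_add(count: int) -> int:
    """Summing `count` numbers takes count - 1 additions."""
    return count - 1

budget = 100_000_000_000_000  # 100 trillion FLOPs, the figure in question

print(f"Two-number additions per budget: {budget // flops_add(2):,}")
print(f"1000x1000 matrix multiplies per budget: "
      f"{budget // flops_naive_matmul(1000):,}")
```

The same budget buys 100 trillion additions but only around fifty thousand large matrix multiplies, so a single operations-per-second number tells you almost nothing until you know what the operations are.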

#2: We’re used to thinking of calculations happening in a uniform, clock-like way because of how our chips are designed.  The problem is that the brain, as far as we can tell, processes everything asynchronously.  Each node operates more or less independently of the others.  That’s not to say classical computers won’t be useful in emulating brain-like mechanics, but modeling an asynchronous system with a highly synchronized one comes with complications that should not be ignored.
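For a sense of what “asynchronous” means in practice, here is a minimal event-driven sketch.  The neuron behavior, thresholds, and delays are hypothetical placeholders, not a biological model; what matters is that events carry their own timestamps and a priority queue orders them, rather than a global clock ticking every node forward in lockstep:

```python
# Emulating clock-free, asynchronous nodes on a synchronous machine:
# a priority queue of (time, node) events replaces a global tick.
import heapq
import random

NUM_NODES = 5
THRESHOLD = 1.0  # hypothetical firing threshold

def simulate(t_end: float) -> None:
    potentials = [0.0] * NUM_NODES
    # Seed each node with one event at a random time; heapq keeps
    # the earliest event first.
    events = [(random.uniform(0.0, 1.0), n) for n in range(NUM_NODES)]
    heapq.heapify(events)
    while events:
        t, node = heapq.heappop(events)
        if t > t_end:
            break
        # Each node updates independently; no shared clock drives this.
        potentials[node] += random.uniform(0.0, 0.5)
        if potentials[node] >= THRESHOLD:
            potentials[node] = 0.0
            target = random.randrange(NUM_NODES)
            delay = random.uniform(0.1, 1.0)  # hypothetical propagation delay
            heapq.heappush(events, (t + delay, target))
        print(f"t={t:.2f}s  node {node}  potential {potentials[node]:.2f}")

simulate(t_end=5.0)
```

This is the standard discrete-event-simulation trick, and it works, but notice the cost: the synchronous machine spends effort bookkeeping the queue just to pretend it has no clock, which is exactly the kind of complication the paragraph above is warning about.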

#3: This also ties in with #1.  Because information passes through electrochemical channels, there are additional layers of computation that have not yet been completely modeled or understood.  The actual communication mechanisms of the brain could be simpler than we think, or they could turn out to be vastly complex.  It’s anybody’s guess right now.  Under any guess, though, a direct conversion from calculations per second (as in the brain) to something like FLOPS (as in classical computers) is like saying “a machine that can add 10 numbers together in a second can also solve 10 high-order partial differential equations per second.”  With extremely clever software something like this may eventually be possible (perhaps a mapping from addition operators to matrix solvers, or something like that), but for now this kind of crude conversion is wildly inaccurate.
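A little arithmetic shows just how far apart those two tasks are.  The grid size and per-point FLOP count below are illustrative assumptions for a simple explicit finite-difference scheme, not a claim about any particular solver:

```python
# Why "operations per second" doesn't convert across task types:
# summing 10 numbers vs. one timestep of a 2-D PDE solve.

ADDS_PER_SUM = 9            # summing 10 numbers: 9 additions
GRID = 1000 * 1000          # assumed 1000 x 1000 discretization
FLOPS_PER_POINT = 6         # assumed ~4 adds + 2 multiplies per stencil update

pde_step_flops = GRID * FLOPS_PER_POINT
print(f"One PDE timestep: about {pde_step_flops:,} FLOPs")
print(f"Equivalent to roughly {pde_step_flops // ADDS_PER_SUM:,} ten-number sums")
```

Even a single timestep of a modest PDE solve costs on the order of millions of elementary operations, and a real solution requires many thousands of timesteps, so equating the two rates is off by many orders of magnitude.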

I worry that a lot of people are buying the idea that once we have 100-TeraFLOPS machines we’ll somehow have an uber-AI.  Unless software comes a long way, those counting on this idea may be very disappointed when that emergence doesn’t happen.

It is worth noting that a quantum computer used for AI would be a completely different picture, different from both classical computers and from the brain.  A behemoth of the quantum variety would be capable of things that neither an Intel i7 nor a human brain can do, but that is another discussion entirely.

Until next time, then.