IBM / DARPA SyNAPSE announce new brain simulation at Supercomputing Conference

Update: The reports of this breakthrough at a ‘cat brain’ level may be quite misleading or exaggerated. I’m in contact with Henry Markram, a leading brain researcher spearheading the “Blue Brain” simulation in Switzerland, and am waiting for his permission to post his concerns about the claims from IBM researchers.

At the Supercomputing Conference SC09 in Portland, Oregon, IBM has announced a spectacular advance in our ability to simulate cognitive activity with machines: a brain simulation that approximates a cat brain in complexity.

We have profiled the SyNAPSE project here at Technology Report thanks to a guest post by one of the people working on it. This is a remarkable advance given that SyNAPSE has been running for under a year. With cat-brain complexity under its belt, it appears to be only a matter of a few more years before the project is modeling interactions at the scale of human brain complexity.

The most provocative idea about brain modelling is that these models will at some point attain human-like consciousness along with the ability to communicate with humans and (hopefully) cooperate with us in problem solving. No longer just a science fiction topic, this potential “explosion of intelligence” relates to one of the hottest topics in technology – the Singularity.

More on the IBM Blue Matter project from:

Forbes
Popular Mechanics

DARPA SyNAPSE Project Summary

Today we have a guest post, shared with permission from Max over at the “Neurdons” blog, which is written by a group working on the DARPA SyNAPSE project we have discussed here before. SyNAPSE seeks to create a fully functional artificial intelligence.

This piece was written by Ben Chandler, an AI researcher with the SyNAPSE project:

About SyNAPSE

First the facts: SyNAPSE is a project supported by the Defense Advanced Research Projects Agency (DARPA). DARPA has awarded funds to three prime contractors: HP, HRL, and IBM. The Department of Cognitive and Neural Systems at Boston University, from which the Neurdons hail, is a subcontractor to both HP and HRL. The project launched in early 2009 and will wrap up in 2016 or when the prime contractors stop making significant progress, whichever comes first. ‘SyNAPSE’ is a backronym and stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The stated purpose is to “investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels.”

SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, traditional algorithms perform poorly in the complex, real-world environments in which biological agents thrive. Biological computation, in contrast, is highly distributed and deeply data-intensive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms. SyNAPSE seeks both to advance the state-of-the-art in biological algorithms and to develop a new generation of nanotechnology necessary for the efficient implementation of those algorithms.

Looking at biological algorithms as a field, very little in the way of consensus has emerged. Practitioners still disagree on many fundamental aspects. At least one relevant fact is clear, however. Biology makes no distinction between memory and computation. Virtually every synapse of every neuron simultaneously stores information and uses this information to compute. Standard computers, in contrast, separate memory and processing into two nice, neat boxes. Biological computation assumes these boxes are the same thing. Understanding why this assumption is such a problem requires stepping back to the core design principles of digital computers.
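To make that contrast a bit more concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the SyNAPSE hardware) in which each synaptic weight is simultaneously the stored memory and part of the computation, updated by a purely local Hebbian-style rule:

```python
# Toy sketch: memory and computation live in the same place.
# Each synaptic weight both stores state and participates in the computation.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(4, 8))   # the synapses: the "memory" *is* here

def step(pre_activity, learning_rate=0.01):
    """One update: compute with the stored weights and adapt them locally."""
    post_activity = np.tanh(weights @ pre_activity)              # computation reads the weights
    weights[:] = weights + learning_rate * np.outer(post_activity, pre_activity)
    return post_activity                                          # the same elements now hold the new state

for _ in range(100):
    step(rng.random(8))
```

There is no separate fetch-from-memory, compute, write-back cycle here; the "storage" and the "processing" are the same array elements, which is the sense in which biology treats the two boxes as one.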

The vast majority of current-generation computing devices are based on the Von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. Von Neumann architecture comes with a deep, fundamental limit, however. A Von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited-capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the “Von Neumann bottleneck.”
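A back-of-the-envelope sketch, using made-up but plausible hardware numbers, shows how the bus rather than the arithmetic unit ends up setting the pace for a simple data-heavy operation:

```python
# Illustration of the Von Neumann bottleneck with assumed (hypothetical) numbers:
# a processor that can do 100 GFLOP/s but must pull every operand over a 20 GB/s bus.
FLOPS_PEAK = 100e9        # assumed arithmetic throughput, operations/second
BUS_BANDWIDTH = 20e9      # assumed memory bandwidth, bytes/second

n = 100_000_000           # elements in two input vectors
bytes_moved = 3 * n * 8   # read a, read b, write c (8-byte doubles)
flops_needed = n          # one add per element: c[i] = a[i] + b[i]

compute_time = flops_needed / FLOPS_PEAK   # ~1 ms if arithmetic were the limit
memory_time = bytes_moved / BUS_BANDWIDTH  # ~120 ms: the bus, not the ALU, sets the pace

print(f"compute-bound estimate: {compute_time * 1e3:.1f} ms")
print(f"memory-bound estimate:  {memory_time * 1e3:.1f} ms")
```

Under these (invented) numbers the processor spends two orders of magnitude more time waiting on data than computing with it, which is exactly the situation data-intensive algorithms find themselves in.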

In the last thirty years, the semiconductor industry has been very successful at avoiding this bottleneck by exponentially increasing clock speed and transistor density, as well as by adding clever features like cache memory, branch prediction, out-of-order execution and multi-core architecture. The exponential increase in clock speed allowed chips to grow exponentially faster without addressing the Von Neumann bottleneck at all. From the user perspective, it doesn’t matter if data is flowing over a limited-capacity bus if that bus is ten times faster than that in a machine two years old. As anyone who has purchased a computer in the last few years can attest, though, this exponential growth has already stopped. Beyond a clock speed of a few gigahertz, processors dissipate too much power to use economically.
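As a rough rule of thumb (a toy model, not a precise physical claim), dynamic power scales roughly with C·V²·f, and historically higher clock speeds also demanded higher voltage, so power climbs far faster than frequency:

```python
# Rough rule-of-thumb model of why clock scaling stalled:
# dynamic power ~ C * V^2 * f, and higher frequency historically needed higher voltage.
def relative_power(freq_ghz, base_ghz=3.0, voltage_scales_with_freq=True):
    """Power at freq_ghz relative to a 3 GHz baseline under this toy model."""
    f_ratio = freq_ghz / base_ghz
    v_ratio = f_ratio if voltage_scales_with_freq else 1.0
    return (v_ratio ** 2) * f_ratio      # P ~ C * V^2 * f

print(relative_power(6.0))   # ~8x the power for only 2x the clock speed
```

Doubling the clock for roughly eight times the power is the kind of trade that stopped being economical, which is why clock speeds plateaued at a few gigahertz.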

Cache memory, branch prediction and out-of-order execution more directly mitigate the Von Neumann bottleneck by holding frequently-accessed or soon-to-be-needed data and instructions as close to the processor as possible. The exponential growth in transistor density (colloquially known as Moore’s Law) allowed processor designers to convert extra transistors directly into better performance by building bigger caches and more intelligent branch predictors or re-ordering engines. A look at the processor die for the Core i7, or the block diagram of the Nehalem microarchitecture on which Core i7 is based, reveals the extent to which this is done in modern processors.
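A small, informal experiment (timings will vary by machine) makes the point: doing the exact same arithmetic over the same data, but in an order the cache and prefetcher cannot exploit, is noticeably slower:

```python
# Same data, same arithmetic, different access pattern: only the cache behavior changes.
import time
import numpy as np

data = np.arange(50_000_000, dtype=np.float64)
sequential_idx = np.arange(data.size)
random_idx = np.random.default_rng(0).permutation(data.size)

def timed_sum(indices):
    start = time.perf_counter()
    total = data[indices].sum()          # gather in the given order, then reduce
    return total, time.perf_counter() - start

_, t_seq = timed_sum(sequential_idx)
_, t_rand = timed_sum(random_idx)
print(f"sequential: {t_seq:.2f}s   random: {t_rand:.2f}s")
# The random order is typically several times slower, because the cache and
# prefetcher can no longer hide the round trips to main memory.
```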

Multi-core and massively multi-core architectures are harder to place, but still fit within the same general theme: extra transistors are traded for higher performance. Rather than relying on automatic mechanisms alone, though, multi-core chips give programmers much more direct control of the hardware. This works beautifully for many classes of algorithms, but not all, and certainly not for data-intensive, bus-limited ones.
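Here is a minimal sketch of that trade, using a deliberately compute-heavy, easily partitioned task; the worker count and chunk sizes are arbitrary choices for illustration, and a memory-bound task split the same way would see far less benefit because every core shares the same bus:

```python
# Minimal sketch: trading extra cores for throughput on a compute-heavy task.
from multiprocessing import Pool

def count_primes(bounds):
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i * 50_000, (i + 1) * 50_000) for i in range(8)]   # partition the work
    with Pool(processes=4) as pool:                               # one worker per core
        total = sum(pool.map(count_primes, chunks))
    print(total)
```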

Unfortunately, the exponential transistor density growth curve cannot continue forever without hitting basic physical limits. At this point, Von Neumann processors will cease to grow appreciably faster and users won’t need to keep upgrading their computers every couple of years to stave off obsolescence. Semiconductor giants will be left with only two basic options: find new high-growth markets or build new technology. If they fail at both of these, the semiconductor industry will cease to exist in its present, rapidly-evolving form and migrate towards commoditization. Incidentally, the American economy tends to excel at innovation-heavy industries and lag other nations in commodity industries. A new generation of microprocessor technology means preserving American leadership of a major industry. Enter DARPA and SyNAPSE.

Given the history and socioeconomics, the “Background and Description” section from the SyNAPSE Broad Agency Announcement is much easier to unpack:

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. As compared to biological systems for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications—but useful and practical implementations do not yet exist.

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require: precisely the kinds of algorithms that suffer immensely at the hands of the Von Neumann bottleneck.

SyNAPSE and Blue Brain Projects Update

As noted before, I think the two most promising artificial intelligence projects are Blue Brain and DARPA SyNAPSE, so I’m happy to see the Boston-based “Neurdon” blog, written by some of the SyNAPSE project folks, putting a few of the DARPA bucks toward elaborating on the project’s technical goals:

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require…

It’s very exciting stuff, this “build a brain” competition. Although I think the theoretical approach taken by Blue Brain is more consistent with what little we know about how brains work, I’d guess SyNAPSE’s access to DARPA funding will give it the long-term edge in terms of delivering a functional thinking machine in the 15-20 year time frame most artificial intelligence researchers believe we’ll need for that ambitious goal.

My optimism is greater than many because I think humans have rather dramatically exaggerated the complexity of their own feeble mental abilities by quite a … bit, and I’d continue to argue that consciousness is much more a function of quantity than quality.

Another promising development in the artificial brain area comes from Spain, where Blue Brain project partner universities are working on the Cajal Blue Brain project.

Top Ten Technologies of 2008 from Wired. What? No Blue Brain?

Wired has their list of the top ten technology breakthroughs of 2008. Here’s the list. I’m not sure I can get too excited about anybody who makes the Apple App Store breakthrough *number one*, but it’s an interesting list at the least, with everything from flexible displays (cool, but not out yet) to the Speedo LZR super-slick swimsuits. Hmmm – swimsuits and Apple application stores? I’m kind of wondering if somebody put this together last night after a big turkey dinner.

C’mon Wired, surely there’s better stuff out there than this list?

With CES 2009 coming up in about ten days, I’m confident that the list we’ll be creating here at Technology Report during the Las Vegas conference will be better than this one, with a top ten that *includes* some of the superb innovations we are likely to see unveiled at CES.

However, I should say it’ll be hard to top my current choice for top technology breakthrough of 2008, which came from the IBM Blue Brain project lab: a computerized simulation of a rat neocortical column. Doesn’t sound impressive to you? Keep in mind (that would be in the mind that is your cerebrum, which is largely composed of neocortical columns) that the Blue Brain project may be on track to deliver a fully functional artificial intelligence.

Artificial Intuition a key to AI?

Convergence08 was a great conference with many interesting people and ideas. Thankfully the number of crackpots was very low, and even the “new age” mysticism stuff was at a minimum. Instead I found hundreds of authors, doctors, biologists, programmers, engineers, physicists, and other clear-thinking folks, all interested in how the new technologies will shape our world in ways more profound than we have ever experienced before.

My favorite insights came from Monica Anderson’s presentation on her approach to AI programming, which she calls “Artificial Intuition“. Unlike any other approach to AI I’m familiar with, Anderson uses biological evolution as her main analog for conceptualizing human intelligence. I see this approach as almost a *given* if you have a good understanding of humans and thought, but it’s actually not a popular conceptual framework for AI, where most approaches rely on complex algorithmic logic – logic that Anderson argues clearly did not spawn human intelligence via evolution. Yet Anderson is by no means a programming neophyte – she’s a software engineer who researched AI for some time, spent two years programming at Google, and then quit to start her own company, convinced that her AI approach is on the right track.

Anderson’s work is especially impressive to me because, as someone with a lot of work in biology under my belt (academically as well as corporeally), it has always surprised me how poorly many computer programmers understand even rudimentary biological concepts, such as the underlying simplicity of the human neocortex and the basic principles of evolution, which I’d argue emphatically have defined *every single aspect* of our human intelligence through a slow, clumsy, hit-and-miss process operating over millions of years. I think programmers tend to focus on mathematics and rule systems, which are great modelling tools but probably a very poor analog for intelligence. This focus has in many ways poisoned the well of understanding about what humans and other animals do when they … think… which I continue to maintain is “not all that special”.

Anderson’s conceptual framework eliminates what I see as a key impediment to creating strong AI with conventional software engineering, i.e., having to build a massively complex programmable emulation of human thought. Instead, her approach ties together many simple routines that emulate the simple ways animals have developed to effectively interact with a changing environment.
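A toy sketch of that flavor (my own invention for illustration, not Anderson’s actual system; every behavior name here is made up) might stack a few trivially simple reactive routines and let the first one that fires decide the action:

```python
# Toy sketch: an "agent" as a stack of very simple reactive routines,
# each responding only to what it senses right now. No global model, no planning.
import random

def avoid_obstacle(sensors):
    return "turn_left" if sensors["obstacle_ahead"] else None

def seek_food(sensors):
    return "move_forward" if sensors["food_scent"] > 0.5 else None

def wander(sensors):
    return random.choice(["move_forward", "turn_left", "turn_right"])

BEHAVIORS = [avoid_obstacle, seek_food, wander]   # ordered by priority

def act(sensors):
    """The first simple routine that fires wins."""
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(act({"obstacle_ahead": False, "food_scent": 0.8}))   # -> "move_forward"
```

Each routine on its own is almost embarrassingly simple; the interesting behavior comes from many of them interacting with a changing environment, which is closer to how evolution assembled animal intelligence than any single elaborate algorithm.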

Combining Anderson’s approach to the programming with physical models of the neocortical column, such as IBM’s Blue Brain, would be my best bet for success in the AI field.

Convergence08: Unconferencing Begins

Gary at Future Blogger just did a nice summary of the AI session here at Convergence 08, where we are now getting the housekeeping spiel and speakers are putting up the proposed sessions.

So far the tone of the conference is great – very professional, with serious people and companies coming together to discuss the technologies that are very likely to re-invent the world over the coming decades. Very pleased to see Pell and Norvig, both of whom represent companies and perspectives that understand the critical intersection of commercial success and innovation.