Archive

Posts Tagged ‘darpa’

SyNAPSE Chip: “Someday, you’ll work for ME!”

August 21st, 2011 Comments off
SyNAPSE Project AI Neuromorphic Chip

IBM’s Aug 18th Press Release announced another significant milestone for the DARPA SyNAPSE project, the world’s best funded and arguably the “most likely to succeed” approach to creating a general artificial intelligence.

The release notes that the new chips represent a departure from traditional models of computing:

… cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brain's structural and synaptic plasticity.

To do this, IBM is combining principles from nanoscience, neuroscience and supercomputing as part of a multi-year cognitive computing initiative. The company and its university collaborators also announced they have been awarded approximately $21 million in new funding from the Defense Advanced Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

As we've noted here many times, another remarkable project is the Blue Brain Project in Europe, spearheaded by Dr. Henry Markram. That team has joined with many others and is in the process of applying to the European Union for substantial funding – perhaps as much as 1.6 billion dollars. Although Blue Brain tends to shy away from stating that its objective is a general artificial intelligence, I would argue that it should have that goal, and that the project is much more likely to be funded by stating it in no uncertain terms.

Unfortunately, many people both inside and outside technology circles believe the search for a general artificial intelligence is either dangerous or a waste of time and money. Both scenarios are possible but unlikely. Intelligence can certainly be dangerous, but given human history compared to technology history it seems odd to argue that we are more likely to create a Frankenstein than a helpful machine process. Computers don't kill people; people kill people.

As for a waste of time and money: we humans have clearly overrated our intelligence for some time – probably since the beginning of self-awareness. There are few rational reasons to believe that we cannot duplicate processes similar to our own thinking in a machine. The advantages of machine-based intelligence are likely to be substantial – probably on the order of a new human age, with vastly improved resource efficiency, poverty reduction, and more. Thus the costs – currently measured in the low tens of millions – pale in comparison to almost all other government projects, many of which have dubious or negative ROIs.

SyNAPSE Update from Dr. Dharmendra Modha’s Team

August 7th, 2011 Comments off

Dr. Dharmendra Modha and his SyNAPSE gang recently published an excellent paper on "Cognitive Computing" that describes what appears to be strong progress in the effort to create a general artificial intelligence:

http://cacm.acm.org/magazines/2011/8/114944-cognitive-computing/fulltext

One of the paper's most notable claims is that within a decade the project expects to have the computational scale needed for human-level modelling, though it also notes that this is not the same as creating a model of the human brain – that may require computational structures yet to be invented. On balance, however, the SyNAPSE project appears to be building steadily on its core assumptions, taking us ever closer to the holy grail of technology – a general artificial intelligence.

More at Dr. Modha's blog, where we learn about the new approaches the SyNAPSE team at IBM will take in its effort to achieve human-quality cognition in a machine:

18 Aug 2011: Today, IBM (NYSE: IBM) researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could yield many orders of magnitude less power consumption and space than used in today’s computers.

In a sharp departure from traditional concepts in designing and building computers, IBM’s first neurosynaptic computing chips recreate the phenomena between spiking neurons and synapses in biological systems, such as the brain, through advanced algorithms and silicon circuitry. Its first two prototype chips have already been fabricated and are currently undergoing testing.

Called cognitive computers, systems built with these chips won't be programmed the same way traditional computers are today. Rather, cognitive computers are expected to learn through experiences, find correlations, create hypotheses, and remember – and learn from – the outcomes, mimicking the brain's structural and synaptic plasticity.
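The "synaptic plasticity" the release refers to is, in the biological literature, often modelled with simple local learning rules such as spike-timing-dependent plasticity (STDP). As a rough illustration of the flavor of rule involved – this is a generic textbook sketch, not IBM's actual on-chip learning algorithm, which the release does not describe – consider:

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a single pre/post spike pair, where
    delta_t_ms = t_post - t_pre. Pre-before-post (positive delta_t)
    strengthens the synapse; post-before-pre weakens it.
    All constants are illustrative placeholders, not IBM's."""
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_ms)
    return -a_minus * math.exp(delta_t_ms / tau_ms)

print(f"pre 5 ms before post: dw = {stdp_dw(+5):+.4f}")  # potentiation
print(f"post 5 ms before pre: dw = {stdp_dw(-5):+.4f}")  # depression
```

The point is only that the update depends on locally available spike times, which is what makes this kind of plasticity attractive to implement directly in hardware.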

Brain Chips from DARPA

May 22nd, 2010 Comments off

Wired's Danger Room is reporting on a new DARPA project to build brain implant chips intended to repair brain injuries. The focus appears to have come from the large number of returning veterans who suffer from such injuries.

However, the implications of this type of research go far beyond simple repair. As science improves the current state of the art in brain implants (which now offer only rudimentary connections to actual brain functions), we are likely to see a spectacular increase in human intellectual capabilities. Our current limits on information processing include the very slow speeds at which we can interact with computers – usually via keyboards. When implants allow brains to *directly* interface with, for example, internet information, we are very likely to experience an explosion of human capabilities.

IBM’s Artificial Intelligence – is the cat brain out of the bag or not?

March 1st, 2010 Comments off

We've profiled two of the world's most promising AI efforts here at Technology Report: Blue Brain in Switzerland and DARPA SyNAPSE here in the USA, a newer project that appears to be getting better funding thanks to backing from the US Defense Department. Both of these projects rely on IBM supercomputers for their simulations of neurons and their interactions, and both are optimistic about the potential to develop thinking machines within the next decade.

The project leader of Blue Brain, Dr. Henry Markram, has been very vocal and very critical of claims by the IBM team leader, Dr. Dharmendra S. Modha. Markram's concerns are expressed here in his Technology Report guest post about the IBM project claims.

We asked Dr. Modha for a response but didn't hear back, so I'd like to refer folks to the Modha blog here, especially to the post called "The Cat is Out of the Bag and BlueMatter", which details progress in the SyNAPSE project and explains the claim that they are simulating brain activity roughly equivalent to what we'd see from a cat. Here's an excerpt from that post:

Towards this end, we are announcing two major milestones.

First, using Dawn Blue Gene/P supercomputer at Lawrence Livermore National Lab with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips each with 1 million neurons and 10 billion synapses, and exceeds the scale of cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real-time.

Second, we have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.
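A quick back-of-the-envelope check of the first milestone's numbers (an editorial sketch based only on the figures quoted above; the bytes-per-synapse figure is an inference, not something IBM has published):

```python
# Editor's sanity check of the scale figures quoted in the first milestone.

neurons  = 1_000_000_000          # 1 billion spiking neurons
synapses = 10_000_000_000_000     # 10 trillion learning synapses
memory_bytes = 144 * 1024**4      # 144 TB of Blue Gene/P main memory

# The "equivalent to 1,000 chips" claim is internally consistent:
assert 1_000 * 1_000_000 == neurons            # 1,000 chips x 1M neurons each
assert 1_000 * 10_000_000_000 == synapses      # 1,000 chips x 10B synapses each

# If synapses dominate the memory footprint, each one gets roughly:
print(f"~{memory_bytes / synapses:.1f} bytes per synapse")   # ~15.8 bytes

# Running 100-1,000x slower than real time, one simulated second costs:
for slowdown in (100, 1_000):
    print(f"{slowdown}x slowdown: 1 s of 'brain time' ~= {slowdown / 60:.1f} min of wall-clock")
```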

Finally, here is an excellent presentation by Dr. Modha that outlines in simple terms what they are trying to do with DARPA SyNAPSE, which is to build a human-scale brain by 2018.

DARPA Red Balloon Challenge – Social Media Information or Disinformation?

December 5th, 2009 Comments off

The DARPA Red Balloon project launched 10 weather balloons across the USA this morning in a well-publicized effort to gauge the power of social media in completing the task of finding all the balloons and reporting their latitude and longitude coordinates back to DARPA. The first person or team to do that wins $40,000.

Many teams have sprung up across the country and are acting competitively – I think probably because of the large payout – making the project very different from a simple test of crowdsourcing in which the social media "universe" might work together for the fun of the game, reporting the coordinates publicly. As of 2:40 PM EST we have no winner, and I can't find a single online reference to the latitude and longitude of even one balloon.

Secretiveness appears to be trumping the social media crowdsourcing here, so I’m not sure DARPA is measuring things as advertised – though maybe they also wanted to look at the deception / competition angle.

More from my post at JoeDuck.com:

DARPA – the advanced technology research wing of the US Military – is always coming up with the most fun research and today’s Red Balloon social media experiment is no exception to that rule.

Ten huge red weather balloons were launched this morning at 10 am EST, and DARPA will pay $40,000 to the first team or person that can identify all the balloons by number and latitude / longitude.

Now, in my view as a social media expert (aka a web surfer), DARPA's payout of $40,000 is distorting the experiment in a confusing way, encouraging secretiveness and deception rather than cooperation. That may be intentional, but I think they wanted people to "really try" and wrongly felt this was the best way to do it. All of the serious efforts I've seen so far are actually *discouraging* people from using the power of social media to find the balloons, instead asking them to email or phone in sightings and then in some cases share in the proceeds, in other cases promising to give them to charity.

DARPA should consider repeating this experiment as a TWITTER crowdsource where there is NO money offered and each report is posted at Twitter where the crowd can sort the fakes from the real data.    I think that task would likely only take minutes rather than the hours the current project appears to need to get a complete result from the secretive teams.

Here are more stories  about the DARPA Red Balloons:

Wall Street Journal: Spot 10 Balloons, Win $40,000

Gizmodo:  DARPA’s Giant Red Balloons Officially at Large

Neuroscience Expert Dr. Henry Markram on the IBM “Cat Brain” Simulation: “IBM’s claim is a HOAX”

November 25th, 2009 Comments off

Editor's Note: We're hoping for more information from Dr. Modha, who is also welcome to write a guest post here at Technology Report.

—— Guest Post by Dr. Henry Markram of the Blue Brain Project ——

IBM’s claim is a HOAX.

This is a mega public relations stunt – a clear case of scientific deception of the public. These simulations do not even come close to the complexity of an ant, let alone that of a cat. IBM allows Modha to mislead the public into believing that they have simulated a brain with the complexity of a cat – sheer nonsense.

Here are the scientific reasons why this is a hoax and misleading PR stunt:

How complex is their model?
They claim to have simulated over a billion neurons interacting. Their so-called "neurons" are the tiniest of points you can imagine, a microscopic dot. Over 98% of the volume of a neuron is branches (like a tree). They just cut off all the branches and roots and took a point in the middle of the trunk to represent an entire neuron. In real life, each segment of the branches of a neuron contains dozens of ion channels that powerfully control the information processing in a neuron. They have none of that.

Neurons contain tens of thousands of proteins that form a network with tens of millions of interactions. These interactions are incredibly complex and would require solving millions of differential equations. They have none of that. Neurons contain around 20,000 genes that produce products called mRNA, which build the proteins. The way neurons build proteins and transport them to all the corners of the neuron where they are needed is an even more complex process, one which also controls what a neuron is, its memories, and how it will process information. They have none of that.

They use an alpha function (up fast, down slow) to simulate a synaptic event. This is a completely inaccurate representation of a synapse. There are at least 6 types of synapses that are highly non-linear in their transmission (i.e. they transform inputs rather than only transmit them). In fact you would need tens of thousands of differential equations to simulate one synapse; synapses are extremely complex molecular machines. They simulated none of this.

There are complex differential equations that must be solved to simulate the ionic flow in the branches, the ion channel biophysics, the protein-protein interactions, and the complete biochemical and genetic machinery, as well as the synaptic transmission between neurons – hundreds of thousands more differential equations. They have none of this. Then there are glia – 10 times more numerous than neurons – and the blood supply, and more and more. These "points" they simulated, and the synapses they use for communication, are literally millions of times simpler than a real cat brain. So they have not even simulated a cat's brain at more than one millionth of its complexity.
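Editor's note: for readers unfamiliar with the alpha function Markram mentions, here is a minimal sketch of that synaptic kernel: a fast rise to a peak at t = tau followed by a slower decay. The parameter values are illustrative placeholders, not the ones used in the IBM simulation.

```python
import numpy as np

def alpha_synapse(t_ms, g_max=1.0, tau_ms=5.0):
    """Alpha-function synaptic conductance after a spike at t = 0:
    g(t) = g_max * (t / tau) * exp(1 - t / tau), which rises quickly
    and decays more slowly, peaking at t = tau. Values are placeholders."""
    t = np.asarray(t_ms, dtype=float)
    g = g_max * (t / tau_ms) * np.exp(1.0 - t / tau_ms)
    return np.where(t >= 0.0, g, 0.0)

t = np.linspace(0.0, 50.0, 501)          # 50 ms after the presynaptic spike
g = alpha_synapse(t)
print(f"peak conductance {g.max():.2f} at t = {t[g.argmax()]:.1f} ms")
```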

Is it nonetheless the biggest point neuron simulation ever run?
No. These people simulated 1 billion points interacting. They used a formulation to model the summing up and threshold spiking of the "points" called the Izhikevich formulation (an extremely simple equation). Eugene Izhikevich himself already ran a simulation in 2005 with 100 billion such points interacting, just for the fun of it (over 60 times larger than Modha's simulation). That simulation ran on a cluster of desktop PCs and is something any graduate student can run. This is no technical achievement, and certainly not even a record number of point neurons. That model exhibited oscillations, but that always happens, so even simulating 100 billion such points interacting is light years away from a brain.
see: http://www.izhikevich.org/human_brain_simulation/Blue_Brain.htm#Simulation%20of%20Large-Scale%20Brain%20Models
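Editor's note: the Izhikevich formulation referred to above really is compact. A minimal single-neuron sketch follows, using the standard regular-spiking parameters from Izhikevich's 2003 paper and simple Euler integration; it is an illustration only, not code from either the IBM or the Izhikevich simulations.

```python
def izhikevich_spikes(current, a=0.02, b=0.2, c=-65.0, d=8.0,
                      dt=0.5, t_max_ms=1000.0):
    """One Izhikevich point neuron under constant input current,
    integrated with simple Euler steps. a, b, c, d are the standard
    regular-spiking values; this is an editor's illustration only."""
    v, u = c, b * c
    spike_times = []
    for i in range(int(t_max_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike: record it, then reset
            spike_times.append(i * dt)
            v, u = c, u + d
    return spike_times

print(f"{len(izhikevich_spikes(10.0))} spikes in 1 s of simulated time")
```

Two coupled first-order equations and a reset rule per neuron is the entire model, which is Markram's point about how little biological detail a "point neuron" carries.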

Is the simulator they built a big step?
Not even close. There are numerous proprietary and peer-reviewed neurosimulators (e.g., NCS, pNEURON, SPLIT, NEST) out there that can handle very large parallel models that are essentially only bound by the available memory. The bigger the machine you have available, the more neurons you can simulate. All these simulators apply optimizations for the particular platform in order to make optimal use of the available hardware. Without any comparison to existing simulators, their publication is a non-peer reviewed claim.

Did they learn anything about the brain?
They got very excited because they saw oscillations. Oscillations are an obligatory artifact that one always gets when many points interact. These findings that they claim on the neuroscience side may excite engineers, but not neuroscientists.

Why did they get the Gordon Bell Prize?
They submitted a non-peer reviewed paper to the Gordon Bell Committee and were awarded the prize almost instantly after they made their press release. They seem to have been very successful in influencing the committee with their claim, which technically is not peer-reviewed by the respective community and is neuroscientifically outrageous.

But is there any innovation here?
The only innovation here is that IBM has built a large supercomputer – which is irrelevant to the press release.

Why did IBM let Modha make such a deceptive claim to the public?
I don't know. Perhaps this is a publicity stunt to promote their supercomputer. The supercomputer industry is suffering from the financial crisis and they are probably desperate to boost their sales. It is so disappointing to see this truly great company allow the deception of the public on such a grand scale.

But have you not said you can simulate the human brain in 10 years?
I am a biologist and neuroscientist who has studied the brain for 30 years. I know how complex it is. I believe that with the right resources and the right strategy it is possible. We have so far only simulated a small part of a rodent brain at the cellular level, and I have always been clear about that.

Would other neuroscientists agree with you?
There is no neuroscientist on earth that would agree that they came even close to simulating the cat’s brain – or any brain.

But did Modha not collaborate with neuroscientists?
I would be very surprised if any of the neuroscientists he may have had in his DARPA consortium realized he was going to make such an outrageous claim. I can't imagine that the San Francisco neuroscientists knew he was going to make such a stupid claim. Modha himself is a software engineer with no knowledge of the brain.

But did you not collaborate with IBM?
I was collaborating with IBM on the Blue Brain Project at the very beginning because they had the best available technology to faithfully allow us to integrate the diversity and complexity found in brain tissue into a model. This for me is a major endeavor to advance our insights into the brain and drug development. Two years ago, when the same Dharmendra Modha claimed the "mouse-scale simulations", I cut all neuroscience collaboration with IBM, because this is an unethical claim and it deceives the public.

What IBM allowed Modha to do here is not only wrong, but outrageous. They deceived millions of people.

Henry Markram
Blue Brain Project

IBM / DARPA SyNAPSE announce new brain simulation at Supercomputing Conference

November 18th, 2009 Comments off

Update: The reports of this breakthrough at a 'cat brain' level may be quite misleading or exaggerated. I'm in contact with Henry Markram, a leading brain researcher spearheading the "Blue Brain" simulation in Switzerland, and waiting for his permission to post his concerns about the claims from IBM researchers.

At the Supercomputing Conference SC09 in Portland, Oregon, IBM announced a spectacular advance in our ability to simulate cognitive activity with machines – a brain simulation that approximates a cat brain in complexity.

We have profiled the SyNAPSE project here at Technology Report thanks to a guest post by one of those working there. This new development is a remarkable advance given that SyNAPSE has been going strong for under one year. With cat-brain complexity under its belt, it appears to be only a matter of a few more years before the project models interactions at the scale of the human brain.

The most provocative idea about brain modelling is that these models will at some point attain human-like consciousness along with the ability to communicate with humans and (hopefully) cooperate with us in problem solving. No longer just a science fiction topic, this potential “explosion of intelligence” relates to one of the hottest topics in technology – the Singularity.

More on the IBM BlueMatter project from:

Forbes
Popular Mechanics

US Military killed the Biologically-Inspired Cognitive Architectures (BICA) Project without explanation

August 27th, 2009 Comments off

It is somewhat tempting to think like a conspiracy theory buff and suggest that the Biologically-Inspired Cognitive Architectures project “BICA” – a major effort to create artificial intelligence – has succeeded and gone off the record rather than been cancelled by the US Government.

However, the idea that BICA has simply been "cancelled" in favor of newer approaches seems far more likely, especially given the focus of the DARPA SyNAPSE project we've discussed here at Technology Report several times before. It appears that the more general and decentralized approach of BICA has been replaced by the more collaborative and engineered approach taken in SyNAPSE.

The Defense Advanced Research Projects Agency (DARPA) is one of the world's best-funded advanced technology research groups. DARPA's most impressive accomplishment to date has been to fund the prizes that inspired several university groups to create fully autonomous vehicles that can navigate both city traffic and complex off-road tracks without any human control.

BICA Project:
http://www.darpa.mil/IPTO/programs/bica/bica_phase1.asp

DARPA SyNAPSE Project:

DARPA SyNAPSE Project Summary

July 23rd, 2009 1 comment

Today we have a guest post, republished with permission from Max over at the "Neurdons" blog, which is written by a group working on the DARPA SyNAPSE project we have discussed here before. SyNAPSE seeks to create a fully functional artificial intelligence.

This piece was written by Ben Chandler, an AI researcher with the SyNAPSE project:

About SyNAPSE

First the facts: SyNAPSE is a project supported by the Defense Advanced Research Projects Agency (DARPA). DARPA has awarded funds to three prime contractors: HP, HRL and IBM. The Department of Cognitive and Neural Systems at Boston University, from which the Neurdons hail, is a subcontractor to both HP and HRL. The project launched in early 2009 and will wrap up in 2016 or when the prime contractors stop making significant progress, whichever comes first. 'SyNAPSE' is a backronym and stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The stated purpose is to "investigate innovative approaches that enable revolutionary advances in neuromorphic electronic devices that are scalable to biological levels."

SyNAPSE is a complex, multi-faceted project, but it traces its roots to two fundamental problems. First, traditional algorithms perform poorly in the complex, real-world environments in which biological agents thrive; biological computation, in contrast, is highly distributed and deeply data-intensive. Second, traditional microprocessors are extremely inefficient at executing highly distributed, data-intensive algorithms. SyNAPSE seeks both to advance the state of the art in biological algorithms and to develop a new generation of nanotechnology necessary for the efficient implementation of those algorithms.

Looking at biological algorithms as a field, very little in the way of consensus has emerged. Practitioners still disagree on many fundamental aspects. At least one relevant fact is clear, however: biology makes no distinction between memory and computation. Virtually every synapse of every neuron simultaneously stores information and uses that information to compute. Standard computers, in contrast, separate memory and processing into two nice, neat boxes. Biological computation assumes these boxes are the same thing. Understanding why this mismatch is such a problem requires stepping back to the core design principles of digital computers.
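A toy way to picture that difference (an editor's sketch, not SyNAPSE code): below, each "synapse" both stores its weight and computes with it, updating it in place, whereas a conventional program would keep the weights in a separate memory array and ship them across a bus to the processor on every pass.

```python
class Synapse:
    """Toy synapse: memory (the weight) and computation live together."""
    def __init__(self, weight: float):
        self.weight = weight

    def transmit(self, pre: float) -> float:
        # Computation happens where the weight is stored.
        return self.weight * pre

    def adapt(self, pre: float, post: float, rate: float = 0.01) -> None:
        # Simple Hebbian update, also applied in place.
        self.weight += rate * pre * post

# One postsynaptic "neuron" with three incoming synapses
synapses = [Synapse(0.5), Synapse(-0.2), Synapse(0.8)]
pre_activity = [1.0, 0.0, 1.0]

post = sum(s.transmit(x) for s, x in zip(synapses, pre_activity))
for s, x in zip(synapses, pre_activity):
    s.adapt(x, post)

print(post, [round(s.weight, 3) for s in synapses])
```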

The vast majority of current-generation computing devices are based on the Von Neumann architecture. This core architecture is wonderfully generic and multi-purpose, attributes which enabled the information age. The Von Neumann architecture comes with a deep, fundamental limit, however. A Von Neumann processor can execute an arbitrary sequence of instructions on arbitrary data, enabling reprogrammability, but the instructions and data must flow over a limited-capacity bus connecting the processor and main memory. Thus, the processor cannot execute a program faster than it can fetch instructions and data from memory. This limit is known as the "Von Neumann bottleneck."
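A rough, generic illustration of why the bottleneck bites (editor's sketch; the hardware numbers are round placeholder figures, not a specification of any particular machine): if every operand has to cross the memory bus, the bus rather than the arithmetic units sets the performance ceiling.

```python
# Editor's sketch of roofline-style arithmetic for a bus-limited workload.

peak_flops    = 100e9   # ~100 GFLOP/s of raw arithmetic capability
bus_bandwidth = 20e9    # ~20 GB/s between processor and main memory
bytes_per_op  = 12      # e.g. read two 4-byte operands, write one result

# If every operand must travel over the bus, throughput is capped at:
memory_bound_flops = bus_bandwidth / bytes_per_op

print(f"arithmetic ceiling: {peak_flops / 1e9:.0f} GFLOP/s")
print(f"memory ceiling:     {memory_bound_flops / 1e9:.1f} GFLOP/s")
print(f"ALUs sit idle ~{100 * (1 - memory_bound_flops / peak_flops):.0f}% "
      "of the time on a fully bus-limited workload")
```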

In the last thirty years, the semiconductor industry has been very successful at avoiding this bottleneck by exponentially increasing clock speed and transistor density, as well as by adding clever features like cache memory, branch prediction, out-of-order execution and multi-core architecture. The exponential increase in clock speed allowed chips to grow exponentially faster without addressing the Von Neumann bottleneck at all. From the user perspective, it doesn't matter if data is flowing over a limited-capacity bus if that bus is ten times faster than that in a machine two years old. As anyone who has purchased a computer in the last few years can attest, though, this exponential growth has already stopped. Beyond a clock speed of a few gigahertz, processors dissipate too much power to use economically.

Cache memory, branch prediction and out-of-order execution more directly mitigate the Von Neumann bottleneck by holding frequently-accessed or soon-to-be-needed data and instructions as close to the processor as possible. The exponential growth in transistor density (colloquially known as Moore's Law) allowed processor designers to convert extra transistors directly into better performance by building bigger caches and more intelligent branch predictors or re-ordering engines. A look at the processor die for the Core i7, or the block diagram of the Nehalem microarchitecture on which the Core i7 is based, reveals the extent to which this is done in modern processors.

Multi-core and massively multi-core architectures are harder to place, but still fit within the same general theme. Extra transistors are traded for higher performance. Rather than relying on automatic mechanisms alone, though, multi-core chips give programmers much more direct control of the hardware. This works beautifully for many classes of algorithms, but not all, and certainly not for data-intensive bus-limited ones.

Unfortunately, the exponential transistor density growth curve cannot continue forever without hitting basic physical limits. At that point, Von Neumann processors will cease to grow appreciably faster, and users won't need to keep upgrading their computers every couple of years to stave off obsolescence. Semiconductor giants will be left with only two basic options: find new high-growth markets or build new technology. If they fail at both of these, the semiconductor industry will cease to exist in its present, rapidly-evolving form and migrate towards commoditization. Incidentally, the American economy tends to excel at innovation-heavy industries and lag other nations in commodity industries. A new generation of microprocessor technology means preserving American leadership of a major industry. Enter DARPA and SyNAPSE.

Given the history and socioeconomics, the “Background and Description” section from the SyNAPSE Broad Agency Announcement is much easier to unpack:

Over six decades, modern electronics has evolved through a series of major developments (e.g., transistors, integrated circuits, memories, microprocessors) leading to the programmable electronic machines that are ubiquitous today. Owing both to limitations in hardware and architecture, these machines are of limited utility in complex, real-world environments, which demand an intelligence that has not yet been captured in an algorithmic-computational paradigm. As compared to biological systems for example, today’s programmable machines are less efficient by a factor of one million to one billion in complex, real-world environments. The SyNAPSE program seeks to break the programmable machine paradigm and define a new path forward for creating useful, intelligent machines.

The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine technology that is scalable to biological levels. Programmable machines are limited not only by their computational capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations. Since real world systems are always many body problems with infinite combinatorial complexity, neuromorphic electronic machines would be preferable in a host of applications—but useful and practical implementations do not yet exist.

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require – precisely the kinds of algorithms that suffer immensely at the hands of the Von Neumann bottleneck.