Ray Kurzweil on Google’s role in the Future of Artificial Intelligence

Inventor and futurist Ray Kurzweil is widely regarded as one of the world’s top engineers working on artificial intelligence, and he is certainly the world’s top *evangelist* for AI, arguing that general AI, or thinking machines, will inevitably arise, and fairly soon, as another step down the evolutionary path of the human species.  His book “The Singularity Is Near” is the key popular work addressing what many believe will become the biggest technological theme in history: the creation of an intelligent computer capable of human-like thought processes.

Bill Gates has called Ray Kurzweil the leading thinker in the area of artificial intelligence.

Google very recently hired Kurzweil as Director of Engineering, promising a marriage of his ideas with the company that is probably best suited to fund and deploy general AI applications.

Here, in an interview at Singularity Hub, Kurzweil discusses Google’s role in the advancement of AI:

Ray Kurzweil On Future of AI at Google:

http://singularityhub.com/2013/01/10/exclusive-interview-with-ray-kurzweil-on-future-ai-project-at-google/

The Goldilocks Planets and SETI

Two extraordinary science and technology items this week.  The first is the identification of a new “Goldilocks planet” named Kepler-22b, which may have attributes so similar to Earth’s that it could harbor life that is “like us”.  This isn’t the first such planet, and researchers in this field are increasingly optimistic about finding many, many planets that could harbor life something like what evolved here on Earth.  Generally they are looking for stable temperatures that allow for the presence of liquid water, thought to be a good “breeding ground” for the building blocks of evolution: increasingly complex molecular structures that change through random mutations over long periods of time into simple and then complex organisms… like us.
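To make the “stable temperatures for liquid water” criterion concrete, here is a rough back-of-envelope sketch using the standard planetary equilibrium-temperature formula.  The stellar parameters for Kepler-22 are approximate published estimates and the Earth-like albedo is purely an assumption, so treat the result as illustrative rather than definitive.

```python
# Rough equilibrium-temperature estimate for a planet in the habitable zone:
#   T_eq = T_star * sqrt(R_star / (2 * a)) * (1 - albedo) ** 0.25
import math

R_SUN = 6.957e8   # solar radius, meters
AU = 1.496e11     # astronomical unit, meters

def equilibrium_temperature(t_star_k, r_star_m, semi_major_axis_m, albedo):
    """Blackbody equilibrium temperature, ignoring any greenhouse effect."""
    return t_star_k * math.sqrt(r_star_m / (2.0 * semi_major_axis_m)) * (1.0 - albedo) ** 0.25

# Kepler-22 (approximate): T_eff ~5518 K, radius ~0.98 R_sun; Kepler-22b orbits at ~0.85 AU.
# The Earth-like albedo of 0.29 is an assumption for illustration only.
t_eq = equilibrium_temperature(5518.0, 0.98 * R_SUN, 0.85 * AU, albedo=0.29)
print(f"Kepler-22b equilibrium temperature: ~{t_eq:.0f} K (~{t_eq - 273.15:.0f} C)")
# Roughly 262 K, about -11 C before any greenhouse warming, so a modest
# atmosphere could plausibly keep surface water liquid.
```

The answer lands near the freezing point of water, which is exactly why Kepler-22b gets the “Goldilocks” label.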

The second item is SETI, the “Search for Extraterrestrial Intelligence”, which has been around for some time but, thanks to new funding and to advances in technology and planet discovery, now has a better chance of success.  Many believe that other life is more than 99.99% likely (we are NOT that special!), but *finding it* with our primitive technologies is going to be difficult.

www.seti.org

MOUNTAIN VIEW, CA – The Allen Telescope Array (ATA) is once again searching planetary systems for signals that would be evidence of extraterrestrial intelligence. Among its first targets are some of the exoplanet candidates recently discovered by NASA’s Kepler space telescope.

The SETI array of radio telescopes will be able to focus on planets like these, hoping to pick up a signal from civilizations that may have evolved on them.


Artificial Intelligence Pioneer Marvin Minsky on the current state of AI Research

Here, from PBS, is an interesting interview with Marvin Minsky, one of the key pioneers of artificial intelligence research.  Although Minsky remains somewhat optimistic about developing a general artificial intelligence, he believes that current approaches are misguided and too narrow: researchers are looking for “a magic bullet”, and creating generalized AI is going to take a lot longer than it would if we applied a broader, architecture-level approach:

How hard is it to build an intelligent machine? I don’t think it’s so hard ….   The basic idea I promote is that you mustn’t look for a magic bullet. You mustn’t look for one wonderful way to solve all problems. Instead you want to look for 20 or 30 ways to solve different kinds of problems. And to build some kind of higher administrative device that figures out what kind of problem you have and what method to use.

Now, if you take any particular researcher today, it’s very unlikely that that researcher is going to work on this architectural level of what the thinking machine should be like. Instead a typical researcher says, “I have a new way to use statistics to solve all problems.” Or: “I have a new way to make a system that imitates evolution. It does trials and finds the things that work and remembers the things that don’t and gets better that way.” And another one says, “It’s going to use formal logic and reasoning of a certain kind, and it will figure out everything.” So each researcher today is likely to have one particular idea, and that researcher is trying to show that he or she can make a machine that will solve all problems in that way.

I think this is a disease that has spread through my profession. Each practitioner thinks there’s one magic way to get a machine to be smart, and so they’re all wasting their time in a sense.
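Minsky’s “higher administrative device” is essentially a dispatcher sitting on top of a toolbox of specialized problem solvers.  The sketch below is not his design, just a minimal illustration of the architecture he describes: a handful of narrow methods registered by the kind of problem they handle, and an administrator that classifies the incoming problem and routes it.  The solver names and the classification scheme are invented for illustration.

```python
# Minimal sketch of Minsky's "administrative device over many methods" idea.
# The solvers and problem kinds are invented; a real system would need far
# richer problem descriptions and far more than three methods.

def statistical_solver(problem):
    return f"statistical estimate for {problem['description']}"

def evolutionary_solver(problem):
    return f"evolved candidate solution for {problem['description']}"

def logical_solver(problem):
    return f"formal reasoning attempt for {problem['description']}"

# Registry mapping problem kinds to specialized methods (Minsky's "20 or 30 ways").
SOLVERS = {
    "noisy_data": statistical_solver,
    "open_ended_search": evolutionary_solver,
    "well_defined_rules": logical_solver,
}

def administrator(problem):
    """The 'higher administrative device': figure out what kind of problem
    this is, then hand it to the method suited to that kind."""
    solver = SOLVERS.get(problem.get("kind"))
    if solver is None:
        return f"no known method for problem kind '{problem.get('kind')}'"
    return solver(problem)

print(administrator({"kind": "noisy_data", "description": "speech recognition"}))
print(administrator({"kind": "well_defined_rules", "description": "chess endgame"}))
```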

I was surprised to see his lack of optimism in the face of so much progress in areas I’d argue are very generalized indeed.  The DARPA SyNAPSE project we’ve discussed several times here at Technology Report remains the best-funded AI research to date, and lead researchers seem to feel optimistic that progress there could lead to a human-scale general intelligence within several years, rather than the several decades Minsky implies the current approaches may require.

Simply put, DARPA SyNAPSE is creating a computing infrastructure to rival the human brain in terms of connectivity, betting that human-level intelligence is mostly a matter of *quantity of connections* rather than *quality of connections*.
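For a sense of what “rival the human brain in terms of connectivity” means, here is a rough back-of-envelope calculation.  The neuron and synapse counts are commonly cited order-of-magnitude estimates, and the bytes-per-synapse figure is purely an assumption for illustration.

```python
# Back-of-envelope: how big is "human-scale connectivity"?
neurons = 8.6e10            # roughly 86 billion neurons (order-of-magnitude estimate)
synapses_per_neuron = 1e4   # roughly 10,000 synapses each, on average
synapses = neurons * synapses_per_neuron

bytes_per_synapse = 4       # assumption: one 32-bit weight per synapse
storage_bytes = synapses * bytes_per_synapse

print(f"Estimated synapses: {synapses:.1e}")                          # ~8.6e14
print(f"Naive storage for one weight each: {storage_bytes / 1e15:.1f} PB")
# Even before simulating any dynamics, simply storing one number per synapse
# runs to petabytes, which is why SyNAPSE targets new hardware rather than
# conventional processors.
```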

The other very promising project for generalized AI is somewhat at odds with the SyNAPSE view.  The Blue Brain project is also a fertile development ground for general artificial intelligence, but its approach is very different, as described by Dr. Henry Markram, who directs Blue Brain.  The Blue Brain team is focusing more on “reverse engineering” animal brains and, eventually, a human brain.

Given the new level of enthusiasm and funding from DARPA, it seems likely that progress will continue at a faster pace than at any time in the past.

Ironically, I think Minsky’s early optimism in the 1950s was more justified than his current pessimism, though his observation that academics are working in too much isolation is certainly true.  I’m often surprised how many technologists don’t seem to understand even simple aspects of human biology and evolution, and vice versa.  Human intelligence, though intriguing, continues to be overrated as a phenomenon of exceptional quality.  We are somewhat arrogant creatures by evolutionary design, but that does not justify our self-importance.  Machines already surpass most of us in most compartmentalized aspects of intelligence and in many aspects of creativity (mathematics, translation and language, game playing, music, information retrieval, and so on).  It seems reasonable that what we call “consciousness” may only require massive connectivity: perhaps something as simple as creating a fast, multitasked conversation between different parts of an artificial brain.
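That last speculation is easy to caricature in code.  The toy sketch below, using Python’s asyncio, only shows the flavor of the idea: a few independent “brain regions” running concurrently and passing messages to one another over shared queues.  The module names and messages are invented, and this is an illustration of “connectivity as conversation”, not a claim about how consciousness actually works.

```python
# Toy illustration of a "multitasked conversation" between parts of an
# artificial brain: independent modules exchanging messages concurrently.
import asyncio

async def module(name, inbox, outboxes, rounds=3):
    """A toy 'brain region' that reacts to incoming messages and broadcasts replies."""
    for _ in range(rounds):
        message = await inbox.get()
        reply = f"{name} reacting to '{message[:40]}'"
        print(reply)
        for box in outboxes:
            await box.put(reply)

async def main():
    vision, language, planning = asyncio.Queue(), asyncio.Queue(), asyncio.Queue()
    await vision.put("edge detected")  # seed the conversation
    # Run three 'regions' concurrently; each talks to the others via queues.
    await asyncio.wait_for(
        asyncio.gather(
            module("vision", vision, [language, planning]),
            module("language", language, [planning]),
            module("planning", planning, [vision]),
        ),
        timeout=2,
    )

asyncio.run(main())
```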

SyNAPSE and Blue Brain Projects Update

As noted before, I think the two most promising artificial intelligence projects are Blue Brain and DARPA SyNAPSE, and I’m happy to see in “Neurdon”, a Boston blog run by some of the SyNAPSE project folks, that a few of the DARPA bucks are going toward elaborating on some of the technical goals of the SyNAPSE project:

SyNAPSE seeks not just to build brain-like chips, but to define a fundamentally distinct form of computational device. These new devices will excel at the kinds of distributed, data-intensive algorithms that complex, real-world environments require…

It’s very exciting stuff, this “build a brain” competition.  Although I think the theoretical approach taken by Blue Brain is more consistent with what little we know about how brains work, I’d guess SyNAPSE’s access to DARPA funding will give it the long-term edge in delivering a functional thinking machine in the 15-20 year time frame most artificial intelligence researchers believe we’ll need for that ambitious goal.

My optimism is greater than most because I think humans have rather dramatically exaggerated the complexity of their own feeble mental abilities by quite a … bit, and I’d continue to argue that consciousness is much more a function of quantity than of quality.

Another promising development in the artificial brain area is in Spain, where Blue Brain partner universities are working on a sister project: Cajal Blue Brain.

Artificial Intuition a key to AI?

Convergence08 was a great conference with many interesting people and ideas. Thankfully the number of crackpots was very low, and even the “new age” mysticism was at a minimum. Instead I found hundreds of authors, doctors, biologists, programmers, engineers, physicists, and other clear-thinking folks, all interested in how the new technologies will shape our world in ways more profound than anything we have experienced before.

My favorite insights came from Monica Anderson’s presentation on her approach to AI programming, which she calls “Artificial Intuition”. Unlike any other approach to AI I’m familiar with, Anderson uses biological evolution as her main analog for conceptualizing human intelligence. I see this approach as almost a *given* if you have a good understanding of humans and thought, but it’s actually not a popular conceptual framework for AI, where most approaches rely on complex algorithmic logic, logic that Anderson argues clearly did not spawn human intelligence via evolution. Yet Anderson is by no means a programming neophyte: she’s a software engineer who researched AI for some time, spent two years programming at Google, and then quit to start her own company, convinced that her AI approach is on the right track.

Anderson’s work is especially impressive to me because, as someone with a lot of work in biology under my belt (academically as well as corporeally), I have always been surprised by how poorly many computer programmers understand even rudimentary biological concepts, such as the underlying simplicity of the human neocortex and the basic principles of evolution, which I’d argue emphatically have shaped *every single aspect* of our human intelligence through a slow, clumsy, hit-and-miss process operating over millions of years. I think programmers tend to focus on mathematics and rule systems, which are great modeling tools but probably a very poor analog for intelligence. That focus has in many ways poisoned the well of understanding about what humans and other animals do when they … think … which I continue to maintain is “not all that special”.

Anderson’s conceptual framework eliminates what I see as a key impediment to creating strong AI with conventional software engineering: having to build a massively complex programmable emulation of human thought. Instead, her approach ties together many simple routines that emulate the simple ways animals have developed to interact effectively with a changing environment.
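To be clear, the sketch below is not Anderson’s Artificial Intuition; it is just a minimal illustration of the general idea described above: a set of very simple reactive routines, each responding to one feature of a changing environment, with no central world model or complex reasoning engine. The routine names and the toy environment are invented for illustration.

```python
# Minimal illustration (not Anderson's actual method) of "many simple routines"
# reacting to a changing environment instead of one complex reasoning engine.
import random

# Each routine is a tiny condition -> action rule.
def avoid_threat(env):
    return "flee" if env["threat_nearby"] else None

def seek_food(env):
    return "move_toward_food" if env["food_nearby"] else None

def rest(env):
    return "rest" if env["energy"] < 0.3 else None

ROUTINES = [avoid_threat, seek_food, rest]  # earlier routines take priority

def act(env):
    """Run the simple routines in priority order; the first one that fires wins."""
    for routine in ROUTINES:
        action = routine(env)
        if action:
            return action
    return "wander"

# A crude "changing environment": random sensory snapshots.
for step in range(5):
    env = {
        "food_nearby": random.random() < 0.4,
        "threat_nearby": random.random() < 0.2,
        "energy": random.random(),
    }
    print(step, env, "->", act(env))
```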

Combining Anderson’s approach to the programming with physical models of the neocortical column such as IBM’s Blue Brain would be my best bet for success in the AI field.

Live from Convergence08 Conference

Mountain View, California: the Convergence08 conference.  Hundreds of people are gathering here at the Computer History Museum in Mountain View for an event “Bringing Life to Big Ideas”.  The focus is on four core technologies and how they will change the world dramatically in the coming decades: Infotech, Cogtech, Nanotech, and Biotech.

I’m especially interested in hearing from Peter Norvig, Google researcher and the guy who – literally – wrote the textbook on Artificial Intelligence.

So far the organization of this conference is very impressive though it’s not clear how many are attending.    The group keynote begins in about 20 minutes.