Right, funding makes a big difference, but even if we spend part of our days serving bean counters, does it control the direction of scientific thought?  I think it's often the other way around too: what scientists find interesting to explore is what they're able to sell.  In that regard, helping government, and others, learn what to expect from uncontrolled systems could be just as interesting and marketable as offering limited ways to control them.  You could present it as advances in 'steering', like how to read ahead on the curves so your mid-course corrections can be early, small, and graceful rather than late, large, and clumsy... but then, government does seem to enjoy the latter so very much, perhaps we couldn't persuade them to give it up! :)
Artificial intelligence as a field "talks about" just about everything.  We mustn't confuse what the funding agencies demand (they, after all, call the tune in most instances about what direction research will take) with what the scientists would wish, or even talk about quite publicly.  If you read the presidential essays of each new president of the Association for the Advancement of Artificial Intelligence, the main professional society of AI, you will see wonderful proposals for what the field might be and how it might be done.  DARPA wants reliable robot cars, thanks anyway.

There's a melancholy moment at the end of the film where Marvin Minsky talks about the grand ideas of the founding fathers of AI, and the way the field has instead become fixated on incremental improvements in performance--because that's where the money is.

I've noticed that people often scoff because AI has never mounted a real challenge at the game of Go.  I agree that Go is wonderfully complex, difficult, a tough nut to crack.  But research toward an automatic Go machine has been the sole province of non-funded amateurs for sweet forever.  It's possible--not necessarily guaranteed--that a major, well-funded effort might crack the problem.  Unfortunately, nobody who has any money can imagine what use it would be.

So for big ideas: for the first fifty years of AI, the dream was to build a killer chess machine.  Why?  Because this was considered the sine qua non of intelligent behavior.  Never mind that you wouldn't particularly want a chess master as your dinner partner; this reflected our view of what intelligence was at the time.  We have our killer chess machine and we (and the chess players, Kasparov says) have learned a lot from the effort.  

But the grand goal for the next fifty years is a robot soccer team that will defeat a human team in the World Cup.  Think of what this means: planning, cooperating with other autonomous agents, kinetic intelligence, real-time calculations, and so forth.  It seems to me a worthy successor to the chess champion.  If I'm lucky, it will happen sooner than fifty years, and I'll get to see it for myself.  If not, not.

And, FWIW, this idea for a grand challenge bubbled up from workers--young workers--in the field, and was not proposed by a funding agency.  Other grand ideas are being pursued on a shoestring by other young researchers.  I can talk about them, OR--you can buy the new edition of my Machines Who Think, which addresses some of these contemporary issues.

There will be more to talk about when I show the film in December.

Pamela

"For some reason the most vocal Christians among us never mention the Beatitudes.  But with tears in their eyes they demand that the Ten Commandments be posted in public places.  And of course that's Moses, not Jesus.  I haven't heard one of them demand that the Sermon on the Mount, the Beatitudes, be posted anywhere."

 Kurt Vonnegut, "A Man Without A Country"


============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
