Well, robotics has typically been better funded than AI, a fact that I attribute to many people's intuitive preference for paying to build physical stuff rather than "just software" ...
I admit I'm not a professional salesman, but OTOH I've been keeping a small business with ~20 staff afloat for 7 years, so I'm not totally ignorant in that domain either. I'd be curious to hear your thoughts on how to sell the idea of an artificial worm.

I note that funding for Alife peaked 10-15 years ago.

Peter Voss raised $$ for A2I2, so far as I know, largely from investors he had previously known in his "past life" as a successful non-AI entrepreneur. In other words, I believe it was largely his proven business experience that enabled him to raise substantial angel investor funds for his AI project. Furthermore, his pitch and biz plan as I understand it involve fairly near-term practical applications that are not heavily based on transfer learning, but rather focused on supplying one domain-specific functionality (which has not yet been disclosed ;-)

ben

On Thu, Dec 18, 2008 at 11:18 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

> Ben,
>
> For the record yet again, I certainly believe *robotic* AGI is possible - I disagree only with the particular approaches I have seen.
>
> I disagree re the importance/attractiveness of achieving "small" AGI. Hey, just about all animals are v. limited by comparison with humans in their independent learning capacities and motivation. But if anyone could achieve something with even the limited generality/domain-crossing power of, say, a worm, it would be a huge thing. If you can dismiss that, I can tell you, with my marketing hat on, that you have a limited understanding of how to sell here. IMO small AGI is an easy and exciting sell - provided you have a reasonable idea to offer. (Isn't some kind of small AGI v. roughly - from the little I've gathered - what Voss is aiming for?)
>
> Mike,
>
> The lack of AGI funding can't be attributed solely to its risky nature, because other highly costly and highly risky research has been consistently funded.
> For instance, a load of $$ has been put into building huge particle accelerators, in the speculative hope that they might tell us something about fundamental physics.
>
> And, *so* much $$ has been put into parallel processing and various supercomputing hardware projects ... even though these have really contributed little, and nearly all progress, in almost every domain, has been made using commodity computing hardware.
>
> Not to mention various military-related boondoggles like the hafnium bomb ... which never had any reasonable scientific backing at all.
>
> Pure theoretic research in string theory is funded vastly more than pure theoretic research in AGI, in spite of the fact that string theory has never made an empirical prediction and quite possibly never will, and has no near- or medium-term practical applications.
>
> I think there are historical and psychological reasons for the bias against AGI funding, not just a rational assessment of its risk of failure.
>
> For one thing, people have a strong bias toward wanting to fund the creation of large pieces of machinery. They just look impressive. They make big scary noises, and even if the scientific results aren't great, you can take your boss on a tour of the facilities and they'll see Multiple Wizzy-Looking Devices.
>
> For another thing, people just don't *want* to believe AGI is possible -- for similar emotional reasons to the reasons *you* seem not to want to believe AGI is possible. Many people have a nonscientific intuition that mind is too special to be implemented in a computer, so they are more skeptical of AGI than of other risky scientific pursuits.
>
> And then there's the history of AI, which has involved some overpromising and underdelivering in the 1960s and 1970s -- though I think this factor is overplayed. After all, plenty of Big Physics projects have overpromised and underdelivered.
> The Human Genome Project, wonderful as it was for biology, also overpromised and underdelivered: where are all the miracle cures that were supposed to follow the mapping of the genome? The mapping of the genome was a critical step, but it was originally sold as being more than it could ever have been ... because biologists did not come clean to politicians about the fact that mapping the genome is only the first step in a long process toward understanding how the body generates disease (first the genome, then the proteome, the metabolome, systems biology, etc.).
>
> Finally, your analysis that AGI funding would be easier to achieve if researchers focused on transfer learning among a small number of domains seems just not accurate. I don't see why transfer learning among 2 or 3 domains would be appealing to conservative, pragmatics-oriented funders. I mean:
>
> -- on the one hand, it's not that exciting-sounding, except to those very deep in the AI field
>
> -- also, if your goal is to get software that does 3 different things, it's always going to seem easier to just fund 3 projects to do those 3 things specifically, using narrowly specialized methods, instead of making a riskier investment in something more nebulous like transfer learning
>
> I think the AGI funding bottleneck will be broken either by
>
> -- some really cool demonstrated achievement [I'm working on it!! ... though it's slow with so little funding...]
>
> -- a nonrational shift in attitude ... I mean, if string theory and supercolliders can attract $$ in the absence of immediate utility or demonstrated results, so can AGI ... and the difference is really just one of culture, politics and mass psychology
>
> or a combination of the two...
> ben
>
> On Thu, Dec 18, 2008 at 6:02 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>> Ben: Research grants for AGI are very hard to come by in the US, and from what I hear, elsewhere in the world also
>>
>> That sounds like: no academically convincing case has been made for pursuing not just long-term AGI & its more grandiose ambitions (which is understandable / obviously v. risky) but ALSO its simpler ambitions, i.e. making even the smallest progress towards *general* as opposed to *specialist/narrow* intelligence -- producing a machine, say, that could cross just two or three domains. If the latter is true, isn't that rather an indictment of the AGI field?

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com