Ben,

Good question. Firstly, I learn a lot here, for which I'm very grateful. But your 
question is: why deal with people so opposed to you? Very broadly, the reason 
is that the people most opposed to you (provided they're intelligent) are exactly 
those who force you to articulate your ideas most precisely and develop them 
most fully. (And that cuts, or should cut, both ways here.)

More specifically, AGI-ers - as I have in part explained - are almost perfect 
representatives of a dying culture: rational culture, which has been dominant 
since the Greeks and which believes that intelligence = rationality. 
Technically, rationality embraces logic, maths and language/NLP - and, I guess, 
programming, period - and is epitomised, educationally, by the IQ test. The 
whole idea that intelligence depends on mastering "the 3 R's" is consistent 
with, and the progenitor of, rational AI.

Rational culture is about to be surpassed in the next ten years by creative 
culture, in which intelligence will come to be seen as primarily creative and 
only secondarily rational, and imagination will become dominant. (The 
confluence of the current world crisis/changing world order and the rise of 
multimedia (esp. video)/internet culture is loosely parallel to the breakup 
500 years ago of the feudal order with the rise of the printed book.)

Creative culture is essential to solving the problems of AGI - AI/AGI are 
getting nowhere precisely because rationality cannot solve the problems of - 
and is the complete *antithesis* of - creativity.

Being here, among other things, has helped me to articulate and define this 
clash. And if I hadn't been here, I wouldn't have had a very recent idea. You 
see, much as you indicate en passant in your book, the wider culture has always 
known that rationality and creativity are opposed. Just about everyone who 
writes about the psychology of creativity in any way opposes it to logical 
thinking. But none of this cuts any ice with AI-ers, who just can't see this 
even if the rest of the world can... So the pressure to communicate with 
people so opposed gave me the idea that one can at once define creativity (and, 
conversely, rationality) *formally* - in their/your own terms - logically, 
mathematically, and in terms of NLP - so that there can be no 
misunderstandings. And one can.

And in a while, when it's more worked through, I'll explain the idea here.

Anyway, to answer you simply: conflict is very fruitful, if you embrace it. 
(Jerry Rubin expounded this POV well in Do It!)

Ben: Mike,

I have a "personal" question for you

It seems to me that

a)
You think almost everyone on this list is profoundly misguided in their 
research direction, and in their understanding of the deeper issues underlying 
their research.

b)
You are not professionally working in the AGI domain

c)
There are other areas of research, such as robotics and computer vision, for 
which you have a lot more respect than for (non-robotics-focused) AGI

d)
The vast bulk of discussion on this list deals with non-robotics-focused AGI

So, I am wondering: what is your motivation for spending time in discussions on 
this list?

I mean, there are a lot of people in the world who I think are misguided. But 
I have no motivation to spend my time participating in discussions on, say, 
string theory or fundamentalist Christian mailing lists, just to repeatedly 
remind those people that IMO they are wasting their time!!  I just leave them 
to their own business, and am happy enough with them so long as they leave me 
to mine (though of course I do need to compete with them for resources in some 
contexts...).

I'm not meaning to be aggressive here; I'm genuinely curious.

Do you think you're going to change our minds and make us see the error of our 
ways?  I really believe this is incredibly unlikely, because **every single 
argument you have made on this list so far** has been one that I, and probably 
most others on the list, have heard dozens of times before.  If we don't agree 
with these common arguments against non-robotics-focused AGI research, it's not 
because we haven't heard the arguments or thought about them!

I hasten to add that there are some others on this list with views closer to 
your own -- e.g. I know Bob Mottram is much more bullish on robotics 
approaches to AGI than on other approaches. However, I also note that Bob 
doesn't feel the need to repeat the reasons for his intuitions of this nature 
over and over again. ;-)

ben



  On Thu, Dec 18, 2008 at 11:18 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:


      Ben,

      For the record yet again, I certainly believe *robotic* AGI is possible - 
I disagree only with the particular approaches I have seen.

      I disagree re the importance/attractiveness of achieving "small" AGI. 
Hey, just about all animals are very limited by comparison with humans in their 
independent learning capacities and motivation. But if anyone could achieve 
something with even the limited generality/domain-crossing power of, say, a 
worm, it would be a huge thing. If you can dismiss that, then - with my 
marketing hat on - I can tell you that you have a limited understanding of how 
to sell here. IMO small AGI is an easy and exciting sell, provided you have a 
reasonable idea to offer. (Isn't some kind of small AGI very roughly - from the 
little I've gathered - what Voss is aiming for?)


      Mike,

      The lack of AGI funding can't be attributed solely to its risky nature, 
because other highly costly and high-risk research has been consistently 
funded.

      For instance, a load of $$ has been put into building huge particle 
accelerators, in the speculative hope that they might tell us something about 
fundamental physics.

      And, *so* much $$ has been put into parallel processing and various 
supercomputing hardware projects ... even though these really have contributed 
little, and nearly all progress has been made using commodity computing 
hardware, in almost every domain.

      Not to mention various military-related boondoggles like the hafnium 
bomb... which never had any reasonable scientific backing at all.

      Pure theoretic research in string theory is funded vastly more than pure 
theoretic research in AGI, in spite of the fact that string theory has never 
made an empirical prediction and quite possibly never will, and has no near or 
medium term practical applications.

      I think there are historical and psychological reasons for the bias 
against AGI funding, not just a rational assessment of its risk of failure.

      For one thing, people have a strong bias toward wanting to fund the 
creation of large pieces of machinery.  They just look impressive.  They make 
big scary noises, and even if the scientific results aren't great, you can take 
your boss on a tour of the facilities and they'll see Multiple Wizzy-Looking 
Devices.

      For another thing, people just don't *want* to believe AGI is possible -- 
for similar emotional reasons to the reasons *you* seem not to want to believe 
AGI is possible.  Many people have a nonscientific intuition that mind is too 
special to be implemented in a computer, so they are more skeptical of AGI than 
of other risky scientific pursuits.

      And then there's the history of AI, which has involved some overpromising 
and underdelivering in the 1960s and 1970s -- though I think this factor is 
overplayed. After all, plenty of Big Physics projects have overpromised and 
underdelivered. The Human Genome Project, wonderful as it was for biology, 
also overpromised and underdelivered: where are all the miracle cures that were 
supposed to follow the mapping of the genome? The mapping of the genome was a 
critical step, but it was originally sold as being more than it could ever have 
been ... because biologists did not come clean to politicians about the fact 
that mapping the genome is only the first step in a long process of 
understanding how the body generates disease (first the genome, then the 
proteome, the metabolome, systems biology, etc.).

      Finally, your analysis that AGI funding would be easier to achieve if 
researchers focused on transfer learning among a small number of domains seems 
just not accurate. I don't see why transfer learning among 2 or 3 domains 
would be appealing to conservative, pragmatics-oriented funders. I mean

      -- on the one hand, it's not that exciting-sounding, except to those very 
deep in the AI field

      -- also, if your goal is to get software that does 3 different things, 
it's always going to seem easier to just fund 3 projects to do those 3 things 
specifically, using narrowly-specialized methods, instead of making a riskier 
investment in something more nebulous like transfer learning (sketched below)
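
      To make "transfer learning among 2 or 3 domains" concrete, here is a 
minimal sketch, assuming two toy domains that share latent structure. All data, 
names and numbers here are purely illustrative, not any real project's code:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Domain A: a data-rich task (hypothetical stand-in data).
        X_a = rng.normal(size=(1000, 50))
        y_a = (X_a[:, :5].sum(axis=1) > 0).astype(int)

        # Domain B: a related, data-poor task sharing latent structure with A.
        X_b = X_a[:100] + rng.normal(scale=0.1, size=(100, 50))
        y_b = y_a[:100]

        # Step 1: learn a representation from the data-rich domain A alone.
        encoder = PCA(n_components=10).fit(X_a)

        # Step 2: a narrow learner trained only on domain B's raw features.
        narrow = LogisticRegression(max_iter=1000).fit(X_b[:50], y_b[:50])

        # Step 3: a learner that reuses the representation carried over from A.
        transfer = LogisticRegression(max_iter=1000).fit(
            encoder.transform(X_b[:50]), y_b[:50])

        print("narrow  :", narrow.score(X_b[50:], y_b[50:]))
        print("transfer:", transfer.score(encoder.transform(X_b[50:]), y_b[50:]))

      The point is only the shape of the approach: a representation learned on 
the data-rich domain A is reused on the data-poor domain B, rather than 
learning B from scratch. Whether the transfer learner actually wins on any 
given data is an empirical question.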

      I think the AGI funding bottleneck will be broken either by

      -- some really cool demonstrated achievement [I'm working on it!! ... 
though it's slow with so little funding...]

      -- a nonrational shift in attitude ... I mean, if string theory and 
supercolliders can attract $$ in the absence of immediate utility or 
demonstrated results, so can AGI ... and the difference is really just one of 
culture, politics and mass psychology

      or a combination of the two...

      ben





      On Thu, Dec 18, 2008 at 6:02 AM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:


          Ben: Research grants for AGI are very hard to come by in the US, and 
from what I hear, elsewhere in the world also

          That sounds like: no academically convincing case has been made for 
pursuing not just long-term AGI & its more grandiose ambitions (which is 
understandable/obviously very risky) but ALSO its simpler ambitions, i.e. 
making even the smallest progress towards *general* as opposed to 
*specialist/narrow* intelligence - producing a machine, say, that could cross 
just two or three domains. If the latter is true, isn't that rather an 
indictment of the AGI field?





      -- 
      Ben Goertzel, PhD
      CEO, Novamente LLC and Biomind LLC
      Director of Research, SIAI
      b...@goertzel.org

      "I intend to live forever, or die trying." 
      -- Groucho Marx


