On Feb 5, 2008 11:36 PM, Benjamin Johnston <[EMAIL PROTECTED]> wrote:

> Well, as I said before, I don't know which will directly produce general
> intelligence and which of them will fail.
<snip />

> My point, again, is that we don't know how the first successful AGI will
> work - but we can see many plausible ideas that are being pursued in the
> hope of creating something powerful. Some of these are doomed to fail; but
> we don't really know which ones they are until we try them. It doesn't seem
> fair for you to say that nobody has offered a "crux" idea, and I'd prefer
> that people follow their passions rather than insist that everybody should
> get hung up on the centuries/millennia old question of what exactly is
> intelligence.

Thank you. Your list is very informative.

I think it's worth mentioning the dangerous phenomenon you touched on here. For some reason, people get religious about their approaches: "No, my idea is better. I can't prove why yet, but it'll work." The problem with this line of reasoning (as we've all experienced) is that it ends with "Let's just not argue about which approach is better." I think we all agree that some approaches _are_ better than others. We might not agree on which ones are which, but I don't want to run away from that discussion.

You mentioned passion -- I'm passionate about solving strong AI, not about pursuing my own ideas even when they're wrong. I don't think any of us wants to waste time working on a flawed idea because nobody told us.

The other reason I think discussing this stuff is worthwhile is this: I think eventually what we all want is the same thing. We want a machine into which we plug a reward function and maybe a webcam or something, and then we can teach it to talk and think. That's what I imagine, anyway. Maybe any of the methods you talked about could be used to build that. It's like we're dreaming of inventing computers while we each work on our own CPU designs. I want to talk about how the whole computer will fit together. If the CPU is the hard part, I want as good a spec as possible, and that means knowing and discussing the surrounding infrastructure.
I don't want to work on something that could never actually power an intelligent system because I didn't think about the big picture. That's a real danger. If it's true, tell me I'm in the wrong forest.

-J

> -Benjamin Johnston
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&
> -----
