On Nov 12, 2007 1:49 PM, Mark Waser <[EMAIL PROTECTED]> wrote:

> >> I'm more interested at this stage in analogies like
> >> -- between seeking food and seeking understanding
> >> -- between getting an object out of a hole and getting an object out of
> >> a pocket, or a guarded room
> >> Why would one need to introduce advanced scientific concepts to an
> >> early-stage AGI? I don't get it...
>
> :-) A bit disingenuous there, Ben. Obviously you start with the simple
> and move on to the complex (though I suspect that the first analogy you
> cite is rather more complex than you might think) -- but to take too
> simplistic an approach that might not grow is just the "narrow AI"
> approach in other clothing.
Well, I don't think we're doing the latter, obviously. It's not as though
we are creating an AGI architecture that is overfitted to controlling
simple organisms in virtual worlds. We've created a general AGI
architecture and will then be applying it in this particular context.

> >> Hmmm.... I guess I didn't understand what you meant.
> >> What I thought you meant was, if a user asked "I'm a small farmer in
> >> New Zealand. Tell me about horses" then the system would be able to
> >> disburse its relevant knowledge about horses, filtering out the
> >> irrelevant stuff.
> >> What did you mean, exactly?
>
> That's a good, simple starting case. But how do you decide how much
> knowledge to disburse? How do you know what is irrelevant? How much do
> your answers differ between a small farmer in New Zealand, a rodeo rider
> in the West, a veterinarian in Pennsylvania, a child in Washington, a
> bio-mechanician studying gait? And horse is actually a *really* simple
> concept since it refers to a very specific type of physical object.
>
> Besides, are you really claiming that you'll be able to do this next
> year? Sorry, but that is just plain, unadulterated BS. If you can do
> that, you are light-years further along than . . . .

Well, understanding the relevant context underlying a query is a fuzzy,
not an absolute, thing. There can be varying levels of capability at doing
this. We have the basic mechanisms to enable this in NM, but during 2008
they won't perform this kind of contextualization as well as humans do. I
didn't mean to imply they would.

> >> There are specific algorithms proposed, in the NM book, for doing map
> >> encapsulation. You may not believe they will work for the task, but
> >> still, it's not fair to use the label "a miracle happens here" to
> >> describe a description of specific algorithms applied to a specific
> >> data structure.
>
> I guess that the jury will have to be out until you publicize the
> algorithms.
> What I've seen in the past are too small, too simple, and won't
> scale to what is likely to be necessary.

I disagree, but this would get into a very in-depth technical conversation
which isn't really apropos for this list.

> >> I think it has medium-sized gaps, not huge ones. I have not filled all
> >> these gaps because of lack of time -- implementing stuff needs to be
> >> balanced with finalizing design details of stuff that won't be
> >> implemented for a while anyway due to limited resources.
>
> :-) You have more than enough design experience to know that medium-size
> gaps can frequently turn huge once you turn your attention to them. Who
> are you snowing here?

Certainly they can, but I've thought about these particular gaps a lot,
and believe that's not going to happen here. But of course it **could** --
as I keep saying, completing the NM system does involve some R&D, not pure
engineering.

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
