Mike, I took your comments into consideration and have been updating my paper to make sure these problems are addressed.
See more comments below.

On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

> 1) You don't define the difference between narrow AI and AGI - or make clear why your approach is one and not the other

I removed this because my audience is AI researchers... this is AGI 101. I think it's clear that my design defines "general" as being able to handle the vast majority of things we want the AI to handle without requiring a change in design.

> 2) "Learning about the world" won't cut it - vast nos. of progs. claim they can learn about the world - what's the difference between narrow AI and AGI learning?

The difference is in what you can or can't learn about and which tasks you can or can't perform. If the AI can receive input about anything it needs to know about, in the same formats it already knows how to understand and analyze, then it can reason about anything it needs to.

> 3) "Breaking things down into generic components allows us to learn about and handle the vast majority of things we want to learn about. This is what makes it general!"
>
> Wild assumption, unproven or at all demonstrated and untrue.

You are right only that I haven't demonstrated it. I will address this in the next paper and continue adding details over the next few drafts. As a simple argument against your counterargument: if it were true that we could not understand the world using a limited set of rules or concepts, how is it that a human baby, whose design is predetermined by its DNA to interact with the world in a certain way, is able to deal with unforeseen things that were not preprogrammed? That's right: the baby was born with a set of rules that robustly allows it to deal with the unforeseen. It learns using a limited set of rules. That is equivalent to a limited set of "concepts" (i.e. rules) that would allow a computer to deal with the unforeseen.
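As a toy sketch of that argument (the component names here are invented for illustration and are not from my paper): a small, fixed vocabulary of generic components can describe a combinatorially larger space of objects, including ones never anticipated when the rules were written.

```python
from itertools import product

# A deliberately small, fixed set of generic component rules
# (hypothetical vocabulary, invented for this illustration).
COMPONENTS = {
    "shape":    ["round", "flat", "long"],
    "material": ["soft", "hard"],
    "motion":   ["still", "moving"],
}

def describe(observation):
    """Reduce any observation to its generic components.

    No per-object rule is needed: every input, familiar or not,
    is expressed in the same fixed vocabulary.
    """
    return {k: v for k, v in observation.items() if k in COMPONENTS}

# 3 * 2 * 2 = 12 describable "things" from only 7 primitive values.
all_things = list(product(*COMPONENTS.values()))

# An "unforeseen" object the system was never programmed for...
novel = {"name": "zorple", "shape": "long",
         "material": "soft", "motion": "moving"}

# ...is still fully handled by the same fixed rule set.
print(describe(novel))
```

The point of the sketch is only the combinatorics: the number of handleable situations grows multiplicatively with each generic component, while the rule set itself stays fixed.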
> Interesting philosophically because it implicitly underlies AGI-ers' fantasies of "take-off". You can compare it to the idea that all science can be reduced to physics. If it could, then an AGI could indeed take-off. But it's demonstrably not so.

No... it is equivalent to saying that the whole world can be modeled as if everything were made of matter. Oh, I forgot, that is the case :) Matter is a limited set of "concepts", yet it makes up everything we know.

> You don't seem to understand that the problem of AGI is to deal with the NEW - the unfamiliar, that which cannot be broken down into familiar categories - and then find ways of dealing with it ad hoc.

You don't seem to understand that even the things you think cannot be broken down can be.

Dave

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now