Well, the AGI's or brain's "sign" or representation system is the heart of the matter: it is what will enable the system to be adaptive (or not), and to connect hitherto separate ways of reaching, or "moving" (if it uses that concept) towards, goals.
That seems to be the way Jeff Hawkins is thinking - and he, too, interestingly seems to base all his concepts on movies (MOVING pictures as opposed to static pictures - i.e. a concept of a "dog" must be based on a moving picture of a dog, and not, as you might at first think, a static picture). I think that's almost certainly right (and note, re my cultural musings, how apt it is that all these ideas are emerging at the exact same time as we are moving into the era of the personal video - comparable to the printed book).

I don't know, though - having still only glanced at his stuff - whether he has yet made the transition from being able to recognise a "dog" to being able to recognise an "animal." Does anyone know about this? I suspect that there is a NECESSITY about the way the evolving animal brain has represented things - that, for example, you HAVE to have graphics representing the outlines of things.

Anyway, I'll wait with bated breath to hear your exposition of the fundamentals of your system. What I personally would like to hear is not so much the details of your programming, but simply the different forms that "dog", "move", and "goal" take - e.g. the word "d-o-g", an icon of a dog, a movie of a dog running, or whatever - and how one kind of representation can call up any other kind, and of course how you can COMBINE concepts. Will your system be able to form a composite movie of a dog and a cat running together from two separate movies, and how? All this and more, of course, the human brain can do.

----- Original Message -----
From: Benjamin Goertzel
To: [email protected]
Sent: Thursday, May 03, 2007 3:31 PM
Subject: Re: [agi] The University of Phoenix Test [was: Why do you think your AGI design will work?]

> how does your system, or these other systems that you are talking
> about, represent "goals", "move", "obstacle", "path"? Literally, what
> form do those concepts take within any of these systems, and what
> meanings/senses/referents are attached to them, and how?
> Do they actually use the general concept "goal" as such - as distinct,
> obviously, from having their own specific goals?

Now you are asking about "how it works", though ;-)

As noted in the available review papers on NM, Novamente uses a multi-aspect knowledge representation, in which something like "obstacle" would be represented:

-- declaratively, as nodes and links representing probabilistic relations
-- as an overall "attractor pattern" of activity across the whole node/link network of memory
-- visually, as a set of "internal movies" in NM's internal simulation

I am actually writing a paper on knowledge representation, and will post it to this list within the next couple of weeks. That should provide a good basis for discussing the question you've asked above.

> You see, if any computer system can represent those concepts as the
> human brain actually does, then I would suggest that it has at least
> half solved the problem of AGI.

Well, NM is not a brain emulator and doesn't really have the goal of emulating the human brain's knowledge representation in any detail... But I do think the human brain's KR is multi-aspect in the same general way that NM's is, as I've very roughly described above...

-- Ben

------------------------------------------------------------------------------
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&
------------------------------------------------------------------------------
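[A concrete illustration of the first aspect Ben lists - declarative nodes and links carrying probabilistic relations. This is a minimal sketch under stated assumptions: all class names, relation names, and truth-value fields below are hypothetical, invented for illustration, and are not actual Novamente code.]

```python
# Hypothetical sketch of a declarative node-and-link knowledge store.
# Names (TruthValue, Node, Link, KnowledgeStore, "inherits_from", "blocks")
# are illustrative assumptions, not Novamente's real API.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class TruthValue:
    """A probabilistic relation: estimated strength plus evidence weight."""
    strength: float    # estimated probability that the relation holds
    confidence: float  # how much evidence backs that estimate


@dataclass
class Node:
    name: str          # e.g. "dog", "obstacle", "goal"
    links: list = field(default_factory=list)


@dataclass
class Link:
    relation: str      # e.g. "inherits_from", "blocks"
    source: Node
    target: Node
    tv: TruthValue


class KnowledgeStore:
    def __init__(self):
        self.nodes = {}

    def node(self, name):
        # Create-or-fetch, so repeated mentions share one node.
        if name not in self.nodes:
            self.nodes[name] = Node(name)
        return self.nodes[name]

    def relate(self, relation, src_name, dst_name, strength, confidence):
        src, dst = self.node(src_name), self.node(dst_name)
        link = Link(relation, src, dst, TruthValue(strength, confidence))
        src.links.append(link)
        return link

    def relations_of(self, name):
        return [(l.relation, l.target.name, l.tv.strength)
                for l in self.node(name).links]


ks = KnowledgeStore()
# The dog -> animal step from the discussion above becomes a single
# probabilistic inheritance link rather than a separate "animal" detector.
ks.relate("inherits_from", "dog", "animal", 0.95, 0.9)
ks.relate("blocks", "obstacle", "path_to_goal", 0.8, 0.7)
print(ks.relations_of("dog"))  # [('inherits_from', 'animal', 0.95)]
```

On this kind of representation, the "dog"-to-"animal" transition the earlier message asks about is just link traversal with truth values; the attractor-pattern and internal-movie aspects would sit alongside this declarative layer rather than inside it.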
