Gah, sorry for the awfully late response. Studies aren't leaving me the energy to respond to e-mails more often than once in a blue moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> They would not operate at the "proposition level", so whatever
> difficulties they have, they would at least be different.
>
> Consider [curiosity]. What this actually means is a tendency for the
> system to seek pleasure in new ideas. "Seeking pleasure" is only a
> colloquial term for what (in the system) would be a dimension of
> constraint satisfaction (parallel, dynamic, weak-constraint
> satisfaction). Imagine a system in which there are various
> micro-operators hanging around, which seek to perform certain operations
> on the structures that are currently active. For example, there will be
> several micro-operators whose function is to take a representation such
> as [the cat is sitting on the mat] and try to investigate various WHY
> questions about the representation: Why is this cat sitting on this mat?
> Why do cats in general like to sit on mats? Why does this cat Fluffy
> always like to sit on mats? Does Fluffy like to sit on other things?
> Where does the phrase 'the cat sat on the mat' come from? And so on.

[cut the rest]

Interesting. This sounds like it might be workable, though of course, the exact associations and such that the AGI develops sound hard to control. But then, that would be the case for any real AGI system...

> > Humans have lots of desires - call them goals or motivations - that
> > manifest in differing degrees in different individuals, like wanting
> > to be respected or wanting to have offspring. Still, excluding the
> > most basic ones, they're all ones that a newborn child won't
> > understand or feel before (s)he gets older.
> > You could argue that they
> > can't be inborn goals, since the newborn mind doesn't have the concepts
> > to represent them and because they manifest variably in different
> > people (not everyone wants to have children, and there are probably
> > even people who don't care about the respect of others), but still,
> > wouldn't this imply that AGIs *can* be created with in-built goals? Or,
> > if such behavior can only be implemented with a motivational-system
> > AI, how does that avoid the problem of some of the wanted final
> > motivations being impossible to define in the initial state?
>
> I must think about this more carefully, because I am not quite sure of
> the question.
>
> However, note that we (humans) probably do not get many drives that are
> introduced long after childhood, and that the exceptions (sex,
> motherhood desires, teenage rebellion) could well be sudden increases in
> the power of drives that were there from the beginning.
>
> This may not have been your question, so I will put this one on hold.

Well, the basic gist was this: you say that AGIs can't be constructed with built-in goals, because a "newborn" AGI hasn't yet built up the concepts needed to represent the goal. Yet humans do seem to have built-in goals (using the term a bit loosely, since not all goals manifest in everyone), despite the fact that newborn humans haven't yet built up the concepts needed to represent those goals. It is true that many of those drives seem to begin in early childhood, but it seems to me that there are still many goals that aren't activated until after infancy, such as the drive to have children.
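(As an aside: the micro-operator idea you describe could be caricatured in a few lines of code. This is purely my own toy rendering, not your actual design - all the names here, Representation, why_this, curiosity_step, and the fixed "interest" weights standing in for weak-constraint satisfaction, are my invention for illustration.)

```python
from dataclasses import dataclass

@dataclass
class Representation:
    subject: str       # e.g. "this cat"
    category: str      # e.g. "cats"
    relation: str      # e.g. "sitting on"
    obj: str           # e.g. "this mat"
    obj_category: str  # e.g. "mats"

# Each "micro-operator" inspects the currently active structure and proposes
# follow-up questions, tagged with a rough interest weight. In the real
# system this weight would come from parallel weak-constraint satisfaction;
# here it is just a hard-coded number.
def why_this(rep):
    return [(f"Why is {rep.subject} {rep.relation} {rep.obj}?", 0.9)]

def why_general(rep):
    return [(f"Why do {rep.category} in general like {rep.relation} {rep.obj_category}?", 0.7)]

def explore(rep):
    return [(f"Does {rep.subject} like {rep.relation} other things?", 0.6)]

def curiosity_step(rep, operators, top_k=2):
    """Pool all proposed questions and keep the most interesting ones."""
    candidates = [qw for op in operators for qw in op(rep)]
    candidates.sort(key=lambda qw: qw[1], reverse=True)
    return [q for q, _ in candidates[:top_k]]

rep = Representation("this cat", "cats", "sitting on", "this mat", "mats")
questions = curiosity_step(rep, [why_this, why_general, explore], top_k=2)
# questions[0] == "Why is this cat sitting on this mat?"
```

Of course, a toy like this dodges the hard part - where the operators and weights come from - which is exactly the "hard to control" worry above.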
--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
