Er... I can see it now: an AGI designed for weather modeling
devolving into a lazy teenager ;)
--
"I don't want to be too sophisticated here, but 2007 is going
to suck, all 12 months of the calendar year"
- D.R. Horton CEO Donald Tomnitz
On Fri, 6 Jul 2007, Pei Wang wrote:
||As far as this discussion is concerned, "play" is an activity a system
||carries out for its own sake, rather than as a means to other ends. As a
||by-product, play always serves as an exercise for the relevant skills,
||as well as providing certain information about the environment (so it
||is indeed rewarded by evolution), but as soon as the system carries
||out the activity with those goals in mind, it is not playing anymore
||--- "play" must be "for fun", not "for money", "for career", etc.
||
||Even though the advantages of playing can be justified, there is also
||an obvious cost --- the time/energy/resources used in the activity.
||That is one reason why few people can justify their research if the
||goal is to design a system that can play but do nothing else. Also, an
||activity is called "play" only when it is irrelevant to the primary
||goal of the system, so Deep Blue is not "playing" in this sense when
||it plays chess.
||
||For AGI, to be able to "play", in the above sense, is not only
||necessary, but also possible, even inevitable. For a system with
||insufficient knowledge and resources, goal derivation always leads to
||"alienation", in the sense that what starts as a means becomes an end.
||If I have a goal A, and I believe it can be achieved by achieving B
||first, I'll take B as my goal, and begin to get internal reward from
||progress towards it. In the long run, B may turn out to be irrelevant
||or even opposed to A, though I had no way to completely and absolutely
||rule out this possibility at the beginning. The same is true for an AI
||system --- as long as the goal derivation process is based on
||insufficient knowledge and resources, eventually the system will have
||many derived goals that do not really serve the initial or original
||goals from which they were derived. When pursuing these derived goals,
||the system is, more or less, playing, because it is rewarded (or gets
||pleasure) from these activities themselves, rather than using them to
||achieve other goals.
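
[A toy Python sketch of the alienation dynamic described above. This is
not NARS code; the class, names, and reward rule are illustrative
assumptions only.]

    # Hypothetical illustration: a derived goal B acquires its own
    # internal reward and can keep being pursued even if it no longer
    # serves the goal A it was derived from.

    class Goal:
        def __init__(self, name, serves=None):
            self.name = name
            self.serves = serves         # goal this one was derived from
            self.internal_reward = 0.0

        def reward_progress(self, amount):
            # Progress on this goal is rewarded directly, whether or not
            # it still helps the goal it was derived from.
            self.internal_reward += amount

    a = Goal("A")               # original goal
    b = Goal("B", serves=a)     # derived goal, believed to be a means to A

    for _ in range(10):
        b.reward_progress(1.0)  # the system enjoys progress on B itself

    # With insufficient knowledge the system cannot verify that B still
    # serves A; pursuing B for its own reward is "play" in Pei's sense.
    print(b.internal_reward)    # 10.0, earned with no reference to A
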
||
||The goal derivation mechanism in NARS already works in this way. See
||my publications (http://nars.wang.googlepages.com/) for more details.
||
||Pei
||
||On 7/6/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
||> I think the purpose of play is that it allows the system to search the
||> space of possible actions in a broad yet shallow way, and to
||> characterize the landscape under various fitness criteria. So at a
||> later time, when some more serious task needs to be undertaken, the
||> system can quickly jump to an area (or areas) of the space which it
||> knows is likely to be appropriate.
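
[A rough Python sketch of play as broad-but-shallow search, as Bob
describes it: sample the action space cheaply, remember how each point
scores, and start serious work from the best-known region. All names and
the fitness function are made up for illustration.]

    import random

    def play(action_space, fitness, samples=100):
        # Broad but shallow: one cheap evaluation per sampled action,
        # building a coarse map of the fitness landscape.
        landscape = {}
        for _ in range(samples):
            action = random.choice(action_space)
            landscape[action] = fitness(action)
        return landscape

    def serious_task(landscape):
        # Later, jump straight to the point that play marked as promising.
        return max(landscape, key=landscape.get)

    actions = list(range(-50, 51))
    landscape = play(actions, fitness=lambda a: -(a - 7) ** 2)
    print(serious_task(landscape))  # near 7, found without exhaustive search
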
||>
||> There are also reward systems associated with this kind of search,
||> such that enjoyment is gained by continuing to characterize the space.
||> This reward system seems to be particularly active in humans, who are
||> always discontented and seeking to expand their envelope through
||> leisure activities or knowledge/career advancement.
||>
||> As far as I know there aren't any AI systems which "play" in a proper
||> sense. I've seen robots which appeared to be playing, but this was
||> usually just anthropomorphisation of a rather incompetent system
||> struggling to perform on a single narrow task.
||>
||>
||> On 06/07/07, a <[EMAIL PROTECTED]> wrote:
||> > >a> Sure, I can write a program to differentiate between a square and
||> > >a> a circle, but it is not AGI. I need the program to automatically
||> > >a> train and recognize different shapes.
||> > >
||> > >This is the most important question you have to ponder before
||> > >doing anything specific (and useless!).
||> > >Even if you implement something that can "automatically train itself"
||> > >to do this particular thing, would it scale to do anything? Would it
||> > >teach you something useful about a hypothetical way to implement an AGI?
||> >
||> >
||> > Harry Foundalis' thesis is too specific. It does not look like AGI.
||> > It only classifies. It does not manipulate.
||> >
||> > I just thought of a way to make my program train itself. It learns by
||> > itself by playing. Playing is exploring. Playing is a product of
||> > evolution. Playing lets you try "risky" things in order to learn.
||> > Playing is learning by trial and error. That's the perfect thing my
||> > program needs. Play is driven by a psychological addiction. But coding
||> > addiction into every subsystem in the program is too holistic. We need
||> > specialized non-emotional subsystems in order to speed it up. Emotion
||> > is a complication for AGI, because AGI has no need for emotion. But
||> > addiction is an emotion. Addiction is a motive.
||> >
||> > Initially, we need the program to do some random things, such as
||> > playing randomly. If it does a specific thing, it gets addictive
||> > "chemicals". Then it is addicted to doing that specific thing. For
||> > example, it will get addicted to solving tests if it gets addictive
||> > "chemicals" after it passes a test.
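
[A minimal Python sketch of the training scheme just described: act
randomly at first, and let each reward (the "addictive chemical") make
the rewarded action more likely to be chosen again. This is plain
reinforcement learning; the action names and the reinforcement rule are
assumptions for illustration.]

    import random

    actions = ["solve_test", "wander", "sleep"]
    strength = {a: 1.0 for a in actions}      # initial action tendencies

    def choose():
        # Pick an action with probability proportional to its strength.
        r = random.uniform(0, sum(strength.values()))
        for a in actions:
            r -= strength[a]
            if r <= 0:
                return a
        return actions[-1]

    for _ in range(1000):
        action = choose()
        if action == "solve_test":            # passing the test releases
            strength[action] += 0.1           # the "chemical": reinforce

    print(strength)  # "solve_test" now dominates: the program is "addicted"
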
||> >
||> >
||> > I believe that passing an IQ test requires AGI, so my program will
||> > have AGI if it scores high on the test.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=13344661-4bee79