>a> Sure, I can write a program to differentiate between a square and a circle,
>a> but it is not AGI. I need the program to automatically train and
>a> recognize different shapes.
>
>This is the most important question you have to ponder before
>doing anything specific (and useless!).
>Even if you implement something that can "automatically train itself"
>to do this particular thing, would it scale to do anything else? Would it
>teach you something useful about a hypothetical way to implement an AGI?
Harry Foundalis' thesis is too specific. It does not look like AGI: it only
classifies; it does not manipulate.
I just thought of a way to make my program train itself: it learns by
playing. Playing is exploring. Playing is a product of evolution. Playing lets
you try "risky" things in order to learn. Playing is learning by trial and
error. That's exactly what my program needs. Play is driven by a
psychological addiction. But coding addiction into every subsystem of the
program is too holistic; we need specialized, non-emotional subsystems in
order to speed it up. Emotion is a complication for AGI, because AGI has no
need for emotion. But addiction is an emotion. Addiction is a motive.
Initially, we need the program to do random things, such as playing randomly.
If it does a specific thing, it gets addictive "chemicals"; then it becomes
addicted to doing that specific thing. For example, it will become addicted
to solving tests if it gets the addictive "chemicals" after it passes a test.
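A rough sketch of that loop, under my own assumptions (the action names, the reward function, and the epsilon-greedy exploration strategy are all hypothetical, not anything the poster specified): the agent starts with random play, and actions that yield the "chemical" reward are reinforced until the agent is "addicted" to them.

```python
import random

def train(actions, reward_fn, episodes=1000, epsilon=0.2, seed=0):
    """Epsilon-greedy reinforcement over a fixed set of actions."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}   # learned "craving" for each action
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:      # random play: explore
            a = rng.choice(actions)
        else:                           # exploit the strongest "addiction"
            a = max(actions, key=lambda x: value[x])
        r = reward_fn(a)                # the addictive "chemical" signal
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental running mean
    return value

# Hypothetical toy task: only "solve_test" pays off.
values = train(["wander", "solve_test", "sleep"],
               reward_fn=lambda a: 1.0 if a == "solve_test" else 0.0)
print(max(values, key=values.get))
```

After training, the action with the highest learned value is the one the agent keeps choosing: the behavior it got "addicted" to. Whether this kind of loop scales beyond toy tasks is exactly the open question raised above.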
I believe that passing an IQ test requires AGI, so my program will have AGI
if it scores high on the test.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&id_secret=12839928-dd0dd9