"The idea of putting a baby AI in a simulated world where it might learn cognitive skills is appealing. But I suspect that it will take a huge number of iterations for the baby AI to learn the needed lessons in that situation"
This is definitely a serious consideration. One way to overcome it might be the inclusion of innate behaviors that steer the new mind toward activities that engender cognitive and emotional development. Babies instinctively look at faces, reach for objects, and track moving things with their eyes and, eventually, their head and neck. An AI's innate behaviors could have a built-in reward structure where, for example, successfully tracking a ball rolled across the simulated "floor" would reinforce the neural network patterns that produced the desired behavior.

On a related note, what is the nature of pleasure (reward)? Is it simply the sensation that occurs because of the neural activity/reorganization that happens when needs are fulfilled or tasks are completed successfully? If so, does pleasure correlate with increases in neural efficiency? Neurons and the networks they make up require a certain amount of reinforcement to maintain normal functioning (this is a fact, though I wish I had a reference handy to back up the assertion :). I'm guessing that pleasure is caused when reinforcement levels rise above their recent average. This would account for the facts that a) practising or doing something you like is pleasurable, b) pleasure is relative to circumstance, and c) all forms of pleasure seem to be built upon the same core sensation.

IMO this is important because it takes chemical effects out of the emotion equation; i.e., chemicals cause pleasure by activating existing reinforcement mechanisms. If I'm right, emotions are (at their most basic level) nothing but patterns in the activity of a neural-network-type system, which we "feel" because we 'are' the system's activity, not the system itself...
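To make the innate-behavior idea concrete, here is a minimal toy sketch of a built-in reward structure for ball-tracking. Everything here (the agent's gaze policy, the tolerance, the reward function names) is an illustrative assumption of mine, not an established design:

```python
# Toy sketch: an innate reward for tracking a ball rolled across a
# simulated "floor". All names and parameters are illustrative.
import random

def roll_ball(steps=20, speed=1.0):
    """Yield the ball's x position as it rolls across the floor."""
    x = 0.0
    for _ in range(steps):
        x += speed
        yield x

def innate_tracking_reward(gaze_x, ball_x, tolerance=2.0):
    """Built-in reward: positive when the gaze is near the ball, else zero."""
    return 1.0 if abs(gaze_x - ball_x) <= tolerance else 0.0

# A trivial "policy": the gaze follows the ball with some lag and noise.
# In a real system this reward would instead reinforce whatever network
# patterns produced the successful tracking behavior.
random.seed(0)
gaze = 0.0
total_reward = 0.0
for ball_x in roll_ball():
    gaze += 0.8 * (ball_x - gaze) + random.uniform(-0.5, 0.5)
    total_reward += innate_tracking_reward(gaze, ball_x)

print(total_reward)  # → 20.0: every step stays within tolerance
```

The point of the sketch is only that the reward signal is defined in advance of any learning, the way a baby's face- and motion-tracking instincts are.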
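The pleasure-as-relative-reinforcement guess above can be sketched with a running average: treat "pleasure" as the amount by which new reinforcement exceeds its recent baseline. This is my own toy formalization of the hypothesis (the class, the EMA smoothing factor, etc. are assumptions), but it reproduces the two properties claimed: a constant reward stops feeling pleasurable once it becomes the norm, and pleasure is relative to circumstance:

```python
# Toy sketch of the hypothesis: pleasure = reinforcement above its
# recent average. Names and parameters are illustrative assumptions.

class BaselineRelativePleasure:
    """Tracks reinforcement with an exponential moving average (EMA);
    pleasure is the amount by which new reinforcement exceeds it."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing  # how quickly the baseline adapts
        self.baseline = 0.0         # recent average reinforcement

    def experience(self, reinforcement):
        pleasure = reinforcement - self.baseline
        # move the running average toward the new reinforcement level
        self.baseline += self.smoothing * (reinforcement - self.baseline)
        return pleasure

mind = BaselineRelativePleasure()
# The same reward, repeated, yields less and less pleasure:
signals = [mind.experience(1.0) for _ in range(10)]
print(round(signals[0], 3), round(signals[-1], 3))  # → 1.0 0.134
# A jump above the adapted baseline is pleasurable again:
print(mind.experience(2.0) > 0)  # → True
```

Under this sketch a drug would indeed work by injecting activity into the same reinforcement mechanism, rather than by being a separate source of pleasure.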
On a practical note, if the above hypothesis is correct, it would be relatively easy to identify the signature patterns of different emotions (via PET or fMRI) and emotionally "program" an AI's reward structure to ensure that it behaves itself.

J Standley
http://users.rcn.com/standley/AI/AI.htm updated today!
see: http://users.rcn.com/standley/AI/Neural%20Processing.htm
http://users.rcn.com/standley/AI/ISL.htm