On 5/4/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
The point of most of this is that humans and an AI would need to construct an
imaginary world environment in their minds. Most people imagine a typical elephant and a typical chair, and then interact with the two as directed.
A blind person still gets her information from experience... if she reads
about an elephant, she probably pictures a big animal the size of a car, and her experience lets her know about cars and animals, and she has sat in chairs and knows how big they are.
But both of those are tied to the physical experiences that she has. You
can only get so much from the words alone, unless you have an infinite database where everything possible has been described fully.
But many things can be gathered from the text alone as well.
A VR interface would certainly be nice, but it takes a lot of time to build one and I'm not good at that area. Maybe Ben's AGI-Sim can be used by another AGI? If so we can save a lot of effort.

YKY
