> I'm not so sure. Some things are "built in", like our 4-D world and our
> 2-D eyes into it. Within that world are objects, some of which are fixed
> and some of which are not.
>
> On the other side, do those objects have arms and legs? How many? Do they
> walk on horizontal surfaces? Here we are getting into details that are
> probably better discovered than programmed.
Yes, of course I agree with these examples. As someone else said in this thread, "the current state of cognitive science only tells us that an intelligent system must have a lot of built-in knowledge in some form, and must learn a lot more; exactly what gets supplied as built-in knowledge and in what form is still a set of free parameters in the design."

I have no significant interest in approaches like Cyc that involve hand-coding of specific knowledge like "people have arms"....

On the other hand, nor do I think an AGI system necessarily has to learn stuff like:

-- visual and auditory patterns in the real world are often hierarchical

-- it's often useful to analyze co-occurring sounds and images

-- blending aspects of two useful concepts will often yield another useful concept

-- looking for patterns binding together things in the same spatiotemporal vicinity is often useful

etc. I think that a large number of biases like this are built into the human brain, and it's sensible to build analogous biases into an AGI system.... Indeed, one almost inevitably does so when crafting a practical AGI architecture...

But the question of where to stop in terms of encoding biases is one with no firm answer in terms of known science; so each AGI designer must make their own choices....

-- Ben G

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
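[Editor's illustration, not part of the original message: the "co-occurrence" and "spatiotemporal vicinity" biases listed above can be sketched as a hard-wired prior rather than something the system must learn. The following minimal Python example simply counts which event labels tend to occur within a fixed time window of each other; the event labels and the window size are invented for illustration.]

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(events, window=2.0):
    """A built-in 'spatiotemporal vicinity' bias: treat event labels
    that occur within `window` time units of each other as linked,
    and count how often each pair co-occurs.

    `events` is a list of (timestamp, label) pairs.
    """
    counts = Counter()
    # Sorting by time means t1 <= t2 within every generated pair.
    for (t1, a), (t2, b) in combinations(sorted(events), 2):
        if t2 - t1 <= window and a != b:
            counts[tuple(sorted((a, b)))] += 1
    return counts

# Hypothetical stream of visual and auditory events.
events = [(0.0, "dog-image"), (0.5, "bark-sound"),
          (5.0, "cat-image"), (5.3, "meow-sound"),
          (9.0, "dog-image"), (9.4, "bark-sound")]
print(cooccurrence_counts(events))
```

The point is only that the vicinity assumption itself (the window) is supplied by the designer; what gets linked is then discovered from the data.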
