On 4/30/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
it is in the human brain. Every concept must be a tree, which can
continually be added to and fundamentally altered.  Every symbolic concept
must be grounded in a set of graphics and images, which are provisional and
can continually be redrawn.

That plastic template, as with all concepts, is permanently open to
revision. Probably, all the visualisations of house that your brain produces

And that is how we learn language - and indeed all our knowledge about the
world - provisionally. Everyone's personal history of learning is a history
of continually having ascribed meanings corrected.

<tangent>
graphics, images, redrawn, visualizations - all indicative of a high
degree of visual-spatial thinking.  I'm curious, are your own AGI
efforts modelled on this mode of thought?  I ask because I wonder
if the machine intelligence we build will "envision" concepts in an
analogous way to our own processes.  If we (humans) currently
visualize because that part of our brain evolved the largest bandwidth
and working set out of necessity for survival, what pressure would
facilitate that evolution in the machine we build?  (or is it by
design that we model the machine after our own thought process)
</tangent>

Is the notion of a 'template' too fixed even in plastic?  Though it
requires a lot of computation, I imagine the probability would need to
be calculated in real-time at each point in context.  If the root node
of the 'house' tree were evaluated for a realtor it would weight the
leaves associated with structural information and property value more
highly than if the 'house' concept were evaluated as a sibling idea to
'home.'  Essentially every fact needs a confidence metric to determine
how well it relates to the current scope of investigation.  In the
case of double and triple entendre, we humans (sometimes) delight in
the unexpected relation across different contexts by way of a
particular word's multiple potential meanings.
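To make the idea concrete, here is a minimal sketch of what context-dependent weighting of a concept's leaves might look like. Everything here (ConceptNode, relevance, the example weights) is hypothetical illustration, not a reference to any existing system:

```python
# Hypothetical sketch: a concept node whose leaf facts carry
# per-context confidence weights, re-ranked at evaluation time.
# All names and numbers below are illustrative assumptions.

class ConceptNode:
    def __init__(self, label, facts=None):
        self.label = label
        # fact -> {context: confidence}, with a "default" fallback
        self.facts = facts or {}

    def relevance(self, context):
        """Rank this concept's facts by confidence in the given context."""
        scored = {
            fact: weights.get(context, weights.get("default", 0.0))
            for fact, weights in self.facts.items()
        }
        return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

house = ConceptNode("house", facts={
    "square footage":       {"realtor": 0.90, "default": 0.3},
    "property value":       {"realtor": 0.95, "default": 0.2},
    "feeling of belonging": {"home":    0.90, "default": 0.4},
})

# Evaluated for a realtor, the structural/value leaves dominate;
# evaluated as a sibling of 'home', the emotional leaf wins.
realtor_top = house.relevance("realtor")[0][0]   # "property value"
home_top = house.relevance("home")[0][0]         # "feeling of belonging"
```

The point is only that the ranking is computed fresh at each point in context rather than stored once in the tree - the "template" stays plastic because nothing about the ordering is fixed.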

Everyone's personal history of relationships between ideas is what
makes each of us unique.  In the elephant/chair scenario, my own
childhood of watching cartoons prevailed in visualizing a context where
an elephant in a chair was not a physics problem.  If an AGI is
raised/trained on cartoons, it will probably develop a wildly
different perspective of subjective reality than if it were trained in
a military application.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
