What I tried to do with robocore is have a number of subsystems
dedicated to particular modalities such as vision, touch, hearing,
smell and so on.  Each modality operates in a semi-independent,
self-organised way, and its function is to create stable abstractions
from the raw data of sensory experience.  So what you end up with are
learned representations (you could call them symbols or neural groups)
which can act as building blocks for experience.
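To make that concrete, here's a toy sketch of one modality subsystem
self-organising raw sensory vectors into a small set of stable
prototypes ("symbols").  This is just an illustration using online
k-means; the class and method names are hypothetical, not anything
from actual robocore code.

```python
import math
import random

class ModalitySubsystem:
    """Toy modality subsystem: clusters raw input vectors into a
    fixed set of prototype 'symbols' (illustrative only)."""

    def __init__(self, n_symbols, dim, seed=0):
        rng = random.Random(seed)
        # Start with random prototypes; they stabilise with experience.
        self.prototypes = [[rng.uniform(-1, 1) for _ in range(dim)]
                           for _ in range(n_symbols)]
        self.counts = [0] * n_symbols

    def _nearest(self, x):
        def dist(p):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, x)))
        return min(range(len(self.prototypes)),
                   key=lambda i: dist(self.prototypes[i]))

    def learn(self, x):
        """Online k-means step: pull the winning prototype toward x,
        with a learning rate that shrinks as the symbol stabilises."""
        i = self._nearest(x)
        self.counts[i] += 1
        lr = 1.0 / self.counts[i]
        self.prototypes[i] = [p + lr * (xi - p)
                              for p, xi in zip(self.prototypes[i], x)]
        return i

    def encode(self, x):
        """Map a raw input to its learned symbol (prototype index)."""
        return self._nearest(x)
```

After enough experience, nearby raw inputs map to the same symbol
index, which is the kind of stable building block the hub below can
then bind across modalities.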

Each of the main modality-specific subsystems is linked to a central
hub or switching mechanism (think of it as a big telephone exchange).
This hub can connect the partial representations into multi-modal
constellations which might be called percepts.  These percepts can be
formed by association in a bottom-up way, but they can also be
recalled by higher-level systems, so you have bi-directionality.
A memory in this architecture is a kind of reconstruction of the
original experience from the representational jigsaw.  This is a
rather Freudian way of thinking about memory, since it's an active
reconstruction which could be altered over time in the light of new
experiences.  From an information-storage point of view it's also
fairly efficient.  To recall a memory, the hub's switches are set
appropriately, which lights up the relevant areas within particular
modalities, close to the level of direct experience.
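A minimal sketch of the hub idea, again with purely hypothetical
names: a percept is a binding of per-modality symbols, and recall is
pattern completion from a partial cue back to the full constellation.
Returning a copy of the stored binding stands in for "reconstruction
rather than playback".

```python
class Hub:
    """Toy central exchange: binds co-occurring modality symbols into
    percepts and completes partial cues back into whole percepts."""

    def __init__(self):
        self.percepts = []  # each percept: dict of modality -> symbol id

    def bind(self, **symbols):
        """Associate co-occurring modality symbols into one percept."""
        self.percepts.append(dict(symbols))
        return len(self.percepts) - 1

    def recall(self, **cue):
        """Set the 'switches' matching a partial cue and light up the
        rest: return the stored percept that best overlaps the cue."""
        def overlap(p):
            return sum(1 for m, s in cue.items() if p.get(m) == s)
        best = max(self.percepts, key=overlap, default=None)
        if best is None or overlap(best) == 0:
            return None
        return dict(best)  # a reconstruction, not the original event
```

So cueing on, say, a sound symbol alone can reactivate the visual and
olfactory symbols it was bound with, which is the bottom-up/top-down
bi-directionality described above in miniature.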



On 27/02/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
>  > No one in AGI is aiming for common sense consciousness, are they?
>  >
>
>
> The OpenCog and NM architectures are in principle supportive of this kind
>  of multisensory integrative consciousness, but not a lot of thought has gone
>  into exactly how to support it ...
>
>  In one approach, one would want to have
>
>  -- a large DB of embodied experiences (complete with the sensorial and
>  action data from the experiences)
>
>  -- a number of dimensional spaces, into which experiences are embedded
>  (a spatiotemporal region corresponds to a point in a dimensional space).
>  Each dimensional space would be organized according to a different principle,
>  e.g. melody, rhythm, overall visual similarity, similarity of shape, 
> similarity
>  of color, etc.
>
>  -- an internal simulation world in which concrete remembered experiences,
>  blended experiences, or abstracted experiences could be enacted and
>  "internally simulated"
>
>  -- conceptual blending operations implemented on the dimensional spaces
>  and directly in the internal sim world
>
>  -- methods for measuring similarity, inheritance and other logical 
> relationships
>  in the dimensional spaces and the internal sim world
>
>  -- methods for enacting learned procedures in the internal sim world,
>  and learning
>  new procedures based on simulating what they would do in the internal sim 
> world
>
>
>  This is all do-able according to mechanisms that exist in the OpenCog and NM
>  designs, but it's an aspect we haven't focused on so far in NM... though 
> we're
>  moving in that direction due to our work w/ embodiment in simulation
>  worlds...
>
>  We have built a sketchy internal sim world for NM but haven't experimented 
> with
>  it much yet due to other priorities...
>
>
>  -- Ben
>
>
>

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
