On Nov 18, 2007 6:45 PM, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:

> Ben,
>
> Have you already considered what form of "multi-agent epistemic logic"
> (or whatever extension to PLN) Novamente will use to merge knowledge
> from different avatars?


Well, standard PLN handles this in principle via ContextLinks, which allow
contextual (e.g. perspectival) specialization of knowledge.

However, **control** of this kind of process is the tricky thing, really:
melding knowledge from different minds is computationally costly, and one
has to decide when one wants to do it...
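To make the idea concrete, here is a minimal sketch (not Novamente's actual API; all names and the truth-value representation are invented for illustration) of how ContextLink-style specialization lets knowledge from different avatars be merged without clashing: each assertion is kept under the context of the avatar whose perspective it reflects.

```python
# Hypothetical sketch of context-specialized knowledge merging,
# loosely in the spirit of PLN ContextLinks. Not real Novamente code.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextLink:
    context: str    # e.g. the avatar whose perspective this is
    statement: str  # the contextualized assertion
    strength: float # crude stand-in for a PLN truth value

def merge(*knowledge_bases):
    """Merge several avatars' beliefs into one store.

    Identical statements from different avatars remain distinct,
    because each is specialized to its own context.
    """
    merged = {}
    for kb in knowledge_bases:
        for link in kb:
            merged[(link.context, link.statement)] = link.strength
    return merged

alice = [ContextLink("avatar:Alice", "ball is red", 0.9)]
bob   = [ContextLink("avatar:Bob",   "ball is red", 0.2)]
kb = merge(alice, bob)
print(len(kb))  # prints 2: the same statement survives under both contexts
```

The control problem mentioned above is exactly what this sketch leaves out: deciding *when* to run such a merge, and over which contexts, is the expensive part.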



>
> Related, do you consider some form of "privacy policy", or do you put
> the responsibility for not leaking secrets on avatars' owners?
>

For our initial virtual animals: there is a collective memory among all the
AI animals, and also an individual memory for each animal, and there's a
policy for deciding what goes in which...

In later products we will likely allow more flexibility, and let users
control how much they want their agents to share and/or take from the
collective agent unconscious.
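A toy sketch of such a "what goes in which" policy (the routing rule, names, and threshold here are entirely invented, not the product's actual policy): memories that don't concern a particular owner and are generally useful go to the collective store; everything else stays private to the animal.

```python
# Hypothetical routing policy for collective vs. individual memory.
# The 0.7 usefulness threshold and the owner-privacy rule are invented
# for illustration only.

COLLECTIVE = []   # memory shared among all AI animals
INDIVIDUAL = {}   # per-animal private memory

def store(animal, memory, involves_owner, usefulness):
    """Route a new memory according to a simple sharing policy."""
    if not involves_owner and usefulness >= 0.7:
        COLLECTIVE.append(memory)          # safe and useful: share it
    else:
        INDIVIDUAL.setdefault(animal, []).append(memory)

store("rex", "fetch works better on flat ground", False, 0.9)
store("rex", "my owner hides treats in the kitchen", True, 0.9)
print(len(COLLECTIVE), len(INDIVIDUAL["rex"]))  # prints: 1 1
```

The user-controlled version described above would amount to making the routing predicate (and thresholds like the one here) configurable per agent.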

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email