On 10/18/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
>  Because neither of these things can be done at present, we can barely even
> talk to each other about things like goals, semantics, grounding,
> intelligence, and so forth... the process of taking these unknown and
> perhaps inherently complex things and compressing them into simple language
> symbols throws out too much information to even effectively communicate what
> little we do understand.

Are you suggesting that a narrow AI designed to improve communication
between researchers would be a worthwhile investment?  Imagine it as
the scaffolding required to support the building efforts.  "Natural"
language is enough of a problem in its own right that we have
difficulty talking to each other, to say nothing of building
algorithms that can do it even as poorly as we do.  At least if there
were a way to exchange the context along with an idea, there might be
less confusion between sender and receiver.  The danger of
contextually rich posts (the kind Richard Loosemore often authors) is
that they carry too much information for the receiver to consume.
That's where I think a narrow Assistive Intelligence could help: it
would attach the sender's assumed context to a neutral exchange
format, and the receiver's agent could then display the idea cleanly,
without encumbering it with all of that context.  The only way I see
for that to work is for each agent to be trained on the unique core
conceptual model of its researcher.

(I know... that's brainstorming, with no idea how to begin any implementation.)
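
Just to make it slightly less hand-wavy, here is a minimal sketch in
Python of what a "neutral exchange format" could look like.  Every
name in it (ContextualMessage, render_for, the glossaries) is made up
for illustration: a message carries the claim plus the sender's
working definitions, and the receiver's agent flags any term the two
parties define differently.

    # Hypothetical sketch of a "neutral exchange format": a message
    # carries the claim itself plus the sender's assumed definitions,
    # so the receiver's agent can surface mismatches in vocabulary.
    from dataclasses import dataclass, field

    @dataclass
    class ContextualMessage:
        sender: str
        claim: str                     # the idea being communicated
        # the sender's assumed meanings for key terms
        definitions: dict[str, str] = field(default_factory=dict)

    def render_for(msg: ContextualMessage,
                   receiver_glossary: dict[str, str]) -> str:
        """Show the claim, flagging terms the two parties define
        differently."""
        lines = [f"{msg.sender}: {msg.claim}"]
        for term, sender_def in msg.definitions.items():
            receiver_def = receiver_glossary.get(term)
            if receiver_def is None:
                lines.append(f"  [{term}] sender means: {sender_def}")
            elif receiver_def != sender_def:
                lines.append(f"  [{term}] sender: {sender_def} "
                             f"| you: {receiver_def}")
        return "\n".join(lines)

    msg = ContextualMessage(
        sender="A",
        claim="Symbol grounding is the hard part of AGI.",
        definitions={"grounding":
                     "mapping symbols to sensorimotor experience"},
    )
    print(render_for(
        msg, {"grounding": "mapping symbols to formal semantics"}))

Running that prints the claim with a side-by-side note on how the two
researchers use "grounding", which is roughly the kind of context
exchange I have in mind.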
