> > The problem at hand is, you're given some absolute and
> > some conditional probabilities regarding the concepts
> > at hand, and you want to infer a bunch of others.
>
> Hmm. The thing I find interesting here is that humans don't have a good
> solution to this problem. Give a typical human a set of data like
> the above,
> and he'll just give you a blank look. Give him a specific problem
> and he'll
> do some first-order inference (i.e. Fluffy is more likely to be a
> cat's name
> than a dog's), but we rarely take it more than one step. Also, it seems to
> me that humans usually only look for the specific data required by a
> problem, rather than trying to figure out all the logical
> consequences of a
> set of data.
>
> This does not, of course, mean that you should give Novamente the
> ability to
> solve this kind of problem. But it does hint that what you're
> building is a
> different kind of mind than what humans have...
>
> Billy Brown

Yes, we are explicitly trying to build a different kind of mind than what
humans have.  Computers have a capability for precision that seems to vastly
exceed that of the human brain.  Exploiting this capability for precision in
an AI design seems appropriate.  Perhaps it can make up for the lack of the
massive parallelism that the human brain possesses....

Regarding humans' abilities at probabilistic inference: there has been plenty
of research on this in the cognitive psych community.  It seems that humans
are OK at solving this kind of problem *only in familiar contexts*.

That is, we can sometimes approximately solve problems formally mappable
into this kind of probabilistic inference problem, but our ability at
solving them is vastly better if the problems occur in familiar domains
(physical objects, social interactions, etc.) than if the problems occur in
an "abstracted" form.

There are two possible explanations for this:

1) Humans use special-case algorithms to solve these problems, a different
algorithm for each domain

2) Humans have a generalized mental tool for solving these problems, but
this tool can only be invoked when complemented by some domain-specific
knowledge

My intuitive inclination is that the correct explanation is 2) not 1).  But
of course, which explanation is correct for humans isn't all that relevant
to AI work in the Novamente vein.


-- Ben G

