On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:
>
> > Formal logic doesn't scale up very well in humans. That's why this
> > kind of reasoning is so unpopular. Our capacities are that
> > small and we connect to other human entities for a kind of
> > distributed problem solving. Logic is just a tool for us to
> > communicate and reason systematically about problems we would
> > mess up otherwise.
>
> Exactly. That is why I am critical of probabilistic or uncertain logic.
> Humans are not very good at logic and arithmetic problems requiring long
> sequences of steps, but duplicating these defects in machines does not help.
> It does not solve the problem of translating natural language into formal
> language and back. When we need to solve such a problem, we use pencil and
> paper, or a calculator, or we write a program. The problem for AI is to
> convert natural language to formal language or a program and back. The
> formal reasoning we already know how to do.



If formal reasoning were a solved problem in AI, then we would have
theorem-provers that could prove deep, complex theorems unassisted. We
don't. This indicates that formal reasoning is NOT a solved problem,
because no one has yet gotten "history-guided adaptive inference control" to
really work well. That is, IMHO, because formal reasoning guidance
ultimately requires the same kind of analogical, contextual, commonsense
reasoning as the guidance of reasoning about everyday life...

Also, you did not address my prior point that Hebbian learning at the neural
level is strikingly similar to formal logic...

In probabilistic term logic, we do deductions such as

A --> B
B --> C
|-
A --> C

and use probability theory to determine the truth value for the conclusion
based on the truth values of the premises.
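For concreteness, here is one way that deduction can be computed. This is a sketch of the standard independence-based deduction formula (the exact formula, function name, and the conditional-independence assumption are my reconstruction for illustration, not quoted from the email):

```python
def deduction(s_ab, s_bc, s_b, s_c):
    """Strength of A --> C from premises A --> B and B --> C.

    s_ab = P(B|A), s_bc = P(C|B), s_b = P(B), s_c = P(C).
    Assumes A and C are independent conditional on B (a modeling
    assumption, not something derivable from the premises alone).
    """
    if s_b >= 1.0:
        return s_bc  # degenerate case: B is certain, so P(C|A) = P(C|B)
    # Sum over the path through B and the path around B,
    # weighted by P(B|A) and P(not-B|A) respectively.
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)
```

For example, with s_AB = 0.8, s_BC = 0.9, s_B = 0.5, s_C = 0.6, this gives s_AC = 0.8·0.9 + 0.2·(0.6 − 0.5·0.9)/0.5 = 0.78.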

On the other hand, if A, B and C represent neuronal assemblies and the -->'s
are synaptic bundles, then Hebbian learning does approximately the same
thing ... an observation that ties in nicely with recent work on Bayesian
neuronal population coding.
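As a toy illustration of that parallel (my construction, with made-up numbers, not drawn from any publication): if a plastic A-to-C weight is nudged toward C's activity whenever A fires, it converges toward an estimate of P(C|A), which is the same quantity the probabilistic deduction targets.

```python
import random

def hebbian_chain(steps=4000, eta=0.01, seed=0):
    """Toy cell assemblies A, B, C: fixed 'synaptic bundle' strengths
    carry activity A -> B -> C, while a plastic A -> C weight is
    updated Hebbian-style from co-activity."""
    rng = random.Random(seed)
    p_b_given_a, p_c_given_b, noise = 0.8, 0.9, 0.1  # made-up strengths
    w_ac = 0.0
    for _ in range(steps):
        a = rng.random() < 0.5                        # A fires half the time
        b = rng.random() < (p_b_given_a if a else noise)
        c = rng.random() < (p_c_given_b if b else noise)
        if a:
            # Strengthen on A/C co-firing, decay when A fires alone,
            # so w_ac tracks the empirical frequency of C given A.
            w_ac += eta * ((1.0 if c else 0.0) - w_ac)
    return w_ac
```

With these numbers the weight settles near P(C|A) = 0.8·0.9 + 0.2·0.1 = 0.74, i.e. the chained strength, without any explicit symbolic step.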

Formal logic is not something drastically different from what the brain
does.  It's an abstraction from what the brain does, but there are very
clear ties between formal logic and neurodynamics, of which I've roughly
indicated one in the prior paragraph (and have indicated others in
publications).

Mapping knowledge between language and internal representations is not a
problem independent of inference; it is a problem that is solved by
inference in the brain, and must ultimately be solved by inference in AIs.
The fact that the brain implements its unconscious inferences in terms of
Hebbian adjustment of synaptic bundles between cell assemblies, rather than
in terms of explicit symbolic operations, shouldn't blind us to the fact
that it's still an inference process going on...

Google Search is not doing much inferencing, but then it's a search engine,
not a mind.  There is a bit of inferencing inside AdSense, more than in
Google Search, but it's still of a pretty narrow and simplistic kind (based
on EM clustering and Bayes nets) compared to what is needed for human-level
AGI.

-- Ben G




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=114414975-3c8e69
Powered by Listbox: http://www.listbox.com
