On 23 Oct 2006 at 10:06, Ben Goertzel wrote:
> A very careful distinction needs to be drawn between:
> 
> 1) the distinction between
> 1a) using probabilistic and formal-logical operators for representing 
> knowledge
> 1b) using neural-net type operators (or other purely quantitative, non-
> logic-related operators) for representing knowledge 
> 
> 2) the distinction between
> 2a) using ungrounded formal symbols to pretend to represent knowledge, 
> e.g. an explicit labeled internal symbol for "cat", one for "give", etc.
> 2b) having an AI system recognize patterns in its perception and action 
> experience, and build up its own concepts (including symbolic ones) via 
> learning; which means that concepts like "cat" and "give" will generally be 
> represented as complex, distributed structures in the knowledge base, not 
> as individual tokens 
> 
> From the history of mainstream AI, one might conclude that 1a and 2a 
> inevitably cluster together, so that the only hope for 2b lies in 1b.  
> However, this is not the case. Novamente combines 1a and 2b, and I believe 
> NARS is intended to also.... 

I agree that combining probabilistic logic (with a reasonable amount of
consistency enforcement) with 'bottom-up' learning is crucial. However,
I would suggest that '2a' is often worthwhile as a soft, context-dependent
index into '2b', particularly as an inference tool when you can do a lossy
simplification to symbolic logic, do some fast inference on that, then pop
back into managed-consistency-scope probabilistic logic with some
conclusions that are conditional on the estimated probability that the
assumptions behind the simplification hold. Most human usage of scientific
theories and engineering rules looks like this. Some applications (e.g.
FAI self-modification) demand complete rigour and more complicated
techniques to get it, but those are relatively rare.
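The round-trip I have in mind can be sketched roughly as follows. This is purely an illustrative toy (the names, the 0.95 threshold, the toy rule base, and the independence assumption when reattaching confidence are all mine, not anyone's actual system): treat high-probability beliefs as crisply true, run fast symbolic forward chaining on that simplification, then return to probabilistic scope with conclusions conditioned on the probability that the dropped uncertainty didn't matter.

```python
THRESHOLD = 0.95  # illustrative cut-off for "treat as true"

# Probabilistic beliefs: proposition -> estimated probability.
beliefs = {
    "metal(rod)": 0.98,
    "conducts(metal)": 0.97,
}

# Crisp implication rules used during the fast symbolic phase.
rules = [
    (("metal(rod)", "conducts(metal)"), "conducts(rod)"),
]

def simplify(beliefs, threshold=THRESHOLD):
    """Lossy step: keep only propositions we are willing to treat as true."""
    return {p for p, pr in beliefs.items() if pr >= threshold}

def forward_chain(facts, rules):
    """Fast crisp inference: exhaustively apply implication rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def reattach_confidence(conclusion, premises, beliefs):
    """Pop back into probabilistic scope: condition the conclusion on the
    estimated probability that the simplification's assumptions hold
    (independence assumed here purely to keep the toy simple)."""
    prob = 1.0
    for p in premises:
        prob *= beliefs[p]
    return conclusion, prob

facts = simplify(beliefs)
derived = forward_chain(facts, rules)
result = reattach_confidence(
    "conducts(rod)", ["metal(rod)", "conducts(metal)"], beliefs)
```

A real system would of course need to track *which* assumptions each crisp conclusion depended on, rather than multiplying them blindly as above.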

Similarly it's ok to use embedded chunks of 1b when their inputs and
outputs are tightly scoped and you know what they're doing. Though the
kind of connectionist/informal learning algorithms I'd advocate using
(fully-custom, non-general and tightly-integrated algorithms generated by
the AI using an optimisation-pressure model) don't, in my experience to
date, look much like the currently popular plausible-seeming-to-humans
algorithms.

> My contention is that probabilistic logic can be a suitable knowledge 
> representation for raw perceptions and actions, and that logical inference 
> (combined with pattern mining, evolutionary learning and other cognitive 
> operations) can be used to build up abstract concepts grounded in 
> perception and actions,

Agree, with the proviso that my idea of 'adequate grounding' is different
from yours (I'd characterise mine as 'explicit grounding' and yours as
'implicit grounding').

> For instance, this means that the "cat" concept may well not be 
> expressed by a single "cat" term, but perhaps by a complex learned 
> (probabilistic) logical predicate. 

I don't think it's really useful to discuss representing word meanings
without a sufficiently powerful notion of context (which is really hard).
 
> But my point for now is simply that all logic-based systems should not be
> damned based on the fact that historically a bunch of famous AI
> researchers have used logic-based KR in a cognitively unworkable way. 

I certainly agree with that, as long as 'logic-based' means 'probabilistic
logic with bottom-up modelling and no unitary concepts or simple
word-symbol mappings'. Unfortunately many people would read 'logic
based' as 'looks like Cyc'.

> Probabilistic logic is a general formalism that can express anything, and 
> furthermore it can express any thing in a whole lot of different ways.

That isn't a point in its favour. Expressive scope allows people to say
'oh, our system could do that, it just needs the right
rules/network/whatever' whenever you ask them 'so how would your system
implement cognitive ability X?'. The limited expressive scope of classic
ANNs was actually essential for getting relatively naïve and simplistic 
learning algorithms (e.g. backprop, Hebbian learning) to produce useful
solutions to an interesting (if still fairly narrow) class of problems.
OTOH, if you disallow 'please wait for me to program that into the KR' and
'we just need a bigger computer!' excuses, using a very expressive
substrate (at the limit, TM-equivalent code) actually forces people to
design powerful learning algorithms, so in that sense maybe it is a good
thing.
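For concreteness, the sort of naïve-but-workable rule I mean is the textbook Hebb rule: strengthen a weight whenever pre- and post-synaptic activity coincide. The sketch below is a generic textbook illustration, not any particular system's code; it shows how narrow the expressive scope is (pure correlation capture, no credit assignment) while still producing a useful association.

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Classic Hebb rule, no normalisation: w_ij += lr * pre_i * post_j."""
    return [
        [w + lr * x * y for w, y in zip(row, post)]
        for row, x in zip(weights, pre)
    ]

w = [[0.0, 0.0], [0.0, 0.0]]
# Repeatedly pair input pattern [1, 0] with output [0, 1]: only the
# cross-connection w[0][1] is strengthened (to ~0.5 after 5 steps of
# lr=0.1); weights with no coincident activity stay at zero.
for _ in range(5):
    w = hebbian_update(w, [1.0, 0.0], [0.0, 1.0])
```

Nothing in this rule can represent, say, a conditional or a quantified statement, which is exactly the restriction that made such algorithms tractable.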

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
