Hmm...
You mention
1) generalization
2) graded/smooth response
as advantages of connectionist systems
But of course, there is a vast amount of work on inductive and abductive
reasoning (i.e. generalization) in logic-based systems, and on uncertain
logics (which provide graded/smooth response quantified by real number
values). So even purely logic-based systems can provide both
generalization and graded/smooth response.
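To make "graded/smooth response" in a logic-based setting concrete, here is a toy illustration (not any specific uncertain logic, and not Novamente's actual inference rules): statements carry real-valued truth strengths in [0, 1], and inference combines the strengths rather than producing an all-or-none answer.

```python
# Toy illustration only: real-valued truth strengths attached to logical
# statements give a purely logic-based system a graded, smooth response.
# The combination rule here (simple multiplication, i.e. an independence
# assumption) is the crudest possible choice, used just for exposition.

def deduce(strength_ab: float, strength_bc: float) -> float:
    """Chain the implication A->B (strength_ab) with B->C (strength_bc),
    returning a graded strength for A->C under an independence assumption."""
    return strength_ab * strength_bc

# A small change in an input strength yields a small change in the output,
# rather than a discrete true/false flip.
print(round(deduce(0.9, 0.8), 2))
print(round(deduce(0.9, 0.79), 4))
```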
Novamente is not a neural network system, in the sense that its
equations do not try to mimic the dynamics of the brain.
However, I'm not sure it's a "symbolic" system in the traditional sense
either. This depends on how you interpret "symbolic."
There are nodes and links in Novamente. Let's say you have Novamente
hooked up to a camera eye with greyscale output, and the output of pixel
(100,200) has intensity 30% of maximum at time 12:30 PM March 17 2004.
Then we have a relationship in Novamente that we symbolize as
ExampleLink :=
   atTime
   (
      ExecutionLink PixelIntensity (100, 200) .3 ,
      12:30 PM March 17 2004
   )
Here for instance
* 100 and 200 and .3 are NumberNodes
* 12:30 PM March 17 2004 is a TimeNode
* PixelIntensity is a SchemaNode (indicating a function that takes inputs
and produces outputs)
* atTime is a PredicateNode
* the (,) notation indicates a ListLink
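The structure above can be sketched as data in Python. To be clear, the class and field names below are invented for exposition; they are not Novamente's actual implementation or API, just a minimal rendering of the node-and-link idea.

```python
# Illustrative sketch only: a minimal node-and-link structure mirroring the
# ExampleLink above. All names here are hypothetical, not Novamente's API.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class NumberNode:
    value: float

@dataclass(frozen=True)
class TimeNode:
    timestamp: str

@dataclass(frozen=True)
class SchemaNode:          # names a function from inputs to outputs
    name: str

@dataclass(frozen=True)
class ExecutionLink:       # records: schema applied to args yielded output
    schema: SchemaNode
    args: Tuple[NumberNode, ...]   # the (,) ListLink of inputs
    output: NumberNode

@dataclass(frozen=True)
class AtTimeLink:          # the atTime predicate applied to an event
    event: ExecutionLink
    time: TimeNode

example_link = AtTimeLink(
    event=ExecutionLink(
        schema=SchemaNode("PixelIntensity"),
        args=(NumberNode(100), NumberNode(200)),
        output=NumberNode(0.3),
    ),
    time=TimeNode("2004-03-17 12:30"),
)
print(example_link.event.output.value)  # 0.3
```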
Let's say Novamente then represents a circle as a certain pattern among
PixelIntensity values (expressed as a complex PredicateNode involving
combinatory logic operators)
Let's say it then generalizes from this to a more abstract mathematical
notion of a circle.
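To make "a certain pattern among PixelIntensity values" concrete, here is a hypothetical sketch of such a predicate. The detection rule and all names are illustrative assumptions, not the combinatory-logic PredicateNode Novamente would actually build; the point is just that the predicate is grounded directly in raw pixel intensities.

```python
# Hypothetical sketch: a "circle" predicate built directly from pixel
# intensities, of the rough kind a complex PredicateNode might encode.
# The rule and threshold values here are invented for illustration.
import math

def is_circle(pixels, cx, cy, r, tol=1.5, threshold=0.5):
    """True if every bright pixel (intensity >= threshold) lies within
    tol of the circle of radius r centred at (cx, cy).
    pixels maps (x, y) -> intensity in [0, 1]."""
    bright = [(x, y) for (x, y), v in pixels.items() if v >= threshold]
    if not bright:
        return False
    return all(abs(math.hypot(x - cx, y - cy) - r) <= tol
               for x, y in bright)

# A crude ring of bright pixels around (0, 0) with radius 10
ring = {(round(10 * math.cos(a)), round(10 * math.sin(a))): 0.9
        for a in [i * math.pi / 8 for i in range(16)]}
print(is_circle(ring, 0, 0, 10))  # True
print(is_circle(ring, 0, 0, 5))   # False: wrong radius
```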
Is this "symbolic"? In what sense? Patterns are being built up based
on raw perceptual inputs, much as they would be in a neural network.
It's using a logical formalism -- probabilistic combinatory term logic
-- instead of pseudo-neural operators... But so what?
How is the link ExampleLink given above any more "symbolic" than a
neuron that fires based on the intensity of input to a given pixel?
Just because it records the time-stamp? Of course the time-stamp isn't
needed; it's just convenient. Cruder mechanisms could be used instead.
I find that the symbolic/subsymbolic distinction is often misused. In a
complex cognitive system like Novamente (wants to be ;), there are both
symbolic and subsymbolic aspects, but it's hard to draw the line between
the two.
Peirce, in his semiotics, drew a crisp distinction between icons,
indices and symbols, but he also understood cognitive uncertainty, and
he recognized that a given mental form could share aspects of all these
different levels of reference. This is certainly true within Novamente.
-- Ben Goertzel
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Yan King Yin
> Sent: Monday, April 05, 2004 8:42 PM
> To: [EMAIL PROTECTED]
> Subject: [agi] Connectionism Required for AGI?
>
>
> Hi...
>
> I'm wondering how AGI designers view this issue. Usually
> we think connectionist systems have the advantages of:
> 1) generalization and
> 2) graded / smooth response
> among others.
>
> I assume Novamente is using a symbolic representation,
> which may become a difficult problem to solve once the
> AGI is "locked" into a certain framework. Or are there
> some ways to get around those limitations in a symbolic
> / Bayesian setting?
>
> Personally I'm more familiar with connectionism and I'm
> looking for an AGI group to join. But I'm also open to
> other AI paradigms.
>
> YKY
>
>
-------
To unsubscribe, change your address, or temporarily deactivate your
subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]