Mike,

> inference chain, can be done by lots of methods besides logic.  That
> is, in a real-world reasoning context, logical inference will
> generally be nudged and guided in the right direction by non-logical
> methods...
>
> Yeah, well it's the "non-logical" methods that are the key.

I believe that both logical and non-logical methods are key to human
intelligence...

The methods you prefer to highlight are important, but not as exclusively
or overwhelmingly important as you say...

> You are v. vague about what you think they entail - and seem to think that
> they might be in logical form - "declarative episodic memory."

Declarative episodic memory is different from internal simulation.

OpenCog's internal simulation is not logic-based, it's based on actually
simulating the 3D physical world inside the system's "mind's eye" ...

Internal simulation is valuable to a mind, but not all-important, and not
always needed...

> There is no reason to use logic here - other than that it's the main method
> you know and rely on (although it hasn't had and can't have any AGI fruit).
> Would you use logic to fit your hand or body into a complex hole?

Sometimes I would, sometimes I wouldn't -- it would depend on the case...

When I used to rock-climb, I relied on a variety of cognitive approaches --
some logical and some purely intuitive... some more body-centered and some
more abstract-cognition-centered...

> Do we use logic to mirror people in order to predict their moves and
> understand their motives (Iacoboni's Mirroring People is essential reading)?

There's a lot of literature showing that "theory of mind" (the way we
humans understand others' behaviors) uses a combination of inferential
(i.e. logical) and simulative (e.g. mirror neuron) methods.  Both
approaches are used in the human mind; both are important.

> Above all, logic has zero creativity - it cannot produce new hypotheses.
> [Embodied] simulation/imagination is identified with creativity because
> that's the source.

Actually, uncertain logic (e.g. abductive reasoning) can be quite
creative, due to its incorporation of a stochastic aspect...

Your notion of creativity reminds me of what my friend Meg Heath used to call
"radical creativity."  What she meant was a kind of creativity that comes
totally out of the blue, not derived from any data previously available to
the mind at hand...

However, neither logic nor internal simulation nor neural nets nor any
other scientifically comprehensible mechanism can do that.  No matter what
method your AI system or physical system or robot uses, from a scientific
view, what it's doing is taking in some sensory data, and then choosing
actions based on

-- its sensory data
-- its physical state before it received the sensory data

So, in that sense, according to science, there is no radical creativity.  Oops.
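The reduction above can be sketched in a few lines -- this is a toy agent of my own devising, just to show the shape of the claim: one "tick" of any scientifically describable agent is a function whose only inputs are prior state and incoming sense data.

```python
def agent_step(state, percept):
    """One tick of a toy agent: the next state and the chosen action
    are computed from (prior state, sense data) -- nothing else
    enters the computation."""
    new_state = state + [percept]          # toy: state = percept history
    action = "explore" if len(new_state) < 3 else "exploit"  # toy policy
    return new_state, action

state, action = agent_step([], "light")    # -> (["light"], "explore")
```

However sophisticated the policy inside `agent_step` becomes, its output remains determined by those two inputs -- which is the sense in which science leaves no room for "radical creativity."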

After all, an internal simulation is just based on

-- the world-simulation framework the mind/brain possesses

-- the data it gathers from its senses, which are fed into this
world-simulation framework

Then, the mind-brain runs internal simulations based on its internal
simulation framework and its sense-data.... It draws conclusions based on
this.  But in what sense are these conclusions really "novel"?  They are
implicit in the simulation framework and the sense-data fed into it, in
exactly the same sense that a logical reasoning engine's conclusions are
implicit in the premises/axioms fed into it.
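The "implicit in the premises" point can be made precise with a tiny forward-chainer (a sketch of my own, not any particular engine): repeatedly applying modus ponens to a fixed set of facts and rules reaches a fixpoint, and every conclusion the engine can ever draw already lies inside that closure.

```python
def deductive_closure(facts, rules):
    """Forward-chain modus ponens to a fixpoint: 'rules' is a list of
    (premise, conclusion) pairs; returns every derivable fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

closure = deductive_closure({"A"}, [("A", "B"), ("B", "C"), ("D", "E")])
# closure == {"A", "B", "C"} -- "E" never appears; it wasn't implicit.
```

Nothing outside the closure is reachable, no matter how long the engine runs -- and an internal simulation's conclusions are bounded by its framework and sense-data in just the same way.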

Logic has, or does not have, creativity, in the same sense that internal
simulation has, or does not have, creativity....  According to science as
we now understand it, neither has "radical creativity."

On the other hand, science is not necessarily a complete model of the universe,
broadly speaking.  According to science, there also aren't any qualia.

To my mind, radical creativity is a useful descriptor on the *subjective* level,
the level on which qualia exist.  But I don't find it useful on the physical and
computational level -- the scientific level -- which is the most relevant
level for engineering AGI systems...

-- Ben


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now