Ben Goertzel wrote:
Richard,

I think that it would be possible to formalize your "complex systems argument"
mathematically, but I don't have time to do so right now.

Or, then again ... perhaps I am wrong: maybe you really *cannot*
understand anything except math?

It's not the case that I can only understand math -- however, I have a
lot of respect for the power of math to clarify disagreements. Without
math, arguments often proceed in a confused way because different people
are defining terms differently, and don't realize it.

But, I agree math is not the only kind of rigor. I would be happy with a
very careful, systematic exposition of your argument along the lines of
Spinoza or the early Wittgenstein. Their arguments were not mathematical,
but were very rigorous and precisely drawn -- not slippery.

Thinking about it late last night, I certainly came to the same conclusion: that I could formalize the ideas in Wittgenstein style easily enough, define terms carefully etc.

Now, I thought that I *had* done that before, for anyone sufficiently versed in the various fields to which I referred, but....

I would be more than happy to repackage the ideas in that form, because that is my natural writing style anyway. I have spent decades having that Wittgenstein style beaten out of me by my elders and betters (i.e. my supervisor), so it would be like getting into a nice warm bath to go back and write like that again.

Perhaps you have no idea what the actual
argument is, and that has been the problem all along?  I notice that you
avoided answering my request that you summarize your argument "against" the
complex systems problem ... perhaps you are just confused about what the
argument actually is, and have been confused right from the beginning?

In a nutshell, it seems you are arguing that general intelligence is
fundamentally founded on emergent properties of complex systems, and that
it's not possible for us to figure out analytically how these emergent
properties emerge from the lower-level structures and dynamics of the
complex systems involved. Evolution, you suggest, "figured out" some
complex systems that give rise to the appropriate emergent properties to
produce general intelligence. But evolution did not do this figuring-out
in an analytical way, rather via its own special sort of "directed trial
and error." You suggest that to create a generally intelligent system, we
should create a software framework that makes it very easy to experiment
with different sorts of complex systems, so that we can then figure out
(via some combination of experiment, analysis, intuition, theory, etc.)
how to create a complex system that gives rise to the emergent properties
associated with general intelligence.
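As an illustrative aside (my own toy example, not from either correspondent): the kind of emergence being discussed shows up even in Conway's Game of Life. The update rule is purely local -- a cell survives with 2 or 3 live neighbours, a dead cell is born with exactly 3 -- yet a "glider", a pattern that propels itself diagonally, emerges; nothing in the rule itself mentions gliders. A minimal Python sketch:

```python
# Emergence in a toy system: Conway's Game of Life. The update rule is
# purely local, yet a "glider" -- a self-propelling pattern -- emerges.
from collections import Counter

def step(live):
    """One generation; `live` is a set of (row, col) live-cell coordinates."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider; after 4 generations it reappears shifted by (1, 1).
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

print(state == {(r + 1, c + 1) for (r, c) in glider})  # prints True
```

The glider is a global, persistent property that is observed rather than read off analytically from the birth/survival rule -- a miniature of the emergence question at issue here.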

I'm sure the above is not exactly how you'd phrase your argument -- and
it doesn't capture all the nuances -- but I was trying to give a compact
and approximate formulation. If you'd like to give an alternative,
equally compact formulation, that would be great.

I think the flaw of your argument lies in your definition of
"complexity", and that this would be revealed if you formalized your
argument more fully. I think you define complexity as a kind of
"fundamental irreducibility" that the human brain does not possess, and
that engineered AGI systems need not possess. I think that real systems
display complexity which makes it **computationally difficult** to
explain their emergent properties in terms of their lower-level
structures and dynamics, but not as fundamentally intractable as you
presume.
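A toy illustration of that distinction (again my own example, with an arbitrarily chosen target property): even when a global behaviour is hard to derive analytically from a rule table, it can be found by exactly the sort of trial and error discussed above -- here, a brute-force search over the 256 elementary cellular automaton rules for ones whose evolution from a single seed fills the whole line:

```python
# "Directed trial and error" over a space of simple complex systems:
# test all 256 elementary CA rules for a chosen global property,
# rather than deriving which rules have it from the rule table.

def evolve(rule, width=63, steps=31):
    """Run elementary CA `rule` (Wolfram numbering) from a single
    centre cell on a wrap-around line; return the final row."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        row = [
            (rule >> ((row[(i - 1) % width] << 2)
                      | (row[i] << 1)
                      | row[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return row

# Empirical search: which rules fill the entire line?
filling_rules = [r for r in range(256) if all(evolve(r))]

print(255 in filling_rules, 0 in filling_rules)  # prints: True False
```

Rule 255 (everything maps to 1) trivially fills the line, rule 0 never does, and the search turns up the rest empirically -- the property is checked by running the system, not deduced from its definition.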

But because you don't formalize your notion of complexity adequately,
it's not possible to engage you in rational argumentation regarding the
deep flaw at the center of your argument.

However, I cannot prove rigorously that the brain is NOT complex in the
overly strong sense you allege it is ... and nor can I prove rigorously
that a design like the Novamente Cognition Engine or OpenCog Prime will
give rise to the emergent properties associated with general
intelligence. So, in this sense, I don't have a rigorous refutation of
your argument, and nor would I if you rigorously formalized your
argument.

However, I think a rigorous formulation of your argument would make it
apparent to nearly everyone reading it that your definition of
complexity is unreasonably strong.

I am pressed for time right now, but my brief comment is that the above version of my argument gives a superficial and slightly skewed summary of the conclusion, while leaving out the exact sequence of the argument itself, which is quite tight and precisely argued.

So, somewhere along the line, you probably skimmed it a bit too quickly and got the general conclusion but missed the detail. Unfortunately, I think you then got the impression that there was no detail to be had.


But, all said and done, I will write a more stylized version of it. It should be ready by the time you get back from the frozen north.

Watch out for grizzlies.  They eat ontologists.


Richard Loosemore







-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/