Richard,

>  How does this relate to the original context in which I cited this list
>  of four characteristics?  It looks like your comments are completely outside
> the original context, so they don't add anything of relevance.

I read the thread, and I think my comments are relevant.

>  Let me bring you up to speed:

>  1) The mere presence of these four characteristics *somewhere* in a
>  system has nothing whatever to do with the argument I presented (this
>  was a distortion introduced by Ed Porter in one of his many fits of
>  misunderstanding).  Any fool could put together a non-complex system
>  with, for example, four distinct modules that each possessed one of
>  those four characteristics.  So what?  I was not talking about such
>  trivial systems, I was talking about systems in which the elements of
>  the system each interacted with the other elements in a way that
>  included these four characteristics.

This last sentence is just not very clearly posed.

The four aspects mentioned were:

-- memory
-- adaptation/development
-- identity
-- nonlinearity

In the Pet Brain (see the toy sketch after this list),

-- memory is a dynamic process associated with a few coupled nonlinear
dynamics acting on a certain data store

-- adaptation/development is a process that involves a number of dynamics
acting on memory

-- the identity of a pet is associated with certain specified parameters,
but also includes self-organizing patterns in the memory that are guided
by these parameters and other processes

-- nonlinearity pervades all major aspects of the system, and the
system as a whole
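
To make the "tightly coupled" point concrete, here is a minimal toy sketch
in Python (my own illustration for this email, not actual Pet Brain or
Novamente code) of an element whose single update rule entangles all four
characteristics at once, instead of isolating each in its own module:

import math
import random

class Element:
    def __init__(self, identity_bias):
        self.identity_bias = identity_bias  # "identity": fixed parameters shaping behavior
        self.memory = [0.0]                 # "memory": trace of the element's past states
        self.gain = 1.0                     # "adaptation": tuned by experience over time

    def interact(self, neighbors):
        # Input mixes the neighbors' latest states with this element's own history.
        drive = sum(n.memory[-1] for n in neighbors) + self.memory[-1]
        # "nonlinearity": a squashing function, modulated by identity and gain.
        new_state = math.tanh(self.gain * drive + self.identity_bias)
        # "adaptation/development": the gain drifts so as to keep activity moderate.
        self.gain += 0.01 * (0.5 - abs(new_state))
        # "memory": the new state is appended to the element's history.
        self.memory.append(new_state)
        return new_state

# A small, tightly coupled network: every element interacts with every other.
elements = [Element(identity_bias=random.uniform(-1, 1)) for _ in range(5)]
for step in range(100):
    for e in elements:
        e.interact([n for n in elements if n is not e])

The only point of the sketch is that memory, adaptation, identity and
nonlinearity all enter through the one interaction rule, so none of them
can be factored out into a separate trivial module.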

>  So when you point to the fact that "somewhere" in Novamente (in a single
>  'pet' brain) you can find all of these, it has no bearing on the
>  argument I presented.  I was principally referring to these
>  characteristics appearing at the symbol level (and symbol-manipulation
>  level), not the 'pet brain' level.  You can find as much memory,
>  identity, etc etc as you like, in other sundry parts of Novamente, but
>  it won't make any difference to the place where I was pointing to it.

I'm not sure how you're defining the term "symbol."

If you use the classical Peircean definition (symbol as contrasted with
icon and index), then indeed the four aspects you mentioned do occur in
the Pet Brain at the symbol level.

>  2)  Even if you do come back to me and say that the symbols inside
>  Novamente all contain all four characteristics, I can only say "so what"
>  a second time ;-).  The question I was asking when I laid down those
>  four characteristics was "How many physical systems do you know of in
>  which the system elements are governed by a mechanism that has all four
>  of these, AND where the system as a whole has a large-scale behavior
>  that has been mathematically proven to arise from the behaviors of the
>  elements of the system?"
>
>  The answer to that question (I'll save you the trouble) is 'zero'.

But why do you place so much emphasis on mathematical proof?

I don't think that mathematical proof is needed for creating an AGI system.

(And I say this as a math PhD, who enjoys math more than pretty much any
other pursuit...)

Formal software verification is still a crude science; very few of the
software programs we use have been (or could tractably be) proven to
fulfill their specifications.  We create software programs based on
piecemeal rigorous justification of fragments of the software, combined
with an intuitive understanding of the whole.

Furthermore, as a mathematician I'm acutely aware of physicists' often low
level of mathematical rigor.  As a single example, Feynman integrals in
particle physics were used by physicists for decades to do real
calculations predicting the outcomes of real experiments with great
accuracy, before some mathematicians finally came along and provided them
with a rigorous mathematical grounding.

>  The inference to be made from that fact is that anyone who does put
>  together a system like  -  like, e.g., the fearless Mr. B. Goertzel  -
>  is taking quite a bizarre and extraordinary position, if he says that he
>  alone, of all people, is quite confident that his particular system,
>  unlike all the others, is quite understandable.

"Understandable" is a vague term.  In complex systems it's typical that
one can predict statistically properties of the whole system's behavior, yet
can't predict the details.  So a complete understanding is intractable but
a partial, useful qualitative understanding is more feasible to come by.
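
The chaotic logistic map is the standard toy example of this.  Here's a
quick Python sketch (illustrative only, nothing to do with any particular
AGI architecture): two runs from nearly identical initial conditions
diverge completely in their details, yet their long-run statistics agree
closely.

def orbit(x0, r=4.0, n=100000):
    # Iterate the logistic map x -> r * x * (1 - x).
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.4)
b = orbit(0.4000001)  # a nearly identical starting point

# Detail-level prediction fails: by step 100 the two runs are uncorrelated.
print("a[100] =", a[100], "  b[100] =", b[100])

# Statistical prediction succeeds: the fraction of time spent below 0.5
# is essentially the same for both runs.
print(sum(x < 0.5 for x in a) / len(a), sum(x < 0.5 for x in b) / len(b))

That's the kind of partial, qualitative understanding I mean: you can't
say where the system will be at step N, but you can say, usefully, how it
behaves on the whole.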

Also, I note there's a difference between an engineered system and a
natural one, in terms of the degree of inspection one can achieve of the
system's internal details.

I strongly suspect that in 10-20 years neuroscientists will arrive at a
decent qualitative explanation of how the lower-level mechanisms of the
brain generate the higher-level patterns of the human mind.  The reason we
haven't yet is not that there is some insuperable "complexity barrier",
but rather that we lack the appropriate data.

For an AGI system that we build,

a) we have access to all the data we want regarding the system's internal
structures and dynamics (a trivial illustration follows below), and

b) we can build the system so as to avoid unnecessary complicatedness
adding confusion on top of the necessary complexity,

so the task of (partially) understanding the system is easier.
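
To make (a) vivid with a deliberately trivial Python sketch (hypothetical,
not any real AGI codebase): in an engineered system, every internal
variable can be snapshotted at every timestep, which no neural recording
technology comes remotely close to providing for a brain.

import math

state = {"x": 0.1, "gain": 1.0}  # the toy system's *complete* internal state
trace = []                        # a full record of the internal dynamics

for step in range(1000):
    # A simple nonlinear update, in the spirit of the earlier element sketch.
    state["x"] = math.tanh(state["gain"] * state["x"] + 0.3)
    state["gain"] += 0.01 * (0.5 - abs(state["x"]))
    trace.append(dict(state))     # snapshot everything, every step

# Any question about the internal dynamics can now be answered offline.
print(trace[0], trace[-1])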

>  This is the point where we have been many times before:  when faced with
>  the possibility that your tangled(*) system might be just as vulnerable
>  to the problem as those thousands upon thousands of examples of complex
>  systems that are *not* understandable, you express great confidence with
>  words like:
>
>
> > ... I don't agree
> >
> > that this complexity is so severe as to render implausible an
> > intuitive understanding, from first principles, of the system's
> > qualitative large-scale behavior based on the details of its
> > construction.  It's true we haven't done the math to predict the
> > system's qualitative large-scale behavior rigorously; but as system
> > designers and parameter tuners, we can tell how to tweak the system to
> > get it to generally act in certain ways.
> >
>
>  To the best of my knowledge, nobody has *ever* used "intuitive
>  understanding" to second-guess the stability of an artificial complex
>  system in which those four factors were all present in the elements in a
>  tightly coupled way.

It seems to me that we have done that over the last few months with the
Pet Brain.  I doubt it's the first time.

However, it's really difficult to seriously pursue these discussions with you
because your definitions of terms are so slippery.

>  So that is all we have as a reply to the complex systems problem:
>  engineers saying that they think they can just use "intuitive
>  understanding" to get around it.

Engineering in the real world is nearly always a mixture of rigor and
intuition, just as the analysis of complex biological systems is.

Anyway, I don't expect to convince you that engineered AGI is possible
until one is actually built.  It's clear your attitude is pretty firmly
fixed in your mind.

I don't think your intuition is a stupid one, or even a naive one; I just
think it's wrong.  I don't think that brains are as profoundly irreducible
as you believe they are, nor that AGI systems need to be.  I think that
predicting the relevant high-level statistical patterns of these complex
systems, based on an understanding of the components and their
interrelationships, is not as hard as you say.  I realize that I have not
proved this, nor have you proved your point.  Neither of us is going to
prove our point via verbal argumentation, and to prove either point
mathematically would require math far beyond what exists.  So we'll just
have to wait and see how the science evolves over the coming years and
decades...

-- Ben
