David Noziglia wrote:
> First, if the idea is to build computer systems that can do the same
> things that "natural" intelligent systems - people - can do, then the
> term "General Intelligence" is not a true description of either the
> goal or the methodology.

To me, "generality" of intelligence is a fuzzy criterion, not a binary one.

Humans clearly have more generality in their intelligence than Deep Blue or
Mathematica....

There may be other minds out there somewhere, with intelligence vastly more
general than ours.  An awful lot of human intelligence is specialized for
things like scene understanding and social interaction, yet we're nowhere
near as specialized as "narrow AI" programs like the ones I just
mentioned...

> The idea that human intelligence is contained or described by a single
> factor - Burt's and Spearman's g - is pretty much a dead letter, despite
> trash political manifestos disguised as science like The Bell Curve.
> We now are pretty sure that human intelligence is built out of lots of
> different "tools" to accomplish specific tasks like perception,
> comprehension, motivation, and calculation.

Whether there is a single number to measure the *degree* of intelligence is a
different question from whether it's meaningful to speak about the
*generality* of a system's intelligence.

There are many ways to measure degree of intelligence, and also many ways to
formally define generality of intelligence if one goes that route.

I continue to think that the generality of a system's intelligence is a
meaningful concept.  However, I stress that it's an intuitive concept, one
that could be mathematically formalized and empirically measured in many
different ways.
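
For concreteness, here's one crude, purely illustrative way such a
formalization could go (a toy sketch of my own for this email, not a
definition I'm committed to): score a system on a diverse suite of tasks
and take the fraction of task types on which it exceeds some competence
threshold.  The task names and numbers below are invented.

    # Toy sketch: "generality" as the fraction of a diverse task suite
    # on which a system performs above a competence threshold.
    # Task names and scores are made up purely for illustration.

    def generality(scores, threshold=0.5):
        """scores: dict mapping task name -> performance in [0, 1]."""
        if not scores:
            return 0.0
        solved = sum(1 for s in scores.values() if s >= threshold)
        return solved / len(scores)

    deep_blue = {"chess": 1.0, "scene understanding": 0.0,
                 "conversation": 0.0, "symbolic math": 0.0}
    human = {"chess": 0.6, "scene understanding": 0.9,
             "conversation": 0.9, "symbolic math": 0.5}

    print(generality(deep_blue))  # 0.25
    print(generality(human))      # 1.0

On a measure like this, Deep Blue scores near zero and a human scores much
higher -- which matches the intuition, though of course any serious
formalization would have to sample and weight the task space far more
carefully.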

> So the members of this group are each in their own way working on ways
> of modeling intelligent tasks, and the work done on these goals is
> significant and valuable.

I have done (and am doing) work on task-specific "narrow AI", and it's very
different from the work I'm doing on AGI.

The difference is that in narrow AI, one is trying to create programs that
can solve some specific human-defined problems.

In AGI, we're trying to create computer programs that can solve a wide
variety of problems, and can solve new types of problems without having
humans explicitly formulate the new problems for them, or modify them to
suit the new problems.

To achieve this kind of "general intelligence", I believe, one needs to
create a system that integrates perception, action, cognition, memory,
learning of declarative & procedural knowledge, and a host of other things.
I believe one needs a self-organizing system that interacts richly with an
environment that contains a variety of interconnected "problems" and
"tasks."

In practice, work on building this kind of system is VERY DIFFERENT from
work on building narrow-AI systems oriented toward particular tasks.  This
is so, even though there is some overlap in terms of software tools,
algorithms and data structures between the two pursuits.

> But that's not what you're aiming at.  You're also hoping that at some
> point all your task-specific intelligent tools will get hooked together
> into an entity that can coordinate all their work, and that can then
> turn its electron-fast analysis into a self-analyzing, creative loop
> that generates something that passes the Turing Test.

It is not the case that we're building a host of task-specific tools and
hooking them together.

In the Novamente project, we have a coherent overall design for an AGI, and
we're coding and testing it -- step by step.

It happens that some of the parts of this in-development system have
narrow-AI uses which can be commercialized.  This is fortunate in terms of
pragmatic funding issues, but not so pertinent to the AGI goal itself.

> I would suggest, however - and here I know I'm going way too far - that
> the term you have chosen (AGI) for this meta-project has been
> deliberately selected to make the enterprise sound scientific and
> legitimate.

That is somewhat true.  The term was formulated (or at least, introduced to
me) by Shane Legg, when we were looking for a title for our (forthcoming)
edited volume on AGI.  My proposed title, "Real AI," was deemed too
confrontational, and AGI was suggested as something that wouldn't ruffle so
many feathers.

I don't think there's anything wrong with choosing a relatively
non-confrontational name.  The ambitious and speculative nature of the
research is not being hidden from anyone.  But if choosing a milder name
helps funding sources and journal editors to accept our work as serious,
then to my mind it's a worthwhile thing.  [At age 35, my idealism has
limits, and I'd prefer to focus it on really important things like keeping
the AI design itself pure ;)  ]

> What you are actually after - and what the arguments are really about -
> is something quite different.
>
> Building the tools is possible.  Coordinating them is possible.  But the
> next step, which you have mislabeled AGI, is not.
>
> Because what you are really after should be called not Artificial
> Intelligence, but Artificial Consciousness.

My own opinion is that when AGI is created, Artificial Consciousness will
come along for free.

I am a panpsychist; I think that everything, right down to particles, is
"conscious" a little bit.

Some things are more conscious than others.  The apparent partial
correlation between degree of general intelligence and degree of consciousness
is a very interesting thing.  I have done a lot of deep thinking about this,
and written some things about it, e.g. in my rough-draft online manuscript
"Unification of Science and Spirit" [which is old, and doesn't fully
represent my current views; some bits of it are rather embarrassing to me
now, actually.]

But I don't think we need to crack that deep philosophical puzzle in order
to create an AGI....

To make an even stronger statement: I think we can create a conscious
machine without fully understanding consciousness.

> That is the key characteristic of being human.  It is not something
> that can be built through a reductionist construction of finite Turing
> Machine programs.

I understand that you, Paul Pruiett, Roger Penrose, Stuart Hameroff, and
some other smart, deep-thinking people hold this belief.

In fact, I once held that same belief too.  But I changed my mind.

None of the arguments you or other believers in the non-algorithmic creed
have put forth seem very convincing to me.  I have read nearly everything
that's been published on this topic, so my view is certainly not
ill-informed.

This gets into some rather deep philosophical issues...

I now view the universe as consisting of three things: patterns, physical
reactions, and pure chance.

Pure chance could be called a lot of other things: spirit, mystery,
elemental randomness, whatever you like....  Charles S. Peirce had a lot to
say about this.  He called it First, as opposed to Second (the physical
universe of reactions), and Third (the mental universe of relationships).

Classical physics, quantum physics, quantum gravity and the theory of Turing
machines are examples of patterns.  So are you and I.  So is the letter
"a."  In Peircean terms, patterns are Third.

Consciousness is clearly somehow associated with the pure-chance aspect of
being, with Peircean First.

Naively, one might think that this intuition about consciousness being
associated with chance is incompatible with the possibility of a conscious
computer program, because computer programs are in theory deterministic,
non-chance-based.

But reality is a lot deeper than that.  I have not yet plumbed all the
depths but I have visited a lot of them....  The following comments are
speculative but may illustrate the flavor of my thinking in these areas.

I cannot in practice predict the behavior of a very complex computer
program.  I cannot predict the detailed behavior of a Novamente system even
now, let alone a Novamente system as it will (hopefully) exist in a few
years, with some real AGI going on.

When you look at the mathematical definition of "randomness", you observe
that the concept is defined *only relative to an observer*.  Now, theoretical
computer science tells you that for infinitely large entities, the
observer-dependence goes away in a sense.  But I think the finite definition
is the important one.  I think that chance is intrinsically a subjective
notion.  If I can't predict what a Novamente system is going to do, because
of basic limitations imposed by the finitude of my brain, then that Novamente
system is chance-displaying, to me.
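
A toy example of what I mean (my illustration, nothing deep): a pseudorandom
bit stream is completely determined by its seed, yet to an observer who
lacks the seed and the generating algorithm, the bits are, for all practical
purposes, chance.

    import random

    def bit_stream(seed, n):
        """Deterministic pseudorandom bits: fully predictable given the seed."""
        rng = random.Random(seed)
        return [rng.randint(0, 1) for _ in range(n)]

    # Observer A knows the seed and the algorithm: the stream holds no
    # surprises for A.  Observer B sees only the bits: lacking a compact
    # model of the generator, B can do no better than guessing, so the
    # stream is "chance-displaying" relative to B though not relative to A.
    print(bit_stream(seed=42, n=16))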

I think chance is subjective, and hence, if consciousness is associated with
chance, then in a sense whether X is conscious or not may depend on the
perspective from which X is being observed....

We then have a Gödelian argument that any complex mental system is going to
be conscious with respect to itself, its own subjective point of view,
because no really complex system can fully predict itself....  This
conclusion fits in naturally with the panpsychist attitude I mentioned above.
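
The flavor of that argument can be caricatured in a few lines of code (a
deliberately crude toy of my own, not the argument itself): a system that
consults any predictor it can actually run, and then does the opposite,
falsifies that predictor -- so it cannot contain a complete, correct
predictive model of its own behavior.

    def contrarian(predictor):
        """A system that asks a predictor for its own next output (0 or 1),
        then emits the opposite -- falsifying any predictor it can run."""
        guess = predictor()
        return 1 - guess

    # Whatever a runnable predictor forecasts, the system does the other thing:
    print(contrarian(lambda: 0))  # predictor said 0, system outputs 1
    print(contrarian(lambda: 1))  # predictor said 1, system outputs 0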

I do not expect these brief philosophical musings to be convincing to
anyone.  I give them here mostly just to point out that this is something
I've thought about deeply, something I think is important.  If I disagree
with you, Paul, Penrose, etc., it is not because I haven't reflected
deeply on these matters.  I don't dismiss consciousness glibly a la Daniel
Dennett.  I know it's a very real and deep phenomenon, but I don't believe
that it's related to quantum physics or quantum gravity in the way that
Penrose/Hameroff propose.  I think that a future physics theory will clarify
the relationship between physical reality and consciousness, but I suspect
that this clarification will explain why digital computer programs CAN be
conscious, and not the opposite.

> I can be as wildly speculative as the next person, and with a lot less
> real-world scholarship to base that on.  I can say that every truly
> significant step in macro-evolutionary history has been brought about by
> symbiosis.

Along these lines, you should check out the book Symbiogenesis by Werner J.
Schwemmler.

http://www.amazon.de/exec/obidos/ASIN/0899255892/qid%3D1033565584/sr%3D1-19/ref%3Dsr%5F1%5F1%5F19/028-4347167-6373313

He puts forth a lot of really interesting speculative theories along these
lines.

My review of his later book "Basic Cancer Programs" touches on some of these
themes as well:

www.goertzel.org/papers/CancerPrograms.htm


Finally, you say:
> That linear binary programs just may not be capable of developing
> emergent overlays upon which autonomous autopoietic entities can appear.

It's a side point, but I don't understand where "linearity" comes in here.
Digital computer programs generally represent nonlinear systems,
actually....

Or are you referring to the quantum level, where infinite-dimensional linear
mappings are used, and contrasting this with quantum gravity theories, which
mostly involve nonlinear aspects a la general relativity theory?


-- Ben G

