Paul P
wrote:
***
That "I" have not demonstrated precise
definitions of the phenomenon of life or the phenomenon of intelligence is not a
proper argument that you are right about the possibility of computer
life.
***
Of course it isn't an argument that I'm
right.
If by "prove" you're referring to the standard
criterion of "mathematical, conceptual or empirical demonstrations that are
convincing to the community of scientists", I cannot prove that I'm right about the
possibility of computer life & computer intelligence, and you
can't prove that I'm wrong...
For example, it has been "proved" in the standard sense
that a perpetual motion machine is not possible. And it has been "proved"
in this sense that sending a spaceship to Pluto is possible, even though such a
thing has never been done.
But neither the possibility nor impossibility of AGI
has been "proved" in this sense.
So we are left with different intuitions.
Paul P
wrote:
***
I have not and likely cannot demonstrate
such definitions because such definitions must be abstractions whose variance,
from the reality we are discussing, is not only large but is not even
measurable. We simply cannot know the essence of some things except as an
experience, and this experience may not be convertible to (reducible to) a set of
atomic definitions.
***
Actually, I do agree that there's an aspect of
subjective experience that can't be captured by mathematical formalisms or
scientific experiments.
But I don't see why a computationally-implemented
system can't possess this aspect of subjective experience, just as we do
(though with its own unique flavor...)
The connection between this ineffable aspect of
experience (which some call "awareness") and "intelligence" is an interesting
question.
I tend toward animism, toward the feeling that all
entities in the universe -- right down to particles -- are
aware.
And yet, I also feel that some entities are more aware
than others... and this greater awareness is connected with greater
intelligence...
I don't deny there are big and interesting puzzles
here, some of which are scientifically resolvable, some of which may not
be.
But I do not believe we need to resolve these puzzles
to create an AGI. Just as we don't need to resolve the puzzles of human
consciousness to FIX problems of human consciousness via neurosurgery ... and
just as I can communicate with you without having proved in any sense that you
are a self-aware being and not a fully deterministic
automaton...
Paul P
wrote:
***
The requirement that there be
"definitions" is a type of reductionism. To say that there are real things
that we either can not talk about very precisely or can not understand at
all is just to say what one feels is the case. A "requirement" to give
definitions is in these cases an external requirement that one does not have to
accept.
***
I don't *require* that there be definitions, in all
contexts. When my wife says "I love you" to me, I don't require that she
define her terms.
However, I find that "defining things precisely" is a
very useful tool; and I think it's an appropriate tool in the context of
creating AGI systems...
Paul P
wrote:
***
I actually am making the point that no
machine has achieved any reasonable definition of intelligence, and pointing out
that the claim that this will eventually happen is a claim that does not have
plausible evidence based on the effort over the past fifty years,
***
This is a profoundly unconvincing argument to me.
In the same sense, one could have argued in 1850 or 1950 that no machine
had ever carried humans into space. So what? Technology advances
over time...
Paul P
wrote:
***
PLUS an analysis of the categorical
difference between abstraction and natural things that are not
abstractions.
***
This part, I don't understand
fully.
There is a difference between Novamente and abstractions about Novamente.
There is a difference between my brain and abstractions about my brain.
A computer running a Novamente software system is not an abstraction, nor is my brain.
Both a computer running a Novamente system, and my brain, are physical systems of different sorts.
You say that only one of these physical systems
is capable of achieving the ineffable thing you call "intelligence" ... but I
doubt that...
But neither of these physical systems is
an abstraction...
Paul P
wrote:
***
I have defined the Manhattan Project to
establish Knowledge Science in a different fashion than the one that you
give.
This is essentially the core of the issue between you and me right
now.
I want to advance the science of knowledge sharing and knowledge experience, and I
do not think that a computer is an essential part of either knowledge sharing
or knowledge experience.
***
So do you advocate a Knowledge Management project that
doesn't use computers at all? What do you prefer? An old-fashioned
library?? ;-)
-- Ben
