Ben Goertzel wrote:
>> Sorry, but I simply do not accept that you can make "do really well
>> on a long series of IQ tests" into a computable function without
>> getting tangled up in an implicit homuncular trap (i.e. accidentally
>> assuming some "real" intelligence in the computable function).
>>
>> Let me put it this way: would AIXI, in building an implementation of
>> this function, have to make use of a universe (or universe
>> simulation) that *implicitly* included intelligences that were
>> capable of creating the IQ tests?
>>
>> So, if there were a question like this in the IQ tests:
>>
>> "Anna Nicole is to Monica Lewinsky as Madonna is to ......"
>
> Richard, perhaps your point is that IQ tests assume certain implicit
> background knowledge. I stated in my email that AIXI would equal any
> other intelligence starting with the same initial knowledge set....
> So, your point is that IQ tests assume an initial knowledge set that
> is part and parcel of human culture.
No, that was not my point at all.
My point was much more subtle than that.
You claim that "AIXI would equal any other intelligence starting with
the same initial knowledge set". I am focussing on the "initial
knowledge set."
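(For concreteness, here is AIXI's action rule as I understand Hutter's
definition; in the formalism, the only place an "initial knowledge set"
can enter is the interaction history that conditions the universal
mixture:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
         (r_k + \ldots + r_m)
         \sum_{q : U(q, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}

where U is a universal Turing machine, \ell(q) is the length of program
q, and the a's, o's and r's are actions, observations and rewards.
Every program q that reproduces the history so far contributes weight
2^{-\ell(q)}, and nothing in the definition restricts what those
programs may compute internally.)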
So let's compare me, as the other intelligence, with AIXI. What exactly
is the "same initial knowledge set" that we are talking about here?

- Just the words I have heard and read in my lifetime?
- The words that I have heard, read AND spoken in my lifetime?
- The sum total of my sensory experiences, down at the neuron-firing
  level?
- The sum total of my sensory experiences AND my actions, down at the
  neuron-firing level?
- All of the above, but also including the sum total of all my internal
  mental machinery, so as to relate the other fluxes of data in a
  coherent way?
- All of the above, but including all the cultural information that is
  stored out there in other minds, in my society?
- All of the above, but including simulations of all the related minds
  themselves?
Where, exactly, does AIXI draw the line when it tries to emulate my
performance on the test?
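To make the ambiguity concrete, here is a toy sketch (in Python; every
item and encoding below is a hypothetical stand-in of my own, not
anything from Hutter's papers) of what "giving AIXI my initial
knowledge set" has to mean operationally: some serialization of one of
the boundaries above is prepended to the agent's percept stream, and
everything turns on which boundary you choose.

  # Toy sketch: an agent's "initial knowledge" is just a prefix of its
  # percept stream. The items are placeholders for whichever boundary
  # one draws around "my knowledge".

  def encode(items):
      """Serialize a list of items into one byte-string percept prefix."""
      return b"\x00".join(s.encode("utf-8") for s in items)

  words_read     = ["Anna Nicole", "Monica Lewinsky", "Madonna"]
  sensory_stream = words_read + ["<every retinal frame>", "<every sound>"]
  with_actions   = sensory_stream + ["<every motor command>"]
  with_culture   = with_actions + ["<the contents of other minds?>"]

  for name, boundary in [("words", words_read),
                         ("sensory", sensory_stream),
                         ("actions", with_actions),
                         ("culture", with_culture)]:
      print(name, "->", len(encode(boundary)), "bytes of initial knowledge")

A different boundary gives a different percept prefix, hence a
different agent, hence a different theorem.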
(I picked that particular example of an IQ test question in order to
highlight the way that some tests involve a huge amount of information
that requires understanding other minds... my goal being to force AIXI
into having to go a long way to get its information.)
And if it does not draw a clear line around what "same initial knowledge
set" means, but the process is open ended, what is to stop the AIXI
theorems from implicitly assuming that AIXI, if it needs to, can simulate
my brain and the brains of all the other humans, in its attempt to do
the optimisation?
What I am asking (non-rhetorically) is a question about how far AIXI
goes along that path. Do you know AIXI well enough to say? My
understanding (poor though it is) is that it appears to allow itself the
latitude to go that far if the optimization requires it.
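The reason I read it that way is the structure of the universal mixture
itself. Here is a drastically simplified sketch (in Python, over a
hand-picked finite program set with invented lengths, where the real
construction enumerates all programs): membership in the hypothesis
class depends only on program length and consistency with the history,
so a program that happened to simulate a human brain would be admitted
on exactly the same terms as any other.

  # Drastically simplified Solomonoff-style predictor: a 2^-length
  # weighted vote among programs that reproduce the history seen so far.
  # Program names and lengths are invented for illustration.

  def consistent(prog, history):
      """True if prog reproduces every bit of the history so far."""
      return all(prog(history[:i]) == history[i]
                 for i in range(len(history)))

  def predict(history, programs, length):
      """Predict the next bit by 2^-length weighted vote."""
      votes = {0: 0.0, 1: 0.0}
      for name, prog in programs.items():
          if consistent(prog, history):
              votes[prog(history)] += 2.0 ** -length[name]
      return max(votes, key=votes.get)

  programs = {
      "constant_zero": lambda h: 0,
      "copy_last":     lambda h: h[-1] if h else 0,
      "alternate":     lambda h: (1 - h[-1]) if h else 0,
      # "simulate_a_human_brain": ...  # nothing above excludes it; it
      # would simply carry a very small, but nonzero, weight
  }
  length = {"constant_zero": 3, "copy_last": 5, "alternate": 4}

  print(predict([0, 1, 0, 1, 0], programs, length))  # -> 1; only
                                                     # "alternate" survives

If the optimization is best served by the brain-simulating program, the
formalism has no vocabulary for calling that cheating.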
If it *does* allow itself that option, it would be parasitic on human
intelligence, because it would effectively be simulating one in order to
deconstruct it and use its knowledge to answer the questions.
Can you say, definitively, that AIXI draws a clear line around the
meaning of "same initial knowledge set," and does not allow itself the
option of implicitly simulating entire human minds as part of its
infinite computation?
Now, I do have a second line of argument in readiness, in case you can
confirm that it really is strictly limited, but I don't think I need to
use it. (In a nutshell, I would go on to say that if it does draw such
a line, then I dispute that it really can be proved to perform as well
as I do, because it redefines what "I" am trying to do in such a way as
to weaken my performance, and then proves that it can perform better
than *that*).
Richard Loosemore