Mike,

But interestingly, while you deny that the given conception of intelligence is rational and deterministic... you then proceed to argue rationally and deterministically.


Universal intelligence is not based on a definition of what rationality is.  It is based on the idea of achievement.  I believe that if you start to behave irrationally (by any reasonable definition of the word) then your ability to achieve goals will go down, and thus so will your universal intelligence.
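In rough symbols: writing E for the space of computable environments, V^pi_mu for the expected total reward that agent pi achieves in environment mu, and K for Kolmogorov complexity, the measure is

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

so anything that lowers your expected achievement across these environments lowers \Upsilon.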


that actually you DON'T usually know what you desire. You have conflicting desires and goals. [Just how much do you want sex right now? Can you produce a computable function for your desire?]


Not quite.  Universal intelligence does not require that you personally can define your own goal, or some other system's goal.  It just requires that the goal is well defined, in the sense that a clear definition could be written down, even if you don't know what that definition would look like.
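As a toy illustration (all names here are made up for the example), a goal in this sense is just some computable function from an interaction history to a reward signal.  What matters is that such a function exists, not that you could write it down yourself:

    from typing import Sequence

    def goal_reward(history: Sequence[str]) -> float:
        """A toy 'well defined' goal: reward 1.0 once 'checkmate' appears."""
        # 'Win at chess' is well defined in this sense even for a player
        # who could never articulate the rule, let alone their strategy.
        return 1.0 if "checkmate" in history else 0.0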

If you instead want intelligence to include goals that are undefinable even in this weak sense, then you have this problem:

"Machine C is not intelligent because it cannot do X, where X is something that cannot be defined."

I guess this isn't a road you want to take, as I presume you think that machine intelligence is possible.


And you have to commit yourself at a given point, but that commitment and your priorities can change the next minute.


A changing goal is still a goal, and as such is already taken care of by the
universal intelligence measure.
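To sketch what I mean (again a toy example, names made up): a goal that changes over time is still a single well defined reward function, it just takes the time step, or the whole history, as one of its inputs:

    from typing import Sequence

    def changing_goal_reward(history: Sequence[str], t: int) -> float:
        """A toy goal whose priorities switch partway through."""
        # Before step 100 the environment rewards reaching 'food';
        # afterwards it rewards reaching 'water'.  From the measure's
        # point of view this is still just one computable goal.
        target = "food" if t < 100 else "water"
        return 1.0 if history and history[-1] == target else 0.0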


And vis-a-vis universal intelligence, I'll go with Ben:

"According to Ben Goertzel, Ph.D., 'Since universal intelligence is only definable up to an arbitrary constant, it's of at best ~heuristic~ value in thinking about the structure of real AI systems. In reality, different universally intelligent modules may be practically applicable to different types of problems.'" [8] <http://www.sl4.org/archive/0104/1137.html>


Ben's comment is about AIXI, so I'll switch to that for a moment.  I'm going to have to be a bit more technical here.

I think the compiler constant issue with Kolmogorov complexity is important in some cases and not in others.  In the case of Solomonoff's continuous universal prior (see my Scholarpedia article on algorithmic probability theory for details), the measure converges to the true measure very quickly for any reasonable choice of reference machine.  With a different choice of reference machine, the compiler constant may mean that the system doesn't converge for a few more bytes of input.  This isn't an issue for an AGI system that will be processing huge amounts of data over time.  The optimality of its behaviour in the first hundred bytes of its existence really doesn't matter.  Even incomputable super AIs go through an infantile stage, albeit a very short one.
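For anybody who wants the formal version: the compiler constant comes from the invariance theorem, which for universal machines U and U' says

    K_U(x) \le K_{U'}(x) + c_{U,U'}

where c_{U,U'} depends only on the two machines.  The speed of convergence follows from Solomonoff's error bound which, if I remember the constants correctly, is

    \sum_{t=1}^{\infty} \mathbf{E} \big( M(x_t \mid x_{<t}) - \mu(x_t \mid x_{<t}) \big)^2 \;\le\; \tfrac{\ln 2}{2} K(\mu)

for the universal prior M and any true computable measure \mu.  Changing the reference machine changes K(\mu) by at most the compiler constant, which shifts this total error bound by a constant number of bits: that is the sense in which convergence is delayed by only a few more bytes of input.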


You seem to want to pin artificial general intelligence down precisely; I want to be more pluralistic, and to recognize that uncertainty and conflict are fundamental to its operation.


Yes, I would like to pin intelligence down as precisely as possible.

I think that if somebody could do this it would be a great step forward.  I believe that issues of definition and measurement are the bedrock of good science.

Cheers
Shane
