Hi Eric,

My belief, for which I also claim to have published a strong case, is
that we do have such a theory, in which the common principle underlying
intelligence is that of Occam programs, which are computationally hard
to extract.

(I don't mean a program in the Occam language, but rather a program
constructed according to an extrapolated Occam's razor.)
Also according to this belief, "understanding" consists of having
such an Occam program, one that exploits underlying structure in order
to generalize. According to this belief, unfortunately, the Occam
program underlying our intelligence is itself unlikely to admit any more
compact Occam program that understands it, and thus may be inherently
not understandable.
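To make the "Occam program" idea concrete, here is a toy sketch (purely illustrative, with an invented token set, not drawn from any real system): a brute-force search for the shortest arithmetic expression in n that reproduces an observed sequence. Even at this tiny scale the search is exponential in program length, which hints at why Occam programs are computationally hard to extract.

```python
from itertools import product

# Toy "Occam program" search: find the shortest expression in n
# (built from a small, hypothetical token set) that reproduces the data.
data = [2, 4, 6, 8, 10]
tokens = ["n", "1", "2", "3", "+", "*"]

def shortest_program(data, max_len=5):
    """Enumerate token strings by increasing length; return the first
    one that evaluates correctly on every data point (n = 1, 2, ...)."""
    for length in range(1, max_len + 1):
        for combo in product(tokens, repeat=length):
            expr = "".join(combo)
            try:
                if all(eval(expr, {"n": i + 1}) == v
                       for i, v in enumerate(data)):
                    return expr
            except Exception:  # most token strings are not valid programs
                continue
    return None

print(shortest_program(data))  # a 3-token expression such as "n+n"
```

The cost grows as len(tokens)**length, so brute force becomes intractable at even modest program lengths; eval here is only acceptable because this is a toy.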



I don't quite agree with this perspective, though my view is pretty close.

I also find it useful to view understanding in terms of algorithmic
information, but I think that "finding the shortest program capable of
computing X" is not a good way of conceptualizing "understanding X."

Rather, I think that "finding the fuzzy set of programs capable of
compressing X, relative to one's knowledge base K" is a better
perspective.

For a complex X, there will be many different programs capable of
compressing X.  Just finding the shortest program for computing X does
not necessarily give a complete understanding of X.
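One crude way to make "compression relative to a knowledge base K" concrete, using zlib's compressed length as a rough stand-in for algorithmic information (an admitted simplification, and the data strings here are invented for illustration):

```python
import zlib

def clen(data):
    """Compressed length in bytes -- a crude proxy for program length."""
    return len(zlib.compress(data, 9))

def cond_clen(x, k):
    """Approximate the cost of describing x given knowledge base k,
    as the extra compressed bytes x adds on top of k."""
    return clen(k + x) - clen(k)

x = b"abc" * 10                 # the data to be understood
k1 = b"abc" * 50                # knowledge base sharing x's structure
k2 = bytes(range(256))          # unrelated knowledge base

print(clen(x), cond_clen(x, k1), cond_clen(x, k2))
```

Relative to k1, x costs almost nothing extra to describe; relative to k2 it costs more. The same X compresses differently against different knowledge bases, which is the sense in which understanding is relative to K.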

Ben's comments, and to some extent his approach to AGI (building code
and then hoping that, when run, it will produce a complex set of
patterns that do stuff), seem somewhat related to this,



Yes, it's related....  The Novamente system can be viewed as attempting
to find a bunch of compressing programs in relevant datasets, especially
in datasets of the form "carrying out action A in context C will lead to
achievement of goal G."  This is not necessarily the most useful way to
view the system in practice, but it is a correct way.  And of course, in
accordance with the "no free lunch theorem", the idea is that it should
be good at finding compressing programs in datasets of the above form
that actually occur in the practical life of an embodied agent, not in
mathematically general datasets of the above form.
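As a toy sketch of what "finding compressing programs" in such datasets might mean (all names and the scoring scheme below are hypothetical illustrations, not Novamente internals): score a candidate rule set by an MDL-style description length, so that a rule set "compresses" the records exactly when stating it plus its exceptions is cheaper than listing every goal outright.

```python
import math

# Invented records of the form "action A in context C led to goal G".
records = ([("near_food", "grab", "eat")] * 10 +
           [("near_agent", "wave", "greet")] * 10 +
           [("near_food", "wave", "greet"),
            ("near_agent", "grab", "eat")])      # two exceptions

goals = sorted({g for _, _, g in records})

def description_length(rules, data):
    """MDL-style score: bits to state the rules, plus bits (uniform code
    over goals) for every record the rules fail to predict."""
    rule_bits = 8.0 * len(rules)              # crude fixed cost per rule
    miss_bits = math.log2(len(goals))
    data_bits = sum(0.0 if rules.get((c, a)) == g else miss_bits
                    for c, a, g in data)
    return rule_bits + data_bits

no_rules = {}
learned = {("near_food", "grab"): "eat",
           ("near_agent", "wave"): "greet"}

print(description_length(no_rules, records),
      description_length(learned, records))
```

The learned rule set wins (18 bits vs. 22) only because the regularity repeats often enough to pay for stating the rules, which is the sense in which a good rule set is a compressing program for the dataset.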


except that for some reason he stipulates that human intelligence is
understandable.  I'm not clear on why he thinks human-level intelligence
is "understandable", or even what he means by this.



I've tried to clarify this above.  What I mean is that humans are able
to detect many meaningful patterns (read: patterns = compressing
programs, if you like) in human behaviors ... and once brain scans work
better, I bet we will be able to detect many very meaningful,
significant patterns emergent between human behaviors and the output of
brain scanners...

OTOH, for a massively superhuman AI, the quantity of patterns we will be
able to detect in this way may be far, far less, because most of the
significant patterns in its behavior and state may have an algorithmic
information content far beyond the capacity of our brains.

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email