I am here providing a summary of my '10 big ideas' for Cog-Sci/AGI.
No justification is provided as of yet (that is, my purpose here is
merely to state my 10 big ideas clearly and briefly).  Their status at
this time is that of entertaining speculation only.

So here are the 10 big ideas:

(1)  The extremely low entropy density at the beginning of time, and
the predictable increase in entropy since, show that the universe can
be thought of as an RPOP [Really Powerful Optimization Process]
(albeit an extremely poorly optimized one).  This is NOT to say that
the increase in entropy is the purpose of the universe (that would be
ludicrous); rather, the increase in entropy shows that there is a
universal optimization pressure, and that the universe can be thought
of as an integrated, time-asymmetric system.  Thus, there are
universal terminal values.

(2)  The universal terminal values are ultimately grounded in
aesthetics.  That is to say, there may be a myriad of worthwhile
terminal values, but my claim is that these are all special cases of
'beauty'.  That is, the creation of beauty is the actual purpose of
the universe (the thing being optimized by the universal RPOP).

(3)  Bayesian reasoning is NOT the ultimate system of rational
calculation.  In fact, analogy formation IS.  Whereas conventional
Bayesians would regard analogy formation as a special case of Bayes,
my claim is that the converse is actually the case: it is Bayes that
is a special case of analogy formation, and analogy formation in the
most general sense cannot be reduced to Bayesian reasoning.

(4)  The precise mathematics of analogy formation utilizes the
concepts of *category theory*.  Analogy formation is equivalent to
*ontology merging* - the mapping of a concept from one knowledge
domain to another knowledge domain.  Ultimately, ontology/KR
[Knowledge Representation] is all of effective intelligence.  That is,
my claim is that all the puzzles of cognitive science/AI are merely
sub-problems of ontology/KR.  There's a simple formula of category
theory which tells you exactly how to calculate the 'semantic
distance' between any two concepts and carry out the mapping between
them.  Bayes' theorem is merely a special case of this formula.  This
formula is the *real* secret to the universe!
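
Since I haven't stated the formula, here is a toy sketch of my own
(an illustration only, not the claimed category-theoretic formula):
treat two knowledge domains as small graphs of concepts and relations,
take an analogy to be a structure-preserving map between them, and
take 'semantic distance' to be the number of relations that even the
best map fails to preserve.  The names (solar, atom, best_analogy) and
the toy domains are my assumptions for illustration.

```python
from itertools import permutations

# Each ontology: concept -> list of concepts it relates to (directed edges).
# Toy domains: "sun attracts planet" vs "nucleus attracts electron".
solar = {"sun": ["planet"], "planet": []}
atom  = {"nucleus": ["electron"], "electron": []}

def preserved(mapping, src, dst):
    """Count source relations that the mapping carries onto relations in dst."""
    count = 0
    for a, targets in src.items():
        for b in targets:
            if mapping[b] in dst.get(mapping[a], []):
                count += 1
    return count

def best_analogy(src, dst):
    """Brute-force search for the most structure-preserving injection."""
    src_keys, dst_keys = list(src), list(dst)
    best, best_score = None, -1
    for perm in permutations(dst_keys, len(src_keys)):
        mapping = dict(zip(src_keys, perm))
        score = preserved(mapping, src, dst)
        if score > best_score:
            best, best_score = mapping, score
    total = sum(len(v) for v in src.values())
    # Semantic distance = relations the best mapping fails to preserve.
    return best, total - best_score

mapping, distance = best_analogy(solar, atom)
print(mapping, distance)  # {'sun': 'nucleus', 'planet': 'electron'} 0
```

Brute force is fine for toy ontologies; the point is only that
analogy-as-structure-preserving-mapping can be made precise.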

(5)  A fully general-purpose ontology results in the stratification of
reality into three levels - the *platonic* level (which is timeless,
and is basically equivalent to the Tegmark universe), consisting of
*platonic archetypes* (abstractions); the *system* level, consisting
of reality as seen by observers - dynamical systems with input,
processing and output; and the *artifact* level, consisting of static
*things* that are outputs of the system level above it.

(6)  From (5) - there exist real *platonic mathematical forms*, which
are outside space and time (the platonic level); *mathematical
systems*, which are the implemented algorithms seen by observers
inside space and time; and finally *mathematical artifacts*, which are
the ontologies by which observers classify reality.

(7)  Ontologies are the means by which we *reflect* on knowledge.
They are the *internal language of the mind* which we use to
*communicate* (map) logical concepts from one domain to another.  The
formula referred to in my claim (4) provides the solution to the
problem of goal stability - it tells you exactly how a mind can hold a
stable goal system under reflection.  Recall that this formula is not
Bayesian; instead it uses the math of category theory to show how to
map a concept from one knowledge domain (ontology) to another
knowledge domain (ontology).

(8)  Consciousness is generated by the aforementioned *ontology
merging* (which, recall, I'm claiming is equivalent to *analogy
formation*).  It is simply the integration of concepts from different
knowledge domains.

(9)  The ultimately non-Bayesian nature of true, general-purpose
intelligence is a consequence of Occam's razor.  Occam's razor states
that the simple is favored over the complex.  Popper showed that there
are an infinite number of theories compatible with any given finite
set of observations - successful induction requires a non-Bayesian
ingredient - namely, a means of *setting the priors* to reduce the
number of initial possibilities under consideration to a manageable
number.  Occam's razor itself cannot be reduced to Bayes, because
judgements about what is simple and complex depend on *the semantics*
(logical meaning) of the theories under consideration, whereas Bayes
only deals with prediction sequences - detection of externally
observable patterns.  Bayes only deals with *functionality*
(externally observable consequences) - full intelligence deals with
*semantics* (the meaning of concepts), and this goes beyond mere
Bayesian probability shuffling.
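
A minimal sketch of the "priors come from outside Bayes" point: Bayes'
rule updates whatever prior it is handed, while the simplicity
weighting itself (here, a prior proportional to 2**-complexity, in the
spirit of minimum description length) is a separate ingredient, not
derived from the rule.  The hypothesis names and numbers below are my
made-up toy values for illustration.

```python
hypotheses = {
    # name: (complexity in "bits", likelihood of the observed data)
    "constant": (2, 0.01),
    "linear":   (5, 0.40),
    "wiggly":   (20, 0.45),  # fits slightly better, but far more complex
}

def posterior(hyps):
    """Bayes update with a simplicity prior proportional to 2**-complexity."""
    unnorm = {h: 2.0 ** -bits * lik for h, (bits, lik) in hyps.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = posterior(hypotheses)
print(max(post, key=post.get))  # "linear": simplicity prior outweighs fit
```

Swap in a uniform prior and "wiggly" wins instead - the conclusion
hinges entirely on the externally supplied prior, which is the point.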

(10) The universal terminal values (which, recall, I'm suggesting are
all sub-values of 'beauty') are a consequence of (1)-(9).  Recall that
I said I thought all problems of Cog-Sci are merely sub-problems of
Ontology/KR.  But Ontology/KR is all about *representations of
things*.  And Occam's razor says the simple is favored over the
complex.  In terms of representations of things, this is just to say
that some representations are more elegant than others (which is the
very definition of aesthetics!).  Thus, it follows that universal
values are implicit in the very nature of successful cognition itself.
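
One crude, concrete gloss on "some representations are more elegant
than others" (my illustration, not the author's formalism): use
compressed length as a proxy for descriptive simplicity, in the spirit
of minimum description length.  A patterned representation compresses
far better than a patternless one over the same alphabet.

```python
import random
import zlib

def description_length(representation: str) -> int:
    """Crude 'elegance' score: length of the zlib-compressed text."""
    return len(zlib.compress(representation.encode()))

random.seed(0)
regular   = "ab" * 500                                         # highly patterned
irregular = "".join(random.choice("ab") for _ in range(1000))  # patternless

print(description_length(regular) < description_length(irregular))  # True
```

The same 1000 characters, but the regular string admits a far shorter
description - which is all "more elegant" means here.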

Yeah.  That's it.  My genius stands or falls on my 10 big ideas ;)
