*/Eliezer/*'s hubris about a Bayesian approach to intelligence is
nothing more than the usual 'metabelief' about a mathematics... or about
computation... meant in the sense that "cognition is computation", where
the computation is done BY the universe (with the material of the
universe used to manipulate abstract symbols).
*You don't have to work so hard to walk away from that approach...*
Computationalism is FALSE in the sense that it cannot be used to
construct a scientist.
A scientist deals with the UNKNOWN.
If you could compute a scientist you would already know everything!
Science would be impossible.
So perhaps you *can* 'compute/simulate' a scientist - but if you could,
the science must already have been done... hence you wouldn't need to.
Computationalism is FALSE in the sense of 'not useful', not false in the
sense of 'wrong'.
You cannot model a modeller of the intrinsically unknown. As a
computationalist manipulator of abstract symbols you are required to
deliver a model of how to learn - one in which you must specify how all
novelty shall be handled! In other words, you can't deal with the REAL
unknown - where you have no such model!...
i.e. a computationalist scientist is an oxymoron: a logical
contradiction. If you claim otherwise then you are begging the question
in favour of computationalism whilst failing to predict an a-priori
unsupervised observer (a scientist).
The Bayesian 'given' (the conditional) assumes knowledge of a given
which is a-priori not available. It assumes observation of the kind we
have... otherwise how would you know of any options to choose as
givens?... Furthermore, even if somehow we were to experiment to resolve
a choice of 'givens' (Bayesian conditionals) as being the 'truth', there
is potentially an enormous collection of 'givens', all of which can be
inserted into the same Bayesian predictor... resulting in degenerate
knowledge... you know NOTHING, because you fail to resolve anything
useful about the world outside. You don't even know there IS an
'outside'.
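The dependence on the assumed 'given' is easy to see numerically. A minimal sketch (illustrative numbers only, not anything from the original post): the SAME observation, pushed through Bayes' rule under different assumed priors, yields wildly different posteriors - and nothing inside the formalism resolves which prior was the 'true' given.

```python
# Bayes' rule sketch: one fixed observation, three assumed priors.
# All numbers are illustrative assumptions, chosen only to show how
# the posterior is driven by the choice of 'given'.

def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """P(H|D) = P(D|H) * P(H) / P(D), with P(D) by total probability."""
    p_data = p_data_given_h * prior_h + p_data_given_not_h * (1.0 - prior_h)
    return p_data_given_h * prior_h / p_data

# Fixed likelihoods for the one observation: P(D|H)=0.8, P(D|not H)=0.3.
# Only the assumed prior changes between runs.
for prior in (0.01, 0.5, 0.99):
    print(f"prior={prior:.2f} -> posterior={posterior(prior, 0.8, 0.3):.3f}")
```

Running this gives posteriors ranging from near 0 to near 1 for the very same data - the "knowledge" delivered is whatever the inserted given dictates.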
The Bayesian (indeed any computationalist) approach fails to predict
observation (in the sense of ANY observation/an observer, not a
particular observation) and fails to predict the science that might
result from an observer.
This is the Achilles heel of the computationalist argument.
The computationalist delusion (dressed up in Bayesian or any other
abstract symbol-manipulator's clothes) has to stop right here, right now
and for good.
BTW, this does not mean that 'cognition is not computation'... I hold
that cognition is NATURAL symbol manipulation, not ABSTRACT symbol
manipulation. But that's a whole other story... The natural symbols are
Please feel free to deliver the above to Eliezer. He'll remember me!
Tell him the AGIs he is so fearful of are DOORSTOPS and will be
pathetically vulnerable to human intervention. The whole AGI
fear-mongering realm needs to get over themselves and start being
scientific about what they do. It's all based on assumptions which are
You received this message because you are subscribed to the Google Groups
"Everything List" group.