On 15 Jun 2011, at 18:47, meekerdb wrote:
On 6/15/2011 6:56 AM, Bruno Marchal wrote:
Doesn't this objection only apply to attempts to construct an AI
human-equivalent intelligence? As a counter example I'm thinking
of Ben Goertzel's OpenCog, an attempt at artificial general
intelligence (AGI), whose design is informed by a theory of
intelligence that does not attempt to mirror or model human
intelligence. In light of the "Benacerraf principle", isn't it
possible in principle to provably construct AIs so long as we're not
trying to emulate or model human intelligence?
I think that comp might imply that simple virgin (non-programmed)
universal (and immaterial) machines are already conscious. Perhaps
even maximally conscious. Then adding induction gives them
Löbianity, and this makes them self-conscious (which might already
be a delusion of some sort). Unfortunately, the hard task is to
interface such (self-)consciousness with our probable realities
(computational histories). This is what we can hardly be sure about.
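[Editor's note: the "Löbianity" mentioned above refers to Löb's theorem in provability logic; a sketch of the standard statement, in the usual notation (the modal reading is an assumption about Bruno's intended sense):

% Löb's theorem: for a theory T with enough induction (e.g. extending PA),
% if T proves Prov_T("p") -> p, then T proves p.
% In the modal logic of provability (GL), with Box read as "provable":
%
%     \Box(\Box p \rightarrow p) \rightarrow \Box p
%
% A "Löbian machine", in Marchal's usage, is one whose provability
% predicate satisfies this schema.]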
I still don't know if the brain is just a filter of consciousness,
in which case losing neurons might enhance consciousness (and some
data in neurophysiology might confirm this). I think Goertzel is
creating a competent machine more than an intelligent one, from
what I have read about it. I oppose intelligence/consciousness to
competence/ingenuity. The first is needed to develop the latter, but
the latter has a negative feedback on the first.
There is a tendency to talk about "human-equivalent intelligence" or
"human level intelligence" as an ultimate goal. Human intelligence
evolved to enhance certain functions: cooperation, seduction,
bargaining, deduction,... There's no reason to suppose it is the
epitome of intelligence. Intelligence may take many forms, some of
which we would have difficulty realizing or crediting. Like a
universal machine that is not programmed, which by one measure is
maximally intelligent but also maximally incompetent. Even in
humans intelligence is far from one-dimensional. A small child is
extremely intelligent as measured by the ability to learn, but not
very smart as measured by knowledge.
So we agree violently on this, to borrow an expression from Russell.
When I want to be cynical, I define humans as an ape which nails other
apes to crosses. Nothing to be proud of.
It is a cultural problem, especially in the West, that humans
believe they are the last word of God. Were I God (!), that
would be enough to make me invest more in spiders and birds, or other
creatures. They are more modest. In a sense they might be "more"
Löbian than us.
Competence can kill intelligence. It will depend on us; I am not a
fatalist, but we might be the 'dinosaurs of competence'.
Will say more in other replies, probably.
You received this message because you are subscribed to the Google Groups
"Everything List" group.