On 22 Apr 2013, at 16:51, Telmo Menezes wrote:
On Mon, Apr 22, 2013 at 5:55 AM, John Clark <[email protected]> wrote:
On Sat, Apr 20, 2013 at 8:14 PM, Telmo Menezes <[email protected]> wrote:
There is an entire field of physics, for example, dedicated to studying emergence in a rigorous fashion.
True, and the key word is "rigorous", and that means knowing the details.
Cellular automata show how simple local rules can give rise to complexity,
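Telmo's point is easy to make concrete. An elementary one-dimensional cellular automaton updates each cell from only its own state and those of its two neighbours; rule 110, for instance, is known to be Turing universal. Here is a minimal sketch (the function name, grid size and starting configuration are illustrative choices of this edit, not anything from the thread):

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each new cell depends only on its own state and those of its two
    neighbours (wrapping at the edges) -- a purely local rule."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch non-obvious structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The rule number's binary digits encode the new state for each of the eight possible three-cell neighbourhoods, so the entire "physics" of the system fits in one byte, yet the global evolution is rich enough to compute anything.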
Yes, but saying something worked by cellular automata wouldn't be of any help if you didn't know what those simple local rules were, and to figure them out you need reductionism. In talking about art there are two buzzwords that, whatever their original meaning, now just mean "it sucks"; the words are "derivative" and "bourgeois". Something similar has happened in the world of science to the word "reductive"; it now also means "it sucks".
Not long ago I read an article about the Blue Brain Project which said it was an example of science getting away from reductionism, and yet under the old meaning of the word nothing could be more reductive than trying to simulate a brain down to the level of neurons.
I don't know how to say this without resorting to a cliché: "the whole is more than the sum of the parts". Yes, you have to understand the details, but when you stick the neurons together, something happens that is not obvious from the description of the operation of a single neuron. Let's take a reductionist example, Newton's second law:
F = ma
If I plug in the values of two of the variables I can get the value of the third one. If mass doubles, force doubles. Nothing new ever happens.
Yet, with F determined by three bodies (or more), this is already Turing universal.
It's great for explaining motion. But with brains, if I plug in a new neuron, something radically new can happen. There is no reductionist law of motion for neurons if you want to understand how intelligence emerges from their interactions.
OK. Once we have a universal machine, we get an autonomous universal layer. To understand the working of a program good at chess, understanding the working of the lower-level gates will not help, except by suggesting that a universal level is at play.
Some of our most powerful emotions, like pleasure, pain, and lust, come from the oldest parts of our brain, which evolved about 500 million years ago.
Emotions are two things at the same time: a bunch of signals in the brain that help with learning, morphogenesis and so on (for a survival advantage, as you suggest), and the first-person feeling of these emotions. The 1p feeling of the emotions is the greatest mystery of all, in my view, and neuroscience has no theory that explains it. Maybe the 1p experience arises from brain activity, but at the moment it requires a magic step.
But comp explains why there is a magic step, and where the magic comes from. The price is that we have to explain the brain from that "magic".
About 400 million years ago Evolution figured out how to make the spinal cord, the medulla and the pons; we have these brain structures just like fish and amphibians do, and they deal in aggressive behavior, territoriality and social hierarchies. The limbic system is about 150 million years old, and ours is similar to that found in other mammals. Some think the limbic system is the source of awe and exhilaration because it is the active site of many psychotropic drugs, and there's little doubt that the amygdala, a part of the limbic system, has much to do with fear. After some animals developed a limbic system they started to spend much more time taking care of their young, so it probably has something to do with love too.
Again, this is all fine, but the 1p/3p bridge is the mystery. I suspect this mystery falls outside of what science can address, as has been discussed ad nauseam on this mailing list.
That's a dangerous move. Science can address the question, and in this case provides perhaps the best solution, in the sense that it explains how and why machines looking inward can know some truths, and can know that such truths are not communicable or justifiable. This can be done without more magic than addition and multiplication.
It is our grossly enlarged neocortex that makes the human brain so unusual and so recent; it only started to get large about 3 million years ago, and only started to get ridiculously large less than one million years ago. It deals in deliberation, spatial perception, speaking, reading, writing and mathematics; in other words, everything that makes humans so very different from other animals. The only new emotion we got out of it was worry, probably because the neocortex is also the place where we plan for the future.
I'm fine with all that, but what is the "you" that feels the worry?
The true and consistent believer. It has no name, nor any identity card, yet you know him very well.
In arithmetic it will be Bp & Dt & p, seen by the divine intellect (G*). This is what makes the relative numbers develop feelings, and those feelings obey some logical laws which can explain why you feel that such things are not amenable to science.
Yet science can explain why some truths exist that can't be amenable to science, notably when the mechanist hypothesis is made.
That last sentence *is* amenable to science, and is actually already amenable to machines' science. Comp provides a meta-solution by explaining "scientifically" how machines can encounter (and, if honest enough, cannot not encounter) a big thing which is such that science can only be a lantern on that big thing (the qualia and the quanta appearing as some aspects of it).
If nature came up with feeling first and high-level intelligence much, much later, I don't see why the opposite would be true for our computers. It's probably a hell of a lot easier to make something that feels but doesn't think than something that thinks but doesn't feel.
I would love to know how to make something that feels.
Strictly speaking it is not that difficult. In a sense it has already been done, by Babbage, but also by bacteria.
You need two things:
1) a local Turing-universal machine (a brain, a computer, DNA, arithmetic, etc.);
2) patience.
Another solution would consist in meeting a woman, or vice versa, and having children. It is a bit the same.
We don't build intelligent machines; we fish them out of the abysses of the arithmetical ocean.
I know how to make things that think.
Really? Then ask that thing which thinks how it thinks when it is not thinking and just waiting for a while. Be polite, and ask that thinking entity how it feels to be a thinking being.
That's what I did, in a way, for any sound and consistent extension of any machine believing in Robinson arithmetic (the ontology) and Peano Arithmetic (the observer and/or the dreamer).
You are right, it is easy to build a thinking machine. But incompleteness suggests, and shows (accepting the standard definitions of knowledge and observation), that feelings are just unavoidable. And doubly so if you give them a universal goal, like "help yourself" or "get closer to <whatever non-nameable>".
People like António Damásio (my compatriot) and other neuroscientists confuse a machine's ability to recognise itself with consciousness.
I see no evidence of confusion in that.
Imagine a computer connected to a camera pointed at itself, running an algorithm that can identify its own boundaries in any background. Is it suddenly conscious?
Not necessarily. It might take time to bet on an identification.
This makes me wonder if some people are zombies.
Without the axiom that intelligent behavior implies consciousness it would be entirely reasonable to conclude that you are the only conscious being in the universe.
Now we're getting to the heart of it. That axiom is a religious belief. Unlike other scientific axioms, it doesn't help us in building new gadgets, so it is not even useful in that sense.
John is right. Except that intelligence -> consciousness is not entirely a comp axiom; it is more a comp theorem, with consciousness defined, for example, by the fixed point of the doubt (what is true for some machine, but not justifiable by that machine).
So it is not a religious belief. It is a theorem in computer science once you accept the definition of consciousness above, and the classical theory of knowledge (the modal logic S4).
And this, plus the FPI, leads to a notion of observable which can be compared to the empirically observable. So it is a theory of consciousness with testable consequences, even if indirect.
Computers are what they have always been, Turing machines with finite tapes.
Human brains are what they have always been, a finite number of interconnected neurons embedded in 3 pounds of grey jello.
Yes.
The tapes are getting bigger, that's all.
Yes, but the grey jello is not getting any bigger, and that is exactly why computers are going to win.
I agree.
The expression "computers will win" is very ambiguous. In a sense they have already won, with the bacteria, in many senses even.
We will get artificial brains long before we fish anything as clever as a spider; in that sense we will be virtualized before we get non-controllable machines. For economic reasons, non-controllable machines will grow, but it will take some time before they get the right to vote (it is more a human problem than a machine problem, or it is the eternal problem of recognizing oneself in the other).
There are ethical problems. Tomorrow planes will get limbic systems, and we will copy nature a lot, with a larger space of possible hybridizations, together with the catastrophes and disasters.
Measuring consciousness by intelligent behaviour is mysticism,
Call it any bad name you like, but the fact is that both you and I have been measuring consciousness by intelligent behavior every minute of every hour of our waking life from the moment we were born;
Here I don't follow Clark. Consciousness is not intelligent behavior: it is not the amount of information which counts for consciousness, nor a way to interpret it rapidly; it is more like an instinctive admission or confirmation of an instinctive (unconscious) bet. It is an oscillation between Dt and Dt & t, somehow.
but now, if we're confronted with an intelligent computer, for some unspecified reason you say we're supposed to suddenly stop doing that. Why?
Oh, I wouldn't. I might very well suspect the computer is conscious, but I wouldn't claim to be sure or know why. One of my dreams is to create a program that would generate that doubt.
Peano Arithmetic, ZF, ... they have that doubt. They already believe that if they don't believe in 0=1, then they don't believe that they don't believe in 0=1.
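In provability-logic notation (a sketch added here for clarity, writing the box for the theory's arithmetized provability predicate and ⊥ for 0=1), this is the formalized second incompleteness theorem, which PA proves about itself as a consequence of Löb's theorem:

```latex
% If I am consistent, then I cannot prove my own consistency --
% and PA itself proves this conditional.
\mathrm{PA} \vdash \neg\Box\bot \rightarrow \neg\Box\neg\Box\bot
```

It is in this precise sense that such theories "doubt": they can never rule out their own inconsistency from the inside.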
The only consciousness I have direct experience with is my own, and I note that when I'm sleepy my consciousness is reduced and so is my intelligence; when I'm alert the reverse is true.
I agree on intelligence, but I don't feel less conscious when I'm sleepy.
If so, and consciousness is an all-or-nothing matter and is not on a continuum, then you should vividly remember the very instant you went to sleep last night. Do you?
I subscribe to Russell's remark here. Can we be sure we have been unconscious at any time?
Are you OK that "I am not conscious" is meaningless? It is obviously true ... for the zombie. It is obviously false when asked of conscious people.
But "I have been unconscious" is already a theory, a conscious belief, more easily doubted than "I am conscious".
There is a sort of "non-consciousness", or "subconsciousness", at night, in sleep, but that is more like an altered state of consciousness than an absence of consciousness. It is apparently "programmed" not to ease the remembering of that state, but people interested in consciousness can explore it.
I'm a bit sleepy right now.
Wow, what a temptation. With that opening, if I were in a bad mood I'd make some petty remark like "that explains a lot", but I'm not, so I won't.
Yeah, I'd definitely like to watch the reality show.
Yes, we can enjoy the exploration too. Note that the inside is sometimes related to the outside :)
Bruno
Telmo.
John K Clark
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
http://iridia.ulb.ac.be/~marchal/