On Mon, Apr 22, 2013 at 5:55 AM, John Clark <johnkcl...@gmail.com> wrote:
> On Sat, Apr 20, 2013 at 8:14 PM, Telmo Menezes <te...@telmomenezes.com>
> wrote:
>
>> > There is an entire field of physics, for example, dedicated to studying
>> > emergence in a rigorous fashion
>
>
> True, and the key word is "rigorous", and that means knowing the details.
>
>
>>
>> > Cellular automata show how simple local rules can give rise to
>> > complexity,
>
>
> Yes, but saying something worked by cellular automata wouldn't be of any
> help if you didn't know what those simple local rules were, and to figure
> them out you need reductionism. In talking about art there are two
> buzzwords that, whatever their original meaning, now just mean "it sucks";
> the words are "derivative" and "bourgeois". Something similar has happened
> in the world of science to the word "reductive"; it now also means "it
> sucks". Not long ago I read an article about the Blue Brain Project that
> said it was an example of science getting away from reductionism, and yet
> under the old meaning of the word nothing could be more reductive than
> trying to simulate a brain down to the level of neurons.
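
No argument there. Just to make "simple local rules" concrete, here is
a toy sketch of my own (nothing John referenced): Wolfram's Rule 110, a
three-cell update rule that is nonetheless Turing-complete.

RULE = 110  # the rule number encodes the next state for all 8 neighborhoods

def step(cells):
    # Apply the local rule to every cell, with wrap-around boundaries.
    n = len(cells)
    return [(RULE >> ((cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40   # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

Run it and the triangles and gliders scrolling past are already not
obvious from the eight-entry rule table, which is the whole point.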

I don't know how to say this without resorting to a cliché: "the whole
is more than the sum of the parts". Yes, you have to understand the
details, but when you stick the neurons together, something happens
that is not obvious from the description of the operation of a single
neuron. Let's take a reductionist example, Newton's second law:

F = ma

If I plug in the values of two of the variables I get the value of the
third. If mass doubles, force doubles. Nothing new ever happens; it's
great for explaining motion. But with brains, if I plug in a new
neuron, something radically new can happen. There is no reductionist
law of motion for neurons that tells you how intelligence emerges from
their interactions.
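
To sketch the contrast in code (my own toy example; the two-unit XOR
network is the standard textbook illustration, not a claim about real
neurons): F = ma just scales, while wiring a second threshold "neuron"
to a first computes XOR, which a single such neuron provably cannot.

def force(m, a):
    return m * a  # F = ma: double an input and the output doubles, nothing more

assert force(2.0, 9.8) == 2 * force(1.0, 9.8)

def neuron(w1, w2, bias, x1, x2):
    # A single threshold unit: fires iff the weighted sum clears the bias.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

def xor_net(x1, x2):
    # Two layers of the very same units; no single unit can compute XOR.
    h1 = neuron(1, 1, -0.5, x1, x2)     # fires on "x1 OR x2"
    h2 = neuron(1, 1, -1.5, x1, x2)     # fires on "x1 AND x2"
    return neuron(1, -2, -0.5, h1, h2)  # OR but not AND

assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]

The interesting behavior lives in the wiring, not in any one unit.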

>>
>> > when someone invokes utilitarianism
>
>
> I don't see any difference between "invoking utilitarianism" and just doing
> something that works, and I'm pretty sure that's better than doing something
> that doesn't work.
>
>
>>
>> > a concept that can be dangerous, as history has shown us a number of
>> > times.
>
>
> I can't think of a single case where science was harmed by doing something
> that worked.

I'm worried about harming people, not science. Science cannot be
harmed; it will still be there once we come back to our senses,
provided we survive thermonuclear war, can practice it freely, and
don't end up under the control of a police state enabled by high-tech
surveillance, drones, data mining and so on.

What I meant, though, is that utility is relative to a goal, and goals
can conflict, even with science. Suppose we, the members of
this mailing list, find ourselves as the survivors of some
catastrophic event. Apart from making for a great reality show (I
would pay good money to watch that), we would probably focus on
applying our collective knowledge of boring old science and
engineering to survive, before going back to discussing the nature of
reality.

The problem with pursuing immediate utility aggressively is that it
can easily get us stuck in local optima. It hurts creativity. If I may
draw a parallel with Darwinian evolution, a lot of crap has to be
generated so that something amazing takes place. The crap is the price
we pay for the "miracles".

>>
>> > The missing part I don't understand bugs me.
>
>
> It bugs me too; I also want to know everything, but you can't always get
> what you want. Hey, somebody ought to make a song about that.

Yeah, and they should also mention something about utilitarianism --
*sometimes* you get what you need.

>
>>>>
>>>> >>>If consciousness is easier than intelligence
>>>
>>> >> Evolution certainly found that to be the case.
>>
>> >There is no scientific evidence whatsoever of this.
>
>
> Some of our most powerful emotions like pleasure, pain, and lust come from
> the oldest parts of our brain that evolved about 500 million years ago.

Emotions are two things at the same time: a bunch of signals in the
brain that help with learning, morphogenesis and so on (for a survival
advantage, as you suggest), and the first-person feeling of those
emotions. The 1p feeling of the emotions is the greatest mystery of
all, in my view, and neuroscience has no theory that explains it.
Maybe the 1p experience arises from brain activity, but at the moment
it requires a magic step.

> About 400 million years ago Evolution figured out how to make the spinal
> cord, the medulla and the pons; we have these brain structures just like
> fish and amphibians do, and they deal in aggressive behavior, territoriality
> and social hierarchies. The Limbic System is about 150 million years old
> and ours is similar to that found in other mammals. Some think the Limbic
> system is the source of awe and exhilaration because it is the active site
> of many psychotropic drugs, and there's little doubt that the amygdala, a
> part of the Limbic system, has much to do with fear. After some animals
> developed a Limbic system they started to spend much more time taking care
> of their young, so it probably has something to do with love too.

Again, this is all fine, but the 1p/3p bridge is the mystery. I suspect
this mystery falls outside of what science can address, as has been
discussed ad nauseam on this mailing list.

> It is our grossly enlarged neocortex that makes the human brain so unusual
> and so recent; it only started to get large about 3 million years ago and
> only started to get ridiculously large less than one million years ago. It
> deals in deliberation, spatial perception, speaking, reading, writing and
> mathematics; in other words, everything that makes humans so very different
> from other animals. The only new emotion we got out of it was worry,
> probably because the neocortex is also the place where we plan for the
> future.

I'm fine with all that, but what is the "you" that feels the worry?

> If nature came up with feeling first and high-level intelligence much, much
> later, I don't see why the opposite would be true for our computers. It's
> probably a hell of a lot easier to make something that feels but doesn't
> think than something that thinks but doesn't feel.

I would love to know how to make something that feels. I know how to
make things that think.

>>
>> > People like António Damásio (my compatriot) and other neuroscientists
>> > confuse a machine's ability to recognise itself with consciousness.
>
>
> I see no evidence of confusion in that.

Imagine a computer connected to a camera pointed at itself, running an
algorithm that can identify its own boundaries in any background. Is
it suddenly conscious?
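
To put the thought experiment in code (entirely hypothetical, of
course), "identifying its own boundaries" can be as mundane as
background subtraction:

def self_mask(background, frame, threshold=10):
    # Label as "me" every pixel that deviates from the remembered background.
    return [[abs(p - b) > threshold for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]   # scene with the body hidden
frame      = [[0, 0, 0], [0, 99, 0], [0, 0, 0]]  # the machine now in view
print(self_mask(background, frame))
# [[False, False, False], [False, True, False], [False, False, False]]

A few lines and the machine has "found itself", and nothing about them
suggests a first-person experience appeared along the way.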

>
>>
>> > This makes me wonder if some people are zombies.
>
>
> Without the axiom that intelligent behavior implies consciousness it would
> be entirely reasonable to conclude that you are the only conscious being in
> the universe.

Now we're getting to the heart of it. That axiom is a religious
belief. Unlike other scientific axioms, it doesn't help us build new
gadgets, so it's not even useful in that sense.

>
>>
>> > Computers are what they have always been, Turing machines with finite
>> > tapes.
>
>
> Human brains are what they have always been, a finite number of
> interconnected neurons embedded in 3 pounds of grey jello.

Yes.

>>
>> > The tapes are getting bigger, that's all.
>
>
> Yes, but the grey jello is not getting any bigger and that is exactly why
> computers are going to win.

I agree.
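
For concreteness, here is a machine of exactly that kind, toy-sized (my
own example; it just increments a binary counter). On this view the
only upgrade a bigger computer offers is a longer tape list.

def run(tape, rules, state="start", head=0):
    # Step until the machine halts or the head walks off the finite tape.
    while state != "halt" and 0 <= head < len(tape):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

# Increment a binary number, least significant bit on the right:
# turn trailing 1s into 0s until a 0 can be turned into a 1.
rules = {
    ("start", 1): (0, -1, "start"),  # carry the 1 leftward
    ("start", 0): (1, -1, "halt"),   # absorb the carry and stop
}

tape = [1, 0, 1, 1]                          # binary 1011 = 11
print(run(tape, rules, head=len(tape) - 1))  # [1, 1, 0, 0] = 12

And if the tape is all 1s the head falls off the left end: the
finite-tape limitation in a single line.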

>
>>
>> >Measuring conscious by intelligent behaviour is mysticism,
>
>
> Call it any bad name you like, but the fact is that both you and I have
> been measuring consciousness by intelligent behavior every minute of every
> hour of our waking lives from the moment we were born; but now, if we're
> confronted with an intelligent computer, for some unspecified reason you
> say we're supposed to suddenly stop doing that. Why?

Oh I wouldn't. I might very well suspect the computer is conscious,
but I wouldn't claim to be sure or know why. One of my dreams is to
create a program that would generate that doubt.

>
>>>
>>> >> The only consciousness I have direct experience with is my own and I
>>> >> note  that when I'm sleepy my consciousness is reduced and so is my
>>> >> intelligence,  when I'm alert the reverse is true.
>>>
>>
>> > I agree on intelligence, but I don't feel less conscious when I'm
>> > sleepy.
>
>
> If so, and consciousness is an all-or-nothing matter rather than on a
> continuum, then you should vividly remember the very instant you went to
> sleep last night. Do you?

I subscribe to Russell's remark here.

>>
>> > I'm a bit sleepy right now.
>
>
> Wow, what a temptation. With that opening, if I were in a bad mood I'd make
> some petty remark like "that explains a lot", but I'm not, so I won't.

Yeah, I'd definitely like to watch the reality show.

Telmo.

>   John K Clark
