On Dec 29, 8:29 pm, John Clark <johnkcl...@gmail.com> wrote:
> On Wed, Dec 28, 2011 at 2:54 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> >>Are you saying that hallucinations, dreams, and delusions don't exist?
>
> > >They don't exist, they insist. Their realism supervenes upon the
> > interpretation of the subject so that they have no independent ex-istence.
>
> But you can say the same thing about ANYTHING.

Are you arguing that there is no difference between dreams and
reality?

> Making predictions and
> manipulating the world is the most we can hope for, nobody has seen deep
> reality. Our brain just reacts to the electro-chemical signals from nerves
> connected to a transducer called an eye.

That's factually incorrect. Our perceptions are shaped by our
expectations. You see these words not as retinal signals but as
presentations of semantic text. If you could not read English, the
electro-chemical signals from the nerves would be no different, yet
your brain would 'just react' in a different way. That difference is
insignificant, however, compared to the difference in the sensemaking
experience of the person using that brain and those eyes. Sense is
*not* transduced, it is leveraged. It is a specular sensorimotive
experience. There is no transduction homunculus turning optical
signals into Cartesian Theater films.


> Our computers react to the
> electronic signals from wires connected to a transducer called a TV camera.

That's why they don't see. They just convert electronic signals from
one form to another. Unlike us, they are not an audience for their
own reactions.

>
> Our brain uses theories to explain these signals, so would intelligent
> computers. Theories explain how some sense sensations relate to other sense
> sensations. For example we receive information from our eyes, we interpret
> that information as a rock moving at high speed and heading toward a large
> plate glass window, we invent a theory that predicts that very soon we will
> receive another sensation, this time from our ears, that we will describe
> as the sound of breaking glass.

No, it's just the opposite. We see a rock moving at high speed and
heading toward a window. That is the literal reality. We invent
nothing. Our experience fills us with a sensory expectation of the
rock smashing through the window. Then, after millions of years, we
invent a theory that we receive 'information' from our eyes and
'interpret that information'. I used to think of it that way too, but
not anymore. Information is the theory, the rock is the reality. It's
pretty straightforward. The sensation is primary; the third-person
explanation of the sensation is a welcome addition which improves the
sensation, but it goes off the rails completely if we try to replace
one with the other. It fails because it relies on a hypothetical
transparent voyeur that takes human perception for granted. If you
take it for granted to begin with, you can't learn anything about it
and only obscure the reality with prejudice and just-so stories.


> Soon our prediction is confirmed so the
> theory is successful; but we should remember that the sound of broken glass
> is not broken glass, the look of broken glass is not broken glass, the feel
> of broken glass is not broken glass. What "IS" broken glass? It must have
> stable properties of some sort or I wouldn't be able to identify it as a
> "thing". I don't know what those ultimate stable properties are, but I know
> what they are not, they are not sense sensations. I have no idea what glass
> "IS". The sad truth is, I can point to "things" but I don't know what a
> thing "IS", and I'm not even sure that I know what "IS" is, and an
> intelligent computer would be in exactly the same boat I am.

You have solved the problem of the stupidity of computers by making
yourself as stupid as they are. I know what a thing is. I know what
glass is. I know what IS is. I know what broken glass is; I know that
its sound is an aspect of what it is to me; I know that its look is
part of what it is to me. It is instantaneously familiar, with zero
theory required. My anticipation of breaking glass in your example is
not a prediction or a theory, it is semantic momentum: experience
repeated literally in memory and figuratively through movies, TV,
stories, and my own imagination.

Of course there is more to glass than my experience of it, and if I
were interested I could expand my experience with it and knowledge of
it to a great extent, but still not know all of what it is and does in
the privacy of its own frame of reference.

>
> > There's no harm in anthropomorphizing a stuffed animal or emoticon or
> > whatever
>
> What about anthropomorphizing your fellow human beings? It seems to me to
> be very useful to pretend that other people have feelings just like I do,
> at least it's useful when they are not acting unintelligently, like when
> other people are sleeping or dead.

We don't have to anthropomorphize other human beings; they are already
anthropomorphic. We just have to know that we ourselves are human too.

>
> > but if you want to understand consciousness or emotion [...]
>
> You have only one example of consciousness that you can examine directly,
> your own. If you want to study the consciousness of others, be they made of
> meat or metal, then like it or not you MUST anthropomorphize.

I can clearly tell the difference between a human being and a voice
mail system. I am under no obligation to anthropomorphize cybernetic
systems. It makes sense that humans evolved from other animal species,
so I am comfortable anthropomorphizing living organisms to an extent,
in a loose and figurative way. I might think an ant is cute and call
it 'him' or something, but I might kill it for no important reason,
whereas I wouldn't do that with a cat or chimpanzee.

>
> > Computers can be thought of as billions of little plastic THANK YOUs
> > ornamenting the microelectronic gears of a logical clock.
>
> You take something grand and glorious, like intelligence or consciousness,
> and break it up into smaller and simpler pieces, then you take those pieces
> and break them up again into even smaller and simpler pieces, then you
> repeat the process again, and again, and again, and again. Eventually you
> come to something that is not the slightest bit grand or glorious and you
> say, "this can not have anything to do with intelligence or consciousness
> because it is too small and simple and is no longer grand and glorious".

No, I don't do that. I say the smallest particle has to have the
potential for grand and glorious experience inherently, or else such
experience could never arise. A trillion ping-pong balls in a vacuum
will never become alive, intelligent, or conscious. The primitive
building blocks must be blocks which can build significance to begin
with. 79 ping-pong balls will never be an atom of gold, no matter how
you spin them or crush them.

> And you want to understand how something very complicated works so you
> break it into smaller pieces and you come to understand how the individual
> pieces work but then you say "I want to understand this thing but that
> explanation can't be right because I understand it". Foolish argument is it
> not?

It's because you are looking at the wrong pieces. If I wanted to
understand the Taj Mahal I would visit it, read its history, study
Mughal culture, architecture, etc. Your view would only consider
studying bricks and marble, congratulate itself on being so pragmatic,
and announce confidently that the Taj Mahal is nothing but stone
blocks and that I would be foolish to resort to any non-masonry
explanations of it.

>
> > Information doesn't feel like anything.
>
> Interesting piece of information, how did you obtain it? Did this
> information about information come to you in a dream?
>
> > It's an inversion to consider information genuinely real.
>
> There you go again with the "R" word. OK, if it makes you happy there will
> never be an AI that is "really" intelligent, but it could easily beat you
> in any intellectual pursuit you care to name; so I guess being "real" isn't
> very important.

How about I make you a deal? You will be able to beat any
computer at any task for all eternity. All it will cost you is your
consciousness. You will be in a coma forever and never experience a
single sensation again. Is it a deal? So I guess being intelligent and
beating others in intellectual pursuits isn't very important.

>
> > Consciousness research doesn't go anywhere because it's being approached
> > in the wrong way
>
> It doesn't go anywhere because consciousness theorizing is too easy, any
> theory will work just fine; but intelligence theorizing is hard as hell and
> most intelligence theories fail spectacularly, so enormous progress has
> been made in making machines intelligent. That is also why armchair
> theorists always talk about consciousness and never intelligence;
> consciousness is easy but intelligence is hard.

When something is hard it can also be because you're doing it wrong.

>
> > Whether or not a machine could be conscious is the wrong question to ask.
>
> I agree, even if the machine isn't conscious that's its problem not mine;
> the question to ask is "is the machine intelligent?". And the answer is
> that it is if it behaves that way.

Intelligent in the trivial sense, sure. Clever is maybe a better term.
Intelligence implies understanding, which requires awareness.

>
> > A machine isn't an actual thing, it's just a design
>
> Yes a design, in other words it's just information.

Which isn't an actual thing either. Designs and information are not
causally efficacious.

> And the thing that
> makes your 3 pound brain different from 3 pounds of corned beef is the way
> the atoms are arranged, in other words information.

It's the other way around. The arrangement of the atoms is utterly
meaningless and indistinguishable from corned beef were it not for the
significance of its providing a human life experience for a human
such as me. If we found a brain growing in the attic and had never
seen one before, we would put gloves on and throw it in the trash.

>
> > Intelligence can't evolve without consciousness.
>
> If so then the Turing Test works for consciousness and not just
> intelligence; so if you have a smart computer you know it is conscious;

Trivial intelligence is not consciousness.

> but
> the reverse is not necessarily true, a conscious computer may or may not be
> smart.

True. So? Smart is worthless without consciousness.

>
> > Determinism cannot have opinions. What would be the point?
>
> I don't understand the question, what would be whose point?

The point of anything being able to have an opinion. If the universe
were deterministic, then what would be the point of feeling one way or
another about what was or wasn't happening?

>
> > Why should you have any preference in how things are arranged if they
> > have always
> > been and will always be arranged in the way that they are determined to be?
>
> Because neither you nor an outside observer knows what those prearrangements
> will lead to, deterministic or not the only way to know what you are going
> to do next is to watch you and see.

But the whole issue is moot if it's deterministic. What is your motive
to care about what you are going to do next if you can't do anything
about it? It's like saying that even though TV doesn't exist, you
would still want to know what time the best shows are on.

>And if you don't like everything always
> happening because of cause and effect that's fine, the alternative is that
> some things do not happen because of cause and effect, and there is a word
> for that: "random".

Those are not the only two choices. Other people on this board make
that mistake too, so I am very familiar with it. The word for that is
"intention". Free will. Motive. It is neither random nor
deterministic. It may be influenced by conditions outside of our
control, but that doesn't change the fact that it is a concrete,
ordinary feature of our reality. To take your position literally would
be pathological machinemorphism. Fortunately you don't really believe
what you are saying, or you wouldn't try to debate with me, because
that could only have a deterministic or random result.

>If you find that being a pair of dice is philosophically
> more satisfying than being a cuckoo clock that's fine with me; there is no
> disputing matters of taste.

Yet being a living organism is not an option for you. Do you not see
the incredible cognitive bias of that? You are saying, literally, "I
know that I am not really myself and I know nothing about anything
except that nobody is really themselves or knows anything either". I
made sense of things that way too. It works, sort of. But the way it
makes sense to me now is like waking up from a dream compared to
that.

>
> > That's circular reasoning. You can't justify the existence of feeling
> > or meaning by saying that meaning makes things feel meaningful.
>
> The feeling of freedom comes from the inability to always predict what we
> are going to do next even in an unchanging environment, and this inability
> would be there even if the universe were 100% deterministic (it's not), and
> most people find this feeling pleasant. What is circular about that?

Because you are taking feeling for granted. It just 'comes from' the
inability to predict. Really? A rock can't predict anything; does that
mean it must find that feeling pleasant?

>
> >  The neuron doctrine is just one model of consciousness,
>
> You can say that again! There are more models of consciousness than you can
> shake a stick at.
>
> >  one which has failed to have any explanatory power in reality.
>
> Yes, exactly like every other model of consciousness, not one has the
> slightest bit of experimental evidence in its favor; consciousness
> theories are all equally useless. So let's talk about intelligence theories
> even though that is astronomically more difficult.

Intelligence theories seem dull to me. It's just puzzles.
Consciousness theories are useless because consciousness is useless.
Being useless is the universe's ultimate luxury.

>
> > A human being doesn't use neurons, it is the collective life experience
> > of neurons. They are living organisms, not machines.
>
> What about the parts of those neurons? Is the neurotransmitter
> acetylcholine a living organism?

No, it's just an organic molecule. More mechanical than a living cell,
but no less mechanical than a dead cell.

> And what about the parts of that molecule,
> is a hydrogen atom a living organism?

No, it's a sensorimotive-electromagnetic micro-monad.

> Does acetylcholine know about
> philosophy when you think about Plato, or does acetylcholine just obey the
> laws of chemistry?

I doubt that acetylcholine obeys the laws of chemistry; it just knows
the sweet taste of an acetylcholine receptor and the foul stench of an
acetylcholine antagonist, and we interpret the consequences of that as
the laws of chemistry. Also, maybe all the acetylcholine in a given
organism has a unified experience like we do. It might have a systemic
political agenda and vie with other neurotransmitters for
representation, rigging the elections from behind the scenes to
influence our behaviors.

>
> > It's not the literal sense that matters when we are talking about
> > subjectivity.
>
> Subjectively you don't feel exactly like you did one year ago but pretty
> much you do, so something must have remained pretty much constant over that
> time and if it wasn't atoms (and it certainly was not) and it wasn't
> information then what was it?

Ah, now you're getting somewhere. It's the semantic momentum of the
self as a whole. On the inside of a body, cell, or molecule, things
are completely different and opposite from the outside. It's not about
stuff divided by distance in space; it's about experiences multiplied
by significance in time. This is a huge epiphany. Earthshaking. It's
right in front of you. You have only to consider it with an unbiased
mind. Our experiences are literally ours alone, but figuratively they
are a cumulative entanglement of all of the experiences of ourselves,
our tissues, atoms, etc., but also families, friends, generations,
cultures, species, planet, etc. What is the same about this
conversation? It's not the words or the characters, not the packets or
bytes, but something must be the same about it. It's the sense it
makes to us. The themes and meanings. The experience and
participation. This is what it is to be a living being.

>
> >  Information doesn't exist.
>
> Hmm, yet another of those things that do not exist. It seems that lack of
> the existence property does not cramp the style of these things very much.

If you don't exist, expectations are low to begin with.

>
> > If you make a mistake though, your friend might catch it, but the
> > calculator cannot.
>
> Your friend is far more likely to make an error in arithmetic than a
> calculator is.

Sure, but the friend is far more likely to look like Sofia Vergara.

>
> >  You are looking at the exterior behavior of the neuron only.
>
> You are looking at the exterior behavior of the microelectronic switches
> only.

Because that's all we have to look at. Since we are the interior
behavior of neurons, we can't doubt their awareness. Microelectronic
switches could be composing symphonies behind our backs, but I think
it's sophistry to entertain that line of thinking, to be honest.

>
> > Our entire lives are literally created through neurons and we know that
> > they are filled with human feeling and experiences
>
> What's with this "our" business? I know that I am conscious and I have a
> theory that you are too when you are not sleeping or dead, in other words
> when you act intelligently; but I can't prove it and it's only a theory.

Is it really a theory though? Did you really sit down one day and
think, "I have a theory that I am not the only person on Earth"? Why
would such an elementary and self-evident sense need to have a theory
attached to it? It's nice to have a theory too, don't get me wrong,
but it is not necessary to question the obvious just because it can't
be proved to a rigid and arbitrary empirical standard.

>
> > What humans do is an example of human intelligence. What computers do is
> > an example of human intelligence at programming semiconductors.
>
> According to that reasoning Einstein was not intelligent, it was Einstein's
> teachers that were intelligent.

Einstein wasn't a good student. He used thought experiments that he
made up. Like me. So his teachers were the intelligent ones, even
though he was self-taught?

>  1952 was a watershed year in the history
> of AI: in that year Arthur Samuel wrote a checker-playing program, and the
> interesting thing is that the program could pretty consistently beat Arthur
> Samuel at playing checkers.

AI is cool. I only have a problem with it being confused with
consciousness, which is *much* cooler.

>
> > The semiconductors know all about voltage and current but nothing about
> > the messages and pictures being traded through those systems.
>
> Neurons know about synapse voltages and ion concentrations but nothing
> about the messages and pictures being traded through the brain.

That's true, and those things could all occur just as they do without
any messages or pictures being associated with them, but since we know
for a fact that they are synchronized and mutually causally
efficacious, we don't have any reason to doubt the relation between
them. Until we can plug semiconductors into our brain, we won't know.
The fact that we can't already should suggest that there is a fairly
important difference to begin with.

>
> > Computation is not intelligence. It's really just organized patience.
>
> Regardless of what it "just" is, it can "just" outsmart you.

You seem focused on competition. I don't have any insecurities about
computers. They can beat me in every board game and trivia show on the
planet. I will happily take the consolation prize: eating, sleeping,
laughing, complaining. I will never be jealous of an inanimate object.

>
> > The computer is an infinitely patient and accurate moron with a
> > well-trained muscle instead of a mind.
>
> A moron that can nevertheless make you or me look like idiots, so if you're
> right and computation is not intelligence then computation is better than
> intelligence because one can outsmart the other.

That's not a very smart way of looking at it. Is being a dead smart
person better than a live idiot?

Craig
