On Jul 28, 10:55 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Thu, Jul 28, 2011 at 2:44 PM, Craig Weinberg <whatsons...@gmail.com>wrote:

> > It's only you who is aware that they are questions. Your copy machine
> > is no different from any copy machine except that it generates a
> > recording of a different page than the one fed into it.
>
> In order for it to generate such a copy, it must intelligently select what
> to return out of a near infinite number of possibilities.  How could this
> decision be made without understanding the question, and applying
> intelligence to choose an acceptable response?  You might say the process is
> not intelligent, but this seems contrary to the definition of intelligence.

Google has no understanding of what you type. It just compares the
text string to other text strings and derives statistical
probabilities which it spits out. You can call that intelligence if
you want to, but then what would you call something that not only
understands what you mean but can give you their opinion about it?
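
Just to make the contrast concrete, here is a toy sketch of that kind of
blind string matching - pure Python, nothing remotely like Google's actual
pipeline, only an illustration of how an 'answer' can be ranked and returned
with no understanding anywhere in the loop:

    # Toy retrieval by word overlap: the "best answer" is whichever stored
    # string happens to share the most vocabulary with the query, scored by
    # cosine similarity. (Hypothetical illustration, not any real search engine.)
    from collections import Counter
    from math import sqrt

    def vectorize(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        norm_a = sqrt(sum(v * v for v in a.values()))
        norm_b = sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    pages = [
        "how to worry about money",
        "the rules of chess explained",
        "self driving cars and the rules of the road",
    ]

    query = "do self driving cars understand the rules of the road"
    q = vectorize(query)
    print(max(pages, key=lambda p: cosine(q, vectorize(p))))
    # -> "self driving cars and the rules of the road"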

> Whether or not the intelligent machine experiences the intelligence, would
> you not still say this machine *is* intelligent?

No, the machine isn't even a machine, it's just a bunch of silicon, or
bowling pins, or dominoes. We interpret it as a machine if we want to,
and we interpret that machine as intelligent if we want to.

> > Any perceived intelligence is
> > purely the reflection of an intelligent observer's own semantic
> > awareness.
>
> What if there were two intelligent machines that interacted with each other,
> and you asked machine A if machine B was intelligent?

Neither one can perceive anything. They will respond whatever way they
are programmed to respond. They call up whatever recording-reflection
matches their a-signifying mechanical algorithm.

> > What do you consider behavior? I don't think that inorganic machines
> > will be able to care about anything, to imagine anything, to feel, to
> > experience anything, etc. Machines don't understand speech, they just
> > imitate the understanding of speech.

> To imitate understanding, there must be some element that exists somewhere
> which actually is understanding.

If I imitate a grapefruit must there be some element that exists
somewhere which is actually a grapefruit?

>  Would you not say that deep blue
> understands the rules of chess?  If it does not understand the rules of
> chess, why does it always make valid moves?  Do Google's self-driving cars
> understand the rules of the road?  (Red means stop, Green means go)

They understand nothing. They just correlate one meaningless input
with another meaningless output based upon meaningless logic which
they have no awareness of. Google's self-driving cars have no more
knowledge of the rules of the road than we have a head count of
bacteria in our colon.
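
To put the same point in code: the whole 'rule of the road' can be a lookup
table. This is a made-up sketch, obviously nothing to do with Google's actual
control software, but it is the kind of correlation I mean:

    # Hypothetical stoplight handling reduced to a lookup table. The mapping
    # means nothing to the machine; the meaning is entirely ours.
    LIGHT_TO_ACTION = {
        "red": "stop",
        "yellow": "slow",
        "green": "go",
    }

    def respond(light_color):
        # No notion of danger, law, or intersection: key in, value out,
        # with a default that only *we* recognize as the cautious choice.
        return LIGHT_TO_ACTION.get(light_color, "stop")

    print(respond("red"))    # stop
    print(respond("green"))  # go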

> > They don't beat people in chess,
> > they become chess..
>
> I am not sure what this means.

Just that. A game is a machine. You explicate the entire game into
silicon and you have Deep Blue. It has every possible game in a
database and it just dynamically matches the current game with the one
where the player's set of pieces loses first. It has no strategy, it
just pulls cards from the deck in the order that has been specified by
the static set of all permutations to arrive at a specific meaningless
but mechanically identifiable pattern.
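
Whether it's a literal database of games or some other mechanical search
under the hood, the move falls out of a rule either way. A toy lookup
'player', just to illustrate the point (obviously not IBM's actual code):

    # Toy position-lookup "player": the current line of play maps to a canned
    # reply, with a fallback when the table runs out. No plan, no opponent model.
    # (Hypothetical sketch; not how Deep Blue was actually implemented.)
    OPENING_BOOK = {
        "":          "e2e4",
        "e2e4 e7e5": "g1f3",
        "e2e4 c7c5": "g1f3",
    }

    def next_move(moves_so_far):
        key = " ".join(moves_so_far)
        return OPENING_BOOK.get(key, "resign")

    print(next_move([]))               # e2e4
    print(next_move(["e2e4", "e7e5"])) # g1f3
    print(next_move(["d2d4"]))         # resign (table exhausted)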

> > they have no idea what a person is.
>
> A chess-playing machine doesn't have to know what a person is, but if a
> machine can be made to understand the rules of chess, I don't see why it
> could not be made to understand what a person is.

It can't be made to understand anything and still be purely
mechanical.

> Perhaps people's brains are doing the same exact thing.  People aren't
> consciously aware of the processes that underlie their thinking, so it is
> perfectly possible that Jeopardy players use some of the same methods as
> Watson.

Oh, they absolutely do use some of the same methods. Our brain is
mechanical as well as experiential. We can emulate the mechanical side
with anything. We can't emulate experience at all.

> How does the first person experience make its way to having third-person
> observable effects (which were not detectable before)?  This sounds like a
> form of interactionist dualism. (A ghost in the machine)

In the fullness of time, the essential nature is revealed. There was a
case of a man who had a brain injury through the limbic region. One of
the results was that he was unable to make simple decisions like
deciding which kind of cereal to buy. He had no capacity to make
decisions based upon whim or feeling, everything had to be evaluated
through the cortex. To try to evaluate the difference between hundreds
of cereals through cerebral logic alone is a catastrophe. You need a
point of view, a personal opinion. Watson doesn't have that.

> A machine can fill the full range of possible observations as seen
> through a TV screen.  So there is nothing a person could show through that
> screen in an interview that an intelligent program could not replicate.

It can fill the range of audio visual patterns, but it can't fill the
range of possibilities for the sense those patterns make over time. If
you are watching two parallel videos of the same person in India, and
there is an earthquake in the studio, the video feed in which you see
an earthquake happening is real.

> > If a person cares about something, they do things differently than if
> > they don't. Sooner or later, the machine forgets to stop playing
> > Monopoly and the house burns down. A person doesn't do that as often.
> > It's not about ways a human can move that a machine cannot, it's about
> > being able to relate to having to sneeze or worrying about money.
>
> So a process needs to worry about money in order to act like it worries
> about money?

What does a process act like when it worries about money? What is the
mathematical formula to make a styrofoam container worry?

> It is okay if you hold this position, it seems you reject the possibility of
> Strong AI.  There could be a real-life version of Data (if you watch Star
> Trek) according to this idea.

Strong AI is fine, as long as the A is more than semantic logic. It
needs biochemical analogs.

> > Movement is not intelligence or consciousness. As with frog legs and
> > headless chickens, movement does not require sentience.
>
> I was not suggesting that it was, only that human intelligence's externally
> visible behavior only manifests as movement.  Therefore for a machine to
> give the appearance of human intelligence, it need only control the movement
> of a human body appropriately.

Human intelligence doesn't manifest externally. The brick wall doesn't
know we're intelligent, and we made the thing. Artificial Intelligence
only needs to fool us, and even then it's probably not going to fool a
baby.

> > Patterns in carbon are not more meaningful, it's just that carbon gets
> > together with oxygen, hydrogen, and nitrogen to make molecules that
> > team up as a cell... the cell can experience patterns of a
> > qualitatively more significant nature than can the molecule or atom.
>
> I think that bits in memory and modules, can get together and make
> subroutines and team up as software modules, and the software modules can
> experience patterns of a qualitatively more significant nature than a simpler
> function or subroutine.

If that were the case, then software would be evolving by itself. You
would have to worry about Google self-driving cars teaming up to kill
us. We don't though. Machines don't team up by themselves. Cells do.
Have you seen the docus on conjoined twins? 
http://www.youtube.com/watch?v=42A0GdmQ02M
Do you think if you welded two computers' motherboards together by
accident that two copies of Windows would be able to successfully
respond to input from either computer? What kind of software would be
able to adapt that way to an unforeseeable ontological challenge? Do
you think that cells could make that work without having some low
level need to survive of their own?

> > If silicon could be made to group into a self-replicating molecule
> > that autopoiesized into a 'sill' then it too would be able to have
> > qualitatively more significant experiences, however not necessarily
> > the same as a carbon based cell.
>
> You say not necessarily the same, does this admit the possibility that they
> could be the same?

Oh absolutely.

> He often says things like simulated fire won't burn, simulated digestion
> won't digest, and that a computer program can't understand anything.  He
> also believes in "biological naturalism" which is the theory that only
> biological material can be conscious.

Oh he does? I agree with the first three. To say only biological
material can be conscious is to imply that consciousness is a high
level function though. I think it's a fundamental identity-symmetry of
all matter, it just gets expressed to a more familiar degree in
organisms which are most similar to us. That's how awareness works -
it identifies with what it is similar to (or opposite - which is
another kind of similarity; dissimilarity).

> > I do reject the idea that machines have beliefs. A belief implies that
> > one is choosing to care about something.
>
> Can a machine at least have the belief that one number is larger than
> another?  Or that a location in memory is a "1" rather than a "0"?

No, it doesn't know what a 1 or 0 is. A machine executed in
microelectronics likely has the simplest possible sensorimotive
experience, barring some physical environmental effect like heat or
corrosion. It has a single motive: 'Go' and 'No Go'. It has a single
sense: this, this, this, and this are going; that, that, and that are not
going. It's the same sensorimotive experience as that of a simple molecule.
Valence open. Valence filled.
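
At the microelectronic level that really is all there is to read off. A logic
gate in Python terms, if you like - the entire 'sense' of the element is which
inputs are going and which are not (a toy sketch, not any actual chip design):

    # A NAND gate as pure go/no-go: inputs either going (True) or not (False),
    # output either going or not. Nothing here knows it is "a 1" or "a 0".
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    print(nand(True, True))   # False - no go
    print(nand(True, False))  # True  - go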

> Perhaps none of the machines you are familiar with seem to care about
> anything, but what about some massively complex super computer, programmed
> by humans 10,000 years from now, with much greater complexity and power than
> the human brain.  You believe that no amount of programming, however
> sophisticated, could ever yield a program that cared about something?  Even
> rats care for their young.  According to your theory, no computer, however
> sophisticated, could ever replicate the behaviors of a rat?

It's a category error. No amount of sophisticated programming is going
to simulate fire either. Caring is a participatory experience. It
cannot be emulated. We could make a program that acts like we think
something should act when it cares, and that should be good enough for
our purposes. We only want them as slaves anyhow. We've got enough
baby rats as it is.

> There are higher-level processes at work in computers.  Do you have any
> experience with computer programming?

Enough experience, yeah. BASIC, some assembly language. I'm a network
analyst. I've been playing with computers for more than 30 years.

> > > Do you think machines can adapt?
> > Living organisms are machines too, and they do adapt. What experiences
> > the adaptation however is not a machine.
>
> Just above you said that no machine could ever care for anything.  Or do you
> mean no machine that does not fall into your definition of life?

I mean living organisms are machines as well as being anti-machines.
The reason life exists is to develop anti-mechanical qualities, same
as neurology. Otherwise there is no reason to evolve beyond simple
molecules.

> > > Do you think machines can have information?
> > Not by themselves. Information is in the eye of the beholder. It has
> > no independent existence.
>
> How do you define information?

I try not to define words. They define themselves through their
context and the intention with which they are used.

> Do autopilots not need to have a model of lift, thrust, gravity, and drag in
> order to fly?

Does an abacus need to have a model of chickens and cabbages to let us
keep track of them?

> > > Do you think machines can understand?
> > No
>
> How are you defining "understand"?

Actually I have a whole thing about 'understanding'. It comes from
the Proto-Indo-European for 'within' - *nter/inter/enter, like entero -
guts. The etymology of stand has to do with being settled and still.
So it's a settling within. A gut feeling that agrees with you. An
experience of encountering sense which augments your own aggregated
interior sense.

> > I don't necessarily believe that neutrinos exist.
>
> They have been observed.  They interact with heavy water but only very
> rarely.

Heavy water has been observed doing something that we associate with
particles. We infer that they are independent particles, but I think
that they are events of whatever medium is being observed rather than
a separate entity that exists on its own. This explains all of the
apparent weirdness of QM. Atomic moods. Quorum sensing. Perceptual
relativity. We are observing our own observation through a medium
which limits the reflection of that observation to a primitive
mathematical language.

> > They aren't identical in any way. It's your intellect that equates
> > them. What is important to you may not meet the minimum requirements
> > of what is important to building a mind.
>
> Well, consider that each neuron is a vertex in that graph; it has the same
> connections to other entities which also have the same connections.  From
> the perspective of every vertex, the graph is the same (in terms of all the
> edges and where they lead).

What graph?

> Did some alien intelligence or God have to look down on the first life forms
> that evolved on Earth for them to be alive?

Nah. It would have been too boring. A billion years of blue green
algae?

> A computer can virtualize more powerful hardware.  Turing universality says
> nothing about how many ticks the clock on your wall will pass during that
> emulation.

In this case power is exactly a measure of how few ticks on the clock
will pass. Are you saying that a 2GHz processor can run a program that
emulates the same processor at 16GHz virtually? Sounds good. The next
step would be to have an '89 Ford Fiesta engine that emulates a
Lamborghini engine in one of its cylinders.
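
The arithmetic is what sinks it: every virtual tick costs some number of real
ticks, so the emulated processor is always slower than the host, never faster.
Back-of-the-envelope, with a made-up overhead factor:

    # Back-of-the-envelope emulation arithmetic (the overhead factor is assumed).
    host_hz = 2e9      # the real 2 GHz processor
    overhead = 20      # host instructions spent per emulated instruction (assumption)

    emulated_hz = host_hz / overhead
    print(f"Effective emulated clock: {emulated_hz / 1e9:.2f} GHz")  # 0.10 GHz

    # To emulate one second of a 16 GHz target you'd wait:
    target_hz = 16e9
    print(f"{target_hz * overhead / host_hz:.0f} real seconds per emulated second")  # 160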

> > First
> > hand experience either occurs through experience or it does not occur
> > at all. A machine is not capable of experience, by definition.
>
> By definition?  Just above you said lifeforms are machines.

The part of life forms that are machines is not the part that
experiences anything. Machines are not capable of experience unless
they are executed in a physical form, in which case the experience
would be defined by the material upon which it is executed. You put a
rat in a maze, it has one kind of experience; you put a human in a
maze, it has another; an atom, it has another - but the maze itself,
even if it has merry-go-rounds and slides and tunnels on paper, has no
experience at all. There is no free floating experience of mazeness
(except within our psyche).

> > What's your theory of qualia?
>
> It is probably not something you would accept, since it is emergent, and
> related to information and processing.  Basically, you can imagine the
> simplest possible qualia, knowledge/awareness of one of two different
> states.  Imagine some single celled organism which feeds during the day and
> its predators sleep.  It has some primitive binary awareness of whether it
> is light out or warm out.  When it is warm out, other centers of its
> primitive mind are activated (the ones which associate light with safety,
> well being, satiation, etc) are all linked to the part of the animal's
> nervous system and activated by the knowledge that it is light out.
> Likewise, when it is dark out, the animal's fear centers activate which
> alter its behaviors (it might become more jumpy, paranoid, etc).  The sense
> of fear is not some primitive fundamental phenomenon, but the combination of
> various alterations in the state of the mind.  Likewise, yellow can be
> understood in terms of the mind that manifests it.  Rather than being
> simple, some qualia may be very complex, with the upper bound being the
> complexity of the brain itself.  You say you are unable to explain the
> qualia of yellow, but then your linguistic portion of the brain does not
> have access to all the information that the visual portion of your brain
> received and processed.  To understand how qualia are simply information,
> lightly touch the back of your hand with your finger, and spend several
> minutes concentrating on what that feeling is, what it feels like.  You will
> find it is only information (your awareness that you are being touched).
> Pain and other qualia are more complex; they involve more connected areas of
> the brain which affect a large number of different modules in the brain.

I'm sympathetic to this as an understanding of the role that qualia
plays in behavior and evolution, but it doesn't address the main
problem of qualia, which is why it would need to exist at all. Why
have experience when you can just have organisms responding to their
environment as a machine does? A machine certainly doesn't need any
sensory spectaculars to accomplish information transfer.
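
The organism you describe is easy to write down as a state machine that does
all of the 'fear' behavior with nothing it is like to be it. A toy sketch of
the theory as quoted (not anyone's actual model):

    # Toy version of the described light/dark organism: one binary input flips
    # an internal flag that modulates behavior. Nothing here needs to feel anything.
    class Critter:
        def __init__(self):
            self.jumpy = False

        def sense(self, is_light):
            # The "fear center" is just a boolean toggled by the input.
            self.jumpy = not is_light

        def act(self):
            return "hide" if self.jumpy else "feed"

    c = Critter()
    c.sense(is_light=True)
    print(c.act())   # feed
    c.sense(is_light=False)
    print(c.act())   # hide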

> > What is the experience of any 10 of those particles?
>
> The particles don't experience, the system (the mind that is formed by the
> relations, organization, and patterns of the mind) is what experiences.

So the particles have no experience whatsoever. Ok.

> > How many
> > particles does it take before an experience takes place?
>
> As many as it takes to construct a rudimentary mind (assuming those
> particles are formed into a mind).

That's what I'm asking: about how many particles that don't
experience anything does it take to construct the most rudimentary
mind possible? A million? A hundred billion? Do the zombie replacement
in reverse: how many can you subtract before a functioning mind
suddenly becomes a mindless function of unrelated particles? What
happens to the qualia - do they fade, or go absent? Is there
thermodynamic energy lost?

> > Will that
> > experience be the same whether those particles are atoms or ping pong
> > balls or digital vectors?
>
> So long as they have an "isomorphic" identity.  If the ping pong balls
> fulfill the same roles and relations as part of the overall structure of
> the system, then the same mind and experiences will result.

Isomorphic. That's what I'm saying. Silicon is not isomorphic to
organic. That's why no living thing can survive on silicon. It doesn't
fulfill the same roles, so it doesn't wind up creating cells that
make organisms that have nervous systems that experience mammalian
consciousness.

> > Why can't we just update the experience
> > directly?
>
> As far as I am concerned, you are updating it directly.  The experience is a
> complex pattern, there is no shortcut to get to an experience.

If we were updating it directly there would be no qualia.

> That is a great question, which I think shows a flaw in physicalism.  The
> mind does not exist (as a physical structure) at any one location.  It is an
> informational pattern (which extends both through the dimensions of time and
> space).

To me that's a metaphysical appeal. Time and space contain no pattern.
They are the absence of pattern.

> > Why can't we just see it with
> > the naked eye?
>
> Here you sound like Leibniz, who said:
>
> "One is obliged to admit that perception and what depends upon it is
> inexplicable on mechanical principles, that is, by figures and motions. In
> imagining that there is a machine whose construction would enable it to
> think, to sense, and to have perception, one could conceive it enlarged
> while retaining the same proportions, so that one could enter into it, just
> like into a windmill. Supposing this, one should, when visiting within it,
> find only parts pushing one another, and never anything by which to explain
> a perception. Thus it is in the simple substance, and not in the composite
> or in the machine, that one must look for perception."
>
> He was saying that if we blew up a mechanical mind, so we could walk around
> inside it and look around, we wouldn't find the mind anywhere, there would
> be nothing to account for it.  I didn't have the chance to ask Leibniz this,
> so I will ask you: What would you expect to see?  What do you think a mind
> would look like?

The mind looks like whatever visual art or artifacts it produces. When
you read these words, you read my mind. It's not an object, it's a
subject. I tried to capture it here: 
http://27.media.tumblr.com/tumblr_lm7sjgcPBE1qeenqko1_500.jpg
What is black is subjective.

> It is not as though one would find some glowing orb or magic ghost like
> particle.  Consider that if we magnified a hard drive such that we could
> walk around inside it, we would not see the photographs and movies, nor the
> documents and music files that are represented in it.

Exactly. Only our minds can see the movies and hear the music. There
is nothing on a hard drive for a cockroach or platypus.
Representation, information... these are phantoms of a made up world.
They are mythology. Technically it's an anti-mythology because it's
far occidental rather than oriental but it's still well out of the
realm of concrete existential processes. The universe is made of
sense, and information is a slim category of sense.
