On Sep 16, 11:11 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Fri, Sep 16, 2011 at 9:57 AM, Craig Weinberg <whatsons...@gmail.com>wrote:

>
> > The perceptual frame of semiconductors is so primitive,
>
> I don't say semiconductors are conscious, I say programs are conscious.

That's where I think you are failing to examine the presumption that
programs have any objective existence. A program is one
(sensorimotive) thing when it is conceived in programmers' minds,
another when it is executed as the behavior of semiconductors. It's no
different in principle than telling a dog 'Give me a paw' and
considering that the command is itself conscious.

> There is no upper bound to how complex and rich software can be.  A program
> could have all the depth and richness of a human mind, or 1,000,000 times
> more.

Complexity is not the issue. Emulating every grain of sand on Earth is
complex, but it carries relatively little significance. No matter how
complex a robot is, it's never going to discover color if it can't
see.

>
> You might not realize it, but every time the discussion shifts to the
> consciousness of computers you begin talking of the chemical elements, and
> the low level pieces like transistors.  You don't, however, exhibit this
> same bias when talking about brains and biological machines.  Do you not see
> this double standard?

Of course I see the difference, but it's not an oversight. When we
talk about our own consciousness we have neural correlates to use.
When we talk about computers, we don't have any of their native
sensorimotive correlates to work with (because they're private), so we
can only infer and deduce what it might be like. If we had incidents
of computers spontaneously bursting into song, or reproducing
themselves, or anything that exhibits any native sentient tendencies we
can relate to, then we would not be having this discussion. It would
be obvious that computers were wild entities, maybe trainable, maybe not.
What we see instead is that they are not 'wildable'. They don't need
to be trained, because if you understand how their circuits work, you
can simply instruct them to do something over and over and they will
do it forever. Do you not see this difference as worthy of
consideration? To me it's obviously a defining part of what makes the
difference between what we consider living, feeling, thinking, or
human, and what is not.

>
> > ie, the
> > mechanisms they rely upon are maximally probable relations rather than
> > harnessing improbability as living organisms do, that there is an
> > infinitesimally low chance of the thing ever noticing or making sense
> > of what we have programmed it to do.  Just as we can't see ultraviolet
> > light but it still gives us a sunburn, the instruction set of a
> > computer or robot falls on deaf ears but (slavishly) willing limbs.
>
> Why no talk of your slavishly willing biochemistry?

Because our sensorimotive qualia arise natively within our own
biochemical context. You can't be a slave to yourself. Silicon isn't
born with a cell phone etched into it. It can't make a cell phone by
itself. Do you see the difference?

>
>
>
> > > > It's not a binary distinction.
>
> > > I agree.
>
> > > > Even with people,
> > > > those that remind us more of ourselves are deemed to be more
> > > > conscious.
>
> > > I don't think it is a matter of being more or less conscious, but
> > > rather a question of what one is conscious of.
>
> > Possibly, but  who would be a counter example of someone that we think
> > is more conscious than someone else but which is less like us?
>
> Perhaps the whales are conscious of more than us.

I agree, but the fact that it's a 'perhaps' can probably be attributed
to exactly what I'm talking about. They don't build cities and drive
cars, so part of us wonders if they are really any more intelligent
than any other non-human creature. Chimpanzees seem more
'intelligent' than whales, but I think that could easily be because
they look more like us and behave more like us.

>
> > Here I
> > think we can see how 'consciousness' is like 'quality of being alive'
> > so that we can always find some excuse for our prejudices. We suspect
> > that unfamiliar lifestyles or cultures, even when seemingly 'better'
> > than our own ideal lifestyle have some hidden flaw, some missing
> > elements that do not make as much sense as our own, and that makes
> > them 'other'. There could be exceptions I suppose, I just haven't
> > really thought of any. The less I can see of myself in someone else,
> > the less I can relate to them and their reality as 'real'.
>
> > > >> what is an appropriate substitution level to
> > > >> bet on before uploading one's brain, can computers be conscious in
> > > >> the same
> > > >> way as biological brains, etc.
>
> > > > That can only be ascertained through experiment,
>
> > > Your understanding of consciousness is far from complete if we still
> > > need to conduct experiments to answer these questions.  Hopefully you
> > > see now there is more to consciousness than its simple appearance.
>
> > No, just the opposite. My understanding of consciousness is
> > specifically and unequivocally that subjective qualia can only ever be
> > experienced first hand. All non experimental hypotheses of
> > consciousness are doomed to failure.
>
> Hypotheses usually precede experiments.

That's a stereotype of the scientific method. In practice, all steps
of the scientific method are dynamically instantiated in different
orders. You work with what you've got. If you have an experiment, then
you make a hypothesis. If you have a hypothesis, you might gather
information or state a new problem. But in any case, sure, I'd love to
do some experiments. Any suggestions?

>
> > > I am not attempting to disqualify first person experience, only
> > > understand it.
>
> > By understand it though, you mean understand it in third person terms.
> > Isolate the mechanism. That's the opposite of how it works.
>
> So according to you understanding it (in the traditional sense of the word)
> is impossible?

No it's very possible, but you have to understand it through
experience first and analysis second. You have to accept that color is
what color is, and it means what it means. You have to sequester your
knowledge of objective mechanics - not forget them, but partition them
so as not to contaminate the naive experience.

Pretend that the universe is coming to an end tomorrow and an
omnipotent alien has asked you to tell her everything she needs to
know to recreate the universe in the finest detail, leaving nothing
out. She needs to know how to build an ontology of feeling, being,
seeing, living, experiencing, etc. Not the forms, but the contents.
Assume nothing. You have to make up how matter works, how energy
works, how arithmetic works - the reasoning behind reason and the
sense of sense - using nothing but your own direct experience and no
secondhand knowledge at all. Only what you know to be true personally,
or suspect might be true, is allowed.

>
>
>
> > > > My hope is that there is a threshold where it is possible that someone
> > > > will reach a supersaturated tipping point and crystallize an
> > > > understanding of what I'm talking about, like those 'When You See
> > > > it..." memes (http://static.black-frames.net/images/when-you-see-
> > > > it_____________.jpg). Once you realize that what we perceive is both
> > > > fact and fiction
>
> > > Do you think something can be both true and false?
>
> > Of course. True or false:
>
> > Rain is beneficial.
> > It's ok to eat peanuts.
> > Zero is a number.
> > Y is a vowel.
> > I am old.
> > I am freaking out.
> > I am Iron Man.
>
> This points to a lack of clarity.  I think if you tried to be specific
> enough in your arguments that you could never say X is both Y and not Y,
> your ideas would be a lot easier to understand or refute.

If the cosmos were like that, then I would of course do that. My goal
is not to have an understandable idea about the cosmos, it is to
understand the cosmos in a single idea. In order to do that (see the
previous thought experiment for a suggestion of how to approach this),
we are compelled to realize that not only is the universe both clear
and unclear, but there are useful things which can be derived from
that symmetry. Clarity has certain valuable ontological qualities, but
they are meaningless, and literally impossible, without the
contradistinction of figurative, multivalent qualities. If you are
going to talk about subjectivity, I think you have to expect that it's
going to be (at least in its most extreme and defining modalities) as
fuzzy and fictional as objectivity is literal and factual. How could
it be otherwise? How could fiction and imagination not be part of the
cosmos?

>
>
>
> > True and false are only appropriate in artificially constrained
> > literal contexts (which are important and powerful foundations of
> > science and reason, of course) but the real universe is also made of
> > 'maybe', 'it depends', and 'nobody knows'.
>
> > > > and that both fact and fiction are themselves a
> > > > matter of perception then it gives you the freedom to appreciate the
> > > > cosmos as it is, in all its true demented genius, rather than as a
> > > > theoretical construct to support the existence of fact at the expense
> > > > of fiction (or vice versa).
>
> > > >>> It is also a lower level phenomenon of
> > > >>> anthropology, zoology, ecology, geology, and astrophysics-cosmology.
> > > >>> Some psychological functions can be realized by different physical
> > > >>> media, some physical functions, like producing epinephrine, can be
> > > >>> realized by different psychological means (a movie or a book,
> > > >>> memory,
> > > >>> conversation, etc).
>
> > > >>>>>>> How do you get 'pieces' to 'interact' and obey
> > > >>>>>>> 'rules'? The rules have to make sense in the particular context,
> > > >>> and
> > > >>>>>>> there has to be a motive for that interaction, ie sensorimotive
> > > >>>>>>> experience.
>
> > > >>>>>> If there were no hard rules, life could not evolve.
>
> > > >>>>> 'Hard rules' can only arise if the phenomena they govern have a
> > > >>>>> way of
> > > >>>>> being concretely influenced by them. Otherwise they are
> > > >>>>> metaphysical
> > > >>>>> abstractions. The idea of 'rules' or 'information' is a human
> > > >>>>> intellectual analysis. The actual thing that it is would be
> > > >>>>> sensorimotive experience.
>
> > > >>>> Are you advocating subjective idealism or phenomenalism now?
>
> > > >>> I'm advocating a sense monism encapsulation of existential-essential
> > > >>> pseudo-dualism.
>
> > > >> Could you please restate this using words with a conventional
> > > >> meaning?
>
> > > > I'm advocating a universe based entirely on sense, sense being the
> > > > unresolvable tension between, yet unity among, subjective experiences
> > > > and objective existence.
>
> > > This is a Buddhist idea.
>
> > > It is also quite similar to Bruno's explanation of the appearence of
> > > the physical world.
>
> > Yes but to anchor it in sense specifically, so that physics and
> > consciousness can be reconciled through the relation between interior
> > pattern recognition and exterior substance interaction I think takes
> > the idea in a new direction.
>
> So is there an external reality or only minds?

Both. That's what's great about the sense based model. It validates
both our naive intuitions and our scientific measurements without one
cancelling the other out. Our mind's reality is the native reality of
what we are. It is the best version of what is relevant to us that we
can access - which is as external as anything ever could be. We can
extend our native reality and acquire more relevant bits of realities
which were formerly external to our minds, so that we share more of
our sense and motive with *the* world and it shares more of its many
layers of sense and motive with *our* world.

We only have our minds, but with them we can see through our bodies into
the world they live in, and we can feel through our interior the
subjective worlds our psyche can experience. We are like a tree, with
roots that suck water and nutrients up from underground and air-roots
that suck solar energy and CO2 from the air. We grow and penetrate
deeper into our private mysterious world and higher into the knowable
public world.


>
>
>
> > > >> No, the mind which supervenes on the computation of the milk
> > > >> bottles will
> > > >> experience red.
>
> > > > A mind arises from a collection of milk bottles?
>
> > > If they perform the right computation.
>
> > Have you considered that might not be true?
>
> The alternatives to mechanism are interactionist dualism (a non-physical
> soul fiddles with physics), immaterial idealism (only minds exist),
> solipsism (only I exist), and other such ideas which "give up" on any hope
> of understanding or studying consciousness.

You left out my alternative: involuted monism (a sense singularity
with an essential orientation and an existential counter-orientation).

> Therefore the only way to
> progress in our understanding is to study consciousness through the lens of
> mechanism.

Studying consciousness through the lens of mechanism is great. I
support it completely. I just don't think we should commit to it
exclusively, especially if there is a better interpretation staring us
in the face.

>
> > If you are being evaluated
> > in a psychiatric hospital, do you think it would be a good idea to
> > make that assertion?
>
> These ideas are not intuitive; like the idea that the earth moves around the
> sun, they take some thought and background to understand.  Others have been
> persecuted for attempting to express simple but non-intuitive ideas in the
> past.

Definitely, and I'm not saying that makes them wrong. I'm just saying
you should realize that it's a case of 'extraordinary claims require
extraordinary evidence'. My claims are ordinary, just unfamiliar. I'm
saying that the world is exactly the way that it appears and exactly
the way science contradicts appearances. They are both true and both
false depending on their context. Your claim is super crazy invisible
panpsychism.

>
>
>
> > > > Automatically? Does
> > > > it think about anything other than the one momentary experience of red
> > > > that occurs somehow from bottles knocking each other down in some
> > > > particular configuration?
>
> > > It depends on what the computation is.
>
> > > >>> Will I see red if I look at the milk
> > > >>> bottles?
>
> > > >> No.
>
> > > >>> How can you seriously entertain that as a reality?
>
> > > >> You won't see red when you look at a neuron involved in the
> > > >> processing of
> > > >> that sensory data, nor will the individual neurons which serve as
> > > >> the basis
> > > >> for that processing know the experience of red.
>
> > > > I agree. So what is it exactly that does know the experience of red?
>
> > > The mind.
>
> > Which is where?
>
> It has no physical location.  In what spot does your point of view exist?
> Where is the feeling when you rub your fingers together?  It is not in your
> fingers.

The feeling and point of view are on the inside of my nervous system
(and brain in particular). That is their physical location. It's not
hovering behind the refrigerator.

>
> > In the milk bottles?
>
> I wouldn't say so.
>
> > Does it radiate physically around
> > them like a field?
>
> No.  However, you might be able to draw a 4-D shape around some region of
> spacetime in which all the computations that constitute your present moment
> of awareness occur.
>
> > Where does it get the red from?
>
> From the relations held between the bottles.

What are relations made of, and how is red precipitated from them?

>
>
>
> > > >> Entertaining the idea of
> > > >> milk bottles having a private experience is no more a leap than
> > > >> entertaining
> > > >> the idea that the cells in your brain can do the same.
>
> > > > On one level that's true, since we have no direct access to what other
> > > > things experience, but it doesn't mean that it's very likely that the
> > > > experience it has could ever be comparable to that of our brain cells.
>
> > > I thought you believed that we are a higher level process than our
> > > brain cells.  And that our experience is not that of our brain cells.
>
> > The higher level process of our brain cells is the process of the
> > brain as a whole. We are what is experienced through the whole brain
> > processes - composed *not* of the lower level processes of the brain
> > cells, but of the lower level *experiences* of the brain cell
> > processes. Big difference. Huge. Because what brain cells do is
> > constrained by literal existence - it has to be discrete. matter. in a
> > public. space.  What can be experienced *through* those brain cells is
> > a totally different (opposite) ontology. It's a continuous. energy. in
> > a private. time. It's sensorimotive, not electromagnetic, so it does
> > things that literally have to be seen to be believed, have to be felt
> > to be understood. That's not metaphor, that is its actual architecture.
>
> > The matter and space which hosts this energy-time works different
> > kinds of wonders - computations, mass productions, infinite methodical
> > patience, etc. The sensorimotive has a different skill set. It does
> > imagination, feeling, storytelling. It presents to us a kind of
> > profound omnipotence within the context of our own fictive subjective
> > process that is almost completely cancelled out by the corresponding
> > objective process - almost but not completely. The underlap is
> > extruded through time as significance accumulation and across space as
> > entropy. It's really pretty simple I think, you just have to really
> > grasp that the interior of everything is almost exactly the opposite
> > of the way it seems, except for a degree of overlap which is...sense.
> > Reality. Mundane truth.
>
> > > > If it were, there would be no reason to have brain cells at all.
>
> > > We need something to perform the right computations.
>
> > Why not milk bottles or dust?
>
> It would be hard to fit a system of billions of milk bottles in our skulls.

Make the skull bigger?

>
> Regarding dust, how do you imagine such a Turing machine made out of dust to
> be designed?  Even though the pieces of dust are small, how do you think the
> parts that move, count, and measure the dust particles would work, how big would
> that be, and how quickly could it process information?

Exactly. It's an impractical material for a Turing machine. A silicon
chip is an ideal material. A gaggle of geese is an inappropriate
material. Substance matters. The universality of universal machines is
constrained by substance. Turing machines cannot fly unless they are
implemented in something that is aerodynamically viable. Likewise
there is no reason to assume that they can live and feel unless they
are implemented in something that is biologically and neurologically
viable.

>
>
>
> > > > We
> > > > could just be a giant amoeba or pile of sand and have any experience
> > > > possible - human or otherwise.
>
> > > There is no reason to expect a pile of sand will perform the right
> > > computations.
>
> > I agree. Why do you expect a pile of silicon will be able to perform
> > the right computations though, just because it's in a particular
> > shape?
>
> Because if it is a Turing machine, it could perform any computations,
> including the right ones.

I've established though that not all Turing machines are created
equal. It makes a difference whether silicon is in a chip or in fine
grains, so why would our consciousness necessarily work just as well
on a silicon chip?

>
>
>
> > > > Instead of needing eyes we could just
> > > > drill a hole in our skull. Something makes humans different from non-
> > > > humans, I think that it's related to the experiences of organisms over
> > > > time as well as the consequences of the physical conditions local to
> > > > their bodies.
>
> > > So if you stepped into a star trek style transporter would you no
> > > longer be human because the copy lost its history as an evolved
> > > organism?
>
> > It may not be so simple that who we are can be captured literally as
> > a static configuration. We are be-ings expressed through specific
> > material configurations. There is no guarantee that the material
> > configuration will reconstruct the same being.
>
> It would be the same physically, chemically, biologically, psychologically, would
> it not?

No, I don't think it would be. It's like trying to teleport a NASCAR
race into the ocean, or a city onto the moon. Our lack of
understanding of the interiority of the thing may be hiding a physical
non-commutativity of personal identity.

>
> > You may die and what is
> > transported is your identical twin,
>
> With this definition of death, we die every night when we go to sleep and an
> imposter awakes in our bed.

Except that we wake up inside the same body. If we change bodies, we
may not wake up at all.

>
> > just like a genetically identical
> > twin - a truly different person that other people will notice is
> > different, and whose personality and decisions will immediately
> > diverge from yours as soon as teleportation is accomplished.
>
> This would suggest physics has nothing to do with personality.

No, it just suggests that physics is necessary but not sufficient to
define personality.

>
>
>
> > We may be able to be 'walked off' from one brain to another, or from
> > our brain to a computer if the computer is sufficiently similar to our
> > own, but that may take actual lived experience. We may have to literally
> > grow into the new environment over years. Immortality could be fun.
>
> I hope so.  Mechanism suggests we are all immortal.

Me too. Sense monism suggests we are all both mortal and immortal, but
if we can make more of what is mortal immortal and more of what is
immortal mortal, so much the better.


>
> > > I don't see how that question is any different than: this universe is
> > > only physics, so what is the difference between smashing a skull and
> > > smashing a coconut?
>
> > Right, that's the question. Why do we, as physical parts of the
> > physical universe 'care' more about one than the other.
>
> Partly because we have evolved to care about others, and partly because we
> can see ourself in others.

Which is what I'm saying. We see ourselves in others that appear to be
like us, and coconuts are not like us. It turns out that coconuts
probably are not very much like us as far as living organisms go.
Computers are also not very much like us, as they are not even living,
but just as a coconut can be carved into a face, a computer can be
articulated into a pseudo-person.

>
> > What
> > physically cares and why does it seem impossible in inanimate objects
> > but vitally important to living things? If there is no physical
> > difference between living beings and inorganic phenomena, why and how
> > could it ever seem like there was?
>
> People used to think organic matter was made of a different material than
> inorganic matter, because inorganic matter was never seen to move on its
> own, to reproduce, and it cooked rather than melted.  Today we know better:
> the only difference between organic matter and inorganic matter is
> organization.  Fundamentally they are all the same subatomic particles, they
> are just arranged differently.

It's not only the arrangement, it's the ability of the particles to
produce different characteristics in different combinations. Ping pong
balls don't do that.
>
> > > > What would that be though? What is similar to red but not a color?
>
> > > The experience of red has very little to do with photons of a certain
> > > frequency.
>
> > I agree. The experience of red has very little to do with anything
> > except the ability to experience red. That's why I say it is not a
> > computation. The experience is conducted through and modulated by
> > computation, but those computations do not automatically produce the
> > experience of red. They have to occur in something that can possibly
> > see red to begin with (i.e. not a collection of milk bottles or a
> > silicon chip)
>
> So tell me: Why can the quarks and leptons in your head support the
> experience of red, but the quarks and leptons in a computer cannot?

The quarks and leptons do not support the experience of red; they
support the experience of atoms, which support molecules, which
support cells, some of which support the experience of red (others
support the experience of salty, others support the experience of
emotions as thoughts or words).

>  Do you
> think the quarks and leptons in the head have special properties or
> abilities not found in quarks and leptons elsewhere?

No.


>
>
>
> > > >>>>> The suggestion of a mind is purely imaginary, based upon
> > > >>>>> a particular interpretation of scientific observations.
>
> > > >>>> When we build minds out of computers it will be hard to argue
> > > >>>> that that
> > > >>>> interpretation was correct.
>
> > > >>> Ah yes. Promissory Materialism. Science will provide. I'm confident
> > > >>> that the horizon on AGI will continue to recede indefinitely like a
> > > >>> mirage, as it has thus far. I could be wrong, but there is no reason
> > > >>> to think so at this point.
>
> > > >> If you told any AI researcher in the 70s of the accomplishments
> > > >> from the
> > > >> links I provided they would break out the champagne bottles.  The
> > > >> horizon is
> > > >> not receding, rather you are in the slowly warming pot not noticing
> > > >> it is
> > > >> about to boil.
>
> > > > I do think there is a lot of great science and technology coming out
> > > > of it, but I think we are no closer to true artificial general
> > > > intelligence than we were in 1975.
>
> > > We are creeping up the evolutionary tree.  We are about at insects
> > > now, and coming up on mice.  From fruit flies to mice is the same jump
> > > from mice to cats, and from mice to cats is the same jump from cats to
> > > humans.
>
> > We are making insect puppets, not insects. I like what Brent said
> > about Big Dogs needing to find their own energy source and to produce
> > Little Dogs.
>
> It's coming:
> http://www.geekologie.com/2009/06/carnivorous_robots_eat_meat_fo.php
> http://www.robotictechnologyinc.com/index.php/EATR
> http://www.guardian.co.uk/technology/2009/jul/19/robots-research
>
> http://singularityhub.com/2009/04/09/3d-printing-and-self-replicating...

In my life there have never not been promises of exciting new
technologies, yet I can't think of any of those promised ones that
ever actually became commercially available. What did actually
happen - the internet, for example - was not predicted even when it
was right under our noses. There was a lot of speculation about
Interactive TV in 1992-93, but no hint of what was coming in 94-95. It
developed in real time, with companies scrambling to catch up with
what was happening on its own. I'm not saying it has to always be that
way, just giving you some insight into why the technological optimism
of my teens and 20s was replaced by deepening skepticism in my 30s and
40s.

>
> > That's a good criterion, although I still don't know that
> > they will feel or understand what it is to be alive if the experiences
> > of their components are not also the experiences of living organisms.
> > Instead of just looking to move up the evolutionary tree, we should
> > also focus on making a really good nanotech stem cell. That's if we
> > want to create true AGI. I don't think we want that at all. We want
> > servants. True AGI is not going to be our servant.
>
> > > > We just understand more about
> > > > emulating certain functions of intelligence. When we approach it from
> > > > a 1-p:3-p sense based model rather than a 3-p computation model, I
> > > > think we will have the real progress which has eluded us thus far.
>
> > > >>>> I think your analogy is in error.  You cannot compare the strip
> > > >>>> of metal
> > > >>> to
> > > >>>> the trillion cell organism.  The strip of metal is like a red-
> > > >>>> sensing
> > > >>> cone
> > > >>>> in your retina.  It is merely a sensor which can relay some
> > > >>>> information.
> > > >>>> How that information is interpreted then determines the experience.
>
> > > >>> Aren't you just reiterating what I wrote? "because a strip of
> > > >>> metal is
> > > >>> so different from a trillion cell living being"
>
> > > >> What I mean is that the metal strip is not the mind, and should not
> > > >> be
> > > >> equated with one.  It is more like a temperature sensitive nerve-
> > > >> ending.  A
> > > >> thermostat with the appropriate additional computational functions
> > > >> could
> > > >> feel, sense, be aware, think, be conscious, care, etc.
>
> > > > or it could just compute and report a-signifying data.
>
> > > A modern day argument against souls of those that are different from us.
>
> > Nah. There was never a reason to believe that their 'souls' were in
> > any meaningful way similar to us. They don't scream when you damage
> > them. They don't grow or change or reproduce or evolve. You really
> > think that the TV could be watching shows with you? Do envelopes read
> > the mail they hold? Seriously, this line of thinking is sophomoric
> > sophistry to me.
>
> It is your examples that are sophomoric.

For instance?

>
>
>
> > > > We know that it makes some difference, because diseases which change
> > > > the flexibility of those tubes or permittivity of those filaments make
> > > > differences in what we as a whole are capable of feeling. Why wouldn't
> > > > it? Why would a machine executed in semiconductor glass be any more
> > > > effective at reproducing the anguish of a suffering animal than a pile
> > > > of finely chopped scallions would be at running a spreadsheet
> > > > application? Why doesn't matter matter?
>
> > > Because function is what matters, not how many protons are stuck
> > > together in the pieces that make something.
>
> > No, it's much more the fact that they are protons and not neutrons
> > that make them something. Seventy nine protons make gold. What do 79
> > neutrons make? What do 79 ping pong balls make? What does the equation
> > 70+8+1 make? Nothing significant.
>
> > > >>> "There cannot be a Microsoft Windows difference without an Intel
> > > >>> chip
> > > >>> difference". To say that Windows determines what the chip does you
> > > >>> would say that Intel and AMD chips both supervene upon Windows. It
> > > >>> seems backwards at first but it sort of makes sense, sort of a
> > > >>> synonym
> > > >>> for 'rely upon'. It's still kind of an odious and pretentious way to
> > > >>> say something pretty straightforward, so I try to just say what I
> > > >>> mean
> > > >>> in simpler terms.
>
> > > >> I see, it is defined confusingly.  I can also see it interpreted as
> > > >> follows:
> > > >> The state of the Microsoft word program cannot change without a
> > > >> change in
> > > >> the state of the underlying computer hardware.  But not all changes
> > > >> in the
> > > >> computer hardware correspond to changes in the state of the program.
>
> > > > Right, I can see that interpretation too. That's why I hate reading
> > > > philosophy, haha.
>
> > > >>>>>>>>> and reduces
> > > >>>>>>>>> our cognition to an unconscious chemical reaction.
>
> > > >>>>>>>> If I say all of reality is just a thing, have I really reduced
> > > >>> it?
>
> > > >>>>>>> It depends what you mean by a 'thing'.
>
> > > >>>>>> Does it?
>
> > > >>>>> Of course. If I say that an apple is a fruit, I have not reduced
> > > >>>>> it as
> > > >>>>> much as if I say that it's matter.
>
> > > >>>> How you choose to describe it doesn't change the fact that it is an
> > > >>> apple.
>
> > > >>> I think the exact opposite. There is no such fact. It's only an
> > > >>> apple
> > > >>> to us. It's many things to many other kinds of perceivers on
> > > >>> different
> > > >>> scales. An apple is a fictional description of an intangible,
> > > >>> unknowable concordance of facts.
>
> > > >>>> Likewise, saying the brain is a certain type of chemical reaction
> > > >>>> does
> > > >>> not
> > > >>>> devalue it.  Not all chemical reactions are equivalent, nor are all
> > > >>>> arrangements of matter equivalent.  With this fact, I can say the
> > > >>>> brain
> > > >>> is a
> > > >>>> chemical reaction, or a collection of atoms.  Neither of those
> > > >>>> statements
> > > >>> is
> > > >>>> incorrect.
>
> > > >>> I don't have a problem with that. You could also say the brain is a
> > > >>> certain type of hallucination.
>
> > > >>>>>>>> Explaining something in no way reduces anything unless what you
> > > >>>>> really
> > > >>>>>>> value
> > > >>>>>>>> is the mystery.
>
> > > >>>>>>> I'm doing the explaining. You're the one saying that an
> > > >>>>>>> explanation
> > > >>> is
> > > >>>>>>> not necessary.
>
> > > >>>>>> Your explanation is that there is no explanation.
>
> > > >>>>> Not really.
>
> > > >>>> An explanation, if it doesn't make new predictions, should at
> > > >>>> least make
> > > >>> the
> > > >>>> picture more clear, providing a more intuitive understanding of the
> > > >>> facts.
>
> > > >>> I think that mine absolutely does that.
>
> > > >>>>>>>> Also, I don't think it is incorrect to call it an "unconscious
> > > >>>>> chemical
> > > >>>>>>>> reaction".  It definitely is a "conscious chemical reaction".
> > > >>> This
> > > >>>>> is
> > > >>>>>>> like
> > > >>>>>>>> calling a person a "lifeless chemical reaction".
>
> > > >>>>>>> Then you are agreeing with me. If you admit that chemical
> > > >>>>>>> reactions
> > > >>>>>>> themselves are conscious,
>
> > > >>>>>> Some reactions can be.
>
> > > >>>>>>> then you are admitting that awareness is a
> > > >>>>>>> molecular sensorimotive property and not a metaphysical illusion
> > > >>>>>>> produced by the brain.
>
> > > >>>>>> Human awareness has nothing to do with whatever molecules may be
> > > >>> feeling,
> > > >>>>> if
> > > >>>>>> they feel anything at all.
>
> > > >>>>> Then you are positing a metaphysical agent which supervenes upon
> > > >>>>> molecules to accomplish feeling. (which is maybe why you keep
> > > >>>>> accusing
> > > >>>>> me of doing that).
>
> > > >>>> Yes, the mind is a computation which does the feeling and it
> > > >>>> supervenes
> > > >>> on
> > > >>>> the brain.
>
> > > >>> Why does the computation need to do any feeling?
>
> > > >> When a process is aware of information it must have awareness.
>
> > > > I can be aware of Chinese subtitles, but I have no awareness of
> > > > Chinese.
>
> > > Okay.
>
> > > > A CD player can play a sad song for us, but that doesn't mean
> > > > that it makes the CD player sad.
>
> > > I wouldn't expect it to.
>
> > So why expect a computer to be sad just because it's acting sad for
> > us?
>
> I would expect it to be sad if it is replicating all the functions,
> behaviors, and patterns that go along with a human being sad.

Like a doll that cries salty tears is more sad than one that cries tap
water? I don't get what appearances of sadness in a simulation
designed to appear sad have to do with the thing actually feeling sad.
Why would anyone think that?

>
>
>
> > > > Every physical thing has some kind of
> > > > 'awareness' or sensorimotive content, however primitive, but
> > > > computation itself does not necessarily have its own existence. It's
> > > > just a text in the context of our awareness. A cartoon character
> > > > doesn't have any feelings.
>
> > > Do you really see no difference between a computer and a cartoon?
>
> > Sure I do, but I'm trying to show you that you don't see the
> > similarities. It's a reductio ad absurdum to expose why the principle
> > is flawed. The difference is just a degree of complexity that
> > glamorizes the computation.
>
> It is not a difference in degree of complexity.  A cartoon does not
> interpret or process information, build complex data structures, make
> comparisons or decisions, respond to inputs, change itself, etc.

Sure it does. It moves around in its world, says and does things,
references previous events. If a computer simulation can do everything
that a person can do, then a cartoon of a computer can do everything
that a computer can do. If appearances are everything, then
sufficiently detailed cartoons of computers are computers.

>
> > If you make a really great simulation of a
> > person - say that you import the human genome into The Game of Life,
> > and in the course of your Beta Testing, you end up with some human
> > carnivores eating some human herbivores. Should those carnivores be
> > prosecuted for murder and cannibalism? Should you be arrested?
>
> Should God be arrested for creating the world given everything that's
> followed?

Sure, why not?

>
>
>
> > > > It can be seen to respond to its cartoon
> > > > environment but it's not the awareness of the cartoon you are
> > > > watching, it's the awareness of the cartoonist, the producer, the
> > > > writer, the animator that you are watching.
>
> > > >>> Why have we not seen a single information processing system indicate
> > > >>> any awareness beyond that which it was designed to simulate?
>
> > > >> Watson was aware of the Jeopardy clue being asked, was it not?
>
> > > > No. Watson is just a massive array of semiconductors eating power and
> > > > crapping out zillions of hierarchically distilled results.
>
> > > Sounds a lot like a brain.
>
> > It is a lot like a brain, but it's nothing like a person. It's a model
> > of the (external, public, generic, electromagnetic) material behaviors
> > of the brain, not the (internal, private, proprietary, sensorimotive)
> > experience through the brain's energy. It's a glass brain - all form
> > and no content.
>
> Perhaps the computer would deride your coal brain.  All soot and no
> consistency.

That might be possible in its own way eventually. Although I think
the idea of being carbon based is a bit misleading. We're more water,
sugar, and protein based in our own native frame of reference. Carbon
itself isn't really 'alive'; it's necessary but not sufficient for
life.

>
>
>
> > > > It's an
> > > > intelliformed organization, not an intelligent organism. It doesn't
> > > > care if it's right or wrong or how well it understands the clue,
>
> > > How well it understood the clue determined whether it attempted an answer.
>
> > That's the consequence being mistaken for the cause. It doesn't
> > understand anything, it just reports the findings of its search, like
> > Google. We can project onto it through HADD/prognosia
>
> What do hyperactive attention deficit disorder or prognosia have to do with
> anthropomorphizing Watson?

Because we project agency into inanimate objects. We see ourselves in
things that appear to be like us, even if they are not.

>
> > that there is
> > some difference in how it feels when a query is more or less
> > successful - we can imagine that a question is 'hard' or 'easy' for
> > it, but that's just ventriloquism. All questions are easy for it.
>
> The easy ones it can get more quickly than those that are hard for it.
>
> > Some
> > take longer, some take forever so it is programmed to give up. Either
> > way it doesn't care, it will keep answering questions for no reason
> > like an idiot until we unplug it.
>
> > > > it's
> > > > just going to run its meaningless algorithms on the meaningless data
> > > > it's being fed.
>
> > > What makes the data in its memory meaningless but the data in your
> > > brain meaningful?
>
> > What's in my brain isn't data, it's living neurochemistry.
>
> There is data in your brain, otherwise you would not know anything.

That's a false equivalence. It's like saying there are dollars inside
your house, otherwise it wouldn't be worth any money.

>
> > What's in
> > my memory is signified residue of perceptual experience.
>
> It seems that you invent a new term and theory for each question you answer.

It's either say it in a new way or say the same thing over and over,
which would you prefer?


>
> The computations, relations and information of the mind have a physical
> basis.

No, they have a physical correlation. Just as the neurological
activity of the cortex and limbic system have a psychological
correlation.

> As the mind reaches different computational states, the brain
> reaches different physical states.  Imagine a computer program that starts
> from any integer N, and increments it, tests if it is prime, and if not,
> repeats the process.  If the last decimal digit of the first prime number it
> finds is 3, then it plays a sound on the speakers, otherwise it does not.
> The program is aware of the primality of the number, conscious of the fact
> that its least significant digit is 3, and this knowledge has nothing to do
> with whether the computer is a tape with a marker, Eniac, ping pong balls,
> milk bottles or an integrated circuit.  This awareness triggers the physical
> effect of playing a sound through the speakers.
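
Let's make that concrete. A minimal sketch of the program you describe
(in Python - my choice, since you named no language):

def is_prime(n):
    # trial division: True if n has no divisor between 2 and sqrt(n)
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def run(n):
    # increment N until a prime turns up, per the description above
    while True:
        n += 1
        if is_prime(n):
            break
    if n % 10 == 3:   # last decimal digit is 3
        print('\a')   # terminal bell, standing in for the speakers
    return n

run(10)  # finds 11; last digit 1, so silence
run(12)  # finds 13; last digit 3, so it 'plays the sound'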

The program knows nothing about N, primality, numbers, speakers, etc.
The programmer knows about those things and puts them into a code that
can be converted into a binary code which maps to the semiconductors.
The semiconductors know zero about the program; they only know how to
open and close circuits and tell the difference.

>
> > How does thinking about gambling affect the
> > amygdala?

No guesses?


>
> > Oh I see, because India is made only of atoms and not only of stones.
> > Still I would hardly say that any model of atoms yields anything other than
> > basic chemical relations. I don't think even biology is a possible
> > outcome of an atomic model.
>
> Then either biology is false or the atomic model is false.

The atomic model isn't false, it's just incomplete.

>
> > The event of life emerging redefines what
> > is possible for molecules retroactively.
>
> Life doesn't change the underlying principles of chemistry.

No, but it changes the possibilities of what those principles can
develop into.

>
> > Think of it like a hand. No
> > model of the physiology of your hand is going to have a physical
> > outcome of say, communicating sign language.
>
> Sign language comes from the brain not the hands.

It comes from the brain, but it ends up in the hands. You can't
perform signing properly without them.

>
> > You can't anticipate that
> > four fingers and an opposable thumb is going to wind up being useful
> > as a communication method to creatures whose ears don't work well.
> > It's a wild overconfidence in the power of theory I think to imagine
> > that kind of reach in a concrete model of a particular phenomenon. Most
> > every microcosmic physical phenomenon relates to atoms, but atomic
> > phenomena don't describe every other phenomenon in the universe.
>
> > > >>>>>>> Human consciousness is a specific Taj Mahal of sensorimotive-
> > > >>>>>>> electromagnetic construction. The principles of its
> > > >>>>>>> construction
> > > >>> are
> > > >>>>>>> simple, but that simplicity includes both pattern and pattern
> > > >>>>>>> recognition.
>
> > > >>>>>> Pattern and pattern recognition, information and information
> > > >>> processing.
> > > >>>>>> Are they so different?
>
> > > >>>>> Very similar yes, but to me information implies a-signifying
>
> > > >>>> Could you define "a-signifying" for me?
>
> > > >>> Meaning that the information has no meaning to the system processing
> > > >>> it. A pattern of pits on a CD is a-signifying to the listener and
> > > >>> the
> > > >>> music being played is a-signifying to the stereo. In each case,
> > > >>> fidelity of the text is retained, but the content of the text is
> > > >>> irrelevant outside of the context of its appropriate system. A TV
> > > >>> set
> > > >>> isn't watching TV, it's just scanning lines. That's information.
> > > >>> Handling data generically without any relevant experience..
>
> > > >> This is the difference between a recording (or information being
> > > >> sent over a
> > > >> wire) compared to information being processed (in which it has
> > > >> particular
> > > >> meaning by virtue of the context and difference it makes in the
> > > >> processing).
>
> > > > You're still hallucinating 'information' into wires. There's no
> > > > objective information there, to the wire, other than atomic collisions.
>
> > > Information is a physical concept, as Claude Shannon showed.  In fact
> > > I think it is a more fundamental concept than physics.
>
> > Information has physical requirements, but isn't itself physical. I
> > would say that what information refers to is *as* fundamental as
> > physics but not more. Sense is more fundamental than either physics or
> > information. Information is essentially second hand experience.
> > Sensorimotive qualities which are intentionally treated as quantities.
>
> > > > Information is just a way of saying external assistance to sense-
> > > > making. Whether the text has meaning in a particular context or not
> > > > depends on the relation between the two. A machine can't make sense of
> > > > feelings, it can only make sense of its intended measurements in
> > > > terms of objective measurements. There is no private subjectivity
> > > > going on. It's all accessible publicly.
>
> > > >> The self-driving Google car's cameras transmit raw
> > > >> input data which possesses no meaning, but the software that determines
> > > >> that it
> > > >> sees a car, or a stop sign generates meaning from this
> > > >> information.  "Stop
> > > >> sign *means* we need to decelerate"
>
> > > > No, the software doesn't know what a car or a stop sign is,
>
> > > You don't know what a stop sign is.
>
> > Sure I do. It's in my native perceptual niche. It's designed expressly
> > so that I will know what it is. My eyeballs don't know what it is. My
> > foot doesn't know what it is, but I myself know exactly what it is
> > supposed to be.
>
> Well then so does the software in the Google car.

No, human-programmed software is not native to a metal and plastic
artifact.

>
>
>
> > > > it just
> > > > presents an instruction set to a microprocessor
>
> > > Your neurons just present a neurotransmitter to a neuron
>
> > A neurotransmitter doesn't know what a stop sign is either. The
> > instruction set is the only definition of the stop sign that the
> > computer has.
>
> The model can be arbitrarily complex.  The instruction set of the computer
> has very little to do with the informational representation and rules the
> software might use to identify a stop sign from another object.  Its red
> color, octagonal shape, letters, etc. may all be part of that model within
> the computer.
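
Such a 'model' is easy enough to write down. A hypothetical fragment
(Python again; the attribute names are mine, not anything from the
actual Google car software):

# A toy stop-sign 'model': stored feature tokens plus a matching rule.
STOP_SIGN_MODEL = {
    'color': 'red',
    'shape': 'octagon',
    'text': 'STOP',
}

def matches_stop_sign(detected):
    # True if every modeled feature appears in the detected object
    return all(detected.get(k) == v for k, v in STOP_SIGN_MODEL.items())

if matches_stop_sign({'color': 'red', 'shape': 'octagon', 'text': 'STOP'}):
    pass  # decelerate

The tokens 'red' and 'octagon' are certainly in there.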

We may put those ideas into the software, but nothing can get them out
except another person. It's as alien to a computer as a stop sign
would be to barnacles growing on it in the ocean. It's a pretty
straightforward deduction. I can't get music out of a CD without a CD
player, speakers, and ears that work. There is no music in the CD
itself, only a-signifying pits in a mylar disc. The fact that we are
able to decode the coded texts that we inscribed on that disc should
never be confused with imparting some experience of music to a piece
of plastic or an electronic device. It's a hallucination.

>
> > They are opposites. We see the stop sign through the
> > inside of our synapses, but a microprocessor has an arithmetic pattern
> > imposed upon it externally which doesn't resemble any of its
> > subjective references.
>
> > > > switches the circuit
> > > > on that leads to the actuator
>
> > > Which causes a nerve to fire that leads to a muscle in your foot
>
> > No, it causes you to become aware of the necessity of moving your
> > foot. You have to comply with that sensory input with a motive output
> > of your own choosing (conditioned and reflexive as it may become, you
> > can still voluntarily override conditioning and run the stop sign. The
> > Google car can't do that).
>
> The Google car might have competing priorities, such as "Do I run the stop
> sign, or avoid a collision with a car that is not slowing down behind me?"
> Much like a person could run a stop sign given the proper motivation.

It still has no choice in determining its actions in each case. It
may have to have some randomness to determine the priority of
conflicting imperatives, but whichever one comes out on top gets the
involuntarily programmed response.

>
>
>
> > > > that happens to lead to the accelerator
> > > > (it could lead to a toaster or a nuclear missile). Optical patterns
> > > > which satisfy the software's description of stop signs cause a circuit
> > > > to close. There is no meaning or choice involved. Turning on a water
> > > > faucet doesn't mean anything to the plumbing. There are consequences
> > > > on a physical level, but not one that leads on its own to psychology.
>
> > > Now who is denying "the other side of the coin"?  Not that I ever
> > > denied a first person perspective.  Perhaps you think I am because you
> > > confuse my belief that computers can be conscious to imply we are as
> > > souless as you believe computers to be.
>
> > I don't think that you believe people to be soulless, but I think that
> > you would have to if you followed substance monism to it's logical
> > conclusion.
>
> Naively, people think that materialism leads to a disproof of souls, but
> these people just didn't go far enough.  Materialism leads to mechanism
> which leads to immateriality and eternal continuation of consciousness, and
> I think also the idea that we are all of the same consciousness.

If you believe in immaterial eternal consciousness, what is the use of
material mechanism?

>
> > I'm not denying that there is sensorimotive content to
> > matter - I think that there has to be, I just think that there are
> > channels of perception and that our naive apprehensions of foreign
> > channels has some validity. It may be exaggerated, it may be
> > stereotype, but the fact that it exists may be extremely important in
> > understanding how all of this works. Why do we so not care what a
> > water faucet is thinking?
>
> It's probably not thinking.

That's how I feel about computers (or software).

>
>
>
> > > >>> A choice is being made from the 3-p view, but that isn't the one
> > > >>> that
> > > >>> matters. The computer has no knowledge of its choices. It's just
> > > >>> executing an instruction set.
>
> > > >> It does have knowledge.  What you ascribe to having no knowledge of
> > > >> the
> > > >> decision is the underlying basis of the computation.  Similarly, your
> > > >> neurons (individually) have no idea of what stock you are
> > > >> purchasing or
> > > >> selling at the time you do.  Only you (the higher level process
> > > >> does).  It
> > > >> is the same with a computer-supported mind.
>
> > > > The difference is that our higher level processes arise autopoietically
> > > > from our neurology. A computer has our higher level processes
> > > > imposed on semiconductors which have no capacity to develop their own
> > > > higher level processes - which is precisely why these kinds of
> > > > materials are used. Making a computer out of living hamsters in a maze
> > > > is not going to be very reliable. Hamsters have more of their own
> > > > agenda. Their behavior is less predictable. Humans even more so.
>
> > > All processes need a reliable foundation, be they the physical laws or
> > > a chip's instruction set.
>
> > Any processes executed by a hamster do not,
>
> The actions of the hamsters depend on the fixed foundation of the laws of
> physics and chemistry.

So in the fixed foundations of physics and chemistry, there would be a
time before the existence of water. Was water always a potential
within hydrogen or within protons? If so, why bother with the
formality of making it out of H2O? Where does novelty come from in a
universe of fixed laws? What law allows for novelty?

>
> > apparently, need as
> > reliable a foundation as do boolean logic computers. This difference
> > must be addressed. If everything is founded on reliability, why and
> > how does anything become unreliable?
>
>  Unreliability is when something does something we didn't want it to do, but
> it was doing exactly what it had to do according to the laws of physics.
> The unreliability stems from imperfections in design.

But hamsters don't obey different laws of physics. How do they get to
be more unreliable than semiconductors?

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
