On Sep 13, 12:52 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Sep 12, 2011 at 3:16 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
> > To say that complex things can result from very simple rules is true
> > enough, but it's circular reasoning that distracts from the relevant
> > questions: What are 'rules' and where do they come from?
> Well this is another question altogether.  My quick answer to it is that we
> chose them from among all possible systems of laws in the sense that if the
> laws were not conducive to life we would not be here to observe them.

That's circular reasoning. It doesn't answer the question, it just
unasks it by saying that the rules are whatever they happen to need to
be in order for us to exist and think that there are rules. It fails
to examine why or how there is such a thing as 'possibility' in the
first place, which is what my question is.

> > How are they
> > enforced? Why would there be a difference between simple and complex
> > to begin with and what makes one lead to the other but not the other
> > way around?
> > > > > >> Similarly with
> > > > > >> neurons: they each follow simple rules, and they are not aware of
> > the
> > > > > >> grand picture, which is the emergent phenomenon of intelligence
> > and
> > > > > >> consciousness.
> > > > > > No, because that would necessitate an entity that was not neurons
> > to
> > > > > > "stand back and look at the whole". Since there is nobody here but
> > us
> > > > > > neurons, neurons must in fact facilitate both top down and bottom
> > up
> > > > > > direction.
> > > > > The system of neurons instantiates an observer, the person.
> > > > Why would it do that on top of already doing everything that the
> > > > person thinks they are doing? It's superfluous, metaphysical magic.
> > > If they didn't then you would be unaware of the nail you stepped on, or
> > the
> > > fact that you are hungry or thirsty.  You seem to think we could sleep
> > walk
> > > through life and be just as successful without consciousness.  (This is a
> > > belief in zombies)
> > To me there is no question that awareness is necessary for existence,
> > living organisms or not. You're conflating awareness in general with
> > human consciousness, as if it were necessary to have a multi-sensory
> > technicolor 3D holographic presentation of the universe to be able to
> > feed yourself or avoid sharp objects.
> It's not necessary, but it helps.

It's easy to assume that it helps, just as it's easy for me to assume
that we have free will. If we don't need our conscious mind to make
decisions, then we certainly don't need the fantasyland associated
with our conscious minds to help with that process. Think of building
a robot that walks around and looks for food and avoids danger. Why
would it help to construct some kind of Cartesian theater inside of
it? Functionally, there is no reasonable explanation for perception or
experience, especially if you believe in determinism.

> > > If you are non-deterministic,
> > > then nothing determines what you do and you are a slave to the roll of a
> > > die.
> > Only if you a priori eliminate the possibility of free will and sense.
> > Then you are left with the options that make no sense and have no free
> > will.
> You have a will, which determines what you do, but your will in turn is
> determined by other things.

If it were completely determined by other things, then its existence
would be redundant. The fact that it is influenced by other things
doesn't mean that it is completely determined by them, and it
doesn't mean that it has no native ability to influence things
out of its own sense and identity.

> > > Have you ever played with "The Game of Life" (
> >http://www.bitstorm.org/gameoflife/).
> > Haha, of course. Thirty years ago. I notice there's been no
> > improvement since then.
> Actually someone made a Turing machine in one 
> recently: http://rendell-attic.org/gol/tm.htm

That's cool. That is an improvement of a sort. It's still a game
though, not leading to any kind of meaningful self-evolving patterns
of novelty.
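As a concrete aside (my own illustrative sketch, not taken from either of the linked pages): the entire rule set of the Game of Life fits in a few lines of Python, which is what makes the Turing machine result striking. All that structure comes from nothing but neighbor counting.

```python
# Conway's Game of Life in one step function, using a set of live (x, y) cells.
from collections import Counter

def step(live):
    """Apply one generation of Conway's rules to a set of live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or 2 live neighbors and it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" (three cells in a row) oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Running `step` twice on the blinker returns it to its original configuration, which is the simplest example of a stable emergent pattern none of the individual cells "knows about".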

> > Right. High level processes cause lower level effects without
> > violating the underlying rules of the system.
> Have you read Douglas Hofstadter?  He says we are "Strange loops" in the
> sense that the mind builds up hierarchies which then loop back and cause low
> level changes at the bottom of that hierarchy.

Yes, I liked GEB a lot (but never finished it... got too far into
mathematically flavored abstractions). I agree that part of our process
can be described that way, but it's not sufficient to explain what the
content of those loops actually is. It has no sensorimotive
significance, just sterile architectural functions. The other half of
what we are needs to be included in any explanation of ourselves.

> > They are making the
> > rules of the system. When you play a game, you are making new
> > temporary rules which govern the production of your own
> > neurotransmitters associated with the emotions of enjoying the game.
> There may be higher-level rules which serve as summaries for more
> complicated lower-level rules, but the lower level rules do not change.

Right, the lower level rules do not change. They are enabling the high
level rules in the first place. But the high level rules make use of
the low level rules for their own purposes. I think it's metaphysical
to talk about 'summaries' existing, but yes, summarizing is one of the
functions that defines high level as distinct from low level. It's not
the only one though. White light does not summarize the visible
spectrum qualitatively but we could say that it functions that way
quantitatively. That relationship between the two is a good metaphor
for understanding how I think sense works. Color is not just different
quantities of whiteness and white is not just a summary of color - it
is what it is, a specific irreducible order of visual perception and
motive relation.

> > If you choose not to play the game then those neurotransmitters will
> > not be produced. It's up to you, personally. You can say that it's
> > just the neurotransmitters themselves causing the neurons to produce
> > them or it's the neurons themselves causing a hallucination of a
> > person thinking they are playing a game, but honestly, that's absurd.
> > The brain can make whatever neurotransmitters it wants, it doesn't
> > need to cook up a game and a player of games as an excuse.
> > >They are
> > > simply higher-level more complex behaviors built upon simple rules acting
> > on
> > > a large number of interacting pieces.
> > Why is that 'simply'?
> Simply because everything they do is captured by the simple rules.

Then what would be the point of capturing them? If high level
complexity is entirely described by low level behaviors added
together, then what is the point of adding them? Isn't it obvious that
different levels of perception yield different novel possibilities?
That a ripe peach does something that a piece of charcoal doesn't?
That yellow is different from just a bluer kind of red?

> > How do you get 'pieces' to 'interact' and obey
> > 'rules'? The rules have to make sense in the particular context, and
> > there has to be a motive for that interaction, ie sensorimotive
> > experience.
> If there were no hard rules, life could not evolve.

'Hard rules' can only arise if the phenomena they govern have a way of
being concretely influenced by them. Otherwise they are metaphysical
abstractions. The idea of 'rules' or 'information' is a human
intellectual analysis. The actual thing that it is would be
sensorimotive experience. It is a concrete manifestation from which we
extract essential patterns which we can model quantitatively, but
those models themselves are not independent entities in the world.

> > Yes because it assumes that there is a such thing as two persons who
> > are physically or mentally indistinguishable. Even one person is not
> > indistinguishable from themselves from moment to moment. Do they have
> > the same eyebrow mites crawling around their face? Do they have the
> > same quadrillion bacteria in their gut (http://en.wikipedia.org/wiki/
> > Gut_flora)?
> It is possible in theory for there to be two identical people.  If the
> universe is infinitely big and matter exists throughout then chances are
> there is a physically identical version of you somewhere, atom for atom,
> particle for particle identical.  Do you not think this physically identical
> person would have the same experience as you?

No. Just the fact of not occupying the same space as my body makes it
different. The idea of two separate things being 'identical' is a
function of pattern recognition. Identical to who?

> > There is of course a strong correlation between physical and
> > psychological phenomena of a human mind/body, but that correlation is
> > not causation. Psychological properties can be multiply realized in
> > physical properties,
> This means you think other different physical forms can have identical
> psychological forms.  E.g., a computer can have the experience of red.

If the computer was made out of something that can experience red,
then sure.

> > but physical properties can be multiply realized
> > in psychological properties as well. Listening to the same song will
> > show up differently in the brain of different people, and different
> > even in the same person over time, but the song itself has an
> > essential coherence and invariance that makes it a recognizable
> > pattern to all who can hear it. The song has some concrete properties
> > which do not supervene meaningfully upon physical media.
> Different physical properties can be experienced differently, but that's not
> what supervenience is about.  Rather it says that two physically identical
> brains experiencing the same song will have identical experiences.

Identical is not possible, but the more similar one thing is
physically to another, the more likely that their experiences will
also be more similar. That's not the only relevant issue though. It
depends what the thing is. A cube of sugar compared to another cube of
sugar is different than comparing twins or triplets of human beings.
The human beings are elaborated to a much more unpredictable degree.
It's not purely a matter of complexity and probability, there is more
sensorimotive development which figures into the difference. We have
more of a choice. Maybe not as much as we think, and maybe it's more
of a feeling that we have more choice, but nevertheless, the feeling
that smashing a person's head is different from smashing a coconut
deserves serious consideration in any kind of ontological formulation
of the universe and our place in it.
> > We understand the what and the how of electromagnetism, but to predict
> > what a brain would do you need to understand the who and the why of
> > sensorimotive perception.
> For this to be so, then the brain must do something that violates quantum
> electrodynamics.

No it doesn't. No more than the Taj Mahal must do something that
violates classical brick dynamics.

> > > I don't think Stathis, Bruno or myself are denying any of these things.
> > Bruno isn't, he's just attributing them to arithmetic - which I
> > respect, but disagree with because of recent changes in how I think
> > about it.
> What was this recent change?

The realization that photons may not be real.

> > You and Stathis though, are pure substance monists. Anything
> > that doesn't show up in a billiard ball model must be magic.
> > > I don't think it disqualifies the gliders or guns of the Game of Life to
> > admit
> > > that they are complex phenomena which nonetheless follow from simple
> > rules.
> > > A lone electron is a simple thing,
> > It's not a simple thing. It's a simple name for a very complex
> > phenomenon. The gliders or guns of the Game of Life aren't things at
> > all in themselves. They're experiences of a people using a GUI.
> > > but when you have ~10^25 of them confined
> > > to the size of a human skull moving in immensely complicated shapes and
> > > patterns no one can use the fact that the electron is simple to
> > disqualify
> > > the awesome complexity and wonder of the mind.
> > If the electron truly were simple, and it had only simple physical
> > functions, then it would disqualify the mind from existing.
> You think an electron has to have the capability to feel in order for the
> human brain to be able to feel?

It doesn't have to have the capability to feel like a human being
feels, no, but it has to have some kind of capability to detect and
change according to that detection.

> > Nothing
> > about the physical functions of the brain, neurons, or electrons we
> > observe suggest the existence of a mind.
> The particles in the brain model their external reality,

In what way? Where is this model located? The forest could be modeling
intergalactic p0rn for all we know, but without any experience of the
result of that 'model', we can't really say that is what the brain is
doing at all. The brain is just living cells doing the things that
living cells do.

> analyze patterns,
> process sensory information, digest it, share it with other regions, and
> enable the body to better adapt and respond to its environment.

The immune system does that too. So does the digestive system. So do bacteria.

> behaviors and functions suggest the existence of a mind to me.

Only because you have a mind and you are reverse engineering it. If a
child compares live brain tissue under a microscope to pancreas tissue
or bacteria under a microscope, they would not necessarily be able to
guess which one was 'modeling' a TV show and which was just producing
biochemistry. The suggestion of a mind is purely imaginary, based upon
a particular interpretation of scientific observations.

> > > > Saying that a structure 'determines' the way
> > > > something 'responds' has no explanatory power at all. It's taking for
> > > > granted the ability of structures to 'determine' and 'respond' as if
> > > > those were logical expectations to have of shapes of matter. It takes
> > > > for granted that the existence of a thing which has some reason to
> > > > determine or respond to anything without making that thing explicit.
> > > > What good is input and output to something that is nothing but input
> > > > and output? Nothing makes sense without sense itself, and nothing
> > > > which exists makes no sense.
> > > Take a thermostat and give it the ability to look over and compare its
> > > current temperature and past temperatures, to attempt to predict the
> > future
> > > based on the trends and patterns of temperatures, to talk about these
> > > considerations and its present state, and you will have trouble denying
> > that
> > > this thermostat really does have sense in the same meaning of the word as
> > we
> > > do.
> > I have no trouble denying that. The thermostat element does have sense
> > of local temperature, and the computer required to record and analyze
> > statistical trends has sense of being an electronic device opening and
> > closing circuits according to the natural inclinations of it's
> > material components to store and discharge current,
> I could say the same things about the processes in your brain.

But you can't say the same thing about the processes in your own mind.
It's true that nobody can say for sure what a thermostat experiences,
but I have never claimed to know that, I only suggest what seems
rational to me based on the fact that thermostats don't seem to do
anything other than what we expect them to do.

> > but the two things
> > have no sense of each other.
> This is your guess.

Yes. I think it's a pretty good one though. It explains why
thermostats don't tell you to move to a warmer climate or conspire with
the other appliances in your house to take better care of you.

> > The computer doesn't know what
> > temperature is at all, and the thermostat doesn't know what a computer
> > is at all.
> A 2 year old doesn't know what temperature is, or understand what a brain
> is, but one can still tell the difference between hot and cold.

Being able to tell the difference between hot and cold is the two year
old's version of knowing what temperature is. A computer doesn't know
the difference between hot and cold, but if it's connected to
something that changes measurably under different thermal conditions,
then we can use the computer to report the status of its connection
to that thing. The thermostat's version of the difference between hot
and cold is not likely, in my view, to be very similar to a person's,
because a strip of metal is so different from a trillion-cell
living being (I doubt we share the same sense of humor with
thermostats either).
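To make the point concrete, here is a minimal sketch (my own, with made-up class and method names) of the "thermostat with memory and prediction" being discussed. Notice that nothing in it refers to warmth or cold as felt; it only compares numbers.

```python
class PredictiveThermostat:
    """A thermostat that records readings and extrapolates a trend.
    It manipulates numbers; it has no notion of what 'hot' feels like."""

    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.history = []          # past temperature readings

    def record(self, reading):
        self.history.append(reading)

    def predicted_next(self):
        # Naive linear extrapolation from the last two readings.
        if len(self.history) < 2:
            return self.history[-1] if self.history else self.setpoint
        return 2 * self.history[-1] - self.history[-2]

    def heater_on(self):
        # Switch on if the extrapolated temperature falls below the setpoint.
        return self.predicted_next() < self.setpoint
```

Whether running this loop amounts to "sense in the same meaning of the word as we do" is exactly what is in dispute; the code itself is neutral on that.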

> > In contrast, we understand what temperature means to us and why we
> > care about it.
> An appropriately designed machine could care about it too.

Why do you think that a machine can care about something? Just because
we have mechanistic properties doesn't mean that caring can be
accomplished through mechanism.

> > We actually feel something, we have an opinion about
> > the temperature and what it means in terms of our comfort or planning
> > our activities etc. The thermostat, even with advanced predictive
> > computation capacities, has no comparable sense or experience.
> It could have experience, even if it was not necessarily comparable to
> yours.

Exactly. I think that is the case.

> Also, I thought you said everything, even water molecules, have sense and
> experience.

Yes, I think they do. It might be totally orthogonal to our own
experience... maybe molecules all have one aggregate experience
somehow, or maybe they have a very shallow pool of qualia, etc.

> > It has
> > no general intelligence or ability to question it's programming -
> It could.

Theoretically maybe, but it does not seem to be the case.
> > it
> > just blindly takes measurements and plugs them into logic circuitry
> > without ever knowing what logic is.
> > > I think your theory is the result of assuming awareness is as simple,
> > plain,
> > > and fundamental as it seems.
> > My theory is not assuming awareness is simple, it is deducing that
> > fact after long and careful consideration of the alternatives.
> This is giving up hope of understanding it.

No, I think that it's embracing the reality that we already understand it.

> > > An fMRI could not, but a model of your brain and surrounding environment
> > at
> > > the level of QED could, unless you think QED is wrong.
> > Another false dichotomy. If someone in my surrounding environment
> > looks at me a certain way, my perception of that presents me with
> > feelings and possibilities for interpretations of the look and the
> > feelings. QED has no capacity to address phenomena like that and
> > therefore fails spectacularly at predicting what I'm going to be
> > thinking of, yet does not make QED 'wrong'.
> Then you require that electrons do things not predicted by QED.

Does QED predict that electrons tell jokes and produce TV shows?

> > > > This view of the psyche as being the inevitable result of sheer
> > > > biochemical momentum is not even remotely plausible to me.
> > > Why?
> > Because it doesn't take into account that there is an experience
> > associated with the psyche which is dynamically changing the
> > biological momentum from the top down from moment to moment.
> > > > It denies
> > > > any input/output between the mind and the outside world
> > > There definitely is interaction between the mind and its environment.
> > Then there can't be any model of prediction based on just biochemical
> > default behaviors.
> You don't seem to get what is meant by supervenience.

Nothing 'is meant' by supervenience. I may not get what you mean by
supervenience. It's a tricky term: if A supervenes on B, it means
there can be no difference in A without a difference in B, so B fixes
A but A does not fix B. It's easy to get it mixed up.

> > > > and reduces
> > > > our cognition to an unconscious chemical reaction.
> > > If I say all of reality is just a thing, have I really reduced it?
> > It depends what you mean by a 'thing'.
> Does it?

Of course. If I say that an apple is a fruit, I have not reduced it as
much as if I say that it's matter.

> > > Explaining something in no way reduces anything unless what you really
> > value
> > > is the mystery.
> > I'm doing the explaining. You're the one saying that an explanation is
> > not necessary.
> Your explanation is that there is no explanation.

Not really.

> > > Also, I don't think it is incorrect to call it an "unconscious chemical
> > > reaction".  It definitely is a "conscious chemical reaction".  This is
> > like
> > > calling a person a "lifeless chemical reaction".
> > Then you are agreeing with me. If you admit that chemical reactions
> > themselves are conscious,
> Some reactions can be.
> > then you are admitting that awareness is a
> > molecular sensorimotive property and not a metaphysical illusion
> > produced by the brain.
> Human awareness has nothing to do with whatever molecules may be feeling, if
> they feel anything at all.

Then you are positing a metaphysical agent which supervenes upon
molecules to accomplish feeling (which is maybe why you keep accusing
me of doing that).

> > > If when you think of a chemical reaction, you think of a test tube filled
> > > with a liquid and then you equate it with cold, lifeless simplicity, and
> > you
> > > extend that association to other chemical reactions such as life itself
> > and
> > > think that life is thereby diminished, it is not the theory that needs
> > > adjustment but rather the assocations you are making in your mind.
> > No, I'm seeing it the other way around. I'm saying everything has some
> > degree of awareness, but that doesn't mean that everything has the
> > same awareness. Molecules have different properties which make
> > different kinds of sense in conjunction with other molecules. That is
> > the same thing as life, just not as elaborate. It's only cold and
> > lifeless in comparison to ourselves because we've taken our particular
> > zoological development to a ridiculous extreme.
> > > Machine's aren't cold, calculating, dim-witted, logical, unfeeling, or
> > > polite.  This is just a personal bias you have developed over a lifetime
> > > with working with rather simplistic man-made machines.  To think all
> > > machines are like this is the same mistaking as thinking all chemical
> > > reactions are dead or unconscious.
> > You think that I don't know what you're saying, but I do. I thought
> > that myself for many years. I understand why it's appealing. I get
> > completely the beauty of how self-similarity and complexity scale up
> > seamlessly from something which might seem mechanical to us to
> > something which seems natural, and how it is just the unfamiliarity
> > that makes it seem different to us. That's all true, but you don't get
> > that I'm not talking about that. I'm talking about the fact that
> > without awareness, no pattern is possible at all.
> An unheard tree falling in the woods.

Yes, but the organisms that make up the woods would have awareness
beyond hearing it. On an ontological level, pattern cannot be a
pattern without some form of pattern recognition. They are two parts
of the same thing.

> > Just because natural
> > processes can be modeled mechanistically doesn't mean that creating
> > machines based on those models won't be cold and unfeeling. I think
> > that any model which tries to work with awareness but does not take
> > awareness into account is fatally flawed.
> Awareness exists but I don't think its not a physical property.

That's fine, but you would need to explain what kind of property it
is. How does it come to affect physical things?

> > > > If that were the
> > > > case then you could never have a computer emulate it without exactly
> > > > duplicating that biochemistry. My view makes it possible to at least
> > > > transmit and receive psychological texts through materials as
> > > > communication and sensation but your view allows the psyche no
> > > > existence whatsoever. It's a complete rejection of awareness into
> > > > metaphysical realms of 'illusion'.
> > > I think you may be mistaken that computationalism says awareness is an
> > > illusion.  There are some eliminative materialists who say this, but I
> > think
> > > they are in the minority of current philosophers of mind.
> > How would you characterize the computationalist view of awareness?
> A process to which certain information is meaningful.  Information is
> meaningful to a process when the information alters the states or behaviors
> of said process.

What makes something a process? Are all processes equally meaningful?

> > What makes the difference between something that is aware and
> > something that is not?
> Minimally, if that thing possesses or receives information and is changed by
> it.  Although there may be more required.

We are changed all the time by inputs and outputs that we are not
aware of.

> > It seems to me to be obscured behind a veil of
> > general 'complexity'.
> I think simple awareness can be simple, but complex awareness, like that of
> humans, is highly complex.

I agree.

> > > If the atoms always follow these laws, and we can come to know these
> > laws,
> > > then in principle a computer programmed to follow these laws can tell us
> > how
> > > a particular arrangement of atoms will evolve over time.  Do you agree
> > with
> > > this?
> > No. If bricks always follow certain laws, and we can come to know
> > these laws, then in principle a computer programmed to follow these
> > laws can tell us how a particular pile of bricks will be assembled
> > over time.
> If you are assuming humans or other things involved in assembling those
> bricks, then your model is incomplete.

That's why QED is incomplete for understanding human consciousness.

>  In my example I used a simulation of
> atoms, because what besides atoms and their interactions between each other
> do you think is important in the functioning of a brain?

The experiences that the brain has of the world, and the impact of
material conditions of that world on the brain (on a chemical,
biological, zoological, and anthropological level as well as atomic).
If you dwell on certain thoughts, you will change the structure of
your brain.

> > Do you agree with that? Can you detect the blueprint of a
> > future Taj Mahal from the mechanics of how random stones fit together?
> Not stones, but perhaps atoms.

What's so special about atoms that they are hiding Taj Mahals but not
stones?
> > Human consciousness is a specific Taj Mahal of sensorimotive-
> > electromagnetic construction. The principles of it's construction are
> > simple, but that simplicity includes both pattern and pattern
> > recognition.
> Pattern and pattern recognition, information and information processing.
> Are they so different?

Very similar yes, but to me information implies a-signifying
computation. You don't need to know what information means to process
it. To recognize a pattern, you have to actually make some sense of it
yourself. Information is 3-p generic. Pattern is more general, 1-p
sensations as well as 3-p measurements.

> > > The words we know how to utter exist as patterns encoded by neurons in
> > the
> > > brain.
> > That's not the same thing. Without actually uttering them, they are
> > not words.
> They're not spoken words, but they're words written in some neural language
> just like words written on paper.

They're biochemical texts in a neurological context, so they may be a
kind of word equivalent to neurons, but they are not words in any
human sense. You think that they are encoded neurologically because
your mind decodes when it reads, but there is no place for a
translator to intervene. That neural language is not translated into
any other language, it is just experienced differently from the inside
than it can be observed from the outside. Not just differently, but in
the ontologically complementary way.

> > A pattern on a DVD is not a movie unless it's played on a
> > DVD player and watched by a human being. If we found an alien corpse,
> > there is no way we could extract any words from any patterns in the
> > brain, nor would we find any mechanism which encodes and decodes any
> > such 'patterns' into words.
> The mechanism would exist in the alien brain, and it could be found.
> Assuming the brain was sufficiently preserved.

No, that would mean a Homunculus hiding in the tissue of the brain
scribbling out translations.

> > You have to live in the brain of the alien
> > to hear or say the words.
> > > > > > It's not complicated, it's simplistic. If you put a group of kids
> > in a
> > > > > > room you can't predict which ones will go on to hit other people,
> > even
> > > > > > if you know who will hit who first. A billiard ball model only
> > works
> > > > > > for things that behave like billiard balls. Again, this should be
> > > > > > considered settled science. You can't reduce everything to abstract
> > > > > > positions and velocity when you are dealing with the real world,
> > and
> > > > > > especially with living organisms.
> > > > > You can't reduce consciousness to physical processes, but you can
> > > > > reduce physical processes to other physical processes, which means
> > > > > that you can describe how any physical entity will behave entirely in
> > > > > physical terms, without reference to consciousness.
> > > > You're assuming that behavior isn't driven by consciousness. What
> > > > evidence do you have of that, other than a priori certainty about the
> > > > nature of physical processes. My view is just that electromagnetism (a
> > > > physical process, right?) is sensorimotive (a qualitative experience).
> > > How can a simple thing "electromagnetism" be identical with the seemingly
> > > infinite possible variations if conscious experience?
> > Electromagnetism is just a description of how physical substance
> > relates and reacts to itself. In simple substances, it's simple, in
> > complex arrangements of substances it's complex. An MRI will show that
> > our conscious experience is coordinated with electromagnetic activity
> > in the brain. It's not identical to conscious experience, it's the
> > back door of conscious experience, but they share an identical common
> > sense.
> Is your theory of the two sides to the coin just a restatement of the idea
> of first and third person views?

Not so much a restatement as a development of that idea.

> > > E.g., how might red, as seen as one side of electromagnetism be different
> > > from the electromagnetism involved when seeing blue?
> > Red or blue can't be seen at all from the electromagnetism side. They
> > are visual feelings that can be correlated with electromagnetic
> > wavelength/frequency specifications to typical human optical response
> > but have no objective visual qualities. The difference between red and
> > blue, to those who can see them, has to do with how our visual cortex
> > makes sense of the inside of our retinas
> The visual cortex doesn't receive "sense" from the retinas, it receives an
> indirect signal, which is information.

A signal is a way of considering an event. If I wave a flag on a ship,
and someone in the Navy sees it, then it's a signal. Information
similarly is an analytical view of the process of sense being made
without referring directly to the experience of sensemaking. Neither
of them are physical processes such as what is going on now in your
retinas and brain. There are no signals there, no information, other
than what you experience them to be. If the visual cortex doesn't
receive sense from the retinas, then it invents it on it's own. Either
way it's the same result. Somehow something ends up as sense - which
is not an indirect signal or 'information' - it is sensorimotive
experience.

> > It doesn't mean that the Game of Life simulation will ever resemble
> > actual living organisms either. After 30 or 40 years, the Game of Life
> > has yet to evolve anything.
> The Game of Life can contain Turing machines, and by extension it could
> contain any possible program, including a simulation of an evolution of an
> artificial life form, or perhaps even you and me having this conversation.
> One could translate Bruno's dovetailer into the game of life and then it
> would contain everyone.

I understand why you believe that, but you don't understand why I'm
pretty sure that's never going to happen.

> > > > There is a
> > > > difference which arises partly from complexity, but also as a
> > > > consequence of qualitative differences between different inertial
> > > > frames. More degrees of freedom, with more non-computational
> > > > considerations come into play.
> > > What are some examples of these non-computational considerations?
> > Feeling. Memory. Choice. Imagination. Humor. Emotion. Strategy.
> > Insight. Vision. Genius.
> Feeling: http://www.youtube.com/watch?v=cHJJQ0zNNOM&t=35s AND http://www.youtube.com/watch?v=lSG--GY2p2o

Impressive, but still just HADD/prognosia.

> Memory: http://hothardware.com/News/IBM-Building-120PB-Drive-Worlds-Largest/

That's not human memory, it's just data storage.

> Choice: http://en.wikipedia.org/wiki/Automated_trading_system (Computers
> make over 60% of the trading decisions on the US stock market)

That's just computation. There is no personal preference there.

> Imagination: http://www.miraclesmagicinc.com/science/man-builds-inventing-machine....
> Humor: http://news.bbc.co.uk/2/hi/technology/5275544.stm
> Emotion: http://www.youtube.com/watch?v=qSmlKAly1UE (I would argue the
> herbivores might feel something like fear when being chased by a predator)
> Strategy: http://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29
> Insight: http://developers.slashdot.org/story/11/09/11/2051222/Has-Cleverbot-P...
> Vision: http://www.bookofjoe.com/2007/08/can-you-spot-th.html
> Genius: http://en.wikipedia.org/wiki/Watson_%28computer%29

I'll come back to these later, but there's nothing new here for me.
It's conflating certain superficial kinds of intellectual processes
that can be performed electronically with human experience (which I
maintain cannot be reproduced that way).

> >  Qualia have no physical properties, they have sensorimotive-
> > perceptual-experiential properties. A mind can't perceive a new type
> > of color because color is the mind itself.
> > > I think the qualia, like the meaning in the conversation is dependent on
> > the
> > > mind.  It is built up from other lower level phenomena.
> > Yes! Lower level sensorimotive phenomena. Not neurological structures
> > but the feelings that insist within and through those structures.
> If the feelings can't change the momentum or direction of a particle, then
> they have no causal role and I don't see the point in including them in
> one's model of reality.

The feelings *are* the momentum and direction of particles, and they
can and do have a causal role in determining their own patterns on
their own level as well as levels below or above them. Our momentum
and direction influences civilization. Civilization's momentum and
direction influences us. Same with our neurons and their molecules.


You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.