Re: bruno list

2011-08-02 Thread Bruno Marchal


On 01 Aug 2011, at 01:12, Craig Weinberg wrote:


Oriental standard of epistemology again. Wisdom, not knowledge.


That is an argument from authority. Like universal arguments, they are
also not valid.





It
doesn't make sense that you can make fire out of numbers.


That is a statement without justification, which sums up your non-comp
assumption. It does not give a motive for believing that you are
correct.


Also, it is misleading, because trivially you cannot make fire out of
numbers; but, assuming comp, arithmetical relations can make numbers
believe in relative bodies and fire, and even get burned, with all
the feelings you might imagine.


Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: bruno list

2011-08-02 Thread Bruno Marchal


On 01 Aug 2011, at 01:12, Craig Weinberg wrote:


I don't see why it has to be infinite and I don't see what's wrong
with non Turing.


We will come back to this. Normally sane04 explains it. I have to go
now,


Meanwhile you might think about how to explain what you mean by
sensorimotive, without any poetry, so that anybody can
understand clearly what you mean.
I remind you that honest scientists admit that they do not understand,
in the scientific way, the nature of matter or the nature of mind,
so it will not help to allude to this. It might make sense to allude
to some property of mind and/or matter whose intuition we might
share with you.


Bruno



http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Stathis Papaioannou
On Tue, Aug 2, 2011 at 11:37 AM, Craig Weinberg whatsons...@gmail.com wrote:
 On Aug 1, 8:07 pm, Stathis Papaioannou stath...@gmail.com wrote:

 1. You agree that it is possible to make something that behaves as if
 it's conscious but isn't conscious.

 No. I've been trying to tell you that there is no such thing as
 behaving as if something is conscious. It doesn't mean anything
 because consciousness isn't a behavior, it's a sensorimotive
 experience which sometimes drives behaviors.

Behaviour is what can be observed. Consciousness cannot be observed.
The question is, can something behave like a human without being
conscious?

 If you accept that, then it follows that whether or not someone is
 convinced as to the consciousness of something outside of themselves
 is based entirely upon them. Some people may not even be able to
 accept that certain people are conscious... they used to think that
 infants weren't conscious. In my theory I get into this area a lot and
 have terms such as Perceptual Relativity Inertial Frame (PRIF) to help
 illustrate how perception might be better understood (http://
 s33light.org/post/8357833908).

 How consciousness is inferred is a special case of PR Inertia which I
 think is based on isomorphism. In the most primitive case, the more
 something resembles what you are, in physical scale, material
 composition, appearance, etc, the more likely you are to identify
 something as being conscious. The more time you have to observe and
 relate to the object, the more your PRIF accumulates sensory details
 which augment your sense-making of the thing,  and context,
 familiarity, interaction, and expectations grow to overshadow the
 primitive detection criteria. You learn that a video Skype of someone
 is a way of seeing and talking to a person and not a hallucination or
 talking demon in your monitor.

 So if we build something that behaves like Joe Lunchbox, we might be
 able to fool strangers who don't interact with him, and an improved
 version might be able to fool strangers with limited interaction but
 not acquaintances, the next version might fool everyone for hours of
 casual conversation except Mrs. Lunchbox cannot be fooled at all, etc.
 There is not necessarily a possible substitution level which will
 satisfy all possible observers and interactors, pets, doctors, etc and
 there is not necessarily a substitution level which will satisfy any
 particular observer indefinitely. Some observers may just think that
 Joe is not feeling well. If the observers were told that one person in
 a lineup was an android, they might be more likely to identify Joe as
 the one.

The field of computational neuroscience involves modelling the
behaviour of neurons. Even a philosopher such as John Searle, who
doesn't believe that a computer model of a brain can be conscious, at
least allows that a computer model can accurately predict the behaviour
of a brain. Searle points out that a model of a storm may predict its
behaviour accurately, but it won't actually be wet: that would require
a real storm. By analogy, a computer inside someone's head may model
the behaviour of his brain sufficiently well so as to cause his
muscles to move in a perfectly human way, but according to Searle that
does not mean that the ensuing being would be conscious. If you
disagree that even the behaviour can be modelled by a computer then
you are claiming that there is something in the physics of the brain
which is non-computable. But there is no evidence for such
non-computable physics in the brain; it's just ordinary chemistry.

 In any case, it all has nothing to do with whether or not the thing is
 actually conscious, which is the only important aspect of this line of
 thinking. We have simulations of people already - movies, TV, blow up
 dolls, sculptures, etc. Computer sims add another layer of realism to
 these without adding any reality of awareness.

So you *are* conceding the first point, that it is possible to make
something that behaves as if it's conscious without actually being
conscious? We don't even need to talk about brain physics: for the
purposes of the philosophical discussion it can be a magical device
created by God. If you don't concede this then you are essentially
agreeing with functionalism: that if something behaves as if it's
conscious then it is necessarily conscious.

 2. Therefore it would be possible to make a brain component that
 behaves just like normal brain tissue but lacks consciousness.

 Probably not. Brain tissue may not be any less conscious than the
 brain as a whole. What looks like normal behavior to us might make the
 difference between cricket chirps and a symphony and we wouldn't
 know.

 If you concede point 1, you must concede point 2.

 3. And since such a brain component behaves normally, the rest of the
 brain should behave normally when it is installed.

 The community of neurons may graciously integrate the chirping
 sculpture into their community, but it 

Re: bruno list

2011-08-02 Thread Bruno Marchal


On 01 Aug 2011, at 21:20, Craig Weinberg wrote:


On Aug 1, 2:55 pm, Bruno Marchal marc...@ulb.ac.be wrote:


That happens with comp too, if you grasp the seventh UDA step. Our
first person experiences are distributed in a non computable way in the
universal dovetailing.

You have a good intuition, but you assume much too much. The goal is to
explain sense and matter without assuming sense or matter (but
accepting the usual phenomenology of it, which is what we need to
search an explanation for).


Searching for an explanation is phenomenological too though, as are
numbers. Arithmetic is part of sense.


The fact that we know the numbers phenomenologically does not imply
that they are phenomenological.
Human arithmetic is no doubt part of human sense, but this does not
make arithmetical truth dependent on humans.
On the contrary, number theorists, logicians and computer scientists
know that arithmetic *kicks back* (cf Johnson's principle of
reality). We know since Gödel that arithmetical truth escapes all
axiomatizable or effective theories.




A sensorimotive circuit, to
detect, model, and control. It's an experience which requires a very
specific intelligence to participate in. We can control and detect by
arithmetic modeling, but that doesn't mean the object of its modeling
is arithmetic. I think that I'm actually assuming much less - my
primitive universe doesn't require any epiphenomena or
disqualification of appearances.


You assume matter and sense, and relate them by adding infinities. You
mention electromagnetic waves, which subsume the (natural) numbers by
using trigonometry on the reals, so at this level your theory clearly
assumes more ontology, or independent truth, than computationalism.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Bruno Marchal


On 01 Aug 2011, at 21:42, Craig Weinberg wrote:


On Aug 1, 2:33 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 01 Aug 2011, at 01:12, Craig Weinberg wrote:


What would be the part of a burning log that you need to emulate to
preserve its fire?


What you call fire is a relation between an observer and fire, and
what you need consists in emulating the fire and the observers at their
right substitution level (which comp assumes to exist).

Jason and Stathis have already explained this more than one time, I
think.


The explanations I've heard so far are talking about something like
virtualized fire and a virtualized observer.


Yes. That is the point.





I'm asking about how
would you emulate fire so that it burns like fire to all observers
that fire exists for.


Here, you are asking me to make a confusion of levels. To understand  
the point you need only to understand that the virtual people feel the  
virtual fire, when they are emulated at their right substitution level.






It needs to burn non-virtualized paper.


You ask for the impossible. But this is not asked for, nor entailed, by
computationalism.





Heat
homes, etc. How does arithmetic do that, and if it can't why is that
not directly applicable to consciousness?


Arithmetic does that because Turing universality is an arithmetical
concept, and numbers have 'naturally' universal relations
between each other.


Once you bet that you survive with a digital brain, and possess some
amount of self-referential ability, you can understand that in fine
the mind-body problem can be translated into a body-appearance problem
in arithmetic. By a 'wonderful miracle' (the Solovay logics) we get a
bit more than physics: a propositional machine's neoplatonist-like
theology.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Bruno Marchal


On 02 Aug 2011, at 03:49, Craig Weinberg wrote:


On Aug 1, 4:31 pm, Bruno Marchal marc...@ulb.ac.be wrote:


I believe that Babbage's machine, if completed, could run a program
capable of seeing a larger spectrum than us.



Why do you, or why should I believe that though?


Well, it is a consequence of digital mechanism, alias  
computationalism.


It seems like circular reasoning to me. If you believe in comp, then
you believe math can have human experiences.


Not at all. Comp is just the doctrine according to which you can
survive with a digital brain. This is neutral with respect to
materialism. It is a non-trivial consequence that comp leads to the
abandonment of the physical supervenience thesis.






I ask why I should
believe that, and you say that comp compels the belief.


That is what the UDA is about, besides indeterminacy, non-locality,
and non-cloning. Some people accept immateriality at step seven, and
others opt for an ultrafinitist physicalism. Step 8 shows that
ultrafinitist physicalism is a red herring.
I don't ask you to take my word. Study, and you will understand, or
perhaps find a weakness. To be honest and precise, there is still one
point which I think must be made more precise, which is that comp
implies the 323-principle. It says that if, for some particular
computation, consciousness supervenes (physically) on a computer which
does not use the register 323, then consciousness supervenes
(physically) on that 'same' computation on the computer from which the
323 register has been removed. The reason this is true is that
relaxing the notion of physics to include such genuine
counterfactuals would prevent you from saying yes to the doctor for
computationalist reasons. It would again be a magical move. This is
what makes it obligatory to explain the physical reality by the
universal numbers' dream relative measure.







We explain, or try to explain, the complex (matter, mind, gods and
goddesses, and all that) from the simple principles on which many
agree, like addition and multiplication.


That's what I'm doing. Sensorimotive experience is clearly simpler
than addition and multiplication to me,


To you, perhaps. It is your work to make it simpler for us.




and with my hypothesis, it can
be seen that this experiential principle may very well be universal.


I don't approach questions in terms of syntactic architecture. I'm
starting with nothing and adding only what appears to be necessary to
understanding the cosmos without leaving out anything important (like
life, consciousness, subjectivity).


I have no doubt you try to understand something, but you seem to have
no idea of what a scientific approach is, to be frank.
We always try to assume the least, derive things, and compare with
data.


Even more than always trying to do particular things, a scientific
approach should not always do what it always does. You can do both. I
have a clear vision of how these phenomena fit together and I think it
makes sense. I feel that it's up to others to test it in whatever way
they like.


You have to be a billion times more precise, I'm afraid.

Bruno





From what I (hardly) understand of your approach, you bury the
Mind-Body problem in an infinitely low substitution level.


To me, otherwise known as solving the Mind-Body problem.


At least you
acknowledge that you have to say no to the doctor, and that *is*
your right. But beware the crazy doctor (pro-life like) who might not
ask for your opinion.


I'd go to the doctor that has had alternate halves of his brain
replaced for a year each.

Craig



http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 1, 3:02 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 01 Aug 2011, at 01:12, Craig Weinberg wrote:

  Oriental standard of epistemology again. Wisdom, not knowledge.

 That is an argument from authority. Like universal arguments, they are
 also not valid.

By what authority are they always non valid? I'm not saying they are
valid, but when we examine the phenomenon of authority itself - the
initiation of teleological orientation, aka subjectivity, we may not
necessarily be able to automatically disqualify these kinds of
arguments. In the subjective realm, the cogito presents a legitimate
argument as a starting point for understanding the phenomenon. 'Je
pense donc je suis' reveals a phenomenology of INsistence in
contradistinction to its existential set correlate, which relies upon
the ability to doubt all authoritative insistence. That's why it's the
hard problem of consciousness, because you have to learn it the hard
way, through first hand, 1p experience.

  It
  doesn't make sense that you can make fire out of numbers.

 That is a statement without a justification, which sums up your non-
 comp assumption. It does not motivate for believing that you are  
 correct.

Ok, true, it is not justified to say that it doesn't make sense, but
I'm justified in saying that it doesn't make sense to me.


 Also, it is misleading, because trivially you cannot make fire out of
 numbers; but, assuming comp, arithmetical relations can make numbers
 believe in relative bodies and fire, and even get burned, with all
 the feelings you might imagine.

I understand perfectly that the effects of fire and body can be
emulated within a virtual context, but to say that there is no
relevant distinction between that context and the universe in which we
participate naturally is just as unjustified as my assertion that it
makes no sense. If the simulation cannot cause things to burn outside
of its virtual context, then there is no reason to assume that it can
cause consciousness which can be related outside of its context either.

It's not the numbers that believe in relative body and fire, it's just
us believing that numbers can believe something. We can believe in a
CGI generated cartoon world to an extent, but I have no reason to
imagine that the cartoon world exists to itself. That's silly, right?

Craig
http://s33light.org




Re: bruno list

2011-08-02 Thread Bruno Marchal


On 01 Aug 2011, at 20:11, Craig Weinberg wrote:


On Aug 1, 1:55 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 01 Aug 2011, at 01:12, Craig Weinberg wrote:

What machine attributes are not Turing emulable? I thought Church says
that all real computations are Turing emulable.


But for Church, the real computations are what a finite mind can do
with a finite set of transparent instructions, in a finite time, but
with as much memory and time as it needs. They are the intuitively
computable functions.


I didn't realize it was that limited.


It is not that limited. It is the only effective set (you can
generate it) which is closed under the most transcendental operation
known in math (diagonalization). Just this makes that set
explicatively closed.
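[Editor's note: the closure claim above can be illustrated with a toy diagonalization. This sketch is my own, not Bruno's formalism; the enumeration f and the diagonal g are illustrative names. Given any effective list of total functions, the diagonal g(n) = f_n(n) + 1 is itself computable, yet it differs from every f_i at input i, so it escapes the list.]

```python
# Toy diagonalization: start from an (effective) list of total functions
# on the naturals, and build a function guaranteed to escape the list.

def f(i):
    """An effective enumeration f_0, f_1, f_2, ... of total functions."""
    base = {0: lambda n: 0,      # f_0: constantly zero
            1: lambda n: n,      # f_1: identity
            2: lambda n: n * n}  # f_2: squaring
    # beyond the explicit entries, let f_i be the constant function i
    return base.get(i, lambda n: i)

def g(n):
    """The diagonal: computable whenever the enumeration f is."""
    return f(n)(n) + 1

# g differs from f_i exactly at input i, so g is total and computable
# yet appears nowhere in the enumeration.
for i in range(10):
    assert g(i) != f(i)(i)
```

The same move applied to the *partial* computable functions stays inside the set, which is the closure property alluded to above.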





I wasn't thinking of real in the
sense of only physical, but if Church posits a finite 'mind' with
transparent 'instructions' then it would seem useless for emulating
qualia.


I guess that is trivial assuming your non-comp theory.




Does a mind include sensation and perception?


It does not exclude it, but that is relevant only at the elementary
level. A Turing machine must be able to recognize whether some symbol
is on its tape or not, and act in a way depending on its state, but
beyond that nothing much is needed. Indeed the goal is to explain the
complex (sensation and perception) from the simplest (elementary
perception and obedience to elementary laws). Otherwise it is a bit
like cheating, insofar as we are looking for an explanation, perhaps
even a partial one.
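[Editor's note: the minimal picture described above — read the symbol under the head, then act according to the current state — can be sketched directly. The driver and the bit-flipping rule table below are illustrative only, not anything from the thread.]

```python
# A toy Turing machine driver: at each step it reads the symbol under
# the head and looks up (state, symbol) in a rule table to decide what
# to write, where to move, and which state comes next.

def run_tm(tape, rules, state="scan", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Flip every bit, then halt on the first blank.
flip_rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

print(run_tm("0110", flip_rules))  # prints 1001
```

Nothing beyond the (state, symbol) lookup is needed, which is the point of the paragraph above.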





It seems very
narrow


It is not. By results in computer science, we know that the simplest
things can get awfully bizarre, unpredictable, deep and sophisticated.
We can only scratch the surface, and provably so, assuming comp.





and special case begging.


You are the one supposed to motivate us toward a non-comp theory.





There is no reference at all to any idea of
real in the sense of physically real, which is something never
defined. David Deutsch has introduced a physical version of Church's
thesis, but this has no bearing at all on Church's thesis. Actually I
do think that Church's thesis makes Deutsch's thesis false, but I am
not sure (yet I am sure that Church's thesis + yes doctor leads to the
existence of random oracles and local violations of Church's thesis by
some physical phenomena, akin to iterated self-multiplication).



So if there are machine aspects that are not Turing emulable, why
aren't they primitive?


Because we recover them in the epistemologies, or at the meta-level,
when we listen to the average LUMs in the tiny UD, or sigma_1
arithmetical truth. They are either definable or derivable.
From inside it is bigger than the Everett multiverse (and that might be
a real problem for comp: the white rabbit problem, which is equivalent
to the problem of justifying the stability and sharability of the
physical appearances from numbers and addition+multiplication).


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 1, 3:07 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 01 Aug 2011, at 01:12, Craig Weinberg wrote:

  I don't see why it has to be infinite and I don't see what's wrong
  with non Turing.

 We will come back to this. Normally sane04 explains it. I have to go
 now,

 Meanwhile you might think about how to explain what you mean by
 sensorimotive, without any poetry, so that anybody can
 understand clearly what you mean.

Unfortunately sensorimotive is by definition poetic in part, because
it is the ontological complement to electromagnetism, which is non-
poetic, literal, and mechanical. Electromagnetism is nothing more than
patterns observed in the behavior of matter. The experience of those
behaviors is sensorimotive. As an electric circuit is a contiguous
material loop through which 'current' flows between positive and
negative poles, the experience of that circuit can be modeled as a
feeling or sense of disequilibrium which motivates an intention to
complete the circuit - which it will seek to do in whatever way it can.
Of course that's just a theory, we can't know what it's like to be a
single electric circuit like that, but we can know that our
experiences are conducted through our nervous system, and that nervous
system can be understood to have sensory and motor functions, and that
those functions as experiential input and output are ontological
conjugates. Together, the sensorimotive function on the scale of an
organism can be called perception or sentience. We make sense of
ourselves and our environment and we are motivated by that sense to
try to complete the sensorimotive circuit which is presented.

The difference between something like semiconductors or copper wire
circuits from neurological circuits is more than just complexity.
Complexity is necessary but not sufficient for bringing about
consciousness, feeling, and understanding. Why this is boils down to
the same reason why we would want to use semiconductors instead of
neurons in the first place. There are different physical
(electromagnetic) characteristics which make wires and chips much
easier to work with. If that were not the case, there would be no
debate because we would 'simply' use cheap, sugar powered brain chips
instead. There are things that cells do that a chip can't easily do,
and vice versa. Higher forms of consciousness are among the former.
The problem is that if we a priori define the universe as computation,
we disqualify the other form of escalation: signification. Complexity
alone is not significance. A cell is more than molecules, and I
propose that the reason that molecules keep organizing themselves into
cells is that they get something out of it. I don't know if the shared
experience of being a cell is vicarious or direct, but like any kind
of human belonging, there is a motive there and a sense.

From a mechanical perspective, sense is the many to one input while
motive is the one to many output, but it's the sense of the
experiential content which is input and output as intention, not just
an encoded exterior 'signal'. Indeed our modern media technology
demonstrates how the sense that we make as human beings can easily be
encoded in many different signal translation architectures. It's not
the form of the signal that matters from the sensorimotive
perspective, it's what sense the receiver can make of the signal. What
I think comp does is imagine that signal form must equate with signal
content, particularly given the success of miniaturization in
processing enormously complex signals. I think this goes along with
the conception of electromagnetism as disembodied forces and fields,
quantum mechanical probability waves, etc as the overreaching of
abstraction to compensate for the disqualification of sensorimotive
phenomena because it doesn't mix well with existing theoretical
approaches and Enlightenment era traditions.

 I remind you that honest scientists admit that they do not understand,
 in the scientific way, the nature of matter or the nature of mind,
 so it will not help to allude to this. It might make sense to allude
 to some property of mind and/or matter whose intuition we might
 share with you.

What's wrong with understanding the nature of mind and matter? I admit
to not knowing whether or not my understanding will be contradicted by
some greater sense making effort, but I don't think that there is
inherently anything less presumptuous about focusing on granular
details of these phenomena rather than sketching out the big picture.

Craig
http://s33light.org




Re: bruno list

2011-08-02 Thread Bruno Marchal


On 02 Aug 2011, at 17:57, Craig Weinberg wrote:


On Aug 1, 3:02 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 01 Aug 2011, at 01:12, Craig Weinberg wrote:


Oriental standard of epistemology again. Wisdom, not knowledge.


That is an argument from authority. Like universal arguments, they are
also not valid.


By what authority are they always non valid?


Because genuine understanding is a personal affair. Authoritative
assertions make sense in the army and in many local life-struggle
situations. Science tries its best to avoid them. Good religion also,
imo.





I'm not saying they are
valid, but when we examine the phenomenon of authority itself -


But then we change the subject. We can study authority and 1p in the  
usual 3p-theories.





the
initiation of teleological orientation, aka subjectivity, we may not
necessarily be able to automatically disqualify these kinds of
arguments. In the subjective realm, the cogito presents a legitimate
argument as a starting point for understanding the phenomenon. 'Je
pense donc je suis' reveals a phenomenology


So we can agree on some principle, like: consciousness is known to be
true, yet is not definable, nor provable, and might be the only thing
of that kind, etc. Then we continue to reason, in the 3p way, on such
a 1p notion.





of INsistence in
contradistinction to its existential set correlate, which relies upon
the ability to doubt all authoritative insistence. That's why it's the
hard problem of consciousness, because you have to learn it the hard
way, through first hand, 1p experience.


That is a mix of jargon + a (good) pun.





It
doesn't make sense that you can make fire out of numbers.


That is a statement without justification, which sums up your
non-comp assumption. It does not give a motive for believing that you
are correct.


Ok, true, it is not justified to say that it doesn't make sense, but
I'm justified in saying that it doesn't make sense to me.


We are not really interested in what makes sense to you if you cannot
convey that sense.

We try to agree on things and reason from them.








Also, it is misleading, because trivially you cannot make fire out of
numbers; but, assuming comp, arithmetical relations can make numbers
believe in relative bodies and fire, and even get burned, with all
the feelings you might imagine.


I understand perfectly that the effects of fire and body can be
emulated within a virtual context, but to say that there is no
relevant distinction between that context and the universe in which we
participate naturally is just as unjustified as my assertion that it
makes no sense.


First, there is no proof that there is an *ontologically* primitive
physical universe. Second, I referred you to a paper which argues that
the notion of a primary universe does not make sense in comp, although
an approximation of it might make sense; but that remains to be shown.







If the simulation cannot cause things to burn outside
of its virtual context, then there is no reason to assume that it can
cause consciousness which can be related outside of its context either.


There is a reason: the search for simplicity in the basic principles,
the avoidance of special infinities (which you have to learn computer
science and diagonalization to be able to build), the avoidance
of assuming what needs to be explained, etc.






It's not the numbers that believe in relative body and fire, it's just
us believing that numbers can believe something.


Numbers are simpler than we are. We try to explain ourselves by
numbers and/or machines. If you have electromagnetic waves, or any
waves, in your theory, you are assuming numbers (implicitly).





We can believe in a
CGI generated cartoon world to an extent, but I have no reason to
imagine that the cartoon world exists to itself. That's silly, right?


It is not logically silly. You can decide to cross the ocean on a  
sieve. But you are taking the risk of not going very far.


If comp appears to be inconsistent, or if the comp-physics appears to
be disproved by nature, we will have a hint about those special
infinities that you need, in a relevant way. If you start from
non-comp, you put what seem to be ad hoc difficulties onto the
problem. But you can try, of course. And we will demolish your invalid
arguments against comp, as long as you continue to use them (as long
as we are patient and not too busy!)


Bruno
http://iridia.ulb.ac.be/~marchal/






Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 2, 8:59 am, Stathis Papaioannou stath...@gmail.com wrote:
 On Tue, Aug 2, 2011 at 11:37 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Aug 1, 8:07 pm, Stathis Papaioannou stath...@gmail.com wrote:

  1. You agree that it is possible to make something that behaves as if
  it's conscious but isn't conscious.

  No. I've been trying to tell you that there is no such thing as
  behaving as if something is conscious. It doesn't mean anything
  because consciousness isn't a behavior; it's a sensorimotive
  experience which sometimes drives behaviors.

 Behaviour is what can be observed. Consciousness cannot be observed.
 The question is, can something behave like a human without being
 conscious?

Does a cadaver behave like a human? If I string it up like a
marionette? If the puppeteer is very good? What is the meaning of
these questions when it has nothing to do with whether the thing feels
like a human?

  If you accept that, then it follows that whether or not someone is
  convinced as to the consciousness of something outside of themselves
  is based entirely upon them. Some people may not even be able to
  accept that certain people are conscious... they used to think that
  infants weren't conscious. In my theory I get into this area a lot and
  have terms such as Perceptual Relativity Inertial Frame (PRIF) to help
  illustrate how perception might be better understood (http://
  s33light.org/post/8357833908).

  How consciousness is inferred is a special case of PR Inertia which I
  think is based on isomorphism. In the most primitive case, the more
  something resembles what you are, in physical scale, material
  composition, appearance, etc, the more likely you are to identify
  something as being conscious. The more time you have to observe and
  relate to the object, the more your PRIF accumulates sensory details
  which augment your sense-making of the thing,  and context,
  familiarity, interaction, and expectations grow to overshadow the
  primitive detection criteria. You learn that a video Skype of someone
  is a way of seeing and talking to a person and not a hallucination or
  talking demon in your monitor.

  So if we build something that behaves like Joe Lunchbox, we might be
  able to fool strangers who don't interact with him, and an improved
  version might be able to fool strangers with limited interaction but
  not acquaintances, the next version might fool everyone for hours of
  casual conversation except Mrs. Lunchbox cannot be fooled at all, etc.
  There is not necessarily a possible substitution level which will
  satisfy all possible observers and interactors, pets, doctors, etc and
  there is not necessarily a substitution level which will satisfy any
  particular observer indefinitely. Some observers may just think that
  Joe is not feeling well. If the observers were told that one person in
  a lineup was an android, they might be more likely to identify Joe as
  the one.

 The field of computational neuroscience involves modelling the
 behaviour of neurons. Even philosophers such as John Searle, who
 doesn't believe that a computer model of a brain can be conscious, at
 least allow that a computer model can accurately predict the behaviour
 of a brain.
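[A concrete sense of what "modelling the behaviour of neurons" means here: computational neuroscience typically starts from something like a leaky integrate-and-fire cell. A minimal sketch in Python follows; all parameter values are illustrative defaults, not taken from any post in this thread or from fitted data.]

```python
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    input_current: external drive per time step (arbitrary units).
    Returns the list of spike times in ms.
    """
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        # Membrane voltage decays toward rest and is pushed by the input.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_thresh:
            # Threshold crossing: record a spike and reset the membrane.
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant suprathreshold drive for 100 ms yields regular firing;
# zero drive yields no spikes.
spike_times = simulate_lif([2.0] * 1000)
```

[Note that such a model predicts firing behaviour without saying anything about experience, which is exactly the gap being argued over.]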

Because he doesn't know about the essential-existential relation. Can
a computer model of your brain accurately predict what is going to
happen to you tomorrow? Next week? If I pull a name of a random
country out of a hat today and put you on a plane to that country,
will the computer model already have predicted what you will see and
say during your trip? That is what a computer model would have to do
to predict the 'behavior of a brain' - that means predicting signals
which correlate to the images processed in the visual regions of the
brain. How can you predict that without knowing what country you will
be going to next week?

This is the limitation of a-signifying modeling. You can only do so
much comparing the shapes of words and the order of the letters
without really understanding what the words mean. Knowing the letters
and order is important, but it is not sufficient for understanding
either what a human or their brain experiences. It's the meaning that
is essential. This should be especially evident after the various
derivative-driven market crashes. The market can't be predicted
indefinitely through statistical analysis alone, because the only
thing the statistics represent is driven by changing human conditions
and desires. Still people will beat the dead horse of quantitative
invincibility.

 Searle points out that a model of a storm may predict its
 behaviour accurately, but it won't actually be wet: that would require
 a real storm.

We may be at the limit of practical meteorological modeling. Not only
will the virtual storm not be wet, it won't even necessarily behave
like a real storm when it really needs to. Reality is a stinker. It
doesn't like to be pinned down for long.

By analogy, a 

Re: bruno list

2011-08-02 Thread Stephen P. King

Hi,

There is a difference between intractability and non-computability. 
See Stephen Wolfram's article on this: 
http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html


The point is that there comes a point where the best possible model or 
computational simulation of a system is the system itself. The fact that 
it is impossible to create a model of a weather system that can predict 
*all* of its future behavior does not amount to a proof that one cannot 
create an approximately accurate model of a weather system. One has to 
trade off accuracy for feasibility. Arbitrarily accurate models of 
systems require a quantity of computational resources to run that 
increases exponentially with the number of variables of the system.
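[The exponential blow-up described above can be made concrete with a toy grid model: discretize each of n variables into k levels and count the states the model must represent. The numbers are illustrative only.]

```python
def state_space_size(n_variables, levels_per_variable):
    """States a grid-based model must track: k**n, exponential in n."""
    return levels_per_variable ** n_variables

# Doubling the per-variable resolution of a 10-variable model
# multiplies the state space by 2**10 = 1024: extra accuracy is
# bought at an exponential price in feasibility.
coarse = state_space_size(10, 10)
fine = state_space_size(10, 20)
ratio = fine // coarse
```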



Onward!

Stephen

On 8/2/2011 1:35 PM, Craig Weinberg wrote:

On Aug 2, 8:59 am, Stathis Papaioannoustath...@gmail.com  wrote:

On Tue, Aug 2, 2011 at 11:37 AM, Craig Weinbergwhatsons...@gmail.com  wrote:

On Aug 1, 8:07 pm, Stathis Papaioannoustath...@gmail.com  wrote:

1. You agree that is possible to make something that behaves as if
it's conscious but isn't conscious.

N. I've been trying to tell you that there is no such thing as
behaving as if something is conscious. It doesn't mean anything
because consciousness isn't a behavior, it's a sensorimotive
experience which sometimes drives behaviors.

Behaviour is what can be observed. Consciousness cannot be observed.
The question is, can something behave like a human without being
conscious?

Does a cadaver behave like a human? If I string it up like a
marionette? If the puppeteer is very good? What is the meaning of
these questions when it has nothing to do with whether the thing feels
like a human?


If you accept that, then it follows that whether or not someone is
convinced as to the consciousness of something outside of themselves
is based entirely upon them. Some people may not even be able to
accept that certain people are conscious... they used to think that
infants weren't conscious. In my theory I get into this area a lot and
have terms such as Perceptual Relativity Inertial Frame (PRIF) to help
illustrate how perception might be better understood (http://
s33light.org/post/8357833908).
How consciousness is inferred is a special case of PR Inertia which I
think is based on isomorphism. In the most primitive case, the more
something resembles what you are, in physical scale, material
composition, appearance, etc, the more likely you are to identify
something as being conscious. The more time you have to observe and
relate to the object, the more your PRIF accumulates sensory details
which augment your sense-making of the thing,  and context,
familiarity, interaction, and expectations grow to overshadow the
primitive detection criteria. You learn that a video Skype of someone
is a way of seeing and talking to a person and not a hallucination or
talking demon in your monitor.
So if we build something that behaves like Joe Lunchbox, we might be
able to fool strangers who don't interact with him, and an improved
version might be able to fool strangers with limited interaction but
not acquaintances, the next version might fool everyone for hours of
casual conversation except Mrs. Lunchbox cannot be fooled at all, etc.
There is not necessarily a possible substitution level which will
satisfy all possible observers and interactors, pets, doctors, etc and
there is not necessarily a substitution level which will satisfy any
particular observer indefinitely. Some observers may just think that
Joe is not feeling well. If the observers were told that one person in
a lineup was an android, they might be more likely to identify Joe as
the one.

The field of computational neuroscience involves modelling the
behaviour of neurons. Even philosophers such as John Searle, who
doesn't believe that a computer model of a brain can be conscious, at
least allow that a computer model can accurately predict the behaviour
of a brain.

Because he doesn't know about the essential-existential relation. Can
a computer model of your brain accurately predict what is going to
happen to you tomorrow? Next week? If I pull a name of a random
country out of a hat today and put you on a plane to that country,
will the computer model already have predicted what you will see and
say during your trip? That is what a computer model would have to do
to predict the 'behavior of a brain' - that means predicting signals
which correlate to the images processed in the visual regions of the
brain. How can you predict that without knowing what country you will
be going to next week?

This is the limitation of a-signifying modeling. You can only do so
much comparing the shapes of words and the order of the letters
without really understanding what the words mean. Knowing the letters
and order is important, but it is not sufficient for understanding
either what a human or their brain experiences. It's the meaning that
is 

Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 5:59 AM, Stathis Papaioannou wrote:

So you *are* conceding the first point, that it is possible to make
something that behaves as if it's conscious without actually being
conscious? We don't even need to talk about brain physics: for the
purposes of the philosophical discussion it can be a magical device
created by God. If you don't concede this then you are essentially
agreeing with functionalism: that if something behaves as if it's
conscious then it is necessarily conscious.
   


I agree with you vis-à-vis Craig.  But I think functionalism may well 
allow different kinds of consciousness (and I'm not clear that Bruno's 
version does).  That we hear an inner narration of our thoughts is 
probably an evolutionary accident, arising because it was efficient to 
utilize some of the same brain structure used for hearing when thinking 
in language (cf. Julian Jaynes).  If we were making an artificial 
intelligent being we could choose to have separate hardware and software 
perform the perception and the cogitation.  Would that being be 
conscious?  I'd say so.  It could certainly act consciously.  Would it 
be conscious in the same way we are?  No.  Similarly with vision and 
visual imagination.


Brent




Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 11:06 AM, Stephen P. King wrote:

Hi,

There is a difference between intractability and non-computable. 
See Stephen Wolfram's article on this: 
http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html 



The point is that there is a point where the best possible model 
or computational simulation of a system is the system itself. The fact 
that it is impossible to create a model of a weather system that can 
predict *all* of its future behavior does not equal to a proof that 
one cannot create an approximately accurate  model of a weather 
system. One has to trade off accuracy for feasibility. Arbitrarily 
accurate models of systems require a quantity of computational 
resources to run that increases exponentially with the number of 
variables of the system. 


But only up to the point where the number is the same as the number in 
the system being modeled.


Brent




Re: a difference between intractability and non-computable?

2011-08-02 Thread Stephen P. King

On 8/2/2011 2:38 PM, meekerdb wrote:

On 8/2/2011 11:06 AM, Stephen P. King wrote:

Hi,

There is a difference between intractability and non-computable. 
See Stephen Wolfram's article on this: 
http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html 



The point is that there is a point where the best possible model 
or computational simulation of a system is the system itself. The 
fact that it is impossible to create a model of a weather system that 
can predict *all* of its future behavior does not equal to a proof 
that one cannot create an approximately accurate  model of a weather 
system. One has to trade off accuracy for feasibility. Arbitrarily 
accurate models of systems require a quantity of computational 
resources to run that increases exponentially with the number of 
variables of the system. 


But only up to the point where the number is the same as the number in 
the system being modeled.


Brent
--

Hi Brent,

There is something 'off' in what I wrote and I think that you see 
it. Please elaborate.


Onward!

Stephen




Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 2, 2:06 pm, Stephen P. King stephe...@charter.net wrote:

      The point is that there is a point where the best possible model or
 computational simulation of a system is the system itself. The fact that
 it is impossible to create a model of a weather system that can predict
 *all* of its future behavior does not equal to a proof that one cannot
 create an approximately accurate  model of a weather system. One has to
 trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble of re-
inventing life just to get a friendlier sounding voicemail.

Craig




Re: a difference between intractability and non-computable?

2011-08-02 Thread meekerdb

On 8/2/2011 11:49 AM, Stephen P. King wrote:

On 8/2/2011 2:38 PM, meekerdb wrote:

On 8/2/2011 11:06 AM, Stephen P. King wrote:

Hi,

There is a difference between intractability and non-computable. 
See Stephen Wolfram's article on this: 
http://www.stephenwolfram.com/publications/articles/physics/85-undecidability/2/text.html 



The point is that there is a point where the best possible model 
or computational simulation of a system is the system itself. The 
fact that it is impossible to create a model of a weather system 
that can predict *all* of its future behavior does not equal to a 
proof that one cannot create an approximately accurate  model of a 
weather system. One has to trade off accuracy for feasibility. 
Arbitrarily accurate models of systems require a quantity of 
computational resources to run that increases exponentially with the 
number of variables of the system. 


But only up to the point where the number is the same as the number 
in the system being modeled.


Brent
--

Hi Brent,

There is something 'off' in what I wrote and I think that you see 
it. Please elaborate.


Onward!


Not 'off', just an aside about approximating a system.

Brent




Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

   

  The point is that there is a point where the best possible model or
computational simulation of a system is the system itself. The fact that
it is impossible to create a model of a weather system that can predict
*all* of its future behavior does not equal to a proof that one cannot
create an approximately accurate  model of a weather system. One has to
trade off accuracy for feasibility.
 

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble of re-
inventing life just to get a friendlier sounding voicemail.

Craig

   
So now you agree that a simulation of a brain at the molecular level 
would suffice to produce consciousness (although of course it would be 
much more efficient to actually use molecules instead of computationally 
simulating them).   This would be a good reason to say 'no' to the 
doctor, since even though you could simulate the molecules and their 
interactions, quantum randomness would prevent you from controlling 
their interactions with the molecules in the rest of your brain.  
Bruno's argument would still go through, but the 'doctor' might have to 
replace not only your brain but a big chunk of the universe with which 
it interacts.  However, most people who have read Tegmark's paper 
understand that the brain must be essentially classical as a computer 
and so a simulation, even one of molecules, could be quasi-classical, 
i.e. local.


Brent




Re: Simulated Brains

2011-08-02 Thread Stephen P. King

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best possible 
model or
computational simulation of a system is the system itself. The fact 
that

it is impossible to create a model of a weather system that can predict
*all* of its future behavior does not equal to a proof that one cannot
create an approximately accurate  model of a weather system. One has to
trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble of re-
inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular level 
would suffice to produce consciousness (although of course it would be 
much more efficient to actually use molecules instead of 
computationally simulating them).   This would be a good reason to say 
'no' to the doctor, since even though you could simulate the molecules 
and their interactions, quantum randomness would prevent you from 
controlling their interactions with the molecules in the rest of your 
brain.  Bruno's argument would still go through, but the 'doctor' 
might have to replace not only your brain but a big chunk of the 
universe with which it interacts.  However, most people who have read 
Tegmark's paper understand that the brain must be essentially 
classical as a computer and so a simulation, even one of molecules, 
could be quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via PayPal) that he is dead wrong *and* 
that proof that the brain actively involves quantum phenomena 
discounted by Tegmark will emerge within two years. We already have 
evidence that the photosynthesis process in plants involves quantum 
coherence, and there is an experiment being designed now to test for 
coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/
http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/

As to your post here: Craig's point is that the simulated brain, 
even if simulated down to the molecular level, will only be a simulation 
and will only 'think simulated thoughts'. If said simulated brain has a 
consciousness, it will be its own, not that of some other brain. A 
consciousness can no more be copied than the state of a QM system.


Onward!

Stephen




Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best possible 
model or
computational simulation of a system is the system itself. The fact 
that
it is impossible to create a model of a weather system that can 
predict

*all* of its future behavior does not equal to a proof that one cannot
create an approximately accurate  model of a weather system. One 
has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble of re-
inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular level 
would suffice to produce consciousness (although of course it would 
be much more efficient to actually use molecules instead of 
computationally simulating them).   This would be a good reason to 
say 'no' to the doctor, since even though you could simulate the 
molecules and their interactions, quantum randomness would prevent 
you from controlling their interactions with the molecules in the 
rest of your brain.  Bruno's argument would still go through, but the 
'doctor' might have to replace not only your brain but a big chunk of 
the universe with which it interacts.  However, most people who have 
read Tegmark's paper understand that the brain must be essentially 
classical as a computer and so a simulation, even one of molecules, 
could be quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead wrong 
*and* that the proof that the brain actively  involves quantum 
phenomena that are discounted by Tegmark will emerge within two years. 
We already have evidence that the photosynthesis process in plants 
involves quantum coherence, there is an experiment being designed now 
to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves quantum 
processes and some of these involve coherence for short times.  But 
Tegmark argues that the times are too short to be relevant to neural 
signaling and information processing.  There's an implicit assumption 
that neural activity is responsible for thought - that the 'doctor' 
could substitute at the neuron level.  I think this is right and it is 
supported by evolutionary considerations.  We wouldn't want an 
intelligent Mars Rover to make decisions based on quantum randomness 
except in rare circumstances (like Buridan's ass), and it wouldn't be 
evolutionarily advantageous for an organism on Earth.  I'm glad to 
accept your bet, except that I'm not sure how to resolve it.  I don't 
think finding something like the energy transfer involving coherence in 
photosynthesis or photon detection is relevant.
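[For anyone without the paper at hand, the shape of Tegmark's argument is a timescale comparison. The figures below are the order-of-magnitude values commonly quoted from his 2000 paper (decoherence in roughly 1e-20 to 1e-13 s) set against a typical neural signalling timescale of about 1e-3 s; treat them as illustrative placeholders, not data from this thread.]

```python
import math

# Most generous (slowest) of Tegmark's estimated decoherence times, in
# seconds, versus a typical neural signalling timescale. Both figures
# are order-of-magnitude values as commonly quoted, not measurements.
decoherence_s = 1e-13
neural_signalling_s = 1e-3

# Orders of magnitude separating the two timescales: superpositions
# would decay roughly ten orders of magnitude faster than neurons
# signal, which is why Tegmark concludes the brain computes classically.
gap = math.log10(neural_signalling_s / decoherence_s)
```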




As to your post here. Craig's point is that the simulated brain, 
even if simulated down to the molecular level, will only be a 
simulation and 'think simulate thoughts'. If said simulated brain has 
a consiousness it will be its own, not that some other brain. 


Craig's position seems to be more a blur than a point.  He has said that 
only biological neurons can instantiate consciousness and only a 
conscious being can act like a conscious being.  That would imply that a 
being with an artificial, e.g. silicon chip based, brain cannot act like 
a conscious being.



A consciousness can no more be copied than the state of a QM system.


That's the point in question.  If Tegmark is right, it can.

Brent



Onward!

Stephen






Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 2, 4:04 pm, meekerdb meeke...@verizon.net wrote:
 On 8/2/2011 12:43 PM, Craig Weinberg wrote:

 So now you agree that a simulation of a brain at the molecular level
 would suffice to produce consciousness (although of course it would be
 much more efficient to actually use molecules instead of computationally
 simulating them).   This would be a good reason to say 'no' to the
 doctor, since even though you could simulate the molecules and their
 interactions, quantum randomness would prevent you from controlling
 their interactions with the molecules in the rest of your brain.  
 Bruno's argument would still go through, but the 'doctor' might have to
 replace not only your brain but a big chunk of the universe with which
 it interacts.  However, most people who have read Tegmark's paper
 understand that the brain must be essentially classical as a computer
 and so a simulation, even one of molecules, could be quasi-classical,
 i.e. local.

I'm saying that the closer you get to simulating everything that a
human brain actually is, rather than what we assume is its
'function', the closer you are going to get to a human equivalent
consciousness. You might be able to cut some corners to achieve
certain attributes but you might also lose other attributes which may
not even be known yet. When I'm talking about getting down to the
cellular, genetic, or molecular level though, I'm talking about
replacing them with alternate physical materials designed by
computers, not abstract machine calculations themselves running on
silicon or some other platform.

Craig




Re: Simulated Brains

2011-08-02 Thread Craig Weinberg
On Aug 2, 5:08 pm, Stephen P. King stephe...@charter.net wrote:
 On 8/2/2011 4:04 PM, meekerdb wrote:

      As to your post here. Craig's point is that the simulated brain,
 even if simulated down to the molecular level, will only be a simulation
 and 'think simulate thoughts'. If said simulated brain has a
 consiousness it will be its own, not that some other brain. A
 consciousness can no more be copied than the state of a QM system.

Absolutely, that's true too. Even if you do make a brain out of
germanium and silicon DNA based neurons, the result is not going to be
any more identical to the template brain than an identical twin is.

Craig




Re: Simulated Brains

2011-08-02 Thread Stephen P. King

On 8/2/2011 5:26 PM, meekerdb wrote:

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best possible 
model or
computational simulation of a system is the system itself. The 
fact that
it is impossible to create a model of a weather system that can 
predict
*all* of its future behavior does not equal to a proof that one 
cannot
create an approximately accurate  model of a weather system. One 
has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble of 
re-

inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular level 
would suffice to produce consciousness (although of course it would 
be much more efficient to actually use molecules instead of 
computationally simulating them).   This would be a good reason to 
say 'no' to the doctor, since even though you could simulate the 
molecules and their interactions, quantum randomness would prevent 
you from controlling their interactions with the molecules in the 
rest of your brain.  Bruno's argument would still go through, but 
the 'doctor' might have to replace not only your brain but a big 
chunk of the universe with which it interacts.  However, most people 
who have read Tegmark's paper understand that the brain must be 
essentially classical as a computer and so a simulation, even one of 
molecules, could be quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead wrong 
*and* that the proof that the brain actively  involves quantum 
phenomena that are discounted by Tegmark will emerge within two 
years. We already have evidence that the photosynthesis process in 
plants involves quantum coherence, there is an experiment being 
designed now to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves 
quantum processes and some of these involve coherence for short 
times.  But Tegmark argues that the times are too short to be relevant 
to neural signaling and information processing.  There's an implicit 
assumption that neural activity is responsible for thought - that the 
'doctor' could substitute at the neuron level.  I think this is right 
and it is supported by evolutionary considerations.  We wouldn't want 
an intelligent Mars Rover to make decisions based on quantum 
randomness except in rare circumstance (like Buridan's ass) and it 
wouldn't be evolutionarily advantageous for an organism on Earth.  I'm 
glad to accept your bet; except that I'm not sure how to resolve it.  
I don't think finding something like the energy transfer involving 
coherence in photosynthesis or photon detection is relevant.




No, my thought is that quantum coherence accounts for, among other 
things, the way that sense data is continuously integrated into a 
whole.  This leads to a situation that Daniel C. Dennett calls the 
Cartesian Theater. Dennett's proof that it cannot exist because it 
generates an infinite regress of homunculi inside homunculi is flawed 
because such infinities can only occur if each of the homunculi has 
access to sufficient computational resources to generate the rest of 
them. When we understand that computations require the utilization of 
resources and do not occur 'for free' we see that the entire case 
against situations that imply the possibility of infinite regress fails.
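As a toy illustration of that resource argument (a sketch with made-up numbers, not a claim about actual brains): if each nested homunculus can only run on a fraction of its host's resources, and no simulation runs below some minimum budget, the supposedly infinite regress has finite depth.

```python
# Toy model of the resource argument (illustrative numbers): each
# nested homunculus is simulated with only a fraction of its host's
# resources, and no simulation runs below a minimum budget, so the
# regress bottoms out at a finite depth instead of continuing forever.
def regress_depth(budget, fraction=0.5, minimum=1.0):
    depth = 0
    while budget * fraction >= minimum:
        budget *= fraction  # inner level gets only a share of the host's budget
        depth += 1
    return depth

# With 1024 units and each level getting half, only 10 nested
# observers are possible before the budget runs out.
print(regress_depth(1024.0))  # -> 10
```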
Quantum phenomena are NOT all about randomness. Frankly I would 
really like to understand how that rubbish of an idea is still held by 
seriously thinking people! There is no randomness in QM; there is only 
the physical inability to predict exactly when some quantum event will 
occur in advance. It is because QM systems cannot be copied that it is 
impossible to predict their behavior in advance, not because of some 
inherent randomness! Take the infamous radioactive atom in the 
Schrödinger Cat box. Is its decay strictly a random phenomenon? Not 
really! QM says not one word about randomness; it only allows us to 
calculate the half-life of said atom, and that calculation is as good as 
is possible given 

Re: Simulated Brains

2011-08-02 Thread Craig Weinberg
On Aug 2, 5:26 pm, meekerdb meeke...@verizon.net wrote:

 Craig's position seems to be more a blur than a point.  He has said that
 only biological neurons can instantiate consciousness

Consciousness is a qualitative estimation, all but useless for
discussing the distinction between biological and non-biological
interiority. It's an obsolete term as far as scientific examination
goes. I say that human equivalent consciousness can probably only be
instantiated by some form of biological neuron. Molecular level
'consciousness' is what is instantiated when you turn your computer
on. The reason that your computer's awareness will not be able to be
improved until it is a human equivalent is the same reason why only
one class of molecules makes cells and one class of cells become
neurons. If something very different could just as easily suffice, I
think that it would be common to find alternate DNA based species, non-
cellular animals, and non-neurological brains.

 and only a
 conscious being can act like a conscious being.

'Conscious' to me just means awareness of awareness, and it has no
particular symptom that can be recognized through any category of
acts.

 That would imply that a
 being with an artificial, e.g. silicon chip based, brain cannot act like
 a conscious being.

I've been repeating this over and over but nobody seems to recognize
it. Whether or not something is deemed to be 'acting like a conscious
being' just means that something resembles yourself in its physical
appearance and behavior enough that you infer it to have an interior
environment similar to your own. It has little to do with whether or
not arithmetic can be made to feel or believe something. That is what
I am saying is a category error.

  A consciousness can no more be copied than the state of a QM system.

 That's the point in question.  If Tegmark is right, it can.


Nah, Tegmark is wrong. Neurological signalling is just the tip of the
iceberg. There is no such physical phenomenon as a 'signal'.
Anything can be a signal if it is interpretable as such. If brains
could be generated independently of the cells and molecules they are
made of, you would probably find some evidence of that in nature. A
complex mineral that discusses semiotics or a planet that has figured
out how to duplicate itself.

Craig




Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 2:33 PM, Craig Weinberg wrote:

On Aug 2, 4:04 pm, meekerdbmeeke...@verizon.net  wrote:
   

On 8/2/2011 12:43 PM, Craig Weinberg wrote:
 
   

So now you agree that a simulation of a brain at the molecular level
would suffice to produce consciousness (although of course it would be
much more efficient to actually use molecules instead of computationally
simulating them).   This would be a good reason to say 'no' to the
doctor, since even though you could simulate the molecules and their
interactions, quantum randomness would prevent you from controlling
their interactions with the molecules in the rest of your brain.  
Bruno's argument would still go through, but the 'doctor' might have to

replace not only your brain but a big chunk of the universe with which
it interacts.  However, most people who have read Tegmark's paper
understand that the brain must be essentially classical as a computer
and so a simulation, even one of molecules, could be quasi-classical,
i.e. local.
 

I'm saying that the closer you get to simulating everything that a
human brain actually is, rather than what we assume is its
'function', the closer you are going to get to a human equivalent
consciousness.


I understand what you're saying.  I just don't see any reason to believe it.

Brent


You might be able to cut some corners to achieve
certain attributes but you might also lose other attributes which may
not even be known yet. When I'm talking about getting down to the
cellular, genetic, or molecular level though, I'm talking about
replacing them with alternate physical materials designed by
computers, not abstract machine calculations themselves running on
silicon or some other platform.

Craig

   





Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 2:44 PM, Stephen P. King wrote:

On 8/2/2011 5:26 PM, meekerdb wrote:

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best possible 
model or
computational simulation of a system is the system itself. The 
fact that
it is impossible to create a model of a weather system that can 
predict
*all* of its future behavior does not equal to a proof that one 
cannot
create an approximately accurate  model of a weather system. One 
has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble 
of re-

inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular 
level would suffice to produce consciousness (although of course it 
would be much more efficient to actually use molecules instead of 
computationally simulating them).   This would be a good reason to 
say 'no' to the doctor, since even though you could simulate the 
molecules and their interactions, quantum randomness would prevent 
you from controlling their interactions with the molecules in the 
rest of your brain.  Bruno's argument would still go through, but 
the 'doctor' might have to replace not only your brain but a big 
chunk of the universe with which it interacts.  However, most 
people who have read Tegmark's paper understand that the brain must 
be essentially classical as a computer and so a simulation, even 
one of molecules, could be quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead wrong 
*and* that the proof that the brain actively  involves quantum 
phenomena that are discounted by Tegmark will emerge within two 
years. We already have evidence that the photosynthesis process in 
plants involves quantum coherence, there is an experiment being 
designed now to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves 
quantum processes and some of these involve coherence for short 
times.  But Tegmark argues that the times are too short to be 
relevant to neural signaling and information processing.  There's an 
implicit assumption that neural activity is responsible for thought - 
that the 'doctor' could substitute at the neuron level.  I think this 
is right and it is supported by evolutionary considerations.  We 
wouldn't want an intelligent Mars Rover to make decisions based on 
quantum randomness except in rare circumstance (like Buridan's ass) 
and it wouldn't be evolutionarily advantageous for an organism on 
Earth.  I'm glad to accept your bet; except that I'm not sure how to 
resolve it.  It don't think finding something like the energy 
transfer involving coherence in photosynthesis or photon detection is 
relevant.




No, my thought is that quantum coherence accounts for, among other 
things, the way that sense data is continuously integrated into a whole.


What integrated whole do you refer to?  Our memory of a life?  How does 
it account for it?


This leads to a situation that Daniel C. Dennett calls the Cartesian 
Theater. Dennett's proof that it cannot exist because it generates 
infinite regress of homunculi inside homunculi is flawed because such 
infinities can only occur if each of the homunculi has access to 
sufficient computational resources to generate the rest of them. When 
we understand that computations require the utilization of resources 
and do not occur 'for free' we see that the entire case against 
situations that imply the possibility of infinite regress fails.


I don't understand that.  Are you agreeing with Dennett that an infinite 
regress cannot occur or are you arguing that the need to pay for 
resources makes them possible?


Quantum phenomena are NOT all about randomness. Frankly I would 
really like to understand how that rubbish of an idea is still held 
by seriously thinking people! There is no randomness in QM; there is 
only the physical inability to predict exactly when some quantum event 
will occur in advance. It is because QM systems cannot be copied that 
it is impossible to predict their behavior in advance, 

Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 2:58 PM, Craig Weinberg wrote:

I've been repeating this over and over but nobody seems to recognize
it. Whether or not something is deemed to be 'acting like a conscious
being' just means that something resembles yourself in it's physical
appearance and behavior enough that you infer it to have an interior
environment similar to your own. It has little to do with whether or
not arithmetic can be made to feel or believe something. That is what
I am saying is a category error.
   


You have to keep repeating it because you also keep repeating that an 
artificial being can't really appear to be conscious (to his wife).  
This implies that there is something more than physics and chemistry behind 
the behavior that we (and his wife) interpret as consciousness; because 
the physics and chemistry can be simulated computationally.  I 
understand that you deny the simulated physics and chemistry would 
instantiate consciousness.  But you don't seem to recognize that this 
implies that the real physics and chemistry couldn't either.


So which is it?  Can there be a philosophical zombie or not?

Brent




Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 2, 5:58 pm, meekerdb meeke...@verizon.net wrote:

 I understand what you're saying.  I just don't see any reason to believe it.

You were summing up my position as including

(although of course it would be
 much more efficient to actually use molecules instead of computationally
 simulating them).

I'm not saying that. I'm saying that you could possibly simulate human
consciousness using different molecules and cells, but not simulating
them computationally. A computational simulation implies that it is
substance independent, which obviously biological life and the
conscious feelings that are associated with it are not. If you do a
computational simulation through a similar material that the brain is
made of, then you have something similar to a brain. The idea of pure
computation independent of some physical medium is not something we
should take for granted. It seems like a completely outrageous fantasy
to me. Why would such a thing be any more plausible than ghosts or
magic?

Craig




Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 3:26 PM, Craig Weinberg wrote:

On Aug 2, 5:58 pm, meekerdbmeeke...@verizon.net  wrote:

   

I understand what you're saying.  I just don't see any reason to believe it.
 

You were summing up my position as including

   

(although of course it would be
much more efficient to actually use molecules instead of computationally
simulating them).
   

I'm not saying that. I'm saying that you could possibly simulate human
consciousness using different molecules and cells, but not simulating
them computationally. A computational simulation implies that it is
substance independent, which obviously biological life and the
conscious feelings that are associated with it are not.


But that is not obvious and saying so isn't an argument.


If you do a
computational simulation through a similar material that the brain is
made of, then you have something similar to a brain. The idea of pure
computation independent of some physical medium is not something we
should take for granted. It seems like a completely outrageous fantasy
to me. Why would such a thing be any more plausible than ghosts or
magic?
   


I don't take it for granted.  But I can imagine building an intelligent 
robot that acts in every way like a person.  And I know that I could 
replace his computer brain for a different one, built with different 
materials and using different physics, that computed the same programs 
without changing its behavior.  Now you deny that this robot is 
conscious because its brain isn't made of proteins and water and neurons 
- but I could replace part of the computer with a computer made of some 
protein and water and some neurons; which according to you would then 
make the robot conscious.  This seems to me to be an unjustified 
inference.  If it acts conscious with the wet brain and it acted the 
same before, with the computer chip brain, then I infer that it was 
probably conscious before.


Do I conclude that it experiences consciousness exactly as I do?  No, I 
think that it might depend on how its programming is implemented, e.g. 
LISP might produce different experience than FORTRAN  or whether there 
are asynchronous hardware modules.  I'm not sure how Bruno's theory 
applies to this since he looks at the problem from a level where all 
computation is equivalent modulo Church-Turing.


Brent




Re: Simulated Brains

2011-08-02 Thread Stephen P. King

On 8/2/2011 6:08 PM, meekerdb wrote:

On 8/2/2011 2:44 PM, Stephen P. King wrote:

On 8/2/2011 5:26 PM, meekerdb wrote:

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best possible 
model or
computational simulation of a system is the system itself. The 
fact that
it is impossible to create a model of a weather system that can 
predict
*all* of its future behavior does not equal to a proof that one 
cannot
create an approximately accurate  model of a weather system. One 
has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems for
the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble 
of re-

inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular 
level would suffice to produce consciousness (although of course 
it would be much more efficient to actually use molecules instead 
of computationally simulating them).   This would be a good reason 
to say 'no' to the doctor, since even though you could simulate 
the molecules and their interactions, quantum randomness would 
prevent you from controlling their interactions with the molecules 
in the rest of your brain.  Bruno's argument would still go 
through, but the 'doctor' might have to replace not only your 
brain but a big chunk of the universe with which it interacts.  
However, most people who have read Tegmark's paper understand that 
the brain must be essentially classical as a computer and so a 
simulation, even one of molecules, could be quasi-classical, i.e. 
local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead wrong 
*and* that the proof that the brain actively  involves quantum 
phenomena that are discounted by Tegmark will emerge within two 
years. We already have evidence that the photosynthesis process in 
plants involves quantum coherence, there is an experiment being 
designed now to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves 
quantum processes and some of these involve coherence for short 
times.  But Tegmark argues that the times are too short to be 
relevant to neural signaling and information processing.  There's an 
implicit assumption that neural activity is responsible for thought 
- that the 'doctor' could substitute at the neuron level.  I think 
this is right and it is supported by evolutionary considerations.  
We wouldn't want an intelligent Mars Rover to make decisions based 
on quantum randomness except in rare circumstance (like Buridan's 
ass) and it wouldn't be evolutionarily advantageous for an organism 
on Earth.  I'm glad to accept your bet; except that I'm not sure how 
to resolve it.  I don't think finding something like the energy 
transfer involving coherence in photosynthesis or photon detection 
is relevant.




No, my thought is that quantum coherence accounts for, among 
other things, the way that sense data is continuously integrated into 
a whole.


What integrated whole do you refer to?  Our memory of a life?  How 
does it account for it?




This is not rocket surgery, come on! Think! Did you ever happen to 
notice that, modulo variations in distance, the sounds you hear and the 
things you see, feel, taste, etc. are all integrated together? How is 
it that, modulo déjà vu and similar synesthesias and dyslexia, the brain 
generates a virtual reality version of the world around you that is 
amazingly free of latency? While there are visual effects that replicate 
aliasing, such as when we see the spokes of a wheel turning 
backwards, the ability of the brain to turn all those signals into a 
single and integrated virtual world is amazing, but more amazing still 
is the fact that there is something in the brain that acts like an 
observer, something that led many in the past to speculate about a 
homunculus...



This leads to a situation that Daniel C. Dennett calls the Cartesian 
Theater. Dennett's proof that it cannot exist because it generates 
infinite regress of homunculi inside homunculi is flawed because such 
infinities can only occur if each of the homunculi has access to 
sufficient 

Re: Simulated Brains

2011-08-02 Thread Jason Resch
On Tue, Aug 2, 2011 at 4:44 PM, Stephen P. King stephe...@charter.netwrote:


No, my thought is that quantum coherence accounts for, among other
 things, the way that sense data is continuously integrated into a whole.
  This leads to a situation that Daniel C. Dennett calls the Cartesian
 Theater. Dennett's proof that it cannot exist because it generates infinite
 regress of homunculi inside homunculi is flawed because such infinities can
 only occur if each of the homunculi has access to sufficient computational
 resources to generate the rest of them. When we understand that computations
 require the utilization of resources and do not occur 'for free' we see that
 the entire case against situations that imply the possibility of infinite
 regress fails.
Quantum phenomena are NOT all about randomness. Frankly I would really
 like to understand how that rubbish of an idea is still held by seriously
 thinking people! There is no randomness in QM; there is only the physical
 inability to predict exactly when some quantum event will occur in advance.
 It is because QM systems cannot be copied that it is impossible to predict
 their behavior in advance, not because of some inherent randomness! Take the
 infamous radioactive atom in the Schrödinger Cat box. Is its decay strictly
 a random phenomenon? Not really! QM says not one word about randomness; it
 only allows us to calculate the half-life of said atom and that calculation
 is as good as is possible given the fact that we cannot generate a
 simulation of that atom and its environment and all of the interactions
 thereof in a way that we can get predictions about its behavior in advance.



What is the distinction between random and unpredictable?



 A consciousness can no more be copied than the state of a QM system.


 That's the point in question.  If Tegmark is right, it can.

 Tegmark is wrong.


Stephen, do you doubt that consciousness can be implemented by a digital
machine or process?

Jason




Re: bruno list

2011-08-02 Thread Craig Weinberg
On Aug 2, 6:51 pm, meekerdb meeke...@verizon.net wrote:

 But that is not obvious and saying so isn't an argument.

You don't have to accept it, but you shouldn't strawman it either.

  If you do a
  computational simulation through a similar material that the brain is
  made of, then you have something similar to a brain. The idea of pure
  computation independent of some physical medium is not something we
  should take for granted. It seems like a completely outrageous fantasy
  to me. Why would such a thing be any more plausible than ghosts or
  magic?

 I don't take it for granted.  But I can imagine building an intelligent
 robot that acts in every way like a person.  And I know that I could
 replace his computer brain for a different one, built with different
 materials and using different physics, that computed the same programs
 without changing its behavior.  Now you deny that this robot is
 conscious because its brain isn't made of proteins and water and neurons
 - but I could replace part of the computer with a computer made of some
 protein and water and some neurons; which according to you would then
 make the robot conscious.  This seems to me to be an unjustified
 inference.  If it acts conscious with the wet brain and it acted the
 same before, with the computer chip brain, then I infer that it was
 probably conscious before.

Why are you equating how something appears to behave with its
capacity to experience human consciousness? Think of the live neurons
as a pilot light in a gas appliance. You may not need to heat your hot
water heater with a bonfire that needs to be maintained with wood, and
if you have a natural gas utility you can substitute a different fuel,
but if that fuel can't ignite, there's not going to be any heat. By
your reasoning, the natural gas could be substituted with carbon
dioxide, since it looks the same, acts like a gas, etc., so you could
infer that it should make the same heat. With the pilot light, you
will at least know whether or not the fuel is viable.

 Do I conclude that it experiences consciousness exactly as I do?  No, I
 think that it might depend on how its programming is implemented, e.g.
 LISP might produce different experience than FORTRAN  or whether there
 are asynchronous hardware modules.  I'm not sure how Bruno's theory
 applies to this since he looks at the problem from a level where all
 computation is equivalent modulo Church-Turing.

I hear what you're saying, and there's no question that the
programming is instrumental both in simulating intelligence or
generating a human level of interior experience artificially. All I'm
saying is that LISP or FORTRAN cannot have an experience by itself. A
silicon chip can and does experience something when it runs a program,
just not what we experience when we use the program. Just as your TV
set experiences something when you watch the news, but what it
experiences is not the news, not a tv program, not colored pixels or
patterns, but electronic level detection. Circuits, voltage,
resistance, capacitance, etc. You put a lot of fancy elaboration on a
circuit, sure, maybe you get some novelty showing up in the
experience, but I think that the level at which molecules cohere as a
living cell is likely to be the same level at which electronic
detection level awareness autopoiesizes into actual sensitivity or
proto feeling. I'm guessing about this of course, but I think it makes
sense, certainly a lot more sense than the idea that 'consciousness'
gradually appears when there are enough IF-THEN statements.

Craig




Re: Simulated Brains

2011-08-02 Thread Stephen P. King

On 8/2/2011 8:20 PM, Jason Resch wrote:



On Tue, Aug 2, 2011 at 4:44 PM, Stephen P. King stephe...@charter.net 
mailto:stephe...@charter.net wrote:



   No, my thought is that quantum coherence accounts for, among
other things, the way that sense data is continuously integrated
into a whole.  This leads to a situation that Daniel C. Dennett
calls the Cartesian Theater. Dennett's proof that it cannot
exist because it generates infinite regress of homunculi inside
homunculi is flawed because such infinities can only occur if each
of the humonculi has access to sufficient computational resources
to generate the rest of them. When we understand that computations
require the utilization of resources and do not occur 'for free'
we see that the entire case against situations that imply the
possibility of infinite regress fails.
   Quantum phenomena are NOT all about randomness. Frankly I would
really like to understand how that rubbish of an idea is still
held by seriously thinking people! There is no randomness in QM;
there is only the physical inability to predict exactly when some
quantum event will occur in advance. It is because QM systems
cannot be copied that it is impossible to predict their
behavior in advance, not because of some inherent randomness! Take
the infamous radioactive atom in the Schrödinger Cat box. Is its
decay strictly a random phenomenon? Not really! QM says not one
word about randomness, it only allows us to calculate the
half-life of said atom and that calculation is as good as is
possible given the fact that we cannot generate a simulation of
that atom and its environment and all of the interactions thereof
in a way that we can get predictions about its behavior in advance.



What is the distinction between random and unpredictable?


Unpredictable means that it cannot be predicted. Randomness is 
uncaused.  A completely deterministic behavior can be unpredictable and 
not random. Consider the behaviour of a non-linear system.
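Stephen's point that deterministic behavior can still be unpredictable can be illustrated with a toy computation. A minimal sketch (the logistic map and the 1e-12 perturbation are my own illustrative choices, not anything proposed in this thread):

```python
import math

# Deterministic yet unpredictable: the logistic map x_{n+1} = r*x*(1-x).
# Nothing here is random, but two starting points differing by 1e-12
# diverge completely within a few dozen steps, so long-range prediction
# fails even with perfect knowledge of the rule.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-12, 60)
print(abs(a[10] - b[10]))   # still tiny: short-term prediction works
print(abs(a[60] - b[60]))   # the initial error has been amplified to O(1)
```

The divergence rate (the Lyapunov exponent) is what separates "unpredictable in practice" from "uncaused": the weather is of the first kind on this view.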




A consciousness can no more be copied than the state of a QM
system.


That's the point in question.  If Tegmark is right, it can.

   Tegmark is wrong.


Stephen, do you doubt that consciousness can be implemented by a 
digital machine or process?


I doubt that consciousness can be implemented in classical machines 
or their logical equivalents. Digital machines maybe, if they involve 
quantum entanglement of a certain kind.


Onward!

Stephen



Jason





Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 4:00 PM, Stephen P. King wrote:

On 8/2/2011 6:08 PM, meekerdb wrote:

On 8/2/2011 2:44 PM, Stephen P. King wrote:

On 8/2/2011 5:26 PM, meekerdb wrote:

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:

On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  wrote:

  The point is that there is a point where the best 
possible model or
computational simulation of a system is the system itself. The 
fact that
it is impossible to create a model of a weather system that can 
predict
*all* of its future behavior does not amount to a proof that one 
cannot
create an approximately accurate  model of a weather system. 
One has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those systems 
for

the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the trouble 
of re-

inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular 
level would suffice to produce consciousness (although of course 
it would be much more efficient to actually use molecules instead 
of computationally simulating them).   This would be a good 
reason to say 'no' to the doctor, since even though you could 
simulate the molecules and their interactions, quantum randomness 
would prevent you from controlling their interactions with the 
molecules in the rest of your brain.  Bruno's argument would 
still go through, but the 'doctor' might have to replace not only 
your brain but a big chunk of the universe with which it 
interacts.  However, most people who have read Tegmark's paper 
understand that the brain must be essentially classical as a 
computer and so a simulation, even one of molecules, could be 
quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead wrong 
*and* that the proof that the brain actively  involves quantum 
phenomena that are discounted by Tegmark will emerge within two 
years. We already have evidence that the photosynthesis process in 
plants involves quantum coherence, there is an experiment being 
designed now to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves 
quantum processes and some of these involve coherence for short 
times.  But Tegmark argues that the times are too short to be 
relevant to neural signaling and information processing.  There's 
an implicit assumption that neural activity is responsible for 
thought - that the 'doctor' could substitute at the neuron level.  
I think this is right and it is supported by evolutionary 
considerations.  We wouldn't want an intelligent Mars Rover to make 
decisions based on quantum randomness except in rare circumstance 
(like Buridan's ass) and it wouldn't be evolutionarily advantageous 
for an organism on Earth.  I'm glad to accept your bet; except that 
I'm not sure how to resolve it.  I don't think finding something 
like the energy transfer involving coherence in photosynthesis or 
photon detection is relevant.




No, my thought is that quantum coherence accounts for, among 
other things, the way that sense data is continuously integrated 
into a whole.


What integrated whole do you refer to?  Our memory of a life?  How 
does it account for it?




This is not rocket surgery, come on! Think! Did you ever happen to 
notice that, modulo variations in distance, the sounds you hear, the 
things you see, feel, taste, etc. are all integrated together? How is 
it that, modulo déjà vu and similar synesthesias and dyslexia, the 
brain generates a virtual reality version of the world around you that 
is amazingly free of latency? While there are visual effects that 
replicate aliasing, such as when we see the spokes of a wheel 
turning backwards, the ability of the brain to turn all those signals 
into a single and integrated virtual world is amazing, but more 
amazing still is the fact that there is something in the brain that 
acts like an observer, something that led many in the past to 
speculate about a homunculus...


This world view is not necessarily so integrated.  If you've ever been 
in a car crash you'll know that you hear the sound before the sights 
that go with it.  This comports with Dennett's point that the brain puts 
things together with time 

Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 5:20 PM, Jason Resch wrote:



On Tue, Aug 2, 2011 at 4:44 PM, Stephen P. King stephe...@charter.net 
mailto:stephe...@charter.net wrote:



   No, my thought is that quantum coherence accounts for, among
other things, the way that sense data is continuously integrated
into a whole.  This leads to a situation that Daniel C. Dennett
calls the Cartesian Theater. Dennett's proof that it cannot
exist because it generates an infinite regress of homunculi inside
homunculi is flawed, because such infinities can only occur if each
of the homunculi has access to sufficient computational resources
to generate the rest of them. When we understand that computations
require the utilization of resources and do not occur 'for free',
we see that the entire case against situations that imply the
possibility of infinite regress fails.
   Quantum phenomena are NOT all about randomness. Frankly, I would
really like to understand how that rubbish of an idea is still
held by seriously thinking people! There is no randomness in QM,
there is only the physical inability to predict exactly when some
quantum event will occur in advance. It is because QM systems
cannot be copied that it is impossible to predict their
behavior in advance, not because of some inherent randomness! Take
the infamous radioactive atom in the Schrodinger Cat box. Is its
decay strictly a random phenomenon? Not really! QM says not one
word about randomness; it only allows us to calculate the
half-life of said atom, and that calculation is as good as is
possible given the fact that we cannot generate a simulation of
that atom, its environment, and all of the interactions thereof
in a way that yields predictions about its behavior in advance.



What is the distinction between random and unpredictable?


That's a fraught question.  I'd say there are some processes that are 
deterministic but unpredictable because they are classically chaotic (e.g. 
the weather).  Random refers to variables that take values from a 
probability distribution (as defined by Kolmogorov, for example).  They 
may be inherently random or they may be just unpredictable.


Brent





A consciousness can no more be copied than the state of a QM
system.


That's the point in question.  If Tegmark is right, it can.

   Tegmark is wrong.


Stephen, do you doubt that consciousness can be implemented by a 
digital machine or process?


Jason





RE: Simulated Brains

2011-08-02 Thread Colin Geoffrey Hales
A computed theory of a hurricane is not a hurricane.
A computed theory of cognition is not cognition.

We don't want a simulation of the thing.
We want an instance of the thing.


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, 3 August 2011 2:19 PM
To: everything-list@googlegroups.com
Subject: Re: Simulated Brains

On 8/2/2011 4:00 PM, Stephen P. King wrote:
 On 8/2/2011 6:08 PM, meekerdb wrote:
 On 8/2/2011 2:44 PM, Stephen P. King wrote:
 On 8/2/2011 5:26 PM, meekerdb wrote:
 On 8/2/2011 2:08 PM, Stephen P. King wrote:
 On 8/2/2011 4:04 PM, meekerdb wrote:
 On 8/2/2011 12:43 PM, Craig Weinberg wrote:
 On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net
wrote:

   The point is that there is a point where the best 
 possible model or
 computational simulation of a system is the system itself. The 
 fact that
 it is impossible to create a model of a weather system that can

 predict
 *all* of its future behavior does not amount to a proof that one

 cannot
 create an approximately accurate  model of a weather system. 
 One has to
 trade off accuracy for feasibility.
 I agree that's true, and by that definition, we can certainly
make
 cybernetic systems which can approximate the appearance of
 consciousness in the eyes of most human clients of those systems

 for
 the scope of their intended purpose. To get beyond that level of
 accuracy, you may need to get down to the cellular, genetic, or
 molecular level, in which case it's not really worth the trouble

 of re-
 inventing life just to get a friendlier sounding voicemail.

 Craig

 So now you agree that a simulation of a brain at the molecular 
 level would suffice to produce consciousness (although of course 
 it would be much more efficient to actually use molecules instead

 of computationally simulating them).   This would be a good 
 reason to say 'no' to the doctor, since even though you could 
 simulate the molecules and their interactions, quantum randomness

 would prevent you from controlling their interactions with the 
 molecules in the rest of your brain.  Bruno's argument would 
 still go through, but the 'doctor' might have to replace not only

 your brain but a big chunk of the universe with which it 
 interacts.  However, most people who have read Tegmark's paper 
 understand that the brain must be essentially classical as a 
 computer and so a simulation, even one of molecules, could be 
 quasi-classical, i.e. local.

 Brent

 Hi Brent,

 I wonder if you would make a friendly wager with me about the 
 veracity of Tegmark's claims about the brain being essentially 
 classical? I bet $1 US (payable via Paypal) that he is dead wrong

 *and* that the proof that the brain actively  involves quantum 
 phenomena that are discounted by Tegmark will emerge within two 
 years. We already have evidence that the photosynthesis process in

 plants involves quantum coherence, there is an experiment being 
 designed now to test the coherence in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/

 http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/

 Those are not really to the point.  Of course the brain involves 
 quantum processes and some of these involve coherence for short 
 times.  But Tegmark argues that the times are too short to be 
 relevant to neural signaling and information processing.  There's 
 an implicit assumption that neural activity is responsible for 
 thought - that the 'doctor' could substitute at the neuron level.  
 I think this is right and it is supported by evolutionary 
 considerations.  We wouldn't want an intelligent Mars Rover to make

 decisions based on quantum randomness except in rare circumstance 
 (like Buridan's ass) and it wouldn't be evolutionarily advantageous

 for an organism on Earth.  I'm glad to accept your bet; except that

 I'm not sure how to resolve it.  I don't think finding something 
 like the energy transfer involving coherence in photosynthesis or 
 photon detection is relevant.


 No, my thought is that quantum coherence accounts for, among 
 other things, the way that sense data is continuously integrated 
 into a whole.

 What integrated whole do you refer to?  Our memory of a life?  How 
 does it account for it?


 This is not rocket surgery, come on! Think! Did you ever happen to
 notice that, modulo variations in distance, the sounds you hear, the
 things you see, feel, taste, etc. are all integrated together? How is
 it that, modulo déjà vu and similar synesthesias and dyslexia, the
 brain generates a virtual reality version of the world around you that
 is amazingly free of latency? While there are visual effects that
 replicate aliasing, such as when we see the spokes of a wheel
 turning backwards, the ability of the brain to turn all those signals

Re: bruno list

2011-08-02 Thread meekerdb

On 8/2/2011 8:27 PM, Craig Weinberg wrote:

On Aug 2, 6:51 pm, meekerdbmeeke...@verizon.net  wrote:

   

But that is not obvious and saying so isn't an argument.
 

You don't have to accept it, but you shouldn't strawman it either.

   

If you do a
computational simulation through a similar material that the brain is
made of, then you have something similar to a brain. The idea of pure
computation independent of some physical medium is not something we
should take for granted. It seems like a completely outrageous fantasy
to me. Why would such a thing be any more plausible than ghosts or
magic?
   

I don't take it for granted.  But I can imagine building an intelligent
robot that acts in every way like a person.  And I know that I could
replace his computer brain for a different one, built with different
materials and using different physics, that computed the same programs
without changing its behavior.  Now you deny that this robot is
conscious because its brain isn't made of proteins and water and neurons
- but I could replace part of the computer with a computer made of some
protein and water and some neurons; which according to you would then
make the robot conscious.  This seems to me to be an unjustified
inference.  If it acts conscious with the wet brain and it acted the
same before, with the computer chip brain, then I infer that it was
probably conscious before.
 

Why are you equating how something appears to behave with its
capacity to experience human consciousness? Think of the live neurons
as a pilot light in a gas appliance. You may not need to heat your hot
water heater with a bonfire that needs to be maintained with wood, and
if you have a natural gas utility you can substitute a different fuel,
but if that fuel can't ignite, there's not going to be any heat. By
your reasoning, the natural gas could be substituted with carbon
dioxide, since it looks the same, acts like a gas, etc., so you could
infer that it should make the same heat. With the pilot light, you
will at least know whether or not the fuel is viable.
   


OK, so in your analogy to what is the pilot light analogous?

   

Do I conclude that it experiences consciousness exactly as I do?  No, I
think that it might depend on how its programming is implemented, e.g.
LISP might produce different experience than FORTRAN  or whether there
are asynchronous hardware modules.  I'm not sure how Bruno's theory
applies to this since he looks at the problem from a level where all
computation is equivalent modulo Church-Turing.
 

I hear what you're saying, and there's no question that the
programming is instrumental both in simulating intelligence or
generating a human level of interior experience artificially. All I'm
saying is that LISP or FORTRAN cannot have an experience by itself.


I agree with that.  But can 'it' (a program) have experience when 
running on a computer?  And if so, does it have the same experience when 
it's running under Linux as under MacOS, on a PC or a (physical) Turing 
machine?  The latter is what functionalism asserts and you seem to deny.


Brent


A
silicon chip can and does experience something when it runs a program,
just not what we experience when we use the program. Just as your TV
set experiences something when you watch the news, but what it
experiences is not the news, not a tv program, not colored pixels or
patterns, but electronic level detection. Circuits, voltage,
resistance, capacitance, etc. You put a lot of fancy elaboration on a
circuit, sure, maybe you get some novelty showing up in the
experience, but I think that the level at which molecules cohere as a
living cell is likely to be the same level at which electronic
detection level awareness autopoiesizes into actual sensitivity or
proto feeling. I'm guessing about this of course, but I think it makes
sense, certainly a lot more sense than the idea that 'consciousness'
gradually appears when there are enough IF-THEN statements.

Craig

   





Re: Simulated Brains

2011-08-02 Thread Jason Resch



On Aug 2, 2011, at 10:54 PM, Stephen P. King stephe...@charter.net  
wrote:



On 8/2/2011 8:20 PM, Jason Resch wrote:




On Tue, Aug 2, 2011 at 4:44 PM, Stephen P. King stephe...@charter.net 
 wrote:


   No, my thought is that quantum coherence accounts for, among  
other things, the way that sense data is continuously integrated  
into a whole.  This leads to a situation that Daniel C. Dennett  
calls the Cartesian Theater. Dennett's proof that it cannot exist  
because it generates an infinite regress of homunculi inside  
homunculi is flawed, because such infinities can only occur if each  
of the homunculi has access to sufficient computational resources  
to generate the rest of them. When we understand that  
computations require the utilization of resources and do not occur  
'for free', we see that the entire case against situations that  
imply the possibility of infinite regress fails.
   Quantum phenomena are NOT all about randomness. Frankly, I would  
really like to understand how that rubbish of an idea is still  
held by seriously thinking people! There is no randomness in QM,  
there is only the physical inability to predict exactly when some  
quantum event will occur in advance. It is because QM systems cannot  
be copied that it is impossible to predict their behavior in  
advance, not because of some inherent randomness! Take the infamous  
radioactive atom in the Schrodinger Cat box. Is its decay strictly  
a random phenomenon? Not really! QM says not one word about  
randomness; it only allows us to calculate the half-life of said  
atom, and that calculation is as good as is possible given the fact  
that we cannot generate a simulation of that atom, its  
environment, and all of the interactions thereof in a way that  
yields predictions about its behavior in advance.



What is the distinction between random and unpredictable?



Unpredictable means that it cannot be predicted.


Okay.


Randomness is uncaused.


Is there anything that is truly random? Perhaps what we consider  
random (from QM) is merely unpredictable (from our inside view)  
behavior of the deterministic wave function.


What is random and what is predictable is then a matter of  
perspective.  I might send you a random looking bit stream, but it  
might be fully predictable if only you knew the encryption key and  
algorithm used to generate it.
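Jason's encrypted-stream example can be made concrete. A minimal sketch (the hash-counter construction and the names `keyed_stream`, `shared-secret` are mine, chosen for illustration; this is not a vetted cipher):

```python
import hashlib

def keyed_stream(key: bytes, n_blocks: int) -> bytes:
    """Deterministic byte stream: SHA-256 of key || counter, concatenated.

    To anyone without `key` the output passes as statistical noise; to the
    key holder every bit is exactly predictable. "Random" here is a matter
    of perspective, as Jason says, not an intrinsic property of the bits.
    """
    out = b""
    for i in range(n_blocks):
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    return out

stream = keyed_stream(b"shared-secret", 4)   # 128 bytes of apparent noise
replay = keyed_stream(b"shared-secret", 4)   # key holder reproduces it exactly
assert stream == replay
```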


  A completely deterministic behavior can be unpredictable and not  
random. Consider the behaviour of a non-linear system.




A consciousness can no more be copied than the state of a QM system.

That's the point in question.  If Tegmark is right, it can.

   Tegmark is wrong.


Stephen, do you doubt that consciousness can be implemented by a  
digital machine or process?


I doubt that consciousness can be implemented in classical  
machines or their logical equivalents.


Why?

Digital machines maybe, if they involve quantum entanglement of a  
certain kind.




Classical computers can emulate quantum computers, albeit inefficiently.
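The claim that a classical computer can emulate a quantum one, at exponential cost, is ordinary statevector simulation. A minimal sketch (function and variable names are mine): an n-qubit register becomes a list of 2**n complex amplitudes, and gates become arithmetic on that list.

```python
import math

def hadamard(state, target):
    """Apply a Hadamard gate to qubit `target` of a state vector.

    `state` holds 2**n complex amplitudes for an n-qubit register; the
    exponential size of this list is exactly the inefficiency noted above.
    """
    h = 1.0 / math.sqrt(2.0)
    out = list(state)
    for i in range(len(state)):
        if (i >> target) & 1 == 0:          # visit each amplitude pair once
            j = i | (1 << target)
            a, b = state[i], state[j]
            out[i] = h * (a + b)
            out[j] = h * (a - b)
    return out

zero = [1 + 0j, 0j]                 # single qubit |0>
superposed = hadamard(zero, 0)      # equal superposition (|0>+|1>)/sqrt(2)
restored = hadamard(superposed, 0)  # H is its own inverse: back to |0>
```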

What is this certain kind of entanglement you refer to?

Note that there is no evidence that entanglement plays any important  
role in the function of neurons, and there is evidence against it,  
such as the successful simulation of the neocortical column, which did  
not require simulating any quantum effects.



Onward!

Stephen



Jason



Re: Simulated Brains

2011-08-02 Thread Stephen P. King

On 8/3/2011 12:18 AM, meekerdb wrote:

On 8/2/2011 4:00 PM, Stephen P. King wrote:

On 8/2/2011 6:08 PM, meekerdb wrote:

On 8/2/2011 2:44 PM, Stephen P. King wrote:

On 8/2/2011 5:26 PM, meekerdb wrote:

On 8/2/2011 2:08 PM, Stephen P. King wrote:

On 8/2/2011 4:04 PM, meekerdb wrote:

On 8/2/2011 12:43 PM, Craig Weinberg wrote:
On Aug 2, 2:06 pm, Stephen P. Kingstephe...@charter.net  
wrote:


  The point is that there is a point where the best 
possible model or
computational simulation of a system is the system itself. The 
fact that
it is impossible to create a model of a weather system that 
can predict
*all* of its future behavior does not amount to a proof that 
one cannot
create an approximately accurate  model of a weather system. 
One has to

trade off accuracy for feasibility.

I agree that's true, and by that definition, we can certainly make
cybernetic systems which can approximate the appearance of
consciousness in the eyes of most human clients of those 
systems for

the scope of their intended purpose. To get beyond that level of
accuracy, you may need to get down to the cellular, genetic, or
molecular level, in which case it's not really worth the 
trouble of re-

inventing life just to get a friendlier sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular 
level would suffice to produce consciousness (although of course 
it would be much more efficient to actually use molecules 
instead of computationally simulating them).   This would be a 
good reason to say 'no' to the doctor, since even though you 
could simulate the molecules and their interactions, quantum 
randomness would prevent you from controlling their interactions 
with the molecules in the rest of your brain.  Bruno's argument 
would still go through, but the 'doctor' might have to replace 
not only your brain but a big chunk of the universe with which 
it interacts.  However, most people who have read Tegmark's 
paper understand that the brain must be essentially classical as 
a computer and so a simulation, even one of molecules, could be 
quasi-classical, i.e. local.


Brent


Hi Brent,

I wonder if you would make a friendly wager with me about the 
veracity of Tegmark's claims about the brain being essentially 
classical? I bet $1 US (payable via Paypal) that he is dead 
wrong *and* that the proof that the brain actively  involves 
quantum phenomena that are discounted by Tegmark will emerge 
within two years. We already have evidence that the 
photosynthesis process in plants involves quantum coherence, 
there is an experiment being designed now to test the coherence 
in the retina of the human eye.


http://www.ghuth.com/2010/02/03/another-finding-of-quantum-coherence-in-a-photosynthetic-biological-system/ 


http://www.ghuth.com/2011/04/24/quantum-coherence-and-the-retina/


Those are not really to the point.  Of course the brain involves 
quantum processes and some of these involve coherence for short 
times.  But Tegmark argues that the times are too short to be 
relevant to neural signaling and information processing.  There's 
an implicit assumption that neural activity is responsible for 
thought - that the 'doctor' could substitute at the neuron level.  
I think this is right and it is supported by evolutionary 
considerations.  We wouldn't want an intelligent Mars Rover to 
make decisions based on quantum randomness except in rare 
circumstance (like Buridan's ass) and it wouldn't be 
evolutionarily advantageous for an organism on Earth.  I'm glad to 
accept your bet; except that I'm not sure how to resolve it.  I 
don't think finding something like the energy transfer involving 
coherence in photosynthesis or photon detection is relevant.




No, my thought is that quantum coherence accounts for, among 
other things, the way that sense data is continuously integrated 
into a whole.


What integrated whole do you refer to?  Our memory of a life?  How 
does it account for it?




This is not rocket surgery, come on! Think! Did you ever happen 
to notice that, modulo variations in distance, the sounds you hear, 
the things you see, feel, taste, etc. are all integrated together? 
How is it that, modulo déjà vu and similar synesthesias and dyslexia, 
the brain generates a virtual reality version of the world around you 
that is amazingly free of latency? While there are visual effects 
that replicate aliasing, such as when we see the spokes of a 
wheel turning backwards, the ability of the brain to turn all those 
signals into a single and integrated virtual world is amazing, but 
more amazing still is the fact that there is something in the brain 
that acts like an observer, something that led many in the past to 
speculate about a homunculus...


This world view is not necessarily so integrated.  If you've ever been 
in a car crash you'll know that you hear the sound before the sights 
that go with it.  This comports with Dennett's point that the 

Re: Simulated Brains

2011-08-02 Thread meekerdb

On 8/2/2011 10:03 PM, Stephen P. King wrote:
I'm just interested in how we would decide who won.  If there is some 
test you can suggest or some theoretical development you anticipate 
it would be very relevant to the question of the philosophical zombie.
Whatever, this conversation is going nowhere. I am over it. You want 
your dollar? Will that make you happy? 


No.  I'm not unhappy, just curious.

Brent




Re: Simulated Brains

2011-08-02 Thread Jesse Mazer
On Wed, Aug 3, 2011 at 1:14 AM, meekerdb meeke...@verizon.net wrote:

 On 8/2/2011 10:03 PM, Stephen P. King wrote:

 I'm just interested in how we would decide who won.  If there is some test
 you can suggest or some theoretical development you anticipate it would be
 very relevant to the question of the philosophical zombie.

 Whatever, this conversation is going nowhere. I am over it. You want your
 dollar? Will that make you happy?


 No.  I'm not unhappy, just curious.

 Brent


 It might help if Stephen would explain what scale of quantum coherence he's
predicting. The new possible explanation for photosynthesis only involved
quantum coherence within a *single* molecule, not quantum coherence spread
across the entire chloroplast (organelle where photosynthesis occurs), let
alone across an entire cell or multiple cells in a plant. It seems that most
people who think quantum coherence has something to do with how the brain
does its job are talking about large-scale quantum coherence across brain
regions with a macroscopic separation (Tegmark's article, which reflects the
opinion of nearly all physicists, argues that this sort of thing is totally
unrealistic due to decoherence). Is this specifically what you're predicting,
Stephen? Or would you count it as a win if quantum coherence were only
found to play a useful role within individual neurotransmitter molecules or
similarly small collections of atoms?

Jesse




Re: Simulated Brains

2011-08-02 Thread Jason Resch

What is your theory of identity?

Would you agree that if a certain object has identical properties,  
roles, and relations, then it is the same?


Do you understand that within a program the properties, roles, and  
relations may be defined to perfectly match those of any other finite  
object?


If some object X in the context of this universe has the set of  
properties S, and some object Y in the context of a simulated  
universe has the same set of properties S, then how can X be said to  
be different from Y?
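
A hedged programming analogy for this point (the `Thing` class and its
property set are invented purely for illustration): in languages with
structural equality, two objects with identical properties compare as equal
even though they are distinct instances living in different "contexts"
(different memory addresses):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Thing:
    # Hypothetical stand-in for an object characterized entirely
    # by its set of properties.
    properties: frozenset

S = frozenset({"hot", "luminous", "oxidizing"})
X = Thing(properties=S)  # "object in this universe"
Y = Thing(properties=S)  # "object in a simulated universe"

# Same properties -> structurally the same, yet distinct instances:
assert X == Y        # no property-based test can tell them apart
assert X is not Y    # only an external, context-level view distinguishes them
```

The `X is not Y` check corresponds to the observer-relative difference of
context: it is visible only from outside, never from the properties themselves.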


You could say they exist in different contexts, but then the existence  
of a difference becomes observer-relative. A fire in the simulation  
only seems different from a fire in this universe because it is being  
compared from a different context. Likewise, if our universe were a  
simulation, then a fire in this universe would seem different from a  
fire in the universe hosting the simulation from the perspective of  
someone outside this universe.


We don't say storms are not wet because, if a god viewed this universe  
from the outside, he would get no water on his shoes. So let us not  
compare apples to oranges when discussing the appropriate context for  
a simulated object.


The experience of fire is only possible when the context of the  
observer is the same context which contains an object with all the  
properties of fire.


Between different levels of simulation there is an asymmetry. Higher  
levels (the ones performing the simulation) can interfere with the  
simulation, injecting information into it and extracting information  
out of it. Lower levels cannot escape the simulation, alter its rules,  
or learn anything definitive about the ultimate platform on which it runs.


Since information may be entered into this lower-level universe, as  
well as taken out, then by simulating a mind (creating its  
consciousness in the lower-level universe) we can both supply sensory  
information and extract behavioral information, and have these appear  
in our higher-level universe. The consciousness that exists in that  
simulation is every bit as real as yours in this universe.


Simulation allows us to create (or access) other possible universes.  
Simulated carbon, in the context of the simulation, would be  
indistinguishable from carbon in this universe and have all the same  
properties. To say they differ is to believe that two objects alike in  
every possible way are still somehow different, even though the  
difference could never be demonstrated.


Jason

On Aug 2, 2011, at 11:25 PM, Colin Geoffrey Hales cgha...@unimelb.edu.au 
 wrote:



A computed theory of a hurricane is not a hurricane.
A computed theory of cognition is not cognition.

We don't want a simulation of the thing.
We want an instance of the thing.


-Original Message-
From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
Sent: Wednesday, 3 August 2011 2:19 PM
To: everything-list@googlegroups.com
Subject: Re: Simulated Brains

On 8/2/2011 4:00 PM, Stephen P. King wrote:
On 8/2/2011 6:08 PM, meekerdb wrote:
On 8/2/2011 2:44 PM, Stephen P. King wrote:
On 8/2/2011 5:26 PM, meekerdb wrote:
On 8/2/2011 2:08 PM, Stephen P. King wrote:
On 8/2/2011 4:04 PM, meekerdb wrote:
On 8/2/2011 12:43 PM, Craig Weinberg wrote:
On Aug 2, 2:06 pm, Stephen P. King stephe...@charter.net wrote:

The point is that there is a point where the best possible model or
computational simulation of a system is the system itself. The fact that
it is impossible to create a model of a weather system that can predict
*all* of its future behavior does not equal a proof that one cannot
create an approximately accurate model of a weather system. One has to
trade off accuracy for feasibility.

I agree that's true, and by that definition we can certainly make
cybernetic systems which can approximate the appearance of consciousness
in the eyes of most human clients of those systems, for the scope of
their intended purpose. To get beyond that level of accuracy, you may
need to get down to the cellular, genetic, or molecular level, in which
case it's not really worth the trouble of re-inventing life just to get
a friendlier-sounding voicemail.

Craig

So now you agree that a simulation of a brain at the molecular level
would suffice to produce consciousness (although of course it would be
much more efficient to actually use molecules instead of computationally
simulating them). This would be a good reason to say 'no' to the doctor,
since even though you could simulate the molecules and their
interactions, quantum randomness would prevent you from controlling
their interactions with the molecules in the rest of your brain. Bruno's
argument would still go through, but the 'doctor' might have to replace
not only your brain but a big chunk of the universe with which it
interacts. However, most people who have read Tegmark's paper
understand that