Re: COMP refutation paper - finally out

2011-08-26 Thread benjayk


Bruno Marchal wrote:
 
 
 On 25 Aug 2011, at 14:03, benjayk wrote:
 


 Bruno Marchal wrote:

 Aren't you restricting your notion of
 what is explainable to what your own theory labels explainable with
 its own assumptions?

 Yes, but this is due to its TOE aspect: it explains what
 explanations are, what we can hope to be 100% explainable, and what
 will never be explained (like the numbers).
 It seems to me that what it does is assume what is explained and then
 explain that this is so, while not making explicit that it is assumed
 (see below).
 In effect, I believe it shows that our efforts to find fundamental
 explanations are bound to fail, because explanations do not apply to
 the fundamental thing. Explanations are just relative pointers from
 one obvious thing to another.
 
 This might explain why you don't study the argument. If you believe at  
 the start we cannot do it, I understand the lack of motivation for the  
 hard work.
 
 Have you understood the UD Argument: that IF we can survive with a  
 digital brain, then physics is a branch of computer science or number  
 theory.
 
 I think that your misunderstanding of the AUDA TOE comes from not  
 having seen this point.
I can follow that argument, and it seems valid. Of course I cannot be sure
I really understood it. My point is that, even if physics is a branch of
computer science in the theory, this may just be a result of how the theory
reasons, and does not follow once we ask whether computer science itself
needs something *fundamentally* beyond itself, something that is just not
mentioned because of the assumption that the sense in arithmetic can
somehow be separated from sense in general. I am not sure whether this
constitutes a rejection of COMP. It seems ambiguous. If one insists that
arithmetical truth can be separated from truth in general, then I think COMP
is just false because the premise is meaningless. Otherwise, COMP may be
true, but only because it implicitly assumes an ontological fundament that
transcends numbers.


Bruno Marchal wrote:
 


 Bruno Marchal wrote:



 Bruno Marchal wrote:

 You have to study to understand by yourself that it explains mind
 and matter from addition and multiplication, and that the explanation
 is the unique one maintainable once we say yes to the doctor. The
 explanation of matter is detailed enough that we can test the comp
 theory against observation.
 If this were true, show me a document consisting just of addition and
 multiplication that tells ANYTHING about mind and matter, or even
 anything beyond numbers, addition and multiplication, without your
 explanation.
 As long as you can't provide this, it seems to me you ask me to study
 something that doesn't exist.

 Nu = ((ZUY)^2 + U)^2 + Y

 ELG^2 + Al = (B - XY)Q^2

 Qu = B^(5^60)

 La + Qu^4 = 1 + LaB^5

 Th +  2Z = B^5

 L = U + TTh

 E = Y + MTh

 N = Q^16

 R = [G + EQ^3 + LQ^5 + (2(E - ZLa)(1 + XB^5 + G)^4 + LaB^5 +
 LaB^5Q^4)Q^4](N^2 - N)
  + [Q^3 - BL + L + ThLaQ^3 + (B^5 - 2)Q^5](N^2 - 1)

 P = 2W(S^2)(R^2)N^2

 (P^2)K^2 - K^2 + 1 = Ta^2

 4(c - KSN^2)^2 + Et = K^2

 K = R + 1 + HP - H

 A = (WN^2 + 1)RSN^2

 C = 2R + 1 + Ph

 D = BW + CA -2C + 4AGa -5Ga

 D^2 = (A^2 - 1)C^2 + 1

 F^2 = (A^2 - 1)(I^2)C^4 + 1

 (D + OF)^2 = ((A + F^2(D^2 - A^2))^2 - 1)(2R + 1 + JC)^2 + 1


 Thanks to Jones, Matiyasevitch. The numbers Nu verifying that system
 of diophantine equations (the variables are integers) are Löbian
 stories, on which the machine's first person indeterminacy will be
 distributed.
 We don't even need to go farther than the polynomial equations to
 describe the ROE.

 What you ask me is done in good textbooks on mathematical logic.
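Concretely, "verifying" such a system means plugging candidate integer values into each equation and checking the identities by pure arithmetic; nothing beyond addition and multiplication survives once the values are substituted. A minimal sketch, using a hypothetical two-equation toy system (not Jones's actual one):

```python
# Minimal sketch (a hypothetical toy system, not Jones's): checking whether
# a candidate assignment of integers satisfies a system of Diophantine
# equations. Each equation is a pair (lhs, rhs) of functions; verification
# is pure integer arithmetic.

def satisfies(system, assignment):
    """Return True iff every equation lhs == rhs holds for the assignment."""
    return all(lhs(assignment) == rhs(assignment) for lhs, rhs in system)

# Toy system: x^2 + y = z  and  z + 2 = 2xy
toy = [
    (lambda v: v["x"] ** 2 + v["y"], lambda v: v["z"]),
    (lambda v: v["z"] + 2,           lambda v: 2 * v["x"] * v["y"]),
]

print(satisfies(toy, {"x": 2, "y": 2, "z": 6}))  # True: 4+2=6 and 6+2=8=2*2*2
print(satisfies(toy, {"x": 1, "y": 1, "z": 2}))  # False: second equation fails
```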
 You used more than numbers in this example, namely variables.
 
 Statements on numbers can use variables. If you want only numbers,
 translate those equations into one number, by Gödel's technique. But
 that would lead to a cumbersome, gigantic expression.
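One standard variant of Gödel's technique can be sketched briefly (the symbol table below is a hypothetical choice for illustration, not any particular textbook's): the i-th symbol of a formula contributes the i-th prime raised to that symbol's code, and unique factorization guarantees the formula can be decoded again.

```python
# Sketch of Gödel numbering (one standard variant; SYMBOLS is a hypothetical
# table). Symbol i of the formula contributes the i-th prime raised to that
# symbol's code; unique factorization makes the encoding reversible.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short formulas)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

SYMBOLS = {"0": 1, "S": 2, "+": 3, "*": 4, "=": 5, "(": 6, ")": 7}

def godel_number(formula):
    code = 1
    for p, sym in zip(primes(), formula):
        code *= p ** SYMBOLS[sym]
    return code

# "S0=S0" (i.e. 1 = 1) becomes 2^2 * 3^1 * 5^5 * 7^2 * 11^1:
print(godel_number("S0=S0"))  # 20212500
```

Even this five-symbol formula already encodes to an eight-digit number, which is the "cumbersome gigantic expression" in question.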
Yes, OK, this objection is invalid.


Bruno Marchal wrote:
 
 But even then,
 I am not convinced these formulas make sense as Löbian stories
 without an explanation. Surely, I can't prove that.
 
 This is like saying that a brain cannot make sense without another  
 brain making sense of it.
Indeed I think brains are meaningless without other brains to reflect
themselves in (making mutual sense of each other). You won't find a brain
floating in outer space, without any other brain to make sense of it.


Bruno Marchal wrote:
 
 The point is technical: numbers + addition and multiplication do
 emulate the computational histories.
 
 You cannot use a personal feeling to doubt a technical result.
There is no such thing as a completely technical result if we use some
technique that is not strictly deducible from the axioms of the system.


Bruno Marchal wrote:
 
 I am not doing a philosophical point: I assume comp (which assumes  
 both consciousness and physical reality), and I prove from those  
 

Re: bruno list

2011-08-26 Thread Stathis Papaioannou
On Thu, Aug 25, 2011 at 12:31 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Feeling doesn't come from a substance, it's the first person
 experience of energy itself. Substance is the third person
 presentation of energy patterns. If you turn it around so that feeling
 is observed in third person perspective, it looks like determinism or
 chance, while substance has no first person experience (which is why a
 machine, as an abstraction, can't feel, but what a machine is made of
 can feel to the extent that substance can feel.)

 Whether there are other substances in the brain that we haven't
 discovered yet is not the point. There might be, but so what. It's not
 the mechanism of brain chemistry that feels, it's the effect that
 mechanism has on the cumulatively entangled experience of the brain as
 a whole, as it experiences with the cumulatively entangled experiences
 of a human life as a whole.

This is a bit hard to understand. Are you agreeing that there is no
special consciousness stuff, but that consciousness results from the
matter in the brain going about its business? That is more or less the
conventional view.

 Do you think it's possible to reproduce the function of anything at all?

 It's possible to reproduce functions of everything, but there is no
 such thing as *the* function of something. To reproduce *all* possible
 functions of something is to be identical to that thing. If the
 reproduction even occupies a different space then it is not identical
 and does not have the same function. Think about it. If you have one
 ping pong ball in the universe, it has one set of finite states (which
 would be pretty damn finite).

 If you have another ping pong ball exactly the same there is a whole
 other set of states conjured out of thin air - they can smack
 together, roll over each other, move together and apart, etc. BUT, the
 original ball loses states that it never could have anticipated. True
 solitude becomes impossible. Solipsism becomes unlikely as the other
 ball becomes an object that it cannot not relate to.

 What you're not factoring in is that 'pattern' is a function of our
 pattern recognition abilities. Even though you firmly believe that our
 experience is flawed and illusory, somehow that gets set aside when
 you want to prove that logic is different. Your faith is that the
 logical patterns that we understand *are* what actually exists, rather
 than a particular kind of interpretation contingency. You think that
 A=A because it must by definition... but I'm pointing out that it's
 your definition that makes something = something, and has no
 explanatory power over A. In fact, the defining = can, like the second
 ping pong ball, obscure the truth of what A is by itself. This is
 critical when you're looking at this level of ontological comparison.
 Describing awareness itself cannot be accomplished by taking awareness
 for granted in the first place. First you have to kill = and start
 from nothing.

The function I am talking about is relatively modest, like making a
ping-pong ball out of a new plastic and designing it so that it weighs
the same and is just as elastic. If you then put this ping-pong ball
in with balls of the older type, the collection of balls will bounce
around normally, even though the new ball might be different in
colour, reflectivity, flammability etc. There is no need to figure out
exactly where all the balls will be after bouncing around for an hour,
just the important parameters of a single ball so that it can slot
into the community of balls as one of their own. A fire could come
along and it will be obvious that the new ball, being less flammable,
behaves differently, but we are not interested in what happens in the
event of a fire, otherwise we would have included that in the design
specifications; we are only interested in balls bouncing around in a
room.

Similarly with an artificial neuron, for the purposes of this
discussion we are interested only in whether it stimulates the other
neurons with the same timing and in response to the same inputs as a
biological neuron would. If it does, then the network of neurons will
respond in the usual way and ultimately make muscles move in the usual
way. (Please note that while the artificial neuron can in a thought
experiment be said to perform this function exactly the same as its
biological equivalent, in practice it would only need to perform it
approximately the same, since all biological tissue functions slightly
differently from moment to moment anyway.) The question is whether
given that the artificial neuron does this job adequately, would it
necessarily follow that the qualia of the brain would be unchanged? I
think it would, otherwise we would have the situation where you
declare that everything is normal (because the neurons driving the
muscles of speech are firing normally) while in fact feeling that
everything is different.

 Figuring out the internal dynamics of the neuron will tell you 

Re: Unconscious Components

2011-08-26 Thread Bruno Marchal


On 22 Aug 2011, at 21:54, Craig Weinberg wrote:


On Aug 22, 1:30 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 20 Aug 2011, at 23:48, Craig Weinberg wrote:


PART I



On Aug 20, 12:16 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 20 Aug 2011, at 03:14, Craig Weinberg wrote:



On Aug 18, 9:43 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 17 Aug 2011, at 06:47, Craig Weinberg wrote:



Not sure I understand. Do I hope for this world and therefore it
exists to me in a solipsistic way?


I mean you can hope to be true, but you can never know that you  
are

true for sure about anything, except your consciousness.
Transcendental realities are transcendental, simply.


OK. I thought you were saying something else, like 'thoughts  
create

reality'.


Only physical realities. But I don't expect anyone to understand  
what

that means, without grasping it by themselves. The UDA is the
explanation.



Whenever you say these kinds of things, I assume that you're just
talking about the arithmetic Matrix seeming real to us because we're
in it.


The arithmetic matrix is real, because it is just a collection of  
true

arithmetical facts. It is the physical and theological which seems
real (= lived), and are real in higher order senses, epistemological,
sensational, etc.


True to who? If I make up a Sims world where donkeys fly, do they
represent factual truth? It all seems context dependent to me. It
makes truth arbitrary. Couldn't I make an arithmetic matrix where the
occupants believe in a different arithmetic than our own? What makes
you think that senses are higher order?


 Because a sensible device needs a minimal amount of complexity. There
 is evidence of complex processing and interactions in sensible beings,
 and I have no clue how sense could be made primary without introducing
 some kind of non-Turing-emulable magic.
 I don't think you can make a different matrix with occupants believing
 in a different arithmetic. I don't think this makes any sense. If they
 take different axioms, it means they use a different structure. There
 are plenty of sorts of number systems, but the laws of arithmetic do
 not depend on the subject considering them. 17 is prime or not, for
 anybody.
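The observer-independence of "17 is prime" can be made concrete: any subject running the same trial-division procedure gets the same answer. A minimal sketch:

```python
# "17 is prime, for anybody": primality is fixed by the laws of arithmetic,
# not by who checks it. Trial division up to sqrt(n) suffices.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime(17))                              # True
print([p for p in range(2, 30) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```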






This is the Perceptual Relativity Inertial Frame or PRIF. A de facto
frame of localized coherence which itself takes on a second order
nested or holarchic 1p coherence. We are members of a very very very
specific club that is exclusive to entities


I don't belong to that club. I didn't sign in.


You are saying that you are better than human?


 I was saying that I do not belong to a club that is a priori exclusive
 to entities, alluding to your carbon view of a human being.







 IF mechanism is true, there is no titanium needle, still less an
 interiority. Only persons have interiority views, and persons live
 in Platonia.


Why can't persons live in bodies/houses/cities? All that's missing  
is

to let go of the illusion that 1p is an illusion


1p is not an illusion. We agree on that.


and take it at face
value as a legitimate physical phenomenon


That is physicalism.


 Physicalism yes, but with an expanded sensorimotive physics.


 That is coherent with your non-comp view. But then either you have
 zombies, or you have to introduce or describe that special
 non-Turing-emulable magic somewhere.
 You have not yet succeeded in explaining what sensorimotive physics
 is without alluding explicitly to poetry.






I'm
not ruling it out, because of course we can't tell 1p consciousness
from 3p completely, but, the sense of things being more like us and
less like us demands more of an explanation.


Darwin + computer science (abstract biology, psychology, etc.)


So you are saying that there is no legitimate difference between
yourself and a sand dune, other than you have been programmed and
conditioned to perceive the sand dune as irrelevant to your survival
and reproduction? Really the sand dune is quite an interesting chap
with a keen interest in early jazz and architecture.


Sure, but not relevant.







Why the big threshold
between what we think is alive and what isn't?


I don't see any threshold.


So being dead is just as good as being alive for you, your family,
friends, pets. It's all the same.


 It is not because there is no threshold or frontier between two
 states that there are no clear cases.
 For example the M set has a fuzzy border, and without an arbitrarily
 long zoom you can't decide if a point on the border is in or out of
 the set. But many points are clearly in and clearly out. I would say
 a pebble is clearly not alive, and a bird is clearly alive, but the
 question makes no sense (other than conventional) for a virus or a
 box of cigarettes. With my usual definition, they are alive, because
 they have a sophisticated reproduction cycle.
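The escape-time test behind the M-set analogy can be sketched briefly (the iteration budget of 1000 below is an arbitrary choice): a point is certified out as soon as its orbit leaves the disk of radius 2, while a point that has not escaped within the budget is only "not yet out", and for border points no finite budget settles the question.

```python
# Sketch of the Mandelbrot escape-time test. A point c is certified OUT the
# moment its orbit z -> z^2 + c leaves |z| > 2; failing to escape within the
# (arbitrary) budget leaves the question open -- the fuzzy border.

def escape_iteration(c, max_iter=1000):
    """Return the step at which the orbit of c escapes, or None if undecided."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

print(escape_iteration(1))    # 2 -- clearly out: the orbit 0, 1, 2, 5 escapes
print(escape_iteration(-1))   # None -- clearly in: the orbit cycles 0, -1, 0, -1
print(escape_iteration(-0.75 + 0.05j))  # near the border: escapes, but only slowly
```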








Why do we care so much
about not being dead ourselves?


Eating is more fun than being eaten, in general.


And why should that be the case if 

Re: Unconscious Components

2011-08-26 Thread Bruno Marchal


On 22 Aug 2011, at 22:20, Craig Weinberg wrote:


On Aug 22, 1:56 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 21 Aug 2011, at 15:28, Craig Weinberg wrote:

 My point is that, by definition of philosophical zombie, they behave
 like normal and sane human beings. It is not walking coma, or
 catatonic behavior. It is full human behavior. A zombie might write
 a book on consciousness, or keep a diary of his dream reports.



A movie can feature an actress writing a book on consciousness or
doing anything else that can be demonstrated audiovisually. How is
that not a zombie?


 The movie lacks the counterfactuals. If the public shouts "Don't go
 in the cave!" at the heroine of a thriller, she will not listen.


That can be obscured by making the movie ambiguous. Having the actors
suddenly look in the camera and say something like "Did you say
something? We can't hear you very well in here." When the tension
builds the heroine could say to the camera "I know what you're
thinking, but I'm going in anyways." I think if you give the movie
anywhere near the latitude you are giving to arithmetic, you'll see
that the threshold between a movie and a UM is much less than between
a living organism and a silicon chip. You can make movies interactive
with alternate story lines that an audience can vote on, or just
pseudointeractive:
http://listverse.com/2011/05/24/top-10-william-castle-film-gimmicks/
(#1)


If the movie is that interactive, then it is no more a movie, but a
virtual reality. If the entities behave like humans for a long enough
time, I will attribute consciousness to them.







Zombies are
different, they behave like you and me. By definition of
philosophical
zombie, you can't distinguish it from a real human. You can
distinguish a human from a filmed human, all right?


Not without breaking the frame of reference. I can't distinguish a
live TV broadcast from a recorded broadcast. It's an audiovisual only
frame of reference. To postulate a philosophical zombie, you are
saying that nothing about them can be distinguished from a genuine
person, which is tautological. If nothing can be distinguished by
anyone or any thing at any time, then it is the genuine person, by
definition.


Not at all. I can conceptually imagine them, by definition, as not
having consciousness. Of course with comp this leads to nonsense,
given that consciousness is not attached to any body, but only to
souls living in Platonia. In comp we don't have a mind body problem,
only a problem of the illusion of bodies.







You're just saying 'an apple that is genuine in every possible way,
except that it's an orange' and using that argument to say 'then
apples can be no different than oranges in any meaningful way and
there is no reason why apples cannot be used to make an orange as long
as the substitution level is low enough.' The fallacy is that it uses
semantics of false exclusion to justify false inclusion. By insisting
that my protests that apples and oranges are both fruit but oranges
can never be made of apples is just an appeal to the false assumption
of substitution level, you disregard any possibility of seeing the
simple truth of the relation.


I don't disregard that possibility, but comp explains much more. You
need the apple and the orange, and a non-comprehensible link. I need
only the apple (to change your analogy a bit).







If you make it a 3D-hologram of an actress, with
odorama and VR touchback tactile interfaces, then is it a zombie? If
you connect this thing up to a GPS instead of a cinematically  
scripted

liturgy and put it in an information kiosk, does it become a zombie
then? I don't see much of a difference.


Behaviorally they have no difference from humans. Conceptually they
are quite different, because they lack consciousness and any private
experiences.
With comp, such zombies are nonsensical, or trivial. Consciousness
is related to the abstract relations involved in the most probable
computations leading to your actual 3-states.


Yes, zombies are nonsensical or trivial.


It's still just a facade which
reflects our human sense rather than the sense of an autonomous
logic which transcends programming. Even if it's really fancy
programming, its experience has no connection with us. It's a cypher
that only intersects our awareness through its rear end, upon which
we have drawn a face.



That is an advantage. Precise and hypothetical. Refutable.



True, but it has disadvantages as well. Dissociated and clinical.



So you say.



Meaningless. (cue 'Supertramp - The Logical Song')



So you say.


Right. These qualities cannot be proved from 3-p. Meaning and  
feeling

are not literal and existential. If they don't insist for you, then
you don't feel them.



Sense contingent upon the theoretical existence
of numbers (or the concrete existence of what unknowable
phenomenon is
represented theoretically as numbers)



Mathematician can study the effect of set of unknowable things.
That
is the 

Re: bruno list

2011-08-26 Thread Craig Weinberg
On Aug 26, 9:05 am, Stathis Papaioannou stath...@gmail.com wrote:
 On Thu, Aug 25, 2011 at 12:31 AM, Craig Weinberg whatsons...@gmail.com 
 wrote:
  Feeling doesn't come from a substance, it's the first person
  experience of energy itself. Substance is the third person
  presentation of energy patterns. If you turn it around so that feeling
  is observed in third person perspective, it looks like determinism or
  chance, while substance has no first person experience (which is why a
  machine, as an abstraction, can't feel, but what a machine is made of
  can feel to the extent that substance can feel.)

  Whether there are other substances in the brain that we haven't
  discovered yet is not the point. There might be, but so what. It's not
  the mechanism of brain chemistry that feels, it's the effect that
  mechanism has on the cumulatively entangled experience of the brain as
  a whole, as it experiences with the cumulatively entangled experiences
  of a human life as a whole.

 This is a bit hard to understand. Are you agreeing that there is no
 special consciousness stuff, but that consciousness results from the
 matter in the brain going about its business? That is more or less the
 conventional view.

  Do you think it's possible to reproduce the function of anything at all?

  It's possible to reproduce functions of everything, but there is no
  such thing as *the* function of something. To reproduce *all* possible
  functions of something is to be identical to that thing. If the
  reproduction even occupies a different space then it is not identical
  and does not have the same function. Think about it. If you have one
  ping pong ball in the universe, it has one set of finite states (which
  would be pretty damn finite).

  If you have another ping pong ball exactly the same there is a whole
  other set of states conjured out of thin air - they can smack
  together, roll over each other, move together and apart, etc. BUT, the
  original ball loses states that it never could have anticipated. True
  solitude becomes impossible. Solipsism becomes unlikely as the other
  ball becomes an object that it cannot not relate to.

  What you're not factoring in is that 'pattern' is a function of our
  pattern recognition abilities. Even though you firmly believe that our
  experience is flawed and illusory, somehow that gets set aside when
  you want to prove that logic is different. Your faith is that the
  logical patterns that we understand *are* what actually exists, rather
  than a particular kind of interpretation contingency. You think that
  A=A because it must by definition... but I'm pointing out that it's
  your definition that makes something = something, and has no
  explanatory power over A. In fact, the defining = can, like the second
  ping pong ball, obscure the truth of what A is by itself. This is
  critical when you're looking at this level of ontological comparison.
  Describing awareness itself cannot be accomplished by taking awareness
  for granted in the first place. First you have to kill = and start
  from nothing.

 The function I am talking about is relatively modest, like making a
 ping-pong ball out of a new plastic and designing it so that it weighs
 the same and is just as elastic. If you then put this ping-pong ball
 in with balls of the older type, the collection of balls will bounce
 around normally, even though the new ball might be different in
 colour, reflectivity, flammability etc.

I understand that, but you are still assuming a metaphysical
appearance of an 'awareness' somehow coming into being as a
consequence of 'bouncingness' itself rather than seeing that awareness
is a property OF the balls themselves. Although the bouncing certainly
is part of what goes into the contents of any awareness that might
already be there, the awareness itself is ultimately determined by
what fundamental unit you are looking at. Organic molecules do a lot
of strange things when they are bouncing around together - very
different things compared to inorganic atoms, ping pong balls, or
programmable abstractions.

 There is no need to figure out
 exactly where all the balls will be after bouncing around for an hour,
 just the important parameters of a single ball so that it can slot
 into the community of balls as one of their own. A fire could come
 along and it will be obvious that the new ball, being less flammable,
 behaves differently, but we are not interested in what happens in the
 event of a fire, otherwise we would have included that in the design
 specifications; we are only interested in balls bouncing around in a
 room.

That's the problem. You're interested in the wrong thing. Cells and
organisms are not billiard balls. If you treat them as predictable
mechanisms, you lose the very dimension that you are trying to
emulate. The unpredictable behavior of a cell doesn't arise out of
complexity, it arises out of a higher order of simplicity that organic
molecules facilitate.

Re: bruno list

2011-08-26 Thread meekerdb

On 8/26/2011 1:14 PM, Craig Weinberg wrote:

That's the problem. You're interested in the wrong thing. Cells and
organisms are not billiard balls. If you treat them as predictable
mechanisms, you lose the very dimension that you are trying to
emulate.


It's not a question of treating them as predictable; so far as anyone 
has been able to tell they *are* predictable.  No one has found any 
evidence that they do not behave according to the known laws of physics 
and chemistry - which means they are predictable. What evidence do you 
have to the contrary?



The unpredictable behavior of a cell doesn't arise out of
complexity, it arises out of a higher order of simplicity that organic
molecules facilitate.
   


Higher order simplicity??  More magic or more poetry?  You seem to be 
agreeing that complexity is not sufficient to make cells unpredictable.  
So in principle the complex behavior of the cell could be predicted even 
at the molecular level.  You are claiming this prediction would fail 
because of ...what?


   

  Similarly with an artificial neuron, for the purposes of this
  discussion we are interested only in whether it stimulates the other
  neurons with the same timing and in response to the same inputs as a
  biological neuron would.
 

Even if you could create an artificial neuron which could impersonate
the responsiveness of a natural one, it wouldn't matter because it
still doesn't feel anything.


How do you know it doesn't feel anything?  How do you know it doesn't 
feel exactly the same as the neuron it replaced?  How do you know the 
feeling of either the neuron or the artificial neuron has an effect on 
what you would feel?  We know from operations on the brain that 
electrostimulation may evoke memories, the sound of a melody, and other 
qualia.  The subject never says, "That felt like electrostimulation,"
or "That didn't produce any feelings."


Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Unconscious Components

2011-08-26 Thread Craig Weinberg
On Aug 26, 11:01 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 22 Aug 2011, at 22:20, Craig Weinberg wrote:









  On Aug 22, 1:56 pm, Bruno Marchal marc...@ulb.ac.be wrote:
  On 21 Aug 2011, at 15:28, Craig Weinberg wrote:

  My point is that, by definition of philosophical zombie, they
  behave like normal and sane human beings. It is not walking coma, or
  catatonic behavior. It is full human behavior. A zombie might write
  a book on consciousness, or keep a diary of his dream reports.

  A movie can feature an actress writing a book on consciousness or
  doing anything else that can be demonstrated audiovisually. How is
  that not a zombie?

  The movie lacks the counterfactuals. If the public shouts "Don't go
  in the cave!" at the heroine of a thriller, she will not listen.

  That can be obscured by making the movie ambiguous. Having the actors
  suddenly look in the camera and say something like "Did you say
  something? We can't hear you very well in here." When the tension
  builds the heroine could say to the camera "I know what you're
  thinking, but I'm going in anyways." I think if you give the movie
  anywhere near the latitude you are giving to arithmetic, you'll see
  that the threshold between a movie and a UM is much less than between
  a living organism and a silicon chip. You can make movies interactive
  with alternate story lines that an audience can vote on, or just
  pseudointeractive:  
  http://listverse.com/2011/05/24/top-10-william-castle-film-gimmicks/
  (#1)

 If the movie is that interactive, then it is no more a movie, but a
 virtual reality. If the entities behave like humans for a long enough
 time, I will attribute consciousness to them.

That's where I think you are being too promiscuous with consciousness
attribution. To me it wouldn't matter how long it takes for me to
figure out that it wasn't conscious, once I found out that it was only
an interactive movie, I would not continue to extend the presumption
that the movie itself is conscious.

This thought experiment brings out a relevant detail though in the
idea of ventriloquism. Even if a ventriloquist is the best possible
ventriloquist, I still do not think that we should attribute
consciousness to the dummy (certain horror movies notwithstanding).
It's ok to informally group them together as one, since that's how
motive works - its insistence can be read through a prosthetic
puppet, mask, cartoon, work of art, etc. If we can read the text, then
we can be influenced by the sender's intent. This is the case with
software. It is a way for the intelligence of the programmer and
groups of programmers to enact their ideas in the form of a machine.

Most of the time, it makes no difference to conflate a ventriloquist's
intelligence with the character they use to impersonate the dummy, the
two of them together could be thought of as a single ventriloquist act
- but if we are talking about a dummy being its own ventriloquist,
then we are looking at a completely different phenomenon. We watch a
movie and relate to it as a vicarious human experience - actors and
their actions rather than frames of pixels or film.

I could see how you could choose to see a sufficiently interactive
film as being practically indistinguishable from a 3p perspective, but
I don't see how you could assume that a corresponding 1p experience
arises spontaneously. Where? The film? The electronics? The program?
It's metaphysical and crazy. My view is crystal clear. The programmer's
sense and motives are sent through the medium of the theatrical
experience to the audience, who receive it as human sense and motive.

The text rides on the back of the many electronic production devices
and perceptual organs of the viewers, but it is not interpreted by
those media at all. No matter how much music you listen to on your
iPod, it's never going to intentionally compose its own songs. The
movie doesn't learn to act, and the computer doesn't learn to feel
either. They have their own perceptual frames to contend with. The
iPod and the computer need to gratify their semiconductor circuits.
The movie reel needs to spin, the motor needs to keep cycling, the
film strip needs to keep falling into the sprockets, etc. They don't
have any appreciation of the contents which we find significant. I
think there's a tragic gender relation metaphor in there somewhere.
Something about what boys and girls find attractive in each other not
being similar to what they value in themselves.




  Zombie are
  different, they behave like you and me. By definition of
  philosophical
  zombie, you can't distinguish it from a real human. You can
  distinguish a human from filmed human, all right?

  Not without breaking the frame of reference. I can't distinguish a
  live TV broadcast from a recorded broadcast. It's an audiovisual only
  frame of reference. To postulate a philosophical zombie, you are
  saying that nothing about them can be distinguished from a genuine
  person, which is 

Re: bruno list

2011-08-26 Thread Craig Weinberg
On Aug 26, 4:38 pm, meekerdb meeke...@verizon.net wrote:
 On 8/26/2011 1:14 PM, Craig Weinberg wrote:

  That's the problem. You're interested in the wrong thing. Cells and
organisms are not billiard balls. If you treat them as predictable
  mechanisms, you lose the very dimension that you are trying to
  emulate.

 It's not a question of treating them as predictable; so far as anyone
 has been able to tell they *are* predictable.  No one has found any
 evidence that they do not behave according to the known laws of physics
 and chemistry - which means they are predictable. What evidence do you
 have to the contrary?

None of what goes on in a cell could be predicted purely by chemistry.
If you were down at the level of individual atoms, you would have no
possible clue of the existence of anything like a cell, just as
looking at the surface of a TV screen with a microscope excludes the
possibility of making sense of a movie being watched.

You have to grasp this concept of perceptual frame of reference. A
cell is not the same thing as molecules - it's meta-molecular. Most of
what a cell does makes sense in pure chemical terms, like most of what
an animal does makes sense in purely cellular terms. It has absolutely
nothing to do with defying physics or chemistry, it's that the
reliable, predictable levels of physical reality are routinely
manipulated to serve the purposes, whims, and fantasies of meta-meta-
meta organic entities.

  The unpredictable behavior of a cell doesn't arise out of
  complexity, it arises out of a higher order of simplicity that organic
  molecules facilitate.

 Higher order simplicity??  More magic or more poetry?

Do you consider cells and bodies magic or poetic? What about higher
order simplicity sounds like witchcraft to you?

  You seem to be
 agreeing that complexity is not sufficient to make cells unpredictable.  
 So in principle the complex behavior of the cell could be predicted even
 at the molecular level.  You are claiming this prediction would fail
 because of ...what?

No. You're equating simplicity with microcosm. That's what I mean by
higher order simplicity and perceptual frames of reference. You can't
predict how a baseball game will turn out by looking at nothing but
the trajectories of baseballs in previous games. That is exactly what
substance monism suggests by insisting that the macrocosm can always
be predicted by scaling up the microcosm. I didn't think that kind of
mechanistic view is even taken seriously anymore, even in the hard
sciences. All that went out the window in the 20th century.

The prediction fails because it's basing the prediction on the wrong
thing. What we think about has an effect on our body on a systemic
level. The neurons' behavior is caught up in that like we're caught up
in weather systems. We make brain hurricanes happen just by thinking
about something we enjoy or hate. Those make floods and blackouts in
the tissues of our gut and sweat glands. It's no big voodoo - it's the
ordinary way that we function and experience our lives.

    Similarly with an artificial neuron, for the purposes of this
    discussion we are interested only in whether it stimulates the other
    neurons with the same timing and in response to the same inputs as a
    biological neuron would.

  Even if you could create an artificial neuron which could impersonate
the responsiveness of a natural one, it wouldn't matter because it
  still doesn't feel anything.

 How do you know it doesn't feel anything?  How do you know it doesn't
 feel exactly the same as the neuron it replaced?

Because there is no reason to imagine it would. How do I know that a
ventriloquist's dummy doesn't feel anything? Because I know it's a
manufactured artifact that has no living tissue in it. Same reason a
semiconductor array has no feeling. I don't *know* know it has no
feeling, but I think that whatever it does have is likely on the
microcosmic level rather than a higher order simplicity, and I think
that because of that it's likely not to be very similar to the
feelings of a conscious Homo sapiens.

 How do you know the
 feeling of either the neuron or the artificial neuron has an effect on
 what you would feel?

Because we can feel it when we use transcranial magnetic stimulation
or a taser to change the electromagnetic conditions of our neurons.

  We know from operations on the brain that
 electrostimulation may evoke memories, the sound of a melody, and other
qualia.  The subject never says, "That felt like electrostimulation,"
or "That didn't produce any feelings."

You could have electrostimulation to a lot of parts of your brain and
you wouldn't feel it. Only areas relevant to your perception and
cognition would end up being experienced in real time by you. I think
that the perceptual frame determines whether the stimulation is felt
as blind electric shock or a sound or a memory, but it's not really
debatable whether changes to neurons affect how we feel and how we can

Re: bruno list

2011-08-26 Thread meekerdb

On 8/26/2011 4:41 PM, Craig Weinberg wrote:

On Aug 26, 4:38 pm, meekerdb meeke...@verizon.net wrote:
   

On 8/26/2011 1:14 PM, Craig Weinberg wrote:

 

That's the problem. You're interested in the wrong thing. Cells and
organisms are not billiard balls. If you treat them as predictable
mechanisms, you lose the very dimension that you are trying to
emulate.
   

It's not a question of treating them as predictable; so far as anyone
has been able to tell they *are* predictable.  No one has found any
evidence that they do not behave according to the known laws of physics
and chemistry - which means they are predictable. What evidence do you
have to the contrary?
 

None of what goes on in a cell could be predicted purely by chemistry.
If you were down at the level of individual atoms, you would have no
possible clue of the existence of anything like a cell, just as
looking at the surface of a TV screen with a microscope excludes the
possibility of making sense of a movie being watched.

You have to grasp this concept of perceptual frame of reference. A
cell is not the same thing as molecules - it's meta-molecular.
 I'm well aware that a cell is made of molecules and that it is their 
structural and dynamic relations that constitute the cell.  I understand 
perceptual frame of reference:  If I stand a different place I see a 
different scene.  What's that have to do with cells and being 
meta-molecular?



Most of
what a cell does makes sense in pure chemical terms,


But not all (according to you).  And that's the question.  What part 
doesn't?  How can this part be detected?



like most of what
an animal does makes sense in purely cellular terms. It has absolutely
nothing to do with defying physics or chemistry, it's that the
reliable, predictable levels of physical reality are routinely
manipulated to serve the purposes, whims, and fantasies of meta-meta-
meta organic entities.
   


Which is it?  Does being manipulated by organic entities entail doing 
something other than predicted by the laws of physics and chemistry?
   

The unpredictable behavior of a cell doesn't arise out of
complexity, it arises out of a higher order of simplicity that organic
molecules facilitate.
   

Higher order simplicity??  More magic or more poetry?
 

Do you consider cells and bodies magic or poetic? What about higher
order simplicity sounds like witchcraft to you?
   


All of it.  What's the operational definition whereby I can recognize 
and measure simplicity and order it as higher and lower?  Is it a 
thing?  A substance?  A property...of what?  A relation?


   

  You seem to be
agreeing that complexity is not sufficient to make cells unpredictable.  
So in principle the complex behavior of the cell could be predicted even

at the molecular level.  You are claiming this prediction would fail
because of ...what?
 

No. You're equating simplicity with microcosm. That's what I mean by
higher order simplicity and perceptual frames of reference. You can't
predict how a baseball game will turn out by looking at nothing but
the trajectories of baseballs in previous games.


But you could do it by looking at the microstates of the players in the 
present game plus various environmental states.  No one has suggested 
that you could predict the behavior of a neuron or a brain by looking at 
past brains or neurons.  Why do you bring up such strawman arguments?



That is exactly what
substance monism suggests by insisting that the macrocosm can always
be predicted by scaling up the microcosm. I didn't think that kind of
mechanistic view is even taken seriously anymore, even in the hard
sciences. All that went out the window in the 20th century.
   


Fortunately it's the 21st century now.  Who told you the macrocosm 
couldn't be predicted by synthesis of the micro?  I must have missed 
that in physics class.  If you're relying on quantum randomness then 
please say where Tegmark went wrong in his paper showing the brain must 
operate 
classically?  If you're relying on classical chaos theory then your 
argument is with Bruno who assumes everything can be simulated in digits.



The prediction fails because it's basing the prediction on the wrong
thing. What we think about has an effect on our body on a systemic
level. The neurons' behavior is caught up in that like we're caught up
in weather systems. We make brain hurricanes happen just by thinking
about something we enjoy or hate.


And how do you choose what to enjoy and who to hate?


Those make floods and blackouts in the
tissues of our gut and sweat glands. It's no big voodoo - it's the
ordinary way that we function and experience our lives.
   


And it supervenes on the processes of our body and brain.

   

  Similarly with an artificial neuron, for the purposes of this
  discussion we are interested only in whether it stimulates the other
  neurons with the same timing and in response to the same inputs as a
  biological neuron would.