Re: Re: Re: Yes, Doctor!

2012-10-15 Thread Roger Clough
Hi John Clark

Contempt prior to investigation is not a scientific attitude. 
 


Roger Clough, rclo...@verizon.net 
10/15/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: John Clark  
Receiver: everything-list  
Time: 2012-10-13, 13:40:06 
Subject: Re: Re: Yes, Doctor! 


On Sat, Oct 13, 2012, Roger Clough wrote: 



 This is supposed to be a scientific discussion. 


Yes, so why are you talking about NDEs and UFOs? If I were interested in that 
crap I wouldn't read a scientific journal or go to the Everything List; I'd 
just pick up a copy of the National Enquirer at my local supermarket. That way 
I'd also get the astrology column and could read about the diet tips of the 
movie stars. 

John K Clark 




Re: Yes, Doctor!

2012-10-13 Thread John Clark
On Wed, Oct 10, 2012, Roger Clough rclo...@verizon.net wrote:

 NDEs are like UFOs.


Yes they're both bullshit. The trouble with UFOs is that people forget what
the U stands for and keep identifying the damn thing as a flying saucer
from another planet; I see a light in the sky and I don't know what it is,
therefore it's a spaceship full of aliens. Bullshit is it not?

The trouble with NDEs is that people forget what the N stands for,
because as Monty Python taught us decades ago, being nearly dead just isn't
good enough. CDEs would be far more interesting: when you find somebody
who has been COMPLETELY dead and buried for a decade or two and comes back
and tells us what experiences he had, then talk to me again.

  John K Clark




Re: Re: Yes, Doctor!

2012-10-13 Thread Roger Clough
Hi John Clark  

This is supposed to be a scientific discussion.

Roger Clough, rclo...@verizon.net 
10/13/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: John Clark  
Receiver: everything-list  
Time: 2012-10-13, 12:50:23 
Subject: Re: Yes, Doctor! 


On Wed, Oct 10, 2012, Roger Clough wrote: 



 NDEs are like UFOs.  

Yes they're both bullshit. The trouble with UFOs is that people forget what the 
U stands for and keep identifying the damn thing as a flying saucer from 
another planet; I see a light in the sky and I don't know what it is, therefore 
it's a spaceship full of aliens. Bullshit is it not? 

The trouble with NDEs is that people forget what the N stands for, because as 
Monty Python taught us decades ago, being nearly dead just isn't good enough. 
CDEs would be far more interesting, when you find somebody that has been 
COMPLETELY dead and buried for a decade or two and comes back and tells us what 
experiences he had then talk to me again. 

John K Clark 





Re: Re: Yes, Doctor!

2012-10-13 Thread John Clark
On Sat, Oct 13, 2012, Roger Clough rclo...@verizon.net wrote:

 This is supposed to be a scientific discussion.


Yes, so why are you talking about NDEs and UFOs? If I were interested in
that crap I wouldn't read a scientific journal or go to the Everything
List; I'd just pick up a copy of the National Enquirer at my local
supermarket. That way I'd also get the astrology column and could read
about the diet tips of the movie stars.

  John K Clark




Re: Yes, Doctor!

2012-10-10 Thread meekerdb

On 10/9/2012 11:54 PM, Kim Jones wrote:
http://www.telegraph.co.uk/news/worldnews/northamerica/usa/9597345/Afterlife-exists-says-top-brain-surgeon.html 




Comments, theories, reflections welcome.

You pays your money and you makes your choice.

Kim Jones


I wouldn't say yes to him.  He thinks your brain doesn't need to function.  He might 
substitute a rock.


Brent




Re: Yes, Doctor!

2012-10-10 Thread Roger Clough
Hi Kim Jones  

My own opinion is that the world is much stranger than we
think it is. NDEs are like UFOs. They don't make any scientific sense,
but they are widely experienced and reported. I don't think
all of the observers can be crazy. But I am a type P in the MBTI,
so loose ends don't bother me. 

Roger Clough, rclo...@verizon.net 
10/10/2012  
Forever is a long time, especially near the end. -Woody Allen 


- Receiving the following content -  
From: Kim Jones  
Receiver: Everything List  
Time: 2012-10-10, 02:54:07 
Subject: Yes, Doctor! 


http://www.telegraph.co.uk/news/worldnews/northamerica/usa/9597345/Afterlife-exists-says-top-brain-surgeon.html
 




Comments, theories, reflections welcome. 


You pays your money and you makes your choice. 


Kim Jones




Re: Yes, Doctor!

2012-10-10 Thread Craig Weinberg
NDEs make sense to me in my model. With personal consciousness as a subset 
of super-personal consciousness, it stands to reason that the personal 
event of one's own death would or could be a super-signifying presentation 
in the native language of one's person (or super-person).

Craig




Re: Yes, Doctor!

2012-10-10 Thread Bruno Marchal


On 10 Oct 2012, at 09:14, meekerdb wrote:


On 10/9/2012 11:54 PM, Kim Jones wrote:

http://www.telegraph.co.uk/news/worldnews/northamerica/usa/9597345/Afterlife-exists-says-top-brain-surgeon.html


Comments, theories, reflections welcome.

You pays your money and you makes your choice.

Kim Jones


I wouldn't say yes to him.  He thinks your brain doesn't need to  
function.  He might substitute a rock.


If your goal is to survive, that might work. If your goal is to  
complete your mission nearby, that might not work, especially from the  
point of view of the observers.


A machine with the cognitive ability sufficient to bet genuinely on an
artificial digital brain can understand that we don't really need one to
survive.


But then the question is: who are you, really?


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-03-04 Thread Craig Weinberg
On Mar 4, 2:04 am, Bruno Marchal marc...@ulb.ac.be wrote:

 But this is a distracting issue given that your point is that NO
 program ever can give a computation able to manifest consciousness.

If an organism is already able to sustain consciousness, I think that
a program could jump start consciousness or assist in the organism
healing itself. My view is that there is no actual program, in the
sense that a shadow is not an independent entity. Computation is an
epiphenomenon of awareness.




  We are relatively manifested by the function of our brain. we are
  not function.

  That seems to make 'functionalism' a misnomer.

 Yes. For many reasons, including that function can be seen as
 extensional objects, defined by their inputs-outputs, or as
 intensional objects, by taking into account how they are computed with
 some details, or by taking into account modalities, resource, etc.
 Putnam's functionalism was fuzzy on the choice of level, leading him
 and its followers to some confusion.

 In my early writing I define comp by it exists a level such that
 functionalism is right at that level.  That existence is not
 constructive (and thus the need of the act of faith), and that allow
 some clarification on what comp can mean.


It all seems very squirrely to me. The relation between computation
and material substrates, the unexplained existence of presentation and
anomalous levels of presentation, the assignment of awareness to self-
referential abstractions...it seems like any real explanatory power of
comp becomes more and more diluted the more we examine it. It seems to
be a vanishing God of the Gaps to me.



  If so, that objection
  evaporates when we use a symmetrical form & content model rather than
  a cause & effect model of brain-mind.

  Form and content are not symmetrical.
  The dependence of content to form requires at least universal
  machine.

  What if content is not dependent on form and requires nothing except
  being real?

 Define real.

To define real overturns the superiority of reality and replaces it
with theory. Real defines itself.


  I think that content and form are anomalous symmetries
  inherent in all real things. It is only our perspective, as human
  content, that makes us assume otherwise. Objectively, form and content
  are different aspects of the same thing - one side a shape of matter
  in space, the other a meaning through time.

 Except that it is usually not symmetrical. Form are 3p describable and
 content are not.

That is their symmetry. They are anomalous. 1p is private, 3p is
public. This isn't coincidental. It isn't that 1p tends to be private
but occasionally our thoughts drip our of our ears, it is that the
symmetry circumscribes 1p and 3p completely. There is no way in which
they are not precisely opposite. Wherever there seems to be, that's
where our way of thinking about it needs to be adjusted. For example,
we might think that space is not the opposite of time, because we
observe that objects move over time. Using the symmetry as our guide
we can ask whether we really do see time passing, or whether we see a
fluid phenomenology of change in a limited experiential window of
'now'. We see objects change position and transform, but we don't see
anything that is not 'now' outside of ourselves. We see things over
there, but never things over then. Our experience of time is inferred
directly through perceptual inertia and memory. Our experience of
space is inferred indirectly through relativity and electromagnetism -
the 3p view of our body. Internally we feel no space - no stable sense
of being able to set something down in our mind and come back to it in
a month or a year.

 Also that contradicts what you say above, that content might be
 independent to form.

Independent from form in the sense that content cannot be created
through manipulations of apparent form alone. Form is not a reaction
to content, it is the back door of content. They are an inseparable
whole, but content is the head and form is the tail. You can't reverse
engineer the head out of tails.

 Then, even if that where true, why would that not apply to machine?

Because a machine is only real to the programmer and the audience. The
material substrate is not generating the machine out of it's own sense
and motive as it does in a living cell or species, it is only
performing its normal behaviors to the extent that it can under an
alien motive which it has no capacity to understand. The substrate is
real, and it has experience on its own level, but not on the level
that the audience might assume - as in this picture:

http://24.media.tumblr.com/tumblr_lmbsmluCns1qep93so1_500.jpg




  Your non-attribution of consciousness to the machine might come from
  the fact that you believe that the machine is only handled by the 3p
  Bp, but it happens that the machine, and its universal self-
  transformation, has self-referentially correct fixed points, and who are
  you to judge if she meant them or not?

Re: Yes Doctor circularity

2012-03-04 Thread Craig Weinberg
On Mar 4, 6:39 am, Stathis Papaioannou stath...@gmail.com wrote:

 While you believed it for 20 years, what was your reasoning?

The same reasoning you see here. That the brain is a finite physical
system which could be modeled or reproduced just like any other
physical system. That decisions we make are essentially made for us by
how the conditions are presented to us. That qualia is a
representational system for neurological feedback. All of those things
are true in a sense, until you try to imagine making a copy of the
universe based on that alone. Not only would such a universe not have
any possibility for awareness or participation, but neither are they
conceivable in any universe which does not place them as primitive.
There is no functional reason for experience to exist, however there
is an experiential reason for function to exist.

Craig




Re: Yes Doctor circularity

2012-03-03 Thread Craig Weinberg
On Mar 3, 1:49 am, Stathis Papaioannou stath...@gmail.com wrote:
 On Sat, Mar 3, 2012 at 12:55 PM, Craig Weinberg whatsons...@gmail.com wrote:
  I understand your argument from the very beginning. I debate people
  about it all week long with the same view exactly. It's by far the
  most popular position I have encountered online. It is the
  conventional wisdom position. There is nothing remotely new or
  difficult to understand about it.

 I know that you understand the claim, but what you don't understand is
 the reasoning behind it.


I understand the reasoning very well. As I say - I used to believe it
myself for 20 years. The problem isn't the reasoning, it's the initial
assumptions.

Craig




Re: Yes Doctor circularity

2012-03-03 Thread Bruno Marchal


On 03 Mar 2012, at 00:05, Craig Weinberg wrote:


On Mar 2, 2:49 pm, Bruno Marchal marc...@ulb.ac.be wrote:



A sentence is not a program.



Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE



Does running that program (or one like it) create a 1p experience?


Very plausibly not. It lacks self-reference and universality.


Why isn't a WHILE loop self-referential?


It will depend on the procedures that you accept for the test and the
action in 'WHILE test DO action'.
Self-reference does not need universality, only sub-universality, and such
a loop can quickly be rich enough for self-reference to occur, but not for
universality. Unless the action is rich enough, again.
But this is a distracting issue given that your point is that NO  
program ever can give a computation able to manifest consciousness.
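
To see concretely what textbook self-reference looks like, here is a
minimal sketch, assuming Python (an editorial illustration, not part of
the original exchange): a quine, a program whose output is exactly its own
source code. Kleene's second recursion theorem guarantees such
self-referring programs exist in any universal language; a bare WHILE loop
that merely counts never consults its own description in this way.

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running those two lines prints those two lines, character for character:
the program contains, and uses, a complete description of itself.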







We are relatively manifested by the function of our brain. we are
not function.


That seems to make 'functionalism' a misnomer.


Yes. For many reasons, including that functions can be seen as
extensional objects, defined by their inputs and outputs, or as
intensional objects, by taking into account how they are computed in
some detail, or by taking into account modalities, resources, etc.
Putnam's functionalism was fuzzy on the choice of level, leading him
and his followers to some confusion.


In my early writing I define comp by 'there exists a level such that
functionalism is right at that level'. That existence is not
constructive (and thus the need for the act of faith), and that allows
some clarification of what comp can mean.









If so, that objection
evaporates when we use a symmetrical form & content model rather
than a cause & effect model of brain-mind.
a cause  effect model of brain-mind.


Form and content are not symmetrical.
The dependence of content to form requires at least universal  
machine.


What if content is not dependent on form and requires nothing except
being real?


Define real.



I think that content and form are anomalous symmetries
inherent in all real things. It is only our perspective, as human
content, that makes us assume otherwise. Objectively, form and content
are different aspects of the same thing - one side a shape of matter
in space, the other a meaning through time.


Except that it is usually not symmetrical. Forms are 3p describable and
contents are not.
Also, that contradicts what you say above, that content might be
independent of form.

Then, even if that were true, why would that not apply to a machine?




Your non-attribution of consciousness to the machine might come from
the fact that you believe that the machine is only handled by the 3p
Bp; but it happens that the machine, and its universal self-
transformation, has self-referentially correct fixed points, and who are
you to judge if she meant them or not? If you define consciousness by
the restriction of Bp to such true fixed points, the PA baby
machine will already not be satisfied if you call her a zombie.


Take for example how a computer writes compared to a person. If you
blow up a character from a digital font enough, you will see the
jagged bits. If you look at a person's hand writing you will see
dynamic expressiveness and character. No two words or letters that a
person writes will be exactly the same.

A computer, of course, produces only identical characters, and its text
has no emotional connection to the author. There will never be a
computer that signs its John Hancock any differently than any other
computer - unless programmed specifically to do so. All machines have
the same personality by default (which is no personality).

This is a good example of how we can project our own perceptions on an
inanimate, unconscious canvas and see our own reflection in it. These
letters only look like letters to us, but to a computer, they look
like nothing whatsoever.


You confuse the proposition 'could a computer think?' with the
question 'could today's man-made computers think?'.





Reducing consciousness into mathematical terms


Which comp precisely does not do. Comp might be said to be theologicalist,
even if 99% mathematicalist.





can yield only a
mathematical sculpture that reminds us of consciousness. It is an
inside out approach, a reverse engineering of meaning by modeling
grammar and punctuation extensively. There is much more to awareness
than Bp & p.


You could refute plasma physics by saying that plasmas have nothing to
do with ink and paper, which typically appear in books on plasma
physics.


Ironically this is, in the language of the Löbian machine, a confusion
very similar to the confusion between Bp and Bp & p.
But I think that you need to invest more time in the technical details
to appreciate this, to be honest.
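
A gloss on that notation, in the standard provability-logic reading
(added for orientation; the thread itself takes it for granted): Bp
abbreviates 'p is provable by the machine', Gödel's arithmetically
definable provability predicate, while Bp & p abbreviates 'p is provable
and p is true', the Theaetetus definition of knowledge.

    Bp      : p is provable         (3p; definable inside arithmetic, Gödel)
    Bp & p  : p is provable and p   (1p knower; not definable inside, Tarski)

For a sound machine the two are extensionally equivalent, but the machine
cannot prove that equivalence, so they obey different logics; that gap is
the confusion alluded to above.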











If
you were able to make a living zygote large enough to walk into,  
it
wouldn't be like that. Structures would emerge spontaneously out  
of

circulating fluid and molecules acting spontaneously and
simultaneously, not just in chain reaction.



It doesn't really 

Re: Yes Doctor circularity

2012-03-02 Thread Bruno Marchal


On 01 Mar 2012, at 22:32, Craig Weinberg wrote:


On Mar 1, 7:34 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 29 Feb 2012, at 23:29, Craig Weinberg wrote:




There is no such thing as evidence when it comes to  
qualitative

phenomenology. You don't need evidence to infer that a clock
doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?



By looking at the structure of the clock. It does not implement
self-
reference. It is a finite automaton, much lower in complexity
than a
universal machine.



Knowing what time it is doesn't require self reference.



That's what I said, and it makes my point.


The difference between a clock knowing what time it is, Google  
knowing

what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp  
claims

that certain kinds of processes have 1p experiences associated with
them it has to explain why that should be the case.


Because they have the ability to refer to themselves and understand
the difference between 1p, 3p, the mind-body problem, etc.
That some numbers have the ability to refer to themselves is proved  
in

computer science textbook.
A clock lacks it. A computer has it.


This sentence refers to 'itself' too. I see no reason why any number
or computer would have any more of a 1p experience than that.


A sentence is not a program.










By comp it
should be generated by the 1p experience of the logic of the gears
of
the clock.



?



If the Chinese Room is intelligent, then why not gears?


The chinese room is not intelligent.


I agree.


The person which supervene on the
some computation done by the chinese room might be intelligent.


Like a metaphysical 'person' that arises out of the computation ?


It is more like prime numbers arising from + and *. Or like a chess
player arising from some program, except that prime numbers and chess
players have (today) no universal self-referential abilities.
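
A tiny illustration of that 'arising', as a minimal Python sketch (an
editorial example; nothing below has a primality primitive): primality
simply emerges as a pattern in the multiplication relation.

    def is_prime(n):
        # n is prime iff n > 1 and no product a * b of smaller
        # factors (both at least 2) equals n: primality read off
        # from * alone.
        return n > 1 and all(a * b != n
                             for a in range(2, n)
                             for b in range(2, n))

    print([n for n in range(2, 30) if is_prime(n)])   # 2, 3, 5, 7, ...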



















By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we  
can't

expect it to show any signs of being a universal machine yet,
but by
comp, we cannot assume that clocks can't know what time it is  
just
because these primitive clocks don't know how to tell us that  
they

know it yet.



Then the universal timekeeping would be conscious, not the baby
clock.
Level confusion.



A Swiss watch has a fairly complicated movement. How many watches
does
it take before they collectively have a chance at knowing what
time it
is? If all self referential machines arise from finite automation
though (by UDA inevitability?), the designation of any Level at
all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?


They exist arithmetically, in many relative way, that is to  
universal

numbers. Relative Evolution exists in higher level description of
those relation.
Evolution of species, presuppose arithmetic and even comp,  
plausibly.

Genetics is already digital relatively to QM.



My question though was how many watches does it take to make an
intelligent watch?


Difficult question. One hundred might be enough, but a good engineers
might be able to optimize it. I would not be so much astonished that
one clock is enough, to implement a very simple (and inefficacious)
universal system, but then you have to rearrange all the parts of  
that

clock.


The misapprehensions of comp are even clearer to me imagining a
universal system in clockwork mechanisms. Electronic computers sort of
mesmerize us because electricity seems magical to us, but having a
warehouse full of brass gears manually clattering together and
assuming that there is a  conscious entity experiencing something
there is hard to seriously consider. It's like Leibniz' Windmill.


Or like Ned Block's Chinese people computer. This is not convincing. It
is just helpful to understand that consciousness relies on logical
informational patterns rather than on matter. That problem is not a problem
for comp, but for theories without a notion of first person. It breaks
down when you can apply a theory of knowledge, which is the case for
machines, thanks to incompleteness. Consciousness is in the true
fixed points of self-reference. It is not easy to explain this briefly,
and it relies on Gödel's and Tarski's work. There will be opportunities
to come back to this.





If
you were able to make a living zygote large enough to walk into, it
wouldn't be like that. Structures would emerge spontaneously out of
circulating fluid and molecules acting spontaneously and
simultaneously, not just in chain reaction.




It doesn't really make sense to me if comp were
true that there would be anything other than QM.


?


Why would there be any other 'levels'?


So you assume QM in your theory. I do not.




No matter how complicated a
computer program is, it doesn't need to form some kind of 

Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:


 You do assume, though, that brain function can't be replicated by a
 machine.

No, I presume that consciousness is not limited to what we consider to
be brain function. Brain function, as we understand it now, is already
a machine.

That has no firmer basis than a claim that kidney function
 cannot be replicated by a machine. After all, brains and kidneys are
 made out of the same stuff.

The difference is that I am not my kidneys, but the same cannot be
said about my brain. It doesn't matter to me if my kidneys aren't
aware, as long as they keep me alive. The brain is a completely
different story. Keeping my body alive is of no concern to anyone
unless I am able to participate, and participate directly, in the life
of that body. If a replicated brain has no awareness, or if its
awareness is not 'me', then it is no better than a kidney grafted onto
a spinal cord.

You could bite the bullet and declare
 yourself a vitalist.

I'm not though. I'm a panexperientialist. I only point out that there
is a difference between the experience of a kidney, a brain, and an
array of transistors. You can't make a jellyfish out of clocks or a
glass of water out of sand.

Craig




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 4:43 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 01 Mar 2012, at 22:32, Craig Weinberg wrote:

  There is no such thing as evidence when it comes to
  qualitative
  phenomenology. You don't need evidence to infer that a clock
  doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement
  self-
  reference. It is a finite automaton, much lower in complexity
  than a
  universal machine.

  Knowing what time it is doesn't require self reference.

  That's what I said, and it makes my point.

  The difference between a clock knowing what time it is, Google
  knowing
  what you mean when you search for it, and an AI bot knowing how to
  have a conversation with someone is a matter of degree. If comp
  claims
  that certain kinds of processes have 1p experiences associated with
  them it has to explain why that should be the case.

  Because they have the ability to refer to themselves and understand
  the difference between 1p, 3p, the mind-body problem, etc.
  That some numbers have the ability to refer to themselves is proved
  in
  computer science textbook.
  A clock lacks it. A computer has it.

  This sentence refers to 'itself' too. I see no reason why any number
  or computer would have any more of a 1p experience than that.

 A sentence is not a program.

Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE

Does running that program (or one like it) create a 1p experience?
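
For concreteness, that pseudocode transcribes to something like the sketch
below (assuming Python; the variable name is taken from the quoted text).
Whatever one makes of the 1p question, note what the loop is structurally:
started above zero it increments forever, a fixed control structure that
never consults or modifies its own description, which is why it is classed
below a universal, self-referential machine.

    program = 1
    while program > 0:           # once positive, this never terminates
        program = program + 1    # pure counting; no access to its own code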




  By comp it
  should be generated by the 1p experience of the logic of the gears
  of
  the clock.

  ?

  If the Chinese Room is intelligent, then why not gears?

  The chinese room is not intelligent.

  I agree.

  The person which supervene on the
  some computation done by the chinese room might be intelligent.

  Like a metaphysical 'person' that arises out of the computation ?

 It is more like prime numbers arising from + and *. Or like a chess
 player arising from some program, except that prime number and chess
 player have (today) no universal self-referential abilities.

That sounds like what I said.




  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we
  can't
  expect it to show any signs of being a universal machine yet,
  but by
  comp, we cannot assume that clocks can't know what time it is
  just
  because these primitive clocks don't know how to tell us that
  they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock.
  Level confusion.

  A Swiss watch has a fairly complicated movement. How many watches
  does
  it take before they collectively have a chance at knowing what
  time it
  is? If all self referential machines arise from finite automation
  though (by UDA inevitability?), the designation of any Level at
  all is
  arbitrary. How does comp conceive of self referential machines
  evolving in the first place?

  They exist arithmetically, in many relative way, that is to
  universal
  numbers. Relative Evolution exists in higher level description of
  those relation.
  Evolution of species, presuppose arithmetic and even comp,
  plausibly.
  Genetics is already digital relatively to QM.

  My question though was how many watches does it take to make an
  intelligent watch?

  Difficult question. One hundred might be enough, but a good engineers
  might be able to optimize it. I would not be so much astonished that
  one clock is enough, to implement a very simple (and inefficacious)
  universal system, but then you have to rearrange all the parts of
  that
  clock.

  The misapprehensions of comp are even clearer to me imagining a
  universal system in clockwork mechanisms. Electronic computers sort of
  mesmerize us because electricity seems magical to us, but having a
  warehouse full of brass gears manually clattering together and
  assuming that there is a  conscious entity experiencing something
  there is hard to seriously consider. It's like Leibniz' Windmill.

 Or like Ned block chinese people computer. This is not convincing.

Why not? Because our brain can be broken down into components also and
we assume that we are the function of our brain? If so, that objection
evaporates when we use a symmetrical form & content model rather than
a cause & effect model of brain-mind.

 It
 is just helpful to understand that consciousness relies on logical
 informational patterns that on matter. That problem is not a problem
 for comp, but for theories without notion of first person. It breaks
 down when you can apply a theory of knowledge, which is the case for
 machine, thanks to incompleteness. Consciousness is in the true
 fixed point of self-reference. It is not easy to explain this shortly
 and it relies on Gödel and Tarski works. There will be opportunities
 to come back on this.

All of that still sounds like the easy problem of consciousness.
Arithmetic can show 

Re: Yes Doctor circularity

2012-03-02 Thread Bruno Marchal


On 02 Mar 2012, at 18:03, Craig Weinberg wrote:


On Mar 2, 4:43 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 01 Mar 2012, at 22:32, Craig Weinberg wrote:


There is no such thing as evidence when it comes to
qualitative
phenomenology. You don't need evidence to infer that a clock
doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?



By looking at the structure of the clock. It does not implement
self-
reference. It is a finite automaton, much lower in complexity
than a
universal machine.



Knowing what time it is doesn't require self reference.



That's what I said, and it makes my point.



The difference between a clock knowing what time it is, Google
knowing
what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp
claims
that certain kinds of processes have 1p experiences associated  
with

them it has to explain why that should be the case.



Because they have the ability to refer to themselves and understand
the difference between 1p, 3p, the mind-body problem, etc.
That some numbers have the ability to refer to themselves is proved
in
computer science textbook.
A clock lacks it. A computer has it.


This sentence refers to 'itself' too. I see no reason why any  
number

or computer would have any more of a 1p experience than that.


A sentence is not a program.


Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE

Does running that program (or one like it) create a 1p experience?


Very plausibly not. It lacks self-reference and universality.









By comp it
should be generated by the 1p experience of the logic of the  
gears

of
the clock.



?



If the Chinese Room is intelligent, then why not gears?



The chinese room is not intelligent.



I agree.



The person which supervene on the
some computation done by the chinese room might be intelligent.



Like a metaphysical 'person' that arises out of the computation ?


It is more like prime numbers arising from + and *. Or like a chess
player arising from some program, except that prime number and chess
player have (today) no universal self-referential abilities.


That sounds like what I said.






By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we
can't
expect it to show any signs of being a universal machine yet,
but by
comp, we cannot assume that clocks can't know what time it is
just
because these primitive clocks don't know how to tell us that
they
know it yet.



Then the universal timekeeping would be conscious, not the baby
clock.
Level confusion.


A Swiss watch has a fairly complicated movement. How many  
watches

does
it take before they collectively have a chance at knowing what
time it
is? If all self referential machines arise from finite  
automation

though (by UDA inevitability?), the designation of any Level at
all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?



They exist arithmetically, in many relative way, that is to
universal
numbers. Relative Evolution exists in higher level  
description of

those relation.
Evolution of species, presuppose arithmetic and even comp,
plausibly.
Genetics is already digital relatively to QM.



My question though was how many watches does it take to make an
intelligent watch?


Difficult question. One hundred might be enough, but a good  
engineers
might be able to optimize it. I would not be so much astonished  
that

one clock is enough, to implement a very simple (and inefficacious)
universal system, but then you have to rearrange all the parts of
that
clock.



The misapprehensions of comp are even clearer to me imagining a
universal system in clockwork mechanisms. Electronic computers  
sort of

mesmerize us because electricity seems magical to us, but having a
warehouse full of brass gears manually clattering together and
assuming that there is a  conscious entity experiencing something
there is hard to seriously consider. It's like Leibniz' Windmill.


Or like Ned block chinese people computer. This is not convincing.


Why not? Because our brain can be broken down into components also and
we assume that we are the function of our brain?


We are relatively manifested by the function of our brain. We are
not the function.





If so, that objection
evaporates when we use a symmetrical form & content model rather than
a cause & effect model of brain-mind.


Form and content are not symmetrical.
The dependence of content on form requires at least a universal machine.






It
is just helpful to understand that consciousness relies on logical
informational patterns that on matter. That problem is not a problem
for comp, but for theories without notion of first person. It breaks
down when you can apply a theory of knowledge, which is the case for
machine, thanks to incompleteness. Consciousness is in the true
fixed point of self-reference. 

Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 2:49 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 02 Mar 2012, at 18:03, Craig Weinberg wrote:


  There is no such thing as evidence when it comes to
  qualitative
  phenomenology. You don't need evidence to infer that a clock
  doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement
  self-
  reference. It is a finite automaton, much lower in complexity
  than a
  universal machine.

  Knowing what time it is doesn't require self reference.

  That's what I said, and it makes my point.

  The difference between a clock knowing what time it is, Google
  knowing
  what you mean when you search for it, and an AI bot knowing how to
  have a conversation with someone is a matter of degree. If comp
  claims
  that certain kinds of processes have 1p experiences associated
  with
  them it has to explain why that should be the case.

  Because they have the ability to refer to themselves and understand
  the difference between 1p, 3p, the mind-body problem, etc.
  That some numbers have the ability to refer to themselves is proved
  in
  computer science textbook.
  A clock lacks it. A computer has it.

  This sentence refers to 'itself' too. I see no reason why any
  number
  or computer would have any more of a 1p experience than that.

  A sentence is not a program.

  Okay, WHILE program > 0 DO program. Program = Program + 1. END WHILE

  Does running that program (or one like it) create a 1p experience?

 Very plausibly not. It lacks self-reference and universality.

Why isn't a WHILE loop self-referential?




  By comp it
  should be generated by the 1p experience of the logic of the
  gears
  of
  the clock.

  ?

  If the Chinese Room is intelligent, then why not gears?

  The chinese room is not intelligent.

  I agree.

  The person which supervene on the
  some computation done by the chinese room might be intelligent.

  Like a metaphysical 'person' that arises out of the computation ?

  It is more like prime numbers arising from + and *. Or like a chess
  player arising from some program, except that prime number and chess
  player have (today) no universal self-referential abilities.

  That sounds like what I said.

  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we
  can't
  expect it to show any signs of being a universal machine yet,
  but by
  comp, we cannot assume that clocks can't know what time it is
  just
  because these primitive clocks don't know how to tell us that
  they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock.
  Level confusion.

  A Swiss watch has a fairly complicated movement. How many
  watches
  does
  it take before they collectively have a chance at knowing what
  time it
  is? If all self referential machines arise from finite
  automation
  though (by UDA inevitability?), the designation of any Level at
  all is
  arbitrary. How does comp conceive of self referential machines
  evolving in the first place?

  They exist arithmetically, in many relative way, that is to
  universal
  numbers. Relative Evolution exists in higher level
  description of
  those relation.
  Evolution of species, presuppose arithmetic and even comp,
  plausibly.
  Genetics is already digital relatively to QM.

  My question though was how many watches does it take to make an
  intelligent watch?

  Difficult question. One hundred might be enough, but a good
  engineers
  might be able to optimize it. I would not be so much astonished
  that
  one clock is enough, to implement a very simple (and inefficacious)
  universal system, but then you have to rearrange all the parts of
  that
  clock.

  The misapprehensions of comp are even clearer to me imagining a
  universal system in clockwork mechanisms. Electronic computers
  sort of
  mesmerize us because electricity seems magical to us, but having a
  warehouse full of brass gears manually clattering together and
  assuming that there is a  conscious entity experiencing something
  there is hard to seriously consider. It's like Leibniz' Windmill.

  Or like Ned block chinese people computer. This is not convincing.

  Why not? Because our brain can be broken down into components also and
  we assume that we are the function of our brain?

 We are relatively manifested by the function of our brain. we are
 not function.

That seems to make 'functionalism' a misnomer.


  If so, that objection
  evaporates when we use a symmetrical form & content model rather than
  a cause & effect model of brain-mind.

 Form and content are not symmetrical.
 The dependence of content to form requires at least universal machine.

What if content is not dependent on form and requires nothing except
being real? I think that content and form are anomalous symmetries
inherent in all real things. It is only our perspective, as human

Re: Yes Doctor circularity

2012-03-02 Thread Stathis Papaioannou
On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
 On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:


 You do assume, though, that brain function can't be replicated by a
 machine.

 No, I presume that consciousness is not limited to what we consider to
 be brain function. Brain function, as we understand it now, is already
 a machine.

You've moved on since I discussed this with you a few months ago;
back then you claimed that brain function (i.e. observable function
or behaviour) could not be replicated by a machine. If you now accept
that it can be, the further argument is that it is not possible to
replicate brain function without also replicating consciousness. This
is valid even if it isn't actually possible to replicate brain function.
We've discussed this before and I don't think you understand it.


-- 
Stathis Papaioannou




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 7:46 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:

  You do assume, though, that brain function can't be replicated by a
  machine.

  No, I presume that consciousness is not limited to what we consider to
  be brain function. Brain function, as we understand it now, is already
  a machine.

 You've moved on since I discussed this with you a few months ago,
 since then you claimed that brain function (i.e. observable function
 or behaviour) could not be replicated by machine.

No, there's no change. Brain function consists of physiological
processes, but physiology is too broad and generic to resolve subtle
anthropological processes. Eventually any machine replication will be
exposed to some human observer. This is because the idea of
'observable function or behavior' presumes a universal observer or
absolute frame of reference, which I have no reason to entertain as
legitimate. Are these words made of English letters or black pixels or
RGB pixels... colorless electrons...? A machine can produce the
electrons, the pixels, the letters, but not the cadence, the ideas,
the fluid presence of a singular voice over time. These are subtle
kinds of considerations but they make a difference over time. Machines
repeat themselves in an unnatural way. They are tone deaf and socially
awkward. They have no charisma. It shows. Brains have no charisma
either, so reproducing their function does not reproduce that. It is
the character which drives the brain function, not the other way
around.

 If you now accept
 this, the further argument is that it is not possible to replicate
 brain function without also replicating consciousness.

No, you're missing my argument now as you have in the past.

 This is valid
 even if it isn't actually possible to replicate brain function. We've
 discussed this before and I don't think you understand it.

I understand your argument from the very beginning. I debate people
about it all week long with the same view exactly. It's by far the
most popular position I have encountered online. It is the
conventional wisdom position. There is nothing remotely new or
difficult to understand about it.

Craig




Re: Yes Doctor circularity

2012-03-02 Thread Terren Suydam
On Fri, Mar 2, 2012 at 8:55 PM, Craig Weinberg whatsons...@gmail.com wrote:
 On Mar 2, 7:46 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Sat, Mar 3, 2012 at 3:01 AM, Craig Weinberg whatsons...@gmail.com wrote:
  On Mar 1, 8:12 pm, Stathis Papaioannou stath...@gmail.com wrote:

  You do assume, though, that brain function can't be replicated by a
  machine.

  No, I presume that consciousness is not limited to what we consider to
  be brain function. Brain function, as we understand it now, is already
  a machine.

 You've moved on since I discussed this with you a few months ago,
 since then you claimed that brain function (i.e. observable function
 or behaviour) could not be replicated by machine.

 No, there's no change. Brain function consists of physiological
 processes, but physiology is too broad and generic to resolve subtle
 anthropological processes. Eventually any machine replication will be
 exposed to some human observer. This is because the idea of
 'observable function or behavior' presumes a universal observer or
 absolute frame of reference, which I have no reason to entertain as
 legitimate. Are these words made of English letters or black pixels or
 RGB pixels...colorless electrons..? A machine can produce the
 electrons, the pixels, the letters, but not the cadence, the ideas,
 the fluid presence of a singular voice over time. These are subtle
 kinds of considerations but they make a difference over time. Machines
 repeat themselves in an unnatural way. They are tone deaf and socially
 awkward. They have no charisma. It shows. Brains have no charisma
 either, so reproducing their function does not reproduce that. It is
 the character which drives the brain function, not the other way
 around.

 If you now accept
 this, the further argument is that it is not possible to replicate
 brain function without also replicating consciousness.

 No, you're missing my argument now as you have in the past.

 This is valid
 even if it isn't actually possible to replicate brain function. We've
 discussed this before and I don't think you understand it.

 I understand your argument from the very beginning. I debate people
 about it all week long with the same view exactly. It's by far the
 most popular position I have encountered online. It is the
 conventional wisdom position. There is nothing remotely new or
 difficult to understand about it.

 Craig

Or, maybe it's ... http://en.wikipedia.org/wiki/Dunning-Kruger




Re: Yes Doctor circularity

2012-03-02 Thread Craig Weinberg
On Mar 2, 9:41 pm, Terren Suydam terren.suy...@gmail.com wrote:


 Or, maybe it's ... http://en.wikipedia.org/wiki/Dunning-Kruger

Or this...
http://www.alternet.org/health/154225/would_we_have_drugged_up_einstein_how_anti-authoritarianism_is_deemed_a_mental_health_problem




Re: Yes Doctor circularity

2012-03-02 Thread Stephen P. King

On 3/2/2012 10:17 PM, Craig Weinberg wrote:

On Mar 2, 9:41 pm, Terren Suydam terren.suy...@gmail.com wrote:


Or, maybe it's ... http://en.wikipedia.org/wiki/Dunning-Kruger

Or this...
http://www.alternet.org/health/154225/would_we_have_drugged_up_einstein_how_anti-authoritarianism_is_deemed_a_mental_health_problem


Hear Hear!

Drug us into compliance, please! Ever read Brave New World
(http://www.huxley.net/bnw/)? I have seen firsthand the effects of
anti-ADD drugs...


Onward!

Stephen




Re: Yes Doctor circularity

2012-03-02 Thread Stathis Papaioannou
On Sat, Mar 3, 2012 at 12:55 PM, Craig Weinberg whatsons...@gmail.com wrote:

 I understand your argument from the very beginning. I debate people
 about it all week long with the same view exactly. It's by far the
 most popular position I have encountered online. It is the
 conventional wisdom wisdom position. There is nothing remotely new or
 difficult to understand about it.

I know that you understand the claim, but what you don't understand is
the reasoning behind it.


-- 
Stathis Papaioannou




Re: Yes Doctor circularity

2012-03-01 Thread Bruno Marchal


On 29 Feb 2012, at 23:29, Craig Weinberg wrote:


On Feb 29, 1:30 pm, Bruno Marchal marc...@ulb.ac.be wrote:

On 29 Feb 2012, at 17:10, Craig Weinberg wrote:






On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:



There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock
doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?


By looking at the structure of the clock. It does not implement  
self-
reference. It is a finite automaton, much lower in complexity  
than a

universal machine.



Knowing what time it is doesn't require self reference.


That's what I said, and it makes my point.


The difference between a clock knowing what time it is, Google knowing
what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp claims
that certain kinds of processes have 1p experiences associated with
them it has to explain why that should be the case.


Because they have the ability to refer to themselves and understand  
the difference between 1p, 3p, the mind-body problem, etc.
That some numbers have the ability to refer to themselves is proved in
computer science textbooks.

A clock lacks it. A computer has it.







By comp it
should be generated by the 1p experience of the logic of the gears  
of

the clock.


?


If the Chinese Room is intelligent, then why not gears?


The Chinese room is not intelligent. The person which supervenes on
the computation done by the Chinese room might be intelligent.







By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we can't
expect it to show any signs of being a universal machine yet,  
but by

comp, we cannot assume that clocks can't know what time it is just
because these primitive clocks don't know how to tell us that they
know it yet.



Then the universal timekeeping would be conscious, not the baby
clock.
Level confusion.


A Swiss watch has a fairly complicated movement. How many watches  
does
it take before they collectively have a chance at knowing what  
time it

is? If all self referential machines arise from finite automation
though (by UDA inevitability?), the designation of any Level at  
all is

arbitrary. How does comp conceive of self referential machines
evolving in the first place?


They exist arithmetically, in many relative ways, that is, relative to
universal numbers. Relative evolution exists in higher-level descriptions
of those relations.
Evolution of species presupposes arithmetic, and even comp, plausibly.
Genetics is already digital relative to QM.


My question though was how many watches does it take to make an
intelligent watch?


Difficult question. One hundred might be enough, but a good engineer
might be able to optimize it. I would not be so astonished if one clock
were enough to implement a very simple (and inefficacious) universal
system, but then you would have to rearrange all the parts of that
clock.





It doesn't really make sense to me if comp were
true that there would be anything other than QM.


?



Why go through the
formality of genetics or cells? What would possibly be the point? If
silicon makes just as good of a person as do living mammal cells, why
not just make people out of quantum to begin with?


Nature does that, but it takes time. If you have a brain disease, your
answer is like a doctor who would tell you: just wait until life appears
on some planet, and with some luck it will produce your brain.
But my interest in comp is not in the practice, but in the conceptual  
revolution it brings.







A machine which can only add cannot be universal.
A machine which can only multiply cannot be universal.
But a machine which can add and multiply is universal.


A calculator can add and multiply. Will it know what time it is if I
connect it to a clock?


Too much ambiguity, but a priori: yes. Actually it does not need a
clock. + and * can simulate the clock. A clock is part of all
computers, explicitly or implicitly.
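
(A hedged illustration of both arithmetical claims, not Bruno's own
construction: in Conway's FRACTRAN a program is just a list of
fractions, and one step multiplies the current integer by the first
fraction that yields an integer. Registers live in prime exponents, so
a counter - a clock - is one register, and Conway proved the scheme
Turing universal. A tiny interpreter plus an addition program:)

    from fractions import Fraction

    # Run a FRACTRAN program: repeatedly multiply n by the first
    # fraction that keeps it an integer; halt when none does.
    def fractran(program, n, max_steps=10000):
        for _ in range(max_steps):
            for f in program:
                if (n * f).denominator == 1:
                    n = int(n * f)
                    break
            else:
                return n
        return n

    # Addition by multiplication alone: starting from 2**a * 3**b,
    # the one-fraction program [3/2] halts at 3**(a + b).
    a, b = 4, 3
    print(fractran([Fraction(3, 2)], 2**a * 3**b) == 3**(a + b))   # True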








The machine is a whole; its function belongs to none of its parts.
When the components are unrelated, the machine does not work. The
machine works well when its components are well assembled, be it
artificially, naturally, virtually or arithmetically (that does not
matter, and can't matter).


The machine isn't a whole though. Any number of parts can be replaced
without irreversibly killing the machine.


Like us. There is no construct in the human body which lasts for more
than seven years.
Brains have a much shorter material identity. Only bones change more
slowly, but even they are replaced quasi-completely in seven years,
according to biologists.








All known theories in biology are reducible to QM, which is Turing
emulable. So your theory/opinion is that all known theories are false.


They aren't false, they are only 

Re: Yes Doctor circularity

2012-03-01 Thread Craig Weinberg
On Mar 1, 7:34 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 29 Feb 2012, at 23:29, Craig Weinberg wrote:


  There is no such thing as evidence when it comes to qualitative
  phenomenology. You don't need evidence to infer that a clock doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement
  self-reference. It is a finite automaton, much lower in complexity
  than a universal machine.

  Knowing what time it is doesn't require self reference.

  That's what I said, and it makes my point.

  The difference between a clock knowing what time it is, Google knowing
  what you mean when you search for it, and an AI bot knowing how to
  have a conversation with someone is a matter of degree. If comp claims
  that certain kinds of processes have 1p experiences associated with
  them it has to explain why that should be the case.

 Because they have the ability to refer to themselves and understand
 the difference between 1p, 3p, the mind-body problem, etc.
 That some numbers have the ability to refer to themselves is proved in
 computer science textbooks.
 A clock lacks it. A computer has it.

This sentence refers to 'itself' too. I see no reason why any number
or computer would have any more of a 1p experience than that.




  By comp it should be generated by the 1p experience of the logic of
  the gears of the clock.

  ?

  If the Chinese Room is intelligent, then why not gears?

 The Chinese room is not intelligent.

I agree.

 The person who supervenes on the computation done by the Chinese room
 might be intelligent.

Like a metaphysical 'person' that arises out of the computation?

  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we can't
  expect it to show any signs of being a universal machine yet, but by
  comp, we cannot assume that clocks can't know what time it is just
  because these primitive clocks don't know how to tell us that they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock. Level confusion.

  A Swiss watch has a fairly complicated movement. How many watches
  does it take before they collectively have a chance at knowing what
  time it is? If all self-referential machines arise from finite
  automata though (by UDA inevitability?), the designation of any Level
  at all is arbitrary. How does comp conceive of self-referential
  machines evolving in the first place?

  They exist arithmetically, in many relative ways, that is, relative
  to universal numbers. Relative evolution exists in higher-level
  descriptions of those relations.
  The evolution of species presupposes arithmetic, and even comp,
  plausibly. Genetics is already digital relative to QM.

  My question though was how many watches does it take to make an
  intelligent watch?

 Difficult question. One hundred might be enough, but a good engineer
 might be able to optimize it. I would not be so much astonished if one
 clock were enough to implement a very simple (and inefficacious)
 universal system, but then you have to rearrange all the parts of that
 clock.

The misapprehensions of comp are even clearer to me imagining a
universal system in clockwork mechanisms. Electronic computers sort of
mesmerize us because electricity seems magical to us, but having a
warehouse full of brass gears manually clattering together and
assuming that there is a  conscious entity experiencing something
there is hard to seriously consider. It's like Leibniz' Windmill. If
you were able to make a living zygote large enough to walk into, it
wouldn't be like that. Structures would emerge spontaneously out of
circulating fluid and molecules acting spontaneously and
simultaneously, not just in chain reaction.


  It doesn't really make sense to me if comp were
  true that there would be anything other than QM.

 ?

Why would there be any other 'levels'? No matter how complicated a
computer program is, it doesn't need to form some kind of non-
programmatic precipitate or accretion. What would be the point and how
would such a thing even be accomplished?


  Why go through the
  formality of genetics or cells? What would possibly be the point? If
  silicon makes just as good of a person as do living mammal cells, why
  not just make people out of quantum to begin with?

 Nature does that, but it takes time. If you have a brain disease, your
 answer is like a doctor telling you: just wait until life appears on
 some planet, and with some luck it will rebuild your brain.
 But my interest in comp is not in the practice, but in the conceptual
 revolution it brings.

I think that comp has conceptual validity, and actually could help us
understand consciousness in spite of it being exactly wrong about it.
Because of the disorientation problem, being wrong about it may in
fact be the only way to study it...as long as you know 

Re: Yes Doctor circularity

2012-03-01 Thread Stathis Papaioannou
On Fri, Mar 2, 2012 at 8:32 AM, Craig Weinberg whatsons...@gmail.com wrote:

 It depends how good the artificial brain stem was. The more of the
 brain you try to replace, the more intolerant it will be, probably
 exponentially so. Just as having four prosthetic limbs would be more
 of a burden than just one, the more the ratio of living brain to
 prosthetic brain tilts toward the prosthetic, the less person there is
 left. It's not strictly linear, as neuroplasticity would allow the
 person to scale down to what is left of the natural brain (as in cases
 where people have an entire hemisphere removed), and even if the
 prosthetics were good it is not clear that it would feel the same for
 the person. If the person survived with an artificial brain stem, they
 may never again feel that they were 'really' in their body again. If
 the cortex were replaced, they may regress to infancy and never be
 able to learn to use the new brain.

It's not a completely adequate artificial brain stem or cortex if it
doesn't work properly, is it? Just as an artificial heart that doesn't
increase output appropriately in response to exercise is not
completely adequate, though it might be adequate to prevent the person
from dying immediately.

-- 
Stathis Papaioannou




Re: Yes Doctor circularity

2012-03-01 Thread Craig Weinberg
On Mar 1, 5:41 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Fri, Mar 2, 2012 at 8:32 AM, Craig Weinberg whatsons...@gmail.com wrote:
  It depends how good the artificial brain stem was. The more of the
  brain you try to replace, the more intolerant it will be, probably
  exponentially so. Just as having four prosthetic limbs would be more
  of a burden than just one, the more the ratio of living brain to
  prosthetic brain tilts toward the prosthetic, the less person there is
  left. It's not strictly linear, as neuroplasticity would allow the
  person to scale down to what is left of the natural brain (as in cases
  where people have an entire hemisphere removed), and even if the
  prosthetics were good it is not clear that it would feel the same for
  the person. If the person survived with an artificial brain stem, they
  may never again feel that they were 'really' in their body again. If
  the cortex were replaced, they may regress to infancy and never be
  able to learn to use the new brain.

 It's not a completely adequate artificial brain stem or cortex if it
 doesn't work properly, is it? Just as an artificial heart that doesn't
 increase output appropriately in response to exercise is not
 completely adequate, though it might be adequate to prevent the person
 from dying immediately.

That's what I'm saying. It may be the case though that no artificial
organ can be completely adequate in every sense - or even a
transplant. It's one thing when it's a kidney, but when it's a brain,
I don't think we can assume anything.

Craig




Re: Yes Doctor circularity

2012-03-01 Thread Stathis Papaioannou
On Fri, Mar 2, 2012 at 10:48 AM, Craig Weinberg whatsons...@gmail.com wrote:
 On Mar 1, 5:41 pm, Stathis Papaioannou stath...@gmail.com wrote:
 On Fri, Mar 2, 2012 at 8:32 AM, Craig Weinberg whatsons...@gmail.com wrote:
  It depends how good the artificial brain stem was. The more of the
  brain you try to replace, the more intolerant it will be, probably
  exponentially so. Just as having four prosthetic limbs would be more
  of a burden than just one, the more the ratio of living brain to
  prosthetic brain tilts toward the prosthetic, the less person there is
  left. It's not strictly linear, as neuroplasticity would allow the
  person to scale down to what is left of the natural brain (as in cases
  where people have an entire hemisphere removed), and even if the
  prosthetics were good it is not clear that it would feel the same for
  the person. If the person survived with an artificial brain stem, they
  may never again feel that they were 'really' in their body again. If
  the cortex were replaced, they may regress to infancy and never be
  able to learn to use the new brain.

 It's not a completely adequate artificial brain stem or cortex if it
 doesn't work properly, is it? Just as an artificial heart that doesn't
 increase output appropriately in response to exercise is not
 completely adequate, though it might be adequate to prevent the person
 from dying immediately.

 That's what I'm saying. It may be the case though that no artificial
 organ can be completely adequate in every sense - or even a
 transplant. It's one thing when it's a kidney, but when it's a brain,
 I don't think we can assume anything.

You do assume, though, that brain function can't be replicated by a
machine. That has no firmer basis than a claim that kidney function
cannot be replicated by a machine. After all, brains and kidneys are
made out of the same stuff. You could bite the bullet and declare
yourself a vitalist.


-- 
Stathis Papaioannou




Re: Yes Doctor circularity

2012-02-29 Thread Bruno Marchal


On 28 Feb 2012, at 20:18, Craig Weinberg wrote:


On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:



There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock doesn't
know what time it is.


A clock has no self-referential ability.


How do you know?


By looking at the structure of the clock. It does not implement
self-reference. It is a finite automaton, much lower in complexity than
a universal machine.






By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we can't
expect it to show any signs of being a universal machine yet, but by
comp, we cannot assume that clocks can't know what time it is just
because these primitive clocks don't know how to tell us that they
know it yet.


Then the universal timekeeping would be conscious, not the baby clock.  
Level confusion.







You reason like that: no animals can fly, because pigs cannot fly.


You mistake my common sense reductio for shortsighted prejudice. I
would say that your reasoning is that if we take a pig on a plane, we
can't rule out the possibility that it has become a bird.


No. You were saying that computers cannot think because clocks cannot
think.





This is
another variation on the Chinese Room. The pig can walk around at
30,000 feet and we can ask it questions about the view from up there,
but the pig has not, in fact learned to fly or become a bird. Neither
has the plane, for that matter.


Your analogy is confusing. I would say that the pig in the plane does
fly, but this is off topic.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-29 Thread Craig Weinberg
On Feb 29, 4:33 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 28 Feb 2012, at 20:18, Craig Weinberg wrote:

  On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:

  There is no such thing as evidence when it comes to qualitative
  phenomenology. You don't need evidence to infer that a clock doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

 By looking at the structure of the clock. It does not implement self-
 reference. It is a finite automaton, much lower in complexity than a
 universal machine.

Knowing what time it is doesn't require self reference. By comp it
should be generated by the 1p experience of the logic of the gears of
the clock.


  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we can't
  expect it to show any signs of being a universal machine yet, but by
  comp, we cannot assume that clocks can't know what time it is just
  because these primitive clocks don't know how to tell us that they
  know it yet.

 Then the universal timekeeping would be conscious, not the baby clock.
 Level confusion.

A Swiss watch has a fairly complicated movement. How many watches does
it take before they collectively have a chance at knowing what time it
is? If all self-referential machines arise from finite automata
though (by UDA inevitability?), the designation of any Level at all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?




  You reason like that: no animals can fly, because pigs cannot fly.

  You mistake my common sense reductio for shortsighted prejudice. I
  would say that your reasoning is that if we take a pig on a plane, we
  can't rule out the possibility that it has become a bird.

 No. You were saying that computers cannot think because clocks cannot
 think.

And I'm right. A brain can think because it's made of living cells
which diverged from an organic syzygy in a single moment. A computer
or clock cannot think because they are assembled artificially from
unrelated components, none of which have the qualities of an organic
molecule or living cell.


  This is
  another variation on the Chinese Room. The pig can walk around at
  30,000 feet and we can ask it questions about the view from up there,
  but the pig has not, in fact learned to fly or become a bird. Neither
  has the plane, for that matter.

 Your analogy is confusing. I would say that the pig in the plane does
 fly, but this is off topic.

It could be said that the pig is flying, but not that he has *learned
to fly* (and especially not learned to fly like a bird - which would
be the direct analogy for a computer simulating human consciousness).

Craig




Re: Yes Doctor circularity

2012-02-29 Thread Bruno Marchal


On 29 Feb 2012, at 17:10, Craig Weinberg wrote:


On Feb 29, 4:33 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 28 Feb 2012, at 20:18, Craig Weinberg wrote:


On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:



There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock doesn't
know what time it is.



A clock has no self-referential ability.



How do you know?


By looking at the structure of the clock. It does not implement
self-reference. It is a finite automaton, much lower in complexity than
a universal machine.


Knowing what time it is doesn't require self reference.


That's what I said, and it makes my point.




By comp it
should be generated by the 1p experience of the logic of the gears of
the clock.



?







By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we can't
expect it to show any signs of being a universal machine yet, but by
comp, we cannot assume that clocks can't know what time it is just
because these primitive clocks don't know how to tell us that they
know it yet.


Then the universal timekeeping would be conscious, not the baby clock.
Level confusion.


A Swiss watch has a fairly complicated movement. How many watches does
it take before they collectively have a chance at knowing what time it
is? If all self-referential machines arise from finite automata
though (by UDA inevitability?), the designation of any Level at all is
arbitrary. How does comp conceive of self referential machines
evolving in the first place?


They exist arithmetically, in many relative ways, that is, relative to
universal numbers. Relative evolution exists in higher-level
descriptions of those relations.
The evolution of species presupposes arithmetic, and even comp,
plausibly. Genetics is already digital relative to QM.

You reason like that: no animals can fly, because pigs cannot fly.



You mistake my common sense reductio for shortsighted prejudice. I
would say that your reasoning is that if we take a pig on a plane, we
can't rule out the possibility that it has become a bird.


No. You were saying that computers cannot think because clocks cannot
think.


And I'm right.
A brain can think because it's made of living cells
which diverged from an organic syzygy in a single moment. A computer
or clock cannot think because they are assembled artificially from
unrelated components, none of which have the qualities of an organic
molecule or living cell.



You reason like this.
A little clock cannot think.
To attach something which does not think to something which cannot
think still does not think.

So no assembly of clocks can think.

But such an induction will not work if you substitute 'think' by 'is
Turing universal', or 'has self-referential abilities', etc.


A machine which can only add cannot be universal.
A machine which can only multiply cannot be universal.
But a machine which can add and multiply is universal.

The machine is a whole; its function belongs to none of its parts.
When the components are unrelated, the machine does not work. The
machine works well when its components are well assembled, be it
artificially, naturally, virtually or arithmetically (that does not
matter, and can't matter).


All known theories in biology are reducible to QM, which is Turing
emulable. So your theory/opinion is that all known theories are false.
You have to lower the comp level into the infinitely low, and introduce
special infinities, not recoverable by the 1p machine, to make comp
false.












This is
another variation on the Chinese Room. The pig can walk around at
30,000 feet and we can ask it questions about the view from up there,
but the pig has not, in fact, learned to fly or become a bird. Neither
has the plane, for that matter.


Your analogy is confusing. I would say that the pig in the plane does
fly, but this is off topic.


It could be said that the pig is flying, but not that he has *learned
to fly* (and especially not learned to fly like a bird - which would
be the direct analogy for a computer simulating human consciousness).


That's why the flying analogy does not work. Consciousness concerns
something unprovable for everyone concerned, except oneself.


May I ask you a question? Is a human with an artificial heart still a  
human?


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-29 Thread Craig Weinberg
On Feb 29, 1:30 pm, Bruno Marchal marc...@ulb.ac.be wrote:
 On 29 Feb 2012, at 17:10, Craig Weinberg wrote:





  On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:

  There is no such thing as evidence when it comes to qualitative
  phenomenology. You don't need evidence to infer that a clock doesn't
  know what time it is.

  A clock has no self-referential ability.

  How do you know?

  By looking at the structure of the clock. It does not implement self-
  reference. It is a finite automaton, much lower in complexity than a
  universal machine.

  Knowing what time it is doesn't require self reference.

 That's what I said, and it makes my point.

The difference between a clock knowing what time it is, Google knowing
what you mean when you search for it, and an AI bot knowing how to
have a conversation with someone is a matter of degree. If comp claims
that certain kinds of processes have 1p experiences associated with
them it has to explain why that should be the case.


  By comp it
  should be generated by the 1p experience of the logic of the gears of
  the clock.

 ?

If the Chinese Room is intelligent, then why not gears?

  By comp logic, the clock could just be part of a
  universal timekeeping machine - just a baby of course, so we can't
  expect it to show any signs of being a universal machine yet, but by
  comp, we cannot assume that clocks can't know what time it is just
  because these primitive clocks don't know how to tell us that they
  know it yet.

  Then the universal timekeeping would be conscious, not the baby
  clock.
  Level confusion.

  A Swiss watch has a fairly complicated movement. How many watches does
  it take before they collectively have a chance at knowing what time it
  is? If all self-referential machines arise from finite automata
  though (by UDA inevitability?), the designation of any Level at all is
  arbitrary. How does comp conceive of self referential machines
  evolving in the first place?

 They exist arithmetically, in many relative ways, that is, relative to
 universal numbers. Relative evolution exists in higher-level
 descriptions of those relations.
 The evolution of species presupposes arithmetic, and even comp,
 plausibly. Genetics is already digital relative to QM.

My question though was how many watches does it take to make an
intelligent watch? It doesn't really make sense to me if comp were
true that there would be anything other than QM. Why go through the
formality of genetics or cells? What would possibly be the point? If
silicon makes just as good of a person as do living mammal cells, why
not just make people out of quantum to begin with?

  You reason like that: no animals can fly, because pigs cannot fly.

  You mistake my common sense reductio for shortsighted prejudice. I
  would say that your reasoning is that if we take a pig on a plane,
  we
  can't rule out the possibility that it has become a bird.

  No. You were saying that computers cannot think because clocks cannot
  think.

  And I'm right.
  A brain can think because it's made of living cells
  which diverged from an organic syzygy in a single moment. A computer
  or clock cannot think because they are assembled artificially from
  unrelated components, none of which have the qualities of an organic
  molecule or living cell.

 You reason like this.
 A little clock cannot think.
 To attach something which does not think to something which cannot
 think still does not think.
 So no assembly of clocks can think.

 But such an induction will not work if you substitute 'think' by 'is
 Turing universal', or 'has self-referential abilities', etc.

That reframes the question though so that comp theory is taken for
granted and natural phenomenology is put on the defensive. Suddenly we
are proving what we already assume rather than probing experiential
truth.


 A machine which can only add cannot be universal.
 A machine which can only multiply cannot be universal.
 But a machine which can add and multiply is universal.

A calculator can add and multiply. Will it know what time it is if I
connect it to a clock?


 The machine is a whole, its function belongs to none of its parts.
 When the components are unrelated, the machine does not work. The
 machine works well when its components are well assembled, be it
 artificially, naturally, virtually or arithmetically (that does not
 matter, and can't matter).

The machine isn't a whole though. Any number of parts can be replaced
without irreversibly killing the machine.


 All known theories in biology are reducible to QM, which is Turing
 emulable. So your theory/opinion is that all known theories are
 false.

They aren't false, they are only catastrophically incomplete. Neither
biology nor QM has any opinion on a purpose for awareness or living
organisms to exist.

 You have to lower the comp level into the infinitely low, and
 introduce special infinities, not recoverable by the 1p machine, to
 make comp false.


Re: Yes Doctor circularity

2012-02-28 Thread Bruno Marchal


On 27 Feb 2012, at 21:56, Craig Weinberg wrote:


On Feb 25, 4:50 am, Bruno Marchal marc...@ulb.ac.be wrote:

On 24 Feb 2012, at 23:40, Craig Weinberg wrote:










On Feb 23, 9:41 pm, Pierz pier...@gmail.com wrote:

Let us suppose you're right and... but hold on! We can't do that. That
would be circular. That would be sneaking in the assumption that
you're right from the outset. That would be 'shifty', 'fishy', etc.,
etc. You just don't seem to grasp the rudiments of philosophical
reasoning.



I understand that it seems that way to you.



'Yes doctor' is not an underhand move.



Not intentionally.



It asks you up-front
to assume that comp is true in order then to examine the implications
of that, whilst acknowledging (by calling it a 'bet') that this is
just a hypothesis, an unprovable leap of faith.


I think that asking for an unprovable leap of faith in this context is
philosophically problematic since the purpose of computation is to
make unprovable leaps of faith unnecessary.


This is where you are the most wrong from a theoretical computer
science pov.
It is just an Aristotelian myth that science can avoid leaps of faith.
Doubly so for a (meta)theory like comp, where we bet on a form of
reincarnation.
Betting on a reality or on self-consistency gives a tremendous
selective advantage, but it can never be 100% justified rationally.
Comp meta-justifies the need of going beyond pure reason. Correct
betting mechanisms cannot be 100% rational. That's what is cute with
incompleteness-like phenomena: they show that reason *can* see beyond
reason, and indeed 99.9% of the self-referential truth belongs to the
unjustifiable.



How can it really be said to be computational though? 2+2 =
unjustifiable self-referential 'truth'...

?

form of reincarnation...faith?


Yes. Comp is a scheme of possible theologies.

You complain that
using the term 'bet' assumes non-comp (I suppose because computers
can't bet, or care about their bets), but that is just daft.



Saying 'that is just daft' to something which is clearly the honest
truth in my estimation doesn't persuade me in the slightest.



You might
as well argue that the UDA is invalid because it is couched in natural
language, which no computer can (or according to you, could ever)
understand. If we accepted such arguments, we'd be incapable of
debating comp at all.



That would be ok with me. I don't see anything to debate with comp,
because I understand why it seems like it could be true but actually
isn't.


But, as you seem to believe yourself, it is just the case that the 1p
cannot feel like comp is true. It is due to the clash between Bp and
Bp & p I have just been talking about in my previous mail.
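
(A gloss on the notation, restating standard provability-logic facts
rather than adding anything: read Bp as 'the machine proves p'.

    Bp       obeys the Gödel-Löb logic GL, in which Bp -> p is not a
             theorem: a consistent machine cannot prove its own soundness.
    Bp & p   the Theaetetus 'knower', provable-and-true, which obeys an
             S4-like logic instead.

For a sound machine the two coincide in fact, but the machine itself
cannot prove the equivalence; that unprovable gap is the clash referred
to here.)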


It's not a feeling that comp isn't true, it's an understanding that
comp can't be causally efficacious.


You beg the question.




Computation can only inform those
who can be informed by it.


You beg the question.




To make something happen, information has
to be acted upon subjectively through sense and motive.


OK.





Sense works on
multiple levels though, so that we can cajole a computer into opening
and closing logic gates which seem meaningful to us, but have no
larger coherence to the computer itself.


Provably wrong in comp. You forget that we can define self-referential
machines, and even study their non-definable knowledge.












Saying 'no' to the doctor is anyone's right - nobody forces you to
accept that first step or tries to pull the wool over your eyes if you
choose to say 'yes'. Having said no you can then either say I don't
believe in comp because (I just don't like it, it doesn't feel right,
it's against my religion etc) or you can present a rational argument
against it.



Or you can be rationally skeptical about it and say 'It has not been
proved' or 'I see through the logic and understand the error in its
assumptions.'


It will never be proved, for purely logical reasons. Comp can only be
refuted, or hoped. Comp remains science, at the meta-level, but saying
yes to a doctor asks for a leap of faith.


I don't think that comp can ask for that. Even within a program, you
can't have a GOTO leap of faith.


The contrary is true. Self-referential programs cannot avoid the leap  
of faith.

Consciousness itself is plausibly based on an unconscious leap of faith.





It is only we who can ask or offer
a leap of faith.


That's anthropocentrism.





Computers need to know. Since they don't know where
they've been and they don't know who they are, they have nothing to
invest in such a leap. If it could then we could beg our ATM that we
lost our wallet and it could agree to help us out.


You beg the question.

That is to say, if asked to justify why you say no, you
can either provide no reason and say simply that you choose to bet
against it - which is OK but uninteresting - or you can present some
reasoning which attempts to refute comp. You've made many such

Re: Yes Doctor circularity

2012-02-28 Thread Bruno Marchal


On 27 Feb 2012, at 23:15, Craig Weinberg wrote:


On Feb 27, 4:52 pm, meekerdb meeke...@verizon.net wrote:

On 2/27/2012 1:09 PM, Craig Weinberg wrote:


On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:

On 2/27/2012 11:54 AM, Craig Weinberg wrote:



  AIs can generate their own software. That is the point of AI.
They don't have to generate their own software though, we have to tell
them to do that and specify exactly how we want them to do it.
Not exactly. AI learns from interactions which are not known to those
who write the AI program.
...when we program them specifically to 'learn' in the exact ways
which we want them to.


They can learn by higher level program modifications too, and those  
can also be random.
So there is no evidence that their learning is qualitatively  
different from yours.


There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock doesn't
know what time it is.


A clock has no self-referential ability.
You reason like that: no animals can fly, because pigs cannot fly.

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-28 Thread Craig Weinberg
On Feb 28, 5:42 am, Bruno Marchal marc...@ulb.ac.be wrote:


  There is no such thing as evidence when it comes to qualitative
  phenomenology. You don't need evidence to infer that a clock doesn't
  know what time it is.

 A clock has no self-referential ability.

How do you know? By comp logic, the clock could just be part of a
universal timekeeping machine - just a baby of course, so we can't
expect it to show any signs of being a universal machine yet, but by
comp, we cannot assume that clocks can't know what time it is just
because these primitive clocks don't know how to tell us that they
know it yet.

 You reason like that: no animals can fly, because pigs cannot fly.

You mistake my common sense reductio for shortsighted prejudice. I
would say that your reasoning is that if we take a pig on a plane, we
can't rule out the possibility that it has become a bird. This is
another variation on the Chinese Room. The pig can walk around at
30,000 feet and we can ask it questions about the view from up there,
but the pig has not, in fact learned to fly or become a bird. Neither
has the plane, for that matter.

Craig




Re: Yes Doctor circularity

2012-02-27 Thread Craig Weinberg
On Feb 25, 11:05 pm, 1Z peterdjo...@yahoo.com wrote:
 On Feb 24, 11:02 pm, Craig Weinberg whatsons...@gmail.com wrote:

  On Feb 24, 7:40 am, 1Z peterdjo...@yahoo.com wrote:

  Which only underscores how different consciousness is from
  computation. We can't share the exact same software, but computers
  can. We can't re-run our experiences, but computers can. By default
  humans cannot help but generate their own unique software, but the
  reverse is true with computers. We have to work to write each update
  to the code, which is then distributed uniformly to every (nearly)
  identical client machine.

 AIs can generate their own software. That is the point of AI.

They don't have to generate their own software though, we have to tell
them to do that and specify exactly how we want them to do it.


By default,
everything that a computer does is mechanistic. We have to go out of
our way to generate sophisticated algorithms to emulate naturalistic
human patterns.

   which could mean humans transcend computation, or
   could mean humans are more complex than current computers

  Complexity is the deus ex anima of comp. There is no reason to imagine
  that a complex arrangement of dumb marbles adds up to be something
  which experiences the universe in some synergistic way.

 That's a more plausible reason for doubting CTM.

   Human development proves just the contrary. We start
out wild and willful and become more mechanistic through
domestication.

   You think mechanisms can't be random or unpredictable?

  That's not the same thing as wild and willful.

 Isn't it? Is there any hard evidence of that?

There can't be hard evidence of anything having to do with
consciousness. Consciousness has to be experienced first hand.


 There is agency there.
  Intentional exuberance that can be domesticated. Babies are noisy
  alright, but they aren't noise. Randomness and unpredictability is
  mere noise.
 Alternatively, they might
 just be illogical...even if we are computers. It is a subtle
 fallacy to say that computers run on logic: they run on rules.

Yes! This is why they have a trivial intelligence and no true
understanding.

   Or current ones are too simple

  Again - complexity is not the magic.

 Again... you can't infer to all computers from the limitations
 of some computers.

But complexity shows no sign of making a difference. Watson or Deep
Blue are no more aware of anything outside the scope of their
programming than a pocket calculator. You could run them for a
thousand years and they won't ever learn the meaning of the word 'I'.


Rule followers are dumb.

   You have no evidence that humans are not following
   complex rules.

  We are following rules too, but we also break them.

 Rule-breaking might be based on rules. Adolescents are
 predictably rebellious.

You could just as easily say that rule making might be based on
voluntary agreement. Adolescent rebellion varies from culture to
culture, time to time, and individual to individual. For those who do
rebel, it doesn't make their rebellion any less willful. They rebel
because they feel that they want to, not because they don't know what
they are going to do next.

   Logic is a form of
intelligence which we use to write these rules that write more rules.
The more rules you have, the better the machine, but no amount of
rules makes the machine more (or less) logical. Humans vary widely in
their preference for logic, emotion, pragmatism, leadership, etc.
Computers don't vary at all in their approach. It is all the same rule
follower only with different rules.

 They have no guarantee to be rational. If the rules are
 wrong, you have bugs. Humans are known to have
 any number of cognitive bugs. The jumping thing
 could be implemented by real or pseudo randomness, too.

  Because of 1, it is assumed that the thought experiment universe
  includes the subjective experience of personal value - that the
  patient has a stake, or 'money to bet'.

 What's the problem ? the experience (quale) or the value?

The significance of the quale.

   You mean apparent significance. But apparent significance *is* a
   quale.

  Apparent is redundant. All qualia are apparent. Significance is a meta
  quale (appears more apparent - a 'signal' or 'sign').

 Apparent significance, you mean.

There isn't any other kind. It's a quale. Apparent blue is blue.


 Do you know the value to be real?

I know it to be subjective.

   Great. So it's an opinion. How does that stop the mechanistic-
   physicalistic show?

  Mechanism is the opinion of things that are not us.

 Says who?

Multisense Realism. That's how I think perceptual inertia works. When
something is unlike you, the perception is that it is impersonal. The
more impersonal it is, the more mechanical it appears.


 Do you think a computer
 could not be deluded about value?

I 

Re: Yes Doctor circularity

2012-02-27 Thread meekerdb

On 2/27/2012 11:54 AM, Craig Weinberg wrote:

  AIs can generate their own software. That is the point of AI.

They don't have to generate their own software though, we have to tell
them to do that and specify exactly how we want them to do it.



Not exactly. AI learns from interactions which are not known to those
who write the AI program.
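
(A minimal sketch of the point, not any particular AI system: the
programmer writes only the fixed update rule below, while the final
weights are determined by whatever interaction data arrives later, and
so are not known to whoever wrote the program.)

    # Perceptron learning: the learned weights emerge from the training
    # data, not from anything written into the source code.
    def train(samples, epochs=20, lr=0.1):
        w0 = w1 = b = 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
                err = target - out
                w0 += lr * err * x0
                w1 += lr * err * x1
                b += lr * err
        return w0, w1, b

    # Learns AND purely from the examples it interacts with:
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train(data))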


Brent




Re: Yes Doctor circularity

2012-02-27 Thread Craig Weinberg
On Feb 25, 4:50 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 24 Feb 2012, at 23:40, Craig Weinberg wrote:

  On Feb 23, 9:41 pm, Pierz pier...@gmail.com wrote:
  Let us suppose you're right and... but hold on! We can't do that. That
  would be circular. That would be sneaking in the assumption that
  you're right from the outset. That would be 'shifty', 'fishy', etc.,
  etc. You just don't seem to grasp the rudiments of philosophical
  reasoning.

  I understand that it seems that way to you.

  'Yes doctor' is not an underhand move.

  Not intentionally.

  It asks you up-front
  to assume that comp is true in order then to examine the implications
  of that, whilst acknowledging (by calling it a 'bet') that this is
  just a hypothesis, an unprovable leap of faith.

  I think that asking for an unprovable leap of faith in this context is
  philosophically problematic since the purpose of computation is to
  make unprovable leaps of faith unnecessary.

 This is where you are the most wrong from a theoretical computer
 science pov.
 It is just an Aristotelian myth that science can avoid leaps of faith.
 Doubly so for a (meta)theory like comp, where we bet on a form of
 reincarnation.
 Betting on a reality or on self-consistency gives a tremendous
 selective advantage, but it can never be 100% justified rationally.
 Comp meta-justifies the need of going beyond pure reason. Correct
 betting mechanisms cannot be 100% rational. That's what is cute with
 incompleteness-like phenomena: they show that reason *can* see beyond
 reason, and indeed 99.9% of the self-referential truth belongs to the
 unjustifiable.


How can it really be said to be computational though? 2+2 =
unjustifiable self-referential 'truth'...form of
reincarnation...faith?

  You complain that
  using the term 'bet' assumes non-comp (I suppose because computers
  can't bet, or care about their bets), but that is just daft.

  Saying 'that is just daft' to something which is clearly the honest
  truth in my estimation doesn't persuade me in the slightest.

  You might
  as well argue that the UDA is invalid because it is couched in
  natural
  language, which no computer can (or according to you, could ever)
  understand. If we accepted such arguments, we'd be incapable of
  debating comp at all.

  That would be ok with me. I don't see anything to debate with comp,
  because I understand why it seems like it could be true but actually
  isn't.

 But, as you seem to believe yourself, it is just the case that the 1p
 cannot feel like comp is true. It is due to the clash between Bp and
 Bp & p I have just been talking about in my previous mail.

It's not a feeling that comp isn't true, it's an understanding that
comp can't be causally efficacious. Computation can only inform those
who can be informed by it. To make something happen, information has
to be acted upon subjectively through sense and motive. Sense works on
multiple levels though, so that we can cajole a computer into opening
and closing logic gates which seem meaningful to us, but have no
larger coherence to the computer itself.




  Saying 'no' to the doctor is anyone's right - nobody forces you to
  accept that first step or tries to pull the wool over your eyes if
  you
  choose to say 'yes'. Having said no you can then either say I don't
  believe in comp because (I just don't like it, it doesn't feel right,
  it's against my religion etc) or you can present a rational argument
  against it.

  Or you can be rationally skeptical about it and say It has not been
  proved or I see through the logic and understand the error in its
  assumptions.

 It will never be proved, for purely logical reasons. Comp can only be
 refuted, or hoped. Comp remains science, at the meta-level, but saying
 yes to a doctor asks for a leap of faith.

I don't think that comp can ask for that. Even within a program, you
can't have a GOTO leap of faith. It is only we who can ask or offer
a leap of faith. Computers need to know. Since they don't know where
they've been and they don't know who they are, they have nothing to
invest in such a leap. If they could, then we could beg our ATM that we
lost our wallet and it could agree to help us out.

  That is to say, if asked to justify why you say no, you
  can either provide no reason and say simply that you choose to bet
  against it - which is OK but uninteresting - or you can present some
  reasoning which attempts to refute comp. You've made many such
  attempts, though to be honest all I've ever really been able to glean
  from your arguments is a sort of impressionistic revulsion at the
  idea
  of humans being computers,

  That is your impressionistic revulsion at the idea of stepping outside
  the entrenched positions of the argument. I have no revulsion
  whatsoever at the idea of humans being computers. As I have mentioned
  several times, I have believed in comp for most of my life, for the
  same reasons that you do. I am 

Re: Yes Doctor circularity

2012-02-27 Thread Terren Suydam
On Mon, Feb 27, 2012 at 3:56 PM, Craig Weinberg whatsons...@gmail.com wrote:
 I keep repeating this list, adding more each time. What else can I do.
 Comp cannot disprove itself, so if you are looking for that to happen
 then I can tell you already that it won't. I can't prove the existence
 of color on a black and white TV alone. To prove color exists you have
 to look away from the TV and see the world with your own eyes.

I actually agree with you here Craig. It's probably best for your own
sanity if you moved on to greener pastures where the people might have
a hope of understanding what you're talking about. We're hopeless!

Terren




Re: Yes Doctor circularity

2012-02-27 Thread Craig Weinberg
On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:
 On 2/27/2012 11:54 AM, Craig Weinberg wrote:

    AIs can generate their own software. That is the point of AI.
  They don't have to generate their own software though, we have to tell
  them to do that and specify exactly how we want them to do it.

 Not exactly. AI learns from interactions which are not known to those
 who write the AI program.

...when we program them specifically to 'learn' in the exact ways
which we want them to. You can't really even say 'learning' in any
strong sense; it's really only recursively enumerating candidates
through a success-criteria filter. It is no more learning than a
fishing net 'learns' how to catch the biggest fish every time.
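
(A minimal sketch of exactly that scheme, with an arbitrary target
string as the success criterion; nothing here models any real AI
system:)

    from itertools import product
    import string

    # Enumerate-and-filter: exhaustively generate candidates and return
    # the first one that passes the success test. The 'learner' never
    # represents what it is doing; it only enumerates and filters.
    def enumerate_and_filter(success, alphabet=string.ascii_lowercase,
                             max_len=4):
        for length in range(1, max_len + 1):
            for chars in product(alphabet, repeat=length):
                candidate = ''.join(chars)
                if success(candidate):
                    return candidate

    print(enumerate_and_filter(lambda s: s == 'fish'))   # fish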

Craig




Re: Yes Doctor circularity

2012-02-27 Thread meekerdb

On 2/27/2012 1:09 PM, Craig Weinberg wrote:

On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:

On 2/27/2012 11:54 AM, Craig Weinberg wrote:


  AIs can generate their own software. That is the point of AI.

They don't have to generate their own software though, we have to tell
them to do that and specify exactly how we want them to do it.

Not exactly. AI learns from interactions which are not known to those
who write the AI program.

...when we program them specifically to 'learn' in the exact ways
which we want them to.


They can learn by higher level program modifications too, and those can also be random.  
So there is no evidence that their learning is qualitatively different from yours.


Brent


You can't really even say learning in any
strong sense, it's really only doing something recursively enumerating
something with a success criteria filter. It is no more learning than
a fishing net learns how to catch the biggest fish every time.

Craig






Re: Yes Doctor circularity

2012-02-27 Thread Craig Weinberg
On Feb 27, 4:52 pm, meekerdb meeke...@verizon.net wrote:
 On 2/27/2012 1:09 PM, Craig Weinberg wrote:

  On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:
  On 2/27/2012 11:54 AM, Craig Weinberg wrote:

    AIs can generate their own software. That is the point of AI.
  They don't have to generate their own software though, we have to tell
  them to do that and specify exactly how we want them to do it.
  Not exactly. AI learns from interactions which are not known to those
  who write the AI program.
  ...when we program them specifically to 'learn' in the exact ways
  which we want them to.

 They can learn by higher level program modifications too, and those can also 
 be random.
 So there is no evidence that their learning is qualitatively different from 
 yours.

There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock doesn't
know what time it is.

Craig




Re: Yes Doctor circularity

2012-02-27 Thread Stathis Papaioannou
On Fri, Feb 24, 2012 at 12:53 AM, Quentin Anciaux allco...@gmail.com wrote:

 [Comp is] not a question... but a starting hypothesis...

The hypothesis is that the physics of the brain is computable. If this
is granted, then it follows by the fading qualia argument that the
consciousness of the brain is also computable.


-- 
Stathis Papaioannou




Re: Yes Doctor circularity

2012-02-27 Thread meekerdb

On 2/27/2012 2:15 PM, Craig Weinberg wrote:

On Feb 27, 4:52 pm, meekerdb meeke...@verizon.net wrote:

On 2/27/2012 1:09 PM, Craig Weinberg wrote:


On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:

On 2/27/2012 11:54 AM, Craig Weinberg wrote:

   AIs can generate their own software. That is the point of AI.

They don't have to generate their own software though, we have to tell
them to do that and specify exactly how we want them to do it.

Not exactly. AI learns from interactions which are not known to those
who write the AI program.

...when we program them specifically to 'learn' in the exact ways
which we want them to.

They can learn by higher level program modifications too, and those can
also be random.
So there is no evidence that their learning is qualitatively different
from yours.

There is no such thing as evidence when it comes to qualitative
phenomenology. You don't need evidence to infer that a clock doesn't
know what time it is.


Then I guess that means I don't need evidence to infer it does either.
It must be comforting to live in an evidence-free world where your
opinion is the only standard.


Brent




Re: Yes Doctor circularity

2012-02-27 Thread Craig Weinberg
On Feb 27, 5:37 pm, meekerdb meeke...@verizon.net wrote:
 On 2/27/2012 2:15 PM, Craig Weinberg wrote:

  On Feb 27, 4:52 pm, meekerdb meeke...@verizon.net wrote:
  On 2/27/2012 1:09 PM, Craig Weinberg wrote:

  On Feb 27, 3:32 pm, meekerdb meeke...@verizon.net wrote:
  On 2/27/2012 11:54 AM, Craig Weinberg wrote:
     AIs can generate their own software. That is the point of AI.
  They don't have to generate their own software though, we have to tell
  them to do that and specify exactly how we want them to do it.
  Not exactly. AI learns from interactions which are not known to those
  who write the AI program.
  ...when we program them specifically to 'learn' in the exact ways
  which we want them to.
  They can learn by higher level program modifications too, and those can 
  also be random.
  So there is no evidence that their learning is qualitatively different 
  from yours.
  There is no such thing as evidence when it comes to qualitative
  phenomenology. You don't need evidence to infer that a clock doesn't
  know what time it is.

 Then I guess that means I don't need evidence to infer it does either.
 It must be comforting to live in an evidence-free world where your
 opinion is the only standard.


If you believe that clocks know what time it is, and you need evidence
to convince you otherwise, then no amount of argument can persuade you
to common sense. I don't need any intellectual crutches to understand
that subjective phenomenology has a different standard of epistemology
than objective conditions.

Craig




Re: Yes Doctor circularity

2012-02-25 Thread Bruno Marchal


On 24 Feb 2012, at 23:40, Craig Weinberg wrote:


On Feb 23, 9:41 pm, Pierz pier...@gmail.com wrote:
Let us suppose you're right and... but hold on! We can't do that. That
would be circular. That would be sneaking in the assumption that
you're right from the outset. That would be 'shifty', 'fishy', etc.,
etc. You just don't seem to grasp the rudiments of philosophical
reasoning.


I understand that it seems that way to you.


'Yes doctor' is not an underhand move.


Not intentionally.


It asks you up-front
to assume that comp is true in order then to examine the implications
of that, whilst acknowledging (by calling it a 'bet') that this is
just a hypothesis, an unprovable leap of faith.


I think that asking for an unprovable leap of faith in this context is
philosophically problematic since the purpose of computation is to
make unprovable leaps of faith unnecessary.


This is where you are most wrong, from a theoretical computer
science pov.

It is just an Aristotelian myth that science can avoid leaps of faith.
Doubly so for a (meta)theory like comp, where we bet on a form of
reincarnation.
Betting on a reality or on self-consistency gives a tremendous
selective advantage, but it can never be 100% justified rationally.
Comp meta-justifies the need to go beyond pure reason. A correct
betting mechanism cannot be 100% rational. That's what is cute with
incompleteness-like phenomena: they show that reason *can* see beyond
reason, and indeed 99.9% of the self-referential truths belong to the
unjustifiable.







You complain that
using the term 'bet' assumes non-comp (I suppose because computers
can't bet, or care about their bets), but that is just daft.


Saying 'that is just daft' to something which is clearly the honest
truth in my estimation doesn't persuade me in the slightest.


You might
as well argue that the UDA is invalid because it is couched in
natural language, which no computer can (or according to you, could
ever) understand. If we accepted such arguments, we'd be incapable of
debating comp at all.


That would be ok with me. I don't see anything to debate with comp,
because I understand why it seems like it could be true but actually
isn't.


But, as you seem to believe yourself, it is just the case that the 1p
cannot feel like comp is true. It is due to the clash between Bp and
Bp & p that I have just been talking about in my previous mail.









Saying 'no' to the doctor is anyone's right - nobody forces you to
accept that first step or tries to pull the wool over your eyes if
you choose to say 'yes'. Having said no you can then either say I
don't believe in comp because (I just don't like it, it doesn't feel
right, it's against my religion etc) or you can present a rational
argument against it.


Or you can be rationally skeptical about it and say 'It has not been
proved' or 'I see through the logic and understand the error in its
assumptions.'


It will never be proved, for purely logical reasons. Comp can only be
refuted, or hoped. Comp remains science, at the meta-level, but saying
yes to the doctor asks for a leap of faith.







That is to say, if asked to justify why you say no, you
can either provide no reason and say simply that you choose to bet
against it - which is OK but uninteresting - or you can present some
reasoning which attempts to refute comp. You've made many such
attempts, though to be honest all I've ever really been able to glean
from your arguments is a sort of impressionistic revulsion at the
idea of humans being computers,


That is your impressionistic revulsion at the idea of stepping outside
the entrenched positions of the argument. I have no revulsion
whatsoever at the idea of humans being computers. As I have mentioned
several times, I have believed in comp for most of my life, for the
same reasons that you do. I am fine with being uploaded and digitized,
but I know now why that won't work. I know exactly why.


Then you are not a machine. That's possible, but up to now it is just
a question-begging type of argument, given that you don't succeed in
providing an argument for how and why you know that.
The very fact that you feel obliged to mention that you know it can
only make us suspicious that actually you don't have an argument, but
only a feeling. And such feelings are already explainable by machines.
Machines also say that they (1p) know that they are not any machine
we could describe to them, and later, by deepening the introspection
and the study of comp, they can understand that such a knowledge
proves nothing.


Bruno






yet one which seems founded in a
fundamental misunderstanding about what a computer is.


I have been using and programming computers almost every day for the
last 30 years. I know exactly what a computer is.


You repeatedly
mistake the mathematical construct for the concrete, known object you
use to type up your posts. This has been pointed out many times, but
you still make arguments like 

Re: Yes Doctor circularity

2012-02-25 Thread Bruno Marchal


On 25 Feb 2012, at 00:05, John Mikes wrote:


People have too much time on their hands to argue back and forth.
Whatever (theory) we talk about has been born from human mind(s),
consequently only HALF-TRUE max (if at all).

I imagine the doctor, I imagine the numbers (there are none in
Nature),
I imagine controversies and matches, arithmetics, calculus and bio.
Project the I-s into 3rd person I-s and FEEL justified to BELIEVE
that it is   T R U E .

How 'universal' is a universal machine (number)? It extends its
universality
till our imagination's end. Can we imagine what we cannot imagine?


Yes. A universal machine can do that by using, implicitly or
explicitly, the diagonalization technique. That is why the closure of
the UMs under diagonalization is very strong evidence for their
universality. If you doubt the Church thesis, it is up to you to give
an argument against it.
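
To make the diagonalization remark concrete, a minimal sketch (the three
enumerated functions are placeholders standing in for any claimed-complete
list of total functions):

    # Diagonalization: given any enumeration f_0, f_1, ... of total
    # functions, the diagonal function differs from every one of them.
    fs = [lambda n: 0, lambda n: n, lambda n: n * n]  # placeholder enumeration

    def diagonal(n):
        # Differs from f_n at input n, so it cannot appear in the list.
        return fs[n](n) + 1

    for i in range(len(fs)):
        assert diagonal(i) != fs[i](i)

Whatever the list contains, the diagonal function differs from its n-th entry
at input n, so it lies outside the list; that closure under diagonalization
is the property being appealed to.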


We never know any truth in science. Only philosophers argue for the
truth and falsity of propositions. In science we build theories, with
the hope of seeing them proved wrong one day. That's the only way we
can learn.


Bruno





JM



On Wed, Feb 22, 2012 at 2:42 AM, Craig Weinberg  
whatsons...@gmail.com wrote:

Has someone already mentioned this?

I woke up in the middle of the night with this, so it might not make
sense...or...

The idea of saying yes to the doctor presumes that we, in the thought
experiment, bring to the thought experiment universe:

1. our sense of our own significance (we have to be able to care about
ourselves and our fate in the first place)
2. our perceptual capacity to jump to conclusions without logic (we
have to be able to feel what it seems like rather than know what it
simply is.)

Because of 1, it is assumed that the thought experiment universe
includes the subjective experience of personal value - that the
patient has a stake, or 'money to bet'. Because of 2, it is assumed
that libertarian free will exists in the scenario - we have to be able
to 'bet' in the first place. As far as I know, comp can only answer
'True, doctor', 'False, doctor', or 'I don't know, or I can't answer,
doctor.'

So what this means is that the scenario, while not precluding that
a form of comp-based consciousness could exist, does preclude that it
is the only form of consciousness that exists, and therefore it does
not prove that consciousness must arise from comp, since it relies
on non-comp to prove it.
after all is only about betting on imitation. Does the robot seem real
to me? Bruno adds another layer to this by forcing our thought
experimenter to care whether they are or not.

What say ye, mighty logicians? Both of these tests succeed
unintentionally at revealing the essentials of consciousness, not in
front of our eyes with the thought experiment, but behind our backs.
The sleight of hand is hidden innocently in the assumption of free
will (and significance). In any universe where consciousness arises
from comp, consciousness may be able to pass or fail the test as the
tested object, but it cannot receive the test as a testing subject
unless free will and significance are already presumed to be comp.



http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-25 Thread Bruno Marchal


On 25 Feb 2012, at 01:21, meekerdb wrote:


On 2/24/2012 3:05 PM, John Mikes wrote:


People have too much time on their hands to argue back and forth.
Whatever (theory) we talk about has been born from human mind(s),
consequently only HALF-TRUE max (if at all).


Almost all our theories are not only probably false, they are  
*known* to be false.  But that doesn't mean they should be discarded  
or they are not useful.  It means they have limited accuracy and  
limited domains of validity.




I imagine the doctor, I imagine the numbers (there are none in
Nature),
I imagine controversies and matches, arithmetics, calculus and bio.
Project the I-s into 3rd person I-s and FEEL justified to BELIEVE
that it is   T R U E .


True means different things in different theories.


Yes. Unlike computability, truth (but also probability, definability,
etc.) is highly dependent on the theory or machine used.
With Church's thesis, computability is the same for machines, aliens,
Gods, every possible one. All the rest is relative.



In ordinary, declarative speech it means correspondence with a
fact.  In science it's the goal of predictive accuracy over the
whole range of applications and consilience with all other
'true' theories.  In logic it's an attribute 't' of propositions
that are axioms and that's preserved by the rules of inference.




How 'universal' is a universal machine (number)? It extends its
universality
till our imagination's end. Can we imagine what we cannot imagine?


We have to build on what we have.


Exactly.




Brent
You have to make the good out of the bad because that is all you have
got to make it out of.
   --- Robert Penn Warren


Not bad :)

Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-25 Thread 1Z


On Feb 24, 11:02 pm, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 24, 7:40 am, 1Z peterdjo...@yahoo.com wrote:



 Which only underscores how different consciousness is from
 computation. We can't share the exact same software, but computers
 can. We can't re-run our experiences, but computers can. By default
 humans cannot help but generate their own unique software, but the
 reverse is true with computers. We have to work to write each update
 to the code, which is then distributed uniformly to every (nearly)
 identical client machine.

AIs can generate their own software. That is the point of AI.

   By default,
   everything that a computer does is mechanistic. We have to go out of
   our way to generate sophisticated algorithms to emulate naturalistic
   human patterns.

  which could mean humans transcend computation, or
  could mean humans are more complex than current computers

 Complexity is the deus ex anima of comp. There is no reason to imagine
 that a complex arrangement of dumb marbles adds up to be something
 which experiences the universe in some synergistic way.

That's a more plausible reason for doubting CTM.

  Human development proves just the contrary. We start
   out wild and willful and become more mechanistic through
   domestication.

  You think mechanisms can't be random or unpredictable?

 That's not the same thing as wild and willful.

Isn't it? Is there any hard evidence of that?

 There is agency there.
 Intentional exuberance that can be domesticated. Babies are noisy
 alright, but they aren't noise. Randomness and unpredictability are
 mere noise.

Alternatively, they might
just be illogical...even if we are computers. It is a subtle
fallacy to say that computers run on logic: they run on rules.

   Yes! This is why they have a trivial intelligence and no true
   understanding.

  Or current ones are too simple

 Again - complexity is not the magic.

Again, you can't infer to all computers from the limitations
of some computers.

   Rule followers are dumb.

  You have no evidence that humans are not following
  complex rules.

 We are following rules too, but we also break them.

Rule-breaking might be based on rules. Adolescents are
predictably rebellious.

  Logic is a form of
   intelligence which we use to write these rules that write more rules.
   The more rules you have, the better the machine, but no amount of
   rules make the machine more (or less) logical. Humans vary widely in
   their preference for logic, emotion, pragmatism, leadership, etc.
   Computers don't vary at all in their approach. It is all the same rule
   follower only with different rules.

They have no guarantee to be rational. If the rules are
wrong, you have bugs. Humans are known to have
any number of cognitive bugs. The jumping thing
could be implemented by real or pseudo randomness, too.
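
Both options named here are one-liners in practice; a minimal sketch
(Python, purely illustrative):

    import os
    import random

    rng = random.Random(1234)                   # pseudo-random: replayable from a seed
    pseudo_bits = [rng.randint(0, 1) for _ in range(8)]

    real_bits = [b & 1 for b in os.urandom(8)]  # OS entropy source: not replayable

    print(pseudo_bits, real_bits)

The pseudo-random stream can be replayed exactly from its seed; the OS
entropy stream cannot, though from the outside the two are hard to tell
apart.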

 Because of 1, it is assumed that the thought experiment universe
 includes the subjective experience of personal value - that the
 patient has a stake, or 'money to bet'.

What's the problem? The experience (quale) or the value?

   The significance of the quale.

  You mean apparent significance. But apparent significance *is* a
  quale.

 Apparent is redundant. All qualia are apparent. Significance is a
 meta-quale (appears more apparent - a 'signal' or 'sign').


Apparent significance, you mean.

Do you know the value to be real?

   I know it to be subjective.

  Great. So it's an opinion. How does that stop the mechanistic-
  physicalistic show?

 Mechanism is the opinion of things that are not us.

Says who?

Do you think a computer
could not be deluded about value?

   I think a computer can't be anything but turned off and on.

  Well, you're wrong. It takes more than one bit (on/off) to
  describe computation.

 you forgot the 'turning'.

That doesn't help.

 Because of 2, it is assumed
 that libertarian free will exists in the scenario

I don't see that FW of a specifically libertarian sort is posited
in the scenario. It just assumes you can make a choice in
some sense.

   It assumes that choice is up to you and not determined by
   computations.

  Nope. It just assumes you can make some sort of choice.

 A voluntary choice.

 Craig

Some sort of voluntary




Re: Yes Doctor circularity

2012-02-24 Thread 1Z


On Feb 23, 9:14 pm, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 23, 3:25 pm, 1Z peterdjo...@yahoo.com wrote:

  On Feb 22, 7:42 am, Craig Weinberg whatsons...@gmail.com wrote:

   Has someone already mentioned this?

   I woke up in the middle of the night with this, so it might not make
   sense...or...

   The idea of saying yes to the doctor presumes that we, in the thought
   experiment, bring to the thought experiment universe:

   1. our sense of our own significance (we have to be able to care about
   ourselves and our fate in the first place)

  I can't see why you would think that is incompatible with CTM

 It is not posed as a question of 'Do you believe that CTM includes X',
 but rather, 'using X, do you believe that there is any reason to doubt
 that Y(X) is X.'

I don't see what you mean.

2. our perceptual capacity to jump to conclusions without logic (we
   have to be able to feel what it seems like rather than know what it
   simply is.)

  Whereas that seems to be based on a mistake. It might be
  that our conclusions ARE based on logic, just logic that
  we are consciously unaware of.

 That's a good point but it could just as easily be based on
 subconscious idiopathic preferences.

that's 'could' for you!

 The patterns of human beings in
 guessing and betting vary from person to person whereas one of the
 hallmarks of computation is to get the same results.

given the same software. But human software is formed
by life experience and genetics, both of which vary from
individual to individual

 By default,
 everything that a computer does is mechanistic. We have to go out of
 our way to generate sophisticated algorithms to emulate naturalistic
 human patterns.

which could mean humans transcend computation, or
could mean humans are more complex than current computers


Human development proves just the contrary. We start
 out wild and willful and become more mechanistic through
 domestication.

You think mechanisms can't be random or unpredictable?

  Alternatively, they might
  just be illogical...even if we are computers. It is a subtle
  fallacy to say that computers run on logic: they run on rules.

 Yes! This is why they have a trivial intelligence and no true
 understanding.

Or current ones are too simple

 Rule followers are dumb.

You have no evidence that humans are not following
complex rules.

Logic is a form of
 intelligence which we use to write these rules that write more rules.
  The more rules you have, the better the machine, but no amount of
  rules makes the machine more (or less) logical. Humans vary widely in
 their preference for logic, emotion, pragmatism, leadership, etc.
 Computers don't vary at all in their approach. It is all the same rule
 follower only with different rules.

  They have no guarantee to be rational. If the rules are
  wrong, you have bugs. Humans are known to have
  any number of cognitive bugs. The jumping thing
  could be implemented by real or pseudo randomness, too.

   Because of 1, it is assumed that the thought experiment universe
   includes the subjective experience of personal value - that the
   patient has a stake, or 'money to bet'.

   What's the problem? The experience (quale) or the value?

 The significance of the quale.

You mean apparent significance. But apparent significance *is* a
quale.

  Do you know the value to be real?

 I know it to be subjective.

Great. So it's an opinion. How does that stop the mechanistic-
physicalistic show?

  Do you think a computer
  could not be deluded about value?

 I think a computer can't be anything but turned off and on.

Well, you're wrong. It takes more than one bit (on/off) to
describe computation.
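
A sketch of the point: a computation is a trajectory through states under a
transition rule, not a single on/off value (a toy machine, invented purely
for illustration):

    # A computation as states + a transition rule, not one on/off bit:
    # a 3-state machine stepped over an input tape.
    def step(state, symbol):
        # Invented transition table, for illustration only.
        table = {
            ("A", 0): "A", ("A", 1): "B",
            ("B", 0): "C", ("B", 1): "A",
            ("C", 0): "B", ("C", 1): "C",
        }
        return table[(state, symbol)]

    state = "A"
    for symbol in [1, 0, 1, 1, 0]:
        state = step(state, symbol)

    print(state)  # the outcome depends on the whole trajectory of states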

   Because of 2, it is assumed
   that libertarian free will exists in the scenario

  I don't see that FW of a specifically libertarian sort is posited
  in the scenario. It just assumes you can make a choice in
  some sense.

 It assumes that choice is up to you and not determined by
 computations.

Nope. It just assumes you can make some sort of choice.




Re: Yes Doctor circularity

2012-02-24 Thread Craig Weinberg
On Feb 23, 9:41 pm, Pierz pier...@gmail.com wrote:
 Let us suppose you're right and... but hold on! We can't do that. That
 would be circular. That would be sneaking in the assumption that
 you're right from the outset. That would be 'shifty', fishy, etc
 etc. You just don't seem to grasp the rudiments of philosophical
 reasoning.

I understand that it seems that way to you.

 'Yes doctor' is not an underhand move.

Not intentionally.

 It asks you up-front
 to assume that comp is true in order then to examine the implications
 of that, whilst acknowledging (by calling it a 'bet') that this is
 just a hypothesis, an unprovable leap of faith.

I think that asking for an unprovable leap of faith in this context is
philosophically problematic since the purpose of computation is to
make unprovable leaps of faith unnecessary.

 You complain that
 using the term 'bet' assumes non-comp (I suppose because computers
 can't bet, or care about their bets), but that is just daft.

Saying 'that is just daft' to something which is clearly the honest
truth in my estimation doesn't persuade me in the slightest.

You might
 as well argue that the UDA is invalid because it is couched in natural
 language, which no computer can (or according to you, could ever)
 understand. If we accepted such arguments, we'd be incapable of
 debating comp at all.

That would be ok with me. I don't see anything to debate with comp,
because I understand why it seems like it could be true but actually
isn't.


 Saying 'no' to the doctor is anyone's right - nobody forces you to
 accept that first step or tries to pull the wool over your eyes if you
 choose to say 'yes'. Having said no you can then either say I don't
 believe in comp because (I just don't like it, it doesn't feel right,
 it's against my religion etc) or you can present a rational argument
 against it.

Or you can be rationally skeptical about it and say 'It has not been
proved' or 'I see through the logic and understand the error in its
assumptions.'

 That is to say, if asked to justify why you say no, you
 can either provide no reason and say simply that you choose to bet
 against it - which is OK but uninteresting - or you can present some
 reasoning which attempts to refute comp. You've made many such
 attempts, though to be honest all I've ever really been able to glean
 from your arguments is a sort of impressionistic revulsion at the idea
 of humans being computers,

That is your impressionistic revulsion at the idea of stepping outside
the entrenched positions of the argument. I have no revulsion
whatsoever at the idea of humans being computers. As I have mentioned
several times, I have believed in comp for most of my life, for the
same reasons that you do. I am fine with being uploaded and digitized,
but I know now why that won't work. I know exactly why.

 yet one which seems founded in a
 fundamental misunderstanding about what a computer is.

I have been using and programming computers almost every day for the
last 30 years. I know exactly what a computer is.

You repeatedly
 mistake the mathematical construct for the concrete, known object you
 use to type up your posts. This has been pointed out many times, but
 you still make arguments like that thing about one's closed eyes being
 unlike a switched-off screen, which verged on ludicrous.

I have no confusion whatsoever discriminating between the logic of
software, programming, and simulation and the technology of hardware,
engineering, and fabrication. I use metaphors which draw on familiar
examples to try to communicate unfamiliar ideas.

The example of closed eye noise is an odd one, but no more so than
Daniel Dennett's slides about optical illusion. With it I show that
there are counterexamples, where our sensation reflects factual truth
in spite of there being no advantageous purpose for it.


 I should say I'm no comp proponent, as my previous posts should
 attest. I'm agnostic on the subject, but at least I understand it.
 Your posts can make exasperating reading.

May I suggest that you stop reading them.

Craig




Re: Yes Doctor circularity

2012-02-24 Thread Quentin Anciaux
2012/2/24 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 9:41 pm, Pierz pier...@gmail.com wrote:
  Let us suppose you're right and... but hold on! We can't do that. That
  would be circular. That would be sneaking in the assumption that
  you're right from the outset. That would be 'shifty', fishy, etc
  etc. You just don't seem to grasp the rudiments of philosophical
  reasoning.

 I understand that it seems that way to you.

  'Yes doctor' is not an underhand move.

 Not intentionally.

  It asks you up-front
  to assume that comp is true in order then to examine the implications
  of that, whilst acknowledging (by calling it a 'bet') that this is
  just a hypothesis, an unprovable leap of faith.

 I think that asking for an unprovable leap of faith in this context is
 philosophically problematic since the purpose of computation is to
 make unprovable leaps of faith unnecessary.

  You complain that
  using the term 'bet' assumes non-comp (I suppose because computers
  can't bet, or care about their bets), but that is just daft.

 Saying 'that is just daft' to something which is clearly the honest
 truth in my estimation doesn't persuade me in the slightest.

 You might
  as well argue that the UDA is invalid because it is couched in natural
  language, which no computer can (or according to you, could ever)
  understand. If we accepted such arguments, we'd be incapable of
  debating comp at all.

 That would be ok with me. I don't see anything to debate with comp,
 because I understand why it seems like it could be true but actually
 isn't.

 
  Saying 'no' to the doctor is anyone's right - nobody forces you to
  accept that first step or tries to pull the wool over your eyes if you
  choose to say 'yes'. Having said no you can then either say I don't
  believe in comp because (I just don't like it, it doesn't feel right,
  it's against my religion etc) or you can present a rational argument
  against it.

 Or you can be rationally skeptical about it and say 'It has not been
 proved' or 'I see through the logic and understand the error in its
 assumptions.'

  That is to say, if asked to justify why you say no, you
  can either provide no reason and say simply that you choose to bet
  against it - which is OK but uninteresting - or you can present some
  reasoning which attempts to refute comp. You've made many such
  attempts, though to be honest all I've ever really been able to glean
  from your arguments is a sort of impressionistic revulsion at the idea
  of humans being computers,

 That is your impressionistic revulsion at the idea of stepping outside
 the entrenched positions of the argument. I have no revulsion
 whatsoever at the idea of humans being computers. As I have mentioned
 several times, I have believed in comp for most of my life, for the
 same reasons that you do. I am fine with being uploaded and digitized,
 but I know now why that won't work. I know exactly why.


Then explain *exactly why* you know it. I'm not interested in knowing
that you know it.



  yet one which seems founded in a
  fundamental misunderstanding about what a computer is.

 I have been using and programming computers almost every day for the
 last 30 years. I know exactly what a computer is.

 You repeatedly
  mistake the mathematical construct for the concrete, known object you
  use to type up your posts. This has been pointed out many times, but
  you still make arguments like that thing about one's closed eyes being
  unlike a switched-off screen, which verged on ludicrous.

 I have no confusion whatsoever discriminating between the logic of
 software, programming, and simulation and the technology of hardware,
 engineering, and fabrication. I use metaphors which draw on familiar
 examples to try to communicate unfamiliar ideas.

 The example of closed eye noise is an odd one, but no more so than
 Daniel Dennett's slides about optical illusion. With it I show that
 there are counterexamples, where our sensation reflects factual truth
 in spite of there being no advantageous purpose for it.

 
  I should say I'm no comp proponent, as my previous posts should
  attest. I'm agnostic on the subject, but at least I understand it.
  Your posts can make exasperating reading.

 May I suggest that you stop reading them.

 Craig





-- 
All those moments will be lost in time, like tears in rain.


Re: Yes Doctor circularity

2012-02-24 Thread Craig Weinberg
On Feb 24, 7:40 am, 1Z peterdjo...@yahoo.com wrote:


   I can't see why you would think that is incompatible with CTM

  It is not posed as a question of 'Do you believe that CTM includes X',
  but rather, 'using X, do you believe that there is any reason to doubt
  that Y(X) is X.'

 I don't see what you mean.

Will you take a leap of faith that there is no reason to doubt that
faith is unnecessary?


     2. our perceptual capacity to jump to conclusions without logic (we
have to be able to feel what it seems like rather than know what it
simply is.)

   Whereas that seems to be based on a mistake. It might be
   that our conclusions ARE based on logic, just logic that
   we are consciously unaware of.

  That's a good point but it could just as easily be based on
  subconscious idiopathic preferences.

 that's 'could' for you!

?


  The patterns of human beings in
  guessing and betting vary from person to person whereas one of the
  hallmarks of computation is to get the same results.

 given the same software. But human software is formed
 by life experience and genetics, both of which vary from
 individual to individual

Which only underscores how different consciousness is from
computation. We can't share the exact same software, but computers
can. We can't re-run our experiences, but computers can. By default
humans cannot help but generate their own unique software, but the
reverse is true with computers. We have to work to write each update
to the code, which is then distributed uniformly to every (nearly)
identical client machine.
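
The re-run point is easy to exhibit: a deterministic program restarted with
the same inputs, seed included, retraces exactly the same states. A minimal
sketch (illustrative only):

    import random

    def run(seed):
        # A toy "life": five steps drawn from a seeded generator.
        rng = random.Random(seed)
        return [rng.random() for _ in range(5)]

    # Re-running with the same seed reproduces the run exactly.
    assert run(42) == run(42)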


  By default,
  everything that a computer does is mechanistic. We have to go out of
  our way to generate sophisticated algorithms to emulate naturalistic
  human patterns.

 which could mean humans transcend computation, or
 could mean humans are more complex than current computers

Complexity is the deus ex anima of comp. There is no reason to imagine
that a complex arrangement of dumb marbles adds up to be something
which experiences the universe in some synergistic way.


 Human development proves just the contrary. We start
  out wild and willful and become more mechanistic through
  domestication.

 You think mechanisms can't be random or unpredictable?

That's not the same thing as wild and willful. There is agency there.
Intentional exuberance that can be domesticated. Babies are noisy
alright, but they aren't noise. Randomness and unpredictability are
mere noise.


   Alternatively, they might
   just be illogical...even if we are computers. It is a subtle
   fallacy to say that computers run on logic: they run on rules.

  Yes! This is why they have a trivial intelligence and no true
  understanding.

 Or current ones are too simple

Again - complexity is not the magic.


  Rule followers are dumb.

 You have no evidence that humans are not following
 complex rules.

We are following rules too, but we also break them.

I don't break the law, I make the law. - Charles Manson

 Logic is a form of
  intelligence which we use to write these rules that write more rules.
  The more rules you have, the better the machine, but no amount of
  rules makes the machine more (or less) logical. Humans vary widely in
  their preference for logic, emotion, pragmatism, leadership, etc.
  Computers don't vary at all in their approach. It is all the same rule
  follower only with different rules.

   They have no guarantee to be rational. If the rules are
   wrong, you have bugs. Humans are known to have
   any number of cognitive bugs. The jumping thing
   could be implemented by real or pseudo randomness, too.

Because of 1, it is assumed that the thought experiment universe
includes the subjective experience of personal value - that the
patient has a stake, or 'money to bet'.

   What's the problem? The experience (quale) or the value?

  The significance of the quale.

 You mean apparent significance. But apparent significance *is* a
 quale.

Apparent is redundant. All qualia are apparent. Significance is a
meta-quale (appears more apparent - a 'signal' or 'sign').


   Do you know the value to be real?

  I know it to be subjective.

 Great. So it's an opinion. How does that stop the mechanistic-
 physicalistic show?

Mechanism is the opinion of things that are not us.


   Do you think a computer
   could not be deluded about value?

  I think a computer can't be anything but turned off and on.

 Well, you're wrong. It takes more than one bit (on/off) to
 describe computation.

you forgot the 'turning'.


Because of 2, it is assumed
that libertarian free will exists in the scenario

   I don't see that FW of a specifically libertarian sort is posited
   in the scenario. It just assumes you can make a choice in
   some sense.

  It assumes that choice is up to you and not determined by
  computations.

 Nope. It just assumes you can make some sort of choice.

A voluntary choice.

Craig


Re: Yes Doctor circularity

2012-02-24 Thread John Mikes
People have too much time on their hands to argue back and forth.
Whatever (theory) we talk about has been born from human mind(s),
consequently only HALF-TRUE max (if at all).

I imagine the doctor, I imagine the numbers (there are none in Nature),
I imagine controversies and matches, arithmetics, calculus and bio.
Project the I-s into 3rd person I-s and FEEL justified to BELIEVE
that it is *T R U E*.

How 'universal' is a universal machine (number)? It extends its universality
till our imagination's end. Can we imagine what we cannot imagine?

*JM*


On Wed, Feb 22, 2012 at 2:42 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Has someone already mentioned this?

 I woke up in the middle of the night with this, so it might not make
 sense...or...

 The idea of saying yes to the doctor presumes that we, in the thought
 experiment, bring to the thought experiment universe:

 1. our sense of our own significance (we have to be able to care about
 ourselves and our fate in the first place)
 2. our perceptual capacity to jump to conclusions without logic (we
 have to be able to feel what it seems like rather than know what it
 simply is.)

 Because of 1, it is assumed that the thought experiment universe
 includes the subjective experience of personal value - that the
 patient has a stake, or 'money to bet'. Because of 2, it is assumed
 that libertarian free will exists in the scenario - we have to be able
 to 'bet' in the first place. As far as I know, comp can only answer
 'True, doctor', 'False, doctor', or 'I don't know, or I can't answer,
 doctor.'

 So what this means is that the scenario, while not precluding that
 a form of comp-based consciousness could exist, does preclude that it
 is the only form of consciousness that exists, and therefore it does
 not prove that consciousness must arise from comp, since it relies
 on non-comp to prove it.
 after all is only about betting on imitation. Does the robot seem real
 to me? Bruno adds another layer to this by forcing our thought
 experimenter to care whether they are or not.

 What say ye, mighty logicians? Both of these tests succeed
 unintentionally at revealing the essentials of consciousness, not in
 front of our eyes with the thought experiment, but behind our backs.
 The sleight of hand is hidden innocently in the assumption of free
 will (and significance). In any universe where consciousness arises
 from comp, consciousness may be able to pass or fail the test as the
 tested object, but it cannot receive the test as a testing subject
 unless free will and significance are already presumed to be comp.







Re: Yes Doctor circularity

2012-02-24 Thread meekerdb

On 2/24/2012 3:05 PM, John Mikes wrote:

People have too much time on their hands to argue back and forth.
Whatever (theory) we talk about has been born from human mind(s),
consequently only HALF-TRUE max (if at all).


Almost all our theories are not only probably false, they are *known* to be false.  But 
that doesn't mean they should be discarded or they are not useful.  It means they have 
limited accuracy and limited domains of validity.



I imagine the doctor, I imagine the numbers (there are none in Nature)
I imagine controversies and matches, arithmetics, calculus and bio.
Project the I-s into 3rd person I-s and FEEL justified to BELIEVE
that it is *T R U E*.


True means different things in different theories.  In ordinary, declarative speech it
means correspondence with a fact.  In science it's the goal of predictive accuracy over
the whole range of applications and consilience with all other 'true' theories.  In
logic it's an attribute 't' of propositions that are axioms and that's preserved by the
rules of inference.



How 'universal' is a universal machine (number)? It extends its universality
till our imagination's end. Can we imagine what we cannot imagine?


We have to build on what we have.

Brent
You have to make the good out of the bad because that is all you have
got to make it out of.
   --- Robert Penn Warren




Re: Yes Doctor circularity

2012-02-23 Thread Bruno Marchal


On 23 Feb 2012, at 06:42, Craig Weinberg wrote:


On Feb 22, 6:10 pm, Pierz pier...@gmail.com wrote:

'Yes doctor' is merely an establishment of the assumption of comp.
Saying yes means you are a computationalist. If you say no then you
are not one, and one cannot proceed with the argument that follows -
though then the onus will be on you to explain *why* you don't
believe a computer can substitute for a brain.


That's what is circular. The question cheats by using the notion of a
bet to put the onus on us to take comp for granted in the first place
when there is no reason to presume that bets can exist in a universe
where comp is true. It's a loaded question, but in a sneaky way. It is
to say 'if you don't think the computer is happy, that's fine, but you
have to explain why'.


It is circular only if we said that saying yes was an argument for  
comp, which nobody claims.

I agree with Stathis's and Pierz's comments.

You do seem to have some difficulty understanding what an assumption
or a hypothesis is.


We defend comp against non-valid refutations; this does not mean that
we conclude that comp is true. It is our working hypothesis.







If you've said yes, then this
of course entails that you believe that 'free choice' and 'personal
value' (or the subjective experience of them) can be products of a
computer program, so there's no contradiction.


Right, so why ask the question? Why not just ask 'do you believe a
computer program can be happy'?


'A machine could think' (the Strong AI thesis) does not entail comp
(that we are machines).
The fact that a computer program can be happy does not logically
entail that we are ourselves computer programs. Maybe angels and Gods
(non-machines) can be happy too. To sum up:


COMP implies STRONG-AI

but

STRONG-AI does not imply COMP.






When it is posed as a logical
consequence instead of a decision, it implicitly privileges the
passive voice. We are invited to believe that we have chosen to agree
to comp because there is a logical argument for it rather than an
arbitrary preference committed to in advance. It is persuasion by
rhetoric, not by science.


Nobody tries to advocate comp. We assume it. So if we get a  
contradiction we can abandon it. But we find only weirdness, even  
testable weirdness.







In fact the circularity
is in your reasoning. You are merely reasserting your assumption that
choice and personal value must be non-comp,


No, the scenario asserts that by relying on the device of choice and
personal value as the engine of the thought experiment. My objection
is not based on any prejudice against comp I may have, it is based on
the prejudice of the way the question is posed.


The question is used to give a quasi-operational definition of
computationalism, by its acceptance of a digital brain transplant.
This makes it possible to reason without solving the hard task of
defining consciousness or thinking. This belongs to the axiomatic
method usually favored by mathematicians.







but that is exactly what
is at issue in the yes doctor question. That is precisely what we're
betting on.


If we are betting on anything then we are in a universe which has not
been proved to be supported by comp alone.


That is exactly what we try to make precise enough so that it can be
tested. Up to now, comp is 'saved' by the quantum weirdness it implies
(MW, indeterminacy, non-locality, non-cloning), without mentioning the
candidates for consciousness, qualia, ... that is, the many things that
a machine can produce as 1p-true without any 3p-means to justify them.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 1:09 am, Stathis Papaioannou stath...@gmail.com wrote:
 
  The yes doctor scenario considers the belief that if you are issued
  with a computerised brain you will feel just the same. It's equivalent
  to the yes barber scenario: that if you receive a haircut you will
  feel just the same, and not become a zombie or otherwise radically
  different being.

 That is one reason why it's a loaded question.


It's not a question... but a starting hypothesis... Consider something true
and then either show a contradiction or not (if you find a contradiction
starting from the assumption that the hypothesis is true... then you've
disproved the hypothesis).

But as usual you cannot grasp basic logic. You have to get past the
hypothesis to discuss it and eventually find a contradiction or not...
stopping at the hypothesis will leave you stuck... you can discuss it for
an infinite time and it won't help.

http://en.wikipedia.org/wiki/Mathematical_proof#Proof_by_contradiction
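
For reference, the schema behind that link, rendered minimally in Lean
(illustration only): assume the hypothesis, derive False, and conclude its
negation.

    -- Proof by contradiction / refutation schema: if assuming H lets us
    -- derive False, we may conclude ¬H (in Lean, ¬H is by definition H → False).
    example (H : Prop) (derivation : H → False) : ¬H :=
      derivation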


 It equates having your
 brain surgically replaced with getting a haircut. It's the way that it
 does it that's fishy though. It's more equivalent to saying 'In a
 world where having haircuts is ordinary, are you afraid of having a
 haircut?', or more accurately, 'In a world where arithmetic is true,
 are arithmetic truths your truths?'

 Craig





-- 
All those moments will be lost in time, like tears in rain.




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 4:32 am, Bruno Marchal marc...@ulb.ac.be wrote:
 On 23 Feb 2012, at 06:42, Craig Weinberg wrote:

  On Feb 22, 6:10 pm, Pierz pier...@gmail.com wrote:
  'Yes doctor' is merely an establishment of the assumption of comp.
  Saying yes means you are a computationalist. If you say no then you
  are
  not one, and one cannot proceed with the argument that follows -
  though then the onus will be on you to explain *why* you don't
  believe
  a computer can substitute for a brain.

  That's what is circular. The question cheats by using the notion of a
  bet to put the onus on us to take comp for granted in the first place
  when there is no reason to presume that bets can exist in a universe
  where comp is true. It's a loaded question, but in a sneaky way. It is
  to say 'if you don't think the computer is happy, that's fine, but you
  have to explain why'.

 It is circular only if we said that saying yes was an argument for
 comp, which nobody claims.

I'm not saying comp is claimed explicitly. My point is that the
structure of the thought experiment implicitly assumes comp from the
start. It seats you at the Blackjack table with money and then asks if
you want to play.

 I agree with Stathis and Pierz comment.

 You do seem to have some difficulty understanding what an assumption
 or a hypothesis is.

From my perspective it seems that others have difficulties
understanding when I am seeing through their assumptions.


 We defend comp against non-valid refutations; this does not mean that
 we conclude that comp is true. It is our working hypothesis.

I understand that is how you think of it, but I am pointing out your
unconscious bias. You take consciousness for granted from the start.
It may seem innocent, but in this case what it does is preclude the
subjective thesis from being considered fundamental. It's a straw man
of the possibility of unconsciousness.




  If you've said yes, then this
  of course entails that you believe that 'free choice' and 'personal
  value' (or the subjective experience of them) can be products of a
  computer program, so there's no contradiction.

  Right, so why ask the question? Why not just ask 'do you believe a
  computer program can be happy'?

 'A machine could think' (the Strong AI thesis) does not entail comp
 (that we are machines).

I understand that, but we are talking about comp. The thought
experiment focuses on the brain replacement, but the argument is
already lost in the initial conditions which presuppose the ability to
care or tell the difference and have free will to choose. It's subtle,
but so is the question of consciousness. Nothing whatsoever can be
left unchallenged, including the capacity to leave something
unchallenged.

 The fact that a computer program can be happy does not logically
 entail that we are ourselves computer programs. Maybe angels and Gods
 (non-machines) can be happy too. To sum up:

 COMP implies STRONG-AI

 but

 STRONG-AI does not imply COMP.

I understand, but Yes Doctor considers whether STRONG-AI is likely to
be functionally identical and fully interchangeable with human
consciousness. It may not say that we are machine, but it says that
machines can be us - which is really even stronger, since we can only
be ourselves but machines apparently can be anything.


  When it is posed as a logical
  consequence instead of a decision, it implicitly privileges the
  passive voice. We are invited to believe that we have chosen to agree
  to comp because there is a logical argument for it rather than an
  arbitrary preference committed to in advance. It is persuasion by
  rhetoric, not by science.

 Nobody tries to advocate comp. We assume it. So if we get a
 contradiction we can abandon it. But we find only weirdness, even
 testable weirdness.

I understand the reason for that though. Comp itself is the rabbit
hole of empiricism. Once you allow it the initial assumption, it can
only support itself. Comp has no ability to contradict itself, but the
universe does.




  In fact the circularity
  is in your reasoning. You are merely reasserting your assumption that
  choice and personal value must be non-comp,

  No, the scenario asserts that by relying on the device of choice and
  personal value as the engine of the thought experiment. My objection
  is not based on any prejudice against comp I may have, it is based on
  the prejudice of the way the question is posed.

 The question is used to give a quasi-operational definition of
 computationalism, by its acceptance of a digital brain transplant.
 This makes it possible to reason without solving the hard task of
 defining consciousness or thinking. This belongs to the axiomatic method
 usually favored by mathematicians.

I know. What I'm saying is that the axiomatic method precludes any
useful examination of consciousness axiomatically. It's a screwdriver
instead of a hot meal.




  but that is exactly what
  is at issue in the yes doctor question. That is precisely what we're
 

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 4:32 am, Bruno Marchal marc...@ulb.ac.be wrote:
  On 23 Feb 2012, at 06:42, Craig Weinberg wrote:
 
   On Feb 22, 6:10 pm, Pierz pier...@gmail.com wrote:
   'Yes doctor' is merely an establishment of the assumption of comp.
    Saying yes means you are a computationalist. If you say no then you
   are
   not one, and one cannot proceed with the argument that follows -
   though then the onus will be on you to explain *why* you don't
   believe
   a computer can substitute for a brain.
 
   That's what is circular. The question cheats by using the notion of a
   bet to put the onus on us to take comp for granted in the first place
   when there is no reason to presume that bets can exist in a universe
   where comp is true. It's a loaded question, but in a sneaky way. It is
   to say 'if you don't think the computer is happy, that's fine, but you
   have to explain why'.
 
  It is circular only if we said that saying yes was an argument for
  comp, which nobody claims.

 I'm not saying comp is claimed explicitly. My point is that the
 structure of the thought experiment implicitly assumes comp from the
 start. It seats you at the Blackjack table with money and then asks if
 you want to play.

  I agree with Stathis and Pierz comment.
 
   You do seem to have some difficulty understanding what an assumption
   or a hypothesis is.

 From my perspective it seems that others have difficulties
 understanding when I am seeing through their assumptions.

 
   We defend comp against non-valid refutations; this does not mean that
   we conclude that comp is true. It is our working hypothesis.

 I understand that is how you think of it, but I am pointing out your
 unconscious bias. You take consciousness for granted from the start.


Because it is... I don't know/care for you, but I'm conscious... the
existence of consciousness, from my own POV, is not up for discussion.


 It may seem innocent, but in this case what it does is preclude the
 subjective thesis from being considered fundamental. It's a straw man


Read what a straw man is... a straw man is taking the opponent's
argument and deforming it to mean something else which is easy to
disprove.

http://en.wikipedia.org/wiki/Straw_man


 of the possibility of unconsciousness.

 
 
 
   If you've said yes, then this
   of course entails that you believe that 'free choice' and 'personal
   value' (or the subjective experience of them) can be products of a
   computer program, so there's no contradiction.
 
   Right, so why ask the question? Why not just ask 'do you believe a
   computer program can be happy'?
 
   'A machine could think' (the Strong AI thesis) does not entail comp
   (that we are machines).

 I understand that, but we are talking about comp. The thought
 experiment focuses on the brain replacement, but the argument is
 already lost in the initial conditions which presuppose the ability to
 care or tell the difference and have free will to choose.


But I have that ability and don't care to discuss it further. I'm
conscious, I'm sorry you're not.


 It's subtle,
 but so is the question of consciousness. Nothing whatsoever can be
 left unchallenged, including the capacity to leave something
 unchallenged.

   The fact that a computer program can be happy does not logically
   entail that we are ourselves computer programs. Maybe angels and Gods
   (non-machines) can be happy too. To sum up:
 
  COMP implies STRONG-AI
 
  but
 
  STRONG-AI does not imply COMP.

 I understand, but Yes Doctor considers whether STRONG-AI is likely to
 be functionally identical and fully interchangeable with human
 consciousness. It may not say that we are machine, but it says that
 machines can be us


It says machines could be conscious as we are, without us being machines.

== Strong AI.

Comp says that we are machines; this entails Strong AI, because if we
are machines, as we are conscious, then of course machines can be
conscious... But if you knew machines could be conscious, that wouldn't
mean that humans are machines... we could be more than that.


 - which is really even stronger, since we can only
 be ourselves but machines apparently can be anything.


No, read above.



 
   When it is posed as a logical
   consequence instead of a decision, it implicitly privileges the
   passive voice. We are invited to believe that we have chosen to agree
   to comp because there is a logical argument for it rather than an
   arbitrary preference committed to in advance. It is persuasion by
   rhetoric, not by science.
 
  Nobody tries to advocate comp. We assume it. So if we get a
  contradiction we can abandon it. But we find only weirdness, even
  testable weirdness.

 I understand the reason for that though. Comp itself is the rabbit
 hole of empiricism. Once you allow it the initial assumption, it can
 only support itself.


Then you could never show a contradiction for any hypothesis that you
consider true... and that's 

Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 8:53 am, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/23 Craig Weinberg whatsons...@gmail.com

  On Feb 23, 1:09 am, Stathis Papaioannou stath...@gmail.com wrote:

   The yes doctor scenario considers the belief that if you are issued
   with a computerised brain you will feel just the same. It's equivalent
   to the yes barber scenario: that if you receive a haircut you will
   feel just the same, and not become a zombie or otherwise radically
   different being.

  That is one reason why it's a loaded question.

 It's not a question... but a starting hypothesis...

'Do you say yes to the doctor?' is a question. It could be a
hypothesis too if you want. I don't see why the difference is
relevant.

 Consider something true
 and then either show a contradiction or not (if you find a contradiction
 starting from the assumption that the hypothesis is true... then you've
 disproved the hypothesis).
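
As a concrete instance of that schema (the textbook proof by
contradiction, nothing specific to comp): assume $\sqrt{2} = p/q$ with
$p/q$ in lowest terms. Then $p^2 = 2q^2$, so $p$ is even, say $p = 2k$;
then $q^2 = 2k^2$, so $q$ is even too, contradicting lowest terms. The
general shape is

\[
(H \vdash \bot) \implies \neg H,
\]

here with $H$ the hypothesis that $\sqrt{2}$ is rational.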

The question relates specifically to consciousness. Empirical logic is
a subordinate category of consciousness. We cannot treat the subject
of consciousness as if it were subordinate to logic without cognitive
bias that privileges reductionism.


 But as usual you cannot grasp basic logic. You have to get past the
 hypothesis to discuss it and eventually find a contradiction or not...
 stopping at the hypothesis will leave you stuck... you could discuss it
 for an infinite time and it won't help.

I can almost make sense of what you are trying to write there. As near as
I can tell, it's some kind of ad hominem foaming at the mouth about
what assumptions I'm allowed to challenge.


 http://en.wikipedia.org/wiki/Mathematical_proof#Proof_by_contradiction

http://en.wikipedia.org/wiki/Bandwagon_effect

Craig




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 9:26 am, Quentin Anciaux allco...@gmail.com wrote:

  I understand that is how you think of it, but I am pointing out your
  unconscious bias. You take consciousness for granted from the start.

 Because it is... I don't know/care for you, but I'm conscious... the
 existence of consciousness from my own POV, is not a discussion.

The whole thought experiment has to do specifically with testing the
existence of consciousness and POV. If we were being honest about the
scenario, we would rely only on known comp truths to arrive at the
answer. It's cheating to smuggle in human introspection in a test of
the nature of human introspection. Let us think only in terms of
'true, doctor'. If comp is valid, there should be no difference
between 'true' and 'yes'.


  It may seem innocent, but in this case what it does is preclude the
  subjective thesis from being considered fundamental. It's a straw man

 Read what a straw man is... a straw man takes an opponent's argument and
 deforms it to mean something else that is easy to disprove.

 http://en.wikipedia.org/wiki/Straw_man

a superficially similar yet unequivalent proposition (the straw
man)

I think that yes doctor makes a straw man of the non-comp position. It
argues that we have to choose whether or not we believe in comp, when
the non-comp position might be that with comp, we cannot choose to
believe in anything in the first place.

  of the possibility of unconsciousness.

If you've said yes, then this
of course entails that you believe that 'free choice' and 'personal
value' (or the subjective experience of them) can be products of a
computer program, so there's no contradiction.

Right, so why ask the question? Why not just ask 'do you believe a
computer program can be happy'?

   A machine could think (the Strong AI thesis) does not entail comp (that we
   are machines).

  I understand that, but we are talking about comp. The thought
  experiment focuses on the brain replacement, but the argument is
  already lost in the initial conditions which presuppose the ability to
  care or tell the difference and have free will to choose.

 But I have that ability and don't care to discuss it further. I'm
 conscious, I'm sorry you're not.

But you aren't in the thought experiment.

  It's subtle,
  but so is the question of consciousness. Nothing whatsoever can be
  left unchallenged, including the capacity to leave something
  unchallenged.

   The fact that a computer program can be happy does not logically
   entail that we are ourselves computer programs. Maybe angels and gods
   (non-machines) can be happy too. To sum up:

   COMP implies STRONG-AI

   but

   STRONG-AI does not imply COMP.

  I understand, but Yes Doctor considers whether STRONG-AI is likely to
  be functionally identical and fully interchangeable with human
  consciousness. It may not say that we are machines, but it says that
  machines can be us

 It says machines could be conscious as we are, without us being machines.

 That is strong AI.

That's what I said. That makes machines more flexible than organically
conscious beings. They can be machines or like us, but we can't fully
be machines so we are less than machines.


 Comp says that we are machines; this entails strong AI, because if we are
 machines, and we are conscious, then of course machines can be conscious...
 But if you knew machines could be conscious, that doesn't mean humans
 are machines... we could be more than that.

More than that in what way? Different maybe, but Strong AI by
definition makes machines more than us, because we cannot compete with
machines at being mechanical but they can compete as equals with us in
every other way.


  - which is really even stronger, since we can only
  be ourselves but machines apparently can be anything.

 No, read above.

No, read above.

When it is posed as a logical
consequence instead of a decision, it implicitly privileges the
passive voice. We are invited to believe that we have chosen to agree
to comp because there is a logical argument for it rather than an
arbitrary preference committed to in advance. It is persuasion by
rhetoric, not by science.

   Nobody tries to advocate comp. We assume it. So if we get a
   contradiction we can abandon it. But we find only weirdness, even
   testable weirdness.

  I understand the reason for that though. Comp itself is the rabbit
  hole of empiricism. Once you allow it the initial assumption, it can
  only support itself.

 Then you could never show a contradiction for any hypothesis that you
 consider true... and that's simply false, hence you cannot be correct.

You are doing exactly what I just said. You assume initially that all
truths are bound by Aristotelian logic. You cannot contradict any
hypothesis that says you aren't a zombie, hence you are a zombie. My
whole point is that consciousness is not like any other subject. You
cannot stand 

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 9:26 am, Quentin Anciaux allco...@gmail.com wrote:
 
   I understand that is how you think of it, but I am pointing out your
   unconscious bias. You take consciousness for granted from the start.
 
  Because it is... I don't know/care for you, but I'm conscious... the
  existence of consciousness from my own POV, is not a discussion.

 The whole thought experiment has to do specifically with testing the
 existence of consciousness and POV. If we were being honest about the
 scenario, we would rely only on known comp truths to arrive at the
 answer. It's cheating to smuggle in human introspection in a test of
 the nature of human introspection. Let us think only in terms of
 'true, doctor'. If comp is valid, there should be no difference
 between 'true' and 'yes'.

 
    It may seem innocent, but in this case what it does is preclude the
    subjective thesis from being considered fundamental. It's a straw man
 
   Read what a straw man is... a straw man takes an opponent's argument and
   deforms it to mean something else that is easy to disprove.
 
  http://en.wikipedia.org/wiki/Straw_man

 a superficially similar yet unequivalent proposition (the straw
 man)

 I think that yes doctor makes a straw man of the non-comp position. It
 argues that we have to choose whether or not we believe in comp, when
 the non-comp position might be that with comp, we cannot choose to
 believe in anything in the first place.

   of the possibility of unconsciousness.
 
 If you've said yes, then this
 of course entails that you believe that 'free choice' and
 'personal
 value' (or the subjective experience of them) can be products of a
 computer program, so there's no contradiction.
 
 Right, so why ask the question? Why not just ask 'do you believe a
 computer program can be happy'?
 
A machine could think (the Strong AI thesis) does not entail comp (that
we are machines).
 
   I understand that, but we are talking about comp. The thought
   experiment focuses on the brain replacement, but the argument is
   already lost in the initial conditions which presuppose the ability to
   care or tell the difference and have free will to choose.
 
  But I have that ability and don't care to discuss it further. I'm
  conscious, I'm sorry you're not.

 But you aren't in the thought experiment.

   It's subtle,
   but so is the question of consciousness. Nothing whatsoever can be
   left unchallenged, including the capacity to leave something
   unchallenged.
 
    The fact that a computer program can be happy does not logically
    entail that we are ourselves computer programs. Maybe angels and gods
    (non-machines) can be happy too. To sum up:
 
COMP implies STRONG-AI
 
but
 
STRONG-AI does not imply COMP.
 
   I understand, but Yes Doctor considers whether STRONG-AI is likely to
   be functionally identical and fully interchangeable with human
   consciousness. It may not say that we are machines, but it says that
   machines can be us
 
  It says machines could be conscious as we are, without us being machines.
 
  That is strong AI.

 That's what I said. That makes machines more flexible than organically
 conscious beings. They can be machines or like us, but we can't fully
 be machines so we are less than machines.

 Either we are machines or we are not... If machines can be conscious and
we're not machines then we are *more* than machines... not less.


  
  Comp says that we are machines; this entails strong AI, because if we are
  machines, and we are conscious, then of course machines can be conscious...
  But if you knew machines could be conscious, that doesn't mean humans
  are machines... we could be more than that.

 More than that in what way?


We must contain infinite components if we are not emulable machines. So we
are *more* than machines if machines can be conscious and we're not
machines.


 Different maybe, but Strong AI by
 definition makes machines more than us, because we cannot compete with
 machines at being mechanical but they can compete as equals with us in
 every other way.

 
   - which is really even stronger, since we can only
   be ourselves but machines apparently can be anything.
 
  No, read above.

 No, read above.

 When it is posed as a logical
 consequence instead of a decision, it implicitly privileges the
 passive voice. We are invited to believe that we have chosen to
 agree
 to comp because there is a logical argument for it rather than an
 arbitrary preference committed to in advance. It is persuasion by
 rhetoric, not by science.
 
Nobody tries to advocate comp. We assume it. So if we get a
contradiction we can abandon it. But we find only weirdness, even
testable weirdness.
 
   I understand the reason for that though. Comp itself is the rabbit
   hole of empiricism. Once you allow it the initial assumption, it can
   only support itself.

Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 9:26 am, Quentin Anciaux allco...@gmail.com wrote:
 
   I understand that is how you think of it, but I am pointing out your
   unconscious bias. You take consciousness for granted from the start.
 
  Because it is... I don't know/care for you, but I'm conscious... the
  existence of consciousness from my own POV, is not a discussion.

 The whole thought experiment has to do specifically with testing the
 existence of consciousness and POV. If we were being honest about the
 scenario, we would rely only on known comp truths to arrive at the
 answer. It's cheating to smuggle in human introspection in a test of
 the nature of human introspection. Let us think only in terms of
 'true, doctor'. If comp is valid, there should be no difference
 between 'true' and 'yes'.

 
    It may seem innocent, but in this case what it does is preclude the
    subjective thesis from being considered fundamental. It's a straw man
 
   Read what a straw man is... a straw man takes an opponent's argument and
   deforms it to mean something else that is easy to disprove.
 
  http://en.wikipedia.org/wiki/Straw_man

 a superficially similar yet unequivalent proposition (the straw
 man)

 I think that yes doctor makes a straw man of the non-comp position. It
 argues that we have to choose whether or not we believe in comp, when
 the non-comp position might be that with comp, we cannot choose to
 believe in anything in the first place.

   of the possibility of unconsciousness.
 
 If you've said yes, then this
 of course entails that you believe that 'free choice' and
 'personal
 value' (or the subjective experience of them) can be products of a
 computer program, so there's no contradiction.
 
 Right, so why ask the question? Why not just ask 'do you believe a
 computer program can be happy'?
 
A machine could think (the Strong AI thesis) does not entail comp (that
we are machines).
 
   I understand that, but we are talking about comp. The thought
   experiment focuses on the brain replacement, but the argument is
   already lost in the initial conditions which presuppose the ability to
   care or tell the difference and have free will to choose.
 
  But I have that ability and don't care to discuss it further. I'm
  conscious, I'm sorry you're not.

 But you aren't in the thought experiment.

   It's subtle,
   but so is the question of consciousness. Nothing whatsoever can be
   left unchallenged, including the capacity to leave something
   unchallenged.
 
    The fact that a computer program can be happy does not logically
    entail that we are ourselves computer programs. Maybe angels and gods
    (non-machines) can be happy too. To sum up:
 
COMP implies STRONG-AI
 
but
 
STRONG-AI does not imply COMP.
 
   I understand, but Yes Doctor considers whether STRONG-AI is likely to
   be functionally identical and fully interchangeable with human
   consciousness. It may not say that we are machines, but it says that
   machines can be us
 
  It says machines could be conscious as we are, without us being machines.
 
  That is strong AI.

 That's what I said. That makes machines more flexible than organically
 conscious beings. They can be machines or like us, but we can't fully
 be machines so we are less than machines.

 
  Comp says that we are machines; this entails strong AI, because if we are
  machines, and we are conscious, then of course machines can be conscious...
  But if you knew machines could be conscious, that doesn't mean humans
  are machines... we could be more than that.

 More than that in what way? Different maybe, but Strong AI by
 definition makes machines more than us, because we cannot compete with
 machines at being mechanical but they can compete as equals with us in
 every other way.

 
   - which is really even stronger, since we can only
   be ourselves but machines apparently can be anything.
 
  No, read above.

 No, read above.

 When it is posed as a logical
 consequence instead of a decision, it implicitly privileges the
 passive voice. We are invited to believe that we have chosen to
 agree
 to comp because there is a logical argument for it rather than an
 arbitrary preference committed to in advance. It is persuasion by
 rhetoric, not by science.
 
Nobody tries to advocate comp. We assume it. So if we get a
contradiction we can abandon it. But we find only weirdness, even
testable weirdness.
 
   I understand the reason for that though. Comp itself is the rabbit
   hole of empiricism. Once you allow it the initial assumption, it can
   only support itself.
 
  Then you could never show a contradiction for any hypothesis that you
  consider true... and that's simply false, hence you cannot be correct.

 You are doing exactly what I just said. You assume initially that all
 truths are bound by Aristotelian logic.

Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 12:53 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/23 Craig Weinberg whatsons...@gmail.com

  On Feb 23, 9:26 am, Quentin Anciaux allco...@gmail.com wrote:

I understand that is how you think of it, but I am pointing out your
unconscious bias. You take consciousness for granted from the start.

   Because it is... I don't know/care for you, but I'm conscious... the
   existence of consciousness from my own POV, is not a discussion.

  The whole thought experiment has to do specifically with testing the
  existence of consciousness and POV. If we were being honest about the
  scenario, we would rely only on known comp truths to arrive at the
  answer. It's cheating to smuggle in human introspection in a test of
  the nature of human introspection. Let us think only in terms of
  'true, doctor'. If comp is valid, there should be no difference
  between 'true' and 'yes'.

It may seem innocent, but in this case what it does is preclude the
subjective thesis from being considered fundamental. It's a straw man

   Read what a straw man is... a straw man takes an opponent's argument and
   deforms it to mean something else that is easy to disprove.

  http://en.wikipedia.org/wiki/Straw_man

  a superficially similar yet unequivalent proposition (the straw
  man)

  I think that yes doctor makes a straw man of the non-comp position. It
  argues that we have to choose whether or not we believe in comp, when
  the non-comp position might be that with comp, we cannot choose to
  believe in anything in the first place.

of the possibility of unconsciousness.

  If you've said yes, then this
  of course entails that you believe that 'free choice' and
  'personal
  value' (or the subjective experience of them) can be products of a
  computer program, so there's no contradiction.

  Right, so why ask the question? Why not just ask 'do you believe a
  computer program can be happy'?

 A machine could think (the Strong AI thesis) does not entail comp (that
 we are machines).

I understand that, but we are talking about comp. The thought
experiment focuses on the brain replacement, but the argument is
already lost in the initial conditions which presuppose the ability to
care or tell the difference and have free will to choose.

   But I have that ability and don't care to discuss it further. I'm
   conscious, I'm sorry you're not.

  But you aren't in the thought experiment.

It's subtle,
but so is the question of consciousness. Nothing whatsoever can be
left unchallenged, including the capacity to leave something
unchallenged.

 The fact that a computer program can be happy does not logically
 entail that we are ourselves computer programs. Maybe angels and gods
 (non-machines) can be happy too. To sum up:

 COMP implies STRONG-AI

 but

 STRONG-AI does not imply COMP.

I understand, but Yes Doctor considers whether STRONG-AI is likely to
be functionally identical and fully interchangeable with human
consciousness. It may not say that we are machines, but it says that
machines can be us

   It says machines could be conscious as we are, without us being machines.

   That is strong AI.

  That's what I said. That makes machines more flexible than organically
  conscious beings. They can be machines or like us, but we can't fully
  be machines so we are less than machines.

  Either we are machines or we are not... If machines can be conscious and
  we're not machines then we are *more* than machines... not less.

How do you figure? If we are A and not B, and machines are A and B,
how does that make us more?
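
On that framing the comparison is plain set inclusion. A minimal sketch,
with A and B as placeholder capabilities rather than anything established
in the thread:

\[
H = \{A\}, \qquad M = \{A, B\}, \qquad H \subsetneq M,
\]

so under inclusion the machine's set is strictly larger, and calling the
non-included side "more" would need some further ordering that the thought
experiment does not supply.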

   Comp says that we are machines; this entails strong AI, because if we are
   machines, and we are conscious, then of course machines can be conscious...
   But if you knew machines could be conscious, that doesn't mean humans
   are machines... we could be more than that.

  More than that in what way?

 We must contain infinite components if we are not emulable machines. So we
 are *more* than machines if machines can be conscious and we're not
 machines.

It only means we are different, not that we are more. If I am a doctor
but not a plumber and a machine is a doctor and a plumber then we are
both doctors. Just because I am not a plumber doesn't mean that I am
more than a doctor. If so, in what way?

Craig




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 12:57 pm, Quentin Anciaux allco...@gmail.com wrote:
 2012/2/23 Craig Weinberg whatsons...@gmail.com


Comp has no ability to contradict itself,

   You say so.

  Is it not true?

 No, it is not true... for example, proving that consciousness cannot be
 emulated on machines would prove computationalism wrong.

Consciousness isn't falsifiable in the first place.

 Showing that infinitely many components are necessary for consciousness
 would prove computationalism wrong,

Also not falsifiable. I can't prove that you are conscious or that you
don't require infinite components.

 showing that biological neurons are necessary for consciousness would
 prove computationalism wrong... and so on.

Not possible to prove, but possible to nearly disprove if you walk
yourself off of your brain onto a digital brain and back on.

Craig




Re: Yes Doctor circularity

2012-02-23 Thread 1Z


On Feb 22, 7:42 am, Craig Weinberg whatsons...@gmail.com wrote:
 Has someone already mentioned this?

 I woke up in the middle of the night with this, so it might not make
 sense...or...

 The idea of saying yes to the doctor presumes that we, in the thought
 experiment, bring to the thought experiment universe:

 1. our sense of our own significance (we have to be able to care about
 ourselves and our fate in the first place)

I can't see why you would think that is incompatible with CTM

 2. our perceptual capacity to jump to conclusions without logic (we
 have to be able to feel what it seems like rather than know what it
 simply is.)

Whereas that seems to be based on a mistake. It might be
that our conclusions ARE based on logic, just logic that
we are consciously unaware of. Alternatively, they might
just be illogical...even if we are computers. It is a subtle
fallacy to say that computers run on logic: they run on rules.
They have no guarantee to be rational. If the rules are
wrong, you have bugs. Humans are known to have
any number of cognitive bugs. The jumping thing
could be implemented by real or pseudo randomness, too.
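
A toy sketch of that last point (all names below are made up, purely for
illustration): a rule follower whose "jump to a conclusion" is seeded
pseudo-randomness. The rule is arbitrary rather than logical, and agents
with different seeds need not behave uniformly.

    import random

    def jumpy_guess(evidence, seed):
        # A rule-following "agent": the rule below is arbitrary, not a
        # logical inference; a wrong rule here would just be a bug.
        rng = random.Random(seed)     # per-agent seed, per-agent variation
        return rng.choice(evidence)   # jump to a conclusion without logic

    # Agents with different seeds can make different snap judgments from
    # the same evidence, though both are deterministic rule followers.
    evidence = ["weather balloon", "aircraft", "planet Venus", "alien craft"]
    print(jumpy_guess(evidence, seed=1))
    print(jumpy_guess(evidence, seed=2))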


 Because of 1, it is assumed that the thought experiment universe
 includes the subjective experience of personal value - that the
 patient has a stake, or 'money to bet'.

What's the problem? The experience (quale) or the value?
Do you know the value to be real? Do you think a computer
could not be deluded about value?

 Because of 2, it is assumed
 that libertarian free will exists in the scenario

I don't see that FW of a specifically libertarian sort is posited
in the scenario. It just assumes you can make a choice in
some sense.




Re: Yes Doctor circularity

2012-02-23 Thread Craig Weinberg
On Feb 23, 3:25 pm, 1Z peterdjo...@yahoo.com wrote:
 On Feb 22, 7:42 am, Craig Weinberg whatsons...@gmail.com wrote:

  Has someone already mentioned this?

  I woke up in the middle of the night with this, so it might not make
  sense...or...

  The idea of saying yes to the doctor presumes that we, in the thought
  experiment, bring to the thought experiment universe:

  1. our sense of our own significance (we have to be able to care about
  ourselves and our fate in the first place)

 I can't see why you would think that is incompatible with CTM

It is not posed as a question of 'Do you believe that CTM includes X',
but rather, 'using X, do you believe that there is any reason to doubt
that Y(X) is X.'


  2. our perceptual capacity to jump to conclusions without logic (we
  have to be able to feel what it seems like rather than know what it
  simply is.)

 Whereas that seems to be based on a mistake. It might be
 that our conclusions ARE based on logic, just logic that
 we are consciously unaware of.

That's a good point but it could just as easily be based on
subconscious idiopathic preferences. The patterns of human beings in
guessing and betting vary from person to person whereas one of the
hallmarks of computation is to get the same results. By default,
everything that a computer does is mechanistic. We have to go out of
our way to generate sophisticated algorithms to emulate naturalistic
human patterns. Human development proves just the contrary. We start
out wild and willful and become more mechanistic through
domestication.

 Alternatively, they might
 just be illogical...even if we are computers. It is a subtle
 fallacy to say that computers run on logic: they run on rules.

Yes! This is why they have a trivial intelligence and no true
understanding. Rule followers are dumb. Logic is a form of
intelligence which we use to write these rules that write more rules.
The more rules you have, the better the machine, but no amount of
rules make the machine more (or less) logical. Humans vary widely in
their preference for logic, emotion, pragmatism, leadership, etc.
Computers don't vary at all in their approach. It is all the same rule
follower only with different rules.
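
A minimal sketch of "one rule follower, many rule sets" (the rule tables
below are invented for illustration): the interpreter never changes, only
the table it is handed does.

    def follow(rules, state):
        # Apply whichever rule matches the current state; no rule, stop.
        while state in rules:
            state = rules[state]
        return state

    polite = {"greeted": "smile", "smile": "wave"}
    grumpy = {"greeted": "frown"}
    print(follow(polite, "greeted"))   # -> wave
    print(follow(grumpy, "greeted"))   # -> frown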

 They have no guarantee to be rational. If the rules are
 wrong, you have bugs. Humans are known to have
 any number of cognitive bugs. The jumping thing
 could be implemented by real or pseudo randomness, too.

  Because of 1, it is assumed that the thought experiment universe
  includes the subjective experience of personal value - that the
  patient has a stake, or 'money to bet'.

 What's the problem? The experience (quale) or the value?

The significance of the quale.

 Do you know the value to be real?

I know it to be subjective.

 Do you think a computer
 could not be deluded about value?

I think a computer can't be anything but turned off and on.


  Because of 2, it is assumed
  that libertarian free will exists in the scenario

 I don't see that FW of a specifically libertarian sort is posited
 in the scenario. It just assumes you can make a choice in
 some sense.

It assumes that choice is up to you and not determined by
computations.

Craig




Re: Yes Doctor circularity

2012-02-23 Thread Quentin Anciaux
2012/2/23 Craig Weinberg whatsons...@gmail.com

 On Feb 23, 12:57 pm, Quentin Anciaux allco...@gmail.com wrote:
  2012/2/23 Craig Weinberg whatsons...@gmail.com

 
 Comp has no ability to contradict itself,
 
You say so.
 
   Is it not true?
 
  No, it is not true... for example, proving that consciousness cannot be
  emulated on machines would prove computationalism wrong.

 Consciousness isn't falsifiable in the first place.


And you know that how? Because you said that if a machine acted in every way
as a human being you would still say it is not conscious... but you couldn't
say that if consciousness weren't falsifiable. How could you know a disproof
before knowing it? You asked how to falsify computationalism; showing that
consciousness cannot be emulated on machines is enough, whatever the proof
is.
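
The logic appealed to here is just the contrapositive: if comp asserts
that consciousness is machine-emulable, then

\[
\text{COMP} \rightarrow \text{Emulable}, \qquad \therefore\ \neg\text{Emulable} \rightarrow \neg\text{COMP},
\]

so any demonstration that consciousness cannot be emulated would refute
comp, which is what makes it falsifiable in principle.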


  Showing that infinitely many components are necessary for consciousness
  would prove computationalism wrong,

 Also not falsifiable.


It is: just show a component of human consciousness which *cannot* be
described in finite terms and is necessary for consciousness.


 I can't prove that you are conscious or that you
 don't require infinite components.

  showing that biological neurons are necessary for consciousness would
  prove computationalism wrong... and so on.

 Not possible to prove,


Why wouldn't it be possible to prove... Prove that it's not possible to
prove first...


 but possible to nearly disprove if you walk
 yourself off of your brain onto a digital brain and back on.

 Craig


-- 
All those moments will be lost in time, like tears in rain.




Re: Yes Doctor circularity

2012-02-23 Thread Pierz
Let us suppose you're right and... but hold on! We can't do that. That
would be circular. That would be sneaking in the assumption that
you're right from the outset. That would be 'shifty', fishy, etc
etc. You just don't seem to grasp the rudiments of philosophical
reasoning. 'Yes doctor' is not an underhand move. It asks you up-front
to assume that comp is true in order then to examine the implications
of that, whilst acknowledging (by calling it a 'bet') that this is
just a hypothesis, an unprovable leap of faith. You complain that
using the term 'bet' assumes non-comp (I suppose because computers
can't bet, or care about their bets), but that is just daft. You might
as well argue that the UDA is invalid because it is couched in natural
language, which no computer can (or according to you, could ever)
understand. If we accepted such arguments, we'd be incapable of
debating comp at all.

Saying 'no' to the doctor is anyone's right - nobody forces you to
accept that first step or tries to pull the wool over your eyes if you
choose to say 'yes'. Having said no you can then either say I don't
believe in comp because (I just don't like it, it doesn't feel right,
it's against my religion etc) or you can present a rational argument
against it. That is to say, if asked to justify why you say no, you
can either provide no reason and say simply that you choose to bet
against it - which is OK but uninteresting - or you can present some
reasoning which attempts to refute comp. You've made many such
attempts, though to be honest all I've ever really been able to glean
from your arguments is a sort of impressionistic revulsion at the idea
of humans being computers, yet one which seems founded in a
fundamental misunderstanding about what a computer is. You repeatedly
mistake the mathematical construct for the concrete, known object you
use to type up your posts. This has been pointed out many times, but
you still make arguments like that thing about one's closed eyes being
unlike a switched-off screen, which verged on ludicrous.

I should say I'm no comp proponent, as my previous posts should
attest. I'm agnostic on the subject, but at least I understand it.
Your posts can make exasperating reading.


On Feb 24, 8:14 am, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 23, 3:25 pm, 1Z peterdjo...@yahoo.com wrote:

  On Feb 22, 7:42 am, Craig Weinberg whatsons...@gmail.com wrote:

   Has someone already mentioned this?

   I woke up in the middle of the night with this, so it might not make
   sense...or...

   The idea of saying yes to the doctor presumes that we, in the thought
   experiment, bring to the thought experiment universe:

   1. our sense of our own significance (we have to be able to care about
   ourselves and our fate in the first place)

  I can't see why you would think that is incompatible with CTM

 It is not posed as a question of 'Do you believe that CTM includes X',
 but rather, 'using X, do you believe that there is any reason to doubt
 that Y(X) is X.'



   2. our perceptual capacity to jump to conclusions without logic (we
   have to be able to feel what it seems like rather than know what it
   simply is.)

  Whereas that seems to be based on a mistake. It might be
  that our conclusions ARE based on logic, just logic that
  we are consciously unaware of.

 That's a good point but it could just as easily be based on
 subconscious idiopathic preferences. The patterns of human beings in
 guessing and betting vary from person to person whereas one of the
 hallmarks of computation is to get the same results. By default,
 everything that a computer does is mechanistic. We have to go out of
 our way to generate sophisticated algorithms to emulate naturalistic
 human patterns. Human development proves just the contrary. We start
 out wild and willful and become more mechanistic through
 domestication.

  Alternatively, they might
  just be illogical...even if we are computers. It is a subtle
  fallacy to say that computers run on logic: they run on rules.

 Yes! This is why they have a trivial intelligence and no true
 understanding. Rule followers are dumb. Logic is a form of
 intelligence which we use to write these rules that write more rules.
 The more rules you have, the better the machine, but no amount of
 rules make the machine more (or less) logical. Humans vary widely in
 their preference for logic, emotion, pragmatism, leadership, etc.
 Computers don't vary at all in their approach. It is all the same rule
 follower only with different rules.

  They have no guarantee to be rational. If the rules are
  wrong, you have bugs. Humans are known to have
  any number of cognitive bugs. The jumping thing
  could be implemented by real or pseudo randomness, too.

   Because of 1, it is assumed that the thought experiment universe
   includes the subjective experience of personal value - that the
   patient has a stake, or 'money to bet'.

  What's the problem? The experience (quale) or the value?

 

Re: Yes Doctor circularity

2012-02-22 Thread Craig Weinberg
On Feb 22, 6:10 pm, Pierz pier...@gmail.com wrote:
 'Yes doctor' is merely an establishment of the assumption of comp.
 Saying yes means you are a computationalist. If you say no then you are
 not one, and one cannot proceed with the argument that follows -
 though then the onus will be on you to explain *why* you don't believe
 a computer can substitute for a brain.

That's what is circular. The question cheats by using the notion of a
bet to put the onus on us to take comp for granted in the first place
when there is no reason to presume that bets can exist in a universe
where comp is true. It's a loaded question, but in a sneaky way. It is
to say 'if you don't think the computer is happy, that's fine, but you
have to explain why'.

 If you've said yes, then this
 of course entails that you believe that 'free choice' and 'personal
 value' (or the subjective experience of them) can be products of a
 computer program, so there's no contradiction.

Right, so why ask the question? Why not just ask 'do you believe a
computer program can be happy'? When it is posed as a logical
consequence instead of a decision, it implicitly privileges the
passive voice. We are invited to believe that we have chosen to agree
to comp because there is a logical argument for it rather than an
arbitrary preference committed to in advance. It is persuasion by
rhetoric, not by science.

 In fact the circularity
 is in your reasoning. You are merely reasserting your assumption that
 choice and personal value must be non-comp,

No, the scenario asserts that by relying on the device of choice and
personal value as the engine of the thought experiment. My objection
is not based on any prejudice against comp I may have, it is based on
the prejudice of the way the question is posed.

 but that is exactly what
 is at issue in the yes doctor question. That is precisely what we're
 betting on.

If we are betting on anything then we are in a universe which has not
been proved to be supported by comp alone.

Craig




Re: Yes Doctor circularity

2012-02-22 Thread Stathis Papaioannou
On Thu, Feb 23, 2012 at 4:42 PM, Craig Weinberg whatsons...@gmail.com wrote:
 On Feb 22, 6:10 pm, Pierz pier...@gmail.com wrote:
 'Yes doctor' is merely an establishment of the assumption of comp.
 Saying yes means you are a computationalist. If you say no then you are
 not one, and one cannot proceed with the argument that follows -
 though then the onus will be on you to explain *why* you don't believe
 a computer can substitute for a brain.

 That's what is circular. The question cheats by using the notion of a
 bet to put the onus on us to take comp for granted in the first place
 when there is no reason to presume that bets can exist in a universe
 where comp is true. It's a loaded question, but in a sneaky way. It is
 to say 'if you don't think the computer is happy, that's fine, but you
 have to explain why'.

 If you've said yes, then this
 of course entails that you believe that 'free choice' and 'personal
 value' (or the subjective experience of them) can be products of a
 computer program, so there's no contradiction.

 Right, so why ask the question? Why not just ask 'do you believe a
 computer program can be happy'? When it is posed as a logical
 consequence instead of a decision, it implicitly privileges the
 passive voice. We are invited to believe that we have chosen to agree
 to comp because there is a logical argument for it rather than an
 arbitrary preference committed to in advance. It is persuasion by
 rhetoric, not by science.

 In fact the circularity
 is in your reasoning. You are merely reasserting your assumption that
 choice and personal value must be non-comp,

 No, the scenario asserts that by relying on the device of choice and
 personal value as the engine of the thought experiment. My objection
 is not based on any prejudice against comp I may have, it is based on
 the prejudice of the way the question is posed.

 but that is exactly what
 is at issue in the yes doctor question. That is precisely what we're
 betting on.

 If we are betting on anything then we are in a universe which has not
 been proved to be supported by comp alone.

The yes doctor scenario considers the belief that if you are issued
with a computerised brain you will feel just the same. It's equivalent
to the yes barber scenario: that if you receive a haircut you will
feel just the same, and not become a zombie or otherwise radically
different being.


-- 
Stathis Papaioannou
