Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-17 Thread Bruno Marchal


On 17 Oct 2013, at 03:19, LizR wrote:

On 17 October 2013 14:08, Craig Weinberg whatsons...@gmail.com  
wrote:
How could a machine be racist if it is totally incapable of any form  
of relation or sentience, according to you?


Not according to me, I'm going along with Bruno. By his view, I am a  
machine, or a product of a machine, so if I am racist against  
machines, then it is inevitable that there will be machines who are  
similarly racist against humans or biology - the only difference  
being that they may be placed in a position to exert much more  
control on the world.


I don't remember Bruno saying that. (Unless one considers arithmetic  
to be a machine?)


Just to be clear, I often use the term "elementary arithmetic" to denote certain (Robinsonian or not) theories or machines. Those are finite entities (with an infinite set of beliefs/theorems).


I use "Arithmetic" or "Arithmetical truth" for the set of true arithmetical propositions.


The first is a machine; the second is not. Arithmetical truth is not Turing emulable. It is very big, even from outside. And it is inconceivably big when seen from inside.


Bruno







--
You received this message because you are subscribed to the Google  
Groups Everything List group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


http://iridia.ulb.ac.be/~marchal/





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-17 Thread LizR
On 17 October 2013 16:58, Craig Weinberg whatsons...@gmail.com wrote:


 I would have agreed with Bruno completely a few years ago, but since then
 I think that it makes more sense that arithmetic is a kind of sense than
 that sense could be a kind of arithmetic. I think that mechanism is a kind
 of arithmetic and arithmetic is a kind of sense, as is private awareness a
 kind of sense.


I'm sure that makes sense! (Even multisense, perhaps.)

But I may need a bit more explanation...which I hope I will get once I
have read what's at those links you posted.



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-17 Thread LizR
On 17 October 2013 21:36, Bruno Marchal marc...@ulb.ac.be wrote:

 Arithmetical truth is not Turing emulable.

 Is that anything to do with the halting problem ?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-17 Thread Bruno Marchal


On 17 Oct 2013, at 11:08, LizR wrote:


On 17 October 2013 21:36, Bruno Marchal marc...@ulb.ac.be wrote:
Arithmetical truth is not Turing emulable.

Is that anything to do with the halting problem ?


The halting problem gives an example of a simple problem which is not mechanically solvable.
For any such theory, there will be machines x such that the theory cannot prove propositions like "machine 567 does not halt"; translated into arithmetic, this defines an arithmetical truth escaping the power of that machine.
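[Editor's sketch: the asymmetry here, that "machine x halts" can be confirmed simply by running the machine, while "machine x does not halt" can never be confirmed by any finite run, can be illustrated with a step-bounded simulator. A hedged toy of my own, with the Collatz map standing in for an arbitrary program; nothing below is from the thread.]

```python
def collatz_steps(n, budget):
    """Semi-decide halting for the Collatz map started at n.

    Returns the step count if the run reaches 1 within `budget` steps
    (halting verified by running), or None when the budget runs out,
    which is NOT a proof of non-halting: "it does not halt" is pi_1,
    and no finite amount of running can establish it.
    """
    steps = 0
    while n != 1:
        if steps >= budget:
            return None  # no verdict: maybe it halts later, maybe never
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6, 100))   # 8: halting verified by running
print(collatz_steps(27, 10))   # None: budget exhausted, nothing proved
```

[The positive answers are semi-decidable (sigma_1); the negative ones are exactly the pi_1 truths discussed below.]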


But there is the more complex problem "x is the code of a total computable function". Being more complex, it is simpler to show that it is unsolvable (as we did, if you see what I am thinking about), and from it you get that there is no general theory for deciding between totality and strict partiality of machines; for any machine this generates deeper and more complex functions to compute, or arithmetical sets to decide, and that defines more complex arithmetical propositions.


When you look at computability in terms of arithmetical provability, Turing universality corresponds to the sigma_1 complete set. A sigma_1 proposition has the shape EnP(n), with P(n) completely decidable (it can even be a diophantine equation).


A machine, an entity, a set, a number... is said to be sigma_1 complete if, each time a proposition EnP(n) is true, it can prove it. It is complete in the sense of proving all true sigma_1 sentences.


You, Liz, are sigma_1 complete (assuming you are immortal; we are working in Plato's heaven, OK?). Indeed, if there is a number n such that P(n), that is, if EnP(n) is true, then, given that P is easy to verify, you can verify P for 0, and if 0 does not verify P, look at s(0), etc. If EnP(n) is true, that method guarantees that you will find a witness.
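[Editor's sketch: the procedure described here, verify P for 0, then s(0), and so on, is just unbounded search, and fits in a few lines. The predicate passed in is my own toy example.]

```python
from itertools import count

def sigma1_witness(P):
    """Unbounded search for a witness of the sigma_1 sentence En P(n).

    P must be a decidable predicate on the naturals. The loop halts,
    returning the least witness, exactly when En P(n) is true; when the
    sentence is false the search runs forever, which is why this gives
    sigma_1 *completeness* rather than a decision procedure.
    """
    for n in count():          # n = 0, s(0), s(s(0)), ...
        if P(n):
            return n

print(sigma1_witness(lambda n: n * n > 50))  # 8, the least such n
```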


Sigma_1 completeness is one of the many characterizations of Turing universality.


The price of universality? The existence, for every universal machine, of propositions like ~EnP(n) which are true but cannot be proved by it.


Note that those propositions ~EnP(n) are equivalent to An~P(n) (to say that there is no ferocious number is the same as saying that all numbers are non-ferocious).


And if P(n) is completely verifiable and decidable, ~P(n) is too. So the type of formula An~P(n) is really the same as the type AnP(n). Those are the pi_1 sentences, typically negations of sigma_1 sentences.


Then you have the sigma_2 sentences, with the shape EnAmP(n, m), with P(n, m) easily decidable.

And their negations, the pi_2 sentences, AnEmP(n, m), and so on.
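[Editor's sketch: the quantifier shapes can be made concrete with bounded searches over a toy decidable matrix P(n, m). The cutoff B is the whole point: the genuine quantifiers are unbounded, so bounded checking can refute, but never establish, the universally quantified parts. The predicate is my own choice, purely illustrative.]

```python
B = 50                    # cutoff; the genuine quantifiers are unbounded
def P(n, m):              # a toy decidable matrix, chosen for illustration
    return m < n

sigma1 = any(P(n, 0) for n in range(B))                         # En P(n, 0)
pi1 = all(not P(0, m) for m in range(B))                        # Am ~P(0, m)
sigma2 = any(all(P(n, m) for m in range(B)) for n in range(B))  # En Am P(n, m)

print(sigma1, pi1, sigma2)  # True True False
```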

The computable = the sigma_1.

But arithmetical truth contains the true sigma_1 and the true pi_1 sentences (which might, or might not, contain the Riemann hypothesis), the true sigma_2, etc. It is the union of all *true* sigma_i and pi_i formulas. That set is not just non-computable; it is not even definable in the arithmetical language (as the first person will turn out to be, too).


The computable is only a very tiny part of arithmetical truth, but (with comp) the sigma_1 complete machine is already clever enough to get an idea of how hard it is for itself to solve pi_1 problems, and above. It can also understand why it is concerned by those truths.


Machines can climb those degrees of non-solvability by the use of oracles, which are nothing more than the answers to some non-solvable problems. This is useful for classifying the degrees of insolubility. Imagine an oracle for the halting problem: that would help to solve pi_1 problems, but it would not provide a solution to the sigma_2 problems.
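[Editor's sketch: relativized computation can be pictured by passing the oracle in as an ordinary function, so a pi_1 question costs one query. The "oracle" below is a hand-written lookup table over a tiny closed universe of named machines; a genuine halting oracle cannot exist as a program, so everything here is my own illustration.]

```python
def never_halts(halts, machine):
    """Decide the pi_1 statement 'machine does not halt', given a
    halting oracle `halts`: a single oracle query settles it."""
    return not halts(machine)

# Toy 'oracle': a table standing in for the impossible program.
halts = {"loop_forever": False, "count_to_10": True}.__getitem__

print(never_halts(halts, "loop_forever"))  # True
print(never_halts(halts, "count_to_10"))   # False
```

[Even with this oracle, a sigma_2 question EnAmP(n, m) still hides an unbounded search over n, which is why one oracle climbs only one degree.]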


Hope I was not too technical; we can come back to this sooner or later.


Bruno











Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Bruno Marchal


On 16 Oct 2013, at 03:01, Craig Weinberg wrote:




On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:






I can give you the code in Lisp, and it is up to you to find a good free Lisp. But don't mind too much; AUDA is an integral description of the interview. Today, such interviews are done with paper and pencil, and appear in books and papers.
You'd better buy Boolos 1979, or 1993, but you have to study more logic too.


Doesn't it seem odd that there isn't much out there that is newer  
than 20 years old,


That is simply wrong, and I don't see why you say that. But even if  
that was true, that would prove nothing.


It still seems odd. There are a lot of good programmers out there.  
If this is the frontier of machine intelligence, where is the  
interest? Not saying it proves something, but it doesn't instill  
much confidence that this is as fertile an area as you imply.


A revolutionary contemporary result (Gödel's incompleteness) shows that the oldest definition of knowledge (Greek, Chinese, Indian) can be applied to the oldest philosophy, mechanism, and that this is indeed very fertile, if only by providing an utterly transparent arithmetical interpretation of Plotinus' theology, which is the peak of the rationalist approach in that field, and you say that this doesn't instill any confidence in mechanism?

and that paper and pencils are the preferred instruments?


Maybe I was premature in saying it was promissory... it would appear that there has not been any promise for it in quite some time.






It is almost applicable, but the hard part is that it is blind to  
its own blindness, so that the certainty offered by mathematics  
comes at a cost which mathematics has no choice but to deny  
completely. Because mathematics cannot lie,


G* proves []f

Even Peano Arithmetic can lie.
Mathematical theories (sets of beliefs) can lie.

Only truth cannot lie, but nobody knows the truth as such.

 Something that is a paradox or inconsistent is not the same thing  
as an intentional attempt to deceive. I'm not sure what 'G* proves  
[]f' means but I think it will mean the same thing to anyone who  
understands it, and not something different to the boss than it  
does to the neighbor.


Actually it will have as many meanings as there are correct machines (a lot), but the laws remain the same. Then adding the non-monotonic umbrella, saving the Löbian machines from the constant mistakes and lies they make, provides different interpretations of []f, like


I dream,
I die,
I get mad,
I am in a cul-de-sac
I get wrong

etc.

It will depend on the intensional nuances in play.

Couldn't the machine output the same product as musical notes or  
colored pixels instead?


Why not. Humans can do that too.

If I asked a person to turn some data into music or art, no two  
people would agree on what that output would be and no person's  
output would be decipherable as input to another person. Computers,  
on the other hand, would automatically be able to reverse any kind  
of i/o in the same way.


I don't see how.



One computer could play a file as a song, and another could make a  
graphic file out of the audio line out data which would be fully  
reversible to the original binary file.
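[Editor's sketch: the lossless reversibility claimed here can be illustrated: any byte string can be packed into rows of grayscale pixel values and unpacked back bit-for-bit, provided the padding is tracked. My own illustrative code, not anything from the thread.]

```python
def bytes_to_pixels(data, width=8):
    """Pack raw bytes into rows of grayscale pixel values (0-255)."""
    padded = data + b"\x00" * (-len(data) % width)
    rows = [list(padded[i:i + width]) for i in range(0, len(padded), width)]
    return rows, len(data)          # keep the true length to drop padding later

def pixels_to_bytes(rows, length):
    """Invert the packing: flatten the rows and strip the padding."""
    return bytes(v for row in rows for v in row)[:length]

song = b"any binary data: audio samples, text, code"
rows, n = bytes_to_pixels(song)
assert pixels_to_bytes(rows, n) == song   # lossless round trip
```

[Note the round trip is exact only because every step stays digital; capturing an analog line-out signal, as in the example above, would generally not be bit-exact.]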


If the computer can do it, me too.



it cannot intentionally tell the truth either, and no matter how  
sophisticated and self-referential a logic it is based on, it can  
never transcend its own alienation from feeling, physics, and  
authenticity.


That is correct, but again, that is justifiable by all correct  
sufficiently rich machines.


Not sure I understand. Are you saying that we, as rich machines,  
cannot intentionally lie or tell the truth either?


No, I am saying that all correct machines can eventually justify that if they are correct they can't express it, and that if they are consistent, it is consistent that they are wrong. So it means they can eventually exploit the false locally. Teams of universal numbers get entangled in very subtle prisoner's dilemmas.

Universal machines can lie, and can crash.

That sounds like they can lie only when they calculate that they  
must, not that they can lie intentionally because they enjoy it or  
out of sadism.


That sounds like an opportunistic inference.

I think that computationalism maintains the illusion of legitimacy  
on basis of seducing us to play only by its rules.


The technical point is that low-level rules lead to no rules at the higher levels. You continue to criticize a 19th-century reductionist conception of machines. We know today that such a reductionist view of machines is plain wrong.





It says that we must give the undead a chance to be alive - that we  
cannot know for sure whether a machine is not at least as worthy of  
our love as a newborn baby.


You cannot do that comparison. Is a newborn alien worthy of human love? Other parameters than thinking and 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Wednesday, October 16, 2013 4:21:34 AM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2013, at 03:01, Craig Weinberg wrote:



 On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:





 I can give you the code in Lisp, and it is up to you to find a good free 
 lisp. But don't mind too much, AUDA is an integral description of the 
 interview. Today, such interviews is done by paper and pencils, and appears 
 in books and papers.
 You better buy Boolos 1979, or 1993, but you have to study more logic 
 too.


 Doesn't it seem odd that there isn't much out there that is newer than 20 
 years old, 


 That is simply wrong, and I don't see why you say that. But even if that 
 was true, that would prove nothing.


 It still seems odd. There are a lot of good programmers out there. If this 
 is the frontier of machine intelligence, where is the interest? Not saying 
 it proves something, but it doesn't instill much confidence that this is as 
 fertile an area as you imply.


 A revolutionary contemporary result (Gödel's incompleteness) shows that 
 the oldest definition of knowledge (greeks, chinese, indians) can be 
 applied to the oldest philosophy, mechanism, and that this is indeed very 
 fertile, if only by providing an utterly transparent arithmetical 
 interpretation of Plotinu's theology, which is the peak of the rationalist 
 approach in that field, and you say that this instill any confidence in 
 mechanism?


It doesn't instill confidence in your interpretation of incompleteness. For myself, and I am guessing for others, incompleteness is about the lack of completeness of mathematical systems rather than a hyper-completeness of arithmetic metaphysics. Do you say that Gödel was a supporter of the Plotinus view, or are you saying that even he didn't realize the implications?




  



 and that paper and pencils are the preferred instruments?


 Maybe I was premature in saying it was promissory...it would appears that 
 there has not been any promise for it in quite some time.
  




 It is almost applicable, but the hard part is that it is blind to its 
 own blindness, so that the certainty offered by mathematics comes at a 
 cost 
 which mathematics has no choice but to deny completely. Because 
 mathematics 
 cannot lie, 


 G* proves []f

 Even Peano Arithmetic can lie.  
 Mathematical theories (set of beliefs) can lie.

 Only truth cannot lie, but nobody know the truth as such.


  Something that is a paradox or inconsistent is not the same thing as an 
 intentional attempt to deceive. I'm not sure what 'G* proves []f' means 
 but I think it will mean the same thing to anyone who understands it, and 
 not something different to the boss than it does to the neighbor.


 Actually it will have as much meaning as there are correct machines (a 
 lot), but the laws remains the same. Then adding the non-monotonical 
 umbrella, saving the Lôbian machines from the constant mistakes and lies 
 they do, provides different interpretation of []f, like

 I dream,
 I die,
 I get mad,
 I am in a cul-de-sac
 I get wrong

 etc.

 It will depend on the intensional nuances in play.


 Couldn't the machine output the same product as musical notes or colored 
 pixels instead?


 Why not. Humans can do that too.


 If I asked a person to turn some data into music or art, no two people 
 would agree on what that output would be and no person's output would be 
 decipherable as input to another person. Computers, on the other hand, 
 would automatically be able to reverse any kind of i/o in the same way. 


 I don't see how.


By scanning the image or recording the sound in the same way that it was 
encoded to be played in the first place.
 




 One computer could play a file as a song, and another could make a graphic 
 file out of the audio line out data which would be fully reversible to the 
 original binary file.


 If the computer can do it, me too.


You can't make a graphic file out of a song that 'is' the data of a song. 
Your artistic interpretation will not match anyone else's.
 

 it cannot intentionally tell the truth either, and no matter how 
 sophisticated and self-referential a logic it is based on, it can never 
 transcend its own alienation from feeling, physics, and authenticity. 


 That is correct, but again, that is justifiable by all correct 
 sufficiently rich machines.


 Not sure I understand. Are you saying that we, as rich machines, cannot 
 intentionally lie or tell the truth either?


 No, I am saying that all correct machines can eventually justify that if 
 they are correct they can't  express it, and if they are consistent, it 
 will be consistent they are wrong. So it means they can eventually exploits 
 the false locally. Team of universal numbers get entangled in very subtle 
 prisoner dilemma. 
 Universal machines can lie, and can crash.


 That sounds like they can lie only when they calculate that they must, 
 not that they can lie intentionally 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Bruno Marchal


On 16 Oct 2013, at 14:49, Craig Weinberg wrote:




On Wednesday, October 16, 2013 4:21:34 AM UTC-4, Bruno Marchal wrote:

On 16 Oct 2013, at 03:01, Craig Weinberg wrote:




On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:






I can give you the code in Lisp, and it is up to you to find a  
good free lisp. But don't mind too much, AUDA is an integral  
description of the interview. Today, such interviews is done by  
paper and pencils, and appears in books and papers.
You better buy Boolos 1979, or 1993, but you have to study more  
logic too.


Doesn't it seem odd that there isn't much out there that is newer  
than 20 years old,


That is simply wrong, and I don't see why you say that. But even if  
that was true, that would prove nothing.


It still seems odd. There are a lot of good programmers out there.  
If this is the frontier of machine intelligence, where is the  
interest? Not saying it proves something, but it doesn't instill  
much confidence that this is as fertile an area as you imply.


A revolutionary contemporary result (Gödel's incompleteness) shows  
that the oldest definition of knowledge (greeks, chinese, indians)  
can be applied to the oldest philosophy, mechanism, and that this is  
indeed very fertile, if only by providing an utterly transparent  
arithmetical interpretation of Plotinu's theology, which is the peak  
of the rationalist approach in that field, and you say that this  
instill any confidence in mechanism?


It doesn't instill confidence of your interpretation of  
incompleteness. For myself, and I am guessing for others,  
incompleteness is about the lack-of-completeness of mathematical  
systems rather than a hyper-completeness of arithmetic metaphysics.


The whole point here is that the machines prove their own theorems about themselves. The meta-arithmetic belongs to arithmetic. I don't say much more than what the machines already say. I just need the classical theory of knowledge (the modal logic S4), just to compare with the machine's theory (S4Grz), like I need QM to compare with the machine's statistics on computations seen from inside.






Do you say that Gödel was a supporter of the Plotinus view, or are  
saying that even he didn't realize the implications.


Gödel was indeed a defender of Platonism, at the start. But he was quite slow on Church's thesis, and not so quick on mechanism either. That is suggested notably by his leaning toward Anselm's notion of God.





The reductionist view of machines may be wrong, but that doesn't  
mean that its absence of rules at higher level translates into  
proprietary feelings, sounds, flavors, etc. Why would it?


Why not? The evidence is that a brain does that. You need to find something non-Turing-emulable in the brain to provide evidence that it does not.





In theory it could, sure, but the universe that we live in seems to  
suggest exactly the opposite.



But we can understand what that universe is, and why it suggests this, for the machine embedded in that apparent universe.










It says that we must give the undead a chance to be alive - that we  
cannot know for sure whether a machine is not at least as worthy of  
our love as a newborn baby.


You cannot do that comparison. Is a newborn alien worthy of human love? Other parameters than thinking and consciousness are at play.


What are those parameters, and how do they fit in with mechanism?


The parameters are that love asks for some close familiarity. It fits with mechanism through long computational histories.
Anyway, it is up to you to find something non-mechanical. I don't defend comp; I just try to show why your methodology for criticizing comp is not valid.









To fight this seduction,


You beg the question. You are the one creating an enemy here, just from your prejudice and lack of reflection on machines.


Sometimes an enemy creates themselves.


That is weird for an enemy whose autonomy you reject.








we must use what is our birthright as living beings. We can be  
opportunistic, we can cheat, and lie, and unplug machines whenever  
we want, because that is what makes us superior to recorded logic.  
We are alive, so we get to do whatever we want to that which is not  
alive.


Here you are more than invalid. You are frightening.
We have compared you to a racist, and what you say now reminds me of the strategy used by the Nazis to prove that white Caucasians were superior. Lies, lies and lies.


We can lie, and machines can lie, but I am not sure it is the best science, or the best politics.

With comp, God = Truth, and lies are the Devil's play.

If there is a chance that a machine will be born that is like me,  
only billions of times more capable and more racist than I am  
against all forms of life, wouldn't you say that it would be worth  
trying to stop at all costs?


Should we prevent human birth because it might lead to people like  
Hitler?

You are 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Platonist Guitar Cowboy
On Wed, Oct 16, 2013 at 2:49 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Wednesday, October 16, 2013 4:21:34 AM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2013, at 03:01, Craig Weinberg wrote:





 we must use what is our birthright as living beings. We can be
 opportunistic, we can cheat, and lie, and unplug machines whenever we want,
 because that is what makes us superior to recorded logic. We are alive, so
 we get to do whatever we want to that which is not alive.

 Craig, these are murky waters you're fishing in this time.

I forget who said the following: "X is giving reasons for why reasoning is bad. His reasoning was bad."



 Here you are more than invalid. You are frightening.
 We have compared you to racist, and what you say now reminds me of the
 strategy used by Nazy to prove that the white caucasian were superior.
 Lies, lies and lies.

 We can lie, machines can lie, but I am not sure it is the best science,
 or the best politics.
 With comp, God = Truth, and lies are Devil's play.


 If there is a chance that a machine will be born that is like me, only
 billions of times more capable and more racist than I am against all forms
 of life, wouldn't you say that it would be worth trying to stop at all
 costs?


How could a machine be racist if it is totally incapable of any form of
relation or sentience, according to you?




 But thanks for warning us about the way you proceed.

 This does not help for your case,


 I am just the beginning. Your son-in-law will make me seem like Snoopy.


If the above holds and you're not just playing, then these ideas make you
totally mainstream: hunger for opportunistic dominance and perverted sense
of liberty so expansive that we poison the very air we breathe and the soil
that grounds our homes. You'd be saying nothing new at all, just the
opposite in fact.

The opportunism program is so old, cockroaches run it successfully and will
continue to do so. They also eat their young. Makes sense, consistent with
opportunism, but not the apex of aesthetics to put it mildly. To anybody
with the luxury of cultivating an aesthetic sense, even when inevitable,
that is merely ugly and to be avoided.

 PGC



 Craig



 Bruno



 http://iridia.ulb.ac.be/~marchal/







Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Wednesday, October 16, 2013 5:34:08 PM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2013, at 14:49, Craig Weinberg wrote:



 On Wednesday, October 16, 2013 4:21:34 AM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2013, at 03:01, Craig Weinberg wrote:



 On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:





 I can give you the code in Lisp, and it is up to you to find a good 
 free lisp. But don't mind too much, AUDA is an integral description of the 
 interview. Today, such interviews is done by paper and pencils, and 
 appears 
 in books and papers.
 You better buy Boolos 1979, or 1993, but you have to study more logic 
 too.


 Doesn't it seem odd that there isn't much out there that is newer than 
 20 years old, 


 That is simply wrong, and I don't see why you say that. But even if that 
 was true, that would prove nothing.


 It still seems odd. There are a lot of good programmers out there. If 
 this is the frontier of machine intelligence, where is the interest? Not 
 saying it proves something, but it doesn't instill much confidence that 
 this is as fertile an area as you imply.


 A revolutionary contemporary result (Gödel's incompleteness) shows that 
 the oldest definition of knowledge (greeks, chinese, indians) can be 
 applied to the oldest philosophy, mechanism, and that this is indeed very 
 fertile, if only by providing an utterly transparent arithmetical 
 interpretation of Plotinu's theology, which is the peak of the rationalist 
 approach in that field, and you say that this instill any confidence in 
 mechanism?


 It doesn't instill confidence of your interpretation of incompleteness. 
 For myself, and I am guessing for others, incompleteness is about the 
 lack-of-completeness of mathematical systems rather than a 
 hyper-completeness of arithmetic metaphysics. 


 The whole point here is that the machines prove their own theorem about 
 themselves.


Which is why their proofs are not reliable as general principles. If you ask people who cannot hear about music, they might confirm each other's view that music consists only of vibrations that you can feel through your body.
 

 The meta-arithmetic belongs to arithmetic. I don't say much more than what 
 the machines already say. I just need the classical theory of knowledge 
 (the modal logic S4), just to compare with the machine's theory (S4Grz), 
 like I need QM to compare with the machines's statistics on computation 
 seen from inside.


I think that all theories of logic are incestuous and ungrounded.
 






 Do you say that Gödel was a supporter of the Plotinus view, or are saying 
 that even he didn't realize the implications.


 Gödel was indeed a defender of platonism, at the start. But he has been 
 quite slow on Church thesis, and not so quick on mechanism either. That is 
 suggested notably by his leaning toward Anselm notion of God.


Platonism is alright, but it just doesn't go far enough. It takes the 
ability to sense forms for granted.
 




 The reductionist view of machines may be wrong, but that doesn't mean that 
 its absence of rules at higher level translates into proprietary feelings, 
 sounds, flavors, etc. Why would it? 


 Why not? Evidences are that a brain does that. You need to find something 
 non-Turing emulable in the brain to provide evidences that it does not.


No, I don't need to find something non-Turing-emulable in the brain, any more than I need to find something non-pixel-descriptive in a TV set to provide evidence that a TV show can have characters and dialogue.
 





 In theory it could, sure, but the universe that we live in seems to 
 suggest exactly the opposite.



 But we can understand what is that universe, and why it suggests this, for 
 the machine embedded in that apparent universe.


I have no problem with using mathematics to describe a theoretical 
universe. I don't even say that such a universe could not be real, I only 
say that the universe which hosts our experience does not quite make sense 
as a mathematical universe.
 









 It says that we must give the undead a chance to be alive - that we 
 cannot know for sure whether a machine is not at least as worthy of our 
 love as a newborn baby. 


 You cannot do that comparison. Is an newborn alien worthy of human love? 
 Other parameters than thinking and consciousness are at play.


 What are those parameters, and how do they fit in with mechanism?


 The parameters are that love asks for some close familiarity. It fits with 
 mechanism through long computational histories.


You can have long computational histories without inventing love, surely?
 

 Anyway, it is up to you to find something non-mechanical. I don't defend 
 comp, I just try to show why your methodology for criticizing comp is not 
 valid.


I already am something non-mechanical, as well as all of the qualia that have 
ever been experienced.
 




  



 To fight this seduction, 


 You beg the question. You are the one creating an enemy 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Wednesday, October 16, 2013 8:18:28 PM UTC-4, Platonist Guitar Cowboy 
wrote:




 On Wed, Oct 16, 2013 at 2:49 PM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, October 16, 2013 4:21:34 AM UTC-4, Bruno Marchal wrote:


 On 16 Oct 2013, at 03:01, Craig Weinberg wrote:





 we must use what is our birthright as living beings. We can be 
 opportunistic, we can cheat, and lie, and unplug machines whenever we want, 
 because that is what makes us superior to recorded logic. We are alive, so 
 we get to do whatever we want to that which is not alive.

 Craig, these are murky waters you're fishing in this time.

 I forgot who said the following: X is giving reasons for why reasoning is 
 bad. His reasoning was bad.


Murky, yes. I think that consciousness and life are trans-rational, 
trans-measurable, and trans-ontological.
 

  


 Here you are more than invalid. You are frightening. 
 We have compared you to a racist, and what you say now reminds me of the 
 strategy used by the Nazis to prove that white Caucasians were superior. 
 Lies, lies and lies.

 We can lie, machines can lie, but I am not sure it is the best science, 
 or the best politics.
 With comp, God = Truth, and lies are Devil's play.


 If there is a chance that a machine will be born that is like me, only 
 billions of times more capable and more racist than I am against all forms 
 of life, wouldn't you say that it would be worth trying to stop at all 
 costs?


 How could a machine be racist if it is totally incapable of any form of 
 relation or sentience, according to you?


Not according to me, I'm going along with Bruno. By his view, I am a 
machine, or a product of a machine, so if I am racist against machines, 
then it is inevitable that there will be machines who are similarly racist 
against humans or biology - the only difference being that they may be 
placed in a position to exert much more control on the world.
 

  



 But thanks for warning us about the way you proceed.

 This does not help for your case,


 I am just the beginning. Your son-in-law will make me seem like Snoopy.


 If the above holds and you're not just playing, then these ideas make you 
 totally mainstream: hunger for opportunistic dominance and perverted sense 
 of liberty so expansive that we poison the very air we breathe and the soil 
 that grounds our homes. You'd be saying nothing new at all, just the 
 opposite in fact. 


Even if that's not what I advocate personally, my point is that there is no 
reason to assume that an AI would be any different, given that we are 
machines.
 


 The opportunism program is so old, cockroaches run it successfully and 
 will continue to do so. They also eat their young. Makes sense, consistent 
 with opportunism, but not the apex of aesthetics to put it mildly. To 
 anybody with the luxury of cultivating an aesthetic sense, even when 
 inevitable, that is merely ugly and to be avoided.


I agree, but that's because I'm not a machine. The part of me that is a 
machine is no better or worse than a cockroach.

Craig
 


  PGC
  
  

 Craig
  


 Bruno



  http://iridia.ulb.ac.be/~marchal/



  -- 
 You received this message because you are subscribed to the Google Groups 
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an 
 email to everything-li...@googlegroups.com.
 To post to this group, send email to everyth...@googlegroups.com.
 Visit this group at http://groups.google.com/group/everything-list.
 For more options, visit https://groups.google.com/groups/opt_out.






Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread LizR
On 17 October 2013 14:08, Craig Weinberg whatsons...@gmail.com wrote:

 How could a machine be racist if it is totally incapable of any form of
 relation or sentience, according to you?


 Not according to me, I'm going along with Bruno. By his view, I am a
 machine, or a product of a machine, so if I am racist against machines,
 then it is inevitable that there will be machines who are similarly racist
 against humans or biology - the only difference being that they may be
 placed in a position to exert much more control on the world.


I don't remember Bruno saying that. (Unless one considers arithmetic to be
a machine?)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Wednesday, October 16, 2013 9:19:13 PM UTC-4, Liz R wrote:

 On 17 October 2013 14:08, Craig Weinberg whats...@gmail.com wrote:

 How could a machine be racist if it is totally incapable of any form of 
 relation or sentience, according to you?


 Not according to me, I'm going along with Bruno. By his view, I am a 
 machine, or a product of a machine, so if I am racist against machines, 
 then it is inevitable that there will be machines who are similarly racist 
 against humans or biology - the only difference being that they may be 
 placed in a position to exert much more control on the world.

  
 I don't remember Bruno saying that. (Unless one considers arithmetic to be 
 a machine?)


Yes, if I understand his view correctly, Bruno considers arithmetic to be 
behind mechanism, mechanism to be behind awareness, and awareness to be 
behind physics. 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Tuesday, October 15, 2013 9:11:02 PM UTC-4, Liz R wrote:

 On 16 October 2013 14:05, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, October 15, 2013 8:51:17 PM UTC-4, Liz R wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but 
 that does not mean that a human experience can be created from the outside 
 in. That's what all of these points are about - a machine does not build 
 itself from a single reproducing cell. A machine does not care what it is 
 doing, it doesn't get bored or tired. A machine is great at doing things 
 that people are terrible at doing and vice versa. There is much more 
 evidence to suggest that human experience is the polar opposite of 
 mechanism than that it could be defined by mechanism.

 So what is a human being, if not a (very complicated, 
 molecular-component-containing) machine? (Or is machine being 
 defined in a specialised sense here?) 


 A human being is the collective self experience received during the 
 phenomenon known as a human lifetime. The body is only one aspect of that 
 experience - a reflection defined as a familiar body in the context of its 
 own perception.


 That's cool, but if the body is a (complicated, etc) machine, then either 
 those experiences are part of the machine, or they're something else. If 
 they're part of the machine then you're wrong in some of the above-quoted 
 statements (and you contradicted yourself by saying that a machine doesn't 
 grow from a cell, by the way) If it's something else, then - depending on 
 the nature of that something else - it's possible that other things have 
 it, and we don't recognise the fact. It would be important to know what 
 that something else is before one can construct an argument. (For example, 
 I believe Bruno thinks the something else is an infinite sheaf of 
 computations.)


Have you considered that it might be the body which is part of a sheaf of 
experiences? 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread LizR
On 17 October 2013 16:12, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, October 15, 2013 9:11:02 PM UTC-4, Liz R wrote:

 On 16 October 2013 14:05, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, October 15, 2013 8:51:17 PM UTC-4, Liz R wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but
 that does not mean that a human experience can be created from the outside
 in. That's what all of these points are about - a machine does not build
 itself from a single reproducing cell. A machine does not care what it is
 doing, it doesn't get bored or tired. A machine is great at doing things
 that people are terrible at doing and vice versa. There is much more
 evidence to suggest that human experience is the polar opposite of
 mechanism than that it could be defined by mechanism.

 So what is a human being, if not a (very complicated,
 molecular-component-containing) machine? (Or is machine being
 defined in a specialised sense here?)


 A human being is the collective self experience received during the
 phenomenon known as a human lifetime. The body is only one aspect of that
 experience - a reflection defined as a familiar body in the context of its
 own perception.


 That's cool, but if the body is a (complicated, etc) machine, then either
 those experiences are part of the machine, or they're something else. If
 they're part of the machine then you're wrong in some of the above-quoted
 statements (and you contradicted yourself by saying that a machine doesn't
 grow from a cell, by the way) If it's something else, then - depending on
 the nature of that something else - it's possible that other things have
 it, and we don't recognise the fact. It would be important to know what
 that something else is before one can construct an argument. (For example,
 I believe Bruno thinks the something else is an infinite sheaf of
 computations.)


 Have you considered that it might be the body which is part of a sheaf of
 experiences?


Since Bruno started trying to explain comp to me, I have indeed considered
that. It could be, for example, via the mechanism you mentioned in your
previous post:

Bruno considers arithmetic to be behind mechanism, mechanism to be behind
 awareness, and awareness to be behind physics.




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-16 Thread Craig Weinberg


On Wednesday, October 16, 2013 11:18:39 PM UTC-4, Liz R wrote:

 On 17 October 2013 16:12, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, October 15, 2013 9:11:02 PM UTC-4, Liz R wrote:

 On 16 October 2013 14:05, Craig Weinberg whats...@gmail.com wrote:



 On Tuesday, October 15, 2013 8:51:17 PM UTC-4, Liz R wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but 
 that does not mean that a human experience can be created from the 
 outside 
 in. That's what all of these points are about - a machine does not build 
 itself from a single reproducing cell. A machine does not care what it 
 is 
 doing, it doesn't get bored or tired. A machine is great at doing things 
 that people are terrible at doing and vice versa. There is much more 
 evidence to suggest that human experience is the polar opposite of 
 mechanism than that it could be defined by mechanism.

 So what is a human being, if not a (very complicated, 
 molecular-component-containing) machine? (Or is machine being 
 defined in a specialised sense here?) 
  

 A human being is the collective self experience received during the 
 phenomenon known as a human lifetime. The body is only one aspect of that 
 experience - a reflection defined as a familiar body in the context of its 
 own perception.


 That's cool, but if the body is a (complicated, etc) machine, then 
 either those experiences are part of the machine, or they're something 
 else. If they're part of the machine then you're wrong in some of the 
 above-quoted statements (and you contradicted yourself by saying that a 
 machine doesn't grow from a cell, by the way) If it's something else, then 
 - depending on the nature of that something else - it's possible that other 
 things have it, and we don't recognise the fact. It would be important to 
 know what that something else is before one can construct an argument. (For 
 example, I believe Bruno thinks the something else is an infinite sheaf 
 of computations.)


 Have you considered that it might be the body which is part of a sheaf of 
 experiences? 


 Since Bruno started trying to explain comp to me, I have indeed considered 
 that. It could be, for example, via the mechanism you mentioned in your 
 previous post:

 Bruno considers arithmetic to be behind mechanism, mechanism to be behind 
 awareness, and awareness to be behind physics.


I would have agreed with Bruno completely a few years ago, but since then I 
think that it makes more sense that arithmetic is a kind of sense than that 
sense could be a kind of arithmetic. I think that mechanism is a kind of 
arithmetic and arithmetic is a kind of sense, as is private awareness a 
kind of sense. 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Bruno Marchal


On 14 Oct 2013, at 22:04, Craig Weinberg wrote:




On Monday, October 14, 2013 3:17:06 PM UTC-4, Bruno Marchal wrote:

On 14 Oct 2013, at 20:13, Craig Weinberg wrote:




On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:


All object are conscious?

No objects are conscious.


We agree on this.








Not at all. It is here and now. I have already interviewed such  
machines.


Are there any such machines available to interview online?


I can give you the code in Lisp, and it is up to you to find a good  
free Lisp. But don't mind too much; AUDA is an integral description  
of the interview. Today, such interviews are done with paper and  
pencil, and appear in books and papers.
You better buy Boolos 1979, or 1993, but you have to study more  
logic too.


Doesn't it seem odd that there isn't much out there that is newer  
than 20 years old,


That is simply wrong, and I don't see why you say that. But even if  
that was true, that would prove nothing.




and that paper and pencils are the preferred instruments?


Maybe I was premature in saying it was promissory... it would appear  
that there has not been any promise for it in quite some time.






It is almost applicable, but the hard part is that it is blind to  
its own blindness, so that the certainty offered by mathematics  
comes at a cost which mathematics has no choice but to deny  
completely. Because mathematics cannot lie,


G* proves []f

Even Peano Arithmetic can lie.
Mathematical theories (set of beliefs) can lie.

Only truth cannot lie, but nobody knows the truth as such.

 Something that is a paradox or inconsistent is not the same thing  
as an intentional attempt to deceive. I'm not sure what 'G* proves  
[]f' means but I think it will mean the same thing to anyone who  
understands it, and not something different to the boss than it  
does to the neighbor.


Actually it will have as much meaning as there are correct machines  
(a lot), but the laws remain the same. Then adding the non-monotonic  
umbrella, saving the Löbian machines from the constant mistakes and  
lies they make, provides different interpretations of []f, like


I dream,
I die,
I get mad,
I am in a cul-de-sac
I get wrong

etc.

It will depend on the intensional nuances in play.
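One standard way to render the gap Bruno is gesturing at, in the provability-logic notation he uses (a sketch on my part: G is Solovay's logic of what a correct machine can prove about its own provability, and G* the logic of what is true about it):

\[
G \nvdash \neg\Box\bot \qquad \text{whereas} \qquad G^{*} \vdash \neg\Box\bot
\]

A consistent machine cannot prove its own consistency (Gödel's second incompleteness theorem), yet its consistency is true. A "lie" such as []f is false, but it is consistent with everything the machine can prove, which is the sense in which even Peano Arithmetic "can lie" without being caught from inside.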

Couldn't the machine output the same product as musical notes or  
colored pixels instead?


Why not. Humans can do that too.












it cannot intentionally tell the truth either, and no matter how  
sophisticated and self-referential a logic it is based on, it can  
never transcend its own alienation from feeling, physics, and  
authenticity.


That is correct, but again, that is justifiable by all correct  
sufficiently rich machines.


Not sure I understand. Are you saying that we, as rich machines,  
cannot intentionally lie or tell the truth either?


No, I am saying that all correct machines can eventually justify  
that if they are correct they can't express it, and if they are  
consistent, it is consistent that they are wrong. So it means they  
can eventually exploit the false locally. Teams of universal numbers  
get entangled in very subtle prisoner's dilemmas.

Universal machines can lie, and can crash.

That sounds like they can lie only when they calculate that they  
must, not that they can lie intentionally because they enjoy it or  
out of sadism.


That sounds like an opportunistic inference.

Bruno




Craig


Bruno



http://iridia.ulb.ac.be/~marchal/






http://iridia.ulb.ac.be/~marchal/





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:




 On Mon, Oct 14, 2013 at 9:59 PM, Craig Weinberg whats...@gmail.com wrote:



 On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:




 On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:




 On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

  On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they 
 can't experience anything. Mozart could dig a hole as well as compose 
 music, but that doesn't mean that a backhoe with a player piano on it 
 is 
 Mozart. It's a much deeper problem with how machines are conceptualized 
 that has nothing at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers 
 experience anything, in principle, given that people can, and assuming 
 people are complicated machines?
  


 I think Craig would say he does think computers (and many/all other 
 things) do experience something,


 You're half right. I would say:

 1. All experiences correspond to some natural thing.
 2. Not all things are natural things. Bugs Bunny has no independent 
 experience, and neither does Pinocchio. 
 3. Computers are made of natural things but, like all machines, are 
 ultimately assembled unnaturally.
 4. The natural things that machines are made of would have to be very 
 low level, i.e., not gears but the molecules that make up the gears.

 Unless a machine used living organisms, molecules would probably be the 
 only natural things which an experience would be associated with. They 
 don't know that they are part of a machine, but there is probably an 
 experience that corresponds to thermodynamic and electromagnetic 
 conditions. Experiences on that level may not be proprietary to any 
 particular molecule - it could be very exotic, who knows. Maybe every atom 
 of the same structure represents the same kind of experience on some 
 radically different time scale from ours. 

 It's not really important - the main thing is to see how there is no 
 substitute for experience and a machine which is assembled from unrelated 
 parts has no experience and cannot gain new experience in an alien context.

 I think that a machine (or any inanimate object or symbol) can also 
 serve as a vehicle for synchronicity. That's a completely different thing 
 because it is the super-personal, holistic end of the sensible spectrum, 
 not the sub-personal, granular end. The creepiness of a ventriloquist 
 dummy 
 is in our imagination, but that too is 'real' in an absolute sense. If 
 your 
 life takes you on a path which tempts you to believe that machines are 
 conscious, then the super-personal lensing of your life will stack the 
 deck 
 just enough to let you jump to those conclusions. It's what we would call 
 supernatural or coincidental, depending on which lens we use to define 
 it..  
  http://s33light.org/post/62173912616 
   
 (Don't you want to have a body?)


 After reading this ( http://marshallbrain.com/discard1.htm ) I am not so 
 sure...
  

  

  just that it is necessarily different from what we experience. The 
 reason for this has something to do with our history as biological 
 organisms (according to his theory).


 Right, although not necessarily just biological history, it could be 
 chemical too. We may have branched off from anything that could be made 
 into a useful machine (servant to alien agendas) long before life on Earth.


 What if humanity left behind a nano-technology that eventually evolved 
 into mechanical organisms like dogs and fish, would they have animal like 
 experiences despite that they descended from unnatural things?


 The thing that makes sense to me is that the richness of sensation and 
 intention are inversely proportionate to the degree to which a phenomenon 
 can be controlled from the outside. If we put nano-tech extensions on some 
 living organism, then sure, the organism could learn how to use those 
 extensions and evolve a symbiotic post-biology. I don't think that project 
 would be controllable though. They would not be machines in the sense that 
 they would not necessarily be of service to those who created them. 



 Craig,

 Thanks for your answer.  That was not quite what I was asking though.  
 Let's say the nano-tech did not extend some living organism, but were some 
 entirely autonomous, entirely artificial  cell-like structures, which could 
 find and utilize energy sources in the environment and reproduce 
 themselves.  Let's say after millions (or billions) of years, these 
 self-replicating nanobots evolved into multi-cellular organisms like 
 animals we are familiar with today. Could they have experiences like other 
 biological creatures that have a biological lineage? 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Jason Resch



On Oct 15, 2013, at 7:26 AM, Craig Weinberg whatsons...@gmail.com  
wrote:





On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:



On Mon, Oct 14, 2013 at 9:59 PM, Craig Weinberg whats...@gmail.com  
wrote:



On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:



On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg  
whats...@gmail.com wrote:



On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:



On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:
It's not that computers can't do what humans do, it's that they  
can't experience anything. Mozart could dig a hole as well as  
compose music, but that doesn't mean that a backhoe with a player  
piano on it is Mozart. It's a much deeper problem with how machines  
are conceptualized that has nothing at all to do with humans.


So you think strong AI is wrong. OK. But why can't computers  
experience anything, in principle, given that people can, and  
assuming people are complicated machines?



I think Craig would say he does think computers (and many/all other  
things) do experience something,


You're half right. I would say:

1. All experiences correspond to some natural thing.
2. Not all things are natural things. Bugs Bunny has no independent  
experience, and neither does Pinocchio.
3. Computers are made of natural things but, like all machines, are  
ultimately assembled unnaturally.
4. The natural things that machines are made of would have to be  
very low level, i.e., not gears but the molecules that make up the  
gears.


Unless a machine used living organisms, molecules would probably be  
the only natural things which an experience would be associated  
with. They don't know that they are part of a machine, but there is  
probably an experience that corresponds to thermodynamic and  
electromagnetic conditions. Experiences on that level may not be  
proprietary to any particular molecule - it could be very exotic,  
who knows. Maybe every atom of the same structure represents the  
same kind of experience on some radically different time scale from  
ours.


It's not really important - the main thing is to see how there is no  
substitute for experience and a machine which is assembled from  
unrelated parts has no experience and cannot gain new experience in  
an alien context.


I think that a machine (or any inanimate object or symbol) can also  
serve as a vehicle for synchronicity. That's a completely different  
thing because it is the super-personal, holistic end of the sensible  
spectrum, not the sub-personal, granular end. The creepiness of a  
ventriloquist dummy is in our imagination, but that too is 'real' in  
an absolute sense. If your life takes you on a path which tempts you  
to believe that machines are conscious, then the super-personal  
lensing of your life will stack the deck just enough to let you jump  
to those conclusions. It's what we would call supernatural or  
coincidental, depending on which lens we use to define it..  http://s33light.org/post/62173912616 
  (Don't you want to have a body?)


After reading this ( http://marshallbrain.com/discard1.htm ) I am  
not so sure...



just that it is necessarily different from what we experience. The  
reason for this has something to do with our history as biological  
organisms (according to his theory).


Right, although not necessarily just biological history, it could be  
chemical too. We may have branched off from anything that could be  
made into a useful machine (servant to alien agendas) long before  
life on Earth.



What if humanity left behind a nano-technology that eventually  
evolved into mechanical organisms like dogs and fish, would they  
have animal like experiences despite that they descended from  
unnatural things?


The thing that makes sense to me is that the richness of sensation  
and intention are inversely proportionate to the degree to which a  
phenomenon can be controlled from the outside. If we put nano-tech  
extensions on some living organism, then sure, the organism could  
learn how to use those extensions and evolve a symbiotic post- 
biology. I don't think that project would be controllable though.  
They would not be machines in the sense that they would not  
necessarily be of service to those who created them.



Craig,

Thanks for your answer.  That was not quite what I was asking  
though.  Let's say the nano-tech did not extend some living  
organism, but were some entirely autonomous, entirely artificial   
cell-like structures, which could find and utilize energy sources in  
the environment and reproduce themselves.  Let's say after millions  
(or billions) of years, these self-replicating nanobots evolved into  
multi-cellular organisms like animals we are familiar with today.  
Could they have experiences like other biological creatures that  
have a biological lineage? If not, why not?


No, I don't think that 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread meekerdb

On 10/15/2013 12:59 PM, Jason Resch wrote:
8. an organism which emerges spontaneously from Boltzmann conditions in the environment 
rather than seeded inheritance


Like the first RNA replicators on Earth.

Brent

--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 01:26, Craig Weinberg whatsons...@gmail.com wrote:


 On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:



 Thanks for your answer.  That was not quite what I was asking though.
 Let's say the nano-tech did not extend some living organism, but were some
 entirely autonomous, entirely artificial  cell-like structures, which could
 find and utilize energy sources in the environment and reproduce
 themselves.  Let's say after millions (or billions) of years, these
 self-replicating nanobots evolved into multi-cellular organisms like
 animals we are familiar with today. Could they have experiences like other
 biological creatures that have a biological lineage? If not, why not?


 No, I don't think that they could have experiences like biological
 creatures. If they could, then we *should *probably see at least one
 example of


Excuse me for butting in, but I'm not sure what "should" means here. Are
you saying these things should *already* exist? But the original suggestion
was about future technology... though I can't see what else you could mean.


 1. a natural occurrence of inorganic biology


Why would it occur naturally, when organic biology has done so, and
presumably used up all the food sources that might be available?


 2. an organism which can survive only on inorganic nutrients


???


 3. a successful experiment to create life from basic molecules


Arguably the biosphere counts as this, presumably not an intentional
experiment.


 4. a machine which seems to feel, care, and have a unique and unrepeatable
 personal presence


Arguably a human being is one of these


 5. a mechanized process which produces artifacts that seem handmade and
 unique
 6. two separate bodies who are the same person
 7. an organism which reproduces by transforming its environment rather
 than reproducing by cell division


This seems to me to have gone completely off the point.


 8. an organism which emerges spontaneously from Boltzmann conditions in
 the environment rather than seeded inheritance


What?!? (He said billions of years, not googolplexes...!)


 9. an event or observation which leads us to conclude that gathering
 energy and reproduction are sufficient to constitute bio-quality awareness.

 I don't understand that sentence.

I may be missing something here but I believe the question is whether
machines can have experiences. Isn't a human being a machine that has
experiences?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 08:59, Jason Resch jasonre...@gmail.com wrote:


 7. an organism which reproduces by transforming its environment rather
 than reproducing by cell division


 Bruno said cigarettes might qualify as such life forms.

 Viruses, surely?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Jason Resch



On Oct 15, 2013, at 5:52 PM, LizR lizj...@gmail.com wrote:


On 16 October 2013 08:59, Jason Resch jasonre...@gmail.com wrote:

7. an organism which reproduces by transforming its environment  
rather than reproducing by cell division


Bruno said cigarettes might qualify as such life forms.

Viruses, surely?




Yes that's a much better example.

Jason




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Tuesday, October 15, 2013 3:59:33 PM UTC-4, Jason wrote:



 On Oct 15, 2013, at 7:26 AM, Craig Weinberg whats...@gmail.comjavascript: 
 wrote:



 On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:




 On Mon, Oct 14, 2013 at 9:59 PM, Craig Weinberg whats...@gmail.comwrote:



 On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:




 On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg whats...@gmail.comwrote:



 On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:




 On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

  On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they 
 can't experience anything. Mozart could dig a hole as well as compose 
 music, but that doesn't mean that a backhoe with a player piano on it 
 is 
 Mozart. It's a much deeper problem with how machines are 
 conceptualized 
 that has nothing at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers 
 experience anything, in principle, given that people can, and assuming 
 people are complicated machines?
  


 I think Craig would say he does think computers (and many/all other 
 things) do experience something,


 You're half right. I would say:

 1. All experiences correspond to some natural thing.
 2. Not all things are natural things. Bugs Bunny has no independent 
 experience, and neither does Pinocchio. 
 3. Computers are made of natural things but, like all machines, are 
 ultimately assembled unnaturally.
 4. The natural things that machines are made of would have to be very 
 low level, i.e., not gears but the molecules that make up the gears.

 Unless a machine used living organisms, molecules would probably be 
 the only natural things which an experience would be associated with. 
 They 
 don't know that they are part of a machine, but there is probably an 
 experience that corresponds to thermodynamic and electromagnetic 
 conditions. Experiences on that level may not be proprietary to any 
 particular molecule - it could be very exotic, who knows. Maybe every 
 atom 
 of the same structure represents the same kind of experience on some 
 radically different time scale from ours. 

 It's not really important - the main thing is to see how there is no 
 substitute for experience and a machine which is assembled from unrelated 
 parts has no experience and cannot gain new experience in an alien 
 context.

 I think that a machine (or any inanimate object or symbol) can also 
 serve as a vehicle for synchronicity. That's a completely different thing 
 because it is the super-personal, holistic end of the sensible spectrum, 
 not the sub-personal, granular end. The creepiness of a ventriloquist 
 dummy 
 is in our imagination, but that too is 'real' in an absolute sense. If 
 your 
 life takes you on a path which tempts you to believe that machines are 
 conscious, then the super-personal lensing of your life will stack the 
 deck 
 just enough to let you jump to those conclusions. It's what we would call 
 supernatural or coincidental, depending on which lens we use to define 
 it. http://s33light.org/post/62173912616 (Don't you want to have a body?)


 After reading this ( http://marshallbrain.com/discard1.htm ) I am not so sure...
  

  

  just that it is necessarily different from what we experience. The 
 reason for this has something to do with our history as biological 
 organisms (according to his theory).


 Right, although not necessarily just biological history, it could be 
 chemical too. We may have branched off from anything that could be made 
 into a useful machine (servant to alien agendas) long before life on 
 Earth.


 What if humanity left behind a nano-technology that eventually evolved 
 into mechanical organisms like dogs and fish, would they have animal like 
 experiences despite that they descended from unnatural things?


 The thing that makes sense to me is that the richness of sensation and 
 intention are inversely proportionate to the degree to which a phenomenon 
 can be controlled from the outside. If we put nano-tech extensions on some 
 living organism, then sure, the organism could learn how to use those 
 extensions and evolve a symbiotic post-biology. I don't think that project 
 would be controllable though. They would not be machines in the sense that 
 they would not necessarily be of service to those who created them. 



 Craig,

 Thanks for your answer.  That was not quite what I was asking though.  
 Let's say the nano-tech did not extend some living organism, but were some 
 entirely autonomous, entirely artificial  cell-like structures, which could 
 find and utilize energy sources in the environment and reproduce 
 themselves.  Let's say after millions (or billions) of years, these 
 self-replicating nanobots evolved into multi-cellular organisms 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 13:30, Craig Weinberg whatsons...@gmail.com wrote:


 All that we know for sure is that there does not seem to be a single
 example of an inorganic species now, nor does there seem to be a single
 example from the fossil record. It doesn't mean that conscious machines
 cannot evolve, but since it appears that they have not so far, we should
 not, scientifically speaking, give it the benefit of the doubt.

 I thought the default stance of science was that they did evolve, and
here we are.



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Tuesday, October 15, 2013 6:50:53 PM UTC-4, Liz R wrote:

 On 16 October 2013 01:26, Craig Weinberg whats...@gmail.com javascript:
  wrote:


 On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:

  

 Thanks for your answer.  That was not quite what I was asking though.  
 Let's say the nano-tech did not extend some living organism, but were some 
 entirely autonomous, entirely artificial  cell-like structures, which could 
 find and utilize energy sources in the environment and reproduce 
 themselves.  Let's say after millions (or billions) of years, these 
 self-replicating nanobots evolved into multi-cellular organisms like 
 animals we are familiar with today. Could they have experiences like other 
 biological creatures that have a biological lineage? If not, why not?


 No, I don't think that they could have experiences like biological 
 creatures. If they could, then we *should* probably see at least one 
 example of 


 Excuse me for butting in, but I'm not sure what "should" means here. Are 
 you saying these things should *already* exist? But the original 
 suggestion was about future technology... though I can't see what else you 
 could mean.


 1. a natural occurrence of inorganic biology


 Why would it occur naturally, when organic biology has done so, and 
 presumably used up all the food sources that might be available?


If inorganic biology were possible, shouldn't it use inorganic food sources?
 

  

 2. an organism which can survive only on inorganic nutrients


 ???


A bird that can live on rocks, etc.
 

  

 3. a successful experiment to create life from basic molecules

  
 Arguably the biosphere counts as this, presumably not an intentional 
 experiment.


That's begging the question. We don't know that abiogenesis is a fact, or, 
if it was, we don't know that it could recur. Our experiments thus far have 
not supported the idea that biological life can be created again.
 

  

 4. a machine which seems to feel, care, and have a unique and 
 unrepeatable personal presence


 Arguably a human being is one of these


It's begging the question. I'm saying people are not like machines, 
because people are all unique but machines are not. You can't use that 
fact to claim that people are representative of machines, and then 
therefore that machines can be like people.  If I said oil and water don't 
mix, you can't say 'arguably oil is a type of water'.
 

  

 5. a mechanized process which produces artifacts that seem handmade and 
 unique
 6. two separate bodies who are the same person
 7. an organism which reproduces by transforming its environment rather 
 than reproducing by cell division


 This seems to me to have gone completely off the point.


I would need you to explain more of what you mean.
 

  

 8. an organism which emerges spontaneously from Boltzmann conditions in 
 the environment rather than seeded inheritance


 What?!? (He said billions of years, not googolplexes...!)


I didn't say Boltzmann brain, just a Boltzmann organism.
 

  

 9. an event or observation which leads us to conclude that gathering 
 energy and reproduction are sufficient to constitute bio-quality awareness.

 I don't understand that sentence. 


The whole basis of computationalism hinges on the assumption that acting 
like you are alive is the same as being alive, which I think is 
demonstrably false. We know for a fact that something that is not alive can 
seem like it is. We know that a machine can produce strings of language 
that carry no meaning for it. So what is it, other than pure blue-sky 
wishful thinking, that leads us to conclude that moving a puppet around in 
the right way is going to bring Pinocchio to life?
 


 I may be missing something here but I believe the question is whether 
 machines can have experiences. Isn't a human being a machine that has 
 experiences?


No, that's begging the question. A human body may be a machine, but that 
does not mean that a human experience can be created from the outside in. 
That's what all of these points are about - a machine does not build itself 
from a single reproducing cell. A machine does not care what it is doing, 
it doesn't get bored or tired. A machine is great at doing things that 
people are terrible at doing and vice versa. There is much more evidence to 
suggest that human experience is the polar opposite of mechanism than that 
it could be defined by mechanism.

Thanks,
Craig 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 13:48, Craig Weinberg whatsons...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but that
 does not mean that a human experience can be created from the outside in.
 That's what all of these points are about - a machine does not build itself
 from a single reproducing cell. A machine does not care what it is doing,
 it doesn't get bored or tired. A machine is great at doing things that
 people are terrible at doing and vice versa. There is much more evidence to
 suggest that human experience is the polar opposite of mechanism than that
 it could be defined by mechanism.

 So what is a human being, if not a (very complicated,
molecular-component-containing) machine? (Or is machine being defined in
a specialised sense here?)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
Sorry I should have added... your statement A human body may be a machine
contradicts a machine does not build itself from a single reproducing
cell. A machine does not care what it is doing, it doesn't get bored or
tired - unless a human being is not the same thing as a human body, of
course. Is that the point?




On 16 October 2013 13:51, LizR lizj...@gmail.com wrote:

 On 16 October 2013 13:48, Craig Weinberg whatsons...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but that
 does not mean that a human experience can be created from the outside in.
 That's what all of these points are about - a machine does not build itself
 from a single reproducing cell. A machine does not care what it is doing,
 it doesn't get bored or tired. A machine is great at doing things that
 people are terrible at doing and vice versa. There is much more evidence to
 suggest that human experience is the polar opposite of mechanism than that
 it could be defined by mechanism.

 So what is a human being, if not a (very complicated,
 molecular-component-containing) machine? (Or is machine being defined in
 a specialised sense here?)





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Tuesday, October 15, 2013 3:45:38 AM UTC-4, Bruno Marchal wrote:


 On 14 Oct 2013, at 22:04, Craig Weinberg wrote:



 On Monday, October 14, 2013 3:17:06 PM UTC-4, Bruno Marchal wrote:


 On 14 Oct 2013, at 20:13, Craig Weinberg wrote:



 On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:



 All object are conscious?


 No objects are conscious.


 We agree on this.


  





 Not at all. It is here and now. I have already interviewed such machines. 


 Are there any such machines available to interview online?


 I can give you the code in Lisp, and it is up to you to find a good free 
 Lisp. But don't mind too much; AUDA is an integral description of the 
 interview. Today, such interviews are done with paper and pencil, and appear 
 in books and papers.
 You'd better buy Boolos 1979, or 1993, but you have to study more logic too.


 Doesn't it seem odd that there isn't much out there that is newer than 20 
 years old, 


 That is simply wrong, and I don't see why you say that. But even if that 
 was true, that would prove nothing.


It still seems odd. There are a lot of good programmers out there. If this 
is the frontier of machine intelligence, where is the interest? Not saying 
it proves something, but it doesn't instill much confidence that this is as 
fertile an area as you imply.
 



 and that paper and pencils are the preferred instruments?


 Maybe I was premature in saying it was promissory... it would appear that 
 there has not been any promise for it in quite some time.
  




 It is almost applicable, but the hard part is that it is blind to its 
 own blindness, so that the certainty offered by mathematics comes at a cost 
 which mathematics has no choice but to deny completely. Because mathematics 
 cannot lie, 


 G* proves []f

 Even Peano Arithmetic can lie.  
 Mathematical theories (set of beliefs) can lie.

 Only truth cannot lie, but nobody knows the truth as such.


  Something that is a paradox or inconsistent is not the same thing as an 
 intentional attempt to deceive. I'm not sure what 'G* proves []f' means 
 but I think it will mean the same thing to anyone who understands it, and 
 not something different to the boss than it does to the neighbor.


 Actually it will have as much meaning as there are correct machines (a 
 lot), but the laws remain the same. Then adding the non-monotonic 
 umbrella, saving the Löbian machines from the constant mistakes and lies 
 they make, provides different interpretations of []f, like

 I dream,
 I die,
 I get mad,
 I am in a cul-de-sac
 I get wrong

 etc.

 It will depend on the intensional nuances in play.
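
[Editor's note: for readers outside modal logic, one standard gloss of the []f notation — Gödel-Löb provability logic, offered here as general background rather than as Bruno's exact intent — reads [] as "the machine proves" and f as falsity:]

```latex
% \Box = ``the machine proves'', f = \bot = falsity.
% G  = the logic of what the machine can prove about its own provability;
% G* = the logic of what is true about it.
\begin{align*}
&\Box(\Box p \to p) \to \Box p
  && \text{(L\"ob's axiom: holds in both $G$ and $G^{*}$)}\\
&\neg\Box\bot
  && \text{(consistency: true, so in $G^{*}$, but unprovable, so not in $G$)}\\
&\Diamond\Box\bot
  && \text{(G\"odel II: the machine cannot rule out proving $\bot$ --- it can ``lie'')}
\end{align*}
```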


 Couldn't the machine output the same product as musical notes or colored 
 pixels instead?


 Why not. Humans can do that too.


If I asked a person to turn some data into music or art, no two people 
would agree on what that output would be and no person's output would be 
decipherable as input to another person. Computers, on the other hand, 
would automatically be able to reverse any kind of i/o in the same way. One 
computer could play a file as a song, and another could make a graphic file 
out of the audio line-out data, which would be fully reversible to the 
original binary file.
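
[Editor's note: the reversibility claim is easy to check concretely. A minimal Python sketch, where the 16-bit "audio" framing and the 32-byte "image" rows are hypothetical illustrations rather than any real file format:]

```python
# The same byte string can be viewed as audio samples or as image rows,
# and either view converts back to the original bytes without loss.
import struct

data = bytes(range(256)) * 4  # arbitrary binary payload (stand-in for a file)

# View 1: interpret as little-endian signed 16-bit "audio" samples
fmt = "<" + "h" * (len(data) // 2)
samples = struct.unpack(fmt, data)
back_from_audio = struct.pack(fmt, *samples)

# View 2: interpret as 32-byte rows of a grayscale "image"
rows = [data[i * 32:(i + 1) * 32] for i in range(len(data) // 32)]
back_from_image = b"".join(rows)

assert back_from_audio == data  # audio view round-trips losslessly
assert back_from_image == data  # image view round-trips losslessly
```

Both reinterpretations are pure re-framings of the bit string, which is the sense in which one computer's "song" can be another computer's "picture" and back again.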




  







 it cannot intentionally tell the truth either, and no matter how 
 sophisticated and self-referential a logic it is based on, it can never 
 transcend its own alienation from feeling, physics, and authenticity. 


 That is correct, but again, that is justifiable by all correct 
 sufficiently rich machines.


 Not sure I understand. Are you saying that we, as rich machines, cannot 
 intentionally lie or tell the truth either?


 No, I am saying that all correct machines can eventually justify that if 
 they are correct they can't express it, and that if they are consistent, it 
 is consistent that they are wrong. So it means they can eventually exploit 
 the false locally. Teams of universal numbers get entangled in very subtle 
 prisoner's dilemmas. 
 Universal machines can lie, and can crash.


 That sounds like they can lie only when they calculate that they must, not 
 that they can lie intentionally because they enjoy it or out of sadism.


 That sounds like an opportunistic inference.


I think that computationalism maintains the illusion of legitimacy on basis 
of seducing us to play only by its rules. It says that we must give the 
undead a chance to be alive - that we cannot know for sure whether a 
machine is not at least as worthy of our love as a newborn baby. To fight 
this seduction, we must use what is our birthright as living beings. We can 
be opportunistic, we can cheat, and lie, and unplug machines whenever we 
want, because that is what makes us superior to recorded logic. We are 
alive, so we get to do whatever we want to that which is not alive.

Craig
 


 Bruno



 Craig
  


 Bruno



 http://iridia.ulb.ac.be/~marchal/




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Tuesday, October 15, 2013 8:51:17 PM UTC-4, Liz R wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.com javascript:
  wrote:


 No, that's begging the question. A human body may be a machine, but that 
 does not mean that a human experience can be created from the outside in. 
 That's what all of these points are about - a machine does not build itself 
 from a single reproducing cell. A machine does not care what it is doing, 
 it doesn't get bored or tired. A machine is great at doing things that 
 people are terrible at doing and vice versa. There is much more evidence to 
 suggest that human experience is the polar opposite of mechanism than that 
 it could be defined by mechanism.

 So what is a human being, if not a (very complicated, 
 molecular-component-containing) machine? (Or is machine being defined in 
 a specialised sense here?) 


A human being is the collective self experience received during the 
phenomenon known as a human lifetime. The body is only one aspect of that 
experience - a reflection defined as a familiar body in the context of its 
own perception.
 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Craig Weinberg


On Tuesday, October 15, 2013 8:52:48 PM UTC-4, Liz R wrote:

 Sorry I should have added... your statement A human body may be a 
 machine contradicts a machine does not build itself from a single 
 reproducing cell. A machine does not care what it is doing, it doesn't get 
 bored or tired - unless a human being is not the same thing as a human 
 body, of course. Is that the point?


Right, a human body is not the same thing as a human being. A human body is 
still a body after the human ceases being. Not because there is an 
immaterial spirit, but because the entire universe is a nested experience 
and the body is more about experiences on the cellular and molecular level 
than it is about individual lifetimes.

Craig
 





 On 16 October 2013 13:51, LizR liz...@gmail.com javascript: wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.comjavascript:
  wrote:


 No, that's begging the question. A human body may be a machine, but that 
 does not mean that a human experience can be created from the outside in. 
 That's what all of these points are about - a machine does not build itself 
 from a single reproducing cell. A machine does not care what it is doing, 
 it doesn't get bored or tired. A machine is great at doing things that 
 people are terrible at doing and vice versa. There is much more evidence to 
 suggest that human experience is the polar opposite of mechanism than that 
 it could be defined by mechanism.

 So what is a human being, if not a (very complicated, 
 molecular-component-containing) machine? (Or is machine being defined in 
 a specialised sense here?) 

  




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 14:05, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, October 15, 2013 8:51:17 PM UTC-4, Liz R wrote:

 On 16 October 2013 13:48, Craig Weinberg whats...@gmail.com wrote:


 No, that's begging the question. A human body may be a machine, but that
 does not mean that a human experience can be created from the outside in.
 That's what all of these points are about - a machine does not build itself
 from a single reproducing cell. A machine does not care what it is doing,
 it doesn't get bored or tired. A machine is great at doing things that
 people are terrible at doing and vice versa. There is much more evidence to
 suggest that human experience is the polar opposite of mechanism than that
 it could be defined by mechanism.

 So what is a human being, if not a (very complicated,
 molecular-component-containing) machine? (Or is machine being
 defined in a specialised sense here?)


 A human being is the collective self experience received during the
 phenomenon known as a human lifetime. The body is only one aspect of that
 experience - a reflection defined as a familiar body in the context of its
 own perception.


That's cool, but if the body is a (complicated, etc) machine, then either
those experiences are part of the machine, or they're something else. If
they're part of the machine then you're wrong in some of the above-quoted
statements (and you contradicted yourself by saying that a machine doesn't
grow from a cell, by the way.) If it's something else, then - depending on
the nature of that something else - it's possible that other things have
it, and we don't recognise the fact. It would be important to know what
that something else is before one can construct an argument. (For example,
I believe Bruno thinks the something else is an infinite sheaf of
computations.)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread LizR
On 16 October 2013 14:09, Craig Weinberg whatsons...@gmail.com wrote:



 On Tuesday, October 15, 2013 8:52:48 PM UTC-4, Liz R wrote:

 Sorry I should have added... your statement A human body may be a
 machine contradicts a machine does not build itself from a single
 reproducing cell. A machine does not care what it is doing, it doesn't get
 bored or tired - unless a human being is not the same thing as a human
 body, of course. Is that the point?


 Right, a human body is not the same thing as a human being. A human body
 is still a body after the human ceases being. Not because there is an
 immaterial spirit, but because the entire universe is a nested experience
 and the body is more about experiences on the cellular and molecular level
 than it is about individual lifetimes.


Now you've lost me. Is a nested experience anything like Max Tegmark's
self-aware subsystems?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-15 Thread Jason Resch
On Tue, Oct 15, 2013 at 7:30 PM, Craig Weinberg whatsons...@gmail.comwrote:



 On Tuesday, October 15, 2013 3:59:33 PM UTC-4, Jason wrote:



 On Oct 15, 2013, at 7:26 AM, Craig Weinberg whats...@gmail.com wrote:



 On Monday, October 14, 2013 11:14:36 PM UTC-4, Jason wrote:




 On Mon, Oct 14, 2013 at 9:59 PM, Craig Weinberg whats...@gmail.comwrote:



 On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:




 On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg 
 whats...@gmail.comwrote:



 On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:




 On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

  On 10 October 2013 09:47, Craig Weinberg whats...@gmail.comwrote:

 It's not that computers can't do what humans do, it's that they
 can't experience anything. Mozart could dig a hole as well as compose
 music, but that doesn't mean that a backhoe with a player piano on it 
 is
 Mozart. It's a much deeper problem with how machines are 
 conceptualized
 that has nothing at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers
 experience anything, in principle, given that people can, and assuming
 people are complicated machines?



 I think Craig would say he does think computers (and many/all other
 things) do experience something,


 You're half right. I would say:

 1. All experiences correspond to some natural thing.
 2. Not all things are natural things. Bugs Bunny has no independent
 experience, and neither does Pinocchio.
 3. Computers are made of natural things but, like all machines, are
 ultimately assembled unnaturally.
 4. The natural things that machines are made of would have to be very
 low level, i.e., not gears but the molecules that make up the gears.

 Unless a machine used living organisms, molecules would probably be
 the only natural things which an experience would be associated with. 
 They
 don't know that they are part of a machine, but there is probably an
 experience that corresponds to thermodynamic and electromagnetic
 conditions. Experiences on that level may not be proprietary to any
 particular molecule - it could be very exotic, who knows. Maybe every 
 atom
 of the same structure represents the same kind of experience on some
 radically different time scale from ours.

 It's not really important - the main thing is to see how there is no
 substitute for experience and a machine which is assembled from unrelated
 parts has no experience and cannot gain new experience in an alien 
 context.

 I think that a machine (or any inanimate object or symbol) can also
 serve as a vehicle for synchronicity. That's a completely different thing
 because it is the super-personal, holistic end of the sensible spectrum,
 not the sub-personal, granular end. The creepiness of a ventriloquist 
 dummy
 is in our imagination, but that too is 'real' in an absolute sense. If 
 your
 life takes you on a path which tempts you to believe that machines are
 conscious, then the super-personal lensing of your life will stack the 
 deck
 just enough to let you jump to those conclusions. It's what we would call
 supernatural or coincidental, depending on which lens we use to define
  it.  http://s33light.org/post/62173912616  (Don't you want to have a body?)


  After reading this ( http://marshallbrain.com/discard1.htm ) I am not so sure...




  just that it is necessarily different from what we experience. The
 reason for this has something to do with our history as biological
 organisms (according to his theory).


 Right, although not necessarily just biological history, it could be
 chemical too. We may have branched off from anything that could be made
 into a useful machine (servant to alien agendas) long before life on 
 Earth.


 What if humanity left behind a nano-technology that eventually evolved
 into mechanical organisms like dogs and fish, would they have animal like
 experiences despite that they descended from unnatural things?


 The thing that makes sense to me is that the richness of sensation and
  intention is inversely proportional to the degree to which a phenomenon
 can be controlled from the outside. If we put nano-tech extensions on some
 living organism, then sure, the organism could learn how to use those
 extensions and evolve a symbiotic post-biology. I don't think that project
 would be controllable though. They would not be machines in the sense that
 they would not necessarily be of service to those who created them.



 Craig,

 Thanks for your answer.  That was not quite what I was asking though.
 Let's say the nano-tech did not extend some living organism, but were some
 entirely autonomous, entirely artificial  cell-like structures, which could
 find and utilize energy sources in the environment and reproduce
 themselves.  Let's say after millions (or billions) of years, these
 self-replicating nanobots evolved into 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Craig Weinberg


On Sunday, October 13, 2013 5:14:00 AM UTC-4, stathisp wrote:

 On 13 October 2013 15:29, Craig Weinberg whats...@gmail.com 
 wrote: 

  Perform to whose satisfaction? A cadaver can be made to twitch, or 
  propped up to stand. 
  
  
  Perform to the satisfaction of anyone you care to nominate. A committee 
 of 
  humans examine two people who have had a haircut, one from a human and 
 the 
  other from a computer, and try to decide which is which. This is 
 repeated 
  several times. If they can't tell the difference then we say the 
 computer 
  has succeeded in cutting hair as well as a human. Is there any task you 
  think a computer will never be able to manage as well as a human in this 
 sort 
  of test? 
  
  
  What does performing tasks have to do with anything? We are talking 
 about 
  the capacity to feel, experience, and care. If you could replace your 
 hands 
  with machines that would do everything your hands could do and quite a 
 bit 
  more, but would have no feeling in them at all, would you say that the 
 robot 
  hands were just as good to you as human hands? If your tongue could 
 detect 
  any chemical in the universe accurately and provide you with precise 
  knowledge of it, but never allow you to taste any flavor or feel 
 anything 
  with your tongue again, would that be equivalent? 

 I understand that you don't think computers can have feelings, but I 
  was asking if computers can perform all tasks that a human can 
 perform, or if there are some tasks they just won't be able to do. If 
 there are, then this suggests a test for consciousness. 


I don't know that there is such a thing as 'all tasks that a human can 
perform'. Before Mozart, humans could not perform Mozart concertos. 
Anything that a computer does is actually being done by the inventors and 
programmers of the computer, plus the physics of the medium being used to 
do the computing. By themselves, computers can't do much of anything. If 
someone could invent a computer smart enough even to turn itself off when 
it is stuck, they could make a fortune.

Craig
 



 -- 
 Stathis Papaioannou 




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Craig Weinberg


On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:


 On 13 Oct 2013, at 06:40, Craig Weinberg wrote:



 On Saturday, October 12, 2013 12:27:08 PM UTC-4, Bruno Marchal wrote:


 On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com 
 wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
   http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers 
 don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating 
 in an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic 
 sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The 
 failure of digital mechanism to interface with aesthetic presence is not 
 testable unless you yourself become a digital mechanism. There can never 
 be 
 a test of aesthetic sensibility because testing is by definition 
 anesthetic. To test is to measure into a system of universal 
 representation. Measurement is the removal of presence for the purpose 
 of 
 distribution as symbol. I can draw a picture of a robot correctly 
 identifying a vegetable, but that doesn't mean that the drawing of the 
 robot is doing anything. I can make a movie of the robot cartoon, or a 
 sculpture, or an animated sculpture that has a sensor for iodine or 
 magnesium which can be correlated to a higher probability of a 
 particular 
 vegetable, but that doesn't change anything at all. There is still no 
 robot 
 except in our experience and our expectations of its experience. The 
 robot 
 is not even a zombie, it is a puppet playing back recordings of our 
 thoughts in a clever way.
  

 OK, so it would prove nothing to you if the supermarket computers did 
 a better job than the checkout chicks. Why then did you cite this 
 article?


 Because the article is consistent with my view that there is a 
 fundamental difference between quantitative tasks and aesthetic 
 awareness. 
 If there were no difference, then I would expect that the problems that 
 supermarket computers would have would not be related to its 
 unconsciousness, but to unreliability or even willfulness developing. Why 
 isn't the story Automated cashiers have begun throwing temper tantrums 
 at 
 some locations which are contagious to certain smart phones that now 
 become 
 upset in sympathy...we had anticipated this, but not so soon, yadda 
 yadda? 
 I think it's pretty clear why. For the same reason that all machines will 
 always fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and 
 sensitivity, no matter what they did.


 Beyond question, yes. I wouldn't just say it, I would bet my life on it, 
 because I understand it completely.


 Do you believe that computers can perform any task a human can perform? 
 If not, what is an example of a relatively simple task that a computer 
 could never perform? 


 I thought Craig just made clear that computers might perform as well as 
 humans, and that even in that case, he will not attribute sense and 
 aesthetic to them.
 This was already clear with my son-in-law (who got an artificial brain, 
 and who can't enjoy a good meal at his restaurant). 

 He calls them puppets, but he believes in philosophical zombies.


 I don't believe in philosophical zombies. I use puppet because a puppet 
 implies an absence of conscious presence, which is an ordinary condition of 
 macrocosmic objects as we see them, because the sensation associated with 
 them belongs to a distant frame (microcosm). 


 All objects are conscious?


No objects are conscious.
 




 A zombie is supernatural because rather than the seeming absence of 
 presence (normal), they imply the presence of absence, 


 ?



 which is unnatural and cannot exist. There can be no undead, only the 
 unlive.
  


 He is coherent, but invalid in his debunking of comp. He debunks only the 
 19th century conception of machines (controllable physical beings).


 I think that I also debunk the 21st century reality of machines. The 
 promissory mechanism offered by comp is purely a theoretical futurism - 


 Not at all. It is here and now. I have already interviewed such machines. 


Are there any such machines available to interview online?
 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Bruno Marchal


On 14 Oct 2013, at 20:13, Craig Weinberg wrote:




On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:


All objects are conscious?

No objects are conscious.


We agree on this.








Not at all. It is here and now. I have already interviewed such  
machines.


Are there any such machines available to interview online?


I can give you the code in Lisp, and it is up to you to find a good  
free Lisp. But don't mind too much: AUDA is an integral description of  
the interview. Today, such interviews are done with paper and pencil,  
and appear in books and papers.
You had better buy Boolos 1979, or 1993, but you have to study more  
logic too.





It is almost applicable, but the hard part is that it is blind to  
its own blindness, so that the certainty offered by mathematics  
comes at a cost which mathematics has no choice but to deny  
completely. Because mathematics cannot lie,


G* proves []f

Even Peano Arithmetic can lie.
Mathematical theories (sets of beliefs) can lie.

Only truth cannot lie, but nobody knows the truth as such.

 Something that is a paradox or inconsistent is not the same thing  
as an intentional attempt to deceive. I'm not sure what 'G* proves  
[]f' means but I think it will mean the same thing to anyone who  
understands it, and not something different to the boss than it does  
to the neighbor.


Actually it will have as many meanings as there are correct machines (a  
lot), but the laws remain the same. Then adding the non-monotonic  
umbrella, saving the Löbian machines from the constant mistakes and  
lies they make, provides different interpretations of []f, like


I dream,
I die,
I get mad,
I am in a cul-de-sac
I get wrong

etc.

It will depend on the intensional nuances in play.







it cannot intentionally tell the truth either, and no matter how  
sophisticated and self-referential a logic it is based on, it can  
never transcend its own alienation from feeling, physics, and  
authenticity.


That is correct, but again, that is justifiable by all correct  
sufficiently rich machines.


Not sure I understand. Are you saying that we, as rich machines,  
cannot intentionally lie or tell the truth either?


No, I am saying that all correct machines can eventually justify that  
if they are correct they can't express it, and if they are  
consistent, it will be consistent that they are wrong. So it means they  
can eventually exploit the false locally. Teams of universal numbers get  
entangled in very subtle prisoner dilemmas.

Universal machines can lie, and can crash.

Bruno
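[Editor's gloss, not Bruno's own text: his remarks about G, G*, and []f can be stated in the modal logic of provability, where []p reads "the machine proves p", f is falsity, G is the logic of what a correct machine can prove about itself, and G* the logic of what is true about it.]

```latex
% Löb's theorem (a theorem of G): if the machine proves
% "if I prove p then p", then it already proves p.
\Box(\Box p \to p) \to \Box p

% Taking p = f gives the formalized second incompleteness
% theorem: a consistent machine cannot prove its own consistency.
\neg\Box f \to \neg\Box\neg\Box f

% The G / G* split: consistency is true of a correct machine
% (so G* proves it) but is not provable by the machine itself.
G \nvdash \neg\Box f \qquad\qquad G^{*} \vdash \neg\Box f
```

On this reading, "if they are correct they can't express it" is the second incompleteness theorem, and the gap between G and G* is where the different intensional readings of []f live.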



http://iridia.ulb.ac.be/~marchal/





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Craig Weinberg


On Monday, October 14, 2013 3:17:06 PM UTC-4, Bruno Marchal wrote:


 On 14 Oct 2013, at 20:13, Craig Weinberg wrote:



 On Sunday, October 13, 2013 5:03:45 AM UTC-4, Bruno Marchal wrote:



  All objects are conscious?


 No objects are conscious.


 We agree on this.


  





  Not at all. It is here and now. I have already interviewed such machines. 


 Are there any such machines available to interview online?


  I can give you the code in Lisp, and it is up to you to find a good free 
  Lisp. But don't mind too much: AUDA is an integral description of the 
  interview. Today, such interviews are done with paper and pencil, and appear 
  in books and papers.
  You had better buy Boolos 1979, or 1993, but you have to study more logic too.


Doesn't it seem odd that there isn't much out there that is newer than 20 
years old, and that paper and pencils are the preferred instruments? Maybe 
I was premature in saying it was promissory...it would appear that there 
has not been any promise for it in quite some time.
 




 It is almost applicable, but the hard part is that it is blind to its own 
 blindness, so that the certainty offered by mathematics comes at a cost 
 which mathematics has no choice but to deny completely. Because mathematics 
 cannot lie, 


 G* proves []f

  Even Peano Arithmetic can lie.  
  Mathematical theories (sets of beliefs) can lie.

  Only truth cannot lie, but nobody knows the truth as such.


  Something that is a paradox or inconsistent is not the same thing as an 
 intentional attempt to deceive. I'm not sure what 'G* proves []f' means 
 but I think it will mean the same thing to anyone who understands it, and 
 not something different to the boss than it does to the neighbor.


  Actually it will have as many meanings as there are correct machines (a 
  lot), but the laws remain the same. Then adding the non-monotonic 
  umbrella, saving the Löbian machines from the constant mistakes and lies 
  they make, provides different interpretations of []f, like

 I dream,
 I die,
 I get mad,
 I am in a cul-de-sac
 I get wrong

 etc.

 It will depend on the intensional nuances in play.


Couldn't the machine output the same product as musical notes or colored 
pixels instead?
 







 it cannot intentionally tell the truth either, and no matter how 
 sophisticated and self-referential a logic it is based on, it can never 
 transcend its own alienation from feeling, physics, and authenticity. 


 That is correct, but again, that is justifiable by all correct 
 sufficiently rich machines.


 Not sure I understand. Are you saying that we, as rich machines, cannot 
 intentionally lie or tell the truth either?


  No, I am saying that all correct machines can eventually justify that if 
  they are correct they can't express it, and if they are consistent, it 
  will be consistent that they are wrong. So it means they can eventually 
  exploit the false locally. Teams of universal numbers get entangled in 
  very subtle prisoner dilemmas. 
  Universal machines can lie, and can crash.


That sounds like they can lie only when they calculate that they must, not 
that they can lie intentionally because they enjoy it or out of sadism.

Craig
 


 Bruno



 http://iridia.ulb.ac.be/~marchal/







Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Stathis Papaioannou
On 15 October 2013 05:05, Craig Weinberg whatsons...@gmail.com wrote:

 I understand that you don't think computers can have feelings, but I
 was asking if computers can perform all tasks that a human can
 perform, or if there are some tasks they just won't be able to do. If
 there are, then this suggests a test for consciousness.


 I don't know that there is such a thing as 'all tasks that a human can
 perform'. Before Mozart, humans could not perform Mozart concertos. Anything
 that a computer does is actually being done by the inventors and programmers
 of the computer, plus the physics of the medium being used to do the
 computing. By themselves, computers can't do much of anything. If someone
 could invent a computer smart enough even to turn itself off when it is
 stuck, they could make a fortune.

I think you are avoiding the question. Do you think there is any task
or job that a human can do but a computer is incapable of doing? For
example, being a hairdresser is certainly beyond any computer at
present. Is that because it requires consciousness, hence beyond
computers forever?


-- 
Stathis Papaioannou



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Craig Weinberg


On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:




 On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg 
  whats...@gmail.com
  wrote:



 On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:




 On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

  On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they can't 
 experience anything. Mozart could dig a hole as well as compose music, 
 but 
 that doesn't mean that a backhoe with a player piano on it is Mozart. 
 It's 
 a much deeper problem with how machines are conceptualized that has 
 nothing 
 at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers 
 experience anything, in principle, given that people can, and assuming 
 people are complicated machines?
  


 I think Craig would say he does think computers (and many/all other 
 things) do experience something,


 You're half right. I would say:

 1. All experiences correspond to some natural thing.
 2. Not all things are natural things. Bugs Bunny has no independent 
 experience, and neither does Pinocchio. 
 3. Computers are made of natural things but, like all machines, are 
 ultimately assembled unnaturally.
 4. The natural things that machines are made of would have to be very low 
 level, i.e., not gears but the molecules that make up the gears.

 Unless a machine used living organisms, molecules would probably be the 
 only natural things which an experience would be associated with. They 
 don't know that they are part of a machine, but there is probably an 
 experience that corresponds to thermodynamic and electromagnetic 
 conditions. Experiences on that level may not be proprietary to any 
 particular molecule - it could be very exotic, who knows. Maybe every atom 
 of the same structure represents the same kind of experience on some 
 radically different time scale from ours. 

 It's not really important - the main thing is to see how there is no 
 substitute for experience and a machine which is assembled from unrelated 
 parts has no experience and cannot gain new experience in an alien context.

 I think that a machine (or any inanimate object or symbol) can also serve 
 as a vehicle for synchronicity. That's a completely different thing because 
 it is the super-personal, holistic end of the sensible spectrum, not the 
 sub-personal, granular end. The creepiness of a ventriloquist dummy is in 
 our imagination, but that too is 'real' in an absolute sense. If your life 
 takes you on a path which tempts you to believe that machines are 
 conscious, then the super-personal lensing of your life will stack the deck 
 just enough to let you jump to those conclusions. It's what we would call 
 supernatural or coincidental, depending on which lens we use to define 
  it.  http://s33light.org/post/62173912616  (Don't you want to have a 
 body?)


 After reading this ( http://marshallbrain.com/discard1.htm ) I am not so 
 sure...
  

  

  just that it is necessarily different from what we experience. The 
 reason for this has something to do with our history as biological 
 organisms (according to his theory).


 Right, although not necessarily just biological history, it could be 
 chemical too. We may have branched off from anything that could be made 
 into a useful machine (servant to alien agendas) long before life on Earth.


 What if humanity left behind a nano-technology that eventually evolved 
 into mechanical organisms like dogs and fish, would they have animal like 
 experiences despite that they descended from unnatural things?


The thing that makes sense to me is that the richness of sensation and 
intention is inversely proportional to the degree to which a phenomenon 
can be controlled from the outside. If we put nano-tech extensions on some 
living organism, then sure, the organism could learn how to use those 
extensions and evolve a symbiotic post-biology. I don't think that project 
would be controllable though. They would not be machines in the sense that 
they would not necessarily be of service to those who created them. 

Craig


 Jason
  

 Craig
  

  
 Jason 






Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-14 Thread Jason Resch
On Mon, Oct 14, 2013 at 9:59 PM, Craig Weinberg whatsons...@gmail.com wrote:



 On Monday, October 14, 2013 4:37:35 PM UTC-4, Jason wrote:




  On Thu, Oct 10, 2013 at 10:54 AM, Craig Weinberg whats...@gmail.com wrote:



 On Wednesday, October 9, 2013 8:08:01 PM UTC-4, Jason wrote:




 On Wed, Oct 9, 2013 at 4:52 PM, LizR liz...@gmail.com wrote:

  On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they can't
 experience anything. Mozart could dig a hole as well as compose music, 
 but
 that doesn't mean that a backhoe with a player piano on it is Mozart. 
 It's
 a much deeper problem with how machines are conceptualized that has 
 nothing
 at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers
 experience anything, in principle, given that people can, and assuming
 people are complicated machines?



 I think Craig would say he does think computers (and many/all other
 things) do experience something,


 You're half right. I would say:

 1. All experiences correspond to some natural thing.
 2. Not all things are natural things. Bugs Bunny has no independent
 experience, and neither does Pinocchio.
 3. Computers are made of natural things but, like all machines, are
 ultimately assembled unnaturally.
 4. The natural things that machines are made of would have to be very
 low level, i.e., not gears but the molecules that make up the gears.

 Unless a machine used living organisms, molecules would probably be the
 only natural things which an experience would be associated with. They
 don't know that they are part of a machine, but there is probably an
 experience that corresponds to thermodynamic and electromagnetic
 conditions. Experiences on that level may not be proprietary to any
 particular molecule - it could be very exotic, who knows. Maybe every atom
 of the same structure represents the same kind of experience on some
 radically different time scale from ours.

 It's not really important - the main thing is to see how there is no
 substitute for experience and a machine which is assembled from unrelated
 parts has no experience and cannot gain new experience in an alien context.

 I think that a machine (or any inanimate object or symbol) can also
 serve as a vehicle for synchronicity. That's a completely different thing
 because it is the super-personal, holistic end of the sensible spectrum,
 not the sub-personal, granular end. The creepiness of a ventriloquist dummy
 is in our imagination, but that too is 'real' in an absolute sense. If your
 life takes you on a path which tempts you to believe that machines are
 conscious, then the super-personal lensing of your life will stack the deck
 just enough to let you jump to those conclusions. It's what we would call
 supernatural or coincidental, depending on which lens we use to define
  it.  http://s33light.org/post/62173912616  (Don't you want to have a body?)


  After reading this ( http://marshallbrain.com/discard1.htm ) I am not so sure...




  just that it is necessarily different from what we experience. The
 reason for this has something to do with our history as biological
 organisms (according to his theory).


 Right, although not necessarily just biological history, it could be
 chemical too. We may have branched off from anything that could be made
 into a useful machine (servant to alien agendas) long before life on Earth.


 What if humanity left behind a nano-technology that eventually evolved
 into mechanical organisms like dogs and fish, would they have animal like
 experiences despite that they descended from unnatural things?


 The thing that makes sense to me is that the richness of sensation and
  intention is inversely proportional to the degree to which a phenomenon
 can be controlled from the outside. If we put nano-tech extensions on some
 living organism, then sure, the organism could learn how to use those
 extensions and evolve a symbiotic post-biology. I don't think that project
 would be controllable though. They would not be machines in the sense that
 they would not necessarily be of service to those who created them.



Craig,

Thanks for your answer.  That was not quite what I was asking though.
Let's say the nano-tech did not extend some living organism, but were some
entirely autonomous, entirely artificial  cell-like structures, which could
find and utilize energy sources in the environment and reproduce
themselves.  Let's say after millions (or billions) of years, these
self-replicating nanobots evolved into multi-cellular organisms like
animals we are familiar with today. Could they have experiences like other
biological creatures that have a biological lineage? If not, why not?

Jason


Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-13 Thread Craig Weinberg


On Sunday, October 13, 2013 12:56:58 AM UTC-4, Liz R wrote:

 On 13 October 2013 17:40, Craig Weinberg whats...@gmail.com
  wrote:


 I don't believe in philosophical zombies. I use puppet because a puppet 
 implies an absence of conscious presence, which is an ordinary condition of 
  macrocosmic objects as we see them, because the sensation associated with 
 them belongs to a distant frame (microcosm). A zombie is supernatural 
 because rather than the seeming absence of presence (normal), they imply 
 the presence of absence, which is unnatural and cannot exist. There can be 
 no undead, only the unlive.


 Puppet implies a puppeteer. In a sense our bodies are puppets controlled 
 by our brains. So there is a conscious presence.


But the brain cannot be separated from the body. It's made of the same stem 
cell. It is not the prosthetic appendage of itself, it is a whole organism 
on a zoological level, a community of organisms on a biological level, and 
an ocean of chemical reactions on a chemical level. That the brain can 
influence the behavior of the other organs and tissues of the body and vice 
versa is not a puppet-ventriloquist relation, it is a multivalent fugue of 
interdependence (in which we participate directly, and through which we 
participate in social and super-personal dramas).


 I wonder if there are psychological conditions that are similar to 
 philosophical zombiehood? I.e. doing things as though conscious when you 
  aren't. (Maybe sleepwalking?) 


Sure, sleepwalking, blindsight (patients can guess what they are seeing 
correctly but have no ability to see), psychopathy (emotions are simulated 
but not felt). Synesthesia and plain old acting show that specific qualia 
and behaviors need not be linked automatically to what we expect them to 
represent.

Craig



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-13 Thread Bruno Marchal


On 12 Oct 2013, at 22:47, Stathis Papaioannou wrote:




On Sunday, 13 October 2013, Bruno Marchal wrote:

On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:

Because the article is consistent with my view that there is a  
fundamental difference between quantitative tasks and aesthetic  
awareness. If there were no difference, then I would expect that  
the problems that supermarket computers would have would not be  
related to its unconsciousness, but to unreliability or even  
willfulness developing. Why isn't the story Automated cashiers  
have begun throwing temper tantrums at some locations which are  
contagious to certain smart phones that now become upset in  
sympathy...we had anticipated this, but not so soon, yadda yadda?  
I think it's pretty clear why. For the same reason that all  
machines will always fall short of authentic personality and  
sensitivity.


So you would just say that computers lack authentic personality and  
sensitivity, no matter what they did.


Beyond question, yes. I wouldn't just say it, I would bet my life  
on it, because I understand it completely.


Do you believe that computers can perform any task a human can  
perform? If not, what is an example of a relatively simple task  
that a computer could never perform?


I thought Craig just made clear that computers might perform as  
well as humans, and that even in that case, he will not attribute  
sense and aesthetics to them.
This was already clear with my son-in-law (who got an artificial  
brain, and who can't enjoy a good meal at his restaurant).


He calls them puppets, but he believes in philosophical zombies.

He is coherent, but his debunking of comp is invalid. He debunks  
only the 19th-century conception of machines (controllable physical  
beings).


Craig is neither clear


I can accept that.



nor coherent.


I was just saying that he was coherent in his belief in some primary  
nature, and his disbelief in computationalism.





For example, he suggests above that the inadequacies of supermarket  
computers are due to their unconsciousness, which implies that there  
are some things an unconscious entity cannot do, and therefore there  
cannot be philosophical zombies. However, he says (I think - he is  
not clear) there is no test to tell the computers apart from the  
humans. This is inconsistent.


OK. I think he is incoherent by opportunism. He wants to use results  
from the literature, but those results concern behavior. There he is  
indeed often incoherent, as you illustrate well.


You are confronted with the task of explaining to someone incoherent  
that he is incoherent: a very difficult if not impossible task.  
Incoherent people can answer all questions very easily. Eventually he  
will (and already has) just refer to his own understanding, like I  
know that ..., etc.


Bruno



http://iridia.ulb.ac.be/~marchal/





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-13 Thread Bruno Marchal


On 13 Oct 2013, at 06:40, Craig Weinberg wrote:




On Saturday, October 12, 2013 12:27:08 PM UTC-4, Bruno Marchal wrote:

On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:




On Saturday, October 12, 2013, Craig Weinberg wrote:


On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:


On Saturday, October 12, 2013, Craig Weinberg wrote:


On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:


On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com  
wrote:





On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:
On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote:
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html


 A lot of what I am always talking about is in there...computers  
don't
 understand produce because they have no aesthetic sensibility. A  
mechanical
 description of a function is not the same thing as participating  
in an

 experience.

This is effectively a test for consciousness: if the entity can
perform the type of task you postulate requires aesthetic  
sensibility,

it must have aesthetic sensibility.


Not at all. That's exactly the opposite of what I am saying. The  
failure of digital mechanism to interface with aesthetic presence  
is not testable unless you yourself become a digital mechanism.  
There can never be a test of aesthetic sensibility because testing  
is by definition anesthetic. To test is to measure into a system  
of universal representation. Measurement is the removal of  
presence for the purpose of distribution as symbol. I can draw a  
picture of a robot correctly identifying a vegetable, but that  
doesn't mean that the drawing of the robot is doing anything. I  
can make a movie of the robot cartoon, or a sculpture, or an  
animated sculpture that has a sensor for iodine or magnesium which  
can be correlated to a higher probability of a particular  
vegetable, but that doesn't change anything at all. There is still  
no robot except in our experience and our expectations of its  
experience. The robot is not even a zombie, it is a puppet playing  
back recordings of our thoughts in a clever way.


OK, so it would prove nothing to you if the supermarket computers  
did a better job than the checkout chicks. Why then did you cite  
this article?


Because the article is consistent with my view that there is a  
fundamental difference between quantitative tasks and aesthetic  
awareness. If there were no difference, then I would expect that  
the problems that supermarket computers would have would not be  
related to its unconsciousness, but to unreliability or even  
willfulness developing. Why isn't the story Automated cashiers  
have begun throwing temper tantrums at some locations which are  
contagious to certain smart phones that now become upset in  
sympathy...we had anticipated this, but not so soon, yadda yadda?  
I think it's pretty clear why. For the same reason that all  
machines will always fall short of authentic personality and  
sensitivity.


So you would just say that computers lack authentic personality and  
sensitivity, no matter what they did.


Beyond question, yes. I wouldn't just say it, I would bet my life  
on it, because I understand it completely.


Do you believe that computers can perform any task a human can  
perform? If not, what is an example of a relatively simple task  
that a computer could never perform?


I thought Craig just made clear that computers might perform as  
well as humans, and that even in that case, he will not attribute  
sense and aesthetics to them.
This was already clear with my son-in-law (who got an artificial  
brain, and who can't enjoy a good meal at his restaurant).


He calls them puppets, but he believes in philosophical zombies.

I don't believe in philosophical zombies. I use puppet because a  
puppet implies an absence of conscious presence, which is an  
ordinary condition of macrocosmic objects as we see them, because  
the sensation associated with them belongs to a distant frame  
(microcosm).


All objects are conscious?



A zombie is supernatural because rather than the seeming absence of  
presence (normal), they imply the presence of absence,


?



which is unnatural and cannot exist. There can be no undead, only  
the unlive.



He is coherent, but invalid in his debunking of comp. He debunks  
only the 19th century conception of machines (controllable physical  
beings).


I think that I also debunk the 21st century reality of machines. The  
promissory mechanism offered by comp is purely a theoretical  
futurism -


Not at all. It is here and now. I have already interviewed such machines.



which I would not object to at all, but in this case, it so happens  
that it is not applicable to the universe that we actually live in.


Let me say it simply: I don't believe in universe(s). I have little  
doubt that there is a physical reality, but I have no evidence it  
comes from something like an 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-13 Thread Stathis Papaioannou
On 13 October 2013 15:29, Craig Weinberg whatsons...@gmail.com wrote:

 Perform to whose satisfaction? A cadaver can be made to twitch, or
 propped up to stand.


 Perform to the satisfaction of anyone you care to nominate. A committee of
 humans examine two people who have had a haircut, one from a human and the
 other from a computer, and try to decide which is which. This is repeated
 several times. If they can't tell the difference then we say the computer
 has succeeded in cutting hair as well as a human. Is there any task you
  think a computer will never be able to manage as well as a human in this sort
 of test?


 What does performing tasks have to do with anything? We are talking about
 the capacity to feel, experience, and care. If you could replace your hands
 with machines that would do everything your hands could do and quite a bit
 more, but would have no feeling in them at all, would you say that the robot
 hands were just as good to you as human hands? If your tongue could detect
 any chemical in the universe accurately and provide you with precise
 knowledge of it, but never allow you to taste any flavor or feel anything
 with your tongue again, would that be equivalent?

I understand that you don't think computers can have feelings, but I
was asking whether computers can perform all tasks that a human can
perform, or if there are some tasks they just won't be able to do. If
there are, then this suggests a test for consciousness.


-- 
Stathis Papaioannou



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Craig Weinberg


On Saturday, October 12, 2013 3:49:22 AM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com 
 wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
   http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers 
 don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in 
 an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic 
 sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The 
 failure of digital mechanism to interface with aesthetic presence is not 
 testable unless you yourself become a digital mechanism. There can never 
 be 
 a test of aesthetic sensibility because testing is by definition 
 anesthetic. To test is to measure into a system of universal 
 representation. Measurement is the removal of presence for the purpose of 
 distribution as symbol. I can draw a picture of a robot correctly 
 identifying a vegetable, but that doesn't mean that the drawing of the 
 robot is doing anything. I can make a movie of the robot cartoon, or a 
 sculpture, or an animated sculpture that has a sensor for iodine or 
 magnesium which can be correlated to a higher probability of a particular 
 vegetable, but that doesn't change anything at all. There is still no 
 robot 
 except in our experience and our expectations of its experience. The 
 robot 
 is not even a zombie, it is a puppet playing back recordings of our 
 thoughts in a clever way.
  

 OK, so it would prove nothing to you if the supermarket computers did 
 a better job than the checkout chicks. Why then did you cite this article?


 Because the article is consistent with my view that there is a 
 fundamental difference between quantitative tasks and aesthetic awareness. 
 If there were no difference, then I would expect that the problems that 
 supermarket computers would have would not be related to its 
 unconsciousness, but to unreliability or even willfulness developing. Why 
 isn't the story Automated cashiers have begun throwing temper tantrums at 
 some locations which are contagious to certain smart phones that now 
 become 
 upset in sympathy...we had anticipated this, but not so soon, yadda 
 yadda? 
 I think it's pretty clear why. For the same reason that all machines will 
 always fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and 
 sensitivity, no matter what they did.


 Beyond question, yes. I wouldn't just say it, I would bet my life on it, 
 because I understand it completely.


 Do you believe that computers can perform any task a human can perform? If 
 not, what is an example of a relatively simple task that a computer could 
 never perform? 


Perform to whose satisfaction? A cadaver can be made to twitch, or propped 
up to stand.

Being human is nothing to do with performing tasks. Our immune system 
probably does more complex tasks every minute than the whole history of 
human beings has ever done (when we build a machine that looks like an 
insulin molecule, we're still just beginning). Being human is about 
experiencing with a particular depth of sensitivity. A computer is not even 
a whole thing except in our mind. It is a collection of switches, which are 
collections of molecules. Those molecules, I think, do share sensitivity, 
or rather, there is a sensitivity which appears to our extended senses as 
molecules, but they are sensitive to very different ranges of presence.

By compulsively reducing everything to an expectation of repeatable tasks 
and behaviors, there is no chance to locate what awareness is, since it is 
the opposite of all repetition and all that is repeatable.

Craig
 



 -- 
 Stathis Papaioannou




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Craig Weinberg


On Saturday, October 12, 2013 5:01:40 AM UTC-4, Bruno Marchal wrote:


 On 12 Oct 2013, at 06:28, Craig Weinberg wrote:



 On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
  http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers 
 don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in 
 an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The 
 failure of digital mechanism to interface with aesthetic presence is not 
 testable unless you yourself become a digital mechanism. There can never 
 be 
 a test of aesthetic sensibility because testing is by definition 
 anesthetic. To test is to measure into a system of universal 
 representation. Measurement is the removal of presence for the purpose of 
 distribution as symbol. I can draw a picture of a robot correctly 
 identifying a vegetable, but that doesn't mean that the drawing of the 
 robot is doing anything. I can make a movie of the robot cartoon, or a 
 sculpture, or an animated sculpture that has a sensor for iodine or 
 magnesium which can be correlated to a higher probability of a particular 
 vegetable, but that doesn't change anything at all. There is still no 
 robot 
 except in our experience and our expectations of its experience. The robot 
 is not even a zombie, it is a puppet playing back recordings of our 
 thoughts in a clever way.
  

 OK, so it would prove nothing to you if the supermarket computers did a 
 better job than the checkout chicks. Why then did you cite this article?


 Because the article is consistent with my view that there is a 
 fundamental difference between quantitative tasks and aesthetic awareness. 
 If there were no difference, then I would expect that the problems that 
 supermarket computers would have would not be related to its 
 unconsciousness, but to unreliability or even willfulness developing. Why 
 isn't the story Automated cashiers have begun throwing temper tantrums at 
 some locations which are contagious to certain smart phones that now become 
 upset in sympathy...we had anticipated this, but not so soon, yadda yadda? 
 I think it's pretty clear why. For the same reason that all machines will 
 always fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and 
 sensitivity, no matter what they did.


 Beyond question, yes. I wouldn't just say it, I would bet my life on it, 
 because I understand it completely.


 That's an authoritative argument. Whatever machines can do, they can't 
 think, because I think so.


It's not because I think so, it's because I understand why the experience 
of thinking is not necessary to do anything that a machine can do. I 
understand why paint by numbers of the Mona Lisa is not the same thing as 
Leonardo Da Vinci.

Craig
 


 Hmm

 Bruno





  



 -- 
 Stathis Papaioannou




 http://iridia.ulb.ac.be/~marchal/







Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Stathis Papaioannou
On Saturday, October 12, 2013, Craig Weinberg wrote:

Do you believe that computers can perform any task a human can perform? If
 not, what is an example of a relatively simple task that a computer could
 never perform?


 Perform to whose satisfaction? A cadaver can be made to twitch, or propped
 up to stand.


Perform to the satisfaction of anyone you care to nominate. A committee of
humans examine two people who have had a haircut, one from a human and the
other from a computer, and try to decide which is which. This is repeated
several times. If they can't tell the difference then we say the computer
has succeeded in cutting hair as well as a human. Is there any task you
think a computer will never be able to manage as well as a human in this sort
of test?


 Being human is nothing to do with performing tasks. Our immune system
 probably does more complex tasks every minute than the whole history of
 human beings has ever done (when we build a machine that looks like an
 insulin molecule, we're still just beginning). Being human is about
 experiencing with a particular depth of sensitivity. A computer is not even
 a whole thing except in our mind. It is a collection of switches, which are
 collections of molecules. Those molecules, I think, do share sensitivity,
 or rather, there is a sensitivity which appears to our extended senses as
 molecules, but they are sensitive to very different ranges of presence.

 By compulsively reducing everything to an expectation of repeatable tasks
 and behaviors, there is no chance to locate what awareness is, since it is
 the opposite of all repetition and all that is repeatable.

 Craig







 --
 Stathis Papaioannou




-- 
Stathis Papaioannou



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Bruno Marchal


On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:




On Saturday, October 12, 2013, Craig Weinberg wrote:


On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:


On Saturday, October 12, 2013, Craig Weinberg wrote:


On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:


On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com  
wrote:





On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:
On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote:
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html


 A lot of what I am always talking about is in there...computers  
don't
 understand produce because they have no aesthetic sensibility. A  
mechanical
 description of a function is not the same thing as participating  
in an

 experience.

This is effectively a test for consciousness: if the entity can
perform the type of task you postulate requires aesthetic  
sensibility,

it must have aesthetic sensibility.


Not at all. That's exactly the opposite of what I am saying. The  
failure of digital mechanism to interface with aesthetic presence  
is not testable unless you yourself become a digital mechanism.  
There can never be a test of aesthetic sensibility because testing  
is by definition anesthetic. To test is to measure into a system of  
universal representation. Measurement is the removal of presence  
for the purpose of distribution as symbol. I can draw a picture of  
a robot correctly identifying a vegetable, but that doesn't mean  
that the drawing of the robot is doing anything. I can make a movie  
of the robot cartoon, or a sculpture, or an animated sculpture that  
has a sensor for iodine or magnesium which can be correlated to a  
higher probability of a particular vegetable, but that doesn't  
change anything at all. There is still no robot except in our  
experience and our expectations of its experience. The robot is not  
even a zombie, it is a puppet playing back recordings of our  
thoughts in a clever way.


OK, so it would prove nothing to you if the supermarket computers  
did a better job than the checkout chicks. Why then did you cite  
this article?


Because the article is consistent with my view that there is a  
fundamental difference between quantitative tasks and aesthetic  
awareness. If there were no difference, then I would expect that the  
problems that supermarket computers would have would not be related  
to its unconsciousness, but to unreliability or even willfulness  
developing. Why isn't the story Automated cashiers have begun  
throwing temper tantrums at some locations which are contagious to  
certain smart phones that now become upset in sympathy...we had  
anticipated this, but not so soon, yadda yadda? I think it's pretty  
clear why. For the same reason that all machines will always fall  
short of authentic personality and sensitivity.


So you would just say that computers lack authentic personality and  
sensitivity, no matter what they did.


Beyond question, yes. I wouldn't just say it, I would bet my life on  
it, because I understand it completely.


Do you believe that computers can perform any task a human can  
perform? If not, what is an example of a relatively simple task that  
a computer could never perform?


I thought Craig just made clear that computers might perform as well  
as humans, and that even in that case, he will not attribute sense and  
aesthetics to them.
This was already clear with my son-in-law (who got an artificial  
brain, and who can't enjoy a good meal at his restaurant).


He calls them puppets, but he believes in philosophical zombies.

He is coherent, but invalid in his debunking of comp. He debunks only  
the 19th century conception of machines (controllable physical beings).


Bruno






--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Stathis Papaioannou
On Sunday, 13 October 2013, Bruno Marchal wrote:


 On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:

 Because the article is consistent with my view that there is a fundamental
 difference between quantitative tasks and aesthetic awareness. If there
 were no difference, then I would expect that the problems that supermarket
 computers would have would not be related to its unconsciousness, but to
 unreliability or even willfulness developing. Why isn't the story
 Automated cashiers have begun throwing temper tantrums at some locations
 which are contagious to certain smart phones that now become upset in
 sympathy...we had anticipated this, but not so soon, yadda yadda? I think
 it's pretty clear why. For the same reason that all machines will always
 fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and
 sensitivity, no matter what they did.


 Beyond question, yes. I wouldn't just say it, I would bet my life on it,
 because I understand it completely.


 Do you believe that computers can perform any task a human can perform? If
 not, what is an example of a relatively simple task that a computer could
 never perform?


 I thought Craig just made clear that computers might perform as well as
 humans, and that even in that case, he will not attribute sense and
 aesthetics to them.
 This was already clear with my son-in-law (who got an artificial brain,
 and who can't enjoy a good meal at his restaurant).

 He calls them puppets, but he believes in philosophical zombies.

 He is coherent, but invalid in his debunking of comp. He debunks only the
 19th century conception of machines (controllable physical beings).


Craig is neither clear nor coherent. For example, he suggests above that
the inadequacies of supermarket computers are due to their unconsciousness,
which implies that there are some things an unconscious entity cannot do,
and therefore there cannot be philosophical zombies. However, he says (I
think - he is not clear) there is no test to tell the computers apart from
the humans. This is inconsistent.


-- 
Stathis Papaioannou



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Craig Weinberg


On Saturday, October 12, 2013 10:11:20 AM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:

 Do you believe that computers can perform any task a human can perform? If 
 not, what is an example of a relatively simple task that a computer could 
 never perform? 


 Perform to whose satisfaction? A cadaver can be made to twitch, or 
 propped up to stand.


 Perform to the satisfaction of anyone you care to nominate. A committee of 
 humans examine two people who have had a haircut, one from a human and the 
 other from a computer, and try to decide which is which. This is repeated 
 several times. If they can't tell the difference then we say the computer 
 has succeeded in cutting hair as well as a human. Is there any task you 
 think a computer will never be able to manage as well as a human in this 
 sort of test?


What does performing tasks have to do with anything? We are talking about 
the capacity to feel, experience, and care. If you could replace your hands 
with machines that would do everything your hands could do and quite a bit 
more, but would have no feeling in them at all, would you say that the 
robot hands were just as good to you as human hands? If your tongue could 
detect any chemical in the universe accurately and provide you with precise 
knowledge of it, but never allow you to taste any flavor or feel anything 
with your tongue again, would that be equivalent?

Craig

 

 Being human is nothing to do with performing tasks. Our immune system 
 probably does more complex tasks every minute than the whole history of 
 human beings has ever done (when we build a machine that looks like an 
 insulin molecule, we're still just beginning). Being human is about 
 experiencing with a particular depth of sensitivity. A computer is not even 
 a whole thing except in our mind. It is a collection of switches, which are 
 collections of molecules. Those molecules, I think, do share sensitivity, 
 or rather, there is a sensitivity which appears to our extended senses as 
 molecules, but they are sensitive to very different ranges of presence.

 By compulsively reducing everything to an expectation of repeatable tasks 
 and behaviors, there is no chance to locate what awareness is, since it is 
 the opposite of all repetition and all that is repeatable.

 Craig
  


  

  

 -- 
 Stathis Papaioannou




 -- 
 Stathis Papaioannou




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread Craig Weinberg


On Saturday, October 12, 2013 12:27:08 PM UTC-4, Bruno Marchal wrote:


 On 12 Oct 2013, at 09:49, Stathis Papaioannou wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com 
 wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers 
 don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in 
 an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic 
 sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The 
 failure of digital mechanism to interface with aesthetic presence is not 
 testable unless you yourself become a digital mechanism. There can never 
 be 
 a test of aesthetic sensibility because testing is by definition 
 anesthetic. To test is to measure into a system of universal 
 representation. Measurement is the removal of presence for the purpose of 
 distribution as symbol. I can draw a picture of a robot correctly 
 identifying a vegetable, but that doesn't mean that the drawing of the 
 robot is doing anything. I can make a movie of the robot cartoon, or a 
 sculpture, or an animated sculpture that has a sensor for iodine or 
 magnesium which can be correlated to a higher probability of a particular 
 vegetable, but that doesn't change anything at all. There is still no 
 robot 
 except in our experience and our expectations of its experience. The 
 robot 
 is not even a zombie, it is a puppet playing back recordings of our 
 thoughts in a clever way.
  

 OK, so it would prove nothing to you if the supermarket computers did 
 a better job than the checkout chicks. Why then did you cite this article?


 Because the article is consistent with my view that there is a 
 fundamental difference between quantitative tasks and aesthetic awareness. 
 If there were no difference, then I would expect that the problems that 
 supermarket computers would have would not be related to its 
 unconsciousness, but to unreliability or even willfulness developing. Why 
 isn't the story "Automated cashiers have begun throwing temper tantrums at 
 some locations which are contagious to certain smart phones that now 
 become upset in sympathy...we had anticipated this, but not so soon, yadda 
 yadda"? 
 I think it's pretty clear why. For the same reason that all machines will 
 always fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and 
 sensitivity, no matter what they did.


 Beyond question, yes. I wouldn't just say it, I would bet my life on it, 
 because I understand it completely.


 Do you believe that computers can perform any task a human can perform? If 
 not, what is an example of a relatively simple task that a computer could 
 never perform? 


 I thought Craig just made clear that computers might perform as well as 
 humans, and that even in that case, he will not attribute sense and 
 aesthetics to them.
 This was already clear with my son-in-law (who got an artificial brain, 
 and who can't enjoy a good meal at his restaurant). 

 He calls them puppets, but he believes in philosophical zombies.


I don't believe in philosophical zombies. I use puppet because a puppet 
implies an absence of conscious presence, which is an ordinary condition of 
macrocosmic objects as we see them, because the sensation associated with 
them belongs to a distant frame (microcosm). A zombie is supernatural 
because rather than the seeming absence of presence (normal), they imply 
the presence of absence, which is unnatural and cannot exist. There can be 
no undead, only the unlive.
 


 He is coherent, but his debunking of comp is invalid. He debunks only the 
 19th-century conception of machines (controllable physical beings).


I think that I also debunk the 21st century reality of machines. The 
promissory mechanism offered by comp is purely a theoretical futurism - 
which I would not object to at all, but in this case, it so happens that it 
is not applicable to the universe that we actually live in. It is almost 
applicable, but the hard part is that it is blind to its own blindness, so 
that the certainty offered by mathematics comes at a cost which mathematics 
has no choice but to deny completely. Because 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread LizR
On 13 October 2013 17:40, Craig Weinberg whatsons...@gmail.com wrote:


 I don't believe in philosophical zombies. I use puppet because a puppet
 implies an absence of conscious presence, which is an ordinary condition of
 macrocosmic objects as we see them, because the sensation associated with
 them belongs to a distant frame (microcosm). A zombie is supernatural
 because rather than the seeming absence of presence (normal), they imply
 the presence of absence, which is unnatural and cannot exist. There can be
 no undead, only the unlive.


Puppet implies a puppeteer. In a sense our bodies are puppets controlled by
our brains. So there is a conscious presence.

I wonder if there are psychological conditions that are similar to
philosophical zombiehood? I.e. doing things as though conscious when you
aren't. (Maybe sleepwalking?)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-12 Thread meekerdb

On 10/12/2013 9:56 PM, LizR wrote:
On 13 October 2013 17:40, Craig Weinberg whatsons...@gmail.com 
mailto:whatsons...@gmail.com wrote:



I don't believe in philosophical zombies. I use puppet because a puppet 
implies an
absence of conscious presence, which is an ordinary condition of 
macrocosmic objects
as we see them, because the sensation associated with them belongs to a 
distant
frame (microcosm). A zombie is supernatural because rather than the seeming 
absence
of presence (normal), they imply the presence of absence, which is 
unnatural and
cannot exist. There can be no undead, only the unlive.


Puppet implies a puppeteer. In a sense our bodies are puppets controlled by our brains. 
So there is a conscious presence.


I wonder if there are psychological conditions that are similar to philosophical 
zombiehood? I.e. doing things as though conscious when you aren't. (Maybe sleepwalking?)


Or proving theorems in mathematics, cf. the Poincaré effect. 
http://www.is.wayne.edu/DRBOWEN/CRTVYW99/POINCARE.HTM


Brent



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-11 Thread Craig Weinberg


On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com 
 wrote: 
  
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic sensibility, 
 it must have aesthetic sensibility. 


Not at all. That's exactly the opposite of what I am saying. The failure of 
digital mechanism to interface with aesthetic presence is not testable 
unless you yourself become a digital mechanism. There can never be a test 
of aesthetic sensibility because testing is by definition anesthetic. To 
test is to measure into a system of universal representation. Measurement 
is the removal of presence for the purpose of distribution as symbol. I can 
draw a picture of a robot correctly identifying a vegetable, but that 
doesn't mean that the drawing of the robot is doing anything. I can make a 
movie of the robot cartoon, or a sculpture, or an animated sculpture that 
has a sensor for iodine or magnesium which can be correlated to a higher 
probability of a particular vegetable, but that doesn't change anything at 
all. There is still no robot except in our experience and our expectations 
of its experience. The robot is not even a zombie, it is a puppet playing 
back recordings of our thoughts in a clever way.

Craig


 -- 
 Stathis Papaioannou 




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-11 Thread Stathis Papaioannou


On Oct 11, 2013, at 8:19 PM, Craig Weinberg whatsons...@gmail.com wrote:

 
 
 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:
 
 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
  http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
   
 
 
  A lot of what I am always talking about is in there...computers don't 
  understand produce because they have no aesthetic sensibility. A 
  mechanical 
  description of a function is not the same thing as participating in an 
  experience. 
 
 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic sensibility, 
 it must have aesthetic sensibility.
 
 Not at all. That's exactly the opposite of what I am saying. The failure of 
 digital mechanism to interface with aesthetic presence is not testable unless 
 you yourself become a digital mechanism. There can never be a test of 
 aesthetic sensibility because testing is by definition anesthetic. To test is 
 to measure into a system of universal representation. Measurement is the 
 removal of presence for the purpose of distribution as symbol. I can draw a 
 picture of a robot correctly identifying a vegetable, but that doesn't mean 
 that the drawing of the robot is doing anything. I can make a movie of the 
 robot cartoon, or a sculpture, or an animated sculpture that has a sensor for 
 iodine or magnesium which can be correlated to a higher probability of a 
 particular vegetable, but that doesn't change anything at all. There is still 
 no robot except in our experience and our expectations of its experience. The 
 robot is not even a zombie, it is a puppet playing back recordings of our 
 thoughts in a clever way.

OK, so it would prove nothing to you if the supermarket computers did a better 
job than the checkout chicks. Why then did you cite this article?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-11 Thread Craig Weinberg


On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com 
 wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
  
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The failure 
 of digital mechanism to interface with aesthetic presence is not testable 
 unless you yourself become a digital mechanism. There can never be a test 
 of aesthetic sensibility because testing is by definition anesthetic. To 
 test is to measure into a system of universal representation. Measurement 
 is the removal of presence for the purpose of distribution as symbol. I can 
 draw a picture of a robot correctly identifying a vegetable, but that 
 doesn't mean that the drawing of the robot is doing anything. I can make a 
 movie of the robot cartoon, or a sculpture, or an animated sculpture that 
 has a sensor for iodine or magnesium which can be correlated to a higher 
 probability of a particular vegetable, but that doesn't change anything at 
 all. There is still no robot except in our experience and our expectations 
 of its experience. The robot is not even a zombie, it is a puppet playing 
 back recordings of our thoughts in a clever way.


 OK, so it would prove nothing to you if the supermarket computers did a 
 better job than the checkout chicks. Why then did you cite this article?


Because the article is consistent with my view that there is a fundamental 
difference between quantitative tasks and aesthetic awareness. If there 
were no difference, then I would expect that the problems that supermarket 
computers would have would not be related to its unconsciousness, but to 
unreliability or even willfulness developing. Why isn't the story 
"Automated cashiers have begun throwing temper tantrums at some locations 
which are contagious to certain smart phones that now become upset in 
sympathy...we had anticipated this, but not so soon, yadda yadda"? I think 
it's pretty clear why. For the same reason that all machines will always 
fall short of authentic personality and sensitivity.

Craig



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-11 Thread Craig Weinberg


On Friday, October 11, 2013 11:32:49 PM UTC-4, stathisp wrote:



 On Saturday, October 12, 2013, Craig Weinberg wrote:



 On Friday, October 11, 2013 5:37:52 PM UTC-4, stathisp wrote:



 On Oct 11, 2013, at 8:19 PM, Craig Weinberg whats...@gmail.com wrote:



 On Thursday, October 10, 2013 8:58:30 PM UTC-4, stathisp wrote:

 On 9 October 2013 05:25, Craig Weinberg whats...@gmail.com wrote: 
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  


  A lot of what I am always talking about is in there...computers don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in 
 an 
  experience. 

 This is effectively a test for consciousness: if the entity can 
 perform the type of task you postulate requires aesthetic sensibility, 
 it must have aesthetic sensibility. 


 Not at all. That's exactly the opposite of what I am saying. The failure 
 of digital mechanism to interface with aesthetic presence is not testable 
 unless you yourself become a digital mechanism. There can never be a test 
 of aesthetic sensibility because testing is by definition anesthetic. To 
 test is to measure into a system of universal representation. Measurement 
 is the removal of presence for the purpose of distribution as symbol. I can 
 draw a picture of a robot correctly identifying a vegetable, but that 
 doesn't mean that the drawing of the robot is doing anything. I can make a 
 movie of the robot cartoon, or a sculpture, or an animated sculpture that 
 has a sensor for iodine or magnesium which can be correlated to a higher 
 probability of a particular vegetable, but that doesn't change anything at 
 all. There is still no robot except in our experience and our expectations 
 of its experience. The robot is not even a zombie, it is a puppet playing 
 back recordings of our thoughts in a clever way.
  

 OK, so it would prove nothing to you if the supermarket computers did a 
 better job than the checkout chicks. Why then did you cite this article?


 Because the article is consistent with my view that there is a 
 fundamental difference between quantitative tasks and aesthetic awareness. 
 If there were no difference, then I would expect that the problems that 
 supermarket computers would have would not be related to its 
 unconsciousness, but to unreliability or even willfulness developing. Why 
 isn't the story "Automated cashiers have begun throwing temper tantrums at 
 some locations which are contagious to certain smart phones that now become 
 upset in sympathy...we had anticipated this, but not so soon, yadda yadda"? 
 I think it's pretty clear why. For the same reason that all machines will 
 always fall short of authentic personality and sensitivity.


 So you would just say that computers lack authentic personality and 
 sensitivity, no matter what they did.


Beyond question, yes. I wouldn't just say it, I would bet my life on it, 
because I understand it completely.

 



 -- 
 Stathis Papaioannou




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-10 Thread Craig Weinberg


On Wednesday, October 9, 2013 8:30:16 PM UTC-4, Liz R wrote:

 On 10 October 2013 13:03, Craig Weinberg whats...@gmail.com
  wrote:


 On Wednesday, October 9, 2013 5:52:46 PM UTC-4, Liz R wrote:

 On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, *it's that they can't 
 experience anything*. Mozart could dig a hole as well as compose 
 music, but that doesn't mean that a backhoe with a player piano on it is 
 Mozart. It's a much deeper problem with how machines are conceptualized 
 that has nothing at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers 
 experience anything, in principle, given that people can, and assuming 
 people are complicated machines?


 I don't think that people are machines. A machine is assembled 
 intentionally from unrelated substances to perform a function which is 
 alien to any of the substances. Living organisms are not assembled, they 
 grow from a single cell. They have no unrelated substances and all 
 functions they perform are local to the motives of the organism as a whole. 


 I believe that, at least in discussions such as this one, defining people 
 as machines has nothing to do with how or why they are constructed, and 
 everything to do with ruling out any supernatural components. 


Right, but that's what I am saying is the problem. It would be like making 
generalizations about liquids based on water and saying that alcohol can't 
burn because it's a liquid. A machine and a person might both be able to 
say 'hello', but the machine was constructed by people who know what hello 
means, and the person knows what hello means because they were the ones who 
constructed the word. The word exists to serve their own agenda, not that 
of an alien programmer.

 

 Anyway, allow me to rephrase the question.

 I assume from the underlined comment that you think that strong AI is 
 wrong, and that we will never be able to build a conscious computer. How do 
 you come to that conclusion?


I guess that I came to that conclusion by first trying to exhaust the other 
alternatives and then by coming up with a way to make sense of awareness as 
what I call Primordial Identity Pansensitivity. This means that physics and 
information are incomplete reflections within sense rather than producers 
of consciousness. Physics is sense experience that is alienated by entropy 
(spacetime) and information is sense experience which has been alienated by 
generalization (abstraction). Information cannot be pieced together to make 
an experience. No copy can be made into an original. This is not because of 
some special sentimental feeling about consciousness, it's rooted in a 
careful consideration of the number of clues that we have about perceptual 
relativity, authenticity, uniqueness, polarity, multiplicity, automaticity, 
representation, impersonality, and significance.


 This is an even bigger deal if I am right about the universe being 
 fundamentally a subdividing capacity for experience rather than a place or 
 theater of interacting objects or forces. It means that we are not our 
 body, rather a body is what someone else's lifetime looks like from inside 
 of your lifetime. It's a token. The mechanisms of the brain do not produce 
 awareness as a product, any more than these combinations of letters produce 
 the thoughts I am communicating. What we see neurons doing is comparable to 
 looking at a satellite picture of a city at night. We can learn a lot about 
 what a city does, but nothing about who lives in the city. A city, like a 
 human body, is a machine when you look at it from a distance, but what we 
 see of a body or a city would be perfectly fine with no awareness happening 
 at all. 


 Insofar as I understand it, I agree with this. I often wonder how a load 
 of atoms can have experiences, so to speak. This is the so-called hard 
 problem of AI. It is (I think) addressed by comp.


If I'm right, then comp cannot address the hard problem. If we try to make 
it seem to address it, I think that it would have no choice but to get it 
exactly wrong. Comp fails because of the symbol grounding problem and the 
pathetic fallacy. It should be evident from Incompleteness that no symbol 
can literally symbolize anything, and that all mathematical systems can 
only relate to isolated specifics or universal tautologies. Math cannot 
live because it can't change. It doesn't care. It doesn't know where it's 
been or where it's going. Comp is only one footprint of the absolute - the 
generic vacuum which divides experiences from each other. It misses 
presentation entirely, and so can only be a representation of 
representation...as Baudrillard would say, a Stage Four Simulacra:

The fourth stage is pure simulation, in which the simulacrum has no 
relationship to any reality whatsoever. Here, signs merely reflect other 
signs and any claim to reality on the part 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-10 Thread LizR
On 11 October 2013 04:54, Craig Weinberg whatsons...@gmail.com wrote:

 Unless a machine used living organisms, molecules would probably be the
 only natural things which an experience would be associated with. They
 don't know that they are part of a machine, but there is probably an
 experience that corresponds to thermodynamic and electromagnetic
 conditions. Experiences on that level may not be proprietary to any
 particular molecule - it could be very exotic, who knows. Maybe every atom
 of the same structure represents the same kind of experience on some
 radically different time scale from ours.


Wow! Molecular experiences! That seems... far out, man. Could you get me
some of whatever you're taking? :)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-10 Thread LizR
On 11 October 2013 11:37, Craig Weinberg whatsons...@gmail.com wrote:



 On Thursday, October 10, 2013 4:32:54 PM UTC-4, Liz R wrote:

 On 11 October 2013 04:54, Craig Weinberg whats...@gmail.com wrote:

 Unless a machine used living organisms, molecules would probably be the
 only natural things which an experience would be associated with. They
 don't know that they are part of a machine, but there is probably an
 experience that corresponds to thermodynamic and electromagnetic
 conditions. Experiences on that level may not be proprietary to any
 particular molecule - it could be very exotic, who knows. Maybe every atom
 of the same structure represents the same kind of experience on some
 radically different time scale from ours.


 Wow! Molecular experiences! That seems... far out, man. Could you get
 me some of whatever you're taking? :)


 You mean can I get you some molecules to interact with the molecules of
 your brain :)?

 If we have experiences, and we are made of molecules, then what would be
 the logic of an arbitrary barrier beyond which non-experience suddenly
 turns into experience? If molecules don't need experiences to build
 biology, and stem cells don't need experience to build nervous systems and
 immune systems, then I find it pretty improbable that a particular species
 of animal would suddenly be the first entities to ever experience any part
 of the universe in any way, just because it makes it easier to do the
 things that every other organism does - find food, reproduce, avoid
 threats.

 This is an interesting reversal of the usual argument of people like
Daniel Dennett, which goes something like "we are made of molecules,
molecules can't have experiences, therefore we don't really have
experiences, we just think we do." -- Obviously paraphrased to absurdity,
but that's the basic idea as far as I can see. Your argument uses the same
logic, inverted - we have experiences, we're made of molecules, therefore
molecules have experiences!

Nice, although I feel that by stopping at molecules you're denying the fact
that quarks and electrons obviously have experiences too, and perhaps even
free will (Shall I be spin-up or spin-down today?)



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-10 Thread Craig Weinberg


On Thursday, October 10, 2013 6:53:18 PM UTC-4, Liz R wrote:

 On 11 October 2013 11:37, Craig Weinberg whats...@gmail.com wrote:



 On Thursday, October 10, 2013 4:32:54 PM UTC-4, Liz R wrote:

 On 11 October 2013 04:54, Craig Weinberg whats...@gmail.com wrote:

 Unless a machine used living organisms, molecules would probably be the 
 only natural things which an experience would be associated with. They 
 don't know that they are part of a machine, but there is probably an 
 experience that corresponds to thermodynamic and electromagnetic 
 conditions. Experiences on that level may not be proprietary to any 
 particular molecule - it could be very exotic, who knows. Maybe every atom 
 of the same structure represents the same kind of experience on some 
 radically different time scale from ours. 


 Wow! Molecular experiences! That seems..far out, man. Could you get 
 me some of whatever you're taking? :)


 You mean can I get you some molecules to interact with the molecules of 
 your brain :)? 

 If we have experiences, and we are made of molecules, then what would be 
 the logic of an arbitrary barrier beyond which non-experience suddenly 
 turns into experience? If molecules don't need experiences to build 
 biology, and stem cells don't need experience to build nervous systems and 
 immune systems, then I find it pretty improbable that a particular species 
 of animal would suddenly be the first entities to ever experience any part 
 of the universe in any way, just because it makes it easier to do the 
 things that every other organism does - find food, reproduce, avoid 
 threats. 

 This is an interesting reversal of the usual argument of people like 
 Daniel Dennett, which goes something like we are made of molecules, 
 molecules can't have experiences, therefore we don't really have 
 experiences, we just think we do. -- Obviously paraphrased to absurdity, 
 but that's the basic idea as far as I can see. Your argument uses the same 
 logic, inverted - we have experiences, we're made of molecules, therefore 
 molecules have experiences!

 Nice, although I feel that by stopping at molecules you're denying the 
 fact that quarks and electrons obviously have experiences too, and perhaps 
 even free will (Shall I be spin-up or spin down today?)


I am more inclined to think that quarks and electrons actually *are* the 
experiences of atoms. When you use your body to use another collection of 
bodies to tell you about other bodies, what you get is something like the 
fairy tale of matter (except it's really an anti-fairy tale). As far as I 
can tell, there is no reason to assume that it is possible for anything 
other than experiences to exist. Something that is not experienced, and can 
never be experienced in any way, either directly or indirectly, is 
indistinguishable in every way from nothing at all.

As far as free will goes, my guess is that as we move further from our own 
scale of perception (I call it the perceptual inertial frame, because that 
is exactly what it seems to be), down to the instant of wavefunction 
collapse, or out to the open-ended frame of 'fate', free will and 
probability are fused together. The dualistic sense that makes our free 
will seem so personal and the world's causes so impersonal (either 
mechanistically determined or probabilistic, either way unintentional) 
arises because every inertial frame acts like a lens (metaphorically), 
bending the image of experience into this dipole of participation.

The only question to me is whether we just happen to be right smack in the 
middle of this continuum, in the most fertile band where the dipole has 
grown the most polarized, or whether that too is a function of perceptual 
relativity (I call it eigenmorphism: 
http://multisenserealism.com/thesis/6-panpsychism/eigenmorphism/).

As for the Dennett comparison, I think that's reasonable, although I 
think that it actually makes sense my way, and is absurd Dennett's way, 
where we merely think that there is such a thing as thinking?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-10 Thread Stathis Papaioannou
On 9 October 2013 05:25, Craig Weinberg whatsons...@gmail.com wrote:
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html


 A lot of what I am always talking about is in there...computers don't
 understand produce because they have no aesthetic sensibility. A mechanical
 description of a function is not the same thing as participating in an
 experience.

This is effectively a test for consciousness: if the entity can
perform the type of task you postulate requires aesthetic sensibility,
it must have aesthetic sensibility.


-- 
Stathis Papaioannou



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Bruno Marchal


On 08 Oct 2013, at 22:22, smi...@zonnet.nl wrote:


Citeren Craig Weinberg whatsons...@gmail.com:


http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html

*Humans 1, Robots 0*
Cashiers Trump Self-Checkout Machines at the Grocery Store

Computers seem to be replacing humans across many industries, and we're
all getting very nervous.

But if you want some reason for optimism, visit your local supermarket.
See that self-checkout machine? It doesn't hold a candle to the humans--
and its deficiencies neatly illustrate the limits of computers' abilities
to mimic human skills.

The human supermarket checker is superior to the self-checkout machine in
almost every way. The human is faster. The human has a more pleasing, less
buggy interface. The human doesn't expect me to remember or look up codes
for produce, she bags my groceries, and unlike the machine, she isn't on
hair-trigger alert for any sign that I might be trying to steal toilet
paper. Best of all, the human does all the work while I'm allowed to stand
there and stupidly stare at my phone, which is my natural state of being.

There is only one problem with human checkers: They're in short supply. At
my neighborhood big-box suburban supermarket, the lines for human checkers
are often three or four deep, while the self-checkout queue is usually
sparse. Customers who are new to self-checkout might take their short
lines to mean that the machines are more efficient than the humans, but
that would be a gross misunderstanding.

As far as I can tell, the self-checkout lines are short only because the
machines aren't very good.

They work well enough in a pinch--when you want to check out just a
handful of items, when you don't have much produce, when you aren't loaded
down with coupons. But for any standard order, they're a big pain.
Perversely, then, self-checkout machines' shortcomings are their best
feature: because they're useless for most orders, their lines are shorter,
making the machines seem faster than humans.

In most instances where I'm presented with a machine instead of a human, I
rejoice. I prefer an ATM to a flesh-and-blood banker, and I find airport
check-in machines more efficient than the unsmiling guy at the desk. But
both these tasks--along with more routine computerized skills like robotic
assembly lines--share a common feature: They're very narrow, specific,
repeatable problems, ones that require little physical labor and not much
cognitive flexibility.

Supermarket checkout--a low-wage job that doesn't require much
training--sounds like it should be similarly vulnerable to robotic
invasion. But it turns out that checking out groceries requires just
enough mental-processing skills to be a prohibitive challenge for
computers. In that way, supermarket checkout represents a class of jobs
that computers can't yet match because, for now, they're just not very
good at substituting key human abilities.

What's so cognitively demanding about supermarket checkout? I spoke to
several former checkout people, and they all pointed to the same skill:
Identifying fruits and vegetables. Some supermarket produce is tagged with
small stickers carrying product-lookup codes, but a lot of stuff isn't.
It's the human checker's job to tell the difference between green leaf
lettuce and green bell peppers, and then to remember the proper code.

"It took me about three or four weeks to get to the point where I wouldn't
have to look up most items that came by," said Sam Orme, a 30-year-old
grad student who worked as a checker when he was a teenager.

Another one-time checker, Ken Haskell, explained that even after months of
doing the job, he would often get stumped. "Every once in a while I'd get
a papaya or a mango and I'd have to reach for the book," he said.

In a recent research paper called "Dancing With Robots," the economists
Frank Levy and Richard Murnane point out that computers replace human
workers only when machines meet two key conditions. First, the information
necessary to carry out the task must be put in a form that computers can
understand, and second, the job must be routine enough that it can be
expressed in a series of rules.

Supermarket checkout machines meet the second of these conditions, but
they fail on the first. They lack proper information to do the job a human
would do. To put it another way: They can't tell shiitakes from Shinola.
Instead of identifying your produce, the machine asks you, the customer,
to type in a code for every leafy green in your cart. Many times you'll
have to look up the code in an on-screen directory. If a human checker
asked you to remind him what that bunch of oblong yellow fruit in your
basket was, you'd ask to see his boss.

This deficiency extends far beyond the checkout lane.

"In the '60s people assumed you'd be reading X-rays and CT scans by
computers within years," Mr. 
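[Editorial note: the two Levy/Murnane conditions quoted above can be sketched concretely. This is a minimal illustration only; the PLU codes, prices, and function names below are hypothetical, not drawn from the article or any real produce database.]

```python
# Condition two (a routine job expressible as rules) is trivial to encode;
# condition one (information in a form computers understand) is the part
# self-checkout machines fail, so they push it back onto the customer.

PLU_PRICES = {
    "4011": ("banana", 0.25),           # hypothetical per-item prices
    "4069": ("green cabbage", 1.10),
    "4065": ("green bell pepper", 0.75),
}

def ring_up(plu_codes):
    """Checkout as a series of rules: once items are codes, this is easy."""
    total = 0.0
    for code in plu_codes:
        _name, price = PLU_PRICES[code]
        total += price
    return round(total, 2)

def identify_produce(image):
    """The missing step: no reliable rule maps raw pixels of loose
    produce to a PLU code, which is why the machine asks you to type it."""
    raise NotImplementedError("no rule maps an image to a PLU code")

print(ring_up(["4011", "4011", "4065"]))  # 1.25
```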

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Telmo Menezes
On Tue, Oct 8, 2013 at 8:25 PM, Craig Weinberg whatsons...@gmail.com wrote:
 [WSJ article quoted in full; see above.]

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Craig Weinberg
The point is not that they are stupid, it's that they are much stupider 
about aesthetic realities than quantitative measurements, which should be, 
or at least could be, a clue that there is much more of a difference 
between mathematical theory and experienced presence than Comp can possibly 
consider. This is not generalized from a particular case; it is a pattern 
which I have seen to be common to all cases, and I think that it is 
possible to understand that pattern without it being the product of any 
phobia or bias. I would love computers to be smarter than living organisms, 
and in some way, they are, but in other ways, it appears that they will 
never be, and for very good reasons.

Craig

On Wednesday, October 9, 2013 3:37:15 AM UTC-4, Bruno Marchal wrote:


 On 08 Oct 2013, at 22:22, smi...@zonnet.nl wrote: 

  Citeren Craig Weinberg whats...@gmail.com: 

  [WSJ article quoted in full; see above.]

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Craig Weinberg
Why does the relation of aesthetic experience to computation have to be 
reduced to a simple question about convenience? If I don't want to be a 
ventriloquist's dummy, does that mean I should keep quiet about Pinocchio 
not being a real boy?

On Wednesday, October 9, 2013 4:04:41 AM UTC-4, telmo_menezes wrote:



 Craig, a simple question: would you rather put up with the limitations 
 of automatic cashiers or have to work as a cashier sometimes? 

  Craig 
  




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Telmo Menezes
On Wed, Oct 9, 2013 at 2:24 PM, Craig Weinberg whatsons...@gmail.com wrote:
 Why does the relation of aesthetic experience to computation have to be
 reduced to a simple question about convenience? If I don't want to be a
 ventriloquist's dummy does that mean I should keep quiet about Pinocchio not
 being a real boy?

Because it's a very straightforward way to use the human brain as a
test of how well a machine performs a human task. It's a fair test.
When convenience is at stake, humans lie less.

Pinocchio is an excellent example. Suppose there's some TV show that
needs a boy to play a role. They would not be happy with Pinocchio,
but one day they might be happy with a robot. Then we will know that
some progress has been made. So I'm basically challenging the
"Humans 1, Machines 0" assertion.

One day the automatic cashier will be able to recognise vegetables
better than any human. When this day comes, you will complain that the
automatic cashier doesn't really mean it when it wishes you a nice
day.

More importantly: you set a standard that can never be achieved and
then you point out that it wasn't achieved by any artificial entity we
throw at you. Then you conclude that this is meaningful evidence for
your theory, but it's circular.
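[Editorial note: the performance-based "fair test" proposed above can be made concrete as a blind benchmark. A hedged sketch; the items, labels, and toy agents below are invented for illustration and are not from the thread.]

```python
import random

# Score any agent -- human or machine -- on the same produce-identification
# task, blind to what kind of entity is answering. Only performance counts.

ITEMS = [
    ("green leaf lettuce", "lettuce"),
    ("green bell pepper", "pepper"),
    ("papaya", "papaya"),
    ("mango", "mango"),
]

def score(agent, trials=100, seed=0):
    """Fraction of trials the agent names the shown item correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        shown, truth = rng.choice(ITEMS)
        if agent(shown) == truth:
            correct += 1
    return correct / trials

perfect = dict(ITEMS)                    # a competent checker
print(score(perfect.get))                # 1.0
print(score(lambda item: "lettuce"))     # a guesser scores near chance
```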

 On Wednesday, October 9, 2013 4:04:41 AM UTC-4, telmo_menezes wrote:



 Craig, a simple question: would you rather put up with the limitations
 of automatic cashiers or have to work as a cashier sometimes?

  Craig
 



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread smitra

Citeren Bruno Marchal marc...@ulb.ac.be:



On 08 Oct 2013, at 22:22, smi...@zonnet.nl wrote:


Citeren Craig Weinberg whatsons...@gmail.com:


[WSJ article quoted in full; see above.]

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Bruno Marchal


On 09 Oct 2013, at 14:19, Craig Weinberg wrote:

The point is not that they are stupid, it's that they are much  
stupider about aesthetic realities than quantitative measurements,  
which should be, or at least could be, a clue


If that were true ...
But you don't really address the criticism made against that idea. You  
seem simply to have a prejudice against the possible relation between  
machines and aesthetic realities. Your argument takes too much account  
of the actual shape of current machines.



that there is much more of a difference between mathematical theory  
and experienced presence than Comp can possibly consider.


?
I keep trying to point out to you that there is a mathematical theory  
of experienced presence. Of course the mathematical theory itself is  
not asked to be an experienced presence, but it is a theory about such  
presence.

You confuse the menu and the food.




This is not generalized from a particular case, it is a pattern  
which I have seen to be common to all cases,


We cannot see infinitely many examples.
I guess you mean that there is a general argument, but you don't  
provide it.




and I think that it is possible to understand that pattern without  
it being the product of any phobia or bias. I would love computers  
to be smarter than living organisms, and in some way, they are, but  
in other ways, it appears that they will never be, and for very good  
reasons.


That, we still do not know. As I said, the phenomenology that you  
describe fits well within the machine's own theory of machine qualia.


Bruno


http://iridia.ulb.ac.be/~marchal/



--
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.


Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Craig Weinberg


On Wednesday, October 9, 2013 10:18:12 AM UTC-4, smi...@zonnet.nl wrote:

 Quoting Bruno Marchal mar...@ulb.ac.be: 

  
  On 08 Oct 2013, at 22:22, smi...@zonnet.nl wrote: 
  
  Quoting Craig Weinberg whats...@gmail.com: 
  
 http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html
  
  [nested re-quote of the full WSJ article trimmed; the complete text appears in the 2013-10-08 post from smitra later in this digest]

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Jason Resch
This thread reminds me of the following cartoon from:
http://www.kurzweilai.net/images/only-humans-cartoon.jpg

Jason


On Wed, Oct 9, 2013 at 7:24 AM, Craig Weinberg whatsons...@gmail.com wrote:

 Why does the relation of aesthetic experience to computation have to be
 reduced to a simple question about convenience? If I don't want to be a
 ventriloquist's dummy does that mean I should keep quiet about Pinocchio
 not being a real boy?


 On Wednesday, October 9, 2013 4:04:41 AM UTC-4, telmo_menezes wrote:



 Craig, a simple question: would you rather put up with the limitations
 of automatic cashiers or have to work as a cashier sometimes?

  Craig
 




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Craig Weinberg
It's not that computers can't do what humans do, it's that they can't 
experience anything. Mozart could dig a hole as well as compose music, but 
that doesn't mean that a backhoe with a player piano on it is Mozart. It's 
a much deeper problem with how machines are conceptualized that has nothing 
at all to do with humans.

On Wednesday, October 9, 2013 2:17:34 PM UTC-4, Jason wrote:

 This thread reminds me of the following cartoon from: 
 http://www.kurzweilai.net/images/only-humans-cartoon.jpg

 Jason


 On Wed, Oct 9, 2013 at 7:24 AM, Craig Weinberg whats...@gmail.com wrote:

 Why does the relation of aesthetic experience to computation have to be 
 reduced to a simple question about convenience? If I don't want to be a 
 ventriloquist's dummy does that mean I should keep quiet about Pinocchio 
 not being a real boy?


 On Wednesday, October 9, 2013 4:04:41 AM UTC-4, telmo_menezes wrote:



 Craig, a simple question: would you rather put up with the limitations 
 of automatic cashiers or have to work as a cashier sometimes? 

  Craig 
  






Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread LizR
On 10 October 2013 09:47, Craig Weinberg whatsons...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they can't
 experience anything. Mozart could dig a hole as well as compose music, but
 that doesn't mean that a backhoe with a player piano on it is Mozart. It's
 a much deeper problem with how machines are conceptualized that has nothing
 at all to do with humans.


So you think strong AI is wrong. OK. But why can't computers experience
anything, in principle, given that people can, and assuming people are
complicated machines?



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Craig Weinberg


On Wednesday, October 9, 2013 5:52:46 PM UTC-4, Liz R wrote:

 On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they can't 
 experience anything. Mozart could dig a hole as well as compose music, but 
 that doesn't mean that a backhoe with a player piano on it is Mozart. It's 
 a much deeper problem with how machines are conceptualized that has nothing 
 at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers experience 
 anything, in principle, given that people can, and assuming people are 
 complicated machines?


I don't think that people are machines. A machine is assembled 
intentionally from unrelated substances to perform a function which is 
alien to any of the substances. Living organisms are not assembled, they 
grow from a single cell. They have no unrelated substances and all 
functions they perform are local to the motives of the organism as a whole. 

This is an even bigger deal if I am right about the universe being 
fundamentally a subdividing capacity for experience rather than a place or 
theater of interacting objects or forces. It means that we are not our 
body, rather a body is what someone else's lifetime looks like from inside 
of your lifetime. It's a token. The mechanisms of the brain do not produce 
awareness as a product, any more than these combinations of letters produce 
the thoughts I am communicating. What we see neurons doing is comparable to 
looking at a satellite picture of a city at night. We can learn a lot about 
what a city does, but nothing about who lives in the city. A city, like a 
human body, is a machine when you look at it from a distance, but what we 
see of a body or a city would be perfectly fine with no awareness happening 
at all. 

Thanks,
Craig




Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread Jason Resch
On Wed, Oct 9, 2013 at 4:52 PM, LizR lizj...@gmail.com wrote:

 On 10 October 2013 09:47, Craig Weinberg whatsons...@gmail.com wrote:

 It's not that computers can't do what humans do, it's that they can't
 experience anything. Mozart could dig a hole as well as compose music, but
 that doesn't mean that a backhoe with a player piano on it is Mozart. It's
 a much deeper problem with how machines are conceptualized that has nothing
 at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers experience
 anything, in principle, given that people can, and assuming people are
 complicated machines?



I think Craig would say he does think computers (and many/all other things)
do experience something, just that it is necessarily different from what we
experience. The reason for this has something to do with our history as
biological organisms (according to his theory).

Jason



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-09 Thread LizR
On 10 October 2013 13:03, Craig Weinberg whatsons...@gmail.com wrote:


 On Wednesday, October 9, 2013 5:52:46 PM UTC-4, Liz R wrote:

 On 10 October 2013 09:47, Craig Weinberg whats...@gmail.com wrote:

 It's not that computers can't do what humans do,* it's that they can't
 experience anything.* Mozart could dig a hole as well as compose music,
 but that doesn't mean that a backhoe with a player piano on it is Mozart.
 It's a much deeper problem with how machines are conceptualized that has
 nothing at all to do with humans.


 So you think strong AI is wrong. OK. But why can't computers experience
 anything, in principle, given that people can, and assuming people are
 complicated machines?


 I don't think that people are machines. A machine is assembled
 intentionally from unrelated substances to perform a function which is
 alien to any of the substances. Living organisms are not assembled, they
 grow from a single cell. They have no unrelated substances and all
 functions they perform are local to the motives of the organism as a whole.


I believe that, at least in discussions such as this one, defining people
as machines has nothing to do with how or why they are constructed, and
everything to do with ruling out any supernatural components. Anyway, allow
me to rephrase the question.

I assume from the underlined comment that you think that strong AI is
wrong, and that we will never be able to build a conscious computer. How do
you come to that conclusion?


 This is an even bigger deal if I am right about the universe being
 fundamentally a subdividing capacity for experience rather than a place or
 theater of interacting objects or forces. It means that we are not our
 body, rather a body is what someone else's lifetime looks like from inside
 of your lifetime. It's a token. The mechanisms of the brain do not produce
 awareness as a product, any more than these combinations of letter produce
 the thoughts I am communicating. What we see neurons doing is comparable to
 looking at a satellite picture of a city at night. We can learn a lot about
 what a city does, but nothing about who lives in the city. A city, like a
 human body, is a machine when you look at it from a distance, but what we
 see of a body or a city would be perfectly fine with no awareness happening
 at all.


Insofar as I understand it, I agree with this. I often wonder how a load
of atoms can have experiences so to speak. This is the so-called hard
problem of AI. It is (I think) addressed by comp.



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-08 Thread smitra

Quoting Craig Weinberg whatsons...@gmail.com:


http://online.wsj.com/article/SB10001424052702303492504579115310362925246.html

*Humans 1, Robots 0*
Cashiers Trump Self-Checkout Machines at the Grocery Store

Computers seem to be replacing humans across many industries, and we're all
getting very nervous.

But if you want some reason for optimism, visit your local supermarket. See
that self-checkout machine? It doesn't hold a candle to the humans--and its
deficiencies neatly illustrate the limits of computers' abilities to mimic
human skills.

The human supermarket checker is superior to the self-checkout machine in
almost every way. The human is faster. The human has a more pleasing, less
buggy interface. The human doesn't expect me to remember or look up codes
for produce, she bags my groceries, and unlike the machine, she isn't on
hair-trigger alert for any sign that I might be trying to steal toilet
paper. Best of all, the human does all the work while I'm allowed to stand
there and stupidly stare at my phone, which is my natural state of being.

There is only one problem with human checkers: They're in short supply. At
my neighborhood big-box suburban supermarket, the lines for human checkers
are often three or four deep, while the self-checkout queue is usually
sparse. Customers who are new to self-checkout might take their short lines
to mean that the machines are more efficient than the humans, but that
would be a gross misunderstanding.

As far as I can tell, the self-checkout lines are short only because the
machines aren't very good.

They work well enough in a pinch--when you want to check out just a handful
of items, when you don't have much produce, when you aren't loaded down
with coupons. But for any standard order, they're a big pain. Perversely,
then, self-checkout machines' shortcomings are their best feature: because
they're useless for most orders, their lines are shorter, making the
machines seem faster than humans.

In most instances where I'm presented with a machine instead of a human, I
rejoice. I prefer an ATM to a flesh-and-blood banker, and I find airport
check-in machines more efficient than the unsmiling guy at the desk. But
both these tasks--along with more routine computerized skills like robotic
assembly lines--share a common feature: They're very narrow, specific,
repeatable problems, ones that require little physical labor and not much
cognitive flexibility.

Supermarket checkout--a low-wage job that doesn't require much
training--sounds like it should be similarly vulnerable to robotic invasion.
But it turns out that checking out groceries requires just enough
mental-processing skills to be a prohibitive challenge for computers. In
that way, supermarket checkout represents a class of jobs that computers
can't yet match because, for now, they're just not very good at substituting
for key human abilities.

What's so cognitively demanding about supermarket checkout? I spoke to
several former checkout people, and they all pointed to the same skill:
Identifying fruits and vegetables. Some supermarket produce is tagged with
small stickers carrying product-lookup codes, but a lot of stuff isn't.
It's the human checker's job to tell the difference between green leaf
lettuce and green bell peppers, and then to remember the proper code.

"It took me about three or four weeks to get to the point where I wouldn't
have to look up most items that came by," said Sam Orme, a 30-year-old grad
student who worked as a checker when he was a teenager.

Another one-time checker, Ken Haskell, explained that even after months of
doing the job, he would often get stumped. "Every once in a while I'd get a
papaya or a mango and I'd have to reach for the book," he said.

In a recent research paper called "Dancing With Robots," the economists
Frank Levy and Richard Murnane point out that computers replace human
workers only when machines meet two key conditions. First, the information
necessary to carry out the task must be put in a form that computers can
understand, and second, the job must be routine enough that it can be
expressed in a series of rules.

Supermarket checkout machines meet the second of these conditions, but they
fail on the first. They lack proper information to do the job a human would
do. To put it another way: They can't tell shiitakes from Shinola. Instead
of identifying your produce, the machine asks you, the customer, to type in
a code for every leafy green in your cart. Many times you'll have to look
up the code in an on-screen directory. If a human checker asked you to
remind him what that bunch of oblong yellow fruit in your basket was,
you'd ask to see his boss.
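
The article's division of labor can be sketched in a few lines (my illustration, not the authors'; the PLU codes and prices below are invented for the example): the rule-expressible half of checkout is trivial to code, while the identification half is where machines fail.

```python
# Toy sketch of Levy & Murnane's two conditions applied to checkout.
# The PLU codes and prices are illustrative assumptions, not real data.

PLU_PRICES = {
    "4011": ("banana", 0.25),            # price per item
    "4069": ("green cabbage", 1.10),
    "4688": ("green bell pepper", 0.80),
}

def ring_up(plu_codes):
    """Total a list of already-identified PLU codes -- the easy,
    rule-based part (condition 2: routine enough to express as rules)."""
    total = 0.0
    for code in plu_codes:
        name, price = PLU_PRICES[code]
        total += price
    return round(total, 2)

def identify_produce(item):
    """Map a physical item to its PLU code -- the hard part.
    Condition 1 (machine-readable input) is unmet, so the self-checkout
    machine delegates this step back to the customer."""
    raise NotImplementedError("identification is the unsolved half")

print(ring_up(["4011", "4011", "4688"]))  # two bananas and a pepper -> 1.3
```

The point of the sketch is that everything after identification is a dictionary lookup and a sum; the entire difficulty the former checkers describe lives in the unimplemented function.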

This deficiency extends far beyond the checkout lane.

"In the '60s people assumed you'd be reading X-rays and CT scans by
computers within years," Mr. Levy said. "But it's nowhere near anything
like that. You have certain computerized enhancements for simple images,
but nothing like a real CT scan can be read by a 

Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-08 Thread meekerdb

On 10/8/2013 1:22 PM, smi...@zonnet.nl wrote:


A lot of what I am always talking about is in there...computers don't
understand produce because they have no aesthetic sensibility. A mechanical
description of a function is not the same thing as participating in an
experience.


So when the check-out robot can recognize okra - which the cashiers always have to look up 
- you'll agree that robots have aesthetic sensibility.




Craig




You can't expect a machine with the computational capabilities of less than an insect 
brain to do the job most people do. 


And they don't even give the machine two weeks to learn.

It's actually amazing that such machines can do quite a lot, but some tasks we perform 
are the result of a significant part of our brain power.


Most of the problem is in recognizing 3D objects.  It may prove easier to create sniffers 
and chemical detectors.  I'll bet my dog could tell papaya from mango blindfolded.


Brent
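
Brent's sensor suggestion above could be sketched as a nearest-neighbour lookup over non-visual channels. This is purely illustrative: the weight and "sniff" readings are made-up numbers, not real measurements of any fruit.

```python
# Hedged sketch: if recognizing 3D objects visually is hard, simpler
# sensor channels (weight, a chemical "sniffer" reading) might separate
# produce. All feature values below are invented for illustration.
import math

# (weight_g, sniff_units) -> label; toy reference examples, not real data
KNOWN = [
    ((450.0, 8.0), "papaya"),
    ((200.0, 6.5), "mango"),
    ((120.0, 0.5), "bell pepper"),
]

def classify(weight_g, sniff_units):
    """1-nearest-neighbour over the two toy sensor features."""
    def dist(example):
        (w, s), _label = example
        # Scale the smell axis up so both features carry comparable weight
        return math.hypot(w - weight_g, (s - sniff_units) * 50)
    return min(KNOWN, key=dist)[1]

print(classify(210.0, 6.8))  # a mango-like reading -> mango
```

Whether such a classifier would ever beat the dog is another question, but it shows why a sniffer might be an easier engineering target than full 3D vision: the feature space is tiny and the decision rule is routine.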



Re: WSJ Article On Why Computers Make Lame Supermarket Cashiers

2013-10-08 Thread Craig Weinberg


On Tuesday, October 8, 2013 4:33:32 PM UTC-4, Brent wrote:

 On 10/8/2013 1:22 PM, smi...@zonnet.nl wrote: 
  
  A lot of what I am always talking about is in there...computers don't 
  understand produce because they have no aesthetic sensibility. A 
 mechanical 
  description of a function is not the same thing as participating in an 
  experience. 

 So when the check-out robot can recognize okra - which the cashiers always 
 have to look up 
 - you'll agree that robots have aesthetic sensibility. 


Aesthetic sensibility is not something that we can agree that something has 
except for ourselves. I mention aesthetic sensibility because the things 
that computers fail at in the article are related to sensation, and the fact 
that sensation is different from states of computation. Similarly, a traffic 
signal is not the same thing as a traffic cop, even if they perform the 
same function relative to the flow of traffic. We can get a robot to identify 
something which matches a description of 'okra' in the most primitive sense 
of matching, but that doesn't mean that it has any sense of what okra is. A 
weighted picture of okra, or some plastic okra, would probably do just as 
well.



 
  Craig 
  
  
  
  You can't expect a machine with the computational capabilities of less 
 than an insect 
  brain to do the job most people do. 

 And they don't even give the machine two weeks to learn. 

  It's actually amazing that such machines can do quite a lot, but some 
 tasks we perform 
  are the result of a significant part of our brain power. 

 Most of the problem is in recognizing 3D objects.  It may prove easier to 
 create sniffers 
 and chemical detectors.  I'll bet my dog could tell papaya from mango 
 blindfolded. 

 Brent 

