On Thursday, February 28, 2013 3:52:59 PM UTC-5, John Clark wrote:
> On Thu, Feb 28, 2013 at 11:43 AM, Craig Weinberg <whats...@gmail.com> wrote:
>>>> Even if it could [tell the difference between an audio and a video 
>>>> file] that would only represent a more advanced file analysis function, 
>>>> not any kind of audio or video sensitivity.
>>> Please explain the difference between the two.
>> In the former, the computer can read the file and pick up some bytes 
>> which tell it which applications would be likely to open it. In the 
>> latter, the computer could actually see or hear the file.
> You have no way of knowing if I can actually see or hear; all you know is 
> that I behave as if I do. It's exactly the same situation with a smart 
> computer. 

You have no way of knowing what I can't know about you either. You are 
using a double standard whereby you claim to be omniscient about what I can 
or can't know.

Of course this is all sophistry. All that matters is that we understand 
that there is no presentation quality to a file. Presentation is 100% in 
the interpreter. Since you can open any raw file as video, audio, text, 
3-D printing, etc., that would mean that all data would inherently have to 
contain all possible sensory modalities. You are really 
saying that we could use a program that acts like a video screen instead of 
an actual video screen. That would be nice, but it can never happen, 
regardless of how sophisticated software becomes. There will never be an 
app on your iPhone to make it waterproof.

>> It's like a computer could be programmed to choose a healthy entree on 
>> a menu, but it can't actually eat the meal and tell you whether it was any 
>> good.
> Are you now saying that a digestive system is linked to consciousness?

I am saying that the menu is not the meal. Computers do menus, but not 
meals. Consciousness is the capacity to discern between menu and meal 
(among other things).

>>> You're saying that if there were no audio or video properties in the file
>> Meaning that there are no pictures or sounds within the file, yes. The 
>> file is only a pattern of countable switch positions, like a piano roll. 
> An electronic cochlear implant that enables deaf people to hear produces no 
> sound; all it makes is lots of zeros and ones. The same thing is true of 
> the experimental artificial eye.

Sure, because there is ultimately a living person there to hear and see. 
Without the person, the implants won't do anything worthwhile.

>> A player piano has no awareness of music,
> It must be grand being a "hard problem" theorist because it's the easiest 
> job in the world bar none; no matter how smart something is you just say 
> "yeah, but it's not conscious" and there is no way anybody can prove you 
> wrong.

Your argument then is that a player piano has an awareness of music. Maybe 
we should give scarecrows the right to vote also.

>> The computer can't tell if it's audio or video no matter what. It can 
>> only tell what application might be associated with opening that file.
> As there are zero empirical differences between those two things HOW THE 

Because it won't be able to open any file without software to identify 
which application to associate it with. If the computer could tell the 
difference, then we wouldn't need programmed instructions. Do you 
seriously believe that changing an .mp3 to a .txt file makes the computer 
dizzy? How can you have spent any time programming a computer without 
noticing that everything must be explicitly defined and scripted or it will 
just halt/fail/error? How could it be any clearer that a player piano has 
no experience of music? It's a piano being played by a paper roll. 
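The identification point above can be put concretely: what a computer 
actually does when "recognizing" a file is match its leading bytes against 
a table of known signatures. A minimal sketch (the signature table and the 
sample bytes below are illustrative, not exhaustive) — note that renaming 
song.mp3 to song.txt changes nothing this code inspects:

```python
# Identify a file by its "magic number" (leading bytes), not its extension.
# The table below is a small illustrative sample of real signatures.
MAGIC_NUMBERS = {
    b"ID3": "MP3 audio (ID3 tag)",
    b"\x89PNG": "PNG image",
    b"%PDF": "PDF document",
}

def identify(data: bytes) -> str:
    """Match leading bytes against known signatures; no 'hearing' involved."""
    for magic, description in MAGIC_NUMBERS.items():
        if data.startswith(magic):
            return description
    return "unknown"

print(identify(b"ID3\x03\x00"))  # MP3 audio (ID3 tag)
print(identify(b"%PDF-1.7"))     # PDF document
print(identify(b"hello"))        # unknown
```

The program returns a label, nothing more; whether the bytes would sound 
like anything is a question it cannot even pose.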

>> it's not an audio or video file. Not literally or physically. A file is 
>> just a source of generic binary instructions. 
> And that's all a cochlear implant produces, and yet the deaf report those 
> generic binary instructions give them the qualia of sound. 

Sure, computer + user = high quality user experience. Computer + computer = 
no high quality experience. Plug a cochlear implant into a computer and the 
raw data remains raw all the way through. There is no conversion to any 
sense modality - no way to simulate synesthesia or blindsight.

> If you believe that the deaf reported truthfully (do you?) why wouldn't 
> you believe a computer if it said the same thing? 

Because I understand why computers have no experience. A computer is only 
going to say what it is programmed to say. If it has no vocabulary which 
refers to human experiences of sound, it will have nothing to say about 
some new stream of generic data that relates to aural sensation. It's not 
going to try to express anything about the experience of sound.

> But maybe the deaf person is lying too. Of course we could tell a story to 
> a deaf person with a cochlear implant and they could correctly answer 
> questions about it, but that's just behavior and we're talking about qualia, 
> and deafness does not make you incapable of lying. Or maybe they think they 
> experience the qualia of sound but it's nothing like the grand and glorious 
> thing you experience. 

That's not what I think; it's a fact that thus far implants do not compare 
favorably to the natural cochlea.

"The quality of sound is different from natural hearing, with less sound 
information being received and processed by the brain. However, many 
patients are able to hear and understand speech and environmental sounds." 

It's not important though - even if the implant sounded perfect, it still 
requires a person listening through it. No person, no human experience of 
sound.

> Or maybe Mozart would say you think you have experienced the qualia of 
> sound but it's nothing at all like the wonderful thing he has. All this is 
> pointless time-wasting speculation because none of it can ever be proved or 
> disproved.

All that you prove is that you are so unbelievably biased that you would 
rather believe that a roll of toilet paper with holes in it is as smart as 
anyone than face the possibility that computers may be every bit as 
subjectively inert and impersonal as they have proved to be.

>>> It's like saying you can't tell if a book is written in English if 
>>> there are no English words in it!
>> No, it's like saying that you can tell if a book is written in Japanese 
>> even if you don't speak Japanese. 
> Maybe you can but I can't; I couldn't tell if it was Chinese or Korean or 
> just a bunch of squiggles made up by a graphics designer yesterday.

I guess you're not very observant. Does that mean if you saw Arabic writing 
and Chinese writing side by side, for $10,000 you would not be able to 
guess which was which? I apologize if your pattern recognition abilities 
are truly that impaired though. 

Here: This will give you everything that you need to understand how to 
speak words in two different languages. From this data, you can, like a 
computer, match one code into the other, but unlike a computer, you have an 
expectation of understanding which goes beyond that. The computer doesn't 
need to know which of these is Chinese and which is Arabic, and indeed it 
could never guess what that even would mean unless it was explicitly 
labelled that way. As far as the computer knows, these squiggles and their 
corresponding squiggles, are made up.
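The labelling point can be sketched in code: a program can tag text as 
"Arabic" or "Chinese" purely by checking which Unicode code-point range its 
characters fall in, with no understanding attached to either label (the 
ranges come from the Unicode standard; the function name is my own):

```python
# Classify a string by code-point range alone. The program matches
# numbers to labels; it has no notion of what "Arabic" or "Chinese" means.
def classify_script(text: str) -> str:
    for ch in text:
        cp = ord(ch)
        if 0x0600 <= cp <= 0x06FF:   # Arabic block
            return "Arabic"
        if 0x4E00 <= cp <= 0x9FFF:   # CJK Unified Ideographs block
            return "Chinese (CJK)"
    return "unknown"

print(classify_script("سلام"))  # Arabic
print(classify_script("你好"))   # Chinese (CJK)
```

The labels are just strings the programmer chose; swap them and the 
program runs exactly as happily.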



>> translating language from one generic code into another is a mechanical 
>> process which can be easily programmed.
> No, translating languages is extremely difficult, and until about 5 years 
> ago computer translations were so bad that the only reason to do one was the 
> belly laugh you'd get out of it. Back in the computer Precambrian of 2007 
> or 2008 the consensus was that computers couldn't make good translations 
> unless they had some understanding of what was being said. I think they 
> were right, and computers make dramatically better translations now than 
> they did in 2007. 

As you see from the two images, a raw linkage between one set of characters 
to another is quite simple. Translation is difficult because computers have 
no understanding of meaning. A toddler can learn to be multilingual because 
the kinds of things that we usually mean are common to our individual 
experience. A computer has no experience so it has to simulate 
communication from the bottom levels up. It has to look at the overall 
characteristics of the sentence and whittle them down to a statistical 
probability of a match. We don't do that though. We pull meaning from 
garbled sentences, gestures, noises, glances, or just plain intuition.
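The statistical whittling described above can be caricatured as a 
phrase-table lookup of the kind early statistical translators used — pick 
the highest-scoring candidate, with no grasp of what either phrase means 
(the table and probabilities here are made up purely for illustration):

```python
# Toy phrase table: each source phrase maps to (candidate, probability)
# pairs. The probabilities are invented for this sketch.
PHRASE_TABLE = {
    "good morning": [("buenos días", 0.92), ("buen día", 0.61)],
    "thank you":    [("gracias", 0.97), ("te agradezco", 0.40)],
}

def translate(phrase: str) -> str:
    """Return the highest-probability candidate, or echo the input."""
    candidates = PHRASE_TABLE.get(phrase.lower(), [])
    if not candidates:
        return phrase  # no data: fall back to the original text
    best, _score = max(candidates, key=lambda pair: pair[1])
    return best

print(translate("Good morning"))  # buenos días
```

The lookup succeeds or fails on statistics alone; a garbled sentence, a 
gesture, or a glance gives it nothing to match.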

>> It's funny, sometimes ideas which can't be proved wrong are that way 
>> because they are actually right.
> Don't be so modest, your ideas about consciousness are twice as good as 
> that: not only can they never be proven wrong, they can never be proven 
> right either.

Proof is part of consciousness. Try proving something to a cadaver.

>> People with a hard left-brained approach are not going to be able to 
>> look at consciousness independently of forms and functions.
> I understand as well as you do that there is such a thing as 
> consciousness, but I also understand that because it has no observable 
> consequences

All observations are its consequences. Are you expecting the movie camera 
to be found in the movie?

>   obsessing over it is a complete waste of time if your goal is to obtain 
> some understanding of how the world works. 

Consciousness is the world.

> So when you make rubber stamp comments like "a computer can never know X" 
> or "a computer can never feel Y", comments that you simply decree without 
> evidence, 

If I decree, it's only because I understand and reason. Evidence is not an 
appropriate criterion for evaluating consciousness - again, you are watching 
the movie and demanding to see evidence of a movie camera. I can only point 
to how the scene changes in a way which reflects a particular perspective, 
I can show what kind of artifacts are produced by lens flares and double 
exposure...I can even show you a movie of a movie camera in a mirror, but 
you can always say that is just trick photography. With consciousness, you 
can't approach it with one hand tied behind your back as you would an 
object - cherry picking only those experiences which you deem to be 
comparable to bodies in space. You have to begin by recognizing that you 
are immersed in consciousness; that everything you have ever known about 
anything is 100% dependent on your consciousness to render in sensible 
ways. You can't take any of that for granted and expect consciousness to 
jump out of a computer program. 

> comments you have no way of knowing,

said the omniscient telepathic skeptic.

> comments neither you nor anybody else can ever prove or disprove even if 
> the machine behaves as if it knows and feels those things then I respond 
> with rubber stamp comments of my own.   

When computers don't respond the way that you like, do you keep trying the 
same thing over and over again expecting different results? Do you have the 
same expectation that the computer will be annoyed and frustrated by your 
behavior, or do you deep down know that you will never make a computer 
blink that way? You can't bully a computer with rubber stamp 
responses, because it has no way of being frustrated, and will happily 
respond back to you in the same way forever.


>   John K Clark

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
