On Wednesday, April 24, 2013 11:58:08 AM UTC-4, Brian Tenneson wrote:
>
> I probably shouldn't be talking to someone who thinks distinguishing a 
> sack of potatoes from a woman means understanding women.  
>
> News flash: 'understand' tacitly implies 'understand completely'.
>

If you define complete understanding as impossible a priori, and you insist 
that understanding must be complete, then you have just removed the word 
from the English language.

 

>
> On Wed, Apr 24, 2013 at 8:37 AM, Craig Weinberg <whats...@gmail.com> wrote:
>
>>
>>
>> On Wednesday, April 24, 2013 10:09:44 AM UTC-4, Brian Tenneson wrote:
>>
>>>
>>>
>>> On Wed, Apr 24, 2013 at 4:46 AM, Craig Weinberg <whats...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Wednesday, April 24, 2013 4:31:55 AM UTC-4, Brian Tenneson wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Apr 23, 2013 at 8:53 PM, Craig Weinberg <whats...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tuesday, April 23, 2013 11:37:14 PM UTC-4, Brian Tenneson wrote:
>>>>>>>
>>>>>>> You keep claiming that we understand this and that or know this and 
>>>>>>> that.  And, yes, saying something along the lines of "we know we 
>>>>>>> understand 
>>>>>>> because we care about what we understand" *is* circular.  
>>>>>>
>>>>>>
>>>>>> No, it's not. I'm saying that it is impossible to doubt we 
>>>>>> understand. It's just playing with words. My point about caring is that 
>>>>>> it 
>>>>>> makes it clear that we intuitively make a distinction between merely 
>>>>>> being 
>>>>>> aware of something and understanding it.
>>>>>>
>>>>> I'll try to explain how  "we know we understand because we care about 
>>>>> what we understand" is circular.  Note the use of the word understand 
>>>>> towards the left edge of the statement in quotes followed by another 
>>>>> instance of the word understand. 
>>>>>
>>>>
>>>> You should read it as "we know we understand because we care about X". 
>>>> My only intention in repeating the word was to make it clear that the 
>>>> thing 
>>>> that we care about is the thing that we understand. It is the caring which 
>>>> is a symptom of understanding. The absence of that symptom of caring in a 
>>>> machine indicates to me that there is a lack of understanding. Things 
>>>> which 
>>>> understand can care, but things that cannot care cannot understand.
>>>>
>>> Now that isn't circular but that's a poor sign of understanding.  I 
>>> care very much for women but I can't say that I understand them.
>>>
>>
>> That's a cliché. You may not be able to understand women completely, but 
>> you are not likely to confuse them with a sack of potatoes in a dress. With 
>> a computer, the dress might be all that a security-camera search engine 
>> looks for, and it may very well categorize a sack of potatoes as a woman 
>> if it happens to be wearing a dress.
>>  
>>
>>>   I understand the rules of English grammar and punctuation but care 
>>> little for them.  
>>>
>>
>> Yes, you don't have to care about it, but you can care about it if you 
>> want to. A machine does not have that option. It can't try harder to follow 
>> proper grammar; it can only assign a priority to the task. It has no 
>> feeling for which tasks are assigned which priority, which is the entire 
>> utility of machines.
>>  
>>
>>> I'm sure you can think of examples.  So the two are not correlated, 
>>> caring and understanding. 
>>>
>>
>> Can you explain why the word understanding is a synonym for kindness and 
>> caring? A coincidence? 
>>  
>>
>>>  Caring is not something that can really be measured in humans while 
>>> caring can be measured in machines/computers.
>>>
>>
>> Give me a break.
>>  
>>
>>>   For example, one might define caring about something as thinking a 
>>> lot about it
>>>
>>
>> You might define warm feelings by the onset of influenza but that is a 
>> false equivalence.
>>  
>>
>>> , where "a lot" means some threshold, like over 50% of resources being 
>>> dedicated to thinking about something for a while (a nonzero, finite span 
>>> of time).  These days, we can multitask and look up the resource monitor 
>>> to see what the CPU cares about, if anything.
>>>
>>
>> That has nothing whatsoever to do with caring. Does the amount of money in 
>> your wallet tell you how much your wallet values money?
>>  
>>
>>>  If it doesn't care about anything, it uses close to 0% and is called 
>>> idle. 
>>>
>>
>> Next you are going to tell me that when a stuffed animal doesn't eat 
>> anything it must be because it is full - but we have no way of knowing if 
>> we are hungry ourselves.
>>  
>>
>>> But if I am running an intensive computation while typing this and look 
>>> at my resource monitor, I can see measurements indicating that my CPU cares 
>>> much more about the intensive computation rather than what I am typing.  
>>> Does that mean the CPU understands what it is doing?  No.  Likewise with 
>>> human brains: we can care a lot about something but have little to no 
>>> understanding of it.
>>>
>>
>> Your entire argument is a defense of the pathetic fallacy. Nothing you 
>> have said could not apply to any inanimate object, cartoon, abstract 
>> concept, etc. Anyone can say 'you can't prove ice cream isn't melting 
>> because it's sad'. It's ridiculous. Find the universe. It is more 
>> interesting than making up stories about CPUs' cares, kindnesses, and 
>> understanding. 
>>
>>  
>>>
>>>>
>>>>>  This is analogous to saying We are Unicorns because we care about 
>>>>> Unicorns. 
>>>>>
>>>>
>>>> No, this is analogous  to you not understanding what I mean and 
>>>> unintentionally making a straw man of my argument. 
>>>>
>>>
>>> Well, be honest here, you changed a phrasing.  You went from 
>>> (paraphrasing)  "we know we understand because we care that we understand" 
>>> to "You know we understand because we care about X". Correct me if I'm 
>>> wrong.  
>>>
>>
>> Correcting you. You're wrong. What I said was "Because we care about what 
>> we understand, and we identify with it personally."
>>
>> You misinterpreted it, then accused me of meaning what you said, even 
>> after I pointed out your mistake. Now you are unfazed by your unintentional 
>> straw man and are doubling down on the false accusation. You aren't 
>> listening to me and are arguing with yourself.
>>
>>> The first phrasing is meaningless because of the second use of the word 
>>> understand (so you might as well be talking about unicorns).  
>>>
>>
>> Which is why I never said that.
>>  
>>
>>> The first phrasing gives no insight into what understanding is and why 
>>> we have it but computers can't.  The problem with your new and improved 
>>> phrasing is that it's a doctored definition of caring; you pick a 
>>> definition related to understanding such that it (the definition of 
>>> 'caring') will *automatically* fail for anything other than a 
>>> non-apathetic human. In essence, you assume computers don't care about 
>>> anything when, in fact, doing what they are programmed to do (much like a 
>>> human, I might add) is the machine equivalent of caring about what 
>>> they are told to do.
>>>
>>
>> Pathetic fallacy + false accusation. Next.
>>  
>>
>>>   
>>>
>>>>
>>>>  Doesn't prove unicorns exist; doesn't prove understanding exists 
>>>>> (i.e., that any human understands anything). If this is all sophistry 
>>>>> then 
>>>>> it should be easily dismissible. And yes, playing with words is what 
>>>>> people 
>>>>> normally do, wittingly or unwittingly, and that lends more evidence to 
>>>>> the 
>>>>> notion that we are processors in a Chinese room.  
>>>>>
>>>>
>>>> The position that we only think we understand or that consciousness is 
>>>> an illusion is, in my view, the desperate act of a stubborn mind. Truly, 
>>>> you are sawing off the branch that you are sitting on to suggest that we 
>>>> are incapable of understanding the very conversation that we are having. 
>>>>
>>>
>>> Well, calling a conclusion the desperate act of a stubborn mind, rather 
>>> than supplying some decent rejoinder, is also the desperate act of a stubborn 
>>> mind, wouldn't you say?
>>>
>>
>> Not at all. If you claim not to understand the very conversation in which 
>> you are participating, how does that make me desperate for pointing out 
>> that it is obviously a specious argument? 
>>  
>>
>>>   While "sawing off the branch you are sitting on" is a very clever 
>>> arrangement of letters (can I use it in a future poem?)
>>>
>>
>> I stole it from Raymond Tallis, so you'll have to ask him.
>>  
>>
>>> , it falls short of being an argument at all or even persuasive. 
>>>
>>
>> Then you're on your own. If you can't understand why you can't claim not 
>> to be able to understand these words, and you are not developmentally 
>> disabled, then I can't help you.
>>  
>>
>>> We can get along just fine by thinking that we understand this 
>>> conversation.  
>>>
>>
>> What is 'thinking that you understand' supposed to mean? We don't have 
>> to understand the entire universe to be able to understand what we are 
>> trying to talk about. Do you think that your operating system or keyboard 
>> has an equal understanding of it?
>>  
>>
>>> But knowing that we understand this conversation?  I'd like to see that 
>>> proved.  
>>>
>>
>> Understanding cannot be proved; it can only be experienced. Why does 
>> everyone want to prove subjective qualities? If that were possible then it 
>> would have been done 5000 years ago. Proof is a kind of understanding.
>>  
>>
>>> Until then, I will continue to think that humans or at least the seat of 
>>> mind are possibly all in Chinese rooms.
>>>
>>
>> You will continue to do what? To "think"? Can you prove that? Does 
>> thinking happen in Chinese rooms? This may be the most preposterous 
>> exchange that I have ever had with someone.
>>  
>>
>>>  
>>>
>>>>
>>>>  
>>>>>  
>>>>>>
>>>>>>> Still doesn't rule out the possibility that we are in a Chinese room 
>>>>>>> right now, manipulating symbols without really understanding what's 
>>>>>>> going 
>>>>>>> on but able to adeptly shuffle the symbols around fast enough to appear 
>>>>>>> functional. 
>>>>>>
>>>>>>
>>>>>> Why not? If we were manipulating symbols, why would we care about 
>>>>>> them? What you're saying doesn't even make sense. We are having a 
>>>>>> conversation. We care about the conversation because we understand it. 
>>>>>> If I were instead taking dictation in another language, I would not 
>>>>>> care about the conversation. Are you claiming that there is no 
>>>>>> difference between having a conversation in English and taking 
>>>>>> dictation in a language you don't understand?
>>>>>>
>>>>>  
>>>>
>>>>>  We care about the symbols because working through the symbols in our 
>>>>> brains is what leads to food, shelter, sex, and all the things animals 
>>>>> want. 
>>>>>
>>>>
>>>> First of all, there are no symbols in our brains, unless you think that 
>>>> serotonin or ATP is a symbol. Secondly, the fact that species have needs 
>>>> does not imply any sort of caring at all. A car needs fuel and oil but it 
>>>> doesn't care about them. When the fuel light comes up on your dashboard, 
>>>> that is for you to care about your car, not a sign that the car is 
>>>> anxious. 
>>>> Instead of a light on the dashboard, a more intelligently designed car 
>>>> could proceed to the filling station and dock at a smart pump, or it could 
>>>> use geological measurements and drill out its own petroleum to 
>>>> refine...all 
>>>> without the slightest bit of caring or understanding. 
>>>>
>>> The electric and chemical footprints of representations of symbols are in 
>>> our brains, and caring about the symbols is what leads to food, shelter, sex, 
>>> and all the things animals (not cars) want.
>>>
>>
>> If there is nothing to interpret the footprints as representations, then 
>> they can't 'represent' anything. Chemical functions would simply be the 
>> parts in the machine which produce the behaviors that have been selected 
>> for. No experience is required, no caring, no sensation, nothing remotely 
>> close to that.
>>  
>>
>>>
>>>  
>>>>
>>>>>  Or we care about the symbols because they further enrich our lives. 
>>>>>
>>>>
>>>> That's circular. Why do we care about enriching our lives? Because we 
>>>> care about our lives and richness. We don't have to, though, in theory, 
>>>> and a machine never can.
>>>>
>>> Some people care about enriching their lives, presumably because it 
>>> ultimately makes them more satisfied in life. 
>>>
>>
>> If you can't care about anything, then how could you find anything 
>> satisfying?
>>  
>>
>>> How do you know what a machine never can do? They used to say a machine 
>>> would never fly.  Convictions are prisons. 
>>>
>>
>> Because I understand what a machine is. I understand that art can be 
>> scientific but that science can never appreciate art.
>>  
>>
>>>   
>>>>
>>>>>  The symbols in this corner of the internet (barring my contributions 
>>>>> of course) are examples of that.  Regarding the world, would you say there 
>>>>> is more that we (i.e., at least one human) understand or more that we 
>>>>> don't?  I would vote 'don't', and that leads me also to suspect we are in a 
>>>>> Chinese room right now.  
>>>>>
>>>>
>>>> I don't know where we are in the extent of our understanding, but there 
>>>> is some understanding, while the man in the Chinese room has no 
>>>> understanding.
>>>>
>>> You think there is some understanding because you are really adept at 
>>> symbol processing.  Your man in the Chinese room is so convincing that the 
>>> symbols transmitted affirm that you do, in fact, understand.  
>>>
>>
>> Then why don't I think I can understand Chinese?
>>  
>>
>>>  
>>> I think this is one of your theorems: at least one human mind has the 
>>> capacity to understand something and does understand something.
>>>
>>> Or do you assume that when your inner voice tells you that you 
>>> understand something that your inner voice is correct?
>>>
>>
>> I understand the same way that you understand. You tell me. Are these 
>> words indecipherable to you? Are you fumbling around in the dark, 
>> hallucinating that there is such a thing as understanding? Why should 
>> anyone talk to you?
>>  
>>
>>>
>>>  
>>>
>>>>  
>>>>
>>>>> Your coupling of caring and understanding is somewhat arbitrary.  
>>>>>
>>>>
>>>> No, it is supported by the English language: 
>>>> http://dictionary.reverso.net/english-synonyms/understanding
>>>>
>>>
>>> I meant that, in the context of this discussion, the mention of compassion 
>>> and caring at best might have something to do with understanding.  I bet that 
>>> by the time you reply to this email I could print a list having a million 
>>> elements, each of which contains a truth that I think I understand but care 
>>> nothing about. 
>>>
>>
>> You didn't read what I said. Just because there are things that you 
>> understand and choose not to care about, or things that you care about but 
>> don't completely understand, does not mean that understanding as a phenomenon 
>> can be separated from caring as a phenomenon. I can cook hamburgers or I can 
>> cook something else, but if I cook eggs instead, that doesn't mean that 
>> hamburgers are not food.
>>
>> If you don't see that the synonyms of understanding are relevant to the 
>> context of this discussion then you are having a different discussion.
>>
>>
>>>> There is no chance of being in a Chinese room at all, because we 
>>>> understand some things. 
>>>
>>> That's one possibility.  Another is that we're dumb but fast. 
>>
>> In theory that would be a possibility, but in reality, it is not.
>>
>>
>>>
>>>> Because the Chinese room prohibits us from ever entertaining the 
>>>> possibility that we are in the Chinese Room. Just because we are not 
>>>> omniscient and omnipotent does not mean that we are senseless and powerless.
>>>>
>>>
>>> Why is that?  Being aware of one's present location and even all the 
>>> properties of one's present location does NOT mean understanding one's 
>>> present location and understanding all the properties of one's present 
>>> location. 
>>
>> You don't seem to be able to distinguish between the terms "all" and "any". 
>> Not knowing the activities of the ants under the foundation of your house 
>> does not mean that you don't know where you are. 
>>
>>> That's a good example because we humans are in that situation: we can 
>>> describe how our environment works but we do not understand all the 
>>> properties of our location; we don't even understand why we are here.
>>>
>>
>> Why are you presuming understanding to be an all-or-nothing property? No 
>> matter how much or how little we understand, it is more than any machine 
>> has ever understood.
>>
>> Craig
>>
>
>
