On Thursday, April 4, 2013 2:31:09 AM UTC-4, Jason wrote:
>
>
>
>
> On Thu, Apr 4, 2013 at 1:06 AM, Craig Weinberg <whats...@gmail.com> wrote:
>
>>
>>
>> On Thursday, April 4, 2013 12:08:25 AM UTC-4, Jason wrote:
>>
>>>
>>>
>>>
>>> On Wed, Apr 3, 2013 at 9:54 PM, Craig Weinberg <whats...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Wednesday, April 3, 2013 8:58:37 PM UTC-4, Jason wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Apr 3, 2013 at 6:04 PM, Craig Weinberg <whats...@gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wednesday, April 3, 2013 5:44:24 PM UTC-4, Jason wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sat, Mar 30, 2013 at 7:58 AM, Telmo Menezes <
>>>>>>> te...@telmomenezes.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Mar 28, 2013 at 1:23 PM, Craig Weinberg <whats...@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Then shouldn't a powerful computer be able to quickly deduce the 
>>>>>>>>> winning Arimaa mappings?
>>>>>>>>>
>>>>>>>>
>>>>>>>> You're making the same mistake as John Clark, confusing the 
>>>>>>>> physical computer with the algorithm. Powerful computers don't help 
>>>>>>>> us if we don't have the right algorithm. The central mystery of AI, 
>>>>>>>> in my opinion, is why on earth we haven't found a general learning 
>>>>>>>> algorithm yet. Either it's too complex for our monkey brains, or 
>>>>>>>> you're right that computation is not the whole story. I believe in 
>>>>>>>> the former, but I'm not sure, of course. Notice that I'm talking 
>>>>>>>> about generic intelligence, not consciousness, which I strongly 
>>>>>>>> believe to be two distinct phenomena.
>>>>>>>>   
>>>>>>>>
>>>>>>>
>>>>>>> Another point toward Telmo's suspicion that learning is complex:
>>>>>>>
>>>>>>> If learning and thinking intelligently at a human level were 
>>>>>>> computationally easy, biology wouldn't have evolved to use trillions 
>>>>>>> of synapses. The brain is metabolically very expensive, consuming 
>>>>>>> 20-25% of the body's total energy budget of roughly 100 watts. If so 
>>>>>>> many neurons were not needed to do what we do, natural selection 
>>>>>>> would have favored humans with fewer neurons and reduced food 
>>>>>>> requirements.
>>>>>>>
>>>>>>
>>>>>> There's no question that human intelligence reflects improved 
>>>>>> survival through learning, and that this is what makes the 
>>>>>> physiological investment pay off.
>>>>>>
>>>>>
>>>>> Right, so my point is that we should not expect things like human 
>>>>> intelligence or human learning to be trivial or easy to get in robots, 
>>>>> when the human brain is the most complex thing we know and can perform 
>>>>> more computations than even the largest supercomputers of today.
>>>>>
>>>>
>>>> Absolutely, but neither should we expect that complexity alone 
>>>>
>>>
>>> I don't think anyone has argued that complexity alone is sufficient.
>>>
>>
>> What else are you suggesting makes the difference?
>>  
>>
>
> To implement human learning and intelligence we need the right algorithm 
> and sufficient computational power to implement it.
>


There is a curious dualism to computation. All of computation is based on 
representation - on the ability of the computer to set arbitrary 
equivalences between any two generic tokens. This is the power of 
computers: anything can be scanned, downloaded, typed, or spoken and 
treated as collections of generic cardinality. At the same time, another 
important feature of computation is the ability to take those very same 
kinds of collections and treat them as operational commands - ordinal 
sequences. We have, then, a binary distinction between that which is 
treated with absolute indifference and that which is treated as an 
unquestioned authority. 
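
To make the duality concrete, here is a minimal sketch in Python (just an 
illustration, with arbitrary names): the same string of tokens is handled 
first as indifferent data and then obeyed as a command.

    # The same string of generic tokens, first treated as indifferent data,
    # then treated as an operational command.
    program_text = "print(2 + 2)"

    # As data: a collection with a cardinality, manipulated with indifference.
    print(len(program_text))            # 12 characters, nothing more
    print(program_text.upper())         # PRINT(2 + 2)

    # As command: the very same tokens become an ordinal sequence of
    # instructions that the machine obeys literally.
    exec(compile(program_text, "<example>", "exec"))   # prints 4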

This lack of subtlety is what characterizes machines. What it means to be 
mechanical or robotic is not only the absence of any nuanced gradation 
between literal obedience to each discrete order and anesthetic 
generalization of every input and output; it is also the absence of the 
perpendicular range of feelings and emotions which non-machines use to 
modulate their sensitivity and motivation directly. The aesthetic 
dimension is missing entirely in any possible machine. No matter how 
clever or correct the algorithm, or how plentiful the processing 
resources, no collection of representations can ever conjure an authentic 
presence which is proprietary, caring, feeling, understanding, motivated, 
inspired, etc., human or otherwise. 

These qualities do not fall within the categories of blind execution or 
blind equivalence; they require personal investment. An algorithm is by 
definition impersonal and automatic, so it can only provide ever more 
sophisticated patterns of the same generic command-data structure. An 
algorithm can simulate learning because learning is a function of the 
organization of experiences rather than of the experiences themselves. 
Learning is a problem of acquiring and retrieving X rather than inventing 
or appreciating X, as the sketch below illustrates.
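
As a toy illustration (my own sketch, with arbitrary names), a "learner" in 
this sense needs nothing more than storage and lookup:

    # A deliberately bare "learner": it acquires and retrieves records of
    # inputs, but nothing in it has the experiences themselves.
    class LookupLearner:
        def __init__(self):
            self.memory = {}              # organized record of past inputs

        def learn(self, situation, response):
            # "Acquiring X": store an association, nothing more.
            self.memory[situation] = response

        def recall(self, situation, default=None):
            # "Retrieving X": return what was stored; nothing is invented
            # or appreciated along the way.
            return self.memory.get(situation, default)

    learner = LookupLearner()
    learner.learn("red and round", "apple")
    print(learner.recall("red and round"))    # -> apple
    print(learner.recall("warm and bright"))  # -> None (outside its records)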
 

>
>  
>
>>  
>>>  
>>>
>>>> can make an assembly of inorganic parts into a subjective experience 
>>>> which compares to that of an animal. 
>>>>
>>>
>>> Both are made of the same four fundamental forces interacting with each 
>>> other; why should the number of protons in the nucleus of some atoms in 
>>> those organic molecules make any difference to the subject? 
>>>
>>
>> Why does arsenic have a different effect on the body than sugar? 
>> Different forms signify different possibilities and potentials on many 
>> different levels. The number of protons causes some things on some levels 
>> by virtue of its topological potentials, but that is not the cause of 
>> order in the cosmos, or of awareness. Gold could have been any number of 
>> protons in a nucleus; it just happens to use that configuration, just as 
>> the IP address of a website does not determine its content.
>>
>
> If your theory is that sense is primitive, then how do you justify your 
> belief that certain materials are associated with certain possibilities of 
> experience?
>

I don't have a belief that materials are associated with certain 
possibilities of experience; I observe that this correlation seems 
self-evident, and I offer an explanation for it.


>  
>
>>
>>  
>>
>>> What led you to choose the chemical elements as the origin of sense and 
>>> feeling, as opposed to higher level structures (neurology, circuits, etc.) 
>>> or lower level structures (quarks, gluons, electrons)?
>>>
>>
>> The chemical elements have nothing to do with the origin of sense and 
>> feeling at all, just like the letters of the alphabet have nothing to do 
>> with the origin of Shakespeare. Shakespeare used words, words are made of 
>> certain combinations of letters and not others, which is what makes them 
>> words.
>>
>
> Exactly, I just think the alphabet for spelling different conscious states 
> exists at a lower level than you do, e.g., in the logic of recursive 
> relationships, rather than in atomic elements.
>

I don't think that it is in atomic elements; I think that atomic elements 
and recursive relationships of logic are both cracked reflections 
(diffractions) of sense. Sense is eternal and all-encompassing. Atoms are 
nothing but tokens, like IP addresses, and logic is just a way to keep 
track of what is in between tokens.
 

>
>  
>
>>  
>>
>>>  
>>>  
>>>
>>>>
>>>>  
>>>>>  
>>>>>
>>>>>> What I question is why that improvement would entail awareness.
>>>>>>
>>>>>
>>>>> A human has to be aware to do the things it does, because zombies are 
>>>>> not possible.
>>>>>
>>>>
>>>> That's begging the question.
>>>>
>>>
>>> Not quite, I provided an argument for my reasoning. 
>>>
>>
>> If your argument is A = B because B = A then you are still begging the 
>> question.
>>
>
> You are arguing it is possible to see without seeing, which I equate with 
> zombies, and which I think is logically inconsistent. 
>

Then you have to argue with the blindsight research, not me. The 
observation there is that it is in fact possible to have access to optical 
knowledge without any conscious visual experience.
 

> It is a proof by contradiction that shows it is not possible to "see 
> without seeing".
>

Blindsight is not seeing without seeing; it is about having access to 
sub-personal knowledge about optical conditions without seeing - which 
completely demolishes the presumption that qualia are a "build it and they 
will come" kind of magic. Of course there are other examples everywhere of 
how the map is not the territory and of how knowledge can be received 
without having any particular human-like aesthetic phenomenon associated 
with it.
 

>
>  
>
>>
>>  
>>
>>>  What is your objection, that zombies are possible, or that zombies are 
>>> not possible but that doesn't mean something that in all ways appears 
>>> conscious must be conscious?
>>>
>>
>> My objection is that the premise of zombies is broken to begin with.
>>
>
> We agree on this.  So then isn't the premise of some life form behaving as 
> though it can see, while being blind, broken to begin with?
>

Not at all. A camera can behave as though it can see - it can autofocus 
and autocorrect exposure, etc. - but it is as blind as a beer bottle. There 
is nothing in your camera which is aware that it contains images.
 

>
>  
>
>> It asks the wrong question and makes the wrong assumption about 
>> consciousness. There is no 'in all ways appears'... it is always 'in all 
>> ways appears to X under Y circumstance'.
>>
>
> Maybe that's just how it appears to you. ;-)
>

Oh definitely. I only give my version of what I think makes the most sense 
for the biggest picture, but in that version, I understand that every other 
way of looking at it makes another kind of sense on some level. 

 
>
>>
>>
>>>  
>>>
>>>> Anything that is not exactly what we might assume it is would be a 
>>>> 'zombie' to some extent. A human does not have to be aware to do the 
>>>> things that it does, which is proved by blindsight, sleepwalking, 
>>>> brainwashing, etc. A human may, in reality, have to be aware to perform 
>>>> all of the functions that we do, but if comp were true, that would not 
>>>> be the case.
>>>>  
>>>>
>>>>>   Your examples of blind sight are not a disproof of the separability 
>>>>> of function and awareness,
>>>>>
>>>>
>>>> I understand why you think that, but ultimately it is proof of exactly 
>>>> that.
>>>>  
>>>>
>>>>>  only examples of broken links in communication (quite similar to 
>>>>> split brain patients).
>>>>>
>>>>
>>>> A broken link in communication which prevents you from being aware of 
>>>> the experience which is informing you is the same thing as function 
>>>> being separate from awareness. The end result is that it is not 
>>>> necessary to experience any conscious qualia to receive optical 
>>>> information. There is no functional difference between a "broken link 
>>>> in communication" and "separability of function and awareness". The 
>>>> awareness is broken in the dead link, but the function is retained; 
>>>> thus they are in fact separate.
>>>>
>>>
>>> So you take the split brain patient's word for it that he didn't see the 
>>> word "PAN" flashed on the screen? 
>>> http://www.youtube.com/watch?v=ZMLzP1VCANo&t=1m50s
>>>
>>
>> He didn't see it at the personal level, no. He saw it on a sub-personal 
>> level.
>>
>
> I would argue there are two persons, one that sees it and another that 
> doesn't.  
>

It appears that the person who does not see it is in the dominant position 
to control the outward-facing personality, so I would say he is the person 
and those who see PAN are sub-personal versions of him. (Remember, every 
cell in our body is really the same cell modified by different experiences, 
so our sub-personal agents are still 'us'.)
 

> This has been demonstrated by some rare split-brain patients who had 
> verbal abilities in both hemispheres, which allowed each hemisphere to be 
> interviewed independently.  See: 
> http://www.macalester.edu/psychology/whathap/ubnrp/split_brain/Behavior.html
>

Sure, I mean Dissociative Identity Disorder shows that personhood can be 
fragmented in all kinds of ways. With more precise control over brain 
activity via TMS or something, we could probably interview any number of 
sub-personal and semi-personal inertial frames. Private physics is not like 
bodies in space - it is a holographic-eidetic fugue. Just as your 
stereoscopic vision or hearing is not the additive sum of two discrete 
impressions, the self is not a piece of equipment which is located in a 
particular place or has a particular set of functions. It isn't a mechanism 
or a body or a soul; it is experience itself, and pure experience 
unmitigated by realism is absolutely insane.
 

>  
>
>>  
>>
>>>  
>>> Perhaps his left hemisphere didn't see it, but his right hemisphere 
>>> certainly did, as his right hemisphere is able to draw a picture of that 
>>> pan (something in his brain saw it).
>>>
>>
>> Right. Technically I would say that the hemisphere doesn't see anything, 
>> but rather he sees through his sub-personal agents which are associated 
>> with a hemisphere.
>>  
>>
>>>
>>> I can't experience life through your eyes right now because our brains 
>>> are disconnected, should you take my word for my assertion that you must 
>>> not be experiencing anything because the I in Jason's skull doesn't 
>>> experience any visual stimulus from Craig's eyes?
>>>
>>
>> I would if not for my own experience which contradicts your belief and is 
>> not possible for me to deny.
>>
>
> But who would believe you if you were mute and trapped inside my skull, 
> how would anyone know you are still there?
>

They wouldn't, but why would that cause me to doubt my own sense that I am 
still 'here'?
 

>  
>
>>  
>>
>>>  
>>>  
>>>
>>>>
>>>>   
>>>>>
>>>>>> There are a lot of neurons in our gut as well, and assimilation of 
>>>>>> nutrients is undoubtedly complex and important to survival, yet we are 
>>>>>> not 
>>>>>> compelled to insist that there must be some conscious experience to 
>>>>>> manage 
>>>>>> that intelligence. Learning is complex, but awareness itself is simple.
>>>>>>
>>>>>
>>>>> I think the nerves in the gut can manifest as awareness, such as 
>>>>> cravings for certain foods when the body realizes it is deficient in 
>>>>> some particular nutrient.  After all, what is the point of all those 
>>>>> nerves if they have no impact on behavior?
>>>>>
>>>>
>>>> Oh I agree, because my view is panexperiential. The gut doesn't have 
>>>> the kind of awareness that a human being has as a whole, because the other 
>>>> organs of the body are not as significant as the brain is to the organism. 
>>>> If we are going by the comp assumption though, then there is an 
>>>> implication 
>>>> that nothing has any awareness unless it is running a very sophisticated 
>>>> program.
>>>>
>>>>
>>> I don't think that belief is universal.  Bruno thinks even simple Turing 
>>> machines are conscious, and many people debate whether thermostats can be 
>>> considered at some level conscious.  I am partial to the idea that 
>>> awareness of even a single bit of information represents an atom of 
>>> awareness.  But I, unlike you, think that complex sensations require 
>>> complex representations.  I think we agree there is no set complexity 
>>> threshold where the magic of consciousness begins.
>>>
>>
>> I would say that complex sensations (presentations) are associated with 
>> complex representations, but they don't cause them. Presentations and 
>> representations contribute to each other's complexity and allow both to 
>> become more elaborate, but in an ultimate sense, the presentation is 
>> driving the complexity rather than the other way around.
>>
>
> That's perhaps not too far from Bruno's explanation of what happens when a 
> person views their own brain scan.  It is their own complex experience 
> which leads to probable explanations such as complex histories of 
> evolution, and complex maps of neurons to explain it.
>

Right. The explanations are tokens of 1p experience as seen from the 3p 
involuted perspective. They aren't just any old tokens, though; the token 
is as unique and rich as an acorn is in its association with an oak tree. 
They lead back to the experience itself if you can unwind the history and 
find common ground with it.

Craig
 

>
> Jason
>
