On Friday, February 15, 2013 7:23:28 PM UTC-5, Stephen Paul King wrote:
>
>  On 2/15/2013 4:07 PM, Craig Weinberg wrote:
>  
>
>
> On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul King 
> wrote: 
>>
>>  On 2/13/2013 9:41 PM, Craig Weinberg wrote:
>>  
>>
>>
>> On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King 
>> wrote: 
>>>
>>>  On 2/13/2013 5:21 PM, Craig Weinberg wrote:
>>>  
>>>
>>>
>>> On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote: 
>>>>
>>>>  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 
>>>>
>>>> *Wouldn't Simulated Intelligence be a more appropriate term than 
>>>> Artificial Intelligence?*
>>>>
>>>> Thinking of it objectively, if we have a program which can model a 
>>>> hurricane, we would call that hurricane a simulation, not an "artificial 
>>>> hurricane". If we modeled any physical substance, force, or field, we 
>>>> would similarly say that we had simulated hydrogen or gravity or 
>>>> electromagnetism, not that we had created artificial hydrogen, gravity, 
>>>> etc.
>>>>
>>>>
>>>> No, because the idea of an AI is that it can control a robot or other 
>>>> machine which interacts with the real world, whereas a simulated AI or 
>>>> hurricane acts within a simulated world.
>>>>  
>>>
>>> AI doesn't need to interact with the real world though. It makes no 
>>> difference to the AI whether its environment is real or simulated. Just 
>>> because we can attach a robot to a simulation doesn't change it into an 
>>> experience of a real world.
>>>  
>>>
>>> Hi Craig,
>>>
>>>     I think that you might be making a huge fuss over a difference that 
>>> does not always make a difference between a public world and a private 
>>> world! IMHO, what makes the 'real' physical world "Real" is that we can all 
>>> agree on its properties (subject to some constraints that matter). Many can 
>>> point at the tree over there and agree on its height and whether or not it 
>>> is a deciduous variety.
>>>  
>>
>> Why does our agreement on something's properties mean anything other 
>> than that, though?
>>
>>
>> Hi Craig,
>>
>>     Why are you thinking of 'thought' in such a minimal way? Don't forget 
>> about the 'objects' of those thoughts... The duals...
>>  
>
> We might be agreeing here. I thought you were saying that our agreeing on 
> what we observe is a sign that things are 'real', so I was saying that it 
> doesn't have to be a sign of anything, just that reality is the quality of 
> having to agree involuntarily on conditions.
>  
>
> Hi Craig,
>
>     We are stumbling over a subtle issue within semiotics. This video in 5 
> parts is helpful: http://www.youtube.com/watch?v=AxV3ompeJ-Y
>
>
Is there something in particular that we're not semiotically square on?
 

>   
>>  We are people living at the same time with human-sized bodies, so it 
>> would make sense that we would agree on almost everything that involves our 
>> bodies.
>>
>>
>>     Who is this 'we'? I am considering any 'object' or system capable of 
>> being described by a QM wave function or, more simply, capable of being 
>> represented by a semi-complete atomic Boolean algebra.
>>  
>
> We in this case is you and me. I try to avoid using the word object, since 
> it can be used in a lot of different ways. An object can be anything that 
> isn't the subject. In another sense an object is a publicly accessible body.
>  
>
>     I use the word 'object' purposefully. We need to deanthropomorphize 
> the observer! An object is what one observer senses of another (potential) 
> observer.
>

I agree but would add that we need to demechanemorphize the observed also. 


>   
>  
>>  
>>  You can have a dream with other characters in the dream who point to 
>> your dream tree and agree on its characteristics, but upon waking, you are 
>> re-oriented to a more real, more tangibly public world with longer and more 
>> stable histories.
>>
>>
>>     Right, it is the 'upon waking' part that is important. Our common 
>> 'reality' is the part that we can only 'wake up' from when we depart the 
>> mortal coil. Have you followed the quantum suicide discussion any?
>>  
>
> I haven't been, no.
>  
>
>     It is helpful for the understanding of the argument I am making. The 
> way that a user of a QS system notices or fails to notice her demise is 
> relevant here. The point is that we never sense the switch in the "off" 
> position...
>

I can follow the concept of not sensing the off position (as in the retinal 
blindspot) if that's where you're going.
 

>
>   
>  
>>  
>>  These qualities are only significant in comparison to the dream though. 
>> If you can't remember your waking life, then the dream is real to you, and 
>> to the universe through you.
>>  
>>
>>     You are assuming a standard that you cannot define. Why? What one 
>> observes as 'real' is real to that one; it is not necessarily real to 
>> everyone else... but there is a huge overlap between our 1p 'realities'. 
>> Andrew Soltau has this idea nailed now in his Multisolipsism stuff. ;-)
>>  
>
> One can also observe that one is observing something that is 'not real', 
> though.
>  
>
>     Exactly, but that is the point I am making. There has to be a 'real' 
> thing for there to be a simulated thing, no? Or is that just the standard 
> tacit assumption of people new to this question?
>

I think that there only has to be an expectation of sensory fidelity. 
Realism builds from repeated fulfillments of expectations, divided by 
failures to fulfill them. Otherwise simulation and reality are the 
same thing - just experiences.


>>>
>>>  
>>> By calling it artificial, we also emphasize a kind of obsolete notion of 
>>> natural vs man-made as categories of origin. 
>>>
>>>
>>> Why is the distinction between the natural intelligence of a child and 
>>> the artificial intelligence of a Mars rover obsolete?  The latter is one 
>>> we create by art, the other is created by nature.
>>>  
>>
>> Because we understand now that we are nature and nature is us.
>>
>>
>>     I disagree! We can fool ourselves into thinking that we 'understand', 
>> but what we can do is, at best, form testable explanations of stuff... We 
>> are fallible!
>>  
>> I agree, but I don't see how that applies to us being nature.
>>
>>
>>     We are part of Nature and there is a 'whole-part isomorphism' 
>> involved...
>>  
>
> Since we are part of nature, there is nothing that we are or do which is 
> not nature.
>  
>
>     Right.
>
>   
>  
>>  
>>  What would it mean to be unnatural? How would an unnatural being find 
>> themselves in a natural world?
>>  
>>
>>     They can't, unless we invent them... Pink Ponies!!!!
>>  
>
> Pink Ponies are natural for our imagination to imagine. A square circle 
> would be unnatural - which is why we can't imagine it.
>  
>
>     This demonstrates that there is a limit on the coherence of a 
> language, maybe even to its possible recursive depth...
>

Sure, yeah language has lots of limits.
 

>>>  
>>>  We can certainly use the term informally to clarify what we are 
>>> referring to, like we might call someone a plumber because it helps us 
>>> communicate who we are talking about, but anyone who does plumbing can be a 
>>> plumber. It isn't an ontological distinction. Nature creates our capacity 
>>> to create art, and we use that capacity to shape nature in return.
>>>  
>>>
>>>     I agree! I think it is that aspect of Nature that can "throw itself 
>>> into its choice", as Sartre mused, that is making the computationalists 
>>> crazy. I got no problem with it as I embrace non-well-foundedness.
>>>  
>>
>> Cool, yeah, I mean it could be said that that aspect is what defines nature?
>>  
>>
>>     Can we put Nature in a box? No...
>>
>>   
>>  
>>>  
>>> "L'homme est d'abord ce qui se jette vers un avenir, et ce qui est 
>>> conscient de se projeter dans l'avenir."/ ~ Jean Paul Satre
>>>
>>>   
>>>  
>>>>  
>>>> If we used 'simulated' instead, the measure of intelligence would be 
>>>> framed more modestly as the degree to which a system meets our expectations 
>>>> (or what we think or assume are our expectations). Rather than assuming a 
>>>> universal index of intelligent qualities which is independent of our own 
>>>> human qualities, 
>>>>
>>>>
>>>> But if we measure intelligence strictly relative to human intelligence
>>>>
>>>
>>> I think that it is a misconception to imagine that we have access to any 
>>> other measure.
>>>  
>>>
>>>     Yeah!
>>>
>>>   
>>>  
>>>>  we will be saying that visual pattern recognition is intelligence but 
>>>> solving Navier-Stokes equations is not.
>>>>
>>>
>>> Why? Equations are written by intelligent humans.
>>>  
>>>
>>>     People are confounded by computational intractability and eagerly 
>>> spin tales of hypercomputers and other perpetual motion machines.
>>>  
>>
>> Complexity seems to be the only abstract principle that the Western-OMMM 
>> orientation respects.
>>  
>>
>>     And look at the benefits that it engenders! It is nice not to have 
>> to worry about freezing in the winter or spending every waking moment 
>> seeking subsistence. Our pets sure don't complain...
>>  
>
> Definitely, it's not all bad and it is a big improvement over 
> Oriental-ACME fanaticism. Even so, we have pushed it too far and now we are 
> starting to pay the price.
>  
>
>     The jury is still out on that, IMHO...
>

It depends how we look at it, I suppose. The Dark Ages were only 'dark' by 
certain measures. Then as now, it all depends on who and where you happen 
to be as to whether any era is paradise or Hell on Earth. The particular 
methods and models which have characterized our science and philosophy 
since the Enlightenment can be more objectively said to have reached their 
End of Life.

>>>> This is the anthropocentrism that continually demotes whatever 
>>>> computers can do as "not really intelligent" even when it was regarded as 
>>>> the apotheosis of intelligence *before* computers could do it.
>>>>  
>>>
>>> If I had a camera with higher resolution than a human eye, that doesn't 
>>> mean that I can replace my eyes with those cameras. Computers can still be 
>>> exemplary at computation without being deemed literally intelligent. A 
>>> planetarium's star projector can be as accurate as any telescope and still 
>>> be understood not to be projecting literal galaxies and stars onto the 
>>> ceiling of the observatory.
>>>  
>>>  
>>>>  
>>>> we could evaluate the success of a particular Turing emulation purely 
>>>> on its merits as a convincing reflection of intelligence 
>>>>
>>>>
>>>> But there is no one-dimensional measure of intelligence - it's just 
>>>> competence in many domains.
>>>>  
>>>
>>> Competence in many domains is fine. I'm saying that the competence 
>>> relates to how well it reflects or amplifies existing intelligence, not 
>>> that it actually is itself intelligent.
>>>  
>>>  
>>>>  
>>>> rather than presuming to have replicated an organic conscious 
>>>> experience mechanically.
>>>>
>>>>
>>>> I don't think that's a presumption.  It's an inference from the 
>>>> incoherence of the idea of a philosophical zombie.
>>>>  
>>>
>>> The idea of a philosophical zombie is a misconception based on some 
>>> assumptions about matter and function which I clearly understand to be 
>>> untrue. A sociopath is already a philosophical zombie as far as emotional 
>>> intelligence is concerned. Someone with blindsight is a philosophical 
>>> zombie as far as visual perception is concerned. Someone who is 
>>> sleepwalking is a p-zombie as far as bipedal locomotion is concerned. The 
>>> concept is bogus.
>>>  
>>>
>>>     I 100% concur!
>>>  
>>
>> Cool! It's so strange because for almost everything else I think that 
>> Chalmers is The Man, but p-zombies are the concept of his that most people 
>> seem to grab on to, other than the Hard Problem.
>>  
>>
>>     From what I can tell, Chalmers uses the concept of a p-zombie as a 
>> device in a proof of panprotopsychism. He is trying to get people to 
>> understand for themselves that the concept of a p-zombie is absurd. This is 
>> important because material monism demands that we actually are zombies! See 
>> Dennett's eliminativist defense of materialism!
>>  
>
> Yes, and it's a defense of panprotopsychism, but I think for the wrong 
> reason. Blindsight for example shows how qualia can be absent on one level, 
> but another part of our awareness can be informed on another level. 
>
>  
>     I sorta disagree. Blindsight merely shows that verbal reportage is 
> not the sum of what can be known of consciousness.
>

I think that the injuries are to the visual cortex though, not areas 
associated with language. There is no reason that we should expect their 
reports to be less accurate than anyone else's. It sounds like you are 
saying that it isn't blindsight at all, it's just compulsive lying.

There probably are cases of global head injuries where that could be 
possible, but why would they only lie about what they could see?

Craig


> -- 
> Onward!
>
> Stephen
>
> 
