Re: Simulated Intelligence Mini-Manifesto

2013-02-18 Thread Craig Weinberg


On Sunday, February 17, 2013 1:11:05 PM UTC-5, Bruno Marchal wrote:


 On 15 Feb 2013, at 22:14, Craig Weinberg wrote:



 On Thursday, February 14, 2013 11:20:12 AM UTC-5, Bruno Marchal wrote:


 On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

 Baudrillard is not talking about consciousness in particular, only the 
 sum of whatever is in the original which is not accessible in the copy. His 
 phrase 'profound reality' is apt though. If you don't experience a profound 
 reality, then you might be a p-zombie already.



 Right!



 Right?

 Here Craig is on the worst slope. It looks almost like: if *you* believe 
 that a machine is not a zombie, it means that you are a zombie yourself.


 No, I was saying that if you don't believe that your own experience is 
 profoundly real, then you are a zombie yourself.


 I remain anxious because you seem to believe that a computer cannot 
 support a profoundly real person experience.


I don't think that it can, unless it is made of living beings, who are, if 
you will, the baton holders of a biological history grounded in the 
catastrophe of vulnerability that those experiences are composed of.

The bits of the computer which are not assembled - the silicon and plastic 
substance - do have an experience, but not as a person or animal or even a 
bacterium. Without that history being embodied physically, I don't expect 
that it has any resources to draw upon with which to feel 'profound' 
realism in the way that we, and other animals, feel it. The sense is that 
vegetables do not have the same sort of realism in their experiences as 
animals when we kill them and eat them, and even if that is untrue, our 
humanity and sanity may depend on believing the lie on some level. I think 
that it is probably not a lie though, and our intuition is not completely 
wrong about the sliding scale of quality in the natural world. We don't see 
the vegetable equivalent of primates. Maybe there's a reason?




  


 They will persecute the machines and the humans having a different 
 opinion altogether.

 Craig reassures me: he is willing to offer steak to my son-in-law (who got 
 an artificial brain before marriage).

 But with Baudrillard, not only might my son-in-law no longer get his 
 steak, but neither might my daughter! Brr...


 Hahaha. How about your son in law gets a simulation of steak which is 
 beneath his substitution level? 


 He will be completely satisfied. Thanks for him.
  




 Even better, I just hack into his hardware and move one of his memories of 
 eating steak up on the stack so it seems very recent. 


 Again, he will be completely satisfied. But my daughter will be sad, as 
 she wants to enjoy eating the meal together with him. 



It's good that your position is consistent. Why have a universe at all 
though? Why not just have a memory of it?
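
To make the 'up on the stack' image concrete, here is a minimal toy sketch 
(Python; the Memory and ArtificialBrain names are my own invented 
illustration, not anything anyone has built) of what moving one old memory 
to the most-recent position would look like:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Memory:
        description: str

    @dataclass
    class ArtificialBrain:
        # index 0 is treated as the most recently lived memory
        memories: List[Memory] = field(default_factory=list)

        def bump_to_recent(self, keyword: str) -> None:
            # Move the first memory matching `keyword` to the top, so it
            # is experienced as if it had just happened.
            for i, m in enumerate(self.memories):
                if keyword in m.description:
                    self.memories.insert(0, self.memories.pop(i))
                    return

    brain = ArtificialBrain([Memory("walked the dog"),
                             Memory("ate a steak in 1998"),
                             Memory("read Baudrillard")])
    brain.bump_to_recent("steak")
    print(brain.memories[0].description)  # -> "ate a steak in 1998"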
 





 Is your brother in law racist against simulated steaks as memory implants?



 Not at all. Since he got an artificial brain, he has already uploaded many 
 entire lives from the CGSN (Cluster-Galactica-Super-Net), and I have to ask 
 him to restrain himself, as I am the one paying the bill :)

 You know, in 43867 after JC, they will succeed in recovering the 
 brain-state of any human having existed, just by looking at the tiny 
 actions of their brain on the environment. We always leave traces.
 You will be downloaded, for the first time, in 44886, for example. It is bad 
 news, as all the humans having existed before 33000 (+/-) will be freely 
 downloadable. After that date, most humans will have sophisticated quantum 
 keys protecting them from such possible futures. That is why some researchers 
 will say that, with comp, we have the solution of who goes to hell and who goes 
 to heaven: all humans having lived before 33000 go to hell, and all the 
 infinitely many others go to heaven. Of course this is still a rather gross 
 simplification, and it concerns only the minority who want to explore and 
 pursue the Samsara exploration.


Nice. Or maybe by 2200 we can just simulate the brain state of someone who 
would be alive in that era and save ourselves 30 or 40 thousand years. 

Craig


 Bruno




 Craig


 Bruno



 http://iridia.ulb.ac.be/~marchal/






 http://iridia.ulb.ac.be/~marchal/






Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Stathis Papaioannou
On Fri, Feb 15, 2013 at 1:52 PM, Craig Weinberg whatsons...@gmail.com wrote:

 I think that any debate that even considers word definitions to be real is a
 waste of time.

If we're discussing 'cows', but you understand by that word what most
people understand by the word 'sheep', shouldn't we get this straight?


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Stathis Papaioannou
On Fri, Feb 15, 2013 at 11:44 PM, Stephen P. King stephe...@charter.net wrote:

 Umm, are you OK with anthropomorphication... ? Let me ask a different
 question: In your opinion, does the universe 'out there' have to have
 properties that match up one-to-one with some finite list of propositions
 that can be encoded in your skull?

No, the universe is under no obligation to fit in with our thought processes.


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread meekerdb

On 2/17/2013 4:17 AM, Stathis Papaioannou wrote:

On Fri, Feb 15, 2013 at 11:44 PM, Stephen P. King stephe...@charter.net wrote:


 Umm, are you OK with anthropomorphication... ? Let me ask a different
question: In your opinion, does the universe 'out there' have to have
properties that match up one-to-one with some finite list of propositions
that can be encoded in your skull?

No, the universe is under no obligation to fit in with our thought processes.


On the other hand evolution implies some obligation for our thought processes to fit the 
universe.


Brent





Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Bruno Marchal


On 15 Feb 2013, at 22:14, Craig Weinberg wrote:




On Thursday, February 14, 2013 11:20:12 AM UTC-5, Bruno Marchal wrote:

On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular, only  
the sum of whatever is in the original which is not accessible in  
the copy. His phrase 'profound reality' is apt though. If you  
don't experience a profound reality, then you might be a p-zombie  
already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like: if *you* 
believe that a machine is not a zombie, it means that you are a 
zombie yourself.


No, I was saying that if you don't believe that your own experience  
is profoundly real, then you are a zombie yourself.


I remain anxious because you seem to believe that a computer cannot  
support a profoundly real person experience.







They will persecute the machines and the humans having a different 
opinion altogether.


Craig reassures me: he is willing to offer steak to my son-in-law 
(who got an artificial brain before marriage).


But with Baudrillard, not only might my son-in-law no longer get his 
steak, but neither might my daughter! Brr...


Hahaha. How about your son in law gets a simulation of steak which  
is beneath his substitution level?


He will be completely satisfied. Thanks for him.



Even better, I just hack into his hardware and move one of his  
memories of eating steak up on the stack so it seems very recent.


Again, he will be completely satisfied. But my daughter will be sad, 
as she wants to enjoy eating the meal together with him.






Is your brother in law racist against simulated steaks as memory  
implants?



Not at all. Since he got an artificial brain, he has already uploaded many 
entire lives from the CGSN (Cluster-Galactica-Super-Net), and I have 
to ask him to restrain himself, as I am the one paying the bill :)


You know, in 43867 after JC, they will succeed in recovering the 
brain-state of any human having existed, just by looking at the tiny 
actions of their brain on the environment. We always leave traces.
You will be downloaded, for the first time, in 44886, for example. It is 
bad news, as all the humans having existed before 33000 (+/-) will be 
freely downloadable. After that date, most humans will have 
sophisticated quantum keys protecting them from such possible futures. 
That is why some researchers will say that, with comp, we have the 
solution of who goes to hell and who goes to heaven: all humans having 
lived before 33000 go to hell, and all the infinitely many others go to 
heaven. Of course this is still a rather gross simplification, and it 
concerns only the minority who want to explore and pursue the Samsara 
exploration.


Bruno





Craig


Bruno



http://iridia.ulb.ac.be/~marchal/








http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Bruno Marchal


On 16 Feb 2013, at 01:01, Stephen P. King wrote:


On 2/15/2013 11:12 AM, Bruno Marchal wrote:


On 14 Feb 2013, at 22:00, Stephen P. King wrote:


On 2/14/2013 11:20 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular,  
only the sum of whatever is in the original which is not  
accessible in the copy. His phrase 'profound reality' is apt  
though. If you don't experience a profound reality, then you  
might be a p-zombie already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like  if *you*  
believe that a machine is not a zombie, it means that you are a  
zombie yourself.


They will persecuted the machines and the humans having a  
different opinion altogether.


Craig reassure me. he is willing to offer steak to my sun in law  
(who get an artificial brain before marriage).


But with Baudrillard, not only my sun in law might no more get  
his  steak, but neither my daughter! Brr...


Bruno


Dear Bruno,

Could you re-write this post? Its wording is unintelligible 
to me. :_(



Craig sums up Baudrillard well with the sentence "If you don't 
experience a profound reality, then you might be a p-zombie already."


That sentence illustrates the willingness not to attribute 
consciousness to a person with a copied, or artificial, brain, as 
such a copy is suspected of not being able to live a profound reality. 
This is like saying: we, the humans with the original carbon brain, 
can live a profound reality, but not the machine; and if you doubt 
that profound reality then *you* are a zombie too.


It reminds me of a fundamentalist of some confessional religion who 
told me "if your machine cannot believe that some man is the son of 
God, then your machine can't think". I told him "and what if I doubt 
that a man is the son of God?". He told me that in that case I can't 
think either ...


This leads to the idea that not only can a machine not be conscious, 
but any human who would claim the contrary is also not conscious.


As I said: brrr...

Bruno


Ah! I see.. Yeah, Craig seems to have some trouble communicating  
the variability of Sense.


I think that Craig is clear. He is just opposed to comp.



It is 1p and thus cannot have a 3p measure, so... I feel his pain. I  
am trying to use the idea of the difference between a simulation of  
X as compared to the real X by a large ensemble of observers to  
parse this distinction to connect with your ideas...


I take it as meaning with comp. I have no ideas.

Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Stephen P. King

On 2/17/2013 7:17 AM, Stathis Papaioannou wrote:

On Fri, Feb 15, 2013 at 11:44 PM, Stephen P. King stephe...@charter.net wrote:


 Umm, are you OK with anthropomorphication... ? Let me ask a different
question: In your opinion, does the universe 'out there' have to have
properties that match up one-to-one with some finite list of propositions
that can be encoded in your skull?

No, the universe is under no obligation to fit in with our thought processes.



Hi Stathis,

    It is good to see this statement made explicitly. I just wish we 
could keep it in mind when we are debating ideas...


--
Onward!

Stephen






Re: Simulated Intelligence Mini-Manifesto

2013-02-17 Thread Stephen P. King

On 2/17/2013 1:10 PM, meekerdb wrote:

On 2/17/2013 4:17 AM, Stathis Papaioannou wrote:
On Fri, Feb 15, 2013 at 11:44 PM, Stephen P. King 
stephe...@charter.net wrote:


 Umm, are you OK with anthropomorphication... ? Let me ask a 
different

question: In your opinion, does the universe 'out there' have to have
properties that match up one-to-one with some finite list of 
propositions

that can be encoded in your skull?
No, the universe is under no obligation to fit in with our thought 
processes.


On the other hand evolution implies some obligation for our thought 
processes to fit the universe.


Brent


Hi Brent,

Most assuredly!


--
Onward!

Stephen






Re: Simulated Intelligence Mini-Manifesto

2013-02-16 Thread John Clark
On Fri, Feb 15, 2013  Craig Weinberg whatsons...@gmail.com wrote:

* *Wouldn’t Simulated Intelligence be a more appropriate term than
 Artificial Intelligence?


  Yes that euphemism [Simulated Intelligence] could have advantages, it
 might make the last human being feel a little better about himself just
 before the Jupiter Brain outsmarted him and sent him into oblivion
 forever.


  Then we had better destroy every circuit on Earth to prevent that from
 happening.


If we did that at least 90% of the world's population would be dead within
a year. Planet Earth simply cannot keep 7 billion people alive with 17th
century technology, much less give them a living standard that wasn't full
of sewage and just plain gruesome. We're long past the point of turning
back; the path is set.

 What on earth is obsolete about the natural versus man-made dichotomy?
  The Jupiter brain really was the product of an intelligent designer while
  the human being was not.


  But the intelligent designer was the product of nature.


Exactly, and that's why the God hypothesis is so utterly useless; if
explaining why life exists is hard, explaining why God exists is harder.

  John K Clark





Re: Simulated Intelligence Mini-Manifesto

2013-02-16 Thread Craig Weinberg


On Friday, February 15, 2013 7:23:28 PM UTC-5, Stephen Paul King wrote:

  On 2/15/2013 4:07 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul King 
 wrote: 

  On 2/13/2013 9:41 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King 
 wrote: 

  On 2/13/2013 5:21 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote: 

  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn't Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an 'artificial 
 hurricane'. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, 
 etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulated AI or 
 hurricane acts within a simulated world.
  

 AI doesn't need to interact with the real world though. It makes no 
 difference to the AI whether its environment is real or simulated. Just 
 because we can attach a robot to a simulation doesn't change it into an 
 experience of a real world.
  

 Hi Craig,

 I think that you might be making a huge fuss over a difference that 
 does not always make a difference between a public world and a private 
 world! IMHO, that makes the 'real' physical world Real is that we can all 
 agree on its properties (subject to some constraints that matter). Many can 
 point at the tree over there and agree on its height and whether or not it 
 is a deciduous variety.
  

 Why does our agreement on something's properties mean anything other 
 than that, though?


 Hi Craig,

 Why are you thinking of 'though' in such a minimal way? Don't forget 
 about the 'objects' of those thoughts... The duals...
  

 We might be agreeing here. I thought you were saying that our agreeing on 
 what we observe is a sign that things are 'real', so I was saying that it 
 doesn't have to be a sign of anything, just that reality is the quality of 
 having to agree involuntarily on conditions.
  

 Hi Craig,

 We are stumbling over a subtle issue within semiotics. This video in 5 
 parts is helpful: http://www.youtube.com/watch?v=AxV3ompeJ-Y


Is there something in particular that we're not semiotically square on?
 

   
  We are people living at the same time with human sized bodies, so it 
 would make sense that we would agree on almost everything that involve our 
 bodies.


 We is this we? I am considering any 'object' of system capable of 
 being described by a QM wave function or, more simply, capable of being 
 represented by a semi-complete atomic boolean algebra.
  

 We in this case is you and me. I try to avoid using the word object, since 
 it can be used in a lot of different ways. An object can be anything that 
 isn't the subject. In another sense an object is a publicly accessible body.
  

 I use the word 'object' purposefully. We need to deanthropomorphize 
 the observer! An object is what one observer senses of another (potential) 
 observer.


I agree but would add that we need to demechanemorphize the observed also. 


   
  
  
  You can have a dream with other characters in the dream who point to 
 your dream tree and agree on its characteristics, but upon waking, you are 
 re-oriented to a more real, more tangibly public world with longer and more 
 stable histories.


 Right, it is the upon waking' part that is important. Our common 
 'reality' is the part that we can only 'wake up' from when we depart the 
 mortal coil. Have you followed the quantum suicide discussion any?
  

 I haven't been, no.
  

 It is helpful for the understanding of the argument I am making. The 
 way that a user of a QS system notices or fails to notice her demise is 
 relevant here. The point is that we never sense the switch in the off 
 position...


I can follow the concept of not sensing the off position (as in the retinal 
blindspot) if that's where you're going.
 


   
  
  
  These qualities are only significant in comparison to the dream though. 
 If you can't remember your waking life, then the dream is real to you, and 
 to the universe through you.
  

 You are assuming a standard that you cannot define. Why? What one 
 observes as 'real' is real to that one, it is not necessarily real to every 
 one else... but there is a huge overlap between our 1p 'realities'. Andrew 
 Soltau has this idea nailed now in his Multisolipsism stuff. ;-)
  

 One can observe that one is observing something that is 'not real' also 
 though.
  

 Exactly, but that is the point I am making. There has to be a 'real' 
 thing for there to be a simulated thing, no? Or is that just the 
 standard tacit assumption of people new to this question?

Re: Simulated Intelligence Mini-Manifesto

2013-02-16 Thread Stephen P. King

On 2/16/2013 2:17 PM, Craig Weinberg wrote:



On Friday, February 15, 2013 7:23:28 PM UTC-5, Stephen Paul King wrote:

On 2/15/2013 4:07 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul
King wrote:

On 2/13/2013 9:41 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen
Paul King wrote:

On 2/13/2013 5:21 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent
wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:

*Wouldn't Simulated Intelligence be a more
appropriate term than Artificial Intelligence?*

Thinking of it objectively, if we have a program
which can model a hurricane, we would call that
hurricane a simulation, not an 'artificial
hurricane'. If we modeled any physical
substance, force, or field, we would similarly say
that we had simulated hydrogen or gravity or
electromagnetism, not that we had created
artificial hydrogen, gravity, etc.


No, because the idea of an AI is that it can
control a robot or other machine which interacts
with the real world, whereas a simulate AI or
hurricane acts within a simulated world.


AI doesn't need to interact with the real world though.
It makes no difference to the AI whether its
environment is real or simulated. Just because we can
attach a robot to a simulation doesn't change it into
an experience of a real world.


Hi Craig,

I think that you might be making a huge fuss over a
difference that does not always make a difference
between a public world and a private world! IMHO, that
makes the 'real' physical world Real is that we can
all agree on its properties (subject to some constraints
that matter). Many can point at the tree over there and
agree on its height and whether or not it is a deciduous
variety.


Why does our agreement mean on something's properties mean
anything other than that though?


Hi Craig,

Why are you thinking of 'though' in such a minimal way?
Don't forget about the 'objects' of those thoughts... The
duals...


We might be agreeing here. I thought you were saying that our
agreeing on what we observe is a sign that things are 'real', so
I was saying that it doesn't have to be a sign of anything, just
that reality is the quality of having to agree involuntarily on
conditions.


Hi Craig,

We are stumbling over a subtle issue within semiotics. This
video in 5 parts is helpful:
http://www.youtube.com/watch?v=AxV3ompeJ-Y
http://www.youtube.com/watch?v=AxV3ompeJ-Y


Is there something in particular that we're not semiotically square on?


We seem to talk past each other on some details within semiotic 
theory. For example, what is a 'sign'? 
http://www.marxists.org/reference/subject/philosophy/works/us/peirce1.htm








We are people living at the same time with human sized
bodies, so it would make sense that we would agree on almost
everything that involve our bodies.


We is this we? I am considering any 'object' of system
capable of being described by a QM wave function or, more
simply, capable of being represented by a semi-complete
atomic boolean algebra.


We in this case is you and me. I try to avoid using the word
object, since it can be used in a lot of different ways. An
object can be anything that isn't the subject. In another sense
an object is a publicly accessible body.


I use the word 'object' purposefully. We need to
deanthropomorphize the observer! An object is what one observer
senses of another (potential) observer.


I agree but would add that we need to demechanemorphize the observed 
also.


Mechanisms are zombies, at best, in your thinking, no?









You can have a dream with other characters in the dream who
point to your dream tree and agree on its characteristics,
but upon waking, you are re-oriented to a more real, more
tangibly public world with longer and more stable histories.


Right, it is the upon waking' part that is important.
Our common 'reality' is the part that we can only 'wake up'
from when we depart the mortal coil. Have you followed the
quantum suicide discussion any?


I haven't been, no.


It is helpful for the understanding of the argument I am
making. The way that a user of a QS system notices or fails to
notice her demise is relevant here. The point is that we never sense the 
switch in the off position...

Re: Simulated Intelligence Mini-Manifesto

2013-02-16 Thread Craig Weinberg


On Saturday, February 16, 2013 6:46:46 PM UTC-5, Stephen Paul King wrote:

  On 2/16/2013 2:17 PM, Craig Weinberg wrote:
  


 On Friday, February 15, 2013 7:23:28 PM UTC-5, Stephen Paul King wrote: 

  On 2/15/2013 4:07 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul King 
 wrote: 

  On 2/13/2013 9:41 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King 
 wrote: 

  On 2/13/2013 5:21 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote: 

  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn't Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an 
 'artificial 
 hurricane'. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, 
 etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulate AI or 
 hurricane acts within a simulated world.
  

 AI doesn't need to interact with the real world though. It makes no 
 difference to the AI whether its environment is real or simulated. Just 
 because we can attach a robot to a simulation doesn't change it into an 
 experience of a real world.
  

 Hi Craig,

 I think that you might be making a huge fuss over a difference that 
 does not always make a difference between a public world and a private 
 world! IMHO, that makes the 'real' physical world Real is that we can 
 all 
 agree on its properties (subject to some constraints that matter). Many 
 can 
 point at the tree over there and agree on its height and whether or not it 
 is a deciduous variety.
  

 Why does our agreement mean on something's properties mean anything 
 other than that though?


 Hi Craig,

 Why are you thinking of 'though' in such a minimal way? Don't forget 
 about the 'objects' of those thoughts... The duals...
  

 We might be agreeing here. I thought you were saying that our agreeing on 
 what we observe is a sign that things are 'real', so I was saying that it 
 doesn't have to be a sign of anything, just that reality is the quality of 
 having to agree involuntarily on conditions.
  

 Hi Craig,

 We are stumbling over a subtle issue within semiotics. This video in 
 5 parts is helpful: http://www.youtube.com/watch?v=AxV3ompeJ-Y

  
 Is there something in particular that we're not semiotically square on?
  

 We seem to talk past each other on some details within semiotic 
 theory. For example, what is a 'sign'? 
 http://www.marxists.org/reference/subject/philosophy/works/us/peirce1.htm


In my terms I'll say that a sign is a public form which is intended to 
present a private experience which re-presents another private experience, 
typically in a different sense modality. A sign which is intended to 
signify another form within the same sense modality would be an icon, 
likeness, or simulation.



   
  
   
  We are people living at the same time with human sized bodies, so it 
 would make sense that we would agree on almost everything that involve our 
 bodies.


 We is this we? I am considering any 'object' of system capable of 
 being described by a QM wave function or, more simply, capable of being 
 represented by a semi-complete atomic boolean algebra.
  

 We in this case is you and me. I try to avoid using the word object, 
 since it can be used in a lot of different ways. An object can be anything 
 that isn't the subject. In another sense an object is a publicly accessible 
 body.
  

 I use the word 'object' purposefully. We need to deanthropomorphize 
 the observer! An object is what one observer senses of another (potential) 
 observer.
  

 I agree but would add that we need to demechanemorphize the observed also. 
  

 Mechanisms are zombies, at best, in your thinking, no?


It could maybe be said that mechanisms are to time what signs are to space. 
They are the undeveloped, outsider's view of a sensory-motor interaction. A 
clockwork mechanism, for instance, is a zombie as far as how the clock 
functions for us, both mechanically and as a time-telling sign, but each 
physical part of the clock, the gears, escapement, etc, are made of 
material substances which aren't zombies. On the micro-level, the tension, 
temperature, density, friction, motion, etc, are all experiential on their 
own at some level of description. That level of description is of course 
unfamiliar to human beings, but our representation of it, the sound of 
ticking, the smooth feel and silver look of metal, etc, is I would expect, 
faithful to some extent in presenting the significance of those experiences 
to our own human experience.



 

Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Stathis Papaioannou
On Fri, Feb 15, 2013 at 3:03 PM, Stephen P. King stephe...@charter.net wrote:

 I meant if the table talks to you just like a person does, giving you
 consistently interesting conversation and useful advice on a wide
 variety of subjects. Unless it's a trick and there's a hidden speaker
 somewhere, you would then have to say that the table is intelligent.
 You might speculate as to how the table does it and whether the table
 is conscious, but those are separate questions.


 Who is to say that that table was actually a TV set in the shape of a
 table or a table that had some other means to transmit what would satisfy a
 speech-only Turing test? This goes nowhere, Stathis.

That's why I said unless it's a trick. The same consideration
applies to anything: how do I know that my neighbour isn't a puppet
manipulated by someone else?

 I think you're using the word intelligent in a non-standard way,
 leading to confusion. The first thing to do in any debate is agree on
 the definition of the words.


 Could you define intelligence for us in unambiguous terms? I don't
 recall Craig trying to do that...

I gave an operational definition. One dictionary definition is the
ability to acquire and apply knowledge and skills. It is not
synonymous with consciousness.
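
As a rough sketch of what 'operational' means here (purely illustrative; the
Agent protocol, the judge, and the prompts below are hypothetical placeholders,
not a real test), the check looks only at observable replies and says nothing
about what the responder is made of or whether it is conscious:

    from typing import Callable, List, Protocol

    class Agent(Protocol):
        def respond(self, prompt: str) -> str: ...

    def seems_intelligent(agent: Agent,
                          judge: Callable[[str, str], bool],
                          prompts: List[str]) -> bool:
        # Operational test: score only the agent's observable replies.
        # A person, a talking table, or a program are all admissible here,
        # and passing implies nothing about consciousness.
        # judge() could be a human rating each reply as consistently
        # interesting conversation or useful advice.
        replies = [agent.respond(p) for p in prompts]
        return all(judge(p, r) for p, r in zip(prompts, replies))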


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Stephen P. King

On 2/15/2013 6:26 AM, Stathis Papaioannou wrote:

On Fri, Feb 15, 2013 at 3:03 PM, Stephen P. King stephe...@charter.net wrote:


I meant if the table talks to you just like a person does, giving you
consistently interesting conversation and useful advice on a wide
variety of subjects. Unless it's a trick and there's a hidden speaker
somewhere, you would then have to say that the table is intelligent.
You might speculate as to how the table does it and whether the table
is conscious, but those are separate questions.


 Who is to say that that table was actually a TV set in the shape of a
table or a table that had some other means to transmit what would satisfy a
speech-only Turing test? This goes nowhere, Stathis.

That's why I said unless it's a trick. The same consideration
applies to anything: how do I know that my neighbour isn't a puppet
manipulated by someone else?

Hi Stathis,

Maybe because we (individually) might want to understand (predict) 
the behavior of that neighbour, so that we could trust them?






I think you're using the word intelligent in a non-standard way,
leading to confusion. The first thing to do in any debate is agree on
the definition of the words.


 Could you define intelligence for us in unambiguous terms? I don't
recall Craig trying to do that...

I gave an operational definition. One dictionary definition is the
ability to acquire and apply knowledge and skills. It is not
synonymous with consciousness.



Umm, are you OK with anthropomorphication... ? Let me ask a 
different question: In your opinion, does the universe 'out there' have 
to have properties that match up one-to-one with some finite list of 
propositions that can be encoded in your skull?


--
Onward!

Stephen






Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Bruno Marchal


On 14 Feb 2013, at 22:00, Stephen P. King wrote:


On 2/14/2013 11:20 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular,  
only the sum of whatever is in the original which is not  
accessible in the copy. His phrase 'profound reality' is apt  
though. If you don't experience a profound reality, then you  
might be a p-zombie already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like  if *you*  
believe that a machine is not a zombie, it means that you are a  
zombie yourself.


They will persecute the machines and the humans having a different 
opinion altogether.


Craig reassures me: he is willing to offer steak to my son-in-law 
(who got an artificial brain before marriage).


But with Baudrillard, not only might my son-in-law no longer get his 
steak, but neither might my daughter! Brr...


Bruno


Dear Bruno,

Could you re-write this post? Its wording is unintelligible to 
me. :_(



Craig sums up Baudrillard well with the sentence "If you don't 
experience a profound reality, then you might be a p-zombie already."


That sentence illustrates the willingness not to attribute 
consciousness to a person with a copied, or artificial, brain, as 
such a copy is suspected of not being able to live a profound reality. 
This is like saying: we, the humans with the original carbon brain, 
can live a profound reality, but not the machine; and if you doubt 
that profound reality then *you* are a zombie too.


It reminds me of a fundamentalist of some confessional religion who 
told me "if your machine cannot believe that some man is the son of 
God, then your machine can't think". I told him "and what if I doubt 
that a man is the son of God?". He told me that in that case I can't 
think either ...


This leads to the idea that not only can a machine not be conscious, 
but any human who would claim the contrary is also not conscious.


As I said: brrr...

Bruno





--
Onward!

Stephen





http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Craig Weinberg


On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 9:41 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King wrote: 

  On 2/13/2013 5:21 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote: 

  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn't Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an 'artificial 
 hurricane'. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulate AI or 
 hurricane acts within a simulated world.
  

 AI doesn't need to interact with the real world though. It makes no 
 difference to the AI whether its environment is real or simulated. Just 
 because we can attach a robot to a simulation doesn't change it into an 
 experience of a real world.
  

 Hi Craig,

 I think that you might be making a huge fuss over a difference that 
 does not always make a difference between a public world and a private 
 world! IMHO, that makes the 'real' physical world Real is that we can all 
 agree on its properties (subject to some constraints that matter). Many can 
 point at the tree over there and agree on its height and whether or not it 
 is a deciduous variety.
  

 Why does our agreement on something's properties mean anything other 
 than that, though?


 Hi Craig,

 Why are you thinking of 'though' in such a minimal way? Don't forget 
 about the 'objects' of those thoughts... The duals...


We might be agreeing here. I thought you were saying that our agreeing on 
what we observe is a sign that things are 'real', so I was saying that it 
doesn't have to be a sign of anything, just that reality is the quality of 
having to agree involuntarily on conditions.


  We are people living at the same time with human sized bodies, so it 
 would make sense that we would agree on almost everything that involve our 
 bodies.


 Who is this 'we'? I am considering any 'object' or system capable of 
 being described by a QM wave function or, more simply, capable of being 
 represented by a semi-complete atomic boolean algebra.


We in this case is you and me. I try to avoid using the word object, since 
it can be used in a lot of different ways. An object can be anything that 
isn't the subject. In another sense an object is a publicly accessible body.
 


  You can have a dream with other characters in the dream who point to 
 your dream tree and agree on its characteristics, but upon waking, you are 
 re-oriented to a more real, more tangibly public world with longer and more 
 stable histories.


 Right, it is the 'upon waking' part that is important. Our common 
 'reality' is the part that we can only 'wake up' from when we depart the 
 mortal coil. Have you followed the quantum suicide discussion any?


I haven't been, no.
 


  These qualities are only significant in comparison to the dream though. 
 If you can't remember your waking life, then the dream is real to you, and 
 to the universe through you.
  

 You are assuming a standard that you cannot define. Why? What one 
 observes as 'real' is real to that one, it is not necessarily real to every 
 one else... but there is a huge overlap between our 1p 'realities'. Andrew 
 Soltau has this idea nailed now in his Multisolipsism stuff. ;-)


One can observe that one is observing something that is 'not real' also 
though.
 


  
   
  

  
 By calling it artificial, we also emphasize a kind of obsolete notion of 
 natural vs man-made as categories of origin. 


 Why is the distinction between the natural intelligence of a child and 
 the artificial intelligence of a Mars rover obsolete? The latter is one 
 we create by art, the other is created by nature.
  

 Because we understand now that we are nature and nature is us.


 I disagree! We can fool ourselves into thinking that we 'understand', 
 but what we can do is, at best, form testable explanations of stuff... We 
 are fallible!
  
 I agree, but I don't see how that applies to us being nature.


 We are part of Nature and there is a 'whole-part isomorphism' 
 involved..


Since we are part of nature, there is nothing that we are or do which is 
not nature.
 


  What would it mean to be unnatural? How would an unnatural being find 
 themselves in a natural world?
  

 They can't, unless we invent them... Pink Ponies


Pink Ponies are natural to imagine for our imagination. A square circle 
would be unnatural - which is why we can't imagine it.
 


   
  
  
  We can 

Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Craig Weinberg


On Thursday, February 14, 2013 11:20:12 AM UTC-5, Bruno Marchal wrote:


 On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

 Baudrillard is not talking about consciousness in particular, only the sum 
 of whatever is in the original which is not accessible in the copy. His 
 phrase 'profound reality' is apt though. If you don't experience a profound 
 reality, then you might be a p-zombie already.



 Right!



 Right?

 Here Craig is on the worst slope. It looks almost like: if *you* believe 
 that a machine is not a zombie, it means that you are a zombie yourself.


No, I was saying that if you don't believe that your own experience is 
profoundly real, then you are a zombie yourself.
 


 They will persecute the machines and the humans having a different 
 opinion altogether.

 Craig reassures me: he is willing to offer steak to my son-in-law (who got 
 an artificial brain before marriage).

 But with Baudrillard, not only might my son-in-law no longer get his steak, 
 but neither might my daughter! Brr...


Hahaha. How about your son-in-law gets a simulation of steak which is 
beneath his substitution level? Even better, I just hack into his hardware 
and move one of his memories of eating steak up on the stack so it seems 
very recent. 

Is your brother-in-law racist against simulated steaks as memory implants?

Craig


 Bruno



 http://iridia.ulb.ac.be/~marchal/









Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Craig Weinberg


On Friday, February 15, 2013 12:23:44 AM UTC-5, John Clark wrote:

 On Wed, Feb 13, 2013, Craig Weinberg whats...@gmail.com wrote:

 * *Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?


 Yes that euphemism could have advantages, it might make the last human 
 being feel a little better about himself just before the Jupiter Brain 
 outsmarted him and sent him into oblivion forever.  


Then we had better destroy every circuit on Earth to prevent that from 
happening.
 

  

  By calling it artificial, we also emphasize a kind of obsolete notion 
 of natural vs man-made as categories of origin. 


 What on earth is obsolete about the natural versus man-made dichotomy? The 
 Jupiter brain really was the product of an intelligent designer while the 
 human being was not. 


But the intelligent designer was the product of nature. It's a seamless 
continuum, unless you think that human beings came from some other 
metaphysical universe which is unnatural.

Craig

 


   John K Clark   







Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Stephen P. King

On 2/15/2013 11:12 AM, Bruno Marchal wrote:


On 14 Feb 2013, at 22:00, Stephen P. King wrote:


On 2/14/2013 11:20 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular, only 
the sum of whatever is in the original which is not accessible in 
the copy. His phrase 'profound reality' is apt though. If you 
don't experience a profound reality, then you might be a p-zombie 
already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like  if *you* 
believe that a machine is not a zombie, it means that you are a 
zombie yourself.


They will persecuted the machines and the humans having a different 
opinion altogether.


Craig reassure me. he is willing to offer steak to my sun in law 
(who get an artificial brain before marriage).


But with Baudrillard, not only my sun in law might no more get his 
 steak, but neither my daughter! Brr...


Bruno


Dear Bruno,

Could you re-write this post? Its wording is unintelligible to 
me. :_(



Craig sum up well Baudrillard with the  sentence If you don't 
experience a profound reality, then you might be a p-zombie already.


That sentence illustrate the willingness to not attribute a 
consciousness to a person with a copied, or artificial brain, as such 
copy is suspected not being able to live a profound reality. This is 
like saying, we the human with the original carbon brain, can live 
profound reality, but not the machine, together with and if you doubt 
that profound reality then *you* are a zombie too.


It remind me a fundamentalist of some confessional religion who told 
me if your machine cannot believe that some man is the son of God, 
then your machine can't think. I told him ---and what I doubt that a 
man is the son of God?. he told me that in that case I can't think 
either ...


This leads to the idea that not only a machine cannot be conscious, 
but any human who would pretend the contrary is also not conscious.


As I said: brrr...

Bruno


Ah! I see.. Yeah, Craig seems to have some trouble communicating 
the variability of Sense. It is 1p and thus cannot have a 3p measure, 
so... I feel his pain. I am trying to use the idea of the difference 
between a simulation of X as compared to the real X by a large 
ensemble of observers to parse this distinction to connect with your 
ideas...


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-15 Thread Stephen P. King

On 2/15/2013 4:07 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 11:01:30 PM UTC-5, Stephen Paul King 
wrote:


On 2/13/2013 9:41 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul
King wrote:

On 2/13/2013 5:21 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:

*Wouldn't Simulated Intelligence be a more
appropriate term than Artificial Intelligence?*

Thinking of it objectively, if we have a program which
can model a hurricane, we would call that hurricane a
simulation, not an 'artificial hurricane'. If we
modeled any physical substance, force, or field, we
would similarly say that we had simulated hydrogen or
gravity or electromagnetism, not that we had created
artificial hydrogen, gravity, etc.


No, because the idea of an AI is that it can control a
robot or other machine which interacts with the real
world, whereas a simulate AI or hurricane acts within a
simulated world.


AI doesn't need to interact with the real world though. It
makes no difference to the AI whether its environment is
real or simulated. Just because we can attach a robot to a
simulation doesn't change it into an experience of a real world.


Hi Craig,

I think that you might be making a huge fuss over a
difference that does not always make a difference between a
public world and a private world! IMHO, that makes the 'real'
physical world Real is that we can all agree on its
properties (subject to some constraints that matter). Many
can point at the tree over there and agree on its height and
whether or not it is a deciduous variety.


Why does our agreement mean on something's properties mean
anything other than that though?


Hi Craig,

Why are you thinking of 'though' in such a minimal way? Don't
forget about the 'objects' of those thoughts... The duals...


We might be agreeing here. I thought you were saying that our agreeing 
on what we observe is a sign that things are 'real', so I was saying 
that it doesn't have to be a sign of anything, just that reality is 
the quality of having to agree involuntarily on conditions.


Hi Craig,

We are stumbling over a subtle issue within semiotics. This video 
in 5 parts is helpful: http://www.youtube.com/watch?v=AxV3ompeJ-Y





We are people living at the same time with human sized bodies, so
it would make sense that we would agree on almost everything that
involve our bodies.


We is this we? I am considering any 'object' of system capable
of being described by a QM wave function or, more simply, capable
of being represented by a semi-complete atomic boolean algebra.


We in this case is you and me. I try to avoid using the word object, 
since it can be used in a lot of different ways. An object can be 
anything that isn't the subject. In another sense an object is a 
publicly accessible body.


I use the word 'object' purposefully. We need to deanthropomorphize 
the observer! An object is what one observer senses of another 
(potential) observer.






You can have a dream with other characters in the dream who point
to your dream tree and agree on its characteristics, but upon
waking, you are re-oriented to a more real, more tangibly public
world with longer and more stable histories.


Right, it is the upon waking' part that is important. Our
common 'reality' is the part that we can only 'wake up' from when
we depart the mortal coil. Have you followed the quantum suicide
discussion any?


I haven't been, no.


It is helpful for the understanding of the argument I am making. 
The way that a user of a QS system notices or fails to notice her demise 
is relevant here. The point is that we never sense the switch in the 
off position...






These qualities are only significant in comparison to the dream
though. If you can't remember your waking life, then the dream is
real to you, and to the universe through you.


You are assuming a standard that you cannot define. Why? What
one observes as 'real' is real to that one, it is not necessarily
real to every one else... but there is a huge overlap between our
1p 'realities'. Andrew Soltau has this idea nailed now in his
Multisolipsism stuff. ;-)


One can observe that one is observing something that is 'not real' 
also though.


Exactly, but that is the point I am making. There has to be a 
'real' thing for there to be a simulated thing, no? Or is that just the 
standard tacit assumption of people new to this question?












By calling it artificial, we 

Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Bruno Marchal


On 13 Feb 2013, at 20:44, Craig Weinberg wrote:




On Wednesday, February 13, 2013 12:46:23 PM UTC-5, Bruno Marchal  
wrote:


On 13 Feb 2013, at 17:35, Craig Weinberg wrote:

Wouldn’t Simulated Intelligence be a more appropriate term than  
Artificial Intelligence?


A better term would be natural imagination. But terms are not  
important.


Except that we already have natural imagination, so what would we be  
developing? Replacing something with itself?



Yes. That's what life does all the time.

The distinction between artificial and natural is artificial. Human  
made. And so it is also natural, as all creatures tend to do that by  
developing their ego.


Machines are just a collateral branch of life. Cars and houses are not  
less natural than ribosomes and mitochondria.











Thinking of it objectively, if we have a program which can model a  
hurricane, we would call that hurricane a simulation, not an  
‘artificial hurricane’. If we modeled any physical substance,  
force, or field, we would similarly say that we had simulated  
hydrogen or gravity or electromagnetism, not that we had created  
artificial hydrogen, gravity, etc.


Assuming those things exist.

Whether they exist or not, the mathematically generated model of X  
is simulated X. It could be artificial X as well, but whether X is  
natural or artificial only tells us the nature of its immediate  
developers.


It depends on how you define 'hurricane', and different definitions will  
make different sense in different theories.











By calling it artificial, we also emphasize a kind of obsolete  
notion of natural vs man-made as categories of origin. If we used  
simulated instead, the measure of intelligence would be framed more  
modestly as the degree to which a system meets our expectations (or  
what we think or assume are our expectations). Rather than assuming  
a universal index of intelligent qualities which is independent  
from our own human qualities, we could evaluate the success of a  
particular Turing emulation purely on its merits as a convincing  
reflection of intelligence rather than presuming to have replicated  
an organic conscious experience mechanically.


Comp assumes we are Turing emulable,

Which is why Comp fails. Not only are we not emulable, emulation  
itself is not primitively real - it is a subjective consensus of  
expectations.


It is a well defined arithmetical notion, which comp assumes.





and in that case we can be emulated, trivially.

Comp can't define us,


That's correct.



so it can only emulate the postage stamp sized sampling of some of  
our most exposed, and least meaningful surfaces.


You can't know this. We have to bet on some level, and cannot be sure  
it is correct. But the consequences of comp are extracted from the  
mere existence of the substitution level, not from the (impossible) knowledge  
of it.





Comp is a stencil or silhouette maker. No amount of silhouettes  
pieced together and animated in a sequence can generate an interior  
experience.


You can't say that publicly. You can't pretend to know that. It is  
your non-comp *hypothesis*.





If it did, we would only have to draw a cartoon and it would come to  
life on its own.


That's nonsense. Even for doing something as simple as Watson or  
Big Blue, it takes a lot of work.







To assume this is not possible is to assume the existence of infinite  
processes playing relevant roles in the mind or in life. But it is up  
to you to motivate them. The problem, for you, is that you have  
to speculate on something that we have not yet observed. You can't  
say consciousness, as this would just beg the question.


It is consciousness, and it is not begging the question, since all  
possible questions supervene on consciousness. Not sure what you  
mean about infinite processes or why they would mean that  
simulations can become experiences on their own.


Because any finitely describable process is trivially Turing emulable.
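
(Editorial aside: as an illustration of what "finitely describable, hence
Turing emulable" amounts to, here is a minimal sketch. The toy 'toggle'
process, the function names, and the transition-table encoding are all
invented for this example; it is not part of the original exchange.)

def emulate(transitions, start, inputs):
    # Run a process given entirely by its finite description:
    # a transition table mapping (state, input symbol) -> next state.
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# Finite description of a toy process: a switch that flips on each press.
toggle = {("off", "press"): "on", ("on", "press"): "off"}

print(emulate(toggle, "off", ["press"] * 3))  # -> 'on'

One generic interpreter like emulate() runs any such table, which is the
sense in which having the finite description already gives you the emulation.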











The cost of losing the promise of imminently mastering awareness  
would, I think, be outweighed by the gain of a more scientifically  
circumspect approach.


Invoking infinities is not so circumspect, especially for  
deriving negative statements about the consciousness of possible  
entities.


What infinities do you refer to?


The special one you need to make sense of non-comp.








Putting the Promethean dream on hold, we could guard against the  
shadow of its confirmation bias. My concern is that without such a  
precaution, the promise of machine intelligence as a stage 1  
simulacrum (a faithful copy of an original, in Baudrillard’s  
terms), will be diluted to a stage 3 simulacrum (a copy that masks  
the absence of a profound reality, where the simulacrum pretends to  
be a faithful copy.)


Assuming a non-comp theory, like the quite speculative theory of  
mind by Penrose. Your own proposal fits remarkably with comp, and some  
low level of substitution, it seems to me (we have already discussed this).

Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Wednesday, February 13, 2013 10:38:21 PM UTC-5, stathisp wrote:

 On Thu, Feb 14, 2013 at 2:27 PM, Craig Weinberg 
 whats...@gmail.com 
 wrote: 

  Whether the 
  intelligence has the same associated consciousness or not is a matter 
  for debate, but not the intelligence itself. 
  
  
  I disagree. There is no internal intelligence there at all. Zero. There 
 is a 
  recording of some aspects of human intelligence which can extend human 
  intelligence into extra-human ranges for human users. The computer 
 itself 
  has no extra-human intelligence, just as a telescope itself doesn't see 
  anything, it just helps us see, passively of course. We are the users of 
  technology, technology itself is not a user. 

 I think you're conflating intelligence with consciousness. 


Funny, someone else accused me of the same thing already today:

You're conflating 'real intelligence' with conscious experience.

Real or literal intelligence is a conscious experience as far as we know. 
Metaphorically, we can say that something which is not the result of a 
conscious experience (like evolutionary adaptations in a species) is 
intelligent, but what we mean is that it impresses us as something that 
seems like it could have been the result of intelligent motives. To fail to 
note that intelligence supervenes on consciousness is, in my opinion, 
clearly a Pathetic Fallacy assumption.

 

 If the 
 table talks to you and helps you solve a difficult problem, then by 
 definition the table is intelligent. 


No, you are using your intelligence to turn what comes out of the table's 
mouth into a solution to a difficult problem. If I look at the answers to a 
crossword puzzle in a book, and it helps me solve the crossword puzzle, 
that doesn't mean that the book is intelligent, or that answers are 
intelligent, it just means that something which is intelligent has made 
formations available which my intelligence uses to inform itself.
 

 How the table pulls this off and 
 whether it is conscious or not are separate questions. 


I think that assumption and any deep understanding of either consciousness 
or intelligence are mutually exclusive. Understanding begins when you doubt 
what you have assumed.

Craig
 



 -- 
 Stathis Papaioannou 






Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Bruno Marchal


On 13 Feb 2013, at 23:40, Craig Weinberg wrote:




On Wednesday, February 13, 2013 5:11:32 PM UTC-5, Stephen Paul King  
wrote:

On 2/13/2013 2:58 PM, meekerdb wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:


Wouldn’t Simulated Intelligence be a more appropriate term than  
Artificial Intelligence?


Thinking of it objectively, if we have a program which can model a  
hurricane, we would call that hurricane a simulation, not an  
‘artificial hurricane’. If we modeled any physical substance,  
force, or field, we would similarly say that we had simulated  
hydrogen or gravity or electromagnetism, not that we had created  
artificial hydrogen, gravity, etc.


No, because the idea of an AI is that it can control a robot or  
other machine which interacts with the real world, whereas a  
simulate AI or hurricane acts within a simulated world.


What difference that makes a difference does that make in  
the grand scheme of things? The point is that we cannot 'prove' that  
we are not in a gigantic simulation. Yeah, we cannot prove a  
negative, but we can extract a lot of valuable insights and maybe  
some predictions from the assumption that 'reality = best possible  
simulation'.


I just realized how to translate that into my view: Reality =  
making the most sense possible. Same thing really. That's why I  
talk about multisense Realism, with Realism being the quality of  
maximum unfiltered sense. Since sense is subtractive, the more  
senses you have overlapping and diverging, the less there is that  
you are missing. Reality = nothing is missing (i.e. only possible at  
the Absolute level), Realism = you can't tell that anything is  
missing from your perceptual capacity/inertial frame/simulation.


I don't like the word simulation per se, because I think that  
anything the idea of a Matrix universe does for us would be negated  
by the idea that the simulation eventually has to run on something  
which is not a simulation, otherwise the word has no meaning. Either  
way, the notion of simulation doesn't make any of the big questions  
more answerable, even if it is locally true for us.


Emulation and simulation are arithmetical notions. And with comp, even  
physical emulation, well, it is no longer entirely arithmetical, but  
it is still explained entirely in arithmetical terms (an infinity of  
them).


Bruno








Craig






By calling it artificial, we also emphasize a kind of obsolete  
notion of natural vs man-made as categories of origin.


Why is the distinction between the natural intelligence of a child  
and the artificial intelligence of a Mars rover obsolete? The  
latter is one we create by art, the other is created by nature.


If we used simulated instead, the measure of intelligence would be  
framed more modestly as the degree to which a system meets our  
expectations (or what we think or assume are our expectations).  
Rather than assuming a universal index of intelligent qualities  
which is independent from our own human qualities,


But if we measure intelligence strictly relative to human  
intelligence we will be saying that visual pattern recognition is  
intelligence but solving Navier-Stokes equations is not. This is  
the anthropocentrism that continually demotes whatever computers  
can do as not really intelligent even when it was regarded as the  
apotheosis of intelligence *before* computers could do it.


we could evaluate the success of a particular Turing emulation  
purely on its merits as a convincing reflection of intelligence


But there is no one-dimensional measure of intelligence - it's just  
competence in many domains.


rather than presuming to have replicated an organic conscious  
experience mechanically.


I don't think that's a presumption. It's an inference from the  
incoherence of the idea of a philosophical zombie.




The cost of losing the promise of imminently mastering awareness  
would, I think, be outweighed by the gain of a more scientifically  
circumspect approach. Putting the Promethean dream on hold, we  
could guard against the shadow of its confirmation bias. My  
concern is that without such a precaution, the promise of machine  
intelligence as a stage 1 simulacrum (a faithful copy of an  
original, in Baudrillard’s terms), will be diluted to a stage 3  
simulacrum (a copy that masks the absence of a profound reality,  
where the simulacrum pretends to be a faithful copy.)


The assumption that there is a 'profound reality' is what Stathis  
showed to be 'magic'.


Brent


Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Bruno Marchal


On 13 Feb 2013, at 23:51, Stephen P. King wrote:


On 2/13/2013 5:40 PM, Craig Weinberg wrote:
[SPK wrote] What difference that makes a difference does that make  
in the grand scheme of things? The point is that we cannot 'prove'  
that we are not in a gigantic simulation. Yeah, we cannot prove a  
negative, but we can extract a lot of valuable insights and maybe  
some predictions from the assumption that 'reality = best possible  
simulation'.


I just realized how to translate that into my view: Reality =  
making the most sense possible. Same thing really. That's why I  
talk about multisense Realism, with Realism being the quality of  
maximum unfiltered sense. Since sense is subtractive, the more  
senses you have overlapping and diverging, the less there is that  
you are missing. Reality = nothing is missing (i.e. only possible  
at the Absolute level), Realism = you can't tell that anything is  
missing from your perceptual capacity/inertial frame/simulation.


I don't like the word simulation per se, because I think that  
anything the idea of a Matrix universe does for us would be negated  
by the idea that the simulation eventually has to run on something  
which is not a simulation, otherwise the word has no meaning.  
Either way, the notion of simulation doesn't make any of the big  
questions more answerable, even if it is locally true for us.


Craig


I like the idea of a Matrix universe exactly for that reason; it  
takes resources to 'run' it. No free lunch, even for universes!!!


No free lunch indeed, but the arithmetical lunch becomes enough to  
explain consciousness and matter, in a sufficiently precise way to be  
tested.


Bruno





--
Onward!

Stephen





http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 10:49 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:51, Stephen P. King wrote:


On 2/13/2013 5:40 PM, Craig Weinberg wrote:


[SPK wrote] What difference that makes a difference does that
make in the grand scheme of things? The point is that we cannot
'prove' that we are not in a gigantic simulation. Yeah, we
cannot prove a negative, but we can extract a lot of valuable
insights and maybe some predictions from the assumption that
'reality = best possible simulation'.


I just realized how to translate that into my view: Reality = 
making the most sense possible. Same thing really. That's why I 
talk about multisense Realism, with Realism being the quality of 
maximum unfiltered sense. Since sense is subtractive, the more 
senses you have overlapping and diverging, the less there is that 
you are missing. Reality = nothing is missing (i.e. only possible at 
the Absolute level), Realism = you can't tell that anything is 
missing from your perceptual capacity/inertial frame/simulation.


I don't like the word simulation per se, because I think that 
anything the idea of a Matrix universe does for us would be negated 
by the idea that the simulation eventually has to run on something 
which is not a simulation, otherwise the word has no meaning. Either 
way, the notion of simulation doesn't make any of the big questions 
more answerable, even if it is locally true for us.


Craig


I like the idea of a Matrix universe exactly for that reason; it 
takes resources to 'run' it. No free lunch, even for universes!!!


No free lunch indeed, but the arithmetical lunch becomes enough to 
explain consciousness and matter, in a sufficiently precise way to be 
tested.


Bruno




Hi Bruno,

But explanations are not realities, even if people think of them as 
such.


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Bruno Marchal


On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular, only  
the sum of whatever is in the original which is not accessible in  
the copy. His phrase 'profound reality' is apt though. If you don't  
experience a profound reality, then you might be a p-zombie already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like  if *you*  
believe that a machine is not a zombie, it means that you are a zombie  
yourself.


They will persecuted the machines and the humans having a different  
opinion altogether.


Craig reassure me. he is willing to offer steak to my sun in law (who  
get an artificial brain before marriage).


But with Baudrillard, not only my sun in law might no more get his   
steak, but neither my daughter! Brr...


Bruno



http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Bruno Marchal


On 14 Feb 2013, at 17:02, Stephen P. King wrote:


On 2/14/2013 10:49 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:51, Stephen P. King wrote:


On 2/13/2013 5:40 PM, Craig Weinberg wrote:
[SPK wrote] What difference that makes a difference does that  
make in the grand scheme of things? The point is that we cannot  
'prove' that we are not in a gigantic simulation. Yeah, we cannot  
prove a negative, but we can extract a lot of valuable insights  
and maybe some predictions from the assumption that 'reality =  
best possible simulation'.


I just realized how to translate that into my view: Reality =  
making the most sense possible. Same thing really. That's why I  
talk about multisense Realism, with Realism being the quality of  
maximum unfiltered sense. Since sense is subtractive, the more  
senses you have overlapping and diverging, the less there is that  
you are missing. Reality = nothing is missing (i.e. only possible  
at the Absolute level), Realism = you can't tell that anything is  
missing from your perceptual capacity/inertial frame/simulation.


I don't like the word simulation per se, because I think that  
anything the idea of a Matrix universe does for us would be  
negated by the idea that the simulation eventually has to run on  
something which is not a simulation, otherwise the word has no  
meaning. Either way, the notion of simulation doesn't make any of  
the big questions more answerable, even if it is locally true for  
us.


Craig


I like the idea of a Matrix universe exactly for that reason;  
it takes resources to 'run' it. No free lunch, even for universes!!!


No free lunch indeed, but the arithmetical lunch becomes enough to  
explain consciousness and matter, in a sufficiently precise way to be  
tested.


Bruno




Hi Bruno,

But explanations are not realities, even if people think of them  
as such.


Explanations are like taxes and death: they are part of the  
arithmetical realities, when seen from inside. Of course explanations  
of reality are not the reality itself.


Bruno





--
Onward!

Stephen





http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 11:20 AM, Bruno Marchal wrote:


On 13 Feb 2013, at 23:37, Stephen P. King wrote, to Craig Weinberg

Baudrillard is not talking about consciousness in particular, only 
the sum of whatever is in the original which is not accessible in 
the copy. His phrase 'profound reality' is apt though. If you don't 
experience a profound reality, then you might be a p-zombie already.





Right!



Right?

Here Craig is on the worst slope. It looks almost like  if *you* 
believe that a machine is not a zombie, it means that you are a zombie 
yourself.


They will persecuted the machines and the humans having a different 
opinion altogether.


Craig reassure me. he is willing to offer steak to my sun in law (who 
get an artificial brain before marriage).


But with Baudrillard, not only my sun in law might no more get his 
 steak, but neither my daughter! Brr...


Bruno


Dear Bruno,

Could you re-write this post? Its wording is unintelligible to me. :_(

--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Wednesday, February 13, 2013 10:46:26 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 8:09 PM, Craig Weinberg wrote:
  
  [SPK wrote: ]I like the idea of a Matrix universe exactly for that 
 reason; it takes resources to 'run' it. No free lunch, even for universes!!!
  

 You can still have the idea of resources if the universe isn't a 
 simulation though. No particular diffraction tree within the supreme monad 
 can last as long as the Absolute diffraction, so the clock is always 
 running and every motive carries risk.
  

 Right, but since we do have the resources, why not assume that the 
 Matrix is up and running on them already? 


I don't see the advantage of a Matrix running on a non-Matrix vs just a 
non-Matrix totality though.
 

 The fun thing is that if we have both then we have a nice solution to both 
 the mind (for matter) and body (for comp) problems. There can be no 
 'supreme monad' as such would be equivalent to a preferred frame and basis. 
 The totality of all that exists is not a hierarchy, it is a fractal network.


The supreme monad is just everything which is undiffracted, i.e. the single 
thread that the whole tapestry of tapestries is made of...which is itself 
one giant (or infinitesimally small) tapestry seed. Size isn't relevant 
because size is part of the tapestry, not the thread.

Craig


 -- 
 Onward!

 Stephen

  





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 5:45 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 10:46:26 PM UTC-5, Stephen Paul King 
wrote:


On 2/13/2013 8:09 PM, Craig Weinberg wrote:


[SPK wrote: ]I like the idea of a Matrix universe exactly for
that reason; it takes resources to 'run' it. No free lunch,
even for universes!!!


You can still have the idea of resources if the universe isn't a
simulation though. No particular diffraction tree within the
supreme monad can last as long as the Absolute diffraction, so
the clock is always running and every motive carries risk.


Right, but since we do have the resources, why not assume that
the Matrix is up and running on them already?


I don't see the advantage of a Matrix running on a non-Matrix vs just 
a non-Matrix totality though.
ACK! 
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS0oSEgcZZVrascuppptCDDVSONLD2DxKE-JGirCvuRag8-LT3o


You sound like Dennett, defending material monism! Or, to be more 
charitable, flattening the infinite levels of the transduction into a 
single fabric. Don't do that! The 'non-Matrix' is the level for a given 
1p that cannot be deformed. It is the point where the model of the 
system is the system.




The fun thing is that if we have both then we have a nice solution
to both the mind (for matter) and body (for comp) problems. There
can be no 'supreme monad' as such would be equivalent to a
preferred frame and basis. The totality of all that exists is not
a hierarchy, it is a fractal network.


The supreme monad is just everything which is undiffracted, i.e. the 
single thread that the whole tapestry of tapestries is made of...which 
is itself one giant (or infinitesimally small) tapestry seed. Size 
isn't relevant because size is part of the tapestry, not the thread.


Craig


OK, but can you see that what you are talking about (the Supreme 
Monad) is a giant monism? We need to cover both sides, the dual aspects. 
As I see it, when we jump up to a Supreme Monad we are required to fuzz 
out all distinctions that are relevant at the 1p level. The Sense of the 
Supreme monad is an undistinguished Nothing. It cannot have any 
particular features or properties.



--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Thursday, February 14, 2013 6:03:51 PM UTC-5, Stephen Paul King wrote:

  On 2/14/2013 5:45 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 10:46:26 PM UTC-5, Stephen Paul King 
 wrote: 

  On 2/13/2013 8:09 PM, Craig Weinberg wrote:
  
  [SPK wrote: ]I like the idea of a Matrix universe exactly for that 
 reason; it takes resources to 'run' it. No free lunch, even for universes!!!
  

 You can still have the idea of resources if the universe isn't a 
 simulation though. No particular diffraction tree within the supreme monad 
 can last as long as the Absolute diffraction, so the clock is always 
 running and every motive carries risk.
  

 Right, but since we do have the resources, why not assume that the 
 Matrix is up and running on them already? 


 I don't see the advantage of a Matrix running on a non-Matrix vs just a 
 non-Matrix totality though.
  
 ACK!https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcS0oSEgcZZVrascuppptCDDVSONLD2DxKE-JGirCvuRag8-LT3o

 You sound like Dennett, defending material monism! 


Not material, experience.
 

 Or, to be more charitable, flattening the infinite levels of the 
 transduction into a single fabric. Don't do that! 


The fabric is figurative - I'm just talking about the unity of all sense 
being more primordial than space or time.
 

 The 'non-Matrix' is the level for a given 1p that cannot be deformed. It 
 is the point where the model of the system is the system. 


I don't think there are any models or systems at all. Not physically. There 
are only presentations and re-presentations. Habits and inertia.

Craig
 


   
  
 The fun thing is that if we have both then we have a nice solution to 
 both the mind (for matter) and body (for comp) problems. There can be no 
 'supreme monad' as such would be equivalent to a preferred frame and basis. 
 The totality of all that exists is not a hierarchy, it is a fractal network.
  

 The supreme monad is just everything which is undiffracted, i.e. the 
 single thread that the whole tapestry of tapestries is made of...which is 
 itself one giant (or infinitesimally small) tapestry seed. Size isn't 
 relevant because size is part of the tapestry, not the thread.

 Craig
  

 OK, but can you see that what you are talking about (the Supreme 
 Monad) is a giant monism? We need to cover both sides, the dual aspects. As 
 I see it, when we jump up to a Supreme Monad we are required to fuzz out 
 all distinctions that are relevant at the 1p level. The Sense of the 
 Supreme monad is an undistinguished Nothing. It cannot have any particular 
 features or properties.


 -- 
 Onward!

 Stephen

  





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stathis Papaioannou
On Fri, Feb 15, 2013 at 1:08 AM, Craig Weinberg whatsons...@gmail.com wrote:

 I think you're conflating intelligence with consciousness.


 Funny, someone else accused me of the same thing already today:

 You're conflating 'real intelligence' with conscious experience.

 Real or literal intelligence is a conscious experience as far as we know.
 Metaphorically, we can say that something which is not the result of a
 conscious experience (like evolutionary adaptations in a species) is
 intelligent, but what we mean is that it impresses us as something that
 seems like it could have been the result of intelligent motives. To fail to
 note that intelligence supervenes on consciousness is, in my opinion,
 clearly a Pathetic Fallacy assumption.

If I move my arm, that is a behaviour. The behaviour has an associated
experience. The behaviour and the experience are not the same thing,
even if it turns out that you can't have one without the other. It's a
question of correct use of the English language.

 If the
 table talks to you and helps you solve a difficult problem, then by
 definition the table is intelligent.


 No, you are using your intelligence to turn what comes out of the table's
 mouth into a solution to a difficult problem. If I look at the answers to a
 crossword puzzle in a book, and it helps me solve the crossword puzzle, that
 doesn't mean that the book is intelligent, or that answers are intelligent,
 it just means that something which is intelligent has made formations
 available which my intelligence uses to inform itself.

I meant if the table talks to you just like a person does, giving you
consistently interesting conversation and useful advice on a wide
variety of subjects. Unless it's a trick and there's a hidden speaker
somewhere, you would then have to say that the table is intelligent.
You might speculate as to how the table does it and whether the table
is conscious, but those are separate questions.

 How the table pulls this off and
 whether it is conscious or not are separate questions.


 I think that assumption and any deep understanding of either consciousness
 or intelligence are mutually exclusive. Understanding begins when you doubt
 what you have assumed.

I think you're using the word intelligent in a non-standard way,
leading to confusion. The first thing to do in any debate is agree on
the definition of the words.


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 6:08 PM, Craig Weinberg wrote:
I don't think there are any models or systems at all. Not physically. 
There are only presentations and re-presentations. Habits and inertia.


I agree, they cannot be physical at all, they are representations 
not things-in-themselves (objects). The trick is to see the difference 
between the general properties of representations and objects while not 
thinking of them as separable. For any object there exists at least one 
representation and for every representation there exists at least one 
object. This sets up the isomorphism of the Stone duality.
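
(Editorial aside: the finite case of the duality invoked here can be made
concrete. In a finite Boolean algebra of sets, every element is recovered as
the union of the atoms beneath it, so the algebra (the 'representation') and
its set of atoms (the 'points' of the dual object) determine each other. The
sketch below is illustrative only and assumes the finite case; full Stone
duality pairs arbitrary Boolean algebras with Stone spaces via ultrafilters,
which this toy example does not attempt.)

from itertools import combinations

def powerset(points):
    # Every subset of a finite set of points, as frozensets.
    pts = list(points)
    return [frozenset(c) for r in range(len(pts) + 1) for c in combinations(pts, r)]

def atoms(algebra):
    # Minimal non-empty elements: the candidate 'points' of the dual space.
    nonempty = [a for a in algebra if a]
    return [a for a in nonempty if not any(b < a for b in nonempty)]

# A finite Boolean algebra: all subsets of a three-point 'space'.
algebra = powerset({"x", "y", "z"})

# Each element is exactly the union of the atoms it contains.
for a in algebra:
    below = [at for at in atoms(algebra) if at <= a]
    assert a == (frozenset().union(*below) if below else frozenset())

print(sorted(map(set, atoms(algebra)), key=sorted))  # -> [{'x'}, {'y'}, {'z'}]

In the infinite case the role of the atoms is played by ultrafilters, which
is why the general statement needs Stone spaces rather than plain sets.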


--
Onward!

Stephen






Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Thursday, February 14, 2013 6:52:21 PM UTC-5, Stephen Paul King wrote:

 On 2/14/2013 6:08 PM, Craig Weinberg wrote: 
  I don't think there are any models or systems at all. Not physically. 
  There are only presentations and re-presentations. Habits and inertia. 

  I agree, they cannot be physical at all, they are representations 
 not things-in-themselves (objects). The trick is to see the difference 
 between the general properties of representations and objects while not 
 thinking of them as separable. For any object there exists at least one 
 representation and for every representation there exists at least one 
 object. This sets up the isomorphism of the Stone duality. 


I'm on board with that, but I think to complete the picture, both the 
subjective representations (models) and objective representations (objects) 
should be understood to exist only through subjective presentations 
(sense). The isomorphism of the Stone duality requires sense to relate 
topologies to algebras, i.e. they don't relate to each other directly and 
independently of an observer. The duality is a reflection of the observer's 
capacity to observe.

Craig
 


 -- 
 Onward! 

 Stephen 








Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Thursday, February 14, 2013 6:45:27 PM UTC-5, stathisp wrote:

 On Fri, Feb 15, 2013 at 1:08 AM, Craig Weinberg 
 whats...@gmail.com 
 wrote: 

  I think you're conflating intelligence with consciousness. 
  
  
  Funny, someone else accused me of the same thing already today: 
  
  You're conflating 'real intelligence' with conscious experience. 
  
  Real or literal intelligence is a conscious experience as far as we 
 know. 
  Metaphorically, we can say that something which is not the result of a 
  conscious experience (like evolutionary adaptations in a species) is 
  intelligent, but what we mean is that it impresses us as something that 
  seems like it could have been the result of intelligent motives. To fail 
 to 
  note that intelligence supervenes on consciousness is, in my opinion, 
  clearly a Pathetic Fallacy assumption. 

 If I move my arm, that is a behaviour. The behaviour has an associated 
 experience. The behaviour and the experience are not the same thing, 
 even if it turns out that you can't have one without the other. It's a 
 question of correct use of the English language. 


They are both the same thing and not the same thing. Moving your arm is 
exactly what it is before being linguistically deconstructed - a united 
private-public physical participation.
 


  If the 
  table talks to you and helps you solve a difficult problem, then by 
  definition the table is intelligent. 
  
  
  No, you are using your intelligence to turn what comes out of the table's 
  mouth into a solution to a difficult problem. If I look at the answers to 
 a 
  crossword puzzle in a book, and it helps me solve the crossword puzzle, 
 that 
  doesn't mean that the book is intelligent, or that answers are 
 intelligent, 
  it just means that something which is intelligent has made formations 
  available which my intelligence uses to inform itself. 

 I meant if the table talks to you just like a person does, giving you 
 consistently interesting conversation and useful advice on a wide 
 variety of subjects. 

 
Why would it matter how convincing the simulation seems?

Unless it's a trick and there's a hidden speaker 
 somewhere, you would then have to say that the table is intelligent. 


It's not a hidden speaker, it is a collection of modular recordings which 
are strung together to match the criteria of canned algorithms. We do not 
at all have to say the table is intelligent. To the contrary, computers are 
literally less intelligent than a rock.

You might speculate as to how the table does it and whether the table 
 is conscious, but those are separate questions. 


The only thing to speculate on is whether there is reason to suspect that 
the table has been designed specifically to convince you into believing it 
is intelligent, or feeling comfortable pretending that it is intelligent.
 


  How the table pulls this off and 
  whether it is conscious or not are separate questions. 
  
  
  I think that assumption and any deep understanding of either 
 consciousness 
  or intelligence are mutually exclusive. Understanding begins when you 
 doubt 
  what you have assumed. 

 I think you're using the word intelligent in a non-standard way, 
 leading to confusion. The first thing to do in any debate is agree on 
 the definition of the words. 


I think that any debate that even considers word definitions to be real is 
a waste of time.

Craig
 



 -- 
 Stathis Papaioannou 






Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 6:45 PM, Stathis Papaioannou wrote:

On Fri, Feb 15, 2013 at 1:08 AM, Craig Weinberg whatsons...@gmail.com wrote:


I think you're conflating intelligence with consciousness.


Funny, someone else accused me of the same thing already today:

You're conflating 'real intelligence' with conscious experience.

Real or literal intelligence is a conscious experience as far as we know.
Metaphorically, we can say that something which is not the result of a
conscious experience (like evolutionary adaptations in a species) is
intelligent, but what we mean is that it impresses us as something that
seems like it could have been the result of intelligent motives. To fail to
note that intelligence supervenes on consciousness is, in my opinion,
clearly a Pathetic Fallacy assumption.

If I move my arm, that is a behaviour. The behaviour has an associated
experience. The behaviour and the experience are not the same thing,
even if it turns out that you can't have one without the other. It's a
question of correct use of the English language.


If the
table talks to you and helps you solve a difficult problem, then by
definition the table is intelligent.


No, you are using your intelligence to turn what comes out of the table's
mouth into a solution to a difficult problem. If I look at the answers to a
crossword puzzle in a book, and it helps me solve the crossword puzzle, that
doesn't mean that the book is intelligent, or that answers are intelligent,
it just means that something which is intelligent has made formations
available which my intelligence uses to inform itself.

I meant if the table talks to you just like a person does, giving you
consistently interesting conversation and useful advice on a wide
variety of subjects. Unless it's a trick and there's a hidden speaker
somewhere, you would then have to say that the table is intelligent.
You might speculate as to how the table does it and whether the table
is conscious, but those are separate questions.


    Who is to say whether that table was actually a TV set in the shape of 
a table or a table that had some other means to transmit what would 
satisfy a speech-only Turing test? This goes nowhere, Stathis.






How the table pulls this off and
whether it is conscious or not are separate questions.


I think that assumption and any deep understanding of either consciousness
or intelligence are mutually exclusive. Understanding begins when you doubt
what you have assumed.

I think you're using the word intelligent in a non-standard way,
leading to confusion. The first thing to do in any debate is agree on
the definition of the words.



Could you define intelligence for us in unambiguous terms? I 
don't recall Craig trying to do that...




--
Onward!

Stephen






Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 9:43 PM, Craig Weinberg wrote:



On Thursday, February 14, 2013 6:52:21 PM UTC-5, Stephen Paul King wrote:

On 2/14/2013 6:08 PM, Craig Weinberg wrote:
 I don't think there are any models or systems at all. Not
physically.
 There are only presentations and re-presentations. Habits and
inertia.

 I agree, they cannot be physical at all, they are
representations
not things-in-themselves (objects). The trick is to see the
difference
between the general properties of representations and objects
while not
thinking of they as separable. For any object there exist at least
one
representation and for every representation there exists at least one
object. This sets up the isomorphism of the Stone duality.


I'm on board with that, but I think to complete the picture, both the 
subjective representations (models) and objective representations 
(objects) should be understood to exist only through subjective 
presentations (sense). The isomorphism of the Stone duality requires 
sense to relate topologies to algebras, i.e. they don't relate to each 
other directly and independently of an observer. The duality is a 
reflection of the observer's capacity to observe.


Craig


OK, let's take it to the next step. Let us agree that they don't 
relate to each other directly and independently of an observer, they 
being represented as X and Y. Does this require that there does not 
exist an observer Z that can see both of X's and Y's total world lines 
simultaneously? If the world line of Z is longer than that of X and Y by 
some number then they would be able to communicate directly (well you 
know what I mean) and thus be able to come to some complete agreement 
that Z knows all about X and Y.
Could Z be said to 'know' a representation of the life and times of 
X and Y?


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Craig Weinberg


On Thursday, February 14, 2013 11:17:08 PM UTC-5, Stephen Paul King wrote:

  On 2/14/2013 9:43 PM, Craig Weinberg wrote:
  


 On Thursday, February 14, 2013 6:52:21 PM UTC-5, Stephen Paul King wrote: 

 On 2/14/2013 6:08 PM, Craig Weinberg wrote: 
  I don't think there are any models or systems at all. Not physically. 
  There are only presentations and re-presentations. Habits and inertia. 

  I agree, they cannot be physical at all, they are representations 
 not things-in-themselves (objects). The trick is to see the difference 
 between the general properties of representations and objects while not 
 thinking of them as separable. For any object there exists at least one 
 representation and for every representation there exists at least one 
 object. This sets up the isomorphism of the Stone duality. 


 I'm on board with that, but I think to complete the picture, both the 
 subjective representations (models) and objective representations (objects) 
 should be understood to exist only through subjective presentations 
 (sense). The isomorphism of the Stone duality requires sense to relate 
 topologies to algebras, i.e. they don't relate to each other directly and 
 independently of an observer. The duality is a reflection of the observer's 
 capacity to observe.

 Craig 
  

 OK, let's take it to the next step. Let us agree that they don't 
 relate to each other directly and independently of an observer, they being 
 represented as X and Y. Does this require that there does not exist an 
 observer Z that can see both of X's and Y's total world lines 
 simultaneously? If the world line of Z is longer than that of X and Y by 
 some number then they would be able to communicate directly (well you know 
 what I mean) and thus be able to come to some complete agreement that Z 
 knows all about X and Y. 
 Could Z be said to 'know' a representation of the life and times of X 
 and Y?


Like to you (Z), I am histories of experiences which are associated with me 
(Y) and I am a body which is located right now in a house in North Carolina 
(X). Your Y is private, but your X is much more public - I am a body in a 
house in NC to any Z who is a person, dog, cat, etc. Not to a plant really, 
or a molecule; to those distant kinds of Z, I don't exist at all.

Everyone's XY for me put together adds up to basically (Absolute minus Z). 
My Z is what is being borrowed from the Absolute inertial frame 
temporarily, and my XY is like the shadow that it casts. It's complicated 
of course, because all of the X, Y, and Z feedback multiple loops on each 
other too. Very pretzely.

Craig


 -- 
 Onward!

 Stephen

  





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/14/2013 11:34 PM, Craig Weinberg wrote:



On Thursday, February 14, 2013 11:17:08 PM UTC-5, Stephen Paul King 
wrote:


On 2/14/2013 9:43 PM, Craig Weinberg wrote:



On Thursday, February 14, 2013 6:52:21 PM UTC-5, Stephen Paul
King wrote:

On 2/14/2013 6:08 PM, Craig Weinberg wrote:
 I don't think there are any models or systems at all. Not
physically.
 There are only presentations and re-presentations. Habits
and inertia.

 I agree, they cannot be physical at all, they are
representations
not things-in-themselves (objects). The trick is to see the
difference
between the general properties of representations and objects
while not
thinking of them as separable. For any object there exists at
least one
representation and for every representation there exists at
least one
object. This sets up the isomorphism of the Stone duality.


I'm on board with that, but I think to complete the picture, both
the subjective representations (models) and objective
representations (objects) should be understood to exist only
through subjective presentations (sense). The isomorphism of the
Stone duality requires sense to relate topologies to algebras,
i.e. they don't relate to each other directly and independently
of an observer. The duality is a reflection of the observer's
capacity to observe.

Craig


OK, let's take it to the next step. Let us agree that they
don't relate to each other directly and independently of an
observer, they being represented as X and Y. Does this require
that there does not exist an observer Z that can see both of X's
and Y's total world lines simultaneously? If the world line of Z
is longer than that of X and Y by some number then they would be
able to communicate directly (well you know what I mean) and thus
be able to come to some complete agreement that Z knows all about
X and Y.
Could Z be said to 'know' a representation of the life and
times of X and Y?


Like to you (Z), I am histories of experiences which are associated 
with me (Y) and I am a body which is located right now in a house in 
North Carolina (X). Your Y is private, but your X is much more public 
- I am a body in a house in NC to any Z who is a person, dog, cat, 
etc. Not to a plant really, or a molecule, to those distant kinds of 
Z, I don't exist at all.


Craig,

Right, exactly right! From Z and X, Y is a p-zombie, a physical 
mindless robot. What does X see of Z and Y? The same kinda thing. And Y, 
what does it see? Seeing is within Sense...




Everyone's XY for me put together adds up to basically (Absolute minus Z).


Only if I stipulate that only X, Y and Z exist would I agree. If 
there are, say, 10^23 witnesses, like Z and X are of Y's physical acts, 
what difference would that make? None! So long as all of these witnesses 
could back up each other's narratives.



My Z is what is being borrowed from the Absolute inertial frame 
temporarily, and my XY is like the shadow that it casts. It's 
complicated of course, because all of the X, Y, and Z feedback 
multiple loops on each other too. Very pretzely.


You are assuming that one of those p's is absolute in some way. None 
are; all cast shadows equivalently on each other or they would not 
co-exist at all.




Craig


-- 
Onward!


Stephen






--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread John Clark
On Wed, Feb 13, 2013  Craig Weinberg whatsons...@gmail.com wrote:

* *Wouldn’t Simulated Intelligence be a more appropriate term than
 Artificial Intelligence?


Yes, that euphemism could have advantages; it might make the last human
being feel a little better about himself just before the Jupiter Brain
outsmarted him and sent him into oblivion forever.


  By calling it artificial, we also emphasize a kind of obsolete notion of
 natural vs man-made as categories of origin.


What on earth is obsolete about the natural versus man-made dichotomy? The
Jupiter brain really was the product of an intelligent designer while the
human being was not.

  John K Clark





Re: Simulated Intelligence Mini-Manifesto

2013-02-14 Thread Stephen P. King

On 2/15/2013 12:23 AM, John Clark wrote:
On Wed, Feb 13, 2013  Craig Weinberg whatsons...@gmail.com wrote:


* *Wouldn’t Simulated Intelligence be a more appropriate term
than Artificial Intelligence?


Yes that euphemism could have advantages, it might make the last human 
being feel a little better about himself just before the Jupiter Brain 
outsmarted him and sent him into oblivion forever.


 By calling it artificial, we also emphasize a kind of obsolete
notion of natural vs man-made as categories of origin.


What on earth is obsolete about the natural versus man-made dichotomy? 
The Jupiter brain really was the product of an intelligent designer 
while the human being was not.


Hi John,

  'The Jupiter brain really was the product of an intelligent designer 
while the human being was not.' How could you know for sure?


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Bruno Marchal


On 13 Feb 2013, at 17:35, Craig Weinberg wrote:

Wouldn’t Simulated Intelligence be a more appropriate term than  
Artificial Intelligence?


A better term would be natural imagination. But terms are not  
important.






Thinking of it objectively, if we have a program which can model a  
hurricane, we would call that hurricane a simulation, not an  
‘artificial hurricane’. If we modeled any physical substance, force,  
or field, we would similarly say that we had simulated hydrogen or  
gravity or electromagnetism, not that we had created artificial  
hydrogen, gravity, etc.


Assuming those things exist.





By calling it artificial, we also emphasize a kind of obsolete  
notion of natural vs man-made as categories of origin. If we used  
simulated instead, the measure of intelligence would be framed more  
modestly as the degree to which a system meets our expectations (or  
what we think or assume are our expectations). Rather than assuming  
a universal index of intelligent qualities which is independent from  
our own human qualities, we could evaluate the success of a  
particular Turing emulation purely on its merits as a convincing  
reflection of intelligence rather than presuming to have replicated  
an organic conscious experience mechanically.


Comp assumes we are Turing emulable, and in that case we can be  
emulated, trivially. To assume this is not possible is to assume the  
existence of infinite processes playing relevant roles in the mind or  
in life. But it is up to you to motivate them. The problem, for you,  
is that you have to speculate on something that we have not yet  
observed. You can't say consciousness, as this would just beg the  
question.






The cost of losing the promise of imminently mastering awareness  
would, I think, be outweighed by the gain of a more scientifically  
circumspect approach.


Invoking infinities is not so circumspect, especially when used to drive a  
negative statement about the consciousness of possible entities.




Putting the Promethean dream on hold, we could guard against the  
shadow of its confirmation bias. My concern is that without such a  
precaution, the promise of machine intelligence as a stage 1  
simulacrum (a faithful copy of an original, in Baudrillard’s terms),  
will be diluted to a stage 3 simulacrum (a copy that masks the  
absence of a profound reality, where the simulacrum pretends to be a  
faithful copy.)


That assumes a non-comp theory, like the quite speculative theory of mind  
by Penrose. Your own proposal fits remarkably well with comp, and some low  
level of substitution, it seems to me (we have already discussed this).


Bruno


http://iridia.ulb.ac.be/~marchal/







Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 12:46:23 PM UTC-5, Bruno Marchal wrote:


 On 13 Feb 2013, at 17:35, Craig Weinberg wrote:

 *Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*


 A better term would be natural imagination. But terms are not important. 


Except that we already have natural imagination, so what would we be 
developing? Replacing something with itself?
 





 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an ‘artificial 
 hurricane’. If we modeled any physical substance, force, or field, we would 
 similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, etc.


 Assuming those things exist.


Whether they exist or not, the mathematically generated model of X is 
simulated X. It could be artificial X as well, but whether X is natural or 
artificial only tells us the nature of its immediate developers. 





 By calling it artificial, we also emphasize a kind of obsolete notion of 
 natural vs man-made as categories of origin. If we used simulated instead, 
 the measure of intelligence would be framed more modestly as the degree to 
 which a system meets our expectations (or what we think or assume are our 
 expectations). Rather than assuming a universal index of intelligent 
 qualities which is independent from our own human qualities, we could 
 evaluate the success of a particular Turing emulation purely on its merits 
 as a convincing reflection of intelligence rather than presuming to have 
 replicated an organic conscious experience mechanically.


 Comp assumes we are Turing emulable,


Which is why Comp fails. Not only are we not emulable, emulation itself is 
not primitively real - it is a subjective consensus of expectations.
 

 and in that case we can be emulated, trivially. 


Comp can't define us, so it can only emulate the postage-stamp-sized 
sampling of some of our most exposed and least meaningful surfaces. Comp 
is a stencil or silhouette maker. No amount of silhouettes pieced together 
and animated in a sequence can generate an interior experience. If it did, 
we would only have to draw a cartoon and it would come to life on its own.
 

 To assume this is not possible is to assume the existence of infinite processes 
 playing relevant roles in the mind or in life. But it is up to you to 
 motivate them. The problem, for you, is that you have to speculate on 
 something that we have not yet observed. You can't say consciousness, as 
 this would just beg the question.


It is consciousness, and it is not begging the question, since all possible 
questions supervene on consciousness. Not sure what you mean about infinite 
processes or why they would mean that simulations can become experiences on 
their own.
 





 The cost of losing the promise of imminently mastering awareness would, I 
 think, be outweighed by the gain of a more scientifically circumspect 
 approach. 


 Invoking infinities is not so circumspect, especially when used to drive a 
 negative statement about the consciousness of possible entities.


What infinities do you refer to?
 




 Putting the Promethean dream on hold, we could guard against the shadow of 
 its confirmation bias. My concern is that without such a precaution, the 
 promise of machine intelligence as a stage 1 simulacrum (a faithful copy of 
 an original, in Baudrillard’s terms 
 http://en.wikipedia.org/wiki/Simulacra_and_Simulation), 
 will be diluted to a stage 3 simulacrum (a copy that masks the absence of a 
 profound reality, where the simulacrum pretends to be a faithful copy.) 


 That assumes a non-comp theory, like the quite speculative theory of mind by 
 Penrose. Your own proposal fits remarkably well with comp, and some low level of 
 substitution, it seems to me (we have already discussed this).


Sense contains comp, by definition, but a comp world cannot generate, 
support, or benefit from sense in any way, as far as I can tell.

Craig


 Bruno


 http://iridia.ulb.ac.be/~marchal/









Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread meekerdb

On 2/13/2013 8:35 AM, Craig Weinberg wrote:

*Wouldn’t Simulated Intelligence be a more appropriate term than Artificial 
Intelligence?*

Thinking of it objectively, if we have a program which can model a hurricane, we would 
call that hurricane a simulation, not an ‘artificial hurricane’. If we modeled any 
physical substance, force, or field, we would similarly say that we had simulated 
hydrogen or gravity or electromagnetism, not that we had created artificial hydrogen, 
gravity, etc.


No, because the idea of an AI is that it can control a robot or other machine which 
interacts with the real world, whereas a simulated AI or hurricane acts within a simulated 
world.




By calling it artificial, we also emphasize a kind of obsolete notion of natural vs 
man-made as categories of origin. 


Why is the distinction between the natural intelligence of a child and the artificial 
intelligence of a Mars rover obsolete?  The latter is one we create by art, the other is 
created by nature.


If we used simulated instead, the measure of intelligence would be framed more modestly 
as the degree to which a system meets our expectations (or what we think or assume are 
our expectations). Rather than assuming a universal index of intelligent qualities which 
is independent from our own human qualities, 


But if we measure intelligence strictly relative to human intelligence we will be saying 
that visual pattern recognition is intelligence but solving Navier-Stokes equations is 
not.  This is the anthropocentrism that continually demotes whatever computers can do as 
not really intelligent even when it was regarded as the apotheosis of intelligence 
*before* computers could do it.


we could evaluate the success of a particular Turing emulation purely on its merits as a 
convincing reflection of intelligence 


But there is no one-dimensional measure of intelligence - it's just competence in many 
domains.



rather than presuming to have replicated an organic conscious experience 
mechanically.


I don't think that's a presumption.  It's an inference from the incoherence of the idea of 
a philosophical zombie.




The cost of losing the promise of imminently mastering awareness would, I think, be 
outweighed by the gain of a more scientifically circumspect approach. Putting the 
Promethean dream on hold, we could guard against the shadow of its confirmation bias. My 
concern is that without such a precaution, the promise of machine intelligence as a 
stage 1 simulacrum (a faithful copy of an original, in Baudrillard’s terms 
http://en.wikipedia.org/wiki/Simulacra_and_Simulation), will be diluted to a stage 3 
simulacrum (a copy that masks the absence of a profound reality, where the simulacrum 
pretends to be a faithful copy.) -- 


The assumption that there is a 'profound reality' is what Stathis showed to be 
'magic'.

Brent





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 2:58 PM, meekerdb wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:
*Wouldn’t Simulated Intelligence be a more appropriate term than 
Artificial Intelligence?*


Thinking of it objectively, if we have a program which can model a 
hurricane, we would call that hurricane a simulation, not an 
‘artificial hurricane’. If we modeled any physical substance, force, 
or field, we would similarly say that we had simulated hydrogen or 
gravity or electromagnetism, not that we had created artificial 
hydrogen, gravity, etc.


No, because the idea of an AI is that it can control a robot or other 
machine which interacts with the real world, whereas a simulated AI or 
hurricane acts within a simulated world.


What difference that makes a difference does that make in the grand 
scheme of things? The point is that we cannot 'prove' that we are not in 
a gigantic simulation. Yeah, we cannot prove a negative, but we can 
extract a lot of valuable insights and maybe some predictions from the 
assumption that 'reality = best possible simulation'.






By calling it artificial, we also emphasize a kind of obsolete notion 
of natural vs man-made as categories of origin. 


Why is the distinction between the natural intelligence of a child and 
the artificial intelligence of a Mars rover obsolete?  The latter is 
one we create by art, the other is created by nature.


If we used simulated instead, the measure of intelligence would be 
framed more modestly as the degree to which a system meets our 
expectations (or what we think or assume are our expectations). 
Rather than assuming a universal index of intelligent qualities which 
is independent from our own human qualities, 


But if we measure intelligence strictly relative to human intelligence 
we will be saying that visual pattern recognition is intelligence but 
solving Navier-Stokes equations is not.  This is the anthropocentrism 
that continually demotes whatever computers can do as not really 
intelligent even when it was regarded as the apotheosis of intelligence 
*before* computers could do it.


we could evaluate the success of a particular Turing emulation purely 
on its merits as a convincing reflection of intelligence 


But there is no one-dimensional measure of intelligence - it's just 
competence in many domains.


rather than presuming to have replicated an organic conscious 
experience mechanically.


I don't think that's a presumption.  It's an inference from the 
incoherence of the idea of a philosophical zombie.




The cost of losing the promise of imminently mastering awareness 
would, I think, be outweighed by the gain of a more scientifically 
circumspect approach. Putting the Promethean dream on hold, we could 
guard against the shadow of its confirmation bias. My concern is that 
without such a precaution, the promise of machine intelligence as a 
stage 1 simulacrum (a faithful copy of an original, in Baudrillard’s 
terms http://en.wikipedia.org/wiki/Simulacra_and_Simulation), will 
be diluted to a stage 3 simulacrum (a copy that masks the absence of 
a profound reality, where the simulacrum pretends to be a faithful 
copy.) -- 


The assumption that there is a 'profound reality' is what Stathis 
showed to be 'magic'.


Brent






--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote:

  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an ‘artificial 
 hurricane’. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulated AI or 
 hurricane acts within a simulated world.


AI doesn't need to interact with the real world though. It makes no 
difference to the AI whether its environment is real or simulated. Just 
because we can attach a robot to a simulation doesn't change it into an 
experience of a real world.
 



 By calling it artificial, we also emphasize a kind of obsolete notion of 
 natural vs man-made as categories of origin. 


 Why is the distinction between the natural intelligence of a child and the 
 artificial intelligence of a Mars rover obsolete? The latter is one we 
 create by art, the other is created by nature.


Because we understand now that we are nature and nature is us. We can 
certainly use the term informally to clarify what we are referring to, like 
we might call someone a plumber because it helps us communicate who we are 
talking about, but anyone who does plumbing can be a plumber. It isn't an 
ontological distinction. Nature creates our capacity to create art, and we 
use that capacity to shape nature in return.
 


 If we used simulated instead, the measure of intelligence would be framed 
 more modestly as the degree to which a system meets our expectations (or 
 what we think or assume are our expectations). Rather than assuming a 
 universal index of intelligent qualities which is independent from our own 
 human qualities, 


 But if we measure intelligence strictly relative to human intelligence


I think that it is a misconception to imagine that we have access to any 
other measure.
 

 we will be saying that visual pattern recognition is intelligence but 
 solving Navier-Stokes equations is not.


Why? Equations are written by intelligent humans.
 

 This is the anthropocentrism that continually demotes whatever 
 computers can do as not really intelligent even when it was regarded as 
 the apotheosis of intelligence *before* computers could do it.


If I had a camera with higher resolution than a human eye, that doesn't 
mean that I can replace my eyes with those cameras. Computers can still be 
exemplary at computation without being deemed literally intelligent. A 
planetarium's star projector can be as accurate as any telescope and still 
be understood not to be projecting literal galaxies and stars into the 
ceiling of the observatory.
 


 we could evaluate the success of a particular Turing emulation purely on 
 its merits as a convincing reflection of intelligence 


 But there is no one-dimensional measure of intelligence - it's just 
 competence in many domains.


Competence in many domains is fine. I'm saying that the competence relates 
to how well it reflects or amplifies existing intelligence, not that it 
actually is itself intelligent.
 


 rather than presuming to have replicated an organic conscious experience 
 mechanically.


 I don't think that's a presumption. It's an inference from the 
 incoherence of the idea of a philosophical zombie.


The idea of a philosophical zombie is a misconception based on some 
assumptions about matter and function which I clearly understand to be 
untrue. A sociopath is already a philosophical zombie as far as emotional 
intelligence is concerned. Someone with blindsight is a philosophical 
zombie as far as visual perception is concerned. Someone who is 
sleepwalking is a p-zombie as far as bipedal locomotion is concerned. The 
concept is bogus.
 



 The cost of losing the promise of imminently mastering awareness would, I 
 think, be outweighed by the gain of a more scientifically circumspect 
 approach. Putting the Promethean dream on hold, we could guard against the 
 shadow of its confirmation bias. My concern is that without such a 
 precaution, the promise of machine intelligence as a stage 1 simulacrum (a 
 faithful copy of an original, in Baudrillard’s terms 
 http://en.wikipedia.org/wiki/Simulacra_and_Simulation), 
 will be diluted to a stage 3 simulacrum (a copy that masks the absence of a 
 profound reality, where the simulacrum pretends to be a faithful copy.)


 The assumption that there is a 'profound reality' is what Stathis showed 
 to be 'magic'.


Baudrillard is not talking about consciousness in particular, only the sum 
of whatever is in the original which is not accessible in the copy. 

Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 5:11:32 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 2:58 PM, meekerdb wrote:
  
 On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an ‘artificial 
 hurricane’. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulated AI or 
 hurricane acts within a simulated world.


 What difference that makes a difference does that make in the 
 grand scheme of things? The point is that we cannot 'prove' that we are not 
 in a gigantic simulation. Yeah, we cannot prove a negative, but we can 
 extract a lot of valuable insights and maybe some predictions from the 
 assumption that 'reality = best possible simulation'.


I just realized how to translate that into my view: Reality = making the 
most sense possible. Same thing really. That's why I talk about multisense 
Realism, with Realism being the quality of maximum unfiltered sense. Since 
sense is subtractive, the more senses you have overlapping and diverging, 
the less there is that you are missing. Reality = nothing is missing (i.e. 
only possible at the Absolute level), Realism = you can't tell that 
anything is missing from your perceptual capacity/inertial frame/simulation.

I don't like the word simulation per se, because I think that anything the 
idea of a Matrix universe does for us would be negated by the idea that the 
simulation eventually has to run on something which is not a simulation, 
otherwise the word has no meaning. Either way, the notion of simulation 
doesn't make any of the big questions more answerable, even if it is 
locally true for us.

Craig
 


  

 By calling it artificial, we also emphasize a kind of obsolete notion of 
 natural vs man-made as categories of origin. 


 Why is the distinction between the natural intelligence of a child and the 
 artificial intelligence of a Mars rover obsolete? The latter is one we 
 create by art, the other is created by nature.

 If we used simulated instead, the measure of intelligence would be framed 
 more modestly as the degree to which a system meets our expectations (or 
 what we think or assume are our expectations). Rather than assuming a 
 universal index of intelligent qualities which is independent from our own 
 human qualities, 


 But if we measure intelligence strictly relative to human intelligence we 
 will be saying that visual pattern recognition is intelligence but solving 
 Navier-Stokes equations is not. This is the anthropocentrism that 
 continually demotes whatever computers can do as not really intelligent 
 even when it was regarded as the apotheosis of intelligence *before* 
 computers could do it.

 we could evaluate the success of a particular Turing emulation purely on 
 its merits as a convincing reflection of intelligence 


 But there is no one-dimensional measure of intelligence - it's just 
 competence in many domains.

 rather than presuming to have replicated an organic conscious experience 
 mechanically.


 I don't think that's a presumption. It's an inference from the 
 incoherence of the idea of a philosophical zombie.


 The cost of losing the promise of imminently mastering awareness would, I 
 think, be outweighed by the gain of a more scientifically circumspect 
 approach. Putting the Promethean dream on hold, we could guard against the 
 shadow of its confirmation bias. My concern is that without such a 
 precaution, the promise of machine intelligence as a stage 1 simulacrum (a 
 faithful copy of an original, in Baudrillard’s terms 
 http://en.wikipedia.org/wiki/Simulacra_and_Simulation), 
 will be diluted to a stage 3 simulacrum (a copy that masks the absence of a 
 profound reality, where the simulacrum pretends to be a faithful copy.)


 The assumption that there is a 'profound reality' is what Stathis showed 
 to be 'magic'.

 Brent




 -- 
 Onward!

 Stephen

 


Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 5:40 PM, Craig Weinberg wrote:


[SPK wrote:] What difference that makes a difference does that make
in the grand scheme of things? The point is that we cannot 'prove'
that we are not in a gigantic simulation. Yeah, we cannot prove a
negative, but we can extract a lot of valuable insights and maybe
some predictions from the assumption that 'reality = best possible
simulation'.


I just realized how to translate that into my view: Reality = making 
the most sense possible. Same thing really. That's why I talk about 
multisense Realism, with Realism being the quality of maximum 
unfiltered sense. Since sense is subtractive, the more senses you have 
overlapping and diverging, the less there is that you are missing. 
Reality = nothing is missing (i.e. only possible at the Absolute 
level), Realism = you can't tell that anything is missing from your 
perceptual capacity/inertial frame/simulation.


I don't like the word simulation per se, because I think that anything 
the idea of a Matrix universe does for us would be negated by the idea 
that the simulation eventually has to run on something which is not a 
simulation, otherwise the word has no meaning. Either way, the notion 
of simulation doesn't make any of the big questions more answerable, 
even if it is locally true for us.


Craig


I like the idea of a Matrix universe exactly for that reason; it 
takes resources to 'run' it. No free lunch, even for universes!!!


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 5:21 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:

*Wouldn’t Simulated Intelligence be a more appropriate term
than Artificial Intelligence?*

Thinking of it objectively, if we have a program which can model
a hurricane, we would call that hurricane a simulation, not an
‘artificial hurricane’. If we modeled any physical substance,
force, or field, we would similarly say that we had simulated
hydrogen or gravity or electromagnetism, not that we had created
artificial hydrogen, gravity, etc.


No, because the idea of an AI is that it can control a robot or
other machine which interacts with the real world, whereas a
simulate AI or hurricane acts within a simulated world.


AI doesn't need to interact with the real world though. It makes no 
difference to the AI whether its environment is real or simulated. 
Just because we can attach a robot to a simulation doesn't change it 
into an experience of a real world.


Hi Craig,

I think that you might be making a huge fuss over a difference that 
does not always make a difference between a public world and a private 
world! IMHO, what makes the 'real' physical world Real is that we can 
all agree on its properties (subject to some constraints that matter). 
Many can point at the tree over there and agree on its height and 
whether or not it is a deciduous variety.








By calling it artificial, we also emphasize a kind of obsolete
notion of natural vs man-made as categories of origin. 


Why is the distinction between the natural intelligence of a child
and the artificial intelligence of a Mars rover obsolete? The
latter is one we create by art, the other is created by nature.


Because we understand now that we are nature and nature is us.


    I disagree! We can fool ourselves into thinking that we 
'understand', but what we can do is, at best, form testable explanations 
of stuff... We are fallible!


We can certainly use the term informally to clarify what we are 
referring to, like we might call someone a plumber because it helps us 
communicate who we are talking about, but anyone who does plumbing can 
be a plumber. It isn't an ontological distinction. Nature creates our 
capacity to create art, and we use that capacity to shape nature in 
return.


I agree! I think it is that aspect of Nature that can throw itself 
into its choice, as Sartre mused, that is making the computationalists 
crazy. I got no problem with it, as I embrace non-well-foundedness.


"Man is first of all that which throws itself toward a future, and that
which is conscious of projecting itself into the future." ~ Jean-Paul Sartre





If we used simulated instead, the measure of intelligence would
be framed more modestly as the degree to which a system meets our
expectations (or what we think or assume are our expectations).
Rather than assuming a universal index of intelligent qualities
which is independent from our own human qualities, 


But if we measure intelligence strictly relative to human intelligence


I think that it is a misconception to imagine that we have access to 
any other measure.


Yeah!



we will be saying that visual pattern recognition is intelligence
but solving Navier-Stokes equations is not.


Why? Equations are written by intelligent humans.


People are confounded by computational intractability and eagerly 
spin tales of hypercomputers and other perpetual motion machines.




This is the anthropocentrism that continually demotes whatever
computers can do as not really intelligent even when it was
regarded as the apotheosis of intelligence *before* computers
could do it.


If I had a camera with higher resolution than a human eye, that 
doesn't mean that I can replace my eyes with those cameras. Computers 
can still be exemplary at computation without being deemed literally 
intelligent. A planetarium's star projector can be as accurate as any 
telescope and still be understood not to be projecting literal 
galaxies and stars into the ceiling of the observatory.




we could evaluate the success of a particular Turing emulation
purely on its merits as a convincing reflection of intelligence 


But there is no one-dimensional measure of intelligence - it's
just competence in many domains.


Competence in many domains is fine. I'm saying that the competence 
relates to how well it reflects or amplifies existing intelligence, 
not that it actually is itself intelligent.




rather than presuming to have replicated an organic conscious
experience mechanically.


I don't think that's a presumption. It's an inference from the
incoherence of the idea of a philosophical zombie.


The idea of a philosophical zombie is a misconception based on some 
assumptions about matter and function which I clearly understand to be 

Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 5:40 PM, Craig Weinberg wrote:


[SPK wrote:] 'reality = best possible simulation.'


I just realized how to translate that into my view: Reality = making 
the most sense possible. Same thing really. That's why I talk about 
multisense Realism, with Realism being the quality of maximum 
unfiltered sense. Since sense is subtractive, the more senses you have 
overlapping and diverging, the less there is that you are missing. 
Reality = nothing is missing (i.e. only possible at the Absolute 
level), Realism = you can't tell that anything is missing from your 
perceptual capacity/inertial frame/simulation.

Hi Craig,

    There is something else that we must discuss in what you wrote! I 
think that 'you can't tell that anything is missing from your perceptual 
capacity/inertial frame/simulation' has nothing to do with realism at 
all. We get that illusion of completeness precisely because the 
necessary conditions for having Sense are met. (This is part of the 
fixed point stuff.)
    If you are conscious at all, at any level, you will automatically 
not be able to perceive any 'holes' or inconsistencies in your personal 
1p 'Sense of all that is', as the Sense that one has must have 
relational closure to some degree; otherwise we have at least one 
instance of infinite regress in one's dictionary of concept relations. This 
reasoning is a key part of my motivation to claim that 'reality', for 
any single observer (up to isomorphisms), must be representable as a 
Boolean algebra: it must be that all of its propositions (when 
considered as a lattice of propositions) are mutually consistent. This 
mutual consistency does not come for free, pace Bruno, but is dependent 
on the resources available to compute the Sense content. One must have a 
functioning physical brain to think...
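
A minimal illustration of that last point, sketched in Python under 
invented assumptions (the atoms, propositions, and function name below 
are toy examples, not anything proposed in this thread): even for a tiny 
set of propositions over Boolean atoms, deciding whether they are 
mutually consistent means searching over valuations, so the consistency 
check itself consumes resources that grow with the number of atoms.

    from itertools import product

    # Toy propositions over three Boolean atoms (a, b, c); invented
    # purely for illustration.
    propositions = [
        lambda a, b, c: a or b,        # "a or b"
        lambda a, b, c: (not a) or c,  # "a implies c"
        lambda a, b, c: b or (not c),  # "c implies b"
    ]

    def mutually_consistent(props, n_atoms=3):
        # Brute force over all 2**n_atoms valuations: the check itself
        # costs resources that grow with the number of atoms.
        return any(all(p(*vals) for p in props)
                   for vals in product([False, True], repeat=n_atoms))

    print(mutually_consistent(propositions))  # True for this toy set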


    A digression: This universal restriction of Boolean algebraic 
representability on observable content seems to back up that @$$_*)# 
Noam Chomsky's universal grammar law, but I think that the Pirahã 
people's language http://en.wikipedia.org/wiki/Pirah%C3%A3_language 
points out that there can be non-recursive 'bubbles' in an overall global 
network of recursive relations. (Chomsky's idea that language is 
causally determined by a genetically determined capacity seems to be the 
distilled essence of rubbish, in my not so humble opinion, btw.)


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 7:05:38 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 5:40 PM, Craig Weinberg wrote:
  
  [SPK wrote: ]'reality = best possible simulation.
  

 I just realized how to translate that into my view: Reality = making the 
 most sense possible. Same thing really. That's why I talk about multisense 
 Realism, with Realism being the quality of maximum unfiltered sense. Since 
 sense is subtractive, the more senses you have overlapping and diverging, 
 the less there is that you are missing. Reality = nothing is missing (i.e. 
 only possible at the Absolute level), Realism = you can't tell that 
 anything is missing from your perceptual capacity/inertial frame/simulation.

 Hi Craig,

 There is something else that we must discuss in what you wrote! I 
 think that 'you can't tell that anything is missing from your perceptual 
 capacity/inertial frame/simulation' has nothing to do with realism at all. 
 We get that illusion of completeness precisely because the necessary 
 conditions for having Sense are met. (This is part of the fixed point 
 stuff.)


If all there is is sense though, then there can never be an illusion of 
completeness, just a comparison of one experience to another in which  one 
is found to be lacking realism. If all there is in the universe is a single 
flicker of light for a millisecond, then that is the only reality. With 
sense, illusion is just a conflict among different sensory frames and 
applications of motive. There is no realism beyond that, but no realism 
beyond that is necessary.

 

  If you are conscious at all, at any level, you will automatically not 
 be able to perceive any 'holes' or inconsistencies in your personal 1p 
 'Sense of all that is',


We perceive holes all the time. When we look at an optical illusion, our 
visual channel of sense seems to present an experience which conflicts with 
our cognitive channel of sense (understanding). It happens through time 
too. We learn something that makes us rethink our previous understandings, 
etc. That's kind of the main thing that goes on in our lives: finding out 
about our gaps, either gracefully or the hard way, as regrets.
 

 as the Sense that one has must have relational closure to some degree; 
 otherwise we have at least one instance of infinite regress in one's dictionary 
 of concept relations.


Sure, there are millions of relational closures, and they're nested within 
each other too. Everything that we can recognize is a closed presence, but 
when we discover new frames of reference, previously closed relations can 
change or seem to break.
 

 This reasoning is a key part of my motivation to claim that 'reality', for 
 any single observer (up to isomorphisms) must be representable as a Boolean 
 algebra: it must be that all of its propositions (when considered as a 
 lattice of propositions) are mutually consistent. This mutual consistency 
 does not come for free, pace Bruno, but is dependent on the resources 
 available to compute the Sense content. One must have a functioning 
 physical brain to think...


I don't think that sense is ever computed; it is only experienced. 
Computation is only a strategy for organizing sense in public/public 
interactions - which is the essence of realism. The consistency of 
propositions for a single observer is like perspective. If something moves 
closer to your face, it appears larger. That is not because something is 
being computed locally and presented as an illusion; it appears larger 
because that is the sensory content of the experience which best reflects 
all of the conditions involved. This is a hybrid of private and public 
conditions, just as your sink's supply of water is a hybrid of local 
plumbing conditions and distant aqueducts. Because of the unity of sense, 
the mutual consistency does come for free; rather, it is the insulation, the 
gaps, and the resistance that cannot be maintained for free, because they are 
ultimately disequilibrium.


 A digression: This universal restriction of Boolean algebraic 
 representability on observable content seems to back up that @$$_*)# Noam 
 Chomsky's universal grammar law, but I think that the Pirahã people's 
 language http://en.wikipedia.org/wiki/Pirah%C3%A3_language points out 
 that there can be non-recursive 'bubbles' in an overall global network of 
 recursive relations. (Chomsky's idea that language is causally determined 
 by a genetically determined capacity seems to be the distilled essence of 
 rubbish, in my not so humble opinion btw.)


Yeah I agree that language doesn't follow genetics - it's the other way 
around if anything. I think you're right to associate algebra with 
realism, because it pertains to functions among public bodies (which is a 
big part of realism). I would say though that most of sense does not have 
to do with algebra or geometry or arithmetic at all. Math and physics are 
what sense sees when it hides from itself.

Craig
 

 -- 
 

Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 5:51:27 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 5:40 PM, Craig Weinberg wrote:
  
  [SPK wrote:] What difference that makes a difference does that make in 
 the grand scheme of things? The point is that we cannot 'prove' that we are 
 not in a gigantic simulation. Yeah, we cannot prove a negative, but we can 
 extract a lot of valuable insights and maybe some predictions from the 
 assumption that 'reality = best possible simulation'.
  

 I just realized how to translate that into my view: Reality = making the 
 most sense possible. Same thing really. That's why I talk about multisense 
 Realism, with Realism being the quality of maximum unfiltered sense. Since 
 sense is subtractive, the more senses you have overlapping and diverging, 
 the less there is that you are missing. Reality = nothing is missing (i.e. 
 only possible at the Absolute level), Realism = you can't tell that 
 anything is missing from your perceptual capacity/inertial frame/simulation.

 I don't like the word simulation per se, because I think that anything the 
 idea of a Matrix universe does for us would be negated by the idea that the 
 simulation eventually has to run on something which is not a simulation, 
 otherwise the word has no meaning. Either way, the notion of simulation 
 doesn't make any of the big questions more answerable, even if it is 
 locally true for us.

 Craig


 I like the idea of a Matrix universe exactly for that reason; it takes 
 resources to 'run' it. No free lunch, even for universes!!!


You can still have the idea of resources if the universe isn't a simulation 
though. No particular diffraction tree within the supreme monad can last as 
long as the Absolute diffraction, so the clock is always running and every 
motive carries risk.

Craig


 -- 
 Onward!

 Stephen

  





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King wrote:

  On 2/13/2013 5:21 PM, Craig Weinberg wrote:
  


 On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote: 

  On 2/13/2013 8:35 AM, Craig Weinberg wrote: 

 *Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial Intelligence?*

 Thinking of it objectively, if we have a program which can model a 
 hurricane, we would call that hurricane a simulation, not an ‘artificial 
 hurricane’. If we modeled any physical substance, force, or field, we 
 would similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, not that we had created artificial hydrogen, gravity, etc.


 No, because the idea of an AI is that it can control a robot or other 
 machine which interacts with the real world, whereas a simulated AI or 
 hurricane acts within a simulated world.
  

 AI doesn't need to interact with the real world though. It makes no 
 difference to the AI whether its environment is real or simulated. Just 
 because we can attach a robot to a simulation doesn't change it into an 
 experience of a real world.
  

 Hi Craig,

 I think that you might be making a huge fuss over a difference that 
 does not always make a difference between a public world and a private 
 world! IMHO, what makes the 'real' physical world Real is that we can all 
 agree on its properties (subject to some constraints that matter). Many can 
 point at the tree over there and agree on its height and whether or not it 
 is a deciduous variety.


Why does our agreement on something's properties mean anything other 
than that, though? We are people living at the same time with human-sized 
bodies, so it would make sense that we would agree on almost everything 
that involves our bodies. You can have a dream with other characters in the 
dream who point to your dream tree and agree on its characteristics, but 
upon waking, you are re-oriented to a more real, more tangibly public world 
with longer and more stable histories. These qualities are only significant 
in comparison to the dream though. If you can't remember your waking life, 
then the dream is real to you, and to the universe through you.



   
  
  

 By calling it artificial, we also emphasize a kind of obsolete notion of 
 natural vs man-made as categories of origin. 


 Why is the distinction between the natural intelligence of a child and 
 the artificial intelligence of a Mars rover obsolete? The latter is one 
 we create by art, the other is created by nature.
  

 Because we understand now that we are nature and nature is us.


 I disagree! We can fool ourselves into thinking that we 'understand', 
 but what we can do is, at best, form testable explanations of stuff... We 
 are fallible!


I agree, but I don't see how that applies to us being nature. What would it 
mean to be unnatural? How would an unnatural being find themselves in a 
natural world?
 


  We can certainly use the term informally to clarify what we are 
 referring to, like we might call someone a plumber because it helps us 
 communicate who we are talking about, but anyone who does plumbing can be a 
 plumber. It isn't an ontological distinction. Nature creates our capacity 
 to create art, and we use that capacity to shape nature in return.
  

 I agree! I think it is that aspect of Nature that can throw itself 
 into its choice, as Sartre mused, that is making the computationalists 
 crazy. I got no problem with it, as I embrace non-well-foundedness.


Cool, yeah, I mean it could be said that that aspect is what defines nature?
 


 "Man is first of all that which throws itself toward a future, and that 
 which is conscious of projecting itself into the future." ~ Jean-Paul Sartre

   
  
  
 If we used simulated instead, the measure of intelligence would be framed 
 more modestly as the degree to which a system meets our expectations (or 
 what we think or assume are our expectations). Rather than assuming a 
 universal index of intelligent qualities which is independent from our own 
 human qualities, 


 But if we measure intelligence strictly relative to human intelligence


 I think that it is a misconception to imagine that we have access to any 
 other measure.
  

 Yeah!

   
  
  we will be saying that visual pattern recognition is intelligence but 
 solving Navier-Stokes equations is not.


 Why? Equations are written by intelligent humans.
  

 People are confounded by computational intractability and eagerly spin 
 tales of hypercomputers and other perpetual motion machines.


Complexity seems to be the only abstract principle that the Western-OMMM 
orientation respects.
 


   
  
 This is the anthropocentrism that continually demotes whatever 
 computers can do as not really intelligent even when it was regarded as 
 the apotheosis of intelligence *before* computers could do it.
  

 If I had a camera with higher resolution than a human eye, that doesn't 
 mean that I can 

Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stathis Papaioannou
On Thu, Feb 14, 2013 at 3:35 AM, Craig Weinberg whatsons...@gmail.com wrote:
 Wouldn’t Simulated Intelligence be a more appropriate term than Artificial
 Intelligence?

 Thinking of it objectively, if we have a program which can model a
 hurricane, we would call that hurricane a simulation, not an ‘artificial
 hurricane’. If we modeled any physical substance, force, or field, we would
 similarly say that we had simulated hydrogen or gravity or electromagnetism,
 not that we had created artificial hydrogen, gravity, etc.

 By calling it artificial, we also emphasize a kind of obsolete notion of
 natural vs man-made as categories of origin. If we used simulated instead,
 the measure of intelligence would be framed more modestly as the degree to
 which a system meets our expectations (or what we think or assume are our
 expectations). Rather than assuming a universal index of intelligent
 qualities which is independent from our own human qualities, we could
 evaluate the success of a particular Turing emulation purely on its merits
 as a convincing reflection of intelligence rather than presuming to have
 replicated an organic conscious experience mechanically.

 The cost of losing the promise of imminently mastering awareness would, I
 think, be outweighed by the gain of a more scientifically circumspect
 approach. Putting the Promethean dream on hold, we could guard against the
 shadow of its confirmation bias. My concern is that without such a
 precaution, the promise of machine intelligence as a stage 1 simulacrum (a
 faithful copy of an original, in Baudrillard’s terms), will be diluted to a
 stage 3 simulacrum (a copy that masks the absence of a profound reality,
 where the simulacrum pretends to be a faithful copy.)

A simulated hurricane is different from an actual hurricane, but
simulated intelligence is the same as actual intelligence, just as
simulated arithmetic is the same as actual arithmetic. Whether the
intelligence has the same associated consciousness or not is a matter
for debate, but not the intelligence itself.
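
For what it's worth, the 'simulated arithmetic is the same as actual
arithmetic' point can be illustrated with a small Python sketch (the
function name and the test range below are invented for illustration
only): an addition 'simulated' out of bitwise Boolean operations returns
exactly the same answers as the machine's native addition.

    def simulated_add(x, y):
        # Add two non-negative integers using only bitwise operations:
        # XOR gives the carry-less sum, AND plus a shift propagates the
        # carries, so the addition is 'simulated' rather than done with
        # the native + operator.
        while y:
            carry = x & y
            x = x ^ y
            y = carry << 1
        return x

    # The simulated operation is indistinguishable from the actual one:
    assert all(simulated_add(a, b) == a + b
               for a in range(50) for b in range(50))
    print("simulated addition agrees with actual addition")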


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Craig Weinberg


On Wednesday, February 13, 2013 9:45:43 PM UTC-5, stathisp wrote:

 On Thu, Feb 14, 2013 at 3:35 AM, Craig Weinberg 
 whats...@gmail.com 
 wrote: 
  Wouldn’t Simulated Intelligence be a more appropriate term than 
 Artificial 
  Intelligence? 
  
  Thinking of it objectively, if we have a program which can model a 
  hurricane, we would call that hurricane a simulation, not an ‘artificial 
  hurricane’. If we modeled any physical substance, force, or field, we 
 would 
  similarly say that we had simulated hydrogen or gravity or 
 electromagnetism, 
  not that we had created artificial hydrogen, gravity, etc. 
  
  By calling it artificial, we also emphasize a kind of obsolete notion of 
  natural vs man-made as categories of origin. If we used simulated 
 instead, 
  the measure of intelligence would be framed more modestly as the degree 
 to 
  which a system meets our expectations (or what we think or assume are 
 our 
  expectations). Rather than assuming a universal index of intelligent 
  qualities which is independent from our own human qualities, we could 
  evaluate the success of a particular Turing emulation purely on its 
 merits 
  as a convincing reflection of intelligence rather than presuming to have 
  replicated an organic conscious experience mechanically. 
  
  The cost of losing the promise of imminently mastering awareness would, 
 I 
  think, be outweighed by the gain of a more scientifically circumspect 
  approach. Putting the Promethean dream on hold, we could guard against 
 the 
  shadow of its confirmation bias. My concern is that without such a 
  precaution, the promise of machine intelligence as a stage 1 simulacrum 
 (a 
  faithful copy of an original, in Baudrillard’s terms), will be diluted 
 to a 
  stage 3 simulacrum (a copy that masks the absence of a profound reality, 
  where the simulacrum pretends to be a faithful copy.) 

 A simulated hurricane is different from an actual hurricane, but 
 simulated intelligence is the same as actual intelligence, just as 
 simulated arithmetic is the same as actual arithmetic. 


No, that's a false equivalence. Any simulated hurricane *can be* the same 
as any other simulated hurricane, but no simulated hurricane can be the 
same as any actual hurricane.

Arithmetic cannot be simulated because it is only figurative to begin with. 
You can paint a painting of a pipe that says 'this isn't a pipe', but you 
can't paint a painting that truthfully says 'these are not words' or 'this 
is not a painting'.
 

 Whether the 
 intelligence has the same associated consciousness or not is a matter 
 for debate, but not the intelligence itself. 


I disagree. There is no internal intelligence there at all. Zero. There is 
a recording of some aspects of human intelligence which can extend human 
intelligence into extra-human ranges for human users. The computer itself 
has no extra-human intelligence, just as a telescope itself doesn't see 
anything, it just helps us see, passively of course. We are the users of 
technology, technology itself is not a user.

Craig
 



 -- 
 Stathis Papaioannou 






Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stathis Papaioannou
On Thu, Feb 14, 2013 at 2:27 PM, Craig Weinberg whatsons...@gmail.com wrote:

 Whether the
 intelligence has the same associated consciousness or not is a matter
 for debate, but not the intelligence itself.


 I disagree. There is no internal intelligence there at all. Zero. There is a
 recording of some aspects of human intelligence which can extend human
 intelligence into extra-human ranges for human users. The computer itself
 has no extra-human intelligence, just as a telescope itself doesn't see
 anything, it just helps us see, passively of course. We are the users of
 technology, technology itself is not a user.

I think you're conflating intelligence with consciousness. If the
table talks to you and helps you solve a difficult problem, then by
definition the table is intelligent. How the table pulls this off and
whether it is conscious or not are separate questions.


-- 
Stathis Papaioannou





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 8:09 PM, Craig Weinberg wrote:


[SPK wrote: ]I like the idea of a Matrix universe exactly for that
reason; it takes resources to 'run' it. No free lunch, even for
universes!!!


You can still have the idea of resources if the universe isn't a 
simulation though. No particular diffraction tree within the supreme 
monad can last as long as the Absolute diffraction, so the clock is 
always running and every motive carries risk.


Right, but since we do have the resources, why not assume that the 
Matrix is up and running on them already? The fun thing is that if we 
have both then we have a nice solution to both the mind (for matter) and 
body (for comp) problems. There can be no 'supreme monad', as such a thing 
would be equivalent to a preferred frame and basis. The totality of all that 
exists is not a hierarchy; it is a fractal network.


--
Onward!

Stephen





Re: Simulated Intelligence Mini-Manifesto

2013-02-13 Thread Stephen P. King

On 2/13/2013 9:41 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 5:37:08 PM UTC-5, Stephen Paul King 
wrote:


On 2/13/2013 5:21 PM, Craig Weinberg wrote:



On Wednesday, February 13, 2013 2:58:28 PM UTC-5, Brent wrote:

On 2/13/2013 8:35 AM, Craig Weinberg wrote:

*Wouldn't Simulated Intelligence be a more appropriate
term than Artificial Intelligence?*

Thinking of it objectively, if we have a program which can
model a hurricane, we would call that hurricane a
simulation, not an 'artificial hurricane'. If we modeled
any physical substance, force, or field, we would similarly
say that we had simulated hydrogen or gravity or
electromagnetism, not that we had created artificial
hydrogen, gravity, etc.


No, because the idea of an AI is that it can control a robot
or other machine which interacts with the real world, whereas
a simulated AI or hurricane acts within a simulated world.


AI doesn't need to interact with the real world though. It makes
no difference to the AI whether its environment is real or
simulated. Just because we can attach a robot to a simulation
doesn't change it into an experience of a real world.


Hi Craig,

I think that you might be making a huge fuss over a difference
that does not always make a difference between a public world and
a private world! IMHO, what makes the 'real' physical world Real
is that we can all agree on its properties (subject to some
constraints that matter). Many can point at the tree over there
and agree on its height and whether or not it is a deciduous variety.


Why does our agreement on something's properties mean anything other 
than that, though?


Hi Craig,

Why are you thinking of 'thought' in such a minimal way? Don't 
forget about the 'objects' of those thoughts... The duals...


We are people living at the same time with human-sized bodies, so it 
would make sense that we would agree on almost everything that involves 
our bodies.


Who is this 'we'? I am considering any 'object' or system capable of 
being described by a QM wave function or, more simply, capable of being 
represented by a semi-complete atomic Boolean algebra.


You can have a dream with other characters in the dream who point to 
your dream tree and agree on its characteristics, but upon waking, you 
are re-oriented to a more real, more tangibly public world with longer 
and more stable histories.


Right, it is the 'upon waking' part that is important. Our common 
'reality' is the part that we can only 'wake up' from when we depart the 
mortal coil. Have you followed the quantum suicide discussion any?


These qualities are only significant in comparison to the dream 
though. If you can't remember your waking life, then the dream is real 
to you, and to the universe through you.


You are assuming a standard that you cannot define. Why? What one 
observes as 'real' is real to that one; it is not necessarily real to 
everyone else... but there is a huge overlap between our 1p 
'realities'. Andrew Soltau has this idea nailed now in his 
Multisolipsism stuff. ;-)









By calling it artificial, we also emphasize a kind of obsolete
notion of natural vs man-made as categories of origin. 


Why is the distinction between the natural intelligence of a
child and the artificial intelligence of a Mars rover
obsolete? The latter is one we create by art, the other is
created by nature.


Because we understand now that we are nature and nature is us.


I disagree! We can fool ourselves into thinking that we 
'understand', but what we can do is, at best, form testable 
explanations of stuff... We are fallible!


I agree, but I don't see how that applies to us being nature.


We are part of Nature and there is a 'whole-part isomorphism' 
involved...


What would it mean to be unnatural? How would an unnatural being find 
themselves in a natural world?


They can't, unless we invent them... Pink Ponies





We can certainly use the term informally to clarify what we are
referring to, like we might call someone a plumber because it
helps us communicate who we are talking about, but anyone who
does plumbing can be a plumber. It isn't an ontological
distinction. Nature creates our capacity to create art, and we
use that capacity to shape nature in return.


I agree! I think it is that aspect of Nature that can throw
itself into its choice, as Sartre mused, that is making the
computationalists crazy. I have no problem with it, as I embrace
non-well-foundedness.


Cool, yeah, I mean it could be said that that aspect is what defines nature?


Can we put Nature in a box? No...




Man is first of all that which throws itself toward a future, and 
which is conscious of projecting itself into the future. ~ Jean-Paul Sartre





If we used