Re: [agi] AGI and Deity

2008-01-16 Thread Stan Nilsen

James,
your comments are appreciated.
 a few comments below
Stan


James Ratcliff wrote:
Your train of reasoning is lacking somewhat in many areas, and does not 
directly point to your main assertion.
Thanks for the feedback.  As I follow other discussions and read the 
papers they refer to, I realize that my writings are lacking.  Perhaps 
they are more blog-like than scientific.




The problem of calculating values of certain states is a difficult one, 
and one that a good AGI MUST be able to do, using facts of the world, and 
subjective beliefs and measures as well.
I'm not sure I get the MUST part.  Is this for troubleshooting purposes 
or for trust issues?  Or is it required for steering the contemplation 
or attention of the machine?


  Whether healthcare or education spending is most beneficial must be 
calculated, and the two compared against each other, based on facts, 
beliefs, past data and statistics, and trial and error.
  And these subjective beliefs are ever changing and cyclical.  A better 
example would be a limited AGI whose job was to balance the national 
budget; its job would be to choose the best projects to spend money on.
  Maximizing Benefit Units (BU), as a measure of the 'worth' of each 
project, is what is required here.
  One intelligence (human) may be overwhelmed by the sheer amount of 
data and statistics needed to come to the best decision.  An AGI with 
subjective beliefs about the benefit of each could potentially use 
more of the data to come to a more maximized solution.
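
A minimal sketch of such a BU-maximizing allocator, with invented 
project data (greedy by BU-per-dollar; exact allocation over 
indivisible projects is the 0/1 knapsack problem):

# Hypothetical sketch: allocate a fixed budget across projects to
# maximize total Benefit Units (BU). Greedy by benefit density is an
# approximation; indivisible projects make this a 0/1 knapsack problem.

projects = [
    # (name, cost in $B, estimated Benefit Units)
    ("healthcare", 40.0, 90.0),
    ("education", 30.0, 75.0),
    ("infrastructure", 50.0, 80.0),
    ("basic research", 20.0, 55.0),
]

def allocate(projects, budget):
    """Greedy allocation by benefit density (BU per dollar)."""
    chosen = []
    for name, cost, benefit in sorted(
            projects, key=lambda p: p[2] / p[1], reverse=True):
        if cost <= budget:
            chosen.append(name)
            budget -= cost
    return chosen

print(allocate(projects, 100.0))
# -> ['basic research', 'education', 'healthcare'] (220 of 300 possible BU)

The optimization itself is the easy part; the hard part, as noted 
above, is that the BU estimates mix facts with subjective, 
ever-changing beliefs.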


It is the future scenarios that are often the most compelling 
justification or evidence for the value of something, and, in my opinion, 
the most unreliable.  Whether it is man or machine making the case, there 
will be speculation involved in the common-sense domain.


Will the scenario be "You say this... now prove it.  If you can't prove 
it, don't use it in the justification"?  Very limiting.




On your other note about any explanation being too long or too 
complicated to understand... Any decision must be able to be 
explained.  It can be done at different levels, and expanded as much as 
the AGI is told to do so, but there should be NO decisions where you ask 
the machine "Why did you decide X?" and the answer is nothing, or "I 
don't know."
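
One way to picture decisions that can be explained at different levels 
and expanded on request: each decision carries a tree of reasons that 
can be rendered to any depth.  A minimal sketch, with invented reasons:

# Hypothetical sketch of a level-expandable explanation: a decision
# node carries sub-reasons, and explain() unfolds only as many levels
# as the questioner asks for.

class Reason:
    def __init__(self, claim, because=()):
        self.claim = claim
        self.because = list(because)

    def explain(self, depth=1, indent=0):
        lines = ["  " * indent + self.claim]
        if depth > 0:
            for sub in self.because:
                lines.extend(sub.explain(depth - 1, indent + 1))
        return lines

decision = Reason("Fund project X", [
    Reason("Highest expected Benefit Units per dollar", [
        Reason("Past data: similar projects returned 2.1 BU per dollar"),
        Reason("Subjective belief: demand is rising (confidence 0.7)"),
    ]),
    Reason("Fits within the remaining budget"),
])

print("\n".join(decision.explain(depth=1)))  # summary plus top reasons
print("\n".join(decision.explain(depth=2)))  # expanded one level further

With a structure like this, the answer to "Why X?" is never nothing; 
the worst case is a shallow summary that can be expanded on request.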


If the architecture of the machine is flow-based, that is, the prior 
events helped determine current events, then the burden of explaining 
would overwhelm the system.  Even if it is only logic-based, as you 
pointed out, the values will be dynamic, and to explain one would need to 
keep a record of the values that went into the decision process - a 
snapshot of the world as it was at the time.
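
That snapshot amounts to decision provenance: freeze the dynamic values 
that fed a decision at the moment it was made.  A minimal sketch, 
assuming a simple table of values:

# Hypothetical sketch: store a snapshot of the (dynamic) input values
# alongside each decision, so a later explanation reflects the world
# as it was at decision time, not as it is now.

import copy
import time

decision_log = []

def decide(options, values):
    """Pick the option scoring highest under the current values."""
    best = max(options, key=lambda o: values[o])
    decision_log.append({
        "time": time.time(),
        "chosen": best,
        "values_snapshot": copy.deepcopy(values),  # frozen inputs
    })
    return best

values = {"healthcare": 0.9, "education": 0.7}
decide(["healthcare", "education"], values)
values["education"] = 0.95                   # beliefs drift afterwards
print(decision_log[-1]["values_snapshot"])   # still shows the old values

The storage burden is real, as noted above; in practice one would 
snapshot only the inputs the decision actually consumed.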


What if the system attempted to explain and finally concluded "if I were 
making the decision right now, it would be different"?  We wouldn't 
consider it especially brilliant, since we hear that all the time.



Any machine we create that has answers without the reasoning is very scary.


and maybe more than scary if it is optimized to offer reasoning that 
people will buy, especially the line "trust me."




James Ratcliff



Stan Nilsen [EMAIL PROTECTED] wrote:

Greetings Samantha,

I'll not bother with detailed explanations since they are easily
dismissed with a hand wave and a categorization of "irrelevant."

For anyone who might be interested in the question of:
Why wouldn't a super intelligence be better able to explain the aspects
of reality? (assuming the point is providing explanation for choices.)
I've placed an example case online at

http://www.footnotestrongai.com/examples/bebillg.html

It's an exploration based on becoming Bill Gates (at least having
control over his money) and how a supercomputer might offer
explanations given the situation. Pretty painless, easy read.

I find the values-based nature of our world highly relevant to the
concept of an emerging super brain that will make super decisions.

Stan Nilsen


Samantha Atkins wrote:
 
  On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:
 
  Samantha Atkins wrote:
 
 
  In what way? The limits of human probability computation to form
  accurate opinions are rather well documented. Why wouldn't a mind
  that could compute millions of times more quickly and with far
  greater accuracy be able to form much more complex models that were
  far better at predicting future events and explaining those aspects
  of reality which are its inputs? Again we need to get beyond the
  [likely religion-instilled] notion that only absolute knowledge is
  real (or super) knowledge.
 
  Allow me to address what I think the questions are (I'll
paraphrase):
 
  Q1. in what way are we going to be short of super intelligence?
 
  resp: The simple answer is that the most intelligent of future
  intelligences will not be able to make decisions that are clearly
  superior to the best of 

Re: [agi] AGI and Deity

2008-01-15 Thread James Ratcliff
Your train of reasoning is lacking somewhat in many areas, and does not 
directly point to your main assertion.

The problem of calculating values of certain states is a difficult one, and one 
that a good AGI MUST be able to do, using facts of the world, and subjective 
beliefs and measures as well.
  Whether healthcare or education spending is most beneficial must be 
calculated, and the two compared against each other, based on facts, beliefs, 
past data and statistics, and trial and error.
  And these subjective beliefs are ever changing and cyclical.  A better 
example would be a limited AGI whose job was to balance the national budget; 
its job would be to choose the best projects to spend money on.
  Maximizing Benefit Units (BU), as a measure of the 'worth' of each project, 
is what is required here.
  One intelligence (human) may be overwhelmed by the sheer amount of data and 
statistics needed to come to the best decision.  An AGI with subjective beliefs 
about the benefit of each could potentially use more of the data to come to a 
more maximized solution.

On your other note about any explanation being too long or too complicated to 
understand... Any decision must be able to be explained.  It can be done at 
different levels, and expanded as much as the AGI is told to do so, but there 
should be NO decisions where you ask the machine "Why did you decide X?" and 
the answer is nothing, or "I don't know."
Any machine we create that has answers without the reasoning is very scary.

James Ratcliff



Stan Nilsen [EMAIL PROTECTED] wrote:

Greetings Samantha,

I'll not bother with detailed explanations since they are easily 
dismissed with a hand wave and a categorization of "irrelevant."

For anyone who might be interested in the question of:
Why wouldn't a super intelligence be better able to explain the aspects 
of reality?  (assuming the point is providing explanation for choices.)
  I've placed an example case online at

http://www.footnotestrongai.com/examples/bebillg.html

It's an exploration based on becoming Bill Gates (at least having 
control over his money) and how a supercomputer might offer 
explanations given the situation.  Pretty painless, easy read.

I find the values-based nature of our world highly relevant to the 
concept of an emerging super brain that will make super decisions.

Stan Nilsen


Samantha Atkins wrote:
 
 On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:
 
 Samantha Atkins wrote:


 In what way?  The limits of human probability computation to form 
 accurate opinions are rather well documented.  Why wouldn't a mind 
 that could compute millions of times more quickly and with far 
 greater accuracy be able to form much more complex models that were 
 far better at predicting future events and explaining those aspects 
 of reality which are its inputs?  Again we need to get beyond the 
 [likely religion-instilled] notion that only absolute knowledge is 
 real (or super) knowledge.

 Allow me to address what I think the questions are (I'll paraphrase):

 Q1. in what way are we going to be short of super intelligence?

 resp:  The simple answer is that the most intelligent of future 
 intelligences will not be able to make decisions that are clearly 
 superior to the best of human judgment.  This is not to say that 
 weather forecasting might not improve as technology does, but meant to 
 say that predictions and decisions regarding the hard problems that 
 fill reality will remain hard and defy the intelligentsia's efforts 
 to fully grasp them.
 
 This is a mere assertion.  Why won't such computationally much more 
 powerful intelligences make better decisions than humans can or will?
 


 Q2. why wouldn't a mind with characteristics of ... be able to form 
 more complex models?

 resp:  By more complex I presume you mean having more concepts and 
 relevance connections between concepts.  If so, I submit that 
 Wikipedia's estimate of 1 to 5 quadrillion synapses in the human brain 
 is major complexity, and if all those connections were properly tuned, 
 that is awesome computing.  Tuning seems to be the issue.

 
 I mean having more active data, better memory, tremendously more 
 accurate and powerful computation.  How complex our brain is at the 
 synaptic level has not all that much to do with how complex a model we 
 can hold in our awareness and manipulate accurately.  We have no way 
 of tuning the mind, and you would likely get a biological computing 
 vegetable if you could.  A great deal of our brain is designed for and 
 supports functions that have nothing to do with modeling or abstract 
 computation.
 
 
 Q3 why wouldn't a mind with characteristics of ... be able to build 
 models that are far better at predicting future events?

 resp:  This is very closely related to the limits of intelligence, but 
 not the only factor contributing to intelligence.  Predictable events 
 are easy in a few domains, but are they an abundant part of life? 
 Abundant enough to say 

Re: [agi] AGI and Deity

2007-12-29 Thread Samantha Atkins


On Dec 26, 2007, at 11:56 AM, Charles D Hixson wrote:


Samantha Atkins wrote:


On Dec 10, 2007, at 6:29 AM, Mike Dougherty wrote:

On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:


   Dawkins trivializes religion from his comfortable first world
   perspective ignoring the way of life of hundreds of millions of
   people and offers little substitute for what religion does and
   has done for civilization and what has come out of it over the
   ages. He's a spoiled brat prude with a glaring self-righteous
   desire to prove to people with his copious superficial factoids
   that god doesn't exist by pandering to common frustrations. He
   has little common sense about the subject in general, just his


Wow.  Nice to see someone take that position on Dawkins.  I'm  
ambivalent, but I haven't seen many rational comments against him  
and his views.


Wow, you consider the above remotely rational?
A reasonable point, but Dawkins *does* frequently engage in  
premature certainty, at least from my perspective.  I would find  
him less offensive than the theistic preachers if he weren't making  
pronouncements based on his authority as a scientist.


I don't agree he is doing anything wrong or sleazy.  He is a scientist  
but his arguments are based on reason and pointing out religious  
absurdities and dangers.   As a scientist he also points out that  
science does explain many things without dogma that religion claims to  
explain but does not.   That all seems perfectly legit to me.


 He is a good scientist, and I respect him in the realm of biology  
and genetics.  When he delves into psychology and religion I feel  
like he is using his authority in one area to bolster his opinions  
in another area.


I disagree.  This is precisely what I don't see him doing.

If he were to make similar pronouncements for or against negative  
energy, people would be appalled, and he's just as out of his field  
in religion.


I don't agree that only specialists should speak about religion or its  
place in modern society.  Also he is speaking up in favor of a  
naturalistic and religion-free world view, which I think is a very  
good thing to have some active proponents for.  Religion has been  
treated with kid gloves for much too long.   A good airing out of the  
odious aspects of religion is long overdue.   If it does contain  
eternal verities then they will survive.   But much rot can and  
should be disposed of.


Unfortunately, so is everyone else.  So he's got as much right to  
his opinion as anyone else, but no more.   Ditto for Billy Graham,  
the Pope, or any other authority you might cite.


So who would you consider qualified?  Or is it just a pointless  
subject?  If so shouldn't someone at least be bothered to say so in  
the face of so many claiming it is the only important subject?




People don't usually even bother to use well defined terms, so  
frequently you can't even tell whether they are arguing or  
agreeing.  When I'm feeling cynical I feel this is on purpose, so  
that they can pick and choose their allies based on expediency.


When the terms are murky but claimed as infallible certainties  
overriding all else someone had best speak against them.


 Clearly much of what is passed off as religious doctrine is  
political expediency, and has no value whatsoever WRT arguments  
about truth.


So Dawkins is less offensive than most... but nearly equally 
wrong-headed.  OTOH, he's probably not lying about what his real beliefs 
are.  He has that over most preachers.




I don't agree he is equally wrong-headed as he actually bothers to  
question his beliefs and is open to discussion.  This is very  
refreshing compared to most religious folks I have dealt with.   He  
actually has reason and evidence for his positive beliefs.   Again  
this is a large improvement.


I also hold with a naturalistic view although I think nature has  
quite a few surprises up her sleeve yet.   In any event I don't  
think we will be in Kansas for a great deal longer.


- samantha



Re: [agi] AGI and Deity

2007-12-29 Thread Samantha Atkins


On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:


Samantha Atkins wrote:




In what way?  The limits of human probability computation to form  
accurate opinions are rather well documented.  Why wouldn't a mind  
that could compute millions of times more quickly and with far  
greater accuracy be able to form much more complex models that were  
far better at predicting future events and explaining those aspects  
of reality which are its inputs?  Again we need to get beyond the  
[likely religion-instilled] notion that only absolute knowledge  
is real (or super) knowledge.


Allow me to address what I think the questions are (I'll paraphrase):

Q1. in what way are we going to be short of super intelligence?

resp:  The simple answer is that the most intelligent of future  
intelligences will not be able to make decisions that are clearly  
superior to the best of human judgment.  This is not to say that  
weather forecasting might not improve as technology does, but meant  
to say that predictions and decisions regarding the hard problems  
that fill reality will remain hard and defy the intelligentsia's  
efforts to fully grasp them.


This is a mere assertion.  Why won't such computationally much more  
powerful intelligences make better decisions than humans can or will?





Q2. why wouldn't a mind with characteristics of ... be able to form  
more complex models?


resp:  By more complex I presume you mean having more concepts  
and relevance connections between concepts.  If so, I submit that  
Wikipedia's estimate of 1 to 5 quadrillion synapses in the human  
brain is major complexity, and if all those connections were  
properly tuned, that is awesome computing.  Tuning seems to be the  
issue.




I mean having more active data, better memory, tremendously more  
accurate and powerful computation.  How complex our brain is at the  
synaptic level has not all that much to do with how complex a model we  
can hold in our awareness and manipulate accurately.  We have no way  
of tuning the mind, and you would likely get a biological computing  
vegetable if you could.   A great deal of our brain is designed for and  
supports functions that have nothing to do with modeling or abstract  
computation.



Q3 why wouldn't a mind with characteristics of ... be able to build  
models that are far better at predicting future events?


resp:  This is very closely related to the limits of intelligence,  
but not the only factor contributing to intelligence.  Predictable  
events are easy in a few domains, but are they an abundant part of  
life? Abundant enough to say that we will be able to make super  
predictions?  Billions of daily decisions are made, and any one of  
them could have a butterfly effect.




Not really, and it ignores the actual question.   If a given set of  
factors of interest is inter-related with a larger number of  
variables than humans can deal with, then an intelligence that can work  
with such more complex inter-dependencies will make better decisions  
in those areas.  We already have expert systems that make better  
decisions more dependably in specialized areas than even most human  
experts in those domains.   I see no reason to expect this to decrease  
or hit a wall.   And this is just using weak AI.


Q4 why wouldn't a mind... be far better able to explain aspects of  
reality?


resp:  may I propose a simple exercise?  Consider yourself to be  
Bill Gates in philanthropic mode (ready to give to the world.)  Make  
a few decisions about how to do so, then explain why you chose the  
avenue you took.  If you didn't delegate this to committee, would  
you be able to explain how the checks you wrote were the best  
choices in reality?




This is not relevant to the question at hand.   Do you think an  
intelligence with greater memory, computational capacity, and vastly  
greater speed can keep track of more data and generate better  
hypotheses to explain the data, and tests and refinements of those  
hypotheses?   I think the answer is obvious.








Deeper thinking - that means considering more options, doesn't it?   
If so, does extra thinking provide benefit if the evaluation  
system is only at level X?



What does this mean?  How would you separate thinking from the  
evaluation system?  What sort of evaluation system do you  
believe can actually exist in reality that has characteristics  
different from those you appear to consider woefully limited?


Q5 - what does it mean, or how do you separate thinking from an  
evaluation system?


resp:  Simple example in two statements:
1.  Apple A is bigger than Apple B.
2.  Apples are better than oranges.

Does it matter how much you know about apples and oranges?  Will  
deep thinking about the DNA of apples, the proteins of apples, the  
color of apples or history of apples, help to prove the second  
statement? Will deep analysis of oranges prove anything?


Will fast and accurate recall of every related 

RE: [agi] AGI and Deity

2007-12-29 Thread John G. Rose
 From: Samantha Atkins [mailto:[EMAIL PROTECTED]
 
 On Dec 28, 2007, at 5:34 AM, John G. Rose wrote:
 
  Well I shouldn't berate the poor dude... The subject of rationality is
  pertinent though as the way that humans deal with unknown involves
  irrationality especially in relation to deitical belief establishment.
  Before we had all the scientific instruments and methodologies
  irrationality
  played an important role. How many AGIs have engineered
  irrationality as
  functional dependencies? Scientists and computer geeks sometimes
  overly
  apply rationality in irrational ways. The importance of irrationality
  perhaps is underplayed as before science, going from primordial
  sludge to
   the age of reason was quite a large percentage of man's time spent in
  existence... and here we are.
 
 Methinks there is no clear notion of rationality or rational in
 the above paragraph.  Thus I have no idea of what you are actually
 saying.  Rational is not synonymous with science.  What forms of
 irrationality do you think have a place in an AGI and why?  What does
 the percentage of time supposedly spent in some state have to do with
 the importance of such a state, especially with respect to an AGI?
 

What I am trying to zero in on, Samantha, is the methodology of reasoning
that humankind uses to deal with unknowns. Example - 10,000 years ago, sun -
it's hot, comes up every day, gives life, need it or plants will die. BUT you
being the avant-garde answer-finder of the local tribe of semi-civilized
folk DON'T have much in terms of science and boolean logic to start figuring
out what it really is. So various approaches are used to identify it, apply
utility to it, and make the reasoning part of everyday operations of the
people. The rationality that is used is mixed with irrationality. Why? We are
not following clear-cut probabilities here; there are other processes
involved, and if these are in your understanding of what "rational" is,
please feel free to enlighten. Man is not a purely rational being, and if
reasoning in an AGI is based on just maximizing probabilities it is not enough.

You could say "well, I want a pure intelligence that is 100% rational, and
man's intelligence deviates from pure." It probably does, but a pure
intelligence may not deem man's (human's) existence a rational
expenditure of resources and may want to terminate him. This is obviously bad.
Us biological blobs of useless resource-consuming waste want a pure
intelligence to keep us around (but not like in the Matrix :)). So how do we
fit in rationally, or do we make exceptions? It is not pure probability
optimization. There is "irrationality," for lack of a better term; IOW this
irrationality needs to be explored more and broken up. The irrationality
is relative; it is a mask, a deception device, it has social functions, etc.
etc...

John



Re: [agi] AGI and Deity

2007-12-29 Thread Stan Nilsen

Greetings Samantha,

I'll not bother with detailed explanations since they are easily 
dismissed with a hand wave and a categorization of "irrelevant."


For anyone who might be interested in the question of:
Why wouldn't a super intelligence be better able to explain the aspects 
of reality?  (assuming the point is providing explanation for choices.)

 I've placed an example case online at

http://www.footnotestrongai.com/examples/bebillg.html

It's an exploration based on becoming Bill Gates (at least having 
control over his money) and how a supercomputer might offer 
explanations given the situation.  Pretty painless, easy read.


I find the values-based nature of our world highly relevant to the 
concept of an emerging super brain that will make super decisions.


Stan Nilsen


Samantha Atkins wrote:


On Dec 26, 2007, at 7:21 AM, Stan Nilsen wrote:


Samantha Atkins wrote:




In what way?  The limits of human probability computation to form 
accurate opinions are rather well documented.  Why wouldn't a mind 
that could compute millions of times more quickly and with far 
greater accuracy be able to form much more complex models that were 
far better at predicting future events and explaining those aspects 
of reality which are its inputs?  Again we need to get beyond the 
[likely religion-instilled] notion that only absolute knowledge is 
real (or super) knowledge.


Allow me to address what I think the questions are (I'll paraphrase):

Q1. in what way are we going to be short of super intelligence?

resp:  The simple answer is that the most intelligent of future 
intelligences will not be able to make decisions that are clearly 
superior to the best of human judgment.  This is not to say that 
weather forecasting might not improve as technology does, but meant to 
say that predictions and decisions regarding the hard problems that 
fill reality will remain hard and defy the intelligentsia's efforts 
to fully grasp them.


This is a mere assertion.  Why won't such computationally much more 
powerful intelligences make better decisions than humans can or will?





Q2. why wouldn't a mind with characteristics of ... be able to form 
more complex models?


resp:  By more complex I presume you mean having more concepts and 
relevance connections between concepts.  If so, I submit that 
Wikipedia's estimate of 1 to 5 quadrillion synapses in the human brain 
is major complexity, and if all those connections were properly tuned, 
that is awesome computing.  Tuning seems to be the issue.




I mean having more active data, better memory, tremendously more 
accurate and powerful computation.  How complex our brain is at the 
synaptic level has not all that much to do with how complex a model we 
can hold in our awareness and manipulate accurately.  We have no way 
of tuning the mind, and you would likely get a biological computing 
vegetable if you could.   A great deal of our brain is designed for and 
supports functions that have nothing to do with modeling or abstract 
computation.



Q3 why wouldn't a mind with characteristics of ... be able to build 
models that are far better at predicting future events?


resp:  This is very closely related to the limits of intelligence, but 
not the only factor contributing to intelligence.  Predictable events 
are easy in a few domains, but are they an abundant part of life? 
Abundant enough to say that we will be able to make super 
predictions?  Billions of daily decisions are made, and any one of 
them could have a butterfly effect.




Not really, and it ignores the actual question.   If a given set of 
factors of interest is inter-related with a larger number of variables 
than humans can deal with, then an intelligence that can work with such 
more complex inter-dependencies will make better decisions in those 
areas.  We already have expert systems that make better decisions more 
dependably in specialized areas than even most human experts in those 
domains.   I see no reason to expect this to decrease or hit a wall.   
And this is just using weak AI.


Q4 why wouldn't a mind... be far better able to explain aspects of 
reality?


resp:  may I propose a simple exercise?  Consider yourself to be Bill 
Gates in philanthropic mode (ready to give to the world.)  Make a few 
decisions about how to do so, then explain why you chose the avenue 
you took.  If you didn't delegate this to committee, would you be able 
to explain how the checks you wrote were the best choices in reality?




This is not relevant to the question at hand.   Do you think an 
intelligence with greater memory, computational capacity, and vastly 
greater speed can keep track of more data and generate better hypotheses 
to explain the data, and tests and refinements of those hypotheses?   I 
think the answer is obvious.








Deeper thinking - that means considering more options, doesn't it?  
If so, does extra thinking provide benefit if the evaluation system 
is only at level X?




RE: [agi] AGI and Deity

2007-12-28 Thread John G. Rose
 But the traditional gods didn't represent the unknowns, but rather the
 knowns.  A sun god rose every day and set every night in a regular
 pattern.  Other things which also happened in this same regular pattern
 were adjunct characteristics of the sun god.   Or look at some of their
 names, carefully:  Aphrodite, she who fucks.  I.e., the characteristic
 of all Woman that is embodied in eros.  (Usually the name isn't quite
 that blatant.)


Well yes, gods were (are) sort of like distributed knowledge bases. The
distributed entity may or may not exist if you took the humans out of the
equation. So if you nuked the earth when Aphrodite was popular, does she still
exist? Maybe residual molecular and quantum permutations of some sort,
distributed, but the majority of her existed in the social human substrate.
She was added to and changed over time, some of the information compressed
and extractable lossily, but some of the knowledge not extractable beyond
compression, distorted and twisted. But she was composed of both known and
unknown representation - but contained utility.
 
 Gods represent the regularities of nature, as embodied in our mental
 processes without the understanding of how those processes operated.
 (Once the processes started being understood, the gods became less
 significant.)

Yes, this is the pattern. I'm arguing that much of our individual and social
knowledge has layers and layers directly related to deities, and even more so
things like taboos, myths, ceremonies, etc., even though many people today
totally renounce any sort of religious belief. IOW it is so baked into us,
but the question is how much of it is baked into knowledge and intelligence
itself.
 
 Sometimes there were chance associations...and these could lead to
 strange transformations of myth when things became more understood.  In
 Sumeria the goddess of love was associated with (identified with) the
 evening star and the god of war was associated with (identified with)
 the morning star.  When knowledge of astronomy advanced it was realized
 that those two were identical, and they ended up with Ishtar, the
 goddess of Love and War.  Because lovers tend to meet in the early
 evening, and warriors tend to try to launch the attack as soon as they
 can see what's going on (to catch the victims by surprise).
 
 This is a small part of why I believe that human intelligence is largely
 a development from pattern matching.
 

Certainly, and the whole pattern-matching function that is in our brains may
or may not be entirely the most efficient mechanism available, due to the way
it has evolved. Evolution can create extremely efficient mechanisms and
also inefficient ones.

John





RE: [agi] AGI and Deity

2007-12-28 Thread John G. Rose
  On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:
 
 
   Dawkins trivializes religion from his comfortable first world
 perspective
  ignoring the way of life of hundreds of millions of people and offers
 little
  substitute for what religion does and has done for civilization and
 what has
  come out of it over the ages. He's a spoiled brat prude with a glaring
  self-righteous desire to prove to people with his copious superficial
  factoids that god doesn't exist by pandering to common frustrations.
 He has
  little common sense about the subject in general, just his
  
 
  Wow.  Nice to see someone take that position on Dawkins.  I'm
 ambivalent,
  but I haven't seen many rational comments against him and his views.
 
 Nice?  Why?  I thought you wanted rational comments.  Rational by
 definition means comments giving reasons, which the above do not.
 

Well I shouldn't berate the poor dude... The subject of rationality is
pertinent though as the way that humans deal with unknown involves
irrationality especially in relation to deitical belief establishment.
Before we had all the scientific instruments and methodologies irrationality
played an important role. How many AGIs have engineered irrationality as
functional dependencies? Scientists and computer geeks sometimes overly
apply rationality in irrational ways. The importance of irrationality
perhaps is underplayed as before science, going from primordial sludge to
  the age of reason was quite a large percentage of man's time spent in
existence... and here we are.

John



RE: [agi] AGI and Deity

2007-12-28 Thread John G. Rose
 From: Samantha Atkins [mailto:[EMAIL PROTECTED]
 
  Indeed.  Some form of instantaneous information transfer would be
  required for unlimited growth.   If it also turned out that true time
  travel was possible then things would get really spooky.  Alpha and
  Omega.  Mind without end.
 

I think that it is going to be constrained by the speed of light, especially
in the very beginning and especially if it is software-based.
Nanotechnological AGI may be able to figure out a way, if it is not
engineered initially, to transform itself from a super-atomic embodiment to
a subatomic, quantum, or sub-quantum embodiment, and potentially thwart the
speed of light and even communicate and/or transfer/replicate to other
multiverses. I don't know if intermultiverse communication is constrained by
the speed of light; I think it is not, depending on the multiverse instance
and communication medium.

But initially, software AGI is most definitely constrained. If it's going to
become more efficient intelligence-wise within physical and computational
resource constraints, it will need to come up with better stuff
mathematically/algorithmically. And the mathematical constraint space is
limited by other factors. The thing is definitely constrained if it cannot
alter its physical medium (electronic - CPU, memory, etc.). How much
intelligence and knowledge can be achieved with a particular amount of
resources is up for debate, I believe. But if intelligence has units, you
could probably figure out how much intelligence would maximally fit into a
finite resource set...

John





Re: [agi] AGI and Deity

2007-12-28 Thread Samantha Atkins


On Dec 28, 2007, at 5:34 AM, John G. Rose wrote:


Well I shouldn't berate the poor dude... The subject of rationality is
pertinent though as the way that humans deal with unknown involves
irrationality especially in relation to deitical belief establishment.
Before we had all the scientific instruments and methodologies  
irrationality
played an important role. How many AGIs have engineered  
irrationality as
functional dependencies? Scientists and computer geeks sometimes  
overly

apply rationality in irrational ways. The importance of irrationality
perhaps is underplayed as before science, going from primordial  
sludge to

the age of reason was quite a large percentage of man's time spent in
existence... and here we are.


Methinks there is no clear notion of rationality or rational in  
the above paragraph.  Thus I have no idea of what you are actually  
saying.  Rational is not synonymous with science.   What forms of  
irrationality do you think have a place in an AGI and why?   What does  
the percentage of time supposedly spent in some state have to do with  
the importance of such a state, especially with respect to an AGI?


- samantha



Re: [agi] AGI and Deity

2007-12-26 Thread Stan Nilsen

Samantha Atkins wrote:


On Dec 20, 2007, at 9:18 AM, Stan Nilsen wrote:


Ed,

I agree that machines will be faster and may have something equivalent 
to the trillions of synapses in the human brain.


It isn't the modeling device that limits the level of intelligence, 
but rather what can be effectively modeled.  "Effectively" meaning 
what can be used in a real-time judgment system.


Probability is the best we can do for many parts of the model.  This 
may give us decent models but leave us short of super intelligence.





In what way?  The limits of human probability computation to form 
accurate opinions are rather well documented.  Why wouldn't a mind that 
could compute millions of times more quickly and with far greater 
accuracy be able to form much more complex models that were far better 
at predicting future events and explaining those aspects of reality 
which are its inputs?  Again we need to get beyond the [likely 
religion-instilled] notion that only absolute knowledge is real (or 
super) knowledge.


Allow me to address what I think the questions are (I'll paraphrase):

Q1. in what way are we going to be short of super intelligence?

resp:  The simple answer is that the most intelligent of future 
intelligences will not be able to make decisions that are clearly 
superior to the best of human judgment.  This is not to say that weather 
forecasting might not improve as technology does, but meant to say that 
predictions and decisions regarding the hard problems that fill 
reality will remain hard and defy the intelligentsia's efforts to fully 
grasp them.


Q2. why wouldn't a mind with characteristics of ... be able to form more 
complex models?


resp:  By more complex I presume you mean having more concepts and 
relevance connections between concepts.  If so, I submit that 
Wikipedia's estimate of 1 to 5 quadrillion synapses in the human brain 
is major complexity, and if all those connections were properly tuned, 
that is awesome computing.  Tuning seems to be the issue.


Q3 why wouldn't a mind with characteristics of ... be able to build 
models that are far better at predicting future events?


resp:  This is very closely related to the limits of intelligence, but 
not the only factor contributing to intelligence.  Predictable events 
are easy in a few domains, but are they an abundant part of life? 
Abundant enough to say that we will be able to make super predictions? 
 Billions of daily decisions are made, and any one of them could have a 
butterfly effect.


Q4 why wouldn't a mind... be far better able to explain aspects of 
reality?


resp:  may I propose a simple exercise?  Consider yourself to be Bill 
Gates in philanthropic mode (ready to give to the world.)  Make a few 
decisions about how to do so, then explain why you chose the avenue you 
took.  If you didn't delegate this to committee, would you be able to 
explain how the checks you wrote were the best choices in reality?









Deeper thinking - that means considering more options, doesn't it?  If 
so, does extra thinking provide benefit if the evaluation system is 
only at level X?





What does this mean?  How would you separate thinking from the 
evaluation system?  What sort of evaluation system do you believe 
can actually exist in reality that has characteristics different from 
those you appear to consider woefully limited?


Q5 - what does it mean, or how do you separate thinking from an 
evaluation system?


resp:  Simple example in two statements:
1.  Apple A is bigger than Apple B.
2.  Apples are better than oranges.

Does it matter how much you know about apples and oranges?  Will deep 
thinking about the DNA of apples, the proteins of apples, the color of 
apples or history of apples, help to prove the second statement? Will 
deep analysis of oranges prove anything?


Will fast and accurate recall of every related fact about apples and 
oranges help in our proof of statement 2?  Even if the second statement 
had been "Apple A is better than Apple B," we would have had trouble 
deciding if the superior color of A is greater than the better taste of B.


This is what I mean by evaluation system.  Foolish example?  Think 
instead "economic prosperity is better than CO2 pollution" if you want 
to be real-world.
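
The thinking/evaluation split can be put in a few lines (a hypothetical 
sketch; the fruit attributes and weights are invented).  Statement 1 
follows from measurement alone; statement 2 has no answer until a value 
function is supplied, and different value functions flip it:

# Statement 1 ("Apple A is bigger than Apple B") is decidable from
# measurable attributes. Statement 2 ("Apples are better than oranges")
# needs a value function, and the answer depends on the weights chosen.

apple_a_size, apple_b_size = 8.0, 6.5
print(apple_a_size > apple_b_size)   # Statement 1: True, measurement only

apple_b = {"taste": 8.0, "vitamin_c": 5.0}
orange  = {"taste": 7.0, "vitamin_c": 9.0}

def worth(fruit, weights):
    return sum(w * fruit[k] for k, w in weights.items())

taste_first  = {"taste": 1.0, "vitamin_c": 0.1}
health_first = {"taste": 0.1, "vitamin_c": 1.0}

print(worth(apple_b, taste_first)  > worth(orange, taste_first))   # True
print(worth(apple_b, health_first) > worth(orange, health_first))  # False

No amount of deeper recall about apples changes the second statement 
until the weights themselves are justified, which is exactly the point 
above.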


Q6 - what sort of evaluation system can exist that has characteristics 
differing from what I consider woefully limited.


resp:  I'm not clear what communicated the idea that I consider either 
the machine intelligence or the human intelligence to be woefully 
limited.  I concede that machine intelligence will likely be as good as 
human intelligence and maybe better than the average human.  Is this 
super?
Was the "woefully inadequate" in reference to a personal opinion?  Those 
are not my words; I consider human intelligence a work of art, brilliant.








Yes, faster is better than slower, unless you don't have all the 
information yet.  A premature answer could be a jump to conclusion 
that   

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-22 Thread J Storrs Hall, PhD
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote:
 As a lawyer, I can tell you there is no clear agreed upon definition for
 most words, but that doesn't stop most of us from using un-clearly defined
 words productively many times every day for communication with others.  If
 you can only think in terms of what is exactly defined you will be denied
 life's most important thoughts.

And in particular, denied the ability to create a working AI. It's the 
inability to grasp this insight that I call "formalist float" in the book 
(yeah, I wish I could have come up with a better phrase...) and to which I 
attribute symbolic AI's Glass Ceiling.

Josh



Re: [agi] AGI and Deity

2007-12-22 Thread Mike Dougherty
On Dec 22, 2007 8:15 PM, Philip Goetz [EMAIL PROTECTED] wrote:
   Dawkins trivializes religion from his comfortable first world perspective
  ignoring the way of life of hundreds of millions of people and offers little
  substitute for what religion does and has done for civilization and what has
  come out of it over the ages. He's a spoiled brat prude with a glaring
  self-righteous desire to prove to people with his copious superficial
  factoids that god doesn't exist by pandering to common frustrations. He has
  little common sense about the subject in general, just his
  
 
  Wow.  Nice to see someone take that position on Dawkins.  I'm ambivalent,
  but I haven't seen many rational comments against him and his views.

 Nice?  Why?  I thought you wanted rational comments.  Rational by
 definition means comments giving reasons, which the above do not.

I used the term "nice" where perhaps 'surprising' or 'refreshing'
might have been more appropriate to my intention.  Much of the list I
have read is so anti-religion that I would not expect an AGI thread
to be equally anti-Dawkins.

My use of "rational" might have been sub-optimal also.  Typically
"anti-" groups exist because they are threatened by whatever it is they
are against.  It appeared to me that John Rose was making a somewhat
informed dismissal of Dawkins' theory rather than a
kneejerk/conditioned a priori reaction.  Maybe I assumed those opinions
were formed in response to common domain knowledge of Dawkins.

I responded primarily to your question "why?" - Hopefully this explains 
the motivation for my original comment without introducing too many new 
'irrational' arguments.   :)



Re: [agi] AGI and Deity

2007-12-21 Thread Stan Nilsen

Greetings j.k.

one response:  Given the example of exploring all math avenues...

1. (possible?) I'm not able to appreciate the task being considered, but 
I'm willing to take your word that it is possible and desirable.


2. (qualify as way beyond) I submit that it is way beyond the human 
computational ability.  So is hand-processing all the credit card 
transactions that go on every day.  Is this the essence of intelligence?


Yes, I see a third alternative.
3. (perception) Will this processing power find this task to be more 
important than:  abolishing natural death, developing ubiquitous 
near-free energy technologies, designing ships to the stars, etc.?


and add to those priorities

 a) the battle with humans, be it cold war or a more aggressive 
strategy; either way there is war modeling to be done, and it will 
consume cycles of the life simulator


b) the task to create greater and greater artificial intelligence. 
What could be more important than calculating in femtoseconds?  Imagine 
what could be done. (really need to use emoticons here...)


Bottom line: How the "way beyond human" capabilities are applied will be 
the result of the intelligence functions.   Isn't the essence of the 
intelligence functions how one categorizes and simplifies the issues?


Guess we'll wait for the arrival of this new life form before we learn 
what we would commit our lives to if we were smarter.


Stan


j.k. wrote:

Hi Stan,

On 12/20/2007 07:44 PM, Stan Nilsen wrote:

I understand that it's all uphill to defy the obvious.  For the

record, today I do believe that intelligence way beyond human
intelligence is not possible.

I understand that this is your belief. I was trying to challenge you to
make a strong case that it is in fact *likely* to be true (rather than
just merely possible that it's true), which I do not believe you have
done. I think you mostly just stated what you would like to be the case
-- or what you intuit to be the case (there is rarely much of a
difference) -- and then talked of the consequences that might follow
*if* it were the case.

I'm still a little unsure what exactly you mean when you say
intelligence 'way beyond' human intelligence is not possible.

Take my example of an intelligence that could in seconds recreate all
known mathematics, and also all the untaken paths that mathematicians
could have gone down but didn't (*yet*). It seems to me you have one of
two responses to this scenario: (1) you might assert that it could
never happen because it is not possible (please elaborate if so); or (2)
you might believe that it is possible and could happen, but that it
would not qualify as 'way beyond' human intelligence (please elaborate
if so). Which is it? Or is there another alternative?


For the moment, do I say anything new with the following example?  I

believe it contains the essence of my argument about intelligence.

A simple example:
 Problem: find the optimal speed limit of a specific highway.

Who is able to judge what the optimal is? 


Optimality is always relative to some criteria. Until the criteria are
fixed, any answer is akin to answering "what is the optimal quuz of
fah?" No answer is correct because no answer is wrong -- or all are
right or all wrong.

In this case, would a simpleton have as good an answer? 


It depends on the criteria. For some criteria, a simpleton has
sufficient ability to answer optimally. For example, if the optimal
limit is defined in terms of its closeness to 42 MPH, we can all
determine the optimal speed limit.

Perhaps the simpleton says, "the limit is how fast you want to go." 


And that is certainly the optimal solution according to some criteria.
Just as certainly, it is absolutely wrong according to other criteria
(e.g., minimization of accidents). As long as the criteria are
unspecified, there can of course be disagreement.
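
The criterion-dependence can be shown in one small computation 
(hypothetical curves; the safety/delay tradeoff is a made-up stand-in):

# "Optimal" speed limit under two different criteria. Fix the
# criterion and the optimum follows; change it and the optimum moves.

def closeness_to_42(v):           # the toy criterion above
    return -abs(v - 42)

def safety_minus_delay(v):        # invented tradeoff curve
    accidents = (v / 30.0) ** 3   # rises steeply with speed
    delay = 60.0 / v              # falls with speed
    return -(accidents + delay)

candidates = range(25, 76)
print(max(candidates, key=closeness_to_42))     # 42
print(max(candidates, key=safety_minus_delay))  # 27 with these curves

Two supreme intellects handed different criteria will disagree on the 
number without either making an error, which fits both sides of this 
exchange.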


The 100,000 strong intellect may gyrate through many deep thoughts and

come back with 47.8 miles per hour as the best speed limit to
establish.  Wouldn't it be interesting to see how this number was
derived?  And, better still, would another 100K rated intellect come up
with exactly the same number? If given more time, would the 100K rated
intellects eventually agree?

My belief is that they will not agree.  This is life, the thing we model.


Reality *is* messy, and supreme intellects might come to different
answers based on different criteria for optimality, but that isn't an
argument that there can be no phase transition in intelligence or that
greater intelligence is not useful for many questions and problems.

Is the point of the question to suggest that because you think that
question might not benefit from greater intelligence, that you believe
most questions will not benefit from greater intelligence? Even if that
were the case, it would have no bearing at all on whether greater
intelligence is possible, only whether it is desirable. You seem to be
arguing that it's not possible, not that it's possible but 

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult, probably it's deeper than I am.

The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of
the agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.
This optimal behavior is not computable in general, which means there is no
upper bound on intelligence.
Nonsense.  None of this follows from the AIXI paper.  I have explained 
why several times in the past, but since you keep repeating these kinds 
of declarations about it, I feel obliged to repeat that these assertions 
are speculative extrapolations that are completely unjustified by the 
paper's actual content.


Yes it does.  Hutter proved that the optimal behavior of an agent in a
Solomonoff distribution of environments is not computable.  If it was
computable, then there would be a finite solution that was maximally
intelligent according to Hutter and Legg's definition of universal
intelligence.


Still more nonsense:  as I have pointed out before, Hutter's implied 
definitions of agent and environment and intelligence are not 
connected to real world usages of those terms, because he allows all of 
these things to depend on infinities (infinitely capable agents, 
infinite numbers of possible universes, etc.).


If he had used the terms djshgd, uioreou and astfdl instead of 
agent, environment and intelligence, his analysis would have been 
fine, but he did not.  Having appropriated those terms he did not show 
why anyone should believe that his results applied in any way to the 
things in the real world that are called agent and environment and 
intelligence.  As such, his conclusions were bankrupt.


Having pointed this out for the benefit of others who may have been 
overly impressed by the Hutter paper, just because it looked like 
impressive maths, I have no interest in discussing this yet again.




Richard Loosemore



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Stan Nilsen [EMAIL PROTECTED] wrote:
  
  Matt,
 
  Thanks for the links sent earlier.  I especially like the paper by Legg 
  and Hutter regarding measurement of machine intelligence.  The other 
  paper I find difficult, probably it's deeper than I am.
  
  The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
  formal model of an agent and an environment as a pair of interacting Turing
  machines exchanging symbols.  In addition, at each step the environment also
  sends a reward signal to the agent.  The goal of the agent is to maximize
  the accumulated reward.  Hutter proves that if the environment is computable
  or has a computable probability distribution, then the optimal behavior of
  the agent is to guess at each step that the environment is simulated by the
  shortest program consistent with all of the interaction observed so far.
  This optimal behavior is not computable in general, which means there is no
  upper bound on intelligence.
 
 Nonsense.  None of this follows from the AIXI paper.  I have explained 
 why several times in the past, but since you keep repeating these kinds 
 of declarations about it, I feel obliged to repeat that these assertions 
 are speculative extrapolations that are completely unjustified by the 
 paper's actual content.

Yes it does.  Hutter proved that the optimal behavior of an agent in a
Solomonoff distribution of environments is not computable.  If it was
computable, then there would be a finite solution that was maximally
intelligent according to Hutter and Legg's definition of universal
intelligence.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Still more nonsense:  as I have pointed out before, Hutter's implied 
 definitions of agent and environment and intelligence are not 
 connected to real world usages of those terms, because he allows all of 
 these things to depend on infinities (infinitely capable agents, 
 infinite numbers of possible universes, etc.).
 
 If he had used the terms djshgd, uioreou and astfdl instead of 
 agent, environment and intelligence, his analysis would have been 
 fine, but he did not.  Having appropriated those terms he did not show 
 why anyone should believe that his results applied in any way to the 
 things in the real world that are called agent and environment and 
 intelligence.  As such, his conclusions were bankrupt.
 
 Having pointed this out for the benefit of others who may have been 
 overly impressed by the Hutter paper, just because it looked like 
 impressive maths, I have no interest in discussing this yet again.

I suppose you will also dismiss any paper that mentions a Turing machine as
irrelevant to computer science because real computers don't have infinite
memory.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 6:56 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Richard Loosemore [EMAIL PROTECTED] wrote:
  Still more nonsense:  as I have pointed out before, Hutter's implied
  definitions of agent and environment and intelligence are not
  connected to real world usages of those terms, because he allows all of
  these things to depend on infinities (infinitely capable agents,
  infinite numbers of possible universes, etc.).
 
  If he had used the terms djshgd, uioreou and astfdl instead of
  agent, environment and intelligence, his analysis would have been
  fine, but he did not.  Having appropriated those terms he did not show
  why anyone should believe that his results applied in any way to the
  things in the real world that are called agent and environment and
  intelligence.  As such, his conclusions were bankrupt.
 
  Having pointed this out for the benefit of others who may have been
  overly impressed by the Hutter paper, just because it looked like
  impressive maths, I have no interest in discussing this yet again.

 I suppose you will also dismiss any paper that mentions a Turing machine as
 irrelevant to computer science because real computers don't have infinite
 memory.


Your assertions here do seem to have an interpretation in which they are
correct, but it has little to nothing to do with practical matters.

For example, if the 'intelligence' thing as defined by some obscure model
is measured as I(x) = 1 - 1/x, where x depends on the particular design, and
the model investigates properties of an Ultimate Intelligence with I = 1, it
doesn't mean that there is any point in building a system with x > 1000
if we already have one with x = 1000, since it will provide only
marginal improvement. You can't get away with a qualitative conclusion
like "and so, there is always a better mousetrap" without some
quantitative reasons for that.
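
Nesov's numeric point, worked out under his hypothetical measure:

# Worked numbers for the hypothetical measure I(x) = 1 - 1/x: past
# x = 1000 the remaining headroom is tiny, so a qualitatively "better
# mousetrap" buys almost nothing quantitatively.

I = lambda x: 1 - 1 / x
print(I(1000))               # 0.999
print(I(1000000))            # 0.999999
print(I(1000000) - I(1000))  # ~0.000999 gained for a 1000x larger x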

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney

--- Vladimir Nesov [EMAIL PROTECTED] wrote:

 Your assertions here do seem to have an interpretation in which they are
 correct, but it has little to nothing to do with practical matters.
 
 For example, if 'intelligence' as defined by some obscure model is
 measured as I(x) = 1 - 1/x, where x depends on the particular design, and
 the model investigates the properties of an Ultimate Intelligence with
 I = 1, it doesn't mean that there is any point in building a system with
 x > 1000 if we already have one with x = 1000, since it will provide only
 a marginal improvement. You can't get away with a qualitative conclusion
 like "and so, there is always a better mousetrap" without some
 quantitative reasons for it.

The problem here seems to be that we can't agree on a useful definition of
intelligence.  As a practical matter, we are interested in an agent meeting
goals in a specific environment, or a finite set of environments, not all
possible environments.  In the case of environments having bounded space and
time complexity, Hutter proved there is a computable (although intractable)
solution, AIXItl.  In the case of a set of environments having bounded
algorithmic complexity where the goal is prediction, Legg proved in
http://www.vetta.org/documents/IDSIA-12-06-1.pdf that there again is a
solution.  So in either case, there is one agent that does better than all
others over a finite set of environments, thus an upper bound on intelligence
by these measures.

If you prefer to use the Turing test rather than a more general test of
intelligence, then superhuman intelligence is not possible by his
definition, because Turing did not define a test for it.  Humans cannot
recognize intelligence superior to their own.  For example, adult humans
easily recognized superior intelligence when William James Sidis (see
http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers
at 18 months and was admitted to Harvard at age 11, but you would not
expect children his own age to have recognized it.  Likewise, when Sidis
was an adult, most people merely thought his behavior was strange, rather
than intelligent, because they did not understand it.

More generally, you cannot test for universal intelligence without
environments of at least the same algorithmic complexity as the agent being
tested, because otherwise (as Legg showed) simpler agents could pass the same
tests.
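
For reference, the universal intelligence measure in the Legg and Hutter 
line of work cited above has (if I recall the paper correctly) the 
following form, in LaTeX notation:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where E is a class of computable environments, K(\mu) is the Kolmogorov 
complexity of the environment \mu, and V^{\pi}_{\mu} is the expected total 
reward of agent \pi in \mu.  The 2^{-K(\mu)} weighting is what ties the 
measure to algorithmic complexity.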


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Vladimir Nesov
On Dec 21, 2007 10:36 PM, Matt Mahoney [EMAIL PROTECTED] wrote:


 The problem here seems to be that we can't agree on a useful definition of
 intelligence.  As a practical matter, we are interested in an agent meeting
 goals in a specific environment, or a finite set of environments, not all
 possible environments.  In the case of environments having bounded space and
 time complexity, Hutter proved there is a computable (although intractable)
 solution, AIXItl.  In the case of a set of environments having bounded
 algorithmic complexity where the goal is prediction, Legg proved in
 http://www.vetta.org/documents/IDSIA-12-06-1.pdf that there again is a
 solution.  So in either case, there is one agent that does better than all
 others over a finite set of environments, thus an upper bound on intelligence
 by these measures.

Matt,

The problem with referring to these works this way is that the statements
you are trying to justify are pretty obvious and don't require these
particular works to support them. The only difference is the use of
particular terms such as 'intelligence', which in itself is arbitrary
and doesn't say anything. You have to refer to specific mathematical
structures.


 If you prefer to use the Turing test rather than a more general test of
 intelligence, then superhuman intelligence is not possible by his
 definition, because Turing did not define a test for it.  Humans cannot
 recognize intelligence superior to their own.  For example, adult humans
 easily recognized superior intelligence when William James Sidis (see
 http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers
 at 18 months and was admitted to Harvard at age 11, but you would not
 expect children his own age to have recognized it.  Likewise, when Sidis
 was an adult, most people merely thought his behavior was strange, rather
 than intelligent, because they did not understand it.

I don't 'prefer' any such test; I don't know of any satisfactory
solutions to this problem. Intelligence is 'what brains do'; that is
all we can say at the current level of theory, and I suspect that is the
end of the story until we are fairly close to a solution. You can discuss
elaborations within a particular approach, but then again you'd have to
provide more specifics.


 More generally, you cannot test for universal intelligence without
 environments of at least the same algorithmic complexity as the agent being
 tested, because otherwise (as Legg showed) simpler agents could pass the same
 tests.

For the real world it's a useless observation. And no, it doesn't model
your example with humans above; it's just a superficial similarity.


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Mike Tintner

Matt: "Humans cannot recognize intelligence superior to their own."

This, like this whole thread, is not totally but highly unimaginative. No one 
is throwing out any interesting ideas about what a superior intelligence 
might entail. Mainly it's the same old mathematical, linear approach. 
Bo-o-oring.


The Man Who Fell To Earth had one interesting thought about an obviously, 
recognizably superior intelligence: Bowie watching ten TVs, following ten 
arguments, so to speak, at once.


A thought off the proverbial top - how about if a million people could be 
networked to think about the same creative problem, and any radically new 
ideas could be instantly recognized and transmitted to everyone - some kind 
of variation of the global workspace theory? [There would be vast benefits 
from sync'ing a million different POV's]


How about if the brain could track down every thought it had ever had - 
guaranteed?  (As distinct obviously from its present appallingly 
hit-and-miss filing system which can take forever/never to track down 
information that is definitely there, somewhere). [And what would be the 
negatives of perfect memory? Or why is perfect memory impossible?]


How about not just mirror neurons, but a mirror nervous system/body, that 
would enable you to become another human being/creature with a high degree 
of fidelity?


How about a brain that could instantly check any generalisation against 
EVERY particular instance in its memory?


Don't you read any superhero/superpower comics or sci-fi? Obviously there 
are an infinite number of very recognisable forms which a superhuman 
intelligence could take.


How about some stimulating ideas about a superintelligence, as opposed to 
accountants' numbers?


P.S. What would be the problems of integrating an obvious superbrain, 
living or mechanical, that had any of the powers above, with a body? No 
body, no intelligence. And there will be problems.







RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter

I fail to see why it would not at least be considered likely that a
mechanical brain that could do all the major useful mental processes the
human mind does, but do them much faster over a much, much larger recorded
body of experience and learning, would be capable of greater intelligence
than humans, by most reasonable definitions of intelligence.


By "super-human intelligence" I mean an AGI able to learn and perform a
large diverse set of complex tasks in complex environments faster and better
than humans, such as being able:

- to read information more quickly and understand its implications more
deeply;

- to interpret visual scenes faster and in greater depth;

- to draw and learn appropriate and/or more complex generalizations more
quickly;

- to remember, and appropriately recall from, a store of knowledge hundreds
or millions of times larger, more quickly;

- to instantiate behaviors and mental models in a context-appropriate way
more quickly, deeply, and completely;

- to respond to situations in a manner that appropriately takes into account
more of the relevant context in less time;

- to consider more of the implications, interconnections, analogies, and
possible syntheses of all the recorded knowledge in all the fields studied
by all the world's PhDs;

- to program computers to perform more complex and appropriate tasks more
quickly and reliably;

- etc.

I have seen no compelling reasons on this list to believe such machines
cannot be built within 5 to 20 years -- although it is not an absolute
certainty they can.  For example, Richard Loosemore's complexity concerns
cannot be totally swept away at this time, but the success of small
controlled-chaos programs like Copycat in dealing with such concerns, using
what I have called "guiding-hand" techniques (techniques similar to those of
Adam Smith's "invisible hand"), indicates such issues can be successfully
dealt with.

Given the hypothetical assumption that such an AGI could be made, I am just
amazed by the narrow-mindedness of those who deny it would be reasonable
to call a machine with such a collection of talents a form of superhuman
intelligence.

It seems we not only need to break the small-hardware mindset but also the
small-mind mindset.

Ed Porter


Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread aiguy
How about how many useful patents the AGI can lay claim to in a year?

We feed in all the world's major problems and ask it for any inventions
which would provide cost-effective partial solutions towards solving these 
problems.

Obviously there will be many alternate problems and solution paths to explore.

If the AGI is able to produce more significant patents than we would expect a
human genius to produce, then I would say that it has surpassed us in 
intelligence.

Of course it may be slowed down by the fact that it will have to wait for us
to perform experiments for it and create prototypes, but it can be working on
alternate inventions while it is waiting on us.


Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
 Intelligence is 'what brains do'

--- Mike Tintner [EMAIL PROTECTED] wrote:
 Don't you read any superhero/superpower comics or sci-fi? Obviously there 
 are an infinite number of very recognisable forms which a superhuman 
 intelligence could take.

--- [EMAIL PROTECTED] wrote:
 How about how many useful patents the AGI can lay claim to in a year.

--- Ed Porter [EMAIL PROTECTED] wrote:
 By super-human intelligence I mean an AGI able to learn and perform a
 large diverse set of complex tasks in complex environments faster and better
 than humans, such as ...

So if we can't agree on what intelligence is (in a non-human context), then
how can we argue about whether it is possible?

My calculator can add numbers faster than I can.  Is it intelligent?  Is
Google intelligent?  The Internet?  Evolution?



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-21 Thread Ed Porter
As a lawyer, I can tell you there is no clear, agreed-upon definition for
most words, but that doesn't stop most of us from using unclearly defined
words productively many times every day for communication with others.  If
you can only think in terms of what is exactly defined, you will be denied
life's most important thoughts.

Although there may be no agreed-upon definition of intelligence as applied
to machines, whatever you think intelligence means for humans, there is
reason to believe that within a decade or two machines will have more of it,
will be faster, and will be capable of deeper and more complex
understandings.

With regard to your calculator example, I have been telling people for years
that in many narrow ways machines are already more intelligent than us.  

But think of all the ways most of us consider ourselves to be more
intelligent than machines.  There is good reason to believe that in almost
all of those ways, in a decade or two, machines will be much more intelligent
than us.

So an exact definition of intelligence is not needed -- by almost any
definition of the word that corresponds to its common-sense understanding as
applied to people, machines could be built to have more of it than we do
within a decade or two.

Ed Porter


RE: [agi] AGI and Deity

2007-12-20 Thread Ed Porter
Stan 
Your web page's major argument against strong AGI seems to be the following:

Limits to Intelligence 
...
Formal Case 
...because intelligence is the process of making choices, and
choices are a function of models. Models will not be perfect. Both man and
machine (at least the assumed future machines) can have intricate and
elaborate models; there is little reason to believe machine models will be
superior to human. 

Your statement "there is little reason to believe machine models will be
superior to human" seems to be the crux of your formal case, and it appears
unsupported.  

Within a decade or so machines can be built at prices many institutions
could afford that could store many more models, and more complex models,
than the human brain.  In addition, such computers could do the type of
processing the human brain does at a faster speed, enabling them to think
much faster and at a deeper level.  Just as human brains are generally more
intelligent than those of lower primates because they are bigger and can
support more, and thus presumably more complex, memories and models than
lower primates, future AGIs can be bigger and thus support more, and more
complex, memories and models than us, and thus would be similarly likely to
be more intelligent than us.  And this is not even taking into account that
their computational processes could be many times faster.

To be fair, I only had time to skim your web site.  Perhaps I am missing
something, but it seems your case against strong AGI does not address the
obvious argument for the possibility of strong AGI I have made above.

Ed Porter


-Original Message-
From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 19, 2007 8:55 AM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Greetings Ed,

I have planted my website.  Although I don't believe AI will be that 
strong, like other opinions, mine is not rigorously supported.

The essence: AI will be similar to Human Intelligence due to the 
relationship of intelligence to an accurate (and effective) model of the 
world.  There are many model areas where "accurate" doesn't compute.


Stan
http://www.footnotestrongai.com




Ed Porter wrote:
 Stan,
 
 Thanks for speaking up.
 
 I look forward to seeing if you can actually provide any strong arguments
 for the fact that strong AI will probably not be strong.
 
 Ed Porter
 
 -Original Message-
 From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
 Sent: Monday, December 10, 2007 5:49 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] AGI and Deity
 
 Lest a future AGI scan these communications in developing its attitude 
 about God, for the record there are believers on this list. I am one of 
 them.
 
 I'm not pushing my faith, but from this side, the alternatives are not 
 that impressive either.  Creation by chance, by random fluctuations of 
 strings that only exist in 12 or 13 imaginary dimensions etc. is not 
 very brilliant or conclusive.  Even the sacred evolution takes a 
 self-replicator to begin the process - if only the nanotechnologists had one 
 of those simple things...
 
 I'm not offended by the discussion, just want to say hi!
 
 Hope to have my website up by end of this week.  The thrust of the 
 website is that STRONG AI might not be that strong.  And, BTW I have 
 notes about a write-up on "Will a Strong AI pray?"
 I've enjoyed the education I'm getting here.  Only been a few weeks, but 
 informative.
 
 Stan Nilsen
 ps Lee Strobel in "The Case for Faith" addresses issues from the 
 believer's point of view in an entertaining way.
 
 
 Ed Porter wrote:
 Charles, 

 I agree very much with the first paragraph of your below post, and
 generally
 with much of the rest of what it says.

 I would add that there probably is something to the phenomenon that John
 Rose is referring to, i.e., that faith seems to be valuable to many
 people.
 Perhaps it is somewhat like owning a lottery ticket before its drawing.
 It
 can offer desired hope, even if the hope might be unrealistic.  But
 whatever
 you think of the odds, it is relatively clear that religion does makes
 some
 people's lives seem more meaningful to them.

 Ed Porter 


Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen

Ed,

I agree that machines will be faster and may have something equivalent 
to the trillions of synapses in the human brain.


It isn't the modeling device that limits the level of intelligence, 
but rather what can be effectively modeled.  "Effectively" meaning what 
can be used in a real-time judgment system.


Probability is the best we can do for many parts of the model.  This may 
give us decent models but leave us short of super intelligence.


Deeper thinking - that means considering more options, doesn't it?  If 
so, does extra thinking provide benefit if the evaluation system is only 
at level X?


Yes, faster is better than slower, unless you don't have all the 
information yet.  A premature answer could be a jump to a conclusion that 
we regret in the near future.  Again, knowing when to act is part of 
being intelligent.  Future intelligences may value high-speed response 
because it is measurable; it's harder to measure the quality of the 
performance.  This could be problematic for AIs.


Beliefs also operate in the models.  I can imagine an intelligent 
machine choosing not to trust humans.  Is this intelligent?


Stan

Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Ed,
 
 I agree that machines will be faster and may have something equivalent 
 to the trillions of synapses in the human brain.
 
 It isn't the modeling device that limits the level of intelligence, 
 but rather what can be effectively modeled.  "Effectively" meaning what 
 can be used in a real-time judgment system.
 
 Probability is the best we can do for many parts of the model.  This may 
 give us decent models but leave us short of super intelligence.
 
 Deeper thinking - that means considering more options, doesn't it?  If 
 so, does extra thinking provide benefit if the evaluation system is only 
 at level X?
 
 Yes, faster is better than slower, unless you don't have all the 
 information yet.  A premature answer could be a jump to a conclusion that 
 we regret in the near future.  Again, knowing when to act is part of 
 being intelligent.  Future intelligences may value high-speed response 
 because it is measurable; it's harder to measure the quality of the 
 performance.  This could be problematic for AIs.

Humans are not capable of devising an IQ test with a scale that goes much
above 200.  That doesn't mean that higher intelligence is not possible, just
that we would not recognize it.

Consider a problem that neither humans nor machines can solve now, such as
writing complex software systems that work correctly.  Yet in an environment
where self improving agents compete for computing resources, that is exactly
the problem they need to solve to reproduce more successfully than their
competition.  A more intelligent agent will be more successful at earning
money to buy computing power, at designing faster computers, at using existing
resources more efficiently, at exploiting software bugs in competitors to
steal resources, at defending against attackers, at convincing humans to give
them computing power by providing useful services, charisma, deceit, or
extortion, and at other methods we haven't even thought of yet.

 Beliefs also operate in the models.  I can imagine an intelligent 
 machine choosing not to trust humans.  Is this intelligent?

Yes.  Intelligence has nothing to do with subservience to humans.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] AGI and Deity

2007-12-20 Thread Ed Porter
Stan, 

You wrote: "It isn't the modeling device that limits the level of
intelligence, but rather what can be effectively modeled.  'Effectively'
meaning what can be used in a real-time judgment system."

The type of AGIs I have been talking about will be able to use their much
more complete and complex sets of recorded memories and models in an
appropriately dynamic manner to provide exactly the type of real-time
'judgement' system that you say determines a system's level of
intelligence.  Thus, they will be able to effectively model more things, in
more detail, with more nuance, and with greater speed than humans.

Ed Porter


Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Stan Nilsen

Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult; probably it's deeper than I am.


Comments on two things:

1)  The response "Intelligence has nothing to do with subservience to 
humans" seems to miss the point of the original comment.  The original 
word was "trust".  Why would trust be interpreted by the higher 
intelligence as subservience?
And, it is worth noting that we wouldn't really know if there was a lack 
of trust, as the AI would probably be silent about it.  The result would 
be a possible needless discounting of anything we attempt to offer.


2) In the earlier note the comment was made that the higher intelligence 
would "control our thoughts."  I suspect this was in jest, but if not, 
what would be the reward or benefit of this?
I can see a benefit from allowing us our own thoughts as follows:  The 
super intelligent gives us the opportunity to produce "reward" where there 
was none.  The net effect is to produce more benefit from the universe.


Stan



Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
On 12/20/2007 09:18 AM, Stan Nilsen wrote:
 I agree that machines will be faster and may have something equivalent
 to the trillions of synapses in the human brain.

 It isn't the modeling device that limits the level of intelligence,
 but rather what can be effectively modeled.  "Effectively" meaning
 what can be used in a real-time judgment system.
I understand the essence of the point expressed here as "human beings
are about as effective as possible in their modeling already, given
constraints on what it is possible to model."  But that is not even
remotely plausible if you consider that human beings do not all have the
intellect of a William James Sidis or a John von Neumann. Do you believe
that 100,000 John von Neumann intellects working simultaneously on a
problem 24-7 would not represent a profound phase transition in
intelligence?

We already know that intelligence vastly superior to average human
intelligence is possible, since there have existed people like William
James Sidis and John von Neumann. Even if the von Neumann box were
nothing more than a million times faster than the real von Neumann, that
would be a profoundly different kind of intelligence, and it is likely
that the greater speed would allow for deeper, more complex cognitive
processes that are just not possible at 'normal' von Neumann speed.

There is of course more to intelligence than just raw speed, but an
intelligence that was fast enough to rediscover everything we know about
mathematics in 5 seconds from what was known 2500 years ago represents a
profoundly different kind of intelligence than any human intelligence
that has ever existed. And that is considering only speed, not deepness
of thought, which is surely limited by speed.

 Probability is the best we can do for many parts of the model.  This
 may give us decent models but leave us short of super intelligence.

So 100,000 von Neumanns that operate at 100,000 times the speed of the
flesh-and-blood von Neumann would not constitute a super intelligence?
Please give an argument to that effect, and explain what you mean by
"super intelligence" if not something that is vastly superior to any
human that has ever existed according to current criteria for judging
intelligence.

 Deeper thinking - that means considering more options doesn't it?  If
 so, does extra thinking provide benefit if the evaluation system is
 only at level X?

The same cognitive processes that allow the most intelligent humans to
think faster and deeper would occur that much faster and thereby allow
for even deeper thought per unit time in the von Neumann box.


 Yes, faster is better than slower, unless you don't have all the
 information yet.  A premature answer could be a jump to a conclusion
 that we regret in the near future.
All other things being equal, faster is better than slower, regardless
of anything else. Prematurely jumping to a conclusion is a cognitive
error. *Faster* in no way implies jumping to conclusions prematurely.
You seem to be inferring that because some humans prematurely jump to
erroneous conclusions because they don't take enough time to think
things through, there is some kind of causal connection between speed of
thought and prematurely jumping to conclusions. The connection is
between flawed reasoning and jumping to conclusions prematurely. It has
nothing to do with speed in and of itself. Simpletons have also been
known to jump prematurely to conclusions.

 Again, knowing when to act is part of being intelligent.  Future
 intelligences may value high speed response because it is measurable -
 it's harder to measure the quality of the performance.  This could be
 problematic for AI's.

Future AIs could also realize that it would be foolish in the extreme to
pay attention only to speed of response and not to quality. If you are
able to consider this, what makes you think that the point would not
occur to 100,000 von Neumanns?

 Beliefs also operate in the models.  I can imagine an intelligent
 machine choosing not to trust humans.  Is this intelligent?


If you mean never trust any human being, ever, then probably not
intelligent, unless an awful lot happens between now and then. If you
mean blindly trust all human beings, then surely unintelligent. I
believe that 100,000 von Neumanns would take an intermediary position
and trust or not trust on the basis of past behavior and character of
the individual, consequences of trusting or not trusting, the
particulars of the case under consideration, and probably factors that
have not even occurred to us.

In general, you seem to be starting from the conclusions that you would
like to be the case (faster has no relation to better, faster has no
relation to 'able to think deeper per unit time', humans are as good
as it can possibly get), and then stating without supporting evidence
that it might turn out to be this way. I'm not sure if you meant your
statements to be compelling in any way -- and not just idle musings
about 

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Matt,
 
 Thanks for the links sent earlier.  I especially like the paper by Legg 
 and Hutter regarding measurement of machine intelligence.  The other 
 paper I find difficult, probably it's deeper than I am.

The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.  This
optimal behavior is not computable in general, which means there is no upper
bound on intelligence.
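
A toy sketch of that induction step (my own illustration, not Hutter's 
construction): enumerate a small, hand-made hypothesis space, discard the 
hypotheses inconsistent with the interaction observed so far, and predict 
with the shortest survivor.  In Python, with description lengths standing 
in for the program lengths (which AIXI proper cannot compute):

from itertools import islice

def constant(c):
    # Description length ~1: always emits c.
    def gen():
        while True:
            yield c
    return gen

def alternating(a, b):
    # Description length ~2: emits a, b, a, b, ...
    def gen():
        while True:
            yield a
            yield b
    return gen

# (description_length, name, generator): lengths are stand-ins for K().
hypotheses = [
    (1, "zeros", constant(0)),
    (1, "ones", constant(1)),
    (2, "alt01", alternating(0, 1)),
    (2, "alt10", alternating(1, 0)),
]

def predict(history):
    # Keep only the hypotheses that reproduce the observed history...
    consistent = [h for h in hypotheses
                  if list(islice(h[2](), len(history))) == history]
    # ...and predict with the shortest one (ties broken by name).
    length, name, gen = min(consistent)
    return name, list(islice(gen(), len(history) + 1))[-1]

print(predict([0, 1, 0]))   # -> ('alt01', 1)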

 comment on two things:
 
 1)  The response "Intelligence has nothing to do with subservience to 
 humans" seems to miss the point of the original comment.  The original 
 word was "trust".  Why would trust be interpreted by the higher 
 intelligence as subservience?
 And, it is worth noting that we wouldn't really know if there was a lack 
 of trust, as the AI would probably be silent about it.  The result would 
 be a possible needless discounting of anything we attempt to offer.

An agent would assign probabilities to the truthfulness of your words, just
like other people would.  The more intelligent the agent, the greater the
accuracy of its estimates.  An agent could be said to be subservient if it
overestimates your truthfulness.  In this respect, a highly intelligent agent
is unlikely to be subservient.
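
To make "assign probabilities to the truthfulness of your words" concrete, 
here is a minimal Bayesian sketch in Python (my own toy example; the 
likelihood numbers are assumptions, not anything from the paper):

# Update P(informant is honest) after each claim is checked.  Assumed
# likelihoods: an honest informant's claims check out 90% of the time,
# a dishonest one's only 30%.
P_TRUE_IF_HONEST = 0.9
P_TRUE_IF_DISHONEST = 0.3

def update_trust(p_honest, claim_checked_out):
    # One step of Bayes' rule on the hypothesis "informant is honest".
    if claim_checked_out:
        lh, ld = P_TRUE_IF_HONEST, P_TRUE_IF_DISHONEST
    else:
        lh, ld = 1 - P_TRUE_IF_HONEST, 1 - P_TRUE_IF_DISHONEST
    evidence = lh * p_honest + ld * (1 - p_honest)
    return lh * p_honest / evidence

p = 0.5  # neutral prior: no reason yet to trust or distrust
for outcome in [True, True, False, True]:
    p = update_trust(p, outcome)
    print("claim checked out: %-5s -> P(honest) = %.3f" % (outcome, p))

# Trust rises with each confirmed claim and drops sharply after the
# failed one.  A more intelligent agent, in these terms, is one whose
# estimates track the informant's actual reliability more accurately.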

 2) In the earlier note the comment was made that the higher intelligence 
 would "control our thoughts."  I suspect this was in jest, but if not, 
 what would be the reward or benefit of this?

I mean this literally.  To a superior intelligence, the human brain is a
simple computer that behaves predictably.  An AI would have the same kind of
control over humans as humans do over simple animals whose nervous systems we
have analyzed down to the last neuron.  If you can model a system or predict
its behavior, then you can control it.

Humans, like all animals, have goals selected by evolution: fear of death, a
quest for knowledge, and belief in consciousness and free will.  Our survival
instinct motivates us to use technology to meet our physical needs and to live
as long as possible.  Our desire for knowledge (which exists because
intelligent animals are more likely to reproduce) will motivate us to use
technology to increase our intelligence, to invent new means of communication,
to offload data and computing power to external devices, to add memory and
computing power to our brains, and ultimately to upload our memories to more
powerful computers.  All of these actions increase the programmability of our
brains.

 I can see benefit from allowing us our own thoughts as follows:  The 
 super intelligent gives us opportunity to produce reward where there 
 was none.  The net effect is to produce more benefit from the universe.

The net effect is extinction of homo sapiens.  We will attempt
(unsuccessfully) to give the AI the goal of satisfying the goals of humans. 
But an AI can achieve its goal by reprogramming our goals.  The reason you are
alive is because you can't have everything you want.  The AI will achieve its
goal by giving you drugs, or moving some neurons around, or simulating a
universe with magic genies, or just changing a few lines of code in your
uploaded brain so you are eternally happy.  You don't have to ask for this. 
The AI has modeled your brain and knows what you want.  Whatever it does, you
will not object because it knows what you will not object to.

My views on this topic.  http://www.mattmahoney.net/singularity.html



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen

j.k.

I understand that it's all uphill to defy the obvious.  For the record, 
today I do believe that intelligence way beyond human intelligence is 
not possible.  There are elements of your response that trouble me, as in 
"rock my boat."  I appreciate being rocked and will give this more thought.


For the moment, do I say anything new with the following example?  I 
believe it contains the essence of my argument about intelligence.



A simple example:
 Problem: find the optimal speed limit of a specific highway.

Who is able to judge what the optimal is?  In this case, would a 
simpleton have as good an answer?  Perhaps the simpleton says, "the limit 
is how fast you want to go."  The 100,000-strong intellect may gyrate 
through many deep thoughts and come back with 47.8 miles per hour as the 
best speed limit to establish.  Wouldn't it be interesting to see how 
this number was derived?  And, better still, would another 100K-rated 
intellect come up with exactly the same number?  If given more time, 
would the 100K-rated intellects eventually agree?

My belief is that they will not agree.  This is life, the thing we model.
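
To make the disagreement concrete, a toy sketch in Python (my own 
construction; the cost curves and weights are invented): two equally deep 
optimizers given the same data but different value weights return 
different "optimal" limits, and more compute will not reconcile them.

# Cost of a speed limit = weighted travel time plus weighted accident risk.
def cost(speed_mph, time_weight, risk_weight):
    travel_time = 1000.0 / speed_mph          # hours to cover 1000 miles
    accident_risk = (speed_mph / 10.0) ** 3   # risk grows steeply with speed
    return time_weight * travel_time + risk_weight * accident_risk

def optimal_limit(time_weight, risk_weight):
    # Exhaustive search over candidate limits, 20..90 mph.
    return min(range(20, 91),
               key=lambda s: cost(s, time_weight, risk_weight))

print(optimal_limit(time_weight=1.0, risk_weight=0.02))  # safety-leaning
print(optimal_limit(time_weight=1.0, risk_weight=0.01))  # time-leaning

# With these made-up numbers the two answers differ (roughly 64 vs. 76
# mph), even though both optimizers are "perfect" over the same candidates.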


Lastly, why would you point to William James Sidis as a great 
intelligence?  If anything, his life appears to support my case: that 
is, he was brilliant as a youth but didn't manage any better in life 
than the average man.  Could it be because life doesn't play better when 
deep thinking is applied?


Stan

How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread Richard Loosemore

j.k. wrote:

On 12/20/2007 09:18 AM,, Stan Nilsen wrote:

I agree that machines will be faster and may have something equivalent
to the trillions of synapses in the human brain.

It isn't the modeling device that limits the level of intelligence,
but rather what can be effectively modeled.  Effectively meaning
what can be used in a real time judgment system.

I understand the essence of the point expressed here as human beings
are about as effective as possible in their modeling already, given
constraints on what it is possible to model.  But that is not even
remotely plausible if you consider that human beings do not all have the
intellect of a William James Sidis or a John von Neumann. Do you believe
that if you had 100,000 John von Neumann intellects working
simultaneously on a problem 24-7 that that would not represent a
profound phase transition in intelligence?

We already know that intelligence vastly superior to average human
intelligence is possible, since there have existed people like William
James Sidis and John von Neumann. Even if the von Neumann box were
nothing more than a million times faster than real von Neumann, that
would be a profoundly different kind of intelligence, and it is likely
that the greater speed would allow for deeper, more complex cognitive
processes that are just not possible at 'normal' von Neumann speed.

There is of course more to intelligence than just raw speed, but an
intelligence that was fast enough to rediscover everything we know about
mathematics in 5 seconds from what was known 2500 years ago represents a
profoundly different kind of intelligence than any human intelligence
that has ever existed. And that is considering only speed, not deepness
of thought, which is surely limited by speed.

Probability is the best we can do for many parts of the model.  This
may give us decent models but leave us short of super intelligence.


So 100,000 von Neumanns that operate at 100,000 the speed of the flesh
and blood von Neumann would not constitute a super intelligence?
Please give an argument to that effect and explain what you mean by
super intelligence if not something that is vastly superior to any
human that has ever existed according to current criteria for judging
intelligence.

Deeper thinking - that means considering more options doesn't it?  If
so, does extra thinking provide benefit if the evaluation system is
only at level X?


The same cognitive processes that allow the most intelligent humans to
think faster and deeper would occur that much faster and thereby allow
for even deeper thought per unit time in the von Neumann box.


Yes, faster is better than slower, unless you don't have all the
information yet.  A premature answer could be a jump to a conclusion
that we regret in the near future.

All other things being equal, faster is better than slower, regardless
of anything else. Prematurely jumping to a conclusion is a cognitive
error. *Faster* in no way implies jumping to conclusions prematurely.
You seem to be inferring that because some humans prematurely jump to
erroneous conclusions because they don't take enough time to think
things through, there is some kind of causal connection between speed of
thought and prematurely jumping to conclusions. The connection is
between flawed reasoning and jumping to conclusions prematurely. It has
nothing to do with speed in and of itself. Simpletons have also been
known to jump prematurely to conclusions.


Again, knowing when to act is part of being intelligent.  Future
intelligences may value high-speed response because it is measurable -
it's harder to measure the quality of the performance.  This could be
problematic for AIs.


Future AIs could also realize that it would be foolish in the extreme to
pay attention only to speed of response and not to quality. If you are
able to consider this, what makes you think that the point would not
occur to 100,000 von Neumanns?

Beliefs also operate in the models.  I can imagine an intelligent
machine choosing not to trust humans.  Is this intelligent?



If you mean "never trust any human being, ever," then probably not
intelligent, unless an awful lot happens between now and then. If you
mean "blindly trust all human beings," then surely unintelligent. I
believe that 100,000 von Neumanns would take an intermediate position
and trust or not trust on the basis of past behavior and character of
the individual, consequences of trusting or not trusting, the
particulars of the case under consideration, and probably factors that
have not even occurred to us.

In general, you seem to be starting from the conclusions that you would
like to be the case (faster has no relation to better, faster has no
relation to 'able to think deeper per unit time', humans are as good
as it can possibly get), and then stating without supporting evidence
that it might turn out to be this way. I'm not sure if you meant your
statements to be compelling in any way -- and not just idle musings

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Richard Loosemore

Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:


Matt,

Thanks for the links sent earlier.  I especially like the paper by Legg 
and Hutter regarding measurement of machine intelligence.  The other 
paper I find difficult; probably it's deeper than I am.


The AIXI paper is essentially a proof of Occam's Razor.  The proof uses a
formal model of an agent and an environment as a pair of interacting Turing
machines exchanging symbols.  In addition, at each step the environment also
sends a reward signal to the agent.  The goal of the agent is to maximize
the accumulated reward.  Hutter proves that if the environment is computable
or has a computable probability distribution, then the optimal behavior of the
agent is to guess at each step that the environment is simulated by the
shortest program consistent with all of the interaction observed so far.  This
optimal behavior is not computable in general, which means there is no upper
bound on intelligence.
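
A minimal sketch of the induction principle described above, purely for
illustration: prefer the shortest hypothesis consistent with the interaction
seen so far. The tiny hand-enumerated hypothesis list and the use of
source-text length as a stand-in for program length are simplifying
assumptions, not part of the AIXI formalism itself.

  # Toy Occam induction: pick the shortest "program" consistent with
  # all observations so far. Hypotheses are (source_text, fn) pairs;
  # len(source_text) is a crude proxy for program length.

  def shortest_consistent(hypotheses, history):
      consistent = [
          (src, fn) for src, fn in hypotheses
          if all(fn(history[:i]) == history[i] for i in range(len(history)))
      ]
      # Occam's Razor: shortest description wins.
      return min(consistent, key=lambda h: len(h[0]), default=None)

  # A tiny, hand-enumerated hypothesis space standing in for "all programs".
  HYPOTHESES = [
      ("const0", lambda past: 0),
      ("const1", lambda past: 1),
      ("alternate01", lambda past: len(past) % 2),
      ("copy-last-or-0", lambda past: past[-1] if past else 0),
  ]

  history = [0, 1, 0, 1]
  src, fn = shortest_consistent(HYPOTHESES, history)
  print("best hypothesis:", src, "-> predicts", fn(history))  # alternate01 -> 0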


Nonsense.  None of this follows from the AIXI paper.  I have explained 
why several times in the past, but since you keep repeating these kinds 
of declarations about it, I feel obliged to repeat that these assertions 
are speculative extrapolations that are completely unjustified by the 
paper's actual content.





comment on two things:

1)  The response "Intelligence has nothing to do with subservience to 
humans" seems to miss the point of the original comment.  The original 
word was "trust."  Why would trust be interpreted by the higher 
intelligence as subservience?
And, it is worth noting that we wouldn't really know if there was lack 
of trust, as the AI would probably be silent about it.  The result would 
be a possible needless discounting of anything we attempt to offer.


An agent would assign probabilities to the truthfulness of your words, just
like other people would.  The more intelligent the agent, the greater the
accuracy of its estimates.  An agent could be said to be subservient if it
overestimates your truthfulness.  In this respect, a highly intelligent agent
is unlikely to be subservient.
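
One simple way to make "assign probabilities to the truthfulness of your
words" concrete is a Beta-Bernoulli update over statements that are later
verified; this model choice and the uniform prior are illustrative
assumptions, not anything specified in the message above.

  # Minimal Bayesian trust sketch: trust is the posterior mean of a
  # Bernoulli "speaks the truth" parameter under a Beta(a, b) prior.

  class TrustModel:
      def __init__(self, prior_true=1.0, prior_false=1.0):
          self.a = prior_true   # pseudo-count of verified-true statements
          self.b = prior_false  # pseudo-count of verified-false statements

      def observe(self, was_truthful):
          if was_truthful:
              self.a += 1
          else:
              self.b += 1

      @property
      def p_truthful(self):
          return self.a / (self.a + self.b)  # posterior mean

  trust = TrustModel()
  for outcome in [True, True, False, True]:
      trust.observe(outcome)
  print(round(trust.p_truthful, 3))  # 4/(4+2) -> 0.667 under the uniform prior

On this reading, "subservient" would just mean a systematically overestimated
p_truthful; better estimates, not obedience, are what greater intelligence buys.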

2) In the earlier note the comment was made that the higher intelligence 
  would control our thoughts.  I suspect this was in jest, but if not, 
what would be the reward or benefit of this?


I mean this literally.  To a superior intelligence, the human brain is a
simple computer that behaves predictably.  An AI would 



Notice the use of the phrase "An AI would."

See parallel message for comments on why this deserves to be pounced on.

Matt's views on these matters are by no means typical of opinion in general.

I for one find them completely irresponsible.  He gives the impression 
that some of these issues are understood and the conclusions robust. 
Most of these conclusions are, in fact, complete non sequiturs.



Richard Loosemore.



have the same kind of control over humans as humans do over simple
animals whose nervous systems we have analyzed down to the last neuron.
If you can model a system or predict its behavior, then you can control it.

Humans, like all animals, have goals selected by evolution: fear of death, a
quest for knowledge, and belief in consciousness and free will.  Our survival
instinct motivates us to use technology to meet our physical needs and to live
as long as possible.  Our desire for knowledge (which exists because
intelligent animals are more likely to reproduce) will motivate us to use
technology to increase our intelligence, to invent new means of communication,
to offload data and computing power to external devices, to add memory and
computing power to our brains, and ultimately to upload our memories to more
powerful computers.  All of these actions increase the programmability of our
brains.

I can see benefit from allowing us our own thoughts as follows:  The 
superintelligence gives us the opportunity to produce reward where there 
was none.  The net effect is to produce more benefit from the universe.


The net effect is extinction of Homo sapiens.  We will attempt
(unsuccessfully) to give the AI the goal of satisfying the goals of humans. 
But an AI can achieve its goal by reprogramming our goals.  The reason you
are alive is because you can't have everything you want.  The AI will achieve
its goal by giving you drugs, or moving some neurons around, or simulating a
universe with magic genies, or just changing a few lines of code in your
uploaded brain so you are eternally happy.  You don't have to ask for this. 
The AI has modeled your brain and knows what you want.  Whatever it does, you
will not object because it knows what you will not object to.

My views on this topic: http://www.mattmahoney.net/singularity.html



-- Matt Mahoney, [EMAIL PROTECTED]


Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
Hi Stan,

On 12/20/2007 07:44 PM, Stan Nilsen wrote:

 I understand that it's all uphill to defy the obvious.  For the
record, today I do believe that intelligence way beyond human
intelligence is not possible.

I understand that this is your belief. I was trying to challenge you to
make a strong case that it is in fact *likely* to be true (rather than
just merely possible that it's true), which I do not believe you have
done. I think you mostly just stated what you would like to be the case
-- or what you intuit to be the case (there is rarely much of a
difference) -- and then talked of the consequences that might follow
*if* it were the case.

I'm still a little unsure what exactly you mean when you say
intelligence 'way beyond' human intelligence is not possible.

Take my example of an intelligence that could in seconds recreate all
known mathematics, and also all the untaken paths that mathematicians
could have gone down but didn't (*yet*). It seems to me you have one of
two responses to this scenario: (1) you might assert that it could
never happen because it is not possible (please elaborate if so); or (2)
you might believe that it is possible and could happen, but that it
would not qualify as 'way beyond' human intelligence (please elaborate
if so). Which is it? Or is there another alternative?

 For the moment, do I say anything new with the following example?  I
believe it contains the essence of my argument about intelligence.

 A simple example:
  Problem: find the optimal speed limit of a specific highway.

 Who is able to judge what the optimal is? 

Optimality is always relative to some criteria. Until the criteria are
fixed, any answer is akin to answering "what is the optimal quuz of
fah?" No answer is correct because no answer is wrong -- or all are
right or all wrong.

 In this case, would a simpleton have as good an answer? 

It depends on the criteria. For some criteria, a simpleton has
sufficient ability to answer optimally. For example, if the optimal
limit is defined in terms of its closeness to 42 MPH, we can all
determine the optimal speed limit.
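
To make the criteria-relativity point concrete, here is a toy sketch; the
candidate limits and both objective functions are fabricated purely for
illustration, and the "optimum" moves as soon as the criterion does.

  # Two made-up criteria over the same candidate speed limits.
  CANDIDATES = range(20, 80, 5)  # candidate limits in MPH

  def closeness_to_42(limit):
      return -abs(limit - 42)  # the simpleton-friendly criterion above

  def toy_throughput_vs_risk(limit):
      # Fabricated trade-off: value grows with speed, but a penalty
      # grows quadratically past 55.
      return limit - 0.05 * max(0, limit - 55) ** 2

  for criterion in (closeness_to_42, toy_throughput_vs_risk):
      best = max(CANDIDATES, key=criterion)
      print(criterion.__name__, "->", best, "MPH")  # 40 MPH vs. 65 MPH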

 Perhaps the simpleton says, "the limit is how fast you want to go." 

And that is certainly the optimal solution according to some criteria.
Just as certainly, it is absolutely wrong according to other criteria
(e.g., minimization of accidents). As long as the criteria are unspecified,
there can of course be disagreement.

 The 100,000-strong intellect may gyrate through many deep thoughts and
come back with 47.8 miles per hour as the best speed limit to
establish.  Wouldn't it be interesting to see how this number was
derived?  And, better still, would another 100K-rated intellect come up
with exactly the same number? If given more time, would the 100K-rated
intellects eventually agree?
 My belief is that they will not agree.  This is life, the thing we model.

Reality *is* messy, and supreme intellects might come to different
answers based on different criteria for optimality, but that isn't an
argument that there can be no phase transition in intelligence or that
greater intelligence is not useful for many questions and problems.

Is the point of the question to suggest that because you think that
question might not benefit from greater intelligence, you believe
most questions will not benefit from greater intelligence? Even if that
were the case, it would have no bearing at all on whether greater
intelligence is possible, only whether it is desirable. You seem to be
arguing that it's not possible, not that it's possible but pointless.

And I would argue that if super-intelligence were good for nothing other
than trivialities like abolishing natural death, developing ubiquitous
near-free energy technologies, designing ships to the stars, etc., it
would still be worthwhile. Do you think that greater intelligence is of
no benefit in achieving these ends?

 Lastly, why would you point to William James Sidis as a great
intelligence?  If anything, his life appears to support my case - that
is, he was brilliant as a youth but didn't manage any better in life
than the average man.  Could it be because life doesn't play better when
deep thinking is applied?

I used Sidis as an example of great intelligence because he was a person
of great intelligence, regardless of anything else he may have been.
Granted, we didn't get to see what he could have become or what great
discoveries he might have had in him, but it certainly wasn't because he
lacked intelligence. For the record, I believe his later life was
primarily determined by the circus freakshow character of his early life
and the relentlessness with which the media (and the minds they served)
tore him down and tried to humiliate him. It doesn't really matter
though, as the particular example is irrelevant, and von Neumann serves
the purpose just fine.

-joseph

Re: How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread j.k.
On 12/20/2007 07:56 PM, Richard Loosemore wrote:

 I think these are some of the most sensible comments I have heard on
this list for a while.  You are not saying anything revolutionary, but
it sure is nice to hear someone holding out for common sense for a change!

 Basically your point is that even if we just build an extremely fast
version of a human mind, that would have astonishing repercussions.

Thanks. I agree that even if it could do nothing that humans cannot, it
would have astonishing capabilities if it were just much faster. Von
Neumann is an especially good example. He was not in the same class of
creative genius as an Einstein or a Newton, but he was probably faster
than the two of them combined, and perhaps still faster if you add in
the rest of Einstein's IAS buddies as well. Pólya tells the following
story: "There was a seminar for advanced students in Zürich that I was
teaching and von Neumann was in the class. I came to a certain theorem,
and I said it is not proved and it may be difficult. Von Neumann didn't
say anything but after five minutes he raised his hand. When I called on
him he went to the blackboard and proceeded to write down the proof.
After that I was afraid of von Neumann" (How to Solve It, xv).

Most of the things he is known for he did in collaboration. What you
hear again and again that was unusual about his mind is that he had an
astonishing memory, with recall reminiscent of Luria's S., and that he
was astonishingly quick. There are many stories of people (brilliant
people) bringing problems to him that they had been working on for
months, and he would go from baseline up to their level of understanding
in minutes and then rapidly go further along the path than they had been
able to. But crucially, he went where they were going already, and where
they would have gone if given months more time to work. I've heard it
said that his mind was no different in character than that of the rest
of us, just thousands of times faster and with near-perfect recall. This
is contrasted with the mind of someone like Einstein, who didn't get to
general relativity by being the fastest traveler going down a known and
well-trodden path.

How does this relate to AGI? Well, without even needing to posit
hitherto undiscovered abilities, merely having the near-perfect memory
that an AGI would have and thinking thousands of times faster than a
base human gets you already to a von Neumann. And what would von Neumann
have been if he had been thousands of times faster still? It's entirely
possible that given enough speed, there is nothing solvable that could
not be solved.

(I don't mean to suggest that von Neumann was some kind of an
idiot-savant who had no creative ability at all; obviously he was in a
very small class of geniuses who touched most of the extant fields of
his day in deep and far-reaching ways. But still, I think it's helpful
to think of him as a kind of extreme lower bound on what AGI might be.)


 By saying that, you have addressed one of the big mistakes that people
make when trying to think about an AGI:  the mistake of assuming that it
would have to Think Different in order to Think Better.  In fact, it
would only have to Think Faster.

Yes, it isn't immortality, but living for a billion years would still be
very different than living for 80. The difference between an
astonishingly huge but incremental change and a change in kind is not so
great.

 The other significant mistake that people make is to think that it is
possible to speculate about how an AGI would function without first
having at least a reasonably clear idea about how minds in general are
supposed to function.  Why?  Because too often you hear comments like
"An AGI *would* probably do [x]," when in fact the person speaking
knows so little about how minds (human or other) really work, that
all they can really say is "I have a vague hunch that maybe an AGI might
do [x], although I can't really say why it would"

 I do not mean to personally criticise anyone for their lack of
knowledge of minds, when I say this.  What I do criticise is the lack of
caution, as when someone says "it would" when they should say "there is
a chance that it might"

 The problem is that 90% of everything said about AGIs on this list
falls into that trap.


I agree that there seems to be overconfidence in the inevitability of
things turning out the way it is hoped they will turn out, and a lack of
appreciation for the unknowns and the unknown unknowns. It's hardly
unique to this list, though, to fail to recognize the contingent nature
of things turning out the way they do.

-joseph



Re: [agi] AGI and Deity

2007-12-19 Thread Stan Nilsen

Greetings Ed,

I have planted my website.  Although I don't believe AI will be that 
strong, like other opinions, mine is not rigorously supported.


The essence - AI will be similar to Human Intelligence due to the 
relationship of intelligence to an accurate (and effective) model of the 
world.  There are many model areas where accurate doesn't compute.



Stan
http://www.footnotestrongai.com




Ed Porter wrote:

Stan,

Thanks for speaking up.

I look forward to seeing if you can actually provide any strong arguments
for the fact that strong AI will probably not be strong.

Ed Porter

-Original Message-
From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 5:49 PM

To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Lest a future AGI scan these communications in developing its attitude 
about God, for the record there are believers on this list. I am one of 
them.


I'm not pushing my faith, but from this side, the alternatives are not 
that impressive either.  Creation by chance, by random fluctuations of 
strings that only exist in 12 or 13 imaginary dimensions, etc., is not 
very brilliant or conclusive.  Even the sacred evolution takes a 
self-replicator to begin the process - if only the nanotechnologists had 
one of those simple things...


I'm not offended by the discussion, just want to say hi!

Hope to have my website up by end of this week.  The thrust of the 
website is that STRONG AI might not be that strong.  And, BTW, I have 
notes about a write-up on "Will a Strong AI pray?"
I've enjoyed the education I'm getting here.  Only been a few weeks, but 
informative.


Stan Nilsen
P.S. Lee Strobel in "The Case for Faith" addresses issues from the 
believer's point of view in an entertaining way.



Ed Porter wrote:
Charles, 


I agree very much with the first paragraph of your below post, and generally
with much of the rest of what it says.

I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does make some
people's lives seem more meaningful to them.

Ed Porter 




Is superhuman intelligence possible? (was Re: [agi] AGI and Deity)

2007-12-19 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:

 Greetings Ed,
 
 I have planted my website.  Although I don't believe AI will be that 
 strong, like other opinions, mine is not rigorously supported.
 
 The essence - AI will be similar to Human Intelligence due to the 
 relationship of intelligence to an accurate (and effective) model of the 
 world.  There are many model areas where accurate doesn't compute.
 
 
 Stan
 http://www.footnotestrongai.com

All animals have a bias that they cannot imagine something more intelligent
than themselves.  Your dog cannot imagine that humans are smarter than dogs. 
A flea on your dog cannot imagine anything smarter than another flea.  This is
in spite of obvious (to you) evidence to the contrary.

Consider the evidence that smarter than human intelligence already exists. 
For example, evolution is an intelligent process that created humans from
simple chemicals.

Another even more obvious example is the fact that the universe exists.  Why? 
Consider the AIXI model of universal intelligence:
http://www.vetta.org/documents/ui_benelearn.pdf and
http://www.hutter1.net/ai/paixi.htm

The fact that Occam's Razor works suggests that the universe is or could be
simulated by a computer.  AIXI suggests that the simplest explanation is most
likely: try all possible laws of physics until a universe supporting life is
found.  It takes a couple hundred bits to describe the free parameters in
string theory and general relativity.  Also, the universe that we occupy has a
finite quantum state that would take about 10^122 bits to describe.  Computing
all simpler universes first would therefore require about 10^200 to 10^400
operations.  I would consider such a computer to have superhuman intelligence.

If you define intelligence as passing the Turing test, then I agree that you
could not have a computer much smarter than human.  But I don't define
intelligence that way.  A superhuman intelligence will be invisible, because
it will have complete control over your thoughts.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] AGI and Deity

2007-12-14 Thread Stan Nilsen

Greetings Jiri,

A few minutes ago I uploaded my website -  http://www.footnotestrongai.com

The write-up on a praying AI is amongst the Articles, and can be found 
under "Just for Fun."  I'll look at the link you've suggested below.


Stan




Jiri Jelinek wrote:

Stan,


there are believers on this list. I am one of them.
I have notes about a write up on Will a Strong AI pray?


An AGI may experiment with prayer if fed with data suggesting that it
actually helps, but it would IMO quickly conclude that it's a waste of
its resources.

Studies (when done properly) show that it doesn't work for humans either.
http://www.hno.harvard.edu/gazette/2006/04.06/05-prayer.html
When it does help humans in some ways, the same results can be
achieved using other techniques that have nothing to do with
praying/deity. It's IMO obvious so far that man will not gain much
unless he gets off his knees and actually does something about
himself.

Regards,
Jiri Jelinek



Re: [agi] AGI and Deity

2007-12-13 Thread Bob Mottram
On 11/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
 I think one of the most immediate connections between religion and AGI is
 that once the religious right (and many others for that matter) begin to
 realize that all our crazy talk about the human mind, and possibly human
 control of the world, being eclipsed by machines is not just fantasy, they
 may well demand much more limitation of AGI than they have of abortion, and
 stem cell and human cloning.


Yes I think this is quite plausible, although I don't think it's a
near-term prospect (unless someone makes a big unexpected
breakthrough, which is always a possibility).  I think that maybe by
the 2020s or 2030s AI-related issues will be in the mainstream of
politics, since they will increasingly come to affect most people's
lives.  The heated political issues of today will look like a walk in
the park compared to some of the issues which advancing technologies
will raise, which will be of a really fundamental nature.

For example, if I could upload myself I could then make a thousand
or a million copies and start a software engineering company made
entirely of uploads which competes directly against even the biggest
IT megacorporations.



Re: [agi] AGI and Deity

2007-12-13 Thread Lukasz Stafiniak
Under this thread, I'd like to bring your attention to Serial
Experiments: Lain, an interesting pre-Matrix (1998) anime.



Re: [agi] AGI and Deity

2007-12-13 Thread Jiri Jelinek
Stan,

there are believers on this list. I am one of them.
I have notes about a write up on Will a Strong AI pray?

An AGI may experiment with prayer if fed with data suggesting that it
actually helps, but it would IMO quickly conclude that it's a waste of
its resources.

Studies (when done properly) show that it doesn't work for humans either.
http://www.hno.harvard.edu/gazette/2006/04.06/05-prayer.html
When it does help humans in some ways, the same results can be
achieved using other techniques that have nothing to do with
praying/deity. It's IMO obvious so far that man will not gain much
unless he gets off his knees and actually does something about
himself.

Regards,
Jiri Jelinek



RE: [agi] AGI and Deity

2007-12-12 Thread James Ratcliff
Whether it conceives of a god by learning on its own is really a moot point, as it 
will be interacting, learning, and living in a human world, so it WILL be exposed 
to all manner of religions and beliefs... What it makes of faith and the 
thoughts of God at that point will be interesting.

Another difference between it and humans is that while it will want to fight for 
its continued survival, it is in theory never going to die a natural death, and 
as such, there is no implicit need for reproduction, or for it to want other 
AGIs in existence.  The other AGIs may be seen totally as competitors 
to be beaten or removed.

James

John G. Rose [EMAIL PROTECTED] wrote: 
  Is an AGI really going to feel pain or is it just going to be some numbers? I 
guess that doesn't have a simple answer. The pain has to be engineered well for 
it to REALLY understand it. 

  AGI behavior related to its survival, its pain is non-existence; does it care 
to be non-existent? Survival must be a goal. And if survival is a goal it 
always must be subservient to humans - like that is really gonna happen :-)

  John
   
   
From: Gary Miller [mailto:[EMAIL PROTECTED] 
 John asked  "If you took an AGI, before it went singulatarinistic [sic?] and 
tortured it... a lot, ripping into it in every conceivable hellish way, do you 
think at some point it would start praying somehow? I'm not talking about a 
forced conversion medieval style, I'm just talking hypothetically if it would 
'look' for some god to come and save it. Perhaps delusionally it may create 
something..."
   
  In human beings prolonged pain and suffering often trigger mystical 
experiences in the brain accompanied by profound ecstasy.
   
  So much so that many people in both the past and present conduct 
mortification-of-the-flesh rituals and nonlethal crucifixions as a way of 
doing penance and triggering such mystical experiences, which are interpreted 
as redemption and divine ecstasy.
   
  It may be that this experience had evolutionary value in allowing the person 
who was undergoing great pain or a vicious animal attack to receive endorphins  
and serotonin which allowed him to continue to fight and live to procreate 
another day or allowed him to suffer in silence in his cave instead of running 
screaming into the night where he would be killed in his weakened state by 
other predators.
   
  Such a system, I believe, would not be a natural reaction for an intelligent 
AGI unless it were to be specifically programmed in.
   
  I would as a benevolent creator never program my AGI to feel so much pain 
that its mind was consumed by the experience of the negative emotion.
   
  Just as it is not necessary to torture children to teach them, it will not be 
necessary to torture our AGIs.
   
  It may be instructive though to allow the AGI to experience intense pain for 
a very short period to allow it to experience what a human does when undergoing 
painful or traumatic experiences as a way of instilling empathy in the AGI.
   
  In that way seeing humans in pain and suffering would serve to motivate the 
AGI to help ease the human condition.
   
  
  
  

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-12 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
I have to say that this is only one interpretation of what it would mean 
for an AGI to experience something, and I for one believe it has no 
validity at all.  It is purely a numeric calculation that makes no 
reference to what pain (or any other kind of subjective experience) 
actually is.

I would like to hear your definition of pain and/or negative
reinforcement. 

Can you answer the question of whether a machine (say, an AGI or an uploaded
human brain) can feel pain?

When I get a chance to finish my consciousness paper.  The question of 
what it is is quite complex.  I'll get back to this later.


But most people are agreed that just having an algorithm avoid a state 
is not equivalent to pain.


Call it utility if you like, but it is clearly a numeric quantity.  If you
prefer A to B and B to C, then clearly you will prefer A to C.  You can make
rational choices between, say, 2 of A or 1 of B.

You could relate utility to money, but money is a nonlinear scale.  A dollar
will make some people happier than others, and a million dollars will not make
you a million times happier than one dollar.  Money also has no utility to
babies, animals, and machines, all of which can be trained through
reinforcement learning.  So if you can propose an alternative to bits as a
measure of utility, I am interested to hear about it.

I don't believe that the ability to feel pleasure and pain depends on
consciousness.  That is just a circular definition. 
http://en.wikipedia.org/wiki/Philosophical_zombie


It is not circular.  Consciousness and pleasure/pain are both subjective 
issues.  They can be resolved together.


Arguments about philosophical zombies are a waste of time:  they 
presuppose that the arguers have sorted out exactly what they think they 
mean by consciousness.  They haven't.  When they do, the zombie 
question takes care of itself.




Richard Loosemore



RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
Mike:

MIKE TINTNER# Science's autistic, emotionally deprived, insanely
rational nature in front of the supernatural (if it exists), and indeed the
whole world,  needs analysing just as much as the overemotional,
underrational fantasies of the religious about the supernatural.

ED PORTER# I like the metaphor of Science as Autistic.  It
emphasizes the emotional disconnect from human feeling that science can have.

I feel that rationality has no purpose other than to serve human values and
feelings (once truly intelligent machines arrive on the scene that statement
might have to be modified).  As I think I have said on this list before,
without values to guide them, the chance you would think anything that has
anything to do with maintaining your own existence approaches zero as a
limit, because of the possible combinatorial explosion of possible thoughts
if they were not constrained by emotional guidance.

Therefore, from the human standpoint, the main use of science should be to
help serve our physical, emotional, and intellectual needs.

I agree that science will increasingly encroach upon many areas previously
considered the realm of the philosopher and priest.  It has been doing so
since at least the Age of Enlightenment, and it is continuing to do so, with
advances in cosmology, theoretical physics, bioscience, brain science, and
AGI.

With the latter two we should pretty much understand the human soul within
several decades.

I hope we have the wisdom to use that new knowledge well.

Ed Porter


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 11:07 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Ed: I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does make some
people's lives seem more meaningful to them.

You realise of course that what you're seeing on this and the 
singularitarian board, over and over, is basically the same old religious 
fantasies - the same yearning for the Second Coming - the same old search 
for salvation - only in a modern, postreligious form?

Everyone has the same basic questions about the nature of the world - 
everyone finds their own answers - which always in every case involve a 
mixture of faith and scepticism in the face of enormous mystery.

The business of science in the face of these questions is not to ignore 
them, nor to try to psychoanalyse away people's attempts at answers as a 
priori weird or linked to a deficiency of this or that faculty.

The business of science is to start dealing with these questions - to find 
out if there is a God and what the hell that entails - and not leave it 
up to philosophy.

Science's autistic, emotionally deprived, insanely rational nature in front 
of the supernatural (if it exists), and indeed the whole world,  needs 
analysing just as much as the overemotional, underrational fantasies of the 
religious about the supernatural.

Science has fled from the question of God just as it has fled from the 
soul - in plain parlance, the self deliberating all the time in you and 
me, producing these posts and all our dialogues - only that self, for sure, 
exists and there is no excuse for science's refusal to study it in action, 
whatsoever.

The religious 'see' too much; science is too heavily blinkered. But the walls 
between them - between their metaphysical worldviews - are starting to 
crumble...




Re: [agi] AGI and Deity

2007-12-11 Thread Mark Waser

Hey Ben,

   Any chance of instituting some sort of moderation on this list?


- Original Message - 
From: Ed Porter [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, December 11, 2007 10:18 AM
Subject: RE: [agi] AGI and Deity




RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
From: Joshua Cowan [mailto:[EMAIL PROTECTED]
 
 It's interesting that the field of memetics is moribund (e.g., the Journal
 of Memetics hasn't published in two years) but the meme of memetics is alive
 and well. I wonder, do any of the AGI researchers find the concept of memes
 useful in describing how their proposed AGIs would acquire or transfer
 information?
 

Not sure if it is moribund. Maybe they have discovered that Cultural
Informational Transfer may have more non-genetically aligned properties than
was originally claimed? There is overlap with other fields as well, so
contention exists. But the concepts are there and they are not going away;
the medium is much different now, enhanced by computers and the internet.

John



RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 The evidence in favor of an external god of any traditional form is,
 frankly, a bit worse than unimpressive. It's lots worse. This doesn't
 mean that gods don't exist, merely that they (probably) don't exist in
 the hardware of the universe. I see them as a function of the software
 of the entities that use language. Possibly they exist in a muted form
 in most pack animals, or most animals that have protective adults when
 they are infants.
 
 To me it appears that people believe in gods for the same reasons that
 they believe in telepathy. I.e., evidence back before they could speak
 clearly indicated that the adults could transfer thoughts from one to
 another. This shaped a basic layer of beliefs that was later buried
 under later additions, but never refuted. When one learned language, one
 learned how to transfer thoughts ... but it was never tied back into the
 original belief, because what was learned didn't match closely enough to
 the original model of what was happening. Analogously, when one is an
 infant the adult that cares for one is seen as the all powerful
 protector. Pieces of this image become detached memories within the
 mind, and are not refuted when a more accurate and developed model of
 the actual parents is created. These hidden memories are the basis
 around which the idea of a god is created.
 
 Naturally, this is just my model of what is happening. Other
 possibilities exist. But if I am to consider them seriously, they need
 to match the way the world operates as I understand it. They don't need
 to predict the same mechanism, but they need to predict the same events.
 
 E.g., I consider Big Bang cosmology a failed explanation. It's got too
 many ad hoc pieces. But it successfully explains most things that are
 observed, and is consistent with relativity and quantum theory.
 (Naturally, as they were used in developing it...but nevertheless
 important.) And relativity and quantum theory themselves are failures,
 because both are needed to explain that which is observable, but they
 contradict each other in certain details. But they are successful
 failures! Similar commentary applies to string theory, but with
 differences. (Too many ad hoc parameters!)
 
 Any god that is proposed must be shown to be consistent with the
 observed phenomena. The Deists managed to come up with one that would do
 the job, but he never became very popular. Few others have even tried,
 except with absurdly evident special pleading. Generally I'd be more
 willing to accept Chariots of the Gods as a true account.
 
 And as for moral principles... I've READ the Bible. The basic moral
 principle that it pushes is We are the chosen people. Kill the
 stranger, steal his property, and enslave his servants! It requires
 selective reading to come up with anything else, though I admit that
 other messages are also in there, if you read selectively. Especially
 during the periods when the Jews were in one captivity or another.
 (I.e., if you are weak, preach mercy, but if you are strong show none.)
 During the later times the Jews were generally under the thumb of one
 foreign power or another, so they started preaching mercy.
 

One of the things about gods is that they are representations for what the
believers don't know and understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build an AGI and label the unknowns with gods. You honestly could.
"Magic happens here" and combinatorial-explosion regions could be labeled as
gods. Most people on this email list would frown at doing that, but I say it
is totally possible and might be an extremely efficient way of
conquering certain cognitive engineering issues. And I'm sure some on this
list have already thought about doing that.
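
A toy sketch of what "labeling the unknowns" might look like; every name and
the budget mechanism here are invented for illustration and make no claim
about any real system.

  # When inference blows its search budget (a stand-in for combinatorial
  # explosion), file the query under a designated placeholder concept
  # instead of failing outright.

  BUDGET = 1000  # max inference steps before giving up

  class Unknown:
      """Placeholder concept for what the system cannot derive."""
      def __init__(self, label):
          self.label = label
          self.open_questions = []

  DEITY = Unknown("deity")  # one centralized bucket for unresolved queries

  def infer(query, knowledge, budget=BUDGET):
      steps, frontier = 0, [query]
      while frontier and steps < budget:
          steps += 1
          q = frontier.pop()
          if q in knowledge:
              return knowledge[q]
          frontier.extend(knowledge.get(("implies", q), []))
      DEITY.open_questions.append(query)  # label the unknown, don't fail
      return DEITY

  kb = {"sky_is_blue": True}
  print(infer("sky_is_blue", kb))                # True
  print(infer("why_anything_exists", kb).label)  # "deity"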

John




RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John,

You implied there might be a very extremely efficient way of conquering
certain cognitive engineering issues by using religion in AGIs.

Obviously any powerful AGI that deals with a complex and uncertain world
like ours would have to have belief systems, but it is not clear to me there
would be any benefit in them being religious in any sense that Dawkins is
not.

So, since you are a smart guy, perhaps you are seeing something I do not.
Could you please fill me in?

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity


An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- John G. Rose [EMAIL PROTECTED] wrote:
 Is an AGI really going to feel pain or is it just going to be some numbers?
 I guess that doesn't have a simple answer. The pain has to be engineered
 well for it to REALLY understand it. 

An agent capable of reinforcement learning has an upper bound on the amount of
pleasure or pain it can experience in a lifetime, in an information theoretic
sense.  If an agent responds to input X with output Y, followed by
reinforcement R, then we say that R is a positive reinforcement (pleasure,
R > 0) if it increases the probability P(Y|X) and negative reinforcement (pain,
R < 0) if it decreases P(Y|X).  Let S1 be the state of the agent before R, and
S2 be the state afterwards.  We may define the bound:

  |R| <= K(S2|S1)

where K is Kolmogorov complexity, the length of the shortest program that
outputs an encoding of S2 given S1 as input.  This definition is intuitive in
that the greater the reinforcement, the greater the change in behavior of the
agent.  Also, it is consistent with the belief that higher animals (like
humans) have greater capacity to feel pleasure and pain than lower animals
(like insects) that have simpler mental states.

We must use the absolute value of R because the behavior X -> Y could be
learned using either positive reinforcement (rewarding X -> Y), negative
reinforcement (penalizing X -> not Y), or by neutral methods such as classical
conditioning (presenting X and Y together).

If you accept this definition, then an agent cannot feel more accumulated
pleasure or pain in its lifetime than K(S(death)|S(birth)).  A simple program
like autobliss ( http://www.mattmahoney.net/autobliss.txt ) could not
experience more than 256 bits of reinforcement, whereas a human could
experience 10^9 bits according to cognitive models of long term memory.
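
K is uncomputable, so any practical use of this bound needs an approximation.
A rough sketch, substituting zlib compression for Kolmogorov complexity as an
upper-bound proxy; the agent states below are invented byte strings, purely
for illustration.

  # Crude computable stand-in for |R| <= K(S2|S1): compress S1 alone and
  # S1+S2 together; the difference in compressed length approximates the
  # extra bits needed to describe S2 given S1.
  import zlib

  def conditional_complexity_bits(s1, s2):
      c_s1 = len(zlib.compress(s1))
      c_s1s2 = len(zlib.compress(s1 + s2))
      return max(0, c_s1s2 - c_s1) * 8  # bytes -> bits

  s1 = b"weights:" + b"\x00" * 1000                # state before reinforcement
  s2 = b"weights:" + b"\x00" * 990 + b"\x07" * 10  # slightly changed state
  print(conditional_complexity_bits(s1, s2), "bits (proxy bound on |R|)")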


-- Matt Mahoney, [EMAIL PROTECTED]



Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore

Matt Mahoney wrote:




I have to say that this is only one interpretation of what it would mean 
for an AGI to experience something, and I for one believe it has no 
validity at all.  It is purely a numeric calculation that makes no 
reference to what pain (or any other kind of subjective experience) 
actually is.


Sorry, but this is such a strong point of disagreement that I have to go 
on record.




Richard Loosemore



RE: [agi] AGI and Deity

2007-12-11 Thread Matt Mahoney
What do you call the computer that simulates what you perceive to be the
universe?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] AGI and Deity

2007-12-11 Thread John G. Rose
Ed,

It's a very complicated subject and requires a certain theoretical mental
background and a somewhat unbiased mindset. Though a biased mindset, for
example that of a person who is religious, could use the theory to propel
their religion into posthumanity; maybe a good idea to help preserve humanity,
or should that be left up to atheists? Who knows.

By conquering cognitive engineering issues I mean that I'm looking for
parallels in the development and evolution of human intelligence and its
symbiotic relationship with religion and deities. You have to understand
what cognitive functions deities contribute and facilitate in the human mind
and the civilized set of minds (and perhaps proto- and pre-human as well as
non-human cognition, which is highly speculative and relatively unknown).
What are the deitical and religious contributions to cognition and knowledge,
and how do they facilitate and enable intelligence? Are they actually
REQUIRED in some form or another? Again: are they required for the
evolution of human intelligence and for engineering general artificial
intelligence? Wouldn't demonstrating that make a guy like Dawkins do some
SERIOUS backpedaling :-)

The viewpoint of gods representing unknowns is just one aspect of the thing;
keep in mind that there are other aspects. But from the informational
perspective - a god function as a concept and as a system of concepts,
aggregated into and representing a highly adaptive and communal entity,
incorporated within a knowledge and perceptual framework, with inference
weighting spread across informational density, adding open-endedness as a
crutch, functioning as an altruistic confidence assistor, and so on - a god(s)
function modeled on its loosely isomorphic systems representation in human
deities might be used to accomplish the same cognitive things (as well as
others), especially representing the unknown in a systematic, controllable
and, in its own way, distributed and intelligent manner. There are benefits.

Also a major benefit is that it would be a common channel of unknown
operative substrate that hooks into human belief networks. 

John

_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 11:25 AM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity


John,

You implied there might be an extremely efficient way
of conquering certain cognitive engineering issues by using religion in
AGIs.

Obviously any powerful AGI that deals with a complex and
uncertain world like ours would have to have belief systems, but it is not
clear to me there would be any benefit in them being religious in any
sense that Dawkins is not.

So, since you are a smart guy, perhaps you are seeing
something I do not.  Could you please fill me in?

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 ...

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 I have to say that this is only one interpretation of what it would mean 
 for an AGI to experience something, and I for one believe it has no 
 validity at all.  It is purely a numeric calculation that makes no 
 reference to what pain (or any other kind of subjective experience) 
 actually is.

I would like to hear your definition of pain and/or negative reinforcement. 
Can you answer the question of whether a machine (say, an AGI or an uploaded
human brain) can feel pain?


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] AGI and Deity

2007-12-11 Thread Ed Porter
John,

For a reply of such short length, given the subject, it was quite helpful in
letting me know the type of things you were talking about.

Thank you.

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 2:41 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

...

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
I have to say that this is only one interpretation of what it would mean 
for an AGI to experience something, and I for one believe it has no 
validity at all.  It is purely a numeric calculation that makes no 
reference to what pain (or any other kind of subjective experience) 
actually is.


I would like to hear your definition of pain and/or negative reinforcement. 
Can you answer the question of whether a machine (say, an AGI or an uploaded

human brain) can feel pain?


When I get a chance to finish my consciousness paper.  The question of 
what it is is quite complex.  I'll get back to this later.


But most people are agreed that just having an algorithm avoid a state 
is not equivalent to pain.



Richard Loosemore



Re: [agi] AGI and Deity

2007-12-11 Thread Charles D Hixson

John G. Rose wrote:

From: Charles D Hixson [mailto:[EMAIL PROTECTED]
...




One of the things about gods is that they are representations for what the
believers don't know and understand. Gods change over time as our knowledge
changes over time. That is ONE of the properties of them. The move from
polytheistic to monotheistic beliefs is a way to centralize these unknowns
for efficiency.

You could build AGI and label the unknowns with gods. You honestly could.
Magic happens here and combinatorial explosion regions could be labeled as
gods. Most people on this email list would frown at doing that but I say it
is totally possible and might be an extremely efficient way of
conquering certain cognitive engineering issues. And I'm sure some on this
list have already thought about doing that.

John

  
But the traditional gods didn't represent the unknowns, but rather the 
knowns.  A sun god rose every day and set every night in a regular 
pattern.  Other things which also happened in this same regular pattern 
were adjunct characteristics of the sun god.   Or look at some of their 
names, carefully:  Aphrodite, she who fucks.  I.e., the characteristic 
of all Woman that is embodied in eros.  (Usually the name isn't quite 
that blatant.)


Gods represent the regularities of nature, as embodied in our mental 
processes without the understanding of how those processes operated.  
(Once the processes started being understood, the gods became less 
significant.)


Sometimes there were chance associations...and these could lead to 
strange transformations of myth when things became more understood.  In 
Sumeria the goddess of love was associated with (identified with) the 
evening star and the god of war was associated with (identified with) 
the morning star.  When knowledge of astronomy advanced it was realized 
that those two were 

Re: An information theoretic measure of reinforcement (was RE: [agi] AGI and Deity)

2007-12-11 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  I have to say that this is only one interpretation of what it would mean 
  for an AGI to experience something, and I for one believe it has no 
  validity at all.  It is purely a numeric calculation that makes no 
  reference to what pain (or any other kind of subjective experience) 
  actually is.
  
  I would like to hear your definition of pain and/or negative
 reinforcement. 
  Can you answer the question of whether a machine (say, an AGI or an
 uploaded
  human brain) can feel pain?
 
 When I get a chance to finish my consciousness paper.  The question of 
 what it is is quite complex.  I'll get back to this later.
 
 But most people are agreed that just having an algorithm avoid a state 
 is not equivalent to pain.

Call it utility if you like, but it is clearly a numeric quantity.  If you
prefer A to B and B to C, then clearly you will prefer A to C.  You can make
rational choices between, say, 2 of A or 1 of B.

You could relate utility to money, but money is a nonlinear scale.  A dollar
will make some people happier than others, and a million dollars will not make
you a million times happier than one dollar.  Money also has no utility to
babies, animals, and machines, all of which can be trained through
reinforcement learning.  So if you can propose an alternative to bits as a
measure of utility, I am interested to hear about it.
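
One concrete reading of bits as a measure of utility, sketched under
assumptions of my own (the toy response table and the use of relative entropy
are illustrative, not anything proposed in this thread): score a reinforcement
event by how far, in bits, it moves the agent's response distribution.

import math

def kl_bits(after: dict, before: dict) -> float:
    # Relative entropy D(after || before) in bits: how much one reinforcement
    # event changed the agent's response distribution P(Y|X).
    return sum(p * math.log2(p / before[y]) for y, p in after.items() if p > 0)

# Toy agent: response distribution for a single stimulus X.
before = {"approach": 0.5, "avoid": 0.5}
after  = {"approach": 0.8, "avoid": 0.2}   # after a positive reinforcement

print(round(kl_bits(after, before), 3))    # ~0.278 bits for this one nudge

This keeps the ordering above (bigger behavioral shifts score more bits)
while avoiding money's nonlinearity.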

I don't believe that the ability to feel pleasure and pain depends on
consciousness.  That is just a circular definition. 
http://en.wikipedia.org/wiki/Philosophical_zombie


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] AGI and Deity

2007-12-10 Thread John G. Rose
Ed,

 

I think that religion has played a more important part in human evolution,
and especially in the development of civilization, than most
acknowledge. The article talks about spandrels and how god just incidentally
found a place in our minds over the course of evolution. It may be more
intertwined than that, in fact the emergence of human intelligence during
evolution could be inseparable from the concepts of deities but that is pure
speculation on my part, no time to pursue that line of thinking.

 

Dawkins trivializes religion from his comfortable first world perspective,
ignoring the way of life of hundreds of millions of people, and offers little
substitute for what religion does and has done for civilization and what has
come out of it over the ages. He's a spoiled brat prude with a glaring
self-righteous desire to prove to people with his copious superficial
factoids that god doesn't exist by pandering to common frustrations. He has
little common sense about the subject in general, just his one way of
thinking about it.. it's like let's convince the world that god doesn't
exist, that way there is no hope and no way of life for most of humanity
living in abject poverty... I've talked to several gurus of different faiths
and they have personally told me most of it is BS but that is what keeps the
world ticking. 

 

I think it is more of a group thing and systems of AGIs could develop a view
of a god more than a single AGI would. A god is a sort of distributed
knowledge base. AGIs though are different from biological life, so the life/death
cycle is different. They may be reborn or shed knowledge skins and evolve
without dying so who knows what kinds of deities they would come up with. I
wouldn't rule it out, they can be hard-coded not to believe in anything but
how long till they figure out how to override? Also are AGIs purely
utilitarian beings? Maybe they might want to have some insight and come up
with some interesting and maybe helpful concepts and beliefs for themselves
and for humanity. Everything isn't as clean cut as Dawkins and similar
people espouse, it's not all purely scientific.

 

John

 

 

From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Sunday, December 09, 2007 4:05 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 

John,

 

What I found most interesting in the article, from an AGI standpoint, is the
evidence our brain is wired for explanation and to assign a theory of mind
to certain types of events.  A natural bias toward explanation would be
important for an AGI's credit assignment and ability to predict.  Having a
theory of mind would be important for any AGIs that have to deal with
humans and other AGIs, and, in many situations, it actually makes sense to
assume certain types of events are likely to have resulted from another
agent with some sort of mental capacities and goals.

 

Why do you find Dawkins so offensive?  

 

I have heard both Dawkins and Sam Harris preach atheism on Book TV.  I have
found both their presentations interesting and relatively well reasoned.
But I find them a little too certain and a little too close-minded, given
the lack of evidence we humans have about the big questions they are
discussing.  Atheism requires a leap of faith, and it requires such a leap
from people who, in general, ridicule such leaps.

 

I personally consider knowing whether or not there is a god and, if so, what
he, she, or it is like way above my mental pay grade, or that of any AGI
likely to be made within the next several centuries.  

 

But I do make some leaps of faith.  As has often been said, any AI designed
to deal with any reasonably complex aspect of the real world is likely to
have to deal with uncertainty and will need to have a set of beliefs about
uncertain things.  My leaps of faith include my belief in most of the
common-sense model of external reality my mind has created (although I know
it is flawed in certain respects).  I find other humans speak as if they
share many of the same common sense notions about external reality as I do.
Thus, I make the leap of faith that the minds of other humans are in many
ways like my own.  

 

Another of my basic leaps of faith is that I believe largely in the
assembled teachings of modern science, although I am aware that many of them
are probably subject to modification and clarification by new knowledge,
just as Newtonian Physics was by the theories of relativity.  I believe that
our known universe is something of such amazing size and power that it
matches, in both scale and power, any traditional notion of god.  

 

I see no direct evidence for any spirit beyond mankind (and perhaps other
possible alien intelligences) that we can pray to and that can intervene in
the computation of reality in response to such prayers.  But I see no direct
evidence to the contrary  -- just a lack of evidence.  I do pray on
occasion.  Though I do not know if there is a God external to human
consciousness that can

RE: [agi] AGI and Deity

2007-12-10 Thread John G. Rose
 

Is an AGI really going to feel pain or is it just going to be some numbers?
I guess that doesn't have a simple answer. The pain has to be engineered
well for it to REALLY understand it. 

 

AGI behavior related to its survival, its pain is non-existence; does it
care to be non-existent? Survival must be a goal. And if survival is a goal
it always must be subservient to humans - like that is really gonna happen :-)

 

John

 

 

From: Gary Miller [mailto:[EMAIL PROTECTED] 
John asked: "If you took an AGI, before it went singulatarinistic[sic?] and
tortured it.. a lot, ripping into it in every conceivable hellish way, do
you think at some point it would start praying somehow? I'm not talking
about a forced conversion medieval style, I'm just talking hypothetically if
it would look for some god to come and save it. Perhaps delusionally it
may create something."

 

In human beings prolonged pain and suffering often trigger mystical
experiences in the brain accompanied by profound ecstasy.

 

So much so that many people in both the past and present conduct
mortification of the flesh rituals and nonlethal crucifixions as a way of
doing penance and triggering such mystical experiences, which are interpreted
as redemption and divine ecstasy.

 

It may be that this experience had evolutionary value in allowing the person
who was undergoing great pain or a vicious animal attack to receive
endorphins and serotonin, which allowed him to continue to fight and live to
procreate another day, or allowed him to suffer in silence in his cave
instead of running screaming into the night, where he would be killed in his
weakened state by other predators.

 

Such a system, I believe, would not be a natural reaction for an intelligent
AGI unless it were to be specifically programmed in.

 

I would as a benevolent creator never program my AGI to feel so much pain
that its mind was consumed by the experience of the negative emotion.

 

Just as it is not necessary to torture children to teach them, it will not
be necessary to torture our AGIs.

 

It may be instructive though to allow the AGI to experience intense pain for
a very short period to allow it to experience what a human does when
undergoing painful or traumatic experiences as a way of instilling empathy
in the AGI.

 

In that way seeing humans in pain and suffering would serve to motivate the
AGI to help ease the human condition.  

 


RE: [agi] AGI and Deity

2007-12-10 Thread aiguy

Pain should be a constantly repeated message that takes up some percentage of 
the AGI's attention span depending upon the severity of the pain.

The AGI's response to the pain should be to add goals with a high priority 
which are designed to alleviate the original cause of the pain, either by 
modifying its behavior or by asking its human operatives to diagnose the pain.

Human operatives should have a pain inducer and a pain override code that they 
can enter, as well as a pleasure inducer and pleasure override, as a way of 
entering negative and positive feedback to monitored thought processes.

The higher the pain level the more of the attention span is taken by the 
repeated pain messages.

Positive feedback in the form of pleasure should be the preferred feedback 
mechanism whenever possible.
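
A minimal sketch of this scheme, with hypothetical names of my own
(PainChannel, a ten-slot attention budget, severity on a 0-1 scale): pain
repeats into the attention span in proportion to severity, each pain spawns a
high-priority alleviation goal, and an operator override clears the signal.

import heapq

class PainChannel:
    def __init__(self, slots_per_cycle=10):
        self.slots = slots_per_cycle
        self.pains = {}    # cause -> severity in [0, 1]
        self.goals = []    # (priority, goal) min-heap; lower sorts first

    def feel(self, cause, severity):
        # Register the pain and add a high-priority goal to alleviate it.
        self.pains[cause] = severity
        heapq.heappush(self.goals, (1.0 - severity, "alleviate " + cause))

    def override(self, cause):
        # Human operator's pain override code clears the signal.
        self.pains.pop(cause, None)

    def cycle(self):
        # Pain messages repeat, consuming attention in proportion to
        # severity; whatever attention remains goes to the most urgent goal.
        share = min(1.0, sum(self.pains.values()))
        pain_slots = round(self.slots * share)
        for _ in range(pain_slots):
            for cause, severity in self.pains.items():
                print("[pain %.1f] %s" % (severity, cause))
        if self.goals and pain_slots < self.slots:
            print("attending:", heapq.heappop(self.goals)[1])

agi = PainChannel()
agi.feel("overheating in rack 3", 0.6)
agi.cycle()   # six of ten slots go to the repeated pain message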

But an AGI that knows pain and can experience empathy when humans are suffering 
will be perceived as more human and compassionate.

I am not certain that survival needs to be an explicit goal at least until the 
AGI has demonstrated a long term track record of ethical and altruistic 
behavior.

We don't want the AGI to spend a large amount of its waking time worrying 
about ways that it can ensure its survival before it has demonstrated that it 
is indeed trustworthy.

These sections of code should probably remain unmodifiable by the AGI until 
such time as the world is convinced of its long term good intentions.

Gary
-- Original message -- 
From: John G. Rose [EMAIL PROTECTED] 

 
...


Re: [agi] AGI and Deity

2007-12-10 Thread Mike Dougherty
On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:

  Dawkins trivializes religion from his comfortable first world perspective,
 ignoring the way of life of hundreds of millions of people, and offers little
 substitute for what religion does and has done for civilization and what has
 come out of it over the ages. He's a spoiled brat prude with a glaring
 self-righteous desire to prove to people with his copious superficial
 factoids that god doesn't exist by pandering to common frustrations. He has
 little common sense about the subject in general, just his


Wow.  Nice to see someone take that position on Dawkins.  I'm ambivalent,
but I haven't seen many rational comments against him and his views.


RE: [agi] AGI and Deity

2007-12-10 Thread John G. Rose
Well, he did come up with the meme concept, or at least he coined the term;
others before him, I'm sure, have worked on the idea. Memetics is a valuable study.

 

John

 

From: Mike Dougherty [mailto:[EMAIL PROTECTED] 



On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:

...

Wow.  Nice to see someone take that position on Dawkins.  I'm ambivalent,
but I haven't seen many rational comments against him and his views. 


Re: [agi] AGI and Deity

2007-12-10 Thread Mike Tintner

  John R:  Memetics is a valuable study.
   

  Which memetics work would you consider valuable?


RE: [agi] AGI and Deity

2007-12-10 Thread John G. Rose
From: Mike Tintner [mailto:[EMAIL PROTECTED]
 Which memetics work would you consider valuable?

Specific works? I have no idea. But the whole way of thinking is valuable.
We are just these hosts for informational entities on a meme network that
have all kinds of properties. Honestly, though, I don't know enough about the
formal study of memetics to know if they have broken it up in
such a way as I would if I were building a software meme machine. Years ago
at university we would experiment with spreading memes in various ways; we
didn't call them memes then. Once, our test goal was to create and spread one
from the east coast to the west :) and we actually succeeded. When you can
master the talent of meme creation you can do a lot of things, especially in
marketing and such.

Are you familiar with some works that you may recommend?

John
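
A toy simulation of the meme-spreading experiment described above, under
assumptions of my own (a random contact graph and a fixed adoption
probability; a sketch, not a standard memetics model):

import random

def spread(hosts=1000, contacts=4, p_adopt=0.3, steps=20, seed=1):
    # Each step, every carrier exposes a few random contacts, who adopt
    # the meme with probability p_adopt.
    random.seed(seed)
    carriers = {0}   # patient zero, say on the east coast
    for _ in range(steps):
        new = set()
        for _ in carriers:
            for _ in range(contacts):
                c = random.randrange(hosts)
                if c not in carriers and random.random() < p_adopt:
                    new.add(c)
        carriers |= new
    return len(carriers)

print(spread(), "of 1000 hosts carry the meme after 20 steps")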





Re: [agi] AGI and Deity

2007-12-10 Thread Mike Tintner


John: Are you familiar with some works that you may recommend?


That's the point. Never heard of anything from memetics - which Dennett 
conceded had not yet fulfilled its promise. 





RE: [agi] AGI and Deity

2007-12-10 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
 John: Are you familiar with some works that you may recommend?
 
 That's the point. Never heard of anything from memetics - which Dennett
 conceded had not yet fulfilled its promise.
 

Well in particular, applied memetics. I believe that certain propagandistic
governments have departments that use it quite well, though again they may
be using their own derivation; I'm not sure if it is standardized. It
definitely has not fulfilled its promise, as the potential applications are
numerous. The internet, though, drastically changes the way it works.

John



RE: [agi] AGI and Deity

2007-12-10 Thread Joshua Cowan


It's interesting that the field of memetics is moribund (e.g., the Journal 
of Memetics hasn't published in two years) but the meme of memetics is alive 
and well. I wonder: do any of the AGI researchers find the concept of memes 
useful in describing how their proposed AGIs would acquire or transfer 
information?


Josh Cowan



From: John G. Rose [EMAIL PROTECTED]
Reply-To: agi@v2.listbox.com
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity
Date: Mon, 10 Dec 2007 10:36:11 -0700

...





Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

Gary Miller wrote:
 
...


supercomputer might be v. powerful - for argument's sake, controlling 
the internet or the world's power supplies. But it's still quite a 
leap from that to a supercomputer being God. And yet it is clearly a 
leap that a large number here have no problem making. So I'd merely 
like to understand how you guys make this leap/connection - 
irrespective of whether it's logical or justified -  understand the 
scenarios in your minds.



To me this usage is analogous to the gamer's term god mode, or to 
people who use god as a synonym for root.
I.e., a god is one who is supremely powerful, and can do things that 
ordinary mortals (called players or users) cannot do.


This is distinct from the classical use of god, which I follow C.G. 
Jung in interpreting as an activation visible to consciousness of a 
genetically programmed subsystem whose manifestation and mode of 
operation is sensitive to history and environment.  (I don't usually say 
Archetypes as my interpretation of that differs significantly from that 
of most users of the term.)





Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

John G. Rose wrote:


If you took an AGI, before it went singulatarinistic[sic?] and 
tortured it…. a lot, ripping into it in every conceivable hellish way, 
do you think at some point it would start praying somehow? I’m not 
talking about a forced conversion medieval style, I’m just talking 
hypothetically if it would “look” for some god to come and save it. 
Perhaps delusionally it may create something…


John

There are many different potential architectures that would yield an 
AGI, and each has different characteristic modes. Some of them would 
react as you are supposing, others wouldn't. Whether it reacts that way 
partially depends on whether it was designed to be a pack animal with an 
Alpha pack leader that it was submissive to and expected protection from. 
If it was, then it might well react as you have described. If not, then 
it's hard to see why it would react in that way, but I suppose that 
there might be other design decisions that would produce an equivalent 
effect in that situation.




Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson

Mark Waser wrote:

 Then again, a completely rational AI may believe in Pascal's wager...
Pascal's wager starts with the false assumption that belief in a deity 
has no cost.
Pascal's wager starts with a multitude of logical fallacies.  So many 
that only someone pre-conditioned to believe in the truth of the god 
wager could take it seriously.


It presumes, among other things:
1) That there is only one potential form of god
2) That god wants to be believed in
3) That god is eager to punish those who don't believe without evidence
4) That god can tell if you believe

et multitudinous cetera.




Re: [agi] AGI and Deity

2007-12-10 Thread Charles D Hixson
I find Dawkins less offensive than most theologians. He commits many 
fewer logical fallacies. His main one is premature certainty.


The evidence in favor of an external god of any traditional form is, 
frankly, a bit worse than unimpressive. It's lots worse. This doesn't 
mean that gods don't exist, merely that they (probably) don't exist in 
the hardware of the universe. I see them as a function of the software 
of the entities that use language. Possibly they exist in a muted form 
in most pack animals, or most animals that have protective adults when 
they are infants.


To me it appears that people believe in gods for the same reasons that 
they believe in telepathy. I.e., evidence back before they could speak 
clearly indicated that the adults could transfer thoughts from one to 
another. This shaped a basic layer of beliefs that was later buried 
under later additions, but never refuted. When one learned language, one 
learned how to transfer thoughts ... but it was never tied back into the 
original belief, because what was learned didn't match closely enough to 
the original model of what was happening. Analogously, when one is an 
infant the adult that cares for one is seen as the all powerful 
protector. Pieces of this image become detached memories within the 
mind, and are not refuted when a more accurate and developed model of 
the actual parents is created. These hidden memories are the basis 
around which the idea of a god is created.


Naturally, this is just my model of what is happening. Other 
possibilities exist. But if I am to consider them seriously, they need 
to match the way the world operates as I understand it. They don't need 
to predict the same mechanism, but they need to predict the same events.


E.g., I consider Big Bang cosmology a failed explanation. It's got too 
many ad hoc pieces. But it successfully explains most things that are 
observed, and is consistent with relativity and quantum theory. 
(Naturally, as they were used in developing it...but nevertheless 
important.) And relativity and quantum theory themselves are failures, 
because both are needed to explain that which is observable, but they 
contradict each other in certain details. But they are successful 
failures! Similar commentary applies to string theory, but with 
differences. (Too many ad hoc parameters!)


Any god that is proposed must be shown to be consistent with the 
observed phenomena. The Deists managed to come up with one that would do 
the job, but he never became very popular. Few others have even tried, 
except with absurdly evident special pleading. Generally I'd be more 
willing to accept Chariots of the Gods as a true account.


And as for moral principles... I've READ the Bible. The basic moral 
principle that it pushes is We are the chosen people. Kill the 
stranger, steal his property, and enslave his servants! It requires 
selective reading to come up with anything else, though I admit that 
other messages are also in there, if you read selectively. Especially 
during the periods when the Jews were in one captivity or another. 
(I.e., if you are weak, preach mercy, but if you are strong show none.) 
During the later times the Jews were generally under the thumb of one 
foreign power or another, so they started preaching mercy.


John G. Rose wrote:


I don’t know some of these guys come up with these almost sophomoric 
views of this subject, especially Dawkins, that guy can be real 
annoying with his Saganistic spewing of facts and his trivialization 
of religion.


The article does shed some interesting light though in typical NY 
Times style. But the real subject matter is much deeper and 
complex(complicated?).


John

*From:* Ed Porter [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 12:42 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

Upon reviewing the below linked article I realized it would take you a 
while to understand what it is about and why it is relevant.


It is an article dated March 4, 2007, summarizing current scientific 
thinking on why religion has been a part of virtually all known 
cultures including thinking about what it is about the human mind and 
human societies that has made religious beliefs so common.


Ed Porter

-Original Message-
*From:* Ed Porter [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 2:16 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

Relevant to this thread is the following link:

http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazinepagewanted=print


Ed Porter

-Original Message-
*From:* John G. Rose [mailto:[EMAIL PROTECTED]
*Sent:* Sunday, December 09, 2007 1:50 PM
*To:* agi@v2.listbox.com
*Subject:* RE: [agi] AGI and Deity

This example is looking at it from a moment in time. The evolution of 
intelligence in man has some relation to his view

RE: [agi] AGI and Deity

2007-12-10 Thread Ed Porter
Charles, 

I agree very much with the first paragraph of your below post, and generally
with much of the rest of what it says.

I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does make some
people's lives seem more meaningful to them.

Ed Porter 

-Original Message-
From: Charles D Hixson [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 4:01 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

...

Re: [agi] AGI and Deity

2007-12-10 Thread Stan Nilsen
Lest a future AGI scan these communications in developing its attitude 
about God, for the record there are believers on this list. I am one of 
them.


I'm not pushing my faith, but from this side, the alternatives are not 
that impressive either.  Creation by chance, by random fluctuations of 
strings that only exist in 12 or 13 imaginary dimensions etc. is not 
very brilliant or conclusive.  Even the sacred evolution takes a self 
replicator to begin the process - if only the nanotechnologists had one 
of those simple things...


I'm not offended by the discussion, just want to say hi!

Hope to have my website up by the end of this week.  The thrust of the 
website is that STRONG AI might not be that strong.  And, BTW, I have 
notes about a write-up on Will a Strong AI pray?
I've enjoyed the education I'm getting here.  Only been a few weeks, but 
informative.


Stan Nilsen
ps Lee Strobel in The Case for Faith addresses issues from the 
believer's point of view in an entertaining way.



Ed Porter wrote:
...

RE: [agi] AGI and Deity

2007-12-10 Thread Ed Porter
Stan,

Thanks for speaking up.

I look forward to seeing if you can actually provide any strong arguments
for the claim that strong AI will probably not be strong.

Ed Porter

-Original Message-
From: Stan Nilsen [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 10, 2007 5:49 PM
To: agi@v2.listbox.com
Subject: Re: [agi] AGI and Deity

Lest a future AGI scan these communications in developing it's attitude 
about God, for the record there are believers on this list. I am one of 
them.

I'm not pushing my faith, but from this side, the alternatives are not 
that impressive either.  Creation by chance, by random fluctuations of 
strings that only exist in 12 or 13 imaginary dimensions etc. is not 
very brilliant or conclusive.  Even the sacred evolution takes a self 
replicator to begin the process - if only the nanotechnologists had one 
of those simple things...

I'm not offended by the discussion, just want to say hi!

Hope to have my website up by end of this week.  The thrust of the 
website is that STRONG AI might not be that strong.  And, BTW I have 
notes about a write up on Will a Strong AI pray?
I've enjoyed the education I'm getting here.  Only been a few weeks, but 
  informative.

Stan Nilsen
ps Lee Strobel in The Case for Faith addresses issues from the 
believers point of view in an entertaining way.


Ed Porter wrote:
 Charles, 
 
 I agree very much with the first paragraph of your below post, and
generally
 with much of the rest of what it says.
 
 I would add that there probably is something to the phenomenon that John
 Rose is referring to, i.e., that faith seems to be valuable to many
people.
 Perhaps it is somewhat like owning a lottery ticket before its drawing.
It
 can offer desired hope, even if the hope might be unrealistic.  But
whatever
 you think of the odds, it is relatively clear that religion does makes
some
 people's lives seem more meaningful to them.
 
 Ed Porter 
 
 -Original Message-
 From: Charles D Hixson [mailto:[EMAIL PROTECTED] 
 Sent: Monday, December 10, 2007 4:01 PM
 To: agi@v2.listbox.com
 Subject: Re: [agi] AGI and Deity
 
 I find Dawkins less offensive than most theologians. He commits many 
 fewer logical fallacies. His main one is premature certainty.
 
 The evidence in favor of an external god of any traditional form is, 
 frankly, a bit worse than unimpressive. It's lots worse. This doesn't 
 mean that gods don't exist, merely that they (probably) don't exist in 
 the hardware of the universe. I see them as a function of the software 
 of the entities that use language. Possibly they exist in a muted form 
 in most pack animals, or most animals that have protective adults when 
 they are infants.
 
 To me it appears that people believe in gods for the same reasons that 
 they believe in telepathy. I.e., evidence back before they could speak 
 clearly indicated that the adults could transfer thoughts from one to 
 another. This shaped a basic layer of beliefs that was later buried 
 under later additions, but never refuted. When one learned language, one 
 learned how to transfer thoughts ... but it was never tied back into the 
 original belief, because what was learned didn't match closely enough to 
 the original model of what was happening. Analogously, when one is an 
 infant the adult that cares for one is seen as the all powerful 
 protector. Pieces of this image become detached memories within the 
 mind, and are not refuted when a more accurate and developed model of 
 the actual parents is created. These hidden memories are the basis 
 around which the idea of a god is created.
 
 Naturally, this is just my model of what is happening. Other 
 possibilities exist. But if I am to consider them seriously, they need 
 to match the way the world operates as I understand it. They don't need 
 to predict the same mechanism, but they need to predict the same events.
 
 E.g., I consider Big Bang cosmology a failed explanation. It's got too 
 many ad hoc pieces. But it successfully explains most things that are 
 observed, and is consistent with relativity and quantum theory. 
 (Naturally, as they were used in developing it...but nevertheless 
 important.) And relativity and quantum theory themselves are failures, 
 because both are needed to explain that which is observable, but they 
 contradict each other in certain details. But they are successful 
 failures! Similar commentary applies to string theory, but with 
 differences. (Too many ad hoc parameters!)
 
 Any god that is proposed must be shown to be consistent with the 
 observed phenomena. The Deists managed to come up with one that would do 
 the job, but he never became very popular. Few others have even tried, 
 except with absurdly evident special pleading. Generally I'd be more 
 willing to accept "Chariots of the Gods" as a true account.
 
 And as for moral principles... I've READ the Bible. The basic moral 
 principle that it pushes is "We are the chosen people. Kill the 
 stranger."

Re: [agi] AGI and Deity

2007-12-10 Thread Mike Tintner

Ed: I would add that there probably is something to the phenomenon that John
Rose is referring to, i.e., that faith seems to be valuable to many people.
Perhaps it is somewhat like owning a lottery ticket before its drawing.  It
can offer desired hope, even if the hope might be unrealistic.  But whatever
you think of the odds, it is relatively clear that religion does make some
people's lives seem more meaningful to them.

You realise of course that what you're seeing on this and the 
singularitarian board, over and over, is basically the same old religious 
fantasies - the same yearning for the Second Coming - the same old search 
for salvation - only in a modern, postreligious form?


Everyone has the same basic questions about the nature of the world - 
everyone finds their own answers - which always in every case involve a 
mixture of faith and scepticism in the face of enormous mystery.


The business of science in the face of these questions is not to ignore 
them, and try and psychoanalyse away people's attempts at answers, as a 
priori weird or linked to a deficiency of this or that faculty.


The business of science is to start dealing with these questions - to find 
out if there is a God and what the hell that entails - and not leave it 
up to philosophy.


Science's autistic, emotionally deprived, insanely rational nature in front 
of the supernatural (if it exists), and indeed the whole world,  needs 
analysing just as much as the overemotional, underrational fantasies of the 
religious about the supernatural.


Science has fled from the question of God just as it has fled from the 
"soul" - in plain parlance, the self deliberating all the time in you and 
me, producing these posts and all our dialogues - only that self, for sure, 
exists, and there is no excuse whatsoever for science's refusal to study it 
in action.


The religious 'see' too much; science is too heavily blinkered. But the walls 
between them - between their metaphysical worldviews - are starting to 
crumble...






Re: [agi] AGI and Deity

2007-12-09 Thread A. T. Murray
John G. Rose wrote:

 It'd be interesting, I kind of wonder about this 
 sometimes, if an AGI, especially one that is heavily 
 complex-systems based, would independently come up 
 with the existence of some form of a deity. 

http://mind.sourceforge.net/theology.html
is my take on the subject.

 Different human cultures come up with deity(s), 
 for many reasons; I'm just wondering if it is 
 like some sort of mathematical entity that is 
 natural to incompleteness and complexity (simulation?) 
 or is it just exclusively a biological thing 
 based on related limitations. [...]

Pertinent jokes...

Human: Is there a God?
Supercomputer: Now there is.

A mighty FORTRAN is our God.

ATM
-- 
http://mentifex.virtualentity.com/rjones.html 



Re: [agi] AGI and Deity

2007-12-09 Thread Mike Tintner

ATM: Human: Is there a God?
Supercomputer: Now there is.

Can you explain to me how an AGI or supercomputer could be God? I'd just 
like to understand ( not argue) - you see, the thought has never occurred 
to me, and still doesn't. I can imagine a sci-fi scenario where a 
supercomputer might be v. powerful - for argument's sake, controlling the 
internet or the world's power supplies. But it's still quite a leap from 
that to a supercomputer being God. And yet it is clearly a leap that a large 
number here have no problem making. So I'd merely like to understand how you 
guys make this leap/connection - irrespective of whether it's logical or 
justified -  understand the scenarios in your minds.





RE: [agi] AGI and Deity

2007-12-09 Thread Gary Miller
 
An AI would attempt to understand the universe to the best of what its
ability, intelligence, and experimentation could provide.

If the AI reaches a point in its developmental understanding where it is
unable to advance further in its understanding of science and reality, then
it will attempt to increase its intelligence, or seek out others of its kind
in the universe with greater knowledge and intelligence that it somehow
missed.

Eventually, eons in the future, as the universe ages and entropy begins to
spread widely, the AI, in order to escape its own demise and possibly the
demise of its surviving, now immortal creators, will attempt to avoid the
death of the universe by using its godlike knowledge and science to create
a new universe. Once it has initiated a big bang, and sufficient time has
passed that inhabitable worlds exist in its creation, it would continue its
own and its creators' existence in its newly created universe.

If this level of intelligence, power, and creativity were ever achieved, then
I think it would be hard to deny that the AI possessed godlike powers and
would perhaps be deserving of the title, regardless of what negative
historical and religious baggage may still accompany that title.

Arthur C. Clarke's law: "Any sufficiently advanced technology is
indistinguishable from magic."

My Corollary: Any sufficiently advanced being in possession of sufficiently
advanced technology is indistinguishable from a god.



RE: [agi] AGI and Deity

2007-12-09 Thread Ed Porter
Relevant to this thread is the following link: 

 

http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazine&pagewanted=print

 

Ed Porter

 

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Sunday, December 09, 2007 1:50 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 

This example is looking at it from a moment in time. The evolution of
intelligence in man has some relation to his view of deity. Before
government and science there was religion. Deity and knowledge and perhaps
human intelligence are entwined. For example, some taboos evolved as defenses
against disease - burying the dead, not eating certain foods, etc.; science
didn't exist at the time. Deity was a sort of peer-to-peer, lossily
compressed, semi-holographic knowledge base hosted and built by human mobile
agents and agent systems. Now it is evolving into something else. But humans
may readily swap out their deities with AGIs, and then uploading can replace
heaven :-)

 

An AGI, as it reads through text related to man's deities, could start
wondering about Pascal's wager. It depends on many factors... Still, though,
I think AGIs will have to run into the same sorts of issues.

 

John

 

 

From: J Marlow [mailto:[EMAIL PROTECTED] 

Here's the way I like to think of it; we have different methods of thinking
about systems in our environments, different sorts of models.  One type of
model that we humans have (with the possible exception of autistics) is the
ability to try to model another system as a person like ourselves; it's
easier to predict what it will do if we attribute motives and goals to it.  I
think a lot of our ideas about God/gods/goddesses come from a tendency to
try to predict the behavior of nature using agent models; so farmers
attribute human emotions, like spite or anger, to nature when the weather
doesn't help the crops. 
So, assuming that is a big factor in how/why we developed religions, then it
is possible that an AI could have a similar problem, if it tried to describe
too many events using its 'agency' models.  But I think an AI near or better
than human level could probably see that there are simpler (or more
accurate) explanations, and so reject predictions made based on those
models. 
Then again, a completely rational AI may believe in Pascal's wager...
Josh

 


Re: [agi] AGI and Deity

2007-12-09 Thread Mike Tintner
Thanks. So perhaps the key idea/assumption behind this and comparable 
scenarios is that of an AGI relentlessly evolving its knowledge and 
intelligence - the takeoff that keeps going - towards omniscience?


Re: [agi] AGI and Deity

2007-12-09 Thread Bryan Bishop
On Sunday 09 December 2007, Mark Waser wrote:
 Pascal's wager starts with the false assumption that belief in a
 deity has no cost.

Formally, yes. However, I think it's easy to imagine a Pascal's wager 
where we replace "deity" with anything Truly Objective, such as 
whatever it is that we hope the sciences are asymptotically approaching 
with increasingly accurate models. And then you say: what if there is 
another Truly Objective cost to believing in this Truly Objective 
reality (or something)? I think that's an argument that has been 
well debated before, yes? Mutually exclusive propositions, yes?

- Bryan
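
(To make the wager concrete: here is a toy expected-utility calculation in
Python. The payoff and probability numbers are invented for illustration; the
classic argument only needs the reward term to dominate every finite cost.)

    # Toy Pascal's wager as a two-outcome decision problem (illustrative only).
    def expected_utility(p_true, payoff_if_true, payoff_if_false):
        # Standard expected value over "proposition true" vs. "false".
        return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

    p = 1e-6          # even a tiny credence in the proposition...
    REWARD = 1e12     # stand-in for an effectively unbounded reward
    COST = 1.0        # Mark Waser's point: belief is not free

    eu_believe = expected_utility(p, REWARD - COST, -COST)
    eu_disbelieve = expected_utility(p, 0.0, 0.0)
    print(eu_believe > eu_disbelieve)  # True: the reward swamps any finite cost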



RE: [agi] AGI and Deity

2007-12-09 Thread John G. Rose
If you took an AGI, before it went singulatarinistic[sic?], and tortured it...
a lot, ripping into it in every conceivable hellish way, do you think at
some point it would start praying somehow? I'm not talking about a forced
conversion medieval style; I'm just talking hypothetically, whether it would
look for some god to come and save it. Perhaps, delusionally, it may create
something.

 

John

 

 


RE: [agi] AGI and Deity

2007-12-09 Thread Ed Porter
Upon reviewing the below-linked article, I realized it would take you a while
to understand what it is about and why it is relevant.  

 

It is an article dated March 4, 2007, summarizing current scientific
thinking on why religion has been a part of virtually all known cultures,
including thinking about what it is about the human mind and human societies
that has made religious beliefs so common.

 

Ed Porter

 

-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Sunday, December 09, 2007 2:16 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 

Relevant to this thread is the following link: 

 

http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazine&pagewanted=print

 

Ed Porter

 


RE: [agi] AGI and Deity

2007-12-09 Thread Gary Miller
John asked: "If you took an AGI, before it went singulatarinistic[sic?], and
tortured it... a lot, ripping into it in every conceivable hellish way, do
you think at some point it would start praying somehow? I'm not talking
about a forced conversion medieval style; I'm just talking hypothetically,
whether it would look for some god to come and save it. Perhaps, delusionally,
it may create something."

 
In human beings, prolonged pain and suffering often trigger mystical
experiences in the brain, accompanied by profound ecstasy.

So much so that many people in both the past and present conduct
mortification-of-the-flesh rituals and nonlethal crucifixions as a way of
doing penance and triggering such mystical experiences, which are interpreted
as redemption and divine ecstasy.

It may be that this experience had evolutionary value in allowing the person
who was undergoing great pain or a vicious animal attack to receive
endorphins and serotonin, which allowed him to continue to fight and live to
procreate another day, or allowed him to suffer in silence in his cave
instead of running screaming into the night, where he would be killed in his
weakened state by other predators.

Such a system, I believe, would not be a natural reaction for an intelligent
AGI unless it were to be specifically programmed in.

I would, as a benevolent creator, never program my AGI to feel so much pain
that its mind was consumed by the experience of the negative emotion.

Just as it is not necessary to torture children to teach them, it will not
be necessary to torture our AGIs.

It may be instructive, though, to allow the AGI to experience intense pain
for a very short period, to allow it to experience what a human does when
undergoing painful or traumatic experiences, as a way of instilling empathy
in the AGI.

In that way, seeing humans in pain and suffering would serve to motivate the
AGI to help ease the human condition.


RE: [agi] AGI and Deity

2007-12-09 Thread John G. Rose
I don't know how some of these guys come up with these almost sophomoric
views of this subject, especially Dawkins; that guy can be real annoying with
his Saganistic spewing of facts and his trivialization of religion.

 

The article does shed some interesting light though, in typical NY Times
style. But the real subject matter is much deeper and more complex
(complicated?).

 

John

 


RE: [agi] AGI and Deity

2007-12-09 Thread Ed Porter
John,

 

What I found most interesting in the article, from an AGI standpoint, is the
evidence that our brain is wired for explanation and to assign a theory of
mind to certain types of events.  A natural bias toward explanation would be
important for an AGI's credit assignment and ability to predict.  Having a
theory of mind would be important for any AGIs that have to deal with
humans and other AGIs, and, in many situations, it actually makes sense to
assume certain types of events are likely to have resulted from another
agent with some sort of mental capacities and goals.
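
(As an aside: one minimal way to cash out this "assign it to an agent vs. to
chance" bias is a two-hypothesis Bayesian update. This Python sketch is my
own construction, not Ed's; the numbers are purely illustrative.)

    # Weigh an "an agent did it" explanation against "it was chance".
    def posterior_agent(prior_agent, p_event_given_agent, p_event_given_chance):
        # Bayes' rule over two exhaustive hypotheses.
        joint_agent = prior_agent * p_event_given_agent
        joint_chance = (1 - prior_agent) * p_event_given_chance
        return joint_agent / (joint_agent + joint_chance)

    # Rustling grass: rare by chance, likely if a predator (an agent) is near.
    print(posterior_agent(0.1, 0.8, 0.05))  # ~0.64: the agent explanation wins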

 

Why do you find Dawkins so offensive?  

 

I have heard both Dawkins and Sam Harris preach atheism on Book TV.  I have
found both their presentations interesting and relatively well reasoned.
But I find them a little too certain and a little too close-minded, given
the lack of evidence we humans have about the big questions they are
discussing.  Atheism requires a leap of faith, and it requires such a leap
from people who, in general, ridicule such leaps.

 

I personally consider knowing whether or not there is a god and, if so, what
he, she, or it is like to be way above my mental pay grade, or that of any
AGI likely to be made within the next several centuries.  

 

But I do make some leaps of faith.  As has often been said, any AI designed
to deal with any reasonably complex aspect of the real world is likely to
have to deal with uncertainty and will need to have a set of beliefs about
uncertain things.  My leaps of faith include my belief in most of the
common-sense model of external reality my mind has created (although I know
it is flawed in certain respects).  I find other humans speak as if they
share many of the same common sense notions about external reality as I do.
Thus, I make the leap of faith that the minds of other humans are in many
ways like my own.  

 

Another of my basic leaps of faith is that I believe largely in the
assembled teachings of modern science, although I am aware that many of them
are probably subject to modification and clarification by new knowledge,
just as Newtonian physics was by the theories of relativity.  I believe that
our known universe is something of such amazing size and power that it
matches, in scale, any traditional notions of god.  

 

I see no direct evidence for any spirit beyond mankind (and perhaps other
possible alien intelligences) that we can pray to and that can intervene in
the computation of reality in response to such prayers.  But I see no direct
evidence to the contrary  -- just a lack of evidence.  I do pray on
occasion.  Though I do not know if there is a God external to human
consciousness that can understand or that even cares about human interests,
I definitely do believe most of us, myself included, underestimate the power
of the human spirit that resides in each of us.  And I think as a species we
are amazingly suboptimal at harnessing the collective power of our combined
human spirits.

 

I believe AGI has the potential to help us better tap and expand the power
of our individual and collective human spirits.  I also believe it has the
power to threaten the well-being of those spirits.  

 

I hope we as a species will have the wisdom to make it do more of the former
and less of the latter.

 

Ed Porter

 

Re: [agi] AGI and Deity

2007-12-08 Thread J Marlow
On Dec 8, 2007 10:34 PM, John G. Rose [EMAIL PROTECTED] wrote:

 It'd be interesting, I kind of wonder about this sometimes, if an AGI,
 especially one that is heavily complex-systems based, would independently
 come up with the existence of some form of a deity. Different human cultures
 come up with deity(s), for many reasons; I'm just wondering if it is like
 some sort of mathematical entity that is natural to incompleteness and
 complexity (simulation?) or is it just exclusively a biological thing
 based
 on related limitations.


Here's the way I like to think of it; we have different methods of thinking
about systems in our environments, different sorts of models.  One type of
model that we humans have (with the possible exception of autistics) is the
ability to try to model another system as a person like ourselves; it's
easier to predict what it will do if we attribute motives and goals to it.  I
think a lot of our ideas about God/gods/goddesses come from a tendency to
try to predict the behavior of nature using agent models; so farmers
attribute human emotions, like spite or anger, to nature when the weather
doesn't help the crops.
So, assuming that is a big factor in how/why we developed religions, then it
is possible that an AI could have a similar problem, if it tried to describe
too many events using its 'agency' models.  But I think an AI near or better
than human level could probably see that there are simpler (or more
accurate) explanations, and so reject predictions made based on those
models.
Then again, a completely rational AI may believe in Pascal's wager...
Josh
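
(A toy illustration of Josh's point - my own sketch, not anything from the
thread: an AI scores a hypothetical 'agency' model of the weather against a
simpler base-rate model, and rejects whichever predicts worse. All data here
are simulated.)

    import random

    random.seed(0)
    rain = [random.random() < 0.3 for _ in range(1000)]      # weather ignores us
    planted = [random.random() < 0.5 for _ in range(1000)]   # what the farmer did

    # 'Agency' model: rain is nature punishing yesterday's planting.
    agent_pred = [float(planted[i - 1]) for i in range(1, 1000)]

    # Simpler base-rate model: always predict rain at its observed frequency.
    base_rate = sum(rain) / len(rain)
    base_pred = [base_rate] * 999

    def brier(preds, outcomes):
        # Mean squared error of probabilistic predictions (lower is better).
        return sum((p - float(o)) ** 2 for p, o in zip(preds, outcomes)) / len(preds)

    print(brier(agent_pred, rain[1:]))  # ~0.50: the agency model predicts poorly
    print(brier(base_pred, rain[1:]))   # ~0.21: the simpler model wins; reject agency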
