Re: AI Dooms Us

2014-08-28 Thread Telmo Menezes
On Wed, Aug 27, 2014 at 8:16 PM, Terren Suydam terren.suy...@gmail.com
wrote:


 On Wed, Aug 27, 2014 at 12:21 PM, Telmo Menezes te...@telmomenezes.com
 wrote:


 On Wed, Aug 27, 2014 at 1:53 PM, Terren Suydam terren.suy...@gmail.com
 wrote:


 The space of possibilities quickly scales beyond the wildest imaginings
 of computing power. Chess AIs are already better than humans, because they
 more or less implement this approach, and it turns out you only need to
 compute a few hundred million positions per second to do that. Obviously
 that's a toy environment... the possibilities inherent in the real world
 aren't even enumerable according to some predefined ontology (i.e. the
 kind of ontology a minimax-type AI would require us to specify).
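
A minimal Python sketch of the minimax-with-alpha-beta idea described
above; evaluate, legal_moves and apply_move are invented stand-ins for a
real engine's chess-specific code, not anyone's actual implementation:

# Plain minimax with alpha-beta pruning over a hypothetical game tree.
# A real chess engine plugs in board-specific evaluate()/legal_moves()
# and searches hundreds of millions of positions per second.
def minimax(state, depth, alpha, beta, maximizing,
            evaluate, legal_moves, apply_move):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static score of a leaf position
    if maximizing:
        best = float('-inf')
        for m in moves:
            best = max(best, minimax(apply_move(state, m), depth - 1,
                                     alpha, beta, False,
                                     evaluate, legal_moves, apply_move))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizer will never allow this line
        return best
    best = float('inf')
    for m in moves:
        best = min(best, minimax(apply_move(state, m), depth - 1,
                                 alpha, beta, True,
                                 evaluate, legal_moves, apply_move))
        beta = min(beta, best)
        if alpha >= beta:
            break  # prune: the maximizer already has a better line
    return best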


 Ok, but of course minimax was also a toy example. Several algorithms that
 already exist could be combined: deep learning, Bayesian belief networks,
 genetic programming and so on. A clever combination of algorithms plus the
 still ongoing exponential growth in available computational power could
 soon unleash something impressive. Of course I am just challenging your
 intuition, mostly because it's a fun topic :) Who knows who's right...


 I think these are overlapping intuitions. On one hand, there is the idea
 that given enough computing/data resources, something can be created that -
 regardless of how limited its domain of operation - is still a threat in
 unexpected ways. On the other hand is the idea that AIs which pose real
 threats - threats we are not capable of stopping - require a quantum leap
 forward in cognitive flexibility, if you will.


Agreed.



 Although my POV is aligned with the latter intuition, I actually agree
 with the former, but consider the kinds of threats involved to be bounded
 in ways we can in principle control - although in practice it is possible
 for them to do damage so quickly that we can't prevent it.

 Perhaps my idea of intelligence is too limited. I am assuming that
 something capable of being a real threat will be able to generate its own
 ontologies, creatively model them in ways that build on and relate to
 existing ontologies, simulate and test those new models, etc., generate
 value judgments using these new models with respect to overarching utility
 function(s). It is suspiciously similar to human intelligence.
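
A cartoon of that loop in Python, with every name invented (propose,
simulate and utility are stand-ins, not anyone's actual architecture):

# Generate a candidate model/ontology, simulate it, score the outcome
# against a utility function, and adopt it only if it beats what we have.
def agent_step(world_state, models, utility, propose, simulate):
    candidate = propose(models)                     # invent a new model
    best_known = max((utility(simulate(m, world_state)) for m in models),
                     default=float('-inf'))
    if utility(simulate(candidate, world_state)) > best_known:
        models.append(candidate)                    # adopt the improvement
    return models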


I wonder. What you describe seems like the way of thinking of a person
trained in the scientific method (a very recent discovery in human
history). Is this raw human intelligence? I suspect raw human intelligence
is more like a kludge. It is possible to create rickety structures of order
on top of that kludge, by a process we call education.


 The difference is that as an *artificial* intelligence with a different
 embodiment and different algorithms, it could well arrive at models
 strikingly different from how we see the world, with all the attendant
 problems that could pose for us given its eventually superior computing
 power.


Ok.




 Another interesting/scary scenario to think about is the possibility of a
 self-mutating computer program proliferating under our noses until it's too
 late (and exploiting the Internet to create a very powerful meta-computer
 by stealing a few CPU cycles from everyone).


 I think something like this could do a lot of damage very quickly, but by
 accident... in a similar way perhaps to the occasional meltdowns caused by
 the collective behaviors of micro-second market-making algorithms.
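
A toy sketch of that feedback in Python, with invented parameters - a
cartoon of mutual amplification, not a model of real market
microstructure. Two quoting rules that each anchor partly to a fixed
fundamental value and partly to each other damp a shock below a critical
herding weight, and amplify it above:

# Two stylized market-making rules quoting a mix of a fundamental value
# and the other's latest quote. All numbers are invented.
def simulate(copy_weight, shock=-1.0, steps=40):
    fundamental = 100.0
    quote_a = quote_b = fundamental
    for t in range(steps):
        bump = shock if t == 10 else 0.0  # one small external sell-off
        quote_a = ((1 - copy_weight) * fundamental
                   + copy_weight * quote_b + bump)
        quote_b = (1 - copy_weight) * fundamental + copy_weight * quote_a
    return quote_a

print(round(simulate(0.5), 2))   # ~100.0: the shock is damped away
print(round(simulate(1.02), 2))  # deviation keeps growing: herding amplifies it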


Another example is big societies designed by humans.


  I find it exceedingly unlikely that an AGI will spontaneously emerge from
 a self-mutating process like you describe. Again, if this kind of thing
 were likely, or at least not extremely unlikely, I think it suggests that
 AGI is a lot simpler than it really is.


This is tricky. The Kolmogorov complexity of AGI could be relatively low --
maybe it can be expressed in 1000 lines of lisp. But the set of programs
expressible in 1000 lines of lisp includes some really crazy,
counter-intuitive stuff (e.g. the universal dovetailer). Genetic
programming has been shown to discover relatively short solutions that are
better than anything a human could come up with, precisely because they
are counter-intuitive.
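
A toy sketch of the genetic-programming point, under invented settings:
evolve small random expression trees toward the target x^2 + x + 1 with
mutation-only selection. Real GP systems add crossover and much more;
this only illustrates how short evolved solutions fall out of the process:

# Mutation-only genetic programming over tiny expression trees.
# Target: f(x) = x^2 + x + 1. Everything here is illustrative.
import operator
import random

OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def random_expr(depth=3):
    # leaf: the variable x, or a small integer constant
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.5 else (random.randint(-2, 2),)
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if len(expr) == 1:
        return x if expr[0] == 'x' else expr[0]
    op, left, right = expr
    return op[0](evaluate(left, x), evaluate(right, x))

def fitness(expr):
    # negative total error on a handful of sample points (0 is perfect)
    return -sum(abs(evaluate(expr, x) - (x * x + x + 1)) for x in range(-5, 6))

def mutate(expr, depth=3):
    # subtree mutation: occasionally replace a node with a fresh subtree
    if random.random() < 0.2:
        return random_expr(depth)
    if len(expr) == 1:
        return expr  # keep the leaf
    op, left, right = expr
    return (op, mutate(left, depth - 1), mutate(right, depth - 1))

population = [random_expr() for _ in range(200)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:40]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(160)]

print(fitness(max(population, key=fitness)))  # 0 means an exact match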
  You're talking about an AI that arrives at novel solutions, which
 requires the ability to invent/simulate/act on new models in new domains
 (AGI).


 Evolutionary computation already achieves novelty and invention, to a
 degree. I concur that it is still not AGI. But it could already be a
 threat, given enough computational resources.


  AGI is a threat because its utility function would necessarily be
 sufficiently meta that it could create novel sub-goals. We would not
 necessarily be able to control whether it chose a goal that was compatible
 with ours.

 It comes down to how the utility function is defined. For Google Car,
 the utility function probably tests actions along the lines of get from A
 to B safely, as 

Re: AI Dooms Us

2014-08-28 Thread Telmo Menezes
On Wed, Aug 27, 2014 at 11:11 PM, Platonist Guitar Cowboy 
multiplecit...@gmail.com wrote:

 Legitimacy of proof and evidence (e.g. for a set of cool algorithms
 concerning AI, more computing power, big data etc) is an empty question to
 ask outside a specified theory. It's like some alien questioning whether
 the rules of soccer on earth are valid in an absolute sense.

 Are we after freedom from contradictions? Completeness? Utility function,
 what are the references, where is the ultimate list?

 ISTM Gödel's work has more to say about AI and monkey rock throwing
 theologies than we might be inclined to assume.


Agreed.








Re: AI Dooms Us

2014-08-28 Thread Telmo Menezes
On Wed, Aug 27, 2014 at 9:49 PM, meekerdb meeke...@verizon.net wrote:

  On 8/27/2014 4:53 AM, Terren Suydam wrote:

 You're talking about an AI that arrives at novel solutions, which
 requires the ability to invent/simulate/act on new models in new domains
 (AGI).


   Evolutionary computation already achieves novelty and invention, to a
 degree. I concur that it is still not AGI. But it could already be a
 threat, given enough computational resources.


  AGI is a threat because its utility function would necessarily be
 sufficiently meta that it could create novel sub-goals. We would not
 necessarily be able to control whether it chose a goal that was compatible
 with ours.


 On the other hand we're not that good at choosing goals for ourselves -
 e.g. ISIS has chosen the goal of imposing a ruthless religious tyranny.


Or maybe we are not very good at choosing goals for societies.

Telmo.



 Brent





Spark of Madness

2014-08-28 Thread Telmo Menezes
https://www.youtube.com/watch?v=l2SliEAGamw



Re: AI Dooms Us

2014-08-28 Thread Terren Suydam
On Thu, Aug 28, 2014 at 7:30 AM, Telmo Menezes te...@telmomenezes.com
wrote:


 Although my POV is aligned with the latter intuition, I actually agree
 with the former, but consider the kinds of threats involved to be bounded
 in ways we can in principle control - although in practice it is possible
 for them to do damage so quickly that we can't prevent it.

 Perhaps my idea of intelligence is too limited. I am assuming that
 something capable of being a real threat will be able to generate its own
 ontologies, creatively model them in ways that build on and relate to
 existing ontologies, simulate and test those new models, etc., generate
 value judgments using these new models with respect to overarching utility
 function(s). It is suspiciously similar to human intelligence.


 I wonder. What you describe seems like the way of thinking of a person
 trained in the scientific method (a very recent discovery in human
 history). Is this raw human intelligence? I suspect raw human intelligence
 is more like a kludge. It is possible to create rickety structures of order
 on top of that kludge, by a process we call education.



I don't mean to imply formal learning at all. I think this even applies to
any animal that dreams during sleep (say). Modeling the world is a very
basic function of the brain, even if the process and the result are a
kludge. With language and the ability to articulate models, humans can get
very good indeed at making them precise and building structures, rickety or
otherwise, upon the basic kludginess you're talking about.


 I think something like this could do a lot of damage very quickly, but by
 accident... in a similar way perhaps to the occasional meltdowns caused by
 the collective behaviors of micro-second market-making algorithms.


 Another example is big societies designed by humans.


Big societies act much more slowly. But they are their own organisms; we
don't design them any more than our cells design us. We are not really that
good at seeing how they operate, for the same reason we find it hard to
perceive how a cloud changes through time.




  I find it exceedingly unlikely that an AGI will spontaneously emerge
 from a self-mutating process like you describe. Again, if this kind of
 thing were likely, or at least not extremely unlikely, I think it suggests
 that AGI is a lot simpler than it really is.


 This is tricky. The Kolmogorov complexity of AGI could be relatively low
 -- maybe it can be expressed in 1000 lines of lisp. But the set of programs
 expressible in 1000 lines of lisp includes some really crazy,
 counter-intuitive stuff (e.g. the universal dovetailer). Genetic
 programming has been shown to discover relatively short solutions that
 are better than anything a human could come up with, precisely because
 they are counter-intuitive.


I suppose it is possible, and maybe my estimate of its likelihood is too
low. All the same, I would be rather shocked if AGI could be implemented in
1000 lines of code. And no cheating - each line has to be less than 80
chars ;-)  Bonus points if you can do it in ArnoldC
https://github.com/lhartikk/ArnoldC.
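
For a sense of how much fits in a few lines, here is a toy dovetailer in
Python - not Bruno's universal dovetailer, which runs all programs, just
an interleaving of an unbounded family of counters to show that the
scheduling trick itself is tiny:

# Dovetailing: interleave the execution of an unbounded enumeration of
# programs, admitting one new program per round so none ever starves.
def make_counter(i):
    def gen():
        n = 0
        while True:
            yield (i, n)  # "program i, step n"
            n += 1
    return gen()

def programs():
    i = 0
    while True:
        yield make_counter(i)
        i += 1

def dovetail(rounds):
    running, source, trace = [], programs(), []
    for _ in range(rounds):
        running.append(next(source))  # admit one new program this round
        for prog in running:          # give every admitted program a step
            trace.append(next(prog))
    return trace

print(dovetail(4))  # steps of programs 0..3, interleaved diagonally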

T



Re: Artificial Intelligence article

2014-08-28 Thread John Mikes
Thanks for the rehabilitation. As I learned in ~1933: Prius cogitare quam
conari consuesce - roughly, accustom yourself to think before you attempt.
(I like the relationship between conari and le canard.)


On Wed, Aug 27, 2014 at 7:28 PM, LizR lizj...@gmail.com wrote:

 Oops, I should have read your comments rather than stopping to rattle off
 my reply. But I think we agree.


 On 28 August 2014 11:27, LizR lizj...@gmail.com wrote:

 I disagree that

 * Artificial intelligence
 http://en.wikipedia.org/wiki/Artificial_intelligence is the simulation of
 intelligence in machines.*

 That is, I don't think it can be called a simulation (obviously ELIZA
 simulated having a lot more intelligence than it actually had). If a
 machine is intelligent, that's the real thing, surely? The Artificial in
 AI doesn't apply to the intelligence itself, but to the substrate it's
 running on. This seems to me a semantic confusion on the part of the
 article writer.



 On 28 August 2014 07:52, John Mikes jami...@gmail.com wrote:

 Wiki identifies the (non-artificial) base:
 *For other uses, see Intelligence (disambiguation)
 http://en.wikipedia.org/wiki/Intelligence_(disambiguation).*

 *Intelligence has been defined in many different ways such as in terms
 of one's capacity for logic http://en.wikipedia.org/wiki/Logic, abstract
 thought http://en.wikipedia.org/wiki/Abstraction, understanding
 http://en.wikipedia.org/wiki/Understanding, self-awareness
 http://en.wikipedia.org/wiki/Self-awareness, communication
 http://en.wikipedia.org/wiki/Communication, learning
 http://en.wikipedia.org/wiki/Learning,emotional knowledge
 http://en.wikipedia.org/wiki/Emotional_knowledge, memory
 http://en.wikipedia.org/wiki/Memory, planning
 http://en.wikipedia.org/wiki/Plan, creativity
 http://en.wikipedia.org/wiki/Creativity and problem solving
 http://en.wikipedia.org/wiki/Problem_solving.*

 *Intelligence is most widely studied in humans
 http://en.wikipedia.org/wiki/Human, but has also been observed in animals
 and in plants. Artificial intelligence
 http://en.wikipedia.org/wiki/Artificial_intelligence is the simulation of
 intelligence in machines.*

 *Within the discipline of psychology
 http://en.wikipedia.org/wiki/Psychology, various approaches to human
 intelligence have been adopted. The psychometric
 http://en.wikipedia.org/wiki/Psychometric approach is especially familiar
 to the general public, as well as being the most researched and by far the
 most widely used in practical settings.[1]
 http://en.wikipedia.org/wiki/Intelligence#cite_note-APA1995-1*

 IMO all the substitute words mean *themselves*, not intelligence.
 Accordingly the 'artificial' one would refer to simulating *THOSE terms*
 in/by machines - not the *INTELLIGENCE* itself.

 *I like to use* the word-origin meaning: *'inter'* ligence -
 *legibility* between the lines, so to speak: understanding what is not
 verbatim expressed in/by the 'text'. We may arrive at it logically,
 intuitively, anticipatorily, or otherwise as our thinking evolves.

 *Artificial Intelligence* is accordingly an oxymoron. We cannot expect
 a (any?) machine to understand (use?) the verbatim non-expressed
 (infinite potential) of some (any) content and work with it successfully.
 Yet the term is widely used for 'computers' working on the 'meanings and
 conclusions' of the SO FAR deciphered domain of our thinking - translated
 into software for that still-embryonic tool of digital workings we call
 our existing Turing machine. Beyond that, The Deluge.

 I do not share the pessimism of the good professor; our machines are not
 (yet?) up to eliminating human ingenuity in the workplace.

 John Mikes








Re: MGA revisited paper

2014-08-28 Thread LizR
On 28 August 2014 17:26, Russell Standish li...@hpcoders.com.au wrote:

 On Thu, Aug 28, 2014 at 04:18:17PM +1200, LizR wrote:
  On 25 August 2014 14:16, Russell Standish li...@hpcoders.com.au wrote:
 
  
   You have to include all the people who will live in the future, as
   well as all those who have lived in the past.
 
 
  Of course, which is why I added assuming a population crash, as per...

 When are you assuming the population crash?

 If you assume we're about 50% of the way through all the humans who
 ever will have lived, then there will be a total of 2x10^11 people who
 ever will live.


On the timescales you mention below.


 At a population growth rate of 2% (the value in 1970) it only takes
 167 years for another 100 billion people to be born. OK - today's
 growth rate is a little less, it's down to about 1.3% now - if we kept
 on that same exponential growth, then the time till doom extends to
 257 years, but still not long.
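
The arithmetic can be redone in a few lines (my toy model and numbers,
not necessarily Russell's exact assumptions; in a pure exponential model
with no deaths the per-capita birth rate equals the growth rate):

# Years until another `remaining` people are born, if population P0
# grows exponentially at rate r. Cumulative births over T years are
# (b*P0/r)*(e^(r*T) - 1) with per-capita birth rate b; solve for T.
import math

def years_until_doom(P0=7e9, r=0.02, remaining=1e11, b=None):
    b = r if b is None else b  # crude: birth rate tracks growth rate
    return math.log(remaining * r / (b * P0) + 1) / r

print(round(years_until_doom(r=0.02)))   # ~136 years at 2% growth
print(round(years_until_doom(r=0.013)))  # ~210 years at 1.3% growth

Same ballpark as the 167 and 257 years quoted above; the gap comes from
what one assumes about birth rates relative to growth rates.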

 
 
   With exponential growth
   rates (business-as-usual), more people rapidly end up living in the
   future as the time until doom increases.
  
   That looks increasingly unsustainable - in fact, population growth has
  already gone into a bit of a decline as more women are educated and
 realise
  they don't actually *have* to keep popping sprogs. As health, education
 and
  general freedom spread around the world, as I hope they will, we should
 see
  a levelling off and possibly a decline in numbers of people without a
  doomsday scenario. Although unfortunately, other factors indicate there
 may
  well be one anyway.
 

 That's all the doomsday argument predicts, actually: that population
 will start to decline soon. The decline may be gradual, or it may be
 sudden. To call it doomsday may be overstating it, of course, but the
 prediction is still surprising.

Yes, I am hoping for a gradual decline ... what does the DDA have to say
about other sentient species? If, say, the Andromedans were going to
colonise their entire galaxy, we'd almost certainly have been born one of
them. Does it therefore predict that there will be no vastly populous
conscious race in any part of this universe or any other?



Thin Ice

2014-08-28 Thread LizR
We saw the above-mentioned film last night. The film-maker set out to
discover what all the fuss about climate science was about, and whether
all those scientists could be part of some vast hoax - he did this by
travelling to Antarctica and a few other parts of the world (including New
Zealand) and living with some of them for several years. After listening
to physicists, biologists, chemists and of course climate scientists, the
take-home message is: don't buy a house within 10 metres of sea level if
you want to be able to sell it in 50 years' time.

Typical quote - physicists find global warming boring, because it's a
no-brainer. It was cutting edge about 200 years ago, when the science was
worked out, but nowadays why do we keep having to explain that certain
molecules behave in certain ways?

Plus the best description of how the greenhouse effect works I've ever seen
or read, together with graphs of data being recorded live (in NZ again)
showing the effect happening right there in front of you.
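
For what it's worth, the "no-brainer" part of the physics fits in a few
lines: a bare zero-dimensional energy balance (a standard textbook
estimate, numbers rounded) puts Earth about 33 K colder than observed,
and that gap is the greenhouse effect:

# Zero-dimensional energy balance: absorbed solar = emitted infrared,
# so sigma*T^4 = S*(1 - albedo)/4, solved for T.
S = 1361.0       # solar constant, W/m^2
albedo = 0.3     # fraction of sunlight reflected back to space
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

T_bare = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(round(T_bare))  # ~255 K, versus an observed mean of ~288 K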

Highly recommended.



What sort of brain dead idiot...

2014-08-28 Thread LizR
...lets a 9-year-old girl play around with an Uzi?

(Well I guess we know, now.)

The most appalling idiocy. Children should not have any guns within reach,
even if they live on farms - full stop.



Re: AI Dooms Us

2014-08-28 Thread LizR
Lots of games come with AI :-)






Re: AI Dooms Us

2014-08-28 Thread LizR
PS Arnold is hilarious. I recognised quite a few quotes ... but where was
this one?

ENDLESS LOOP - To crush your enemies, see them driven before you, and to
hear the lamentation of their women.








Hal Finney

2014-08-28 Thread Stathis Papaioannou
I just learned that Hal Finney has died. Hal was active on this list in its
early days. For the last few years he has suffered from ALS. He will be
missed.



-- 
Stathis Papaioannou



Re: AI Dooms Us

2014-08-28 Thread Platonist Guitar Cowboy
On Fri, Aug 29, 2014 at 12:14 AM, LizR lizj...@gmail.com wrote:

 PS Arnold is hilarious. I recognised quite a few quotes ... but where
 was this one?

 ENDLESS LOOP - To crush your enemies, see them driven before you, and to
 hear the lamentation of their women.


https://www.youtube.com/watch?v=6PQ6335puOc

That line still makes me laugh every time I bump into it.



Re: Hal Finney

2014-08-28 Thread John Clark
On Thu, Aug 28, 2014 at 6:30 PM, Stathis Papaioannou stath...@gmail.com
wrote:

 I just learned that Hal Finney has died. Hal was active on this list in
 its early days. For the last few years he has suffered from ALS. He will
 be missed.


I too will miss Hal, but he is being cryopreserved, so maybe, just maybe,
he won't be missed forever.

  John K Clark



Re: AI Dooms Us

2014-08-28 Thread Stephen Paul King
Are our fears of AI running amok and killing random persons based on
unfounded assumptions?

On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

 AI is being developed and funded primarily by agencies such as DARPA,
 NSA, DOD (plus MIC contractors). After all, smart drones with independent,
 untended warfighting capabilities offer a significant military advantage
 to the side that possesses them. This is a guarantee that the wrong kind
 of super-intelligence will come out of the process... a super-intelligent
 machine devoted to the killing of enemy human beings (+ opposing drones I
 suppose as well).

 This does not bode well for a benign super-intelligence outcome does it?
   --
 *From:* meekerdb meek...@verizon.net
 *To:*
 *Sent:* Monday, August 25, 2014 12:04 PM
 *Subject:* Re: AI Dooms Us
  
  Bostrom says, "If humanity had been sane and had our act together
 globally, the sensible course of action would be to postpone development
 of superintelligence until we figured out how to do so safely. And then
 maybe wait another generation or two just to make sure that we hadn't
 overlooked some flaw in our reasoning. And then do it -- and reap immense
 benefit. Unfortunately, we do not have the ability to pause."

 But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to 
 produce a pause.

 Brent

 On 8/25/2014 10:27 AM 

 Artificial Intelligence May Doom The Human Race Within A Century, Oxford 
 Professor  

  
 http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science
  




Re: AI Dooms Us

2014-08-28 Thread Stephen Paul King
"If humanity had been sane and had our act together globally, the
sensible course of action would be to postpone development of
superintelligence until we figured out how to do so safely."

  Sanity is not a common property of crowds; we are not considering
wisdom but the actual observed behaviors of humans in large groups. If we
define wise behavior as that which does not generate higher entropy in its
environment, crowds, more often than not, tend not to be wise.

   If an AI were to emerge from the interactions of many computers, would
it be expected to be sane? What is sanity, anyway?

  Another question is: would an AI have a view of the universe that can be
matched up with ours? If not, how would we expect it to see the world
that it interacts with? Our world and that of the AI may be disjoint!






-- 

Kindest Regards,

Stephen Paul King

Senior Researcher

Mobile: (864) 567-3099

stephe...@provensecure.com

 http://www.provensecure.us/


