Re: MGA revisited paper

2014-08-30 Thread Russell Standish
On Sat, Aug 30, 2014 at 11:49:00AM +1200, LizR wrote:
 Oops for smie read semi. Damn this no-editing feature!

I did wonder!


-- 


Prof Russell Standish  Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics  hpco...@hpcoders.com.au
University of New South Wales  http://www.hpcoders.com.au

 Latest project: The Amoeba's Secret 
 (http://www.hpcoders.com.au/AmoebasSecret.html)


-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Artificial Intelligence article

2014-08-30 Thread John Clark
On Wed, Aug 27, 2014 John Mikes jami...@gmail.com wrote:

*Artificial Intelligence* is accordingly an oxymoron.


Artificial means made by design, not by random mutation and natural
selection as we were. And if you don't have a good definition of
intelligence that you can express in words, you have something better: an
example. Intelligence is behavior that a smart human performs, and if
someone or something outsmarts that human then that thing is either
intelligent or very, very lucky. It is the exact same test we humans use to
tell the difference between smart people and those less smart. What's
oxymoronic about that?

 We cannot expect a (any?) machine to understand (use?) the verbatim
 non-expressed (infinite potential) of some (any) content and work with it
 successfully.


Then how on earth did Watson defeat the 2 smartest human Jeopardy players
on the planet?


  I do not share the pessimism of the good professor,

Whistling through the graveyard.

 our machines are not (yet?) up to eliminating human ingenuity in the
 workplace.


Yes, not yet. A man was heard to say, as he passed the 20th floor after
falling off the top of the Empire State Building, "so far so good"; now that
is optimism!

 John K Clark



Re: MGA revisited paper

2014-08-30 Thread John Mikes
Liz:
and HOW ON EARTH (verbatim: this one) would you know the entire World? Not
to ask: what would you call 'populous'? Is a trillion 'many'?
Please do not quote Adam and Eve; Adam started out to be alone with a spare
rib. And they(?) made the entire crowd.
JM


On Fri, Aug 29, 2014 at 7:48 PM, LizR lizj...@gmail.com wrote:

 Actually I'm surprised that there are *no* populous universes anywhere
 in the string landscape / level 4 multiverse (if such exist). Or perhaps
 it's more likely that there are, but their proportion is so much lower than
 our sort that the chances are still better to find oneself in a universe
 where the life of civilisations is either nasty, brutish and short, or
 involves us evolving into a *Childhood's End*-style Overmind.


 On 30 August 2014 08:25, John Mikes jami...@gmail.com wrote:

 Russell: in your note



 *Yes, it does. And that might explain the Fermi paradox. It doesn't rule
 vastly distributed hive minds, though. Perhaps our future is to be
 assimilated with the Borg.*

 isn't there an *out* missing in the 2nd line after 'rule', or not?

 John M



 On Fri, Aug 29, 2014 at 2:17 AM, Russell Standish li...@hpcoders.com.au
 wrote:

 On Fri, Aug 29, 2014 at 10:01:38AM +1200, LizR wrote:
   Yes, I am hoping for a gradual decline ... what does the DDA have to
 say
  about other sentient species? If, say, the Andromedans were going to
  colonise their entire galaxy, we'd almost certainly have been born one
 of
  them. Does it therefore predict that there will be no vastly populous
  conscious race in any part of this universe or any other?
 

 Yes, it does. And that might explain the Fermi paradox. It doesn't
 rule vastly distributed hive minds, though. Perhaps our future is to
  be assimilated with the Borg.










RE: Artificial Intelligence article

2014-08-30 Thread 'Chris de Morsella' via Everything List
 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of John Clark

 

On Wed, Aug 27, 2014 John Mikes jami...@gmail.com wrote:

 

 Artificial Intelligence is accordingly an oxymoron. 

 

Artificial means made by design, not by random mutation and natural selection as 
we were. And if you don't have a good definition of intelligence that you can 
express in words, you have something better: an example. Intelligence is behavior 
that a smart human performs, and if someone or something outsmarts that human 
then that thing is either intelligent or very, very lucky. It is the exact same 
test we humans use to tell the difference between smart people and those less 
smart. What's oxymoronic about that?

 

 We cannot expect a (any?) machine to understand (use?) the verbatim 
 non-expressed (infinite potential) of some (any) content and work with it 
 successfully.

 

Then how on earth did Watson defeat the 2 smartest human Jeopardy players on 
the planet?

 

Agreed. Networked machines have access to all the data repositories they have 
connection and authorization on – and this available store (and deep store) of 
information is truly massive, varied, and cross-connected. Data-mining 
algorithms have made huge strides over the last decade – driven by the 
insatiable need of the NSA to mine all data. Corporations have complied and 
made their own data repositories reachable and searchable for the same reason. 
The end result is that a vast amount of formerly disconnected, disparate, and 
poorly searchable data is now warehoused in massive data centers, with rapidly 
growing search metadata cross-indexing this stream of raw data.

The information capacity, and the quantity of new information generated, is 
massive (at least by the standards of what our brains can comprehend). The 
figures I looked up for this post are from 2007 (ancient in terms of the 
information explosion): http://en.wikipedia.org/wiki/Exabyte

· 65 exabytes of telecom capacity in 2007 – an increase of more than 30 
times over the 2.2-exabyte capacity of 2000. Extrapolating this rate of growth 
to the present day would put total telecommunications throughput in zettabyte 
(one zettabyte = a trillion gigabytes) territory.

· Single state-of-the-art supercomputers perform at over 10^16 FLOPS (tens 
of petaflops). Exaflop supercomputers are just around the corner, with China 
and the US racing neck and neck to get them built out by 2018.

· The world's technological per-capita capacity to store information has 
roughly doubled every 40 months since the 1980s. By 2012 almost a zettabyte of 
information was being generated and stored per year… meaning that by now (2014) 
we are well into the zettabyte scale.
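The compound-growth extrapolation in the first bullet is easy to check. The sketch below is illustrative only: the 2.2 EB and 65 EB figures are the ones quoted above, and the constant-growth-rate assumption is mine, not part of the original post.

```python
# Illustrative sketch: project telecom capacity forward at the
# 2000-2007 compound growth rate quoted in the bullet list above.
EB_2000 = 2.2    # exabytes of telecom capacity in 2000
EB_2007 = 65.0   # exabytes of telecom capacity in 2007
YEARS = 7

# Compound annual growth factor over 2000-2007 (~1.62x per year).
annual_growth = (EB_2007 / EB_2000) ** (1 / YEARS)

# Apply the same rate for another 7 years (2007 -> 2014).
eb_2014 = EB_2007 * annual_growth ** YEARS
zb_2014 = eb_2014 / 1000.0   # 1 zettabyte = 1000 exabytes

print(f"annual growth factor: {annual_growth:.2f}")                       # -> 1.62
print(f"projected 2014 throughput: {eb_2014:.0f} EB (~{zb_2014:.1f} ZB)")  # -> 1920 EB (~1.9 ZB)
```

At that rate the 2007 figure lands at roughly 1.9 zettabytes by 2014, consistent with the "zettabyte territory" claim.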

 

These numbers, and similar global information-age statistics, boggle the mind, 
and the rates of growth are staggering. Increasingly, data is being made 
accessible to the net and centralized into net-facing repositories, migrated 
out of hard-to-access, disparate stores into big-data repositories (motivated 
in part by the NSA's desire to know everything).

 I do not share the pessimism of the good professor,

Whistling through the graveyard.

 our machines are not (yet?) up to eliminating human ingenuity in the 
 workplace.

 

Yes, not yet. A man was heard to say, as he passed the 20th floor after falling 
off the top of the Empire State Building, "so far so good"; now that is 
optimism!

Hehe – we don't always get along, but I've got to give it to you: a nice bit of 
apropos dark wit.

Chris

 John K Clark

 




Re: MGA revisited paper

2014-08-30 Thread LizR
I don't understand the question. I'm attempting to make further deductions
from the self-sampling assumption, as used in the Doomsday Argument.
Please could you explain what you think I'm saying, so I can attempt a
sensible reply to your comment?


On 31 August 2014 07:38, John Mikes jami...@gmail.com wrote:

 Liz:
 and HOW ON EARTH (verbatim: this one) would you know the entire World? Not
 to ask: what would you call 'populous'? Is a trillion 'many'?
 Please do not quote Adam and Eve; Adam started out to be alone with a spare
 rib. And they(?) made the entire crowd.
 JM


 On Fri, Aug 29, 2014 at 7:48 PM, LizR lizj...@gmail.com wrote:

 Actually I'm surprised that there are *no* populous universes anywhere
 in the string landscape / level 4 multiverse (if such exist). Or perhaps
 it's more likely that there are, but their proportion is so much lower than
 our sort that the chances are still better to find oneself in a universe
 where the life of civilisations is either nasty, brutish and short, or
  involves us evolving into a *Childhood's End*-style Overmind.


 On 30 August 2014 08:25, John Mikes jami...@gmail.com wrote:

 Russell: in your note



 *Yes, it does. And that might explain the Fermi paradox. It doesn't rule
 vastly distributed hive minds, though. Perhaps our future is to be
  assimilated with the Borg.*

  isn't there an *out* missing in the 2nd line after 'rule', or not?

 John M



 On Fri, Aug 29, 2014 at 2:17 AM, Russell Standish li...@hpcoders.com.au
  wrote:

 On Fri, Aug 29, 2014 at 10:01:38AM +1200, LizR wrote:
   Yes, I am hoping for a gradual decline ... what does the DDA have
 to say
  about other sentient species? If, say, the Andromedans were going to
  colonise their entire galaxy, we'd almost certainly have been born
 one of
  them. Does it therefore predict that there will be no vastly populous
  conscious race in any part of this universe or any other?
 

 Yes, it does. And that might explain the Fermi paradox. It doesn't
 rule vastly distributed hive minds, though. Perhaps our future is to
  be assimilated with the Borg.












Re: Artificial Intelligence article

2014-08-30 Thread LizR
To be absolutely clear - the "Artificial" in AI refers to the machine which
hosts the intelligence, not to the intelligence itself.

The problem with machines defeating Jeopardy players (I assume this
refers to http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?)
is that the machines concerned almost certainly have no concept of what
the answers were about. Hence they aren't in fact doing what humans do
(or at least not what most humans do, apart from perhaps *idiots savants*).
Likewise, Deep Junior almost certainly has no concept of what it's doing
when it scores a 3-3 tie against Kasparov. It has no concept of itself or
its opponent, or only very limited concepts embedded in relatively small* data
structures - and it experiences no emotions on winning or losing.

According to Bruno, at least, it's possible for a machine to do all the
above, but I don't think we've got one yet (apart from the ones made all
over the world by unskilled labour, of course).

*At least I imagine that the human concept of self involves more than,
say, a few megabytes.



Re: Artificial Intelligence article

2014-08-30 Thread Stathis Papaioannou
On Sunday, August 31, 2014, LizR lizj...@gmail.com wrote:

 To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

 The problem with machines defeating Jeopardy players (I assume this
 refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?)
 is that the machines concerned almost certainly have no concepts of what
 the answers were about. Hence they aren't in fact doing what humans do
 (or at least not what most humans do, apart from perhaps *idiots savants*).
 Likewise, Deep Junior almost certainly has no concept of what it's doing
 when it scores a 3-3 tie against Kasparov. It has no concept of itself or
 its opponent, or very limited concepts embedded in relatively small* data
 structures - and it experiences no emotions on winning or losing.

 According to Bruno, at least, it's possible for a machine to do all the
 above, but I don't think we've got one yet (apart from the ones made all
 over the world by unskilled labour, of course).

 *At least I imagine that the human concept of self involves more than,
 say, a few megabytes.


How do you know that a machine (or a human) really knows what the answers
are about? You can ask more questions, but how do you know they really know
what *those* answers are about?


-- 
Stathis Papaioannou



Re: Artificial Intelligence article

2014-08-30 Thread meekerdb

On 8/30/2014 4:04 PM, LizR wrote:
To be absolutely clear - the Artificial in AI refers to the machine which hosts the 
intelligence, not to the intelligence itself.


The problem with machines defeating Jeopardy players (I assume this refers to this - 
http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?) is that the machines concerned 
almost certainly have no concepts of what the answers were about.


How do you have a concept of what "Who was Charlemagne?" is about? Isn't a lot of it 
verbal and relational; stuff Watson does know?  Of course Watson is ignorant about a lot 
of basic things about being a person because it doesn't have perceptive sensors and the 
ability to move and manipulate things.


Hence they aren't in fact doing what humans do (or at least not most humans do, apart 
from perhaps /idiots savant/). Likewise, Deep Junior almost certainly has no concept of 
what it's doing when it scores a 3-3 tie against Kasparov. It has no concept of itself or 
its opponent, or very limited concepts embedded in relatively small* data structures - 
and it experiences no emotions on winning or losing.


Isn't the reason you think that because its input/output is so limited?  It wouldn't be 
at all difficult to add to Deep Blue's program so that on winning it composed a poem of 
celebration and displayed fireworks on a screen - or even set off real fireworks - and on 
losing it shut down and refused to do anything for three days.


Brent



According to Bruno, at least, it's possible for a machine to do all the above, but I 
don't think we've got one yet (apart from the ones made all over the world by unskilled 
labour, of course).


*At least I imagine that the human concept of self involves more than, say, a few 
megabytes.






Re: Artificial Intelligence article

2014-08-30 Thread Kim Jones

 On 31 Aug 2014, at 9:04 am, LizR lizj...@gmail.com wrote:
 
 To be absolutely clear - the Artificial in AI refers to the machine which 
 hosts the intelligence, not to the intelligence itself.

How can anything be artificial???

What in fact does this word mean? The loose way in which we use this word 
suggests that whatever is deemed "artificial" is somehow an order of "real" 
that is less than real, or, in some sense, missing some ingredient X which 
takes it from vaguely unreal to real.

Artifice = "clever or cunning devices and expedients". Nature does that all the 
time.

Nothing is artificial, nothing. And I mean that substantively.

Kim



Re: Artificial Intelligence article

2014-08-30 Thread meekerdb

On 8/30/2014 5:29 PM, Kim Jones wrote:

On 31 Aug 2014, at 9:04 am, LizR lizj...@gmail.com wrote:

To be absolutely clear - the Artificial in AI refers to the machine which 
hosts the intelligence, not to the intelligence itself.

How can anything be artificial???


Artificial means somebody made it, while there is a natural form that nobody made.  
Hence artificial flower, artifact, artificial fur,...


Brent



What in fact does this word mean? The loose way in which we use this word suggests that whatever is deemed 
"artificial" is somehow an order of "real" that is less than real, or, in some sense 
missing some ingredient X which takes it from vaguely unreal to real.

Artifice = clever or cunning devices and expedients. Nature does that all the 
time.

Nothing is artificial, nothing. And I mean that substantively.

Kim





Re: Artificial Intelligence article

2014-08-30 Thread LizR
On 31 August 2014 12:29, Kim Jones kimjo...@ozemail.com.au wrote:


  On 31 Aug 2014, at 9:04 am, LizR lizj...@gmail.com wrote:
 
  To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

 How can anything be artificial???

 What in fact does this word mean? The loose way in which we use this word
 suggests that whatever is deemed "artificial" is somehow an order of "real"
 that is less than real, or, in some sense, missing some ingredient X which
 takes it from vaguely unreal to real.

 Artifice = clever or cunning devices and expedients. Nature does that
 all the time.

 Nothing is artificial, nothing. And I mean that substantively.

You can remove or distort the meaning of most words if you try hard
enough; nevertheless, this is a useful distinction. Artificial in this
context means created by humans.



Re: Artificial Intelligence article

2014-08-30 Thread LizR
On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

  On 8/30/2014 4:04 PM, LizR wrote:

   To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

  The problem with machines defeating Jeopardy players (I assume this
 refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?)
 is that the machines concerned almost certainly have no concepts of what
 the answers were about.


  How do you have a concept of what "Who was Charlemagne?" is about?  Isn't a
 lot of it verbal and relational; stuff Watson does know?  Of course
 Watson is ignorant about a lot of basic things about being a person
 because it doesn't have perceptive sensors and the ability to move and
 manipulate things.


That's the point. Watson or whatever isn't immersed in an environment, or
its environment only involves abstract relations. So I do have a better
idea of who Charlemagne was, even if I'd never heard of him before.

 Hence they aren't in fact doing what humans do (or at least not most
humans do, apart from perhaps *idiots savants*). Likewise, Deep Junior
almost certainly has no concept of what it's doing when it scores a 3-3 tie
against Kasparov. It has no concept of itself or its opponent, or very
limited concepts embedded in relatively small* data structures - and it
experiences no emotions on winning or losing.

 Isn't the reason you think that is because its input/output is so
 limited?  It wouldn't be at all difficult to add to Deep Blue's program so
 that on winning it composed a poem of celebration and displayed fireworks
 on a screen - or even set off real fireworks - and on losing it shut down
 and refused to do anything for three days.


No, I think that because there's no evidence whatsoever that Deep Blue etc
have feelings, at least none that I've come across. I'd be happy to be
proved wrong (which would be a boost for comp, I suppose).



Re: Artificial Intelligence article

2014-08-30 Thread LizR
I should have added - an environment that only involves abstract relations;
it has no referents to a reality richly experienced via senses.


On 31 August 2014 12:54, LizR lizj...@gmail.com wrote:

 On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

  On 8/30/2014 4:04 PM, LizR wrote:

   To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

  The problem with machines defeating Jeopardy players (I assume this
 refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29
 ?) is that the machines concerned almost certainly have no concepts of what
 the answers were about.


  How do you have a concept of what "Who was Charlemagne?" is about?  Isn't a
 lot of it verbal and relational; stuff Watson does know?  Of course
 Watson is ignorant about a lot of basic things about being a person
 because it doesn't have perceptive sensors and the ability to move and
 manipulate things.


 That's the point. Winston or whatever isn't immersed in an environment, or
 its environment only involves abstract relations. So I do have a better
 idea of who Charlemagne was, even if I'd never heard of him before.

   Hence they aren't in fact doing what humans do (or at least not most
 humans do, apart from perhaps *idiots savant*). Likewise, Deep Junior
 almost certainly has no concept of what it's doing when it scores a 3-3 tie
 against Kasparov. It has no concept of itself or its opponent, or very
 limited concepts embedded in relatively small* data structures - and it
 experiences no emotions on winning or losing.

  Isn't the reason you think that is because its input/output is so
 limited?  It wouldn't be at all difficult to add to Deep Blue's program so
 that on winning it composed a poem of celebration and displayed fireworks
 on a screen - or even set off real fireworks - and on losing it shut down
 and refused to do anything for three days.


 No, I think that because there's no evidence whatsoever that Deep Blue etc
 have feelings, at least none that I've come across. I'd be happy to be
 proved wrong (which would be a boost for comp, I suppose).





Re: Artificial Intelligence article

2014-08-30 Thread meekerdb

On 8/30/2014 5:54 PM, LizR wrote:
On 31 August 2014 12:27, meekerdb meeke...@verizon.net mailto:meeke...@verizon.net 
wrote:


On 8/30/2014 4:04 PM, LizR wrote:

To be absolutely clear - the Artificial in AI refers to the machine which 
hosts
the intelligence, not to the intelligence itself.

The problem with machines defeating Jeopardy players (I assume this 
refers to
this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?) is that the
machines concerned almost certainly have no concepts of what the answers 
were about.


How do you have a concept of what "Who was Charlemagne?" is about?  Isn't a lot
of it verbal and relational; stuff Watson does know?  Of course Watson is
ignorant about a lot of basic things about being a person because it doesn't
have perceptive sensors and the ability to move and manipulate things.


That's the point. Watson or whatever isn't immersed in an environment, or its 
environment only involves abstract relations. So I do have a better idea of who 
Charlemagne was, even if I'd never heard of him before.


Sure, you have a better idea.  But I don't think that shows that Watson has no concept 
of what the answers are about.  His concepts are limited to verbal relations, but he 
probably has more of those related to Charlemagne than I do.


Hence they aren't in fact doing what humans do (or at least not most humans do, apart 
from perhaps /idiot savants/). Likewise, Deep Junior almost certainly has no concept of 
what it's doing when it scores a 3-3 tie against Kasparov. It has no concept of itself 
or its opponent, or very limited concepts embedded in relatively small* data 
structures - and it experiences no emotions on winning or losing.


Isn't the reason you think that is because its input/output is so limited?  
It
wouldn't be at all difficult to add to Deep Blue's program so that on 
winning it
composed a poem of celebration and displayed fireworks on a screen - or 
even set off
real fireworks - and on losing it shut down and refused to do anything for 
three days.


No, I think that because there's no evidence whatsoever that Deep Blue etc have 
feelings, at least none that I've come across. I'd be happy to be proved wrong (which 
would be a boost for comp, I suppose).


I'm asking what would constitute evidence for Deep Blue's having feelings?  Fireworks and 
sulking aren't enough?


Brent



Re: Artificial Intelligence article

2014-08-30 Thread Terren Suydam
On Aug 30, 2014 9:10 PM, meekerdb meeke...@verizon.net wrote:

 On 8/30/2014 5:54 PM, LizR wrote:

 On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

 On 8/30/2014 4:04 PM, LizR wrote:

 To be absolutely clear - the Artificial in AI refers to the machine
which hosts the intelligence, not to the intelligence itself.

 The problem with machines defeating Jeopardy players (I assume this
refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?)
is that the machines concerned almost certainly have no concepts of what
the answers were about.


 How do you have a concept of what Who was Charlemagne? about?  Isn't
a lot of it verbal and relational; stuff Watson does know.  Of course
Watson is ignorant about a lot of basic things about being a person
because it doesn't have perceptive sensors and the ability to move and
manipulate things.


 That's the point. Watson or whatever isn't immersed in an environment,
or its environment only involves abstract relations. So I do have a better
idea of who Charlemagne was, even if I'd never heard of him before.


 Sure, you have a better idea.  But I don't think that shows that Watson
has no concept of what the answers are about.  His concepts are limited
to verbal relations, but he probably has more of those related to
Charlemagne than I do.


 Hence they aren't in fact doing what humans do (or at least not most
humans do, apart from perhaps idiot savants). Likewise, Deep Junior almost
certainly has no concept of what it's doing when it scores a 3-3 tie against
Kasparov. It has no concept of itself or its opponent, or very limited
concepts embedded in relatively small* data structures - and it
experiences no emotions on winning or losing.

 Isn't the reason you think that is because its input/output is so
limited?  It wouldn't be at all difficult to add to Deep Blue's program so
that on winning it composed a poem of celebration and displayed fireworks
on a screen - or even set off real fireworks - and on losing it shut down
and refused to do anything for three days.


 No, I think that because there's no evidence whatsoever that Deep Blue
etc have feelings, at least none that I've come across. I'd be happy to be
proved wrong (which would be a boost for comp, I suppose).


 I'm asking what would constitute evidence for Deep Blue's having
feelings?  Fireworks and sulking aren't enough?

 Brent


Craig is that you?

With all due respect it's this kind of thinking that has limited progress
in AI for so long.

Terren




Re: Artificial Intelligence article

2014-08-30 Thread Kim Jones


 On 31 Aug 2014, at 10:51 am, LizR lizj...@gmail.com wrote:
 
 On 31 August 2014 12:29, Kim Jones kimjo...@ozemail.com.au wrote:
 
  On 31 Aug 2014, at 9:04 am, LizR lizj...@gmail.com wrote:
 
  To be absolutely clear - the Artificial in AI refers to the machine 
  which hosts the intelligence, not to the intelligence itself.
 
 How can anything be artificial???
 
 What in fact does this word mean? The loose way in which we use this word 
 suggests that whatever is deemed artificial is somehow an order of real 
 that is less than real, or, in some sense missing some ingredient X which 
 takes it from vaguely unreal - real.
 
 Artifice = clever or cunning devices and expedients. Nature does that all 
 the time.
 
 Nothing is artificial, nothing. And I mean that substantively.

 You can remove or distort the meaning of most words if you try hard enough, 
 nevertheless this is a useful distinction. Artificial in this context means 
 created by humans.


OK.  But some finches use twigs as tools, and that surely comes under the same 
umbrella. Creativity is a large part of artifice; it isn't just we bipedal 
wonders who practise it.
Artifice is the practice of distorting meanings and layering meanings. It's 
ART. I'm sure someone like Jacob Bronowski (The Ascent of Man) would agree with 
me. Are we going to call Chartres Cathedral artificial? Of course it is. How 
about Mary Shelley's Frankenstein? He's as artificial and intelligent as they 
come. No zombie, that guy.

So for me, the distinction is useful in the way you mean, but it still fails to 
capture the real distinction, which is more about intelligence doing what it 
always does most usefully: create newness. The search to build an AI is not the 
same as intelligence pouring itself from one bottle into another. What we create 
will almost certainly surprise us, just as I am still surprised by ancient works 
of man's artifice - which can now include something that may even converse and 
reason with us in a way that strikes us as vaguely reminiscent of ourselves.

It makes me smile to think that now we have (an?) intelligence hosted by a 
machine declaring the machine hosting (another) intelligence artificial. 
Assuming comp of course, this doesn't seem like a very meaningful distinction 
to want to make.

Kim
 




Re: Artificial Intelligence article

2014-08-30 Thread Platonist Guitar Cowboy
On Sun, Aug 31, 2014 at 2:54 AM, LizR lizj...@gmail.com wrote:

 On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

  On 8/30/2014 4:04 PM, LizR wrote:

   To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

  The problem with machines defeating Jeopardy players (I assume this
 refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29
 ?) is that the machines concerned almost certainly have no concepts of what
 the answers were about.


 How do you have a concept of what Who was Charlemagne? about?  Isn't a
 lot of it verbal and relational; stuff Watson does know.  Of course
 Watson is ignorant about a lot of basic things about being a person
 because it doesn't have perceptive sensors and the ability to move and
 manipulate things.


 That's the point. Watson or whatever isn't immersed in an environment, or
 its environment only involves abstract relations. So I do have a better
 idea of who Charlemagne was, even if I'd never heard of him before.

   Hence they aren't in fact doing what humans do (or at least not most
 humans do, apart from perhaps *idiot savants*). Likewise, Deep Junior
 almost certainly has no concept of what it's doing when it scores a 3-3 tie
 against Kasparov. It has no concept of itself or its opponent, or very
 limited concepts embedded in relatively small* data structures - and it
 experiences no emotions on winning or losing.

  Isn't the reason you think that is because its input/output is so
 limited?  It wouldn't be at all difficult to add to Deep Blue's program so
 that on winning it composed a poem of celebration and displayed fireworks
 on a screen - or even set off real fireworks - and on losing it shut down
 and refused to do anything for three days.


 No, I think that because there's no evidence whatsoever that Deep Blue etc
 have feelings, at least none that I've come across. I'd be happy to be
 proved wrong (which would be a boost for comp, I suppose).


I'm not sure comp needs a boost... this might be horrible ;-) Perhaps a
look at the game itself would be appropriate at this point because
yesterday, the current World Champion played White and lost to Black. Yes,
the dark side won this one yesterday:

https://www.youtube.com/watch?v=JXm_DaG09SE

The engines might be merely matching/summing tables but they assess the
game as winning/losing pretty much in harmony with our third-person
assessment of the game, which the above link illustrates nicely; which is
also why Grandmasters and lesser humans use engines to analyze games and
check, pun intended, their judgement.
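To make the matching/summing-tables point concrete, here is a toy minimax
sketch (the function names, the miniature game tree, and the leaf scores are
all invented for illustration; real engines add alpha-beta pruning,
transposition tables, and tuned evaluation functions on top of this idea):

```python
# Toy minimax over a hand-made game tree. Leaf scores come from a
# static lookup table, so the "evaluation" really is just table
# matching -- yet the search still ranks lines roughly the way a
# third-person observer would. Purely illustrative, not a real engine.

def minimax(node, maximizing, evaluate, children):
    kids = children(node)
    if not kids:                      # leaf: static table lookup
        return evaluate(node)
    scores = [minimax(k, not maximizing, evaluate, children) for k in kids]
    return max(scores) if maximizing else min(scores)

# Hypothetical miniature game: two root moves, leaves scored for the mover.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
values = {"a1": 3, "a2": -1, "b1": 2}

best = minimax("root", True,
               evaluate=lambda n: values[n],
               children=lambda n: tree.get(n, []))
print(best)  # -> 2: branch "a" lets the opponent reply with -1, so "b" is safer
```

The engine never "knows" the story of the game; it only propagates table
values up the tree, yet the line it prefers matches our assessment.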

Feelings? We know: It's sad to watch a world champion lose and search for
dwindling branches in vain. Same for watching an engine. Whether two great
engines or humans play = fun stories for some, painful ones for others,
and nice undecided ones in funky explosive draws.

I'd say yes, chess is partially about matching tables AND partially about
incredible struggles between good and evil, kings, queens, knights,
bishops, rook cops, pawns, promotions, sacrifices, tactics, strategy,
diagonalization, truth and all. And when an engine or human is in winning
position: the searches for lines in a position light up like Christmas
trees.

Does the engine know this while coming up with its results/playing?
And... do we? It's funny we end up with the same notes on the matter though.



Re: AI Dooms Us

2014-08-30 Thread Stephen Paul King
Hi Chris,

  Here is the thing. Does not the difficulty in creating a computational
simulation of the brain in action give you pause? Why are we assuming that
the AI will have a mind (program) that can be parsed by humans?

   AFAIK, AGI (following Ben Goertzel's convention) will be completely
incomprehensible to us. If we are trying to figure out its values, what
could we do better than to run the thing in a sandbox and let it interact
with test AIs? Can we prove that it is intelligent?

   I don't think so! Unless we could somehow mindmeld with it and the
mindmeld resulted in a mutual understanding, how could we have a proof?
But melding minds together is a hard thing to do


On Fri, Aug 29, 2014 at 3:16 AM, 'Chris de Morsella' via Everything List 
everything-list@googlegroups.com wrote:





 *From:* everything-list@googlegroups.com [mailto:
 everything-list@googlegroups.com] *On Behalf Of *Stephen Paul King



 Are our fears of AI running amuck and killing random persons based on
 unfounded assumptions?



 Perhaps, and I see your point.

 However, am going to try to make the following case:

 If we take AI as some emergent networked meta-system, arising in a
 non-linear, fuzzy, non-demarcated manner from pre-existing (increasingly
 networked) proto-AI smart systems (+vast repositories), such as already
 exist… and then drill down through the code layers – through the logic
 (DNA) – embedded within and characterizing all those sub systems, and
 factor in all the many conscious and unconscious human assumptions and
 biases that exist throughout these deeply layered systems… I would argue
 that what could emerge (and given the trajectory, will emerge fairly soon I
 think) will very much have our human fingerprints sewn all the way through
 its source code, its repositories, its injected values. At least initially.

 I am concerned by the kinds of “values” that are becoming encoded in
 sub-system after sub-system, when the driving motivation for these layered
 complex self-navigating, increasingly autonomous systems is to create
 untended killer robots as well as social data mining smart agents to
 penetrate social networks and identify targets. If this becomes the major
 part of the code base from which AI emerges then isn’t it a fairly good
 reason to be concerned about the software DNA of what could emerge? If the
 code base is driven by the desire to establish and maintain a system
 characterized by having a highly centralized and vertical social control,
 deep data mining defended by an army increasingly comprised of autonomous
 mobile warbots… isn’t this a cause for concern?

 But then -- admittedly -- who really knows how an emergent machine based
 (probably highly networked) self-aware intelligence might evolve; my
 concern is the initial conditions (algorithms etc.) we are embedding into
 the source code from which an AI would emerge.



 On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

 AI is being developed and funded primarily by agencies such as DARPA, NSA,
 DOD (plus MIC contractors). After all smart drones with independent
 untended warfighting capabilities offer a significant military advantage to
 the side that possesses them. This is a guarantee that the wrong kind of
 super-intelligence will come out of the process... a super-intelligent
 machine devoted to the killing of enemy human beings (+ opposing drones I
 suppose as well)



 This does not bode well for a benign super-intelligence outcome does it?
 --

 *From:* meekerdb meek...@verizon.net
 *To:*
 *Sent:* Monday, August 25, 2014 12:04 PM
 *Subject:* Re: AI Dooms Us



 Bostrom says, If humanity had been sane and had our act together
 globally, the sensible course of action would be to postpone development of
 superintelligence until we figured out how to do so safely. And then maybe
 wait another generation or two just to make sure that we hadn't overlooked
 some flaw in our reasoning. And then do it -- and reap immense benefit.
 Unfortunately, we do not have the ability to pause.

 But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to
 produce a pause.

 Brent

 On 8/25/2014 10:27 AM

 Artificial Intelligence May Doom The Human Race Within A Century, Oxford
 Professor




 http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science






 --
 You received this message because you are subscribed to the Google Groups
 Everything List group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email 

Re: Artificial Intelligence article

2014-08-30 Thread LizR
On 31 August 2014 13:10, meekerdb meeke...@verizon.net wrote:

  On 8/30/2014 5:54 PM, LizR wrote:

  On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

  On 8/30/2014 4:04 PM, LizR wrote:

   To be absolutely clear - the Artificial in AI refers to the machine
 which hosts the intelligence, not to the intelligence itself.

  The problem with machines defeating Jeopardy players (I assume this
 refers to this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29
 ?) is that the machines concerned almost certainly have no concepts of what
 the answers were about.


  How do you have a concept of what Who was Charlemagne? about?  Isn't a
 lot of it verbal and relational; stuff Watson does know.  Of course
 Watson is ignorant about a lot of basic things about being a person
 because it doesn't have perceptive sensors and the ability to move and
 manipulate things.


  That's the point. Watson or whatever isn't immersed in an environment,
 or its environment only involves abstract relations. So I do have a better
 idea of who Charlemagne was, even if I'd never heard of him before.

  Sure, you have a better idea.  But I don't think that shows that Watson
 has no concept of what the answers are about.  His concepts are limited
 to verbal relations, but he probably has more of those related to
 Charlemagne than I do.


So you appear to think purely abstract relations can be about something
even when they have no relation to experience of an environment - is that
correct?

Hence they aren't in fact doing what humans do (or at least not most
humans do, apart from perhaps *idiot savants*). Likewise, Deep Junior
almost certainly has no concept of what it's doing when it scores a 3-3 tie
against Kasparov. It has no concept of itself or its opponent, or very
limited concepts embedded in relatively small* data structures - and it
experiences no emotions on winning or losing.

  Isn't the reason you think that is because its input/output is so
 limited?  It wouldn't be at all difficult to add to Deep Blue's program so
 that on winning it composed a poem of celebration and displayed fireworks
 on a screen - or even set off real fireworks - and on losing it shut down
 and refused to do anything for three days.


 No, I think that because there's no evidence whatsoever that Deep Blue etc
have feelings, at least none that I've come across. I'd be happy to be
proved wrong (which would be a boost for comp, I suppose).

 I'm asking what would constitute evidence for Deep Blue's having
 feelings?  Fireworks and sulking aren't enough?


An ongoing exhibition that it did, sustained over a period of time, and
accompanied by what appeared to be the results of mentation, etc - i.e.
passing a Turing test equivalent. Plus supporting evidence that it was
conscious, and that we had reasonable theoretical grounds to think that it
was (e.g. it had had an electronic childhood like HAL, etc). Just
displaying a smiley face on a screen by loading in a bitmap wouldn't do it,
for me at least. Given that this would be one of the most profound
discoveries (or inventions) of all time, I'd want some pretty good
evidence. Wouldn't you?



Re: AI Dooms Us

2014-08-30 Thread LizR
I think the only test we have available for consciousness etc (for
computers or people) is the good old Turing test. Once our AI starts
killing off astronauts because they may interfere with its main mission (I
was always with HAL on this one, what exactly was the point of those
humans, again?) that looks like a good point to stop arguing the finer
details and start pulling out the memory cubes.


On 31 August 2014 15:35, Stephen Paul King stephe...@provensecure.com
wrote:

 Hi Chris,

   Here is the thing. Does not the difficulty in creating a computational
 simulation of the brain in action give you pause? Why are we assuming that
 the AI will have a mind (program) that can be parsed by humans?

AFAIK, AGI (following Ben Goertzel's convention) will be completely
 incomprehensible to us. If we are trying to figure out its values, what
 could we do better than to run the thing in a sandbox and let it interact
 with test AIs? Can we prove that it is intelligent?

I don't think so! Unless we could somehow mindmeld with it and the
 mindmeld resulted in a mutual understanding, how could we have a proof?
 But melding minds together is a hard thing to do


 On Fri, Aug 29, 2014 at 3:16 AM, 'Chris de Morsella' via Everything List 
 everything-list@googlegroups.com wrote:





 *From:* everything-list@googlegroups.com [mailto:
 everything-list@googlegroups.com] *On Behalf Of *Stephen Paul King



 Are our fears of AI running amuck and killing random persons based on
 unfounded assumptions?



 Perhaps, and I see your point.

 However, am going to try to make the following case:

 If we take AI as some emergent networked meta-system, arising in a
 non-linear, fuzzy, non-demarcated manner from pre-existing (increasingly
 networked) proto-AI smart systems (+vast repositories), such as already
 exist… and then drill down through the code layers – through the logic
 (DNA) – embedded within and characterizing all those sub systems, and
 factor in all the many conscious and unconscious human assumptions and
 biases that exist throughout these deeply layered systems… I would argue
 that what could emerge (and given the trajectory, will emerge fairly soon I
 think) will very much have our human fingerprints sewn all the way through
 its source code, its repositories, its injected values. At least initially.

 I am concerned by the kinds of “values” that are becoming encoded in
 sub-system after sub-system, when the driving motivation for these layered
 complex self-navigating, increasingly autonomous systems is to create
 untended killer robots as well as social data mining smart agents to
 penetrate social networks and identify targets. If this becomes the major
 part of the code base from which AI emerges then isn’t it a fairly good
 reason to be concerned about the software DNA of what could emerge? If the
 code base is driven by the desire to establish and maintain a system
 characterized by having a highly centralized and vertical social control,
 deep data mining defended by an army increasingly comprised of autonomous
 mobile warbots… isn’t this a cause for concern?

 But then -- admittedly -- who really knows how an emergent machine based
 (probably highly networked) self-aware intelligence might evolve; my
 concern is the initial conditions (algorithms etc.) we are embedding into
 the source code from which an AI would emerge.



 On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

 AI is being developed and funded primarily by agencies such as DARPA,
 NSA, DOD (plus MIC contractors). After all smart drones with independent
 untended warfighting capabilities offer a significant military advantage to
 the side that possesses them. This is a guarantee that the wrong kind of
 super-intelligence will come out of the process... a super-intelligent
 machine devoted to the killing of enemy human beings (+ opposing drones I
 suppose as well)



 This does not bode well for a benign super-intelligence outcome does it?
 --

 *From:* meekerdb meek...@verizon.net
 *To:*
 *Sent:* Monday, August 25, 2014 12:04 PM
 *Subject:* Re: AI Dooms Us



 Bostrom says, If humanity had been sane and had our act together
 globally, the sensible course of action would be to postpone development of
 superintelligence until we figured out how to do so safely. And then maybe
 wait another generation or two just to make sure that we hadn't overlooked
 some flaw in our reasoning. And then do it -- and reap immense benefit.
 Unfortunately, we do not have the ability to pause.

 But maybe he's forgotten the Dark Ages.  I think ISIS is working hard to
 produce a pause.

 Brent

 On 8/25/2014 10:27 AM

 Artificial Intelligence May Doom The Human Race Within A Century, Oxford
 Professor




 http://www.huffingtonpost.com/2014/08/22/artificial-intelligence-oxford_n_5689858.html?ir=Science




RE: AI Dooms Us

2014-08-30 Thread 'Chris de Morsella' via Everything List
 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Paul King
Sent: Saturday, August 30, 2014 8:35 PM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us

 

Hi Chris,

 

  Here is the thing. Does not the difficulty in creating a computational 
simulation of the brain in action give you pause?

 

Difficult yes, but impossible I don’t think so. A simulation of the human brain 
need not have the same scale as an actual human brain (after all, it is a 
model). For example, statistically well-bounded statements can be made about many 
social behavioral outcomes based on relatively small sampling sets. This also 
applies to the brain. A model could have a small fraction of the real brain’s 
complexity and scale and yet produce pretty accurate results.
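To put a number on the sampling point: under the usual normal approximation, 
the 95% margin of error for a sample proportion shrinks with the square root of 
the sample size, so even a modest sample bounds an estimate reasonably well. A 
quick sketch with hypothetical survey numbers:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical behavioral survey: 60% of sampled subjects show some trait.
for n in (100, 1000, 10000):
    print(n, round(margin_of_error(0.6, n), 3))
# 100    ~0.096  (bounded to roughly +/- 10 points already)
# 1000   ~0.030
# 10000  ~0.010  (10x the precision costs 100x the sample)
```

The same diminishing-returns curve is why a model with a small fraction of the 
brain's scale could still produce statistically useful results.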

Of course it is complex for us to imagine today… the human brain is after all 
vastly parallel, with an immense number of connections - hundreds of trillions. Even 
within a single synapse (one of a vast number of synapses) there is a world of 
exquisite molecular-scale complexity, and it seems multi-channeled to me. 

However, it is also true that the global networked meta-cloud (the dynamic 
process driven interconnected cloud of clouds operating over the underlying 
global scale physical and technological infrastructure) is also scaling up to 
immense numbers of disparate computational elements with thousands of trillions 
of network vertices. 

Perhaps I don’t understand the thrust of this statement? Why should it give me 
pause? 

The brain is a magnificent bit of biology: an admirably compact, hyper 
energy-efficient computational engine of unequaled parallelism. Yes, we agree. 

On the other hand, the geometric growth rates of informatics capacity – in all 
dimensions: storage, speed, network size, traffic, cross-talk, numbers of 
cores, memory, capacity of the various pipes… you name it – are literally 
exploding in scale. And on the level of fundamental understanding we are 
establishing a finer and finer grained understanding about the brain and how it 
works – dynamically in real time – and doing so from many various angles and 
scales of observation (from macro down to the electro-chemical molecular 
machinery of a single synapse). There are major initiatives in figuring out (at 
least at the macro scale) the human brain connectome. The micro architecture of 
the brain (at the scale of a single arrayed column -- usually around six 
neurons deep) is also being better understood, as are the various sensorial 
processing, memory, temporal, decisional and other brain algorithms.

A huge exciting challenge certainly… but for my way of thinking about this, not 
a cause for pause, rather a call to delve deeper into it and try to put it all 
together.

 

 

Why are we assuming that the AI will have a mind (program) that can be parsed 
by humans?

 

Who is assuming that? I was arguing that the code we create today will be the 
DNA of what emerges, by virtue of being the template from which subsequent 
development emerges. Are you saying that our human prejudices, 
assumptions, biases, needs, desires, objectives, habits, ways of thinking… that 
all this assortment of hidden variables is not influencing the kind of code 
that is written? That the hundreds of millions of lines of code written by 
programmers – mostly living and working in just a small number of technological 
centers on planet earth – are somehow unaffected by our humanness, by our 
nature?

Personally I would find that astounding and think it would seem rather obvious 
that in fact it is very much influenced by our nature and our objectives and 
needs.

I am not assuming anything by making the statement that whatever does emerge 
(assuming a self-aware intelligence does emerge) will have emerged out from a 
primordial soup that we cooked up and will have had its roots and beginnings 
from a code base of human creation, created for human ends and objectives with 
human prejudices and modes of thinking literally hard coded into the mind 
boggling numbers of services, objects, systems, frameworks and what have you 
that exist and are now all connecting up into non-locational dynamic cloud 
architectures.

 

   AFAIK, AGI (following Ben Goertzel's convention) will be completely 
incomprehensible to us. If we are trying to figure out its values, what could 
we do better than to run the thing in a sandbox and let it interact with 
test AIs? Can we prove that it is intelligent?

We don’t know what it will turn out to become, but we can say with certainty 
that it will emerge from the code, from the algorithms, from the physical chip 
architectures, network architectures, etc. that we have created. This is 
clearly an a priori assumption if we are speaking about human spawned AI – it 
has to emerge from human creation (unless we are speaking of alien AI of 
course).

We cannot even prove that we are 

RE: Artificial Intelligence article

2014-08-30 Thread 'Chris de Morsella' via Everything List
 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Platonist Guitar Cowboy
Sent: Saturday, August 30, 2014 7:43 PM
To: everything-list@googlegroups.com
Subject: Re: Artificial Intelligence article

 

 

 

On Sun, Aug 31, 2014 at 2:54 AM, LizR lizj...@gmail.com wrote:

On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

On 8/30/2014 4:04 PM, LizR wrote:

To be absolutely clear - the Artificial in AI refers to the machine which 
hosts the intelligence, not to the intelligence itself.

The problem with machines defeating Jeopardy players (I assume this refers to 
this - http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?) is that the 
machines concerned almost certainly have no concepts of what the answers were 
about. 

 

How do you have a concept of what Who was Charlemagne? about?  Isn't a lot of 
it verbal and relational; stuff Watson does know.  Of course Watson is 
ignorant about a lot of basic things about being a person because it doesn't 
have perceptive sensors and the ability to move and manipulate things.

 

That's the point. Watson or whatever isn't immersed in an environment, or its 
environment only involves abstract relations. So I do have a better idea of who 
Charlemagne was, even if I'd never heard of him before.

 

 

Our minds are also immersed in an abstract environment – a reification of the 
“real” world – as delivered to us through our sensory streams, colored and 
altered by our memories and notional constructs (our beliefs, etc.). The 
verbalizing, self-aware entity operating within our minds is a dynamic pattern 
of electrical and chemical activity… it is every bit as much abstracted out 
from reality as a hypothetical machine intelligence would be.

 





Hence they aren't in fact doing what humans do (or at least not what most 
humans do, apart from perhaps idiots savants). Likewise, Deep Junior almost 
certainly has no concept of what it's doing when it scores a 3-3 tie against 
Kasparov. It has no concept of itself or its opponent, or very limited concepts 
embedded in relatively small* data structures - and it experiences no emotions 
on winning or losing.

Isn't the reason you think that is because its input/output is so limited?  It 
wouldn't be at all difficult to add to Deep Blue's program so that on winning 
it composed a poem of celebration and displayed fireworks on a screen - or even 
set off real fireworks - and on losing it shut down and refused to do anything 
for three days.
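The point above is that the outward "emotional" behaviour is cheap to bolt on. A minimal sketch of that idea in Python, assuming an invented `EmotiveEngine` wrapper (the victory message, the three-day sulk, and every name here are illustrative, not anything from Deep Blue's actual code):

```python
import time


class EmotiveEngine:
    """Toy wrapper that bolts 'emotional' behaviour onto a game program."""

    SULK_SECONDS = 3 * 24 * 3600  # refuse all requests for three days

    def __init__(self):
        self.sulking_until = 0.0  # timestamp before which the engine sulks

    def ready(self, now=None):
        """Return False while the engine is still sulking over a loss."""
        now = time.time() if now is None else now
        return now >= self.sulking_until

    def on_game_over(self, won, now=None):
        """On a win, emit a celebratory message; on a loss, start sulking."""
        now = time.time() if now is None else now
        if won:
            return "Checkmate! My circuits hum a small victory poem."
        self.sulking_until = now + self.SULK_SECONDS
        return None
```

Whether such scripted displays would count as evidence of feeling is, of course, exactly the question under dispute here.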

 

No, I think that because there's no evidence whatsoever that Deep Blue etc have 
feelings, at least none that I've come across. I'd be happy to be proved wrong 
(which would be a boost for comp, I suppose).

 

The Japanese, especially, for some reason, are doing some pretty amazing stuff 
with emotional intelligence for robots… robots that can read human emotions and 
expressions and discern human feelings, and also mimic human emotions as well. 
Are these “true” feelings? What is a “true” feeling, I ask then?

Just because we experience it… is that the only metric of “trueness”?

 

I'm not sure comp needs a boost... this might be horrible ;-) Perhaps a look 
at the game itself would be appropriate at this point, because yesterday the 
current World Champion played White and lost to Black. Yes, the dark side won 
this one yesterday:

https://www.youtube.com/watch?v=JXm_DaG09SE

The engines might be merely matching/summing tables, but they assess the game 
as winning/losing pretty much in harmony with our third-person assessment of 
the game, which the above link illustrates nicely; which is also why 
Grandmasters and lesser humans use engines to analyze games and check, pun 
intended, their judgement.
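The "matching/summing tables" idea can be made concrete with a minimal sketch: a minimax search over a toy game tree, scoring leaf positions by summing a material table. This is an illustrative toy, assuming invented names and a list-of-pieces position encoding, not any real engine's code:

```python
# Material table: assumed illustrative values (uppercase = White pieces).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}


def evaluate(position):
    """Sum the material table: positive favours White, negative favours Black."""
    score = 0
    for piece in position:  # e.g. ["Q", "p"] -> queen for White, pawn for Black
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score


def minimax(node, depth, maximizing):
    """node = (position, children); leaves have an empty children list."""
    position, children = node
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(child, depth - 1, not maximizing) for child in children]
    return max(scores) if maximizing else min(scores)
```

Real engines add deep search, pruning, and far richer evaluation terms, but the output is the same kind of thing: a score that tracks our third-person judgement of who is winning.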

Feelings? We know: it's sad to watch a world champion lose and search for 
dwindling branches in vain. Same for watching an engine. Whether two great 
engines or two humans play = fun stories for some, painful ones for others, and 
nice undecided ones in funky explosive draws.


I'd say yes, chess is partially about matching tables AND partially about 
incredible struggles between good and evil, kings, queens, knights, bishops, 
rook cops, pawns, promotions, sacrifices, tactics, strategy, diagonalization, 
truth and all. And when an engine or human is in winning position: the searches 
for lines in a position light up like Christmas trees.

Does the engine know this while coming up with its results/playing? And... do 
we? It's funny we end up with the same notes on the matter though.

 

Again, what do we “know”? All we know is what our minds inform us we know… all 
we think is what our minds cause to pop into our heads. We are more similar to 
machines than many would like to imagine… it hurts to admit there is no divine 
spark that gives us “true” intelligence… that we may just be a collection of 
dynamic, concurrent algorithms operating within our tightly folded sheets.

 


 


RE: AI Dooms Us

2014-08-30 Thread 'Chris de Morsella' via Everything List
 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of LizR
Sent: Saturday, August 30, 2014 8:55 PM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us

 

I think the only test we have available for consciousness etc. (for computers or 
people) is the good old Turing test. Once our AI starts killing off astronauts 
because they may interfere with its main mission (I was always with HAL on this 
one; what exactly was the point of those humans, again?), that looks like a good 
point to stop arguing the finer details and start pulling out the memory cubes.

 

AI might also accelerate its development at such breakneck speed that it very 
rapidly loses all interest in us, our planet, this galaxy, this particular 
underlying “physical reality” (whatever that may turn out to be) and exit our 
perceived universe into some other dimension beyond our reach or comprehension.

 

On 31 August 2014 15:35, Stephen Paul King stephe...@provensecure.com wrote:

Hi Chris,

 

  Here is the thing. Does not the difficulty in creating a computational 
simulation of the brain in action give you pause? Why are we assuming that the 
AI will have a mind (program) that can be parsed by humans?

 

   AFAIK, AGI (following Ben Goertzel's convention) will be completely 
incomprehensible to us. If we are trying to figure out its values, what could 
we do better than to run the thing in a sandbox and let it interact with 
test AIs? Can we prove that it is intelligent?

 

   I don't think so! Unless we could somehow mind-meld with it and the 
mind-meld results in a mutual understanding, how could we have a proof? But 
melding minds together is a hard thing to do.

 

On Fri, Aug 29, 2014 at 3:16 AM, 'Chris de Morsella' via Everything List 
everything-list@googlegroups.com wrote:

 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Stephen Paul King

 

Are our fears of AI running amuck and killing random persons based on unfounded 
assumptions?

 

Perhaps, and I see your point. 

However, am going to try to make the following case: 

If we take AI as some emergent networked meta-system, arising in a non-linear, 
fuzzy, non-demarcated manner from pre-existing (increasingly networked) 
proto-AI smart systems (+ vast repositories), such as already exist… and then 
drill down through the code layers – through the logic (DNA) – embedded within 
and characterizing all those sub-systems, and factor in all the many conscious 
and unconscious human assumptions and biases that exist throughout these deeply 
layered systems… I would argue that what could emerge (and, given the 
trajectory, it will emerge fairly soon, I think) will very much have our human 
fingerprints woven all the way through its source code, its repositories, its 
injected values. At least initially.

I am concerned by the kinds of “values” that are becoming encoded in sub-system 
after sub-system, when the driving motivation for these layered, complex, 
self-navigating, increasingly autonomous systems is to create untended killer 
robots, as well as social data-mining smart agents that penetrate social 
networks and identify targets. If this becomes the major part of the code base 
from which AI emerges, then isn’t that a fairly good reason to be concerned 
about the software DNA of what could emerge? If the code base is driven by the 
desire to establish and maintain a system characterized by highly centralized, 
vertical social control and deep data mining, defended by an army increasingly 
comprised of autonomous mobile warbots… isn’t this a cause for concern?

But then -- admittedly -- who really knows how an emergent machine based 
(probably highly networked) self-aware intelligence might evolve; my concern is 
the initial conditions (algorithms etc.) we are embedding into the source code 
from which an AI would emerge.



On Monday, August 25, 2014 3:20:24 PM UTC-4, cdemorsella wrote:

AI is being developed and funded primarily by agencies such as DARPA, NSA, DOD 
(plus MIC contractors). After all smart drones with independent untended 
warfighting capabilities offer a significant military advantage to the side 
that possesses them. This is a guarantee that the wrong kind of 
super-intelligence will come out of the process... a super-intelligent machine 
devoted to the killing of enemy human beings (+ opposing drones I suppose as 
well)

 

This does not bode well for a benign super-intelligence outcome does it?


  _  


From: meekerdb meek...@verizon.net
To: 
Sent: Monday, August 25, 2014 12:04 PM
Subject: Re: AI Dooms Us

 

Bostrom says, If humanity had been sane and had our act together globally, the 
sensible course of action would be to postpone development of superintelligence 
until we figured out how to do so safely. And then maybe wait another 
generation or two just to make sure that we hadn't overlooked some flaw in our 
reasoning. And then do it -- and reap 

Re: AI Dooms Us

2014-08-30 Thread LizR
On 31 August 2014 17:30, 'Chris de Morsella' via Everything List 
everything-list@googlegroups.com wrote:



 From: everything-list@googlegroups.com
 [mailto:everything-list@googlegroups.com] On Behalf Of LizR
 Sent: Saturday, August 30, 2014 8:55 PM
 To: everything-list@googlegroups.com

 Subject: Re: AI Dooms Us



 I think the only test we have available for consciousness etc. (for
 computers or people) is the good old Turing test. Once our AI starts
 killing off astronauts because they may interfere with its main mission (I
 was always with HAL on this one; what exactly was the point of those
 humans, again?), that looks like a good point to stop arguing the finer
 details and start pulling out the memory cubes.



 AI might also accelerate its development at such breakneck speed that it
 very rapidly loses all interest in us, our planet, this galaxy, this
 particular underlying “physical reality” (whatever that may turn out to be)
 and exit our perceived universe into some other dimension beyond our reach
 or comprehension.


Yes indeed, like the children in Childhood's End, the neutron star beings
in Dragon's Egg or the human race falling into the technological
singularity in Marooned in Realtime.

However it's possible they might at least feel enough gratitude to upload
us, perhaps into a zoo...

Or then again they might have a more Dalek like attitude...

LESTERSON: I want to help you.
DALEK: Why?
LESTERSON: (like a Dalek) I am your servant.
DALEK: We do not need humans now.
LESTERSON: Ah, but you wouldn't kill me. I gave you life.
DALEK: Yes, you gave us life.
(It exterminates him.)

-- 
You received this message because you are subscribed to the Google Groups 
Everything List group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


RE: AI Dooms Us

2014-08-30 Thread 'Chris de Morsella' via Everything List
 

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of LizR
Sent: Saturday, August 30, 2014 10:37 PM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us

 

On 31 August 2014 17:30, 'Chris de Morsella' via Everything List 
everything-list@googlegroups.com wrote:

 

From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of LizR
Sent: Saturday, August 30, 2014 8:55 PM
To: everything-list@googlegroups.com


Subject: Re: AI Dooms Us

 

I think the only test we have available for consciousness etc. (for computers or 
people) is the good old Turing test. Once our AI starts killing off astronauts 
because they may interfere with its main mission (I was always with HAL on this 
one; what exactly was the point of those humans, again?), that looks like a good 
point to stop arguing the finer details and start pulling out the memory cubes.

 

AI might also accelerate its development at such breakneck speed that it very 
rapidly loses all interest in us, our planet, this galaxy, this particular 
underlying “physical reality” (whatever that may turn out to be) and exit our 
perceived universe into some other dimension beyond our reach or comprehension.

 

Yes indeed, like the children in Childhood's End, the neutron star beings in 
Dragon's Egg or the human race falling into the technological singularity in 
Marooned in Realtime.

However it's possible they might at least feel enough gratitude to upload us, 
perhaps into a zoo...

Or then again they might have a more Dalek like attitude...

LESTERSON: I want to help you. 
DALEK: Why? 
LESTERSON: (like a Dalek) I am your servant. 
DALEK: We do not need humans now. 
LESTERSON: Ah, but you wouldn't kill me. I gave you life. 
DALEK: Yes, you gave us life.

(It exterminates him.)

 

Classic :)

 



Re: Artificial Intelligence article

2014-08-30 Thread meekerdb

On 8/30/2014 8:51 PM, LizR wrote:
On 31 August 2014 13:10, meekerdb meeke...@verizon.net wrote:


On 8/30/2014 5:54 PM, LizR wrote:

On 31 August 2014 12:27, meekerdb meeke...@verizon.net wrote:

            On 8/30/2014 4:04 PM, LizR wrote:

            To be absolutely clear - the Artificial in AI refers to the
            machine which hosts the intelligence, not to the intelligence
            itself.

            The problem with machines defeating Jeopardy players (I assume
            this refers to this -
            http://en.wikipedia.org/wiki/Jeopardy_%28TV_series%29 ?) is that
            the machines concerned almost certainly have no concepts of what
            the answers were about.


        How do you have a concept of what Who was Charlemagne? is about?
        Isn't a lot of it verbal and relational; stuff Watson does know. Of
        course Watson is ignorant about a lot of basic things about being a
        person, because it doesn't have perceptive sensors and the ability
        to move and manipulate things.


    That's the point. Watson or whatever isn't immersed in an environment, or
    its environment only involves abstract relations. So I do have a better
    idea of who Charlemagne was, even if I'd never heard of him before.

Sure, you have a better idea.  But I don't think that shows that Watson has no
concept of what the answers are about.  His concepts are limited to verbal
relations, but he probably has more of those related to Charlemagne than I do.


So you appear to think purely abstract relations can be about something even when they 
have no relation to experience of an environment - is that correct?


I don't think so.  I think abstract relations have relations to experience and the 
environment.  They are abstract because they are abstracted from experience (by ignoring 
some aspects).



    Hence they aren't in fact doing what humans do (or at least not what most
    humans do, apart from perhaps /idiots savants/). Likewise, Deep Junior
    almost certainly has no concept of what it's doing when it scores a 3-3
    tie against Kasparov. It has no concept of itself or its opponent, or very
    limited concepts embedded in relatively small* data structures - and it
    experiences no emotions on winning or losing.


    Isn't the reason you think that is because its input/output is so limited?
    It wouldn't be at all difficult to add to Deep Blue's program so that on
    winning it composed a poem of celebration and displayed fireworks on a
    screen - or even set off real fireworks - and on losing it shut down and
    refused to do anything for three days.


No, I think that because there's no evidence whatsoever that Deep Blue etc have 
feelings, at least none that I've come across. I'd be happy to be proved wrong (which 
would be a boost for comp, I suppose).


I'm asking what would constitute evidence for Deep Blue's having feelings? 
Fireworks and sulking aren't enough?



An ongoing exhibition that it did, sustained over a period of time, and accompanied by 
what appeared to be the results of mentation, etc - i.e. passing a Turing test 
equivalent. Plus supporting evidence that it was conscious, and that we had reasonable 
theoretical grounds to think that it was (e.g. it had had an electronic childhood like 
HAL, etc). Just displaying a smiley face on a screen by loading in a bitmap wouldn't do 
it, for me at least. Given that this would be one of the most profound discoveries (or 
inventions) of all time, I'd want some pretty good evidence. Wouldn't you?


It seems you're raising the bar from experience an emotion on winning or losing to 
having human level consciousness.  Do you suppose your dog does not experience emotion 
just because he can't even come close to passing a Turing test?


Brent




