Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Nick Thompson
I LOVE this, Frank.  How ever did you find it amongst the ten thousand 
pages?

 

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love 
mercy, now. Walk humbly, now.  You are not obligated to complete the work, but 
neither are you free to abandon it.

 

By the way.  Now in my 80th year, I am officially against technology.  I was OK 
with everything up through the word processor.  (I hated carbons.) Everything 
after that, I could do without.  

 

Really!  What has AI done for me lately? 

 

What  was it Flaubert said about trains?  Something like, they just made it 
possible for people to run around faster and faster and be stupid in more 
places.  

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

  
http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Frank Wimberly
Sent: Tuesday, August 08, 2017 1:56 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

Talmud:

 

Do not be daunted by the enormity of the world's grief.  Do justly, now.  Love 
mercy, now. Walk humbly, now.  You are not obligated to complete the work, but 
neither are you free to abandon it.

 

Plus 10,000 other pages.

 

Frank Wimberly
Phone (505) 670-9918

 

On Aug 8, 2017 11:18 AM, "Pamela McCorduck" wrote:

Grant, does it really seem plausible to you that the thousands of crack 
researchers at Stanford, Carnegie Mellon, Google, MIT, Cal Berkeley, and other 
places have not seen this? And found remedies?

 

Just for FRIAM’s information, John McCarthy used to call Asimov’s Three Laws 
Talmudic. Sorry I don’t know enough about the Talmud to agree or disagree.

 

 

 

 

On Aug 8, 2017, at 1:42 AM, Marcus Daniels wrote:

 

Grant writes:

 

"Fortunately, the AI folks don't seem to see - yet - that they are stumbling 
all over the missing piece: stochastic adaptation. You know, like in evolution: 
chance mutations. AI is still down with a bad case of causal determinism. But I 
expect they will fairly shortly get over that. Watch out."

 

What is probability, physically?   It could be an illusion, and there may be no 
such thing as an independent observer.   Even if that is true, sampling 
techniques are used in many machine learning algorithms -- it is not a question 
of whether they work, it is an academic question of why they work.

 

Marcus


From: Friam on behalf of Grant Holland
Sent: Monday, August 7, 2017 11:38:03 PM
To: The Friday Morning Applied Complexity Coffee Group; Carl Tollander
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

That sounds right, Carl. Asimov's three "laws" of robotics are more like 
Asimov's three "wishes" for robotics. AI entities are already no longer 
servants. They have become machine learners. They have actually learned to 
project conditional probability. The cat is out of the barn. Or is it that the 
horse is out of the bag?  

Whatever. Fortunately, the AI folks don't seem to see - yet - that they are 
stumbling all over the missing piece: stochastic adaptation. You know, like in 
evolution: chance mutations. AI is still down with a bad case of causal 
determinism. But I expect they will fairly shortly get over that. Watch out.

And we still must answer Stephen Hawking's burning question: Is intelligence a 
survivable trait?

 

On 8/7/17 9:54 PM, Carl Tollander wrote:

It seems to me that there are many here in the US who are not entirely on board 
with Asimov's First Law of Robotics, at least insofar as it may apply to 
themselves, so I suspect notions of "reining it in" are probably not going to 
fly.

 

 

 

 

On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez wrote:

The future will be quite interesting. What will the human being of the future 
be like? Surely not a human being in the way we know.

 

http://m.eltiempo.com/tecnosfera/novedades-tecnologia/peligros-y-avances-de-la-inteligencia-artificial-para-los-humanos-117158



FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

 

 



Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Grant Holland
Thanks for throwing in on this one, Glen. Your thoughts are 
ever-insightful. And ever-entertaining!


For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution /is/ stochastic. (You actually did not 
disagree with me on that. You only said that the reason I was right was 
another one.) A good book on the stochasticity of evolution is "Chance 
and Necessity" by Jacques Monod. (I just finished rereading it for the 
second time. And that proved quite fruitful.)


G.


On 8/8/17 12:44 PM, glen ☣ wrote:

I'm not sure how Asimov intended them.  But the three laws are a trope that clearly shows 
the inadequacy of deontological ethics.  Rules are fine as far as they go.  But they 
don't go very far.  We can see this even in the foundations of mathematics, the 
unification of physics, and polyphenism/robustness in biology.  Von Neumann (Burks) said 
it best: "But in the complicated parts of formal logic it is always one 
order of magnitude harder to tell what an object can do than to produce the object." 
 Or, if you don't like that, you can see the same perspective in his iterative 
construction of sets as an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than 
any of our rule sets.

There are ways to handle the mismatch in expressivity between reality and 
our rule sets.  Stochasticity is the measure of the extent to which a rule set 
matches a set of patterns.  But Grant's right to qualify that with evolution, 
not because of the way evolution is stochastic, but because evolution requires 
a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will 
*always* fail.  It's guaranteed to fail because syncing with the environment 
isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.





Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Nick Thompson
f.

“space”?

 

Or was that a correction error arising from trying to write “apace”?

n

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

  
http://home.earthlink.net/~nickthompson/naturaldesigns/

 

From: Friam [mailto:friam-boun...@redfish.com] On Behalf Of Frank Wimberly
Sent: Tuesday, August 08, 2017 5:32 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Future of humans and artificial intelligence

 

Nick,

 

It's actually more like six thousand pages. However many pages thousands of 
rabbis can write in 600 years, more or less.  Deborah found it and posted it on 
our refrigerator.

 

I understand you are recovering space.

 

Frank

Frank Wimberly
Phone (505) 670-9918

 

Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Marcus Daniels
Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree 
with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. 
Prolog, now Curry, similar capabilities implemented in Lisp) from the age 
before the AI winter.  These systems provide a very flexible way to pose 
constraint problems.  But one problem is that breadth-first and depth-first 
search are just not fast ways to find answers -- general, but not efficient.  
Recent work seems to have shifted to SMT solvers and specialized 
constraint-solving algorithms, but these have somewhat less expressiveness as 
programming languages.  Meanwhile, machine learning has come on the scene in a 
big way, and tasks traditionally associated with old-school AI, like natural 
language processing, are now matched or even dominated by neural nets (e.g. 
LSTMs).  I find the range of capabilities provided by groups like 
nlp.stanford.edu really impressive -- there are examples of both approaches 
(logic programming and machine learning), and they don't need to be mutually 
exclusive.
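The "general but not efficient" point above can be sketched concretely. Below is a minimal depth-first backtracking search over a posed constraint problem; the toy map-coloring instance, the function names, and the API are invented for illustration (this is not Prolog or Curry, just the same idea in Python). Any finite-domain problem can be posed this way, but the worst case enumerates the whole space.

```python
def solve(domains, constraints, assignment=None):
    """Depth-first backtracking over a finite-domain constraint problem.
    General: anything posed as (variable domains, constraints) fits.
    Not efficient: worst case enumerates the whole product space."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment                       # every variable bound
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        trial = {**assignment, var: val}
        if all(c(trial) for c in constraints):  # prune violated branches
            result = solve(domains, constraints, trial)
            if result is not None:
                return result
    return None                                 # dead end: backtrack

def neq(x, y):
    """Constraint x != y, vacuously true while either is still unbound."""
    return lambda a: x not in a or y not in a or a[x] != a[y]

# Toy instance: color three mutually adjacent regions with three colors.
coloring = solve({"A": "rgb", "B": "rgb", "C": "rgb"},
                 [neq("A", "B"), neq("B", "C"), neq("A", "C")])
print(coloring)  # e.g. {'A': 'r', 'B': 'g', 'C': 'b'}
```

An SMT or specialized constraint solver replaces this blind enumeration with propagation and learned clauses, which is the efficiency trade the paragraph describes.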


Quantum annealing is one area where the two may increasingly come together by 
using physical phenomena to accelerate the rate at which high dimensional 
discrete systems can be solved, without relying on fragile or domain-specific 
heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic 
algorithms, for example, are robust to noise (or, if you like, ambiguity) in 
fitness functions, and they are trivial to parallelize.


Marcus
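The genetic-algorithm point lends itself to a sketch. The following minimal GA (tournament selection, one-point crossover, bitwise mutation) is purely illustrative; the one-max fitness and every parameter are made up for the example. It shows why noisy fitness is tolerable (selection only ever compares two sampled scores) and why it parallelizes (each fitness evaluation is independent).

```python
import random

def evolve(fitness, genome_len=10, pop_size=40, generations=60,
           mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm over bit-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two: only compares sampled scores
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)
            child = p1[:cut] + p2[cut:]                    # crossover
            child = [g ^ (rng.random() < mutation_rate)    # chance mutation
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Noisy "one-max" fitness: count of ones plus Gaussian noise.
noise = random.Random(1)
best = evolve(lambda g: sum(g) + noise.gauss(0, 0.5))
# `best` ends up at or near all ones despite the noisy fitness signal
```

The per-candidate fitness calls have no shared state, so farming them out across processes or nodes is straightforward.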



Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Nick Thompson
Grant, 

 

I think I know the answer to this question, but want to make sure:  

 

What is the difference between calling a process “stochastic”, “indeterminate”, 
or “random”?  

 

Nick

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Gillian Densmore
@Nick that's a fair question. On the pragmatic side, not much... yet. However,
as I understand it, (some) amount of AI was invaluable for making pretty good
guesses about frustrating issues, like what the heck is going on with the
weather.
Robots and androids (so far) are better than humans at some things and
pretty bad at others. Androids: the R2-D2 kind. Basically, computers speak
computer better than people do.
Computers can talk to computers really, really fast and possibly understand
each other better than humans do. Some (I think) really awesome things
they've done (so far): dictation software basically asks your computer to
guess what you're saying (AI). Mine literally tries to learn how to make
small improvements as I use it and has gotten a lot better over time.
There's a video on YouTube of some MIT guys who have a robot band playing
Disney-inspired music. Those robots have tastes and stuff they like playing
more than others. Some better than others.
FWIW, what I thought was too cool was that some of the stuff sounded really good.
Robots driving cars or helping people could rock.  Or robots exploring
awesome stuff that humans can't (yet).

Though I haven't a clue how close any of that is yet.  And you are right to
be concerned ^_^


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Frank Wimberly
The latter.  I'm about to turn off autocorrect. Ironic in the context of a
discussion about the benefits and dangers of AI.

Frank

Frank Wimberly
Phone (505) 670-9918

Re: [FRIAM] the self

2017-08-08 Thread glen ☣
OK.  This is better.  But you seem to have defined "unit" or "coherence", 
rather than "self" ... I'm reminded of Simon's "near decomposability" in The 
Sciences of the Artificial.  To promote a unit to a self, you're going to have 
to include some sort of loop, like propri- or inter-oception.  And that raises 
the idea that some (exteroception) variables are unbound.  If the "unit" has 
more unbound variables than bound ones and/or the loops see less weight/traffic 
than the unbound ones, then the "unit" isn't coherent ... not a unit.  By that 
reasoning, we should be able to parse the unit into parts whose excision does 
not (appreciably) affect the unit versus parts whose excision fundamentally 
changes it, including destroying it.

I'd posit that a passable definition of "self" is the collection of parts that 
can't be excised without causing fundamental changes.  So, the loss of things 
like hair, fingernails, skin cells, maybe teeth, maybe 1 kidney, 1/2 a liver, 
etc. preserve the unit.

But even *that* definition is hopelessly flawed because it passes the buck to 
"fundamental changes".  Is myself invariant across the loss of a tooth?  What 
were we talking about?

On 08/07/2017 05:52 PM, Marcus Daniels wrote:
> I claim a message send is analogous to an axon firing, where there is at 
> least one target neuron for each receivable message.   The whole graph and 
> instantaneous charge state of the neurons and the musculature/skeleton/etc. 
> attached to them is the `self'.  The edges and effective edges in the graph 
> (apparently) come and go depending on experience.   In terms of comparing 
> selves, I think one needs to look at the graphs in terms of the behaviors 
> they exhibit and not their internal wiring.   My wiring of yellow can be 
> different from yours.  Your perception of throwing a baseball will change 
> with and without a broken arm, not just because the arm might not work, but 
> also because the broken arm will lead to the motor system changing due to the 
> lack of practice with throwing.
> 
> Probably there are subgraphs that are more stable configurations than others.

-- 
☣ glen



Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Marcus Daniels
Grant writes:


"Fortunately, the AI folks don't seem to see - yet - that they are stumbling 
all over the missing piece: stochastic adaptation. You know, like in evolution: 
chance mutations. AI is still down with a bad case of causal determinism. But I 
expect they will fairly shortly get over that. Watch out."


What is probability, physically?   It could be an illusion, and there may be no 
such thing as an independent observer.   Even if that is true, sampling 
techniques are used in many machine learning algorithms -- it is not a question 
of whether they work, it is an academic question of why they work.


Marcus
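The "it works, but why?" point can be made concrete with the simplest sampling method there is. This Monte Carlo estimate of π is an illustration chosen for the example, not anything from the thread: empirically it converges, while justifying it appeals to the law of large numbers rather than to any deterministic account of the individual draws.

```python
import random

def mc_estimate_pi(n, seed=0):
    """Sample n uniform points in the unit square; the fraction landing
    inside the quarter circle estimates pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

print(mc_estimate_pi(100_000))  # close to 3.14159; error shrinks ~ 1/sqrt(n)
```

Whether the pseudorandom draws are "really" random or a deterministic illusion, the estimate behaves the same, which is exactly the physical question being raised.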


From: Friam  on behalf of Grant Holland 

Sent: Monday, August 7, 2017 11:38:03 PM
To: The Friday Morning Applied Complexity Coffee Group; Carl Tollander
Subject: Re: [FRIAM] Future of humans and artificial intelligence


That sounds right, Carl. Asimov's three "laws" of robotics are more like 
Asimov's three "wishes" for robotics. AI entities are already no longer 
servants. They have become machine learners. They have actually learned to 
project conditional probability. The cat is out of the barn. Or is it that the 
horse is out of the bag?

Whatever. Fortunately, the AI folks don't seem to see - yet - that they are 
stumbling all over the missing piece: stochastic adaptation. You know, like in 
evolution: chance mutations. AI is still down with a bad case of causal 
determinism. But I expect they will fairly shortly get over that. Watch out.

And we still must answer Stephen Hawking's burning question: Is intelligence a 
survivable trait?

On 8/7/17 9:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not entirely on board 
with Asimov's First Law of Robotics, at least insofar as it may apply to 
themselves, so I suspect notions of "reining it in" are probably not going to 
fly.




On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez 
> wrote:
Future will be quite interesting. How will be the human being of the future? 
For sure not a human being in the way we know.

http://m.eltiempo.com/tecnosfera/novedades-tecnologia/peligros-y-avances-de-la-inteligencia-artificial-para-los-humanos-117158


FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove






Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Marcus Daniels
"But one problem is that breadth-first and depth-first search are just fast 
ways to find answers."


Just _not_ -- general but not efficient.   [My dog was demanding attention! ]
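The corrected claim -- general but not efficient -- is visible in a minimal breadth-first search sketch (an editorial illustration with a made-up toy state space): it needs nothing but a successor function, yet it may expand much of the state space before finding an answer.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: completely general -- it assumes only a
    successor function -- but uninformed, so it explores level by
    level regardless of how promising a state looks."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy space: states are integers, moves are +1 and *2, capped at 10.
path = bfs(1, 10, lambda n: [m for m in (n + 1, n * 2) if m <= 10])
```

Generality is exactly why it offers no shortcut: the successor function is its entire knowledge of the domain.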


From: Friam  on behalf of Marcus Daniels 

Sent: Tuesday, August 8, 2017 6:43:40 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence


Grant writes:


"On the other hand... evolution is stochastic. (You actually did not disagree 
with me on that. You only said that the reason I was right was another one.) "


I think of logic programming systems as a traditional tool of AI research (e.g. 
Prolog, now Curry, similar capabilities implemented in Lisp) from the age 
before the AI winter.  These systems provide a very flexible way to pose 
constraint problems.  But one problem is that breadth-first and depth-first 
search are just fast ways to find answers.  Recent work seems to have shifted 
to SMT solvers and specialized constraint solving algorithms, but these have 
somewhat less expressiveness as programming languages.  Meanwhile, machine 
learning has come on the scene in a big way and tasks traditionally associated 
with old-school AI, like natural language processing, are now matched or even 
dominated using neural nets (LSTM).  I find the range of capabilities provided 
by groups like nlp.stanford.edu really impressive -- there are examples of both 
approaches (logic programming and machine learning), and they don't need to be 
mutually exclusive.


Quantum annealing is one area where the two may increasingly come together by 
using physical phenomena to accelerate the rate at which high dimensional 
discrete systems can be solved, without relying on fragile or domain-specific 
heuristics.


I often use evolutionary algorithms for hard optimization problems.  Genetic 
algorithms, for example, are robust to noise (or, if you like, ambiguity) in 
fitness functions, and they are trivial to parallelize.
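A minimal sketch of both properties on a toy noisy fitness function (the population size, noise level, and target are illustrative assumptions, not from any real run):

```python
import random

def ga_step(pop, noisy_fitness, keep=10):
    """One GA generation: rank under a *noisy* fitness, keep the top
    `keep` (elitism), refill by crossover plus Gaussian mutation.
    Each fitness call is independent of the others, so the evaluation
    step parallelizes trivially, and selection over a population
    averages out the evaluation noise."""
    scored = sorted(pop, key=noisy_fitness, reverse=True)[:keep]
    children = []
    while len(children) < len(pop) - keep:
        a, b = random.sample(scored, 2)
        cut = random.randrange(len(a))            # one-point crossover
        child = [x + random.gauss(0, 0.05)        # chance mutation
                 for x in a[:cut] + b[cut:]]
        children.append(child)
    return scored + children

random.seed(2)
target = [1.0, -2.0, 0.5]

def noisy(g):
    # True fitness plus measurement noise (the "ambiguity").
    return -sum((x - t) ** 2 for x, t in zip(g, target)) + random.gauss(0, 0.1)

pop = [[random.uniform(-3, 3) for _ in target] for _ in range(60)]
for _ in range(80):
    pop = ga_step(pop, noisy)
best = min(pop, key=lambda g: sum((x - t) ** 2 for x, t in zip(g, target)))
```

Despite never seeing a noise-free evaluation, the population closes in on the target.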


Marcus


From: Friam  on behalf of Grant Holland 

Sent: Tuesday, August 8, 2017 4:51:18 PM
To: The Friday Morning Applied Complexity Coffee Group; glen ☣
Subject: Re: [FRIAM] Future of humans and artificial intelligence


Thanks for throwing in on this one, Glen. Your thoughts are ever-insightful. 
And ever-entertaining!

For example, I did not know that von Neumann put forth a set theory.

On the other hand... evolution is stochastic. (You actually did not disagree 
with me on that. You only said that the reason I was right was another one.) A 
good book on the stochasticity of evolution is "Chance and Necessity" by 
Jacques Monod. (I just finished rereading it for the second time. And that 
proved quite fruitful.)

G.

On 8/8/17 12:44 PM, glen ☣ wrote:


I'm not sure how Asimov intended them.  But the three laws is a trope that 
clearly shows the inadequacy of deontological ethics.  Rules are fine as far as 
they go.  But they don't go very far.  We can see this even in the foundations 
of mathematics, the unification of physics, and polyphenism/robustness in 
biology.  Von Neumann (Burks) said it best when he said: "But in the 
complicated parts of formal logic it is always one order of magnitude harder to 
tell what an object can do than to produce the object."  Or, if you don't like 
that, you can see the same perspective in his iterative construction of sets as 
an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than 
any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus 
our rule sets.  Stochasticity is the measure of the extent to which a rule set 
matches a set of patterns.  But Grant's right to qualify that with evolution, 
not because of the way evolution is stochastic, but because evolution requires 
a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will 
*always* fail.  It's guaranteed to fail because syncing with the environment 
isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.





Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Frank Wimberly
Then there's best-first search, B*, C*, constraint-directed search, etc.
And these are just classical search methods.

Frank

Frank Wimberly
Phone (505) 670-9918
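For contrast with the blind strategies, a minimal greedy best-first sketch (a generic illustration; the toy state space and heuristic are assumptions, not from the thread):

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier state
    whose heuristic h looks closest to the goal.  Unlike BFS/DFS the
    expansion order is informed, but the path found need not be the
    shortest -- ordering by g + h (A*) would be needed for that."""
    frontier, seen = [(h(start), [start])], {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), path + [nxt]))
    return None

# Toy space: integers, moves +1 and *2 capped at 10; h = distance to goal.
path = best_first(1, 10, lambda n: [m for m in (n + 1, n * 2) if m <= 10],
                  lambda n: abs(10 - n))
```

In this toy space the greedy heuristic happens to return the five-edge path 1→2→4→8→9→10 even though a four-edge path exists, which is exactly the trade-off that separates these methods from BFS.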

On Aug 8, 2017 7:20 PM, "Marcus Daniels"  wrote:

> "But one problem is that breadth-first and depth-first search are just
> fast ways to find answers."
>
>
> Just _not_ -- general but not efficient.   [My dog was demanding
> attention! ]

Re: [FRIAM] the self

2017-08-08 Thread Marcus Daniels
Glen writes:


"I'd posit that a passable definition of "self" is the collection of parts that 
can't be excised without causing fundamental changes.  So, the loss of things 
like hair, fingernails, skin cells, maybe teeth, maybe 1 kidney, 1/2 a liver, 
etc. preserve the unit."


Gasp.   Loss of _hair_?  _Who_ would say such a thing?


Marcus


From: Friam  on behalf of glen ☣ 

Sent: Tuesday, August 8, 2017 5:51:53 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] the self

OK.  This is better.  But you seem to have defined "unit" or "coherence", 
rather than "self" ... I'm reminded of Simon's "near decomposability" in The 
Sciences of the Artificial.  To promote a unit to a self, you're going to have 
to include some sort of loop, like propri- or inter-oception.  And that raises 
the idea that some (exteroception) variables are unbound.  If the "unit" has 
more unbound variables than bound ones and/or the loops see less weight/traffic 
than the unbound ones, then the "unit" isn't coherent ... not a unit.  By that 
reasoning, we should be able to parse the unit into parts whose excision does 
not (appreciably) affect the unit versus parts whose excision fundamentally 
changes it, including destroying it.

I'd posit that a passable definition of "self" is the collection of parts that 
can't be excised without causing fundamental changes.  So, the loss of things 
like hair, fingernails, skin cells, maybe teeth, maybe 1 kidney, 1/2 a liver, 
etc. preserve the unit.

But even *that* definition is hopelessly flawed because it passes the buck to 
"fundamental changes".  Is myself invariant across the loss of a tooth?  What 
were we talking about?

On 08/07/2017 05:52 PM, Marcus Daniels wrote:
> I claim a message send is analogous to an axon firing, where there is at 
> least one target neuron for each receivable message.   The whole graph and 
> instantaneous charge state of the neurons and the musculature/skeleton/etc. 
> attached to them is the `self'.  The edges and effective edges in the graph 
> (apparently) come and go depending on experience.   In terms of comparing 
> selves, I think one needs to look at the graphs in terms of the behaviors 
> they exhibit and not their internal wiring.   My wiring of yellow can be 
> different from yours.Your perception of throwing a baseball will change 
> with and without a broken arm, not just because the arm might not work, but 
> also because the broken arm will lead to the motor system changing due to the 
> lack of practice with throwing.
>
> Probably there are subgraphs that are more stable configurations than others.

--
☣ glen



Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Marcus Daniels
Frank writes:


"Then there's best-first search, B*, C*, constraint-directed search, etc.  And 
these are just classical search methods."


Connecting this back to evolutionary / stochastic techniques, genetic 
programming is one way to get the best of both approaches, at least in 
principle.   One can expose these human-designed algorithms as predefined 
library functions.  Typically in genetic programming the vocabulary consists of 
simple routines (e.g. arithmetic), conditionals, and recursion.
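A minimal sketch of that vocabulary, assuming the usual expression-tree representation (arithmetic operators plus terminals; predefined library functions would simply be extra entries in the operator table):

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth=3):
    """Grow a random program over a tiny GP vocabulary: binary
    arithmetic ops, the input terminal 'x', and random constants.
    Recursion in the grown programs would come from adding a call
    primitive to OPS; this sketch stops at expressions."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.uniform(-2, 2)
    return (random.choice(list(OPS)), random_tree(depth - 1),
            random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree: terminals evaluate to themselves or to x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

random.seed(3)
t = random_tree()
val = evaluate(t, 2.0)
```

Crossover and mutation then operate on subtrees of such programs, which is where the seeding-and-diversity issues below come in.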


In practice, this kind of seeding of the solution space can collapse diversity. 
It is a drag to see tons of compute time spent on a million little refinements 
around an already good solution.  (Yes, I know that solution!)  More fun to see 
a set of clumsy solutions turn into decent-performing but weird solutions.  I 
find my attention is drawn to properties of sub-populations and how I can keep 
the historically good performers _out_.  Not a pure GA, but a GA where 
communities also have fitness functions matching my heavy hand of justice.  (If 
I prove that conservatism just doesn't work, I'll be sure to pass it along.)


Marcus



From: Friam  on behalf of Frank Wimberly 

Sent: Tuesday, August 8, 2017 7:57:06 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence

Then there's best-first search, B*, C*, constraint-directed search, etc.  And 
these are just classical search methods.

Frank

Frank Wimberly
Phone (505) 670-9918


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Marcus Daniels
Frank writes:


"My point was that depth-first and breadth-first can probably serve only as a 
straw-man (straw-men?)."


Unless there is a robust meta-rule (not a heuristic) or a single deterministic 
search algorithm to rule them all, wouldn't those other suggestions be 
straw-men too?   If I knew that there was no noise and the domain was 
continuous and convex, then I wouldn't use a stochastic approach.


Marcus


From: Friam  on behalf of Frank Wimberly 

Sent: Tuesday, August 8, 2017 10:15:05 PM
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Future of humans and artificial intelligence

My point was that depth-first and breadth-first can probably serve only as a 
straw-man (straw-men?).

Frank Wimberly
Phone (505) 670-9918


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Frank Wimberly
My point was that depth-first and breadth-first can probably serve only as
a straw-man (straw-men?).

Frank Wimberly
Phone (505) 670-9918


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread glen ☣

I'm not sure how Asimov intended them.  But the three laws is a trope that 
clearly shows the inadequacy of deontological ethics.  Rules are fine as far as 
they go.  But they don't go very far.  We can see this even in the foundations 
of mathematics, the unification of physics, and polyphenism/robustness in 
biology.  Von Neumann (Burks) said it best when he said: "But in the 
complicated parts of formal logic it is always one order of magnitude harder to 
tell what an object can do than to produce the object."  Or, if you don't like 
that, you can see the same perspective in his iterative construction of sets as 
an alternative to the classical conception.

The point being that reality, traditionally, has shown more expressiveness than 
any of our rule sets.

There are ways to handle the mismatch in expressivity between reality versus 
our rule sets.  Stochasticity is the measure of the extent to which a rule set 
matches a set of patterns.  But Grant's right to qualify that with evolution, 
not because of the way evolution is stochastic, but because evolution requires 
a unit to regularly (or sporadically) sync with its environment.

An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head will 
*always* fail.  It's guaranteed to fail because syncing with the environment 
isn't *built in*.  The sync isn't part of the AI's onto- or phylo-geny.

-- 
☣ glen



Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Grant Holland

Marcus,
Good points, all. I suggest you turn to the Copenhagen interpretation of 
Quantum Mechanics (the "usual interpretation") for musings on your very 
pertinent question about "Why probabilities in the physical world".

Although, I'm sure you have already looked there.
Of course, the Copenhagen guys (Heisenberg, Born, etc.) don't really try 
to answer your question either - opting instead to say that theirs is 
merely a theory, a model. And, of course, they are right.
On the other hand, other physicists (i.e. de Broglie, Bohm, Einstein and 
others) have spent a century trying to defend causal determinism against 
the Copenhagen interpretation. These days the defenders of the faith 
have resorted to philosophy over this issue and are considering the 
"ontic" versus the "epistemic". And yet, Copenhagen is still referred to 
as "the usual interpretation", and when QM is taught today, I think, it 
is essentially Copenhagen or some derivative of it. Perhaps Bell's 
theorem has contributed to the longevity of the Copenhagen perspective.



On 8/8/17 2:42 AM, Marcus Daniels wrote:


Grant writes:


"Fortunately, the AI folks don't seem to see - yet - that they are 
stumbling all over the missing piece: stochastic adaptation. You know, 
like in evolution: chance mutations. AI is still down with a bad case 
of causal determinism. But I expect they will fairly shortly get over 
that. Watch out."



What is probability, physically?   It could be an illusion and that 
there is no such thing as an independent observer. Even if that is 
true, sampling techniques are used in many machine learning algorithms 
-- it is not a question of if they work, it is an academic question of 
why they work.



Marcus


*From:* Friam  on behalf of Grant Holland 


*Sent:* Monday, August 7, 2017 11:38:03 PM
*To:* The Friday Morning Applied Complexity Coffee Group; Carl Tollander
*Subject:* Re: [FRIAM] Future of humans and artificial intelligence

That sounds right, Carl. Asimov's three "laws" of robotics are more 
like Asimov's three "wishes" for robotics. AI entities are already no 
longer servants. They have become machine learners. They have actually 
learned to project conditional probability. The cat is out of the 
barn. Or is it that the horse is out of the bag?


Whatever. Fortunately, the AI folks don't seem to see - yet - that 
they are stumbling all over the missing piece: stochastic adaptation. 
You know, like in evolution: chance mutations. AI is still down with a 
bad case of causal determinism. But I expect they will fairly shortly 
get over that. Watch out.


And we still must answer Stephen Hawking's burning question: Is 
intelligence a survivable trait?



On 8/7/17 9:54 PM, Carl Tollander wrote:
It seems to me that there are many here in the US who are not 
entirely on board with Asimov's First Law of Robotics, at least 
insofar as it may apply to themselves, so I suspect notions of 
"reining it in" are probably not going to fly.





On Mon, Aug 7, 2017 at 1:57 AM, Alfredo Covaleda Vélez 
> wrote:


Future will be quite interesting. How will be the human being of
the future? For sure not a human being in the way we know.


http://m.eltiempo.com/tecnosfera/novedades-tecnologia/peligros-y-avances-de-la-inteligencia-artificial-para-los-humanos-117158




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove






Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Frank Wimberly
Talmud:

Do not be daunted by the enormity of the world's grief.  Do justly, now.
Love mercy, now. Walk humbly, now.  You are not obligated to complete the
work, but neither are you free to abandon it.

Plus 10,000 other pages.


Frank Wimberly
Phone (505) 670-9918


Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Pamela McCorduck
Grant, does it really seem plausible to you that the thousands of crack 
researchers at Stanford, Carnegie Mellon, Google, MIT, Cal Berkeley, and other 
places have not seen this? And found remedies?

Just for FRIAM’s information, John McCarthy used to call Asimov’s Three Laws 
Talmudic. Sorry I don’t know enough about the Talmud to agree or disagree.





Re: [FRIAM] Future of humans and artificial intelligence

2017-08-08 Thread Grant Holland

Pamela,

I expect that they have! And I certainly hope so. I simply have not
found them yet after some earnest looking. Can you please send me some
references? Right now I suspect that the heart of machine learning holds
the pearl, and I'm just now turning there.


And I'm optimistically suspicious that the entropic functionals you
find in information theory, built on top of conditional probability
(relative entropy, mutual information, conditional entropy, entropy
rate, etc.), hold promise...and that at the heart of machine learning
they lie lurking - or could.
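[The functionals Grant names are all short computations over a joint distribution. A minimal sketch, using a made-up joint distribution over two binary variables (the numbers are illustrative only):]

```python
import math

def entropy(p):
    """Shannon entropy in bits of a distribution given as probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Hypothetical joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals p(x) and p(y).
px = [sum(v for (x, _), v in joint.items() if x == xv) for xv in (0, 1)]
py = [sum(v for (_, y), v in joint.items() if y == yv) for yv in (0, 1)]

h_x = entropy(px)                     # H(X)
h_y = entropy(py)                     # H(Y)
h_xy = entropy(list(joint.values()))  # joint entropy H(X,Y)

mutual_info = h_x + h_y - h_xy        # I(X;Y) = H(X) + H(Y) - H(X,Y)
cond_entropy = h_xy - h_x             # H(Y|X) = H(X,Y) - H(X)

print(round(mutual_info, 4), round(cond_entropy, 4))  # -> 0.2781 0.7219
```

[The same identities give relative entropy and entropy rate with a few more lines; the point is only that everything is built on conditional probability, as Grant says.]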


Anyway, thx for the note; and /please/ send me any related references!

Grant






