@Nick that's a fair question. On a pragmatic side, not much...yet. However,
as I understand it, (some) amount of AI was invaluable for making pretty good
guesses about frustrating issues: like what the heck is going on with the
weather.
Robots and androids (so far) are better than humans at some things...and
pretty bad at others. Androids, the R2-D2 kind. Basically, computers speak
computer better than people.
Computers can talk to computers really, really fast and possibly understand
each other better than humans do. Some (I think) really awesome things
they've done (so far): Dictation software basically asks your computer to
guess what you're saying (AI). Mine literally tries to learn how to make
small improvements as I use it and has gotten a lot better over time.
There's a video on YouTube of some MIT guys who have a robot band playing
Disney-inspired music. Those robots have tastes and stuff they like playing
more than others. Some are better than others.
FWIW, what I thought was too cool was that some of the stuff sounded really
good. Robots driving cars or helping people could rock. Or robots exploring
awesome stuff that humans can't (yet).

Though I haven't a clue how close any of that is yet. And you are right to
be concerned ^_^

On Tue, Aug 8, 2017 at 4:51 PM, Grant Holland <grant.holland...@gmail.com>
wrote:

> Thanks for throwing in on this one, Glen. Your thoughts are
> ever-insightful. And ever-entertaining!
>
> For example, I did not know that von Neumann put forth a set theory.
>
> On the other hand... evolution *is* stochastic. (You actually did not
> disagree with me on that. You only said that the reason I was right was
> another one.) A good book on the stochasticity of evolution is "Chance and
> Necessity" by Jacques Monod. (I just finished rereading it for the second
> time. And that proved quite fruitful.)
>
> G.
>
> On 8/8/17 12:44 PM, glen ☣ wrote:
>
> I'm not sure how Asimov intended them.  But the three laws is a trope that 
> clearly shows the inadequacy of deontological ethics.  Rules are fine as far 
> as they go.  But they don't go very far.  We can see this even in the 
> foundations of mathematics, the unification of physics, and 
> polyphenism/robustness in biology.  Von Neumann (Burks) said it best when he 
> said: "But in the complicated parts of formal logic it is always one order of 
> magnitude harder to tell what an object can do than to produce the object."  
> Or, if you don't like that, you can see the same perspective in his iterative 
> construction of sets as an alternative to the classical conception.
>
> The point being that reality, traditionally, has shown more expressiveness 
> than any of our rule sets.
>
> There are ways to handle the mismatch in expressivity between reality versus 
> our rule sets.  Stochasticity is the measure of the extent to which a rule 
> set matches a set of patterns.  But Grant's right to qualify that with 
> evolution, not because of the way evolution is stochastic, but because 
> evolution requires a unit to regularly (or sporadically) sync with its 
> environment.
>
> An AI (or a rule-obsessed human) that sprouts fully formed from Zeus' head 
> will *always* fail.  It's guaranteed to fail because syncing with the 
> environment isn't *built in*.  The sync isn't part of the AI's onto- or 
> phylo-geny.
>
>
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
>