Re: [silk] Steganography: This clever AI hid data from its creators to cheat at its appointed task

2019-01-01 Thread Srini RamaKrishnan
On Wed, Jan 2, 2019, 12:13 AM Charles Haynes wrote:
>
> Again, not true. Who is this "we" you're generalizing about? The people who
> built AlphaZero did it for the learnings involved. Most of the people I
> know in AI are not goal oriented but are instead trying to expand human
> understanding.
>


Can any learning be free of the consciousness of the learner or the
objective of the exercise?

Even a game of chess or Go is situated in the context of winner and loser.


Re: [silk] From 35 years ago, Asimov's predictions for 2019 (and an experiment for this list)

2019-01-01 Thread Suresh Ramasubramanian

Given that I've spent the time since the late '90s around large email services, I
heartily endorse your prediction zero. Mainframes will be even longer lived.
Though, other than work, newsletters / bills and the occasional personal mail,
lists like lucretius are all that I use email for these days. Person-to-person
mail has become incredibly rare for me over the years.
With all the big-data and 360-degree-insight direction the world is moving towards,
we seem to have a stark choice of futures, one Orwellian and the other
neo-Luddite, rejecting all technology in favour of privacy.



--srs

On Wed, Jan 2, 2019 at 2:51 AM +0530, "Peter Griffin" wrote:

https://www.thestar.com/news/world/2018/12/27/35-years-ago-isaac-asimov-was-asked-by-the-star-to-predict-the-world-of-2019-here-is-what-he-wrote.html
If we look into the world as it may be at the end of another generation,
let’s say 2019 — that’s 35 years from now, the same number of years since
1949 when George Orwell’s 1984 was first published — three considerations
must dominate our thoughts: 1. Nuclear war. 2. Computerization. 3. Space
utilization.

Experiment for this list.
Take a bash at this yourself. Let's give you a shorter horizon than The
Star gave Asimov: 29 years, which will be 99 years after Orwell wrote 1984,
and when India will have turned 101, and more importantly, this list will
have turned 50.

What are your predictions for 2048?

(Prediction 0 is, I guess, that email will still be around in 2048, and so
will this list, though I will almost certainly not be.)

All the very best for the new year and the next 29 years, y'all.

~peter







[silk] From 35 years ago, Asimov's predictions for 2019 (and an experiment for this list)

2019-01-01 Thread Peter Griffin
https://www.thestar.com/news/world/2018/12/27/35-years-ago-isaac-asimov-was-asked-by-the-star-to-predict-the-world-of-2019-here-is-what-he-wrote.html
If we look into the world as it may be at the end of another generation,
let’s say 2019 — that’s 35 years from now, the same number of years since
1949 when George Orwell’s 1984 was first published — three considerations
must dominate our thoughts: 1. Nuclear war. 2. Computerization. 3. Space
utilization.

Experiment for this list.
Take a bash at this yourself. Let's give you a shorter horizon than The
Star gave Asimov: 29 years, which will be 99 years after Orwell wrote 1984,
and when India will have turned 101, and more importantly, this list will
have turned 50.

What are your predictions for 2048?

(Prediction 0 is, I guess, that email will still be around in 2048, and so
will this list, though I will almost certainly not be.)

All the very best for the new year and the next 29 years, y'all.

~peter


Re: [silk] Steganography: This clever AI hid data from its creators to cheat at its appointed task

2019-01-01 Thread Charles Haynes
On Tue, 1 Jan 2019 at 13:32, Srini RamaKrishnan wrote:

> On Tue, Jan 1, 2019, 9:59 PM Charles Haynes wrote:
> > On Tue., 1 Jan. 2019, 1:11 am Srini RamaKrishnan wrote:
> >
> > > Monkey see, monkey do. AI only learns from the behavior of humans
> > >
> >
> > This is not true. Specifically AlphaZero learns from the rules of the
> game,
> > and playing (randomly at first) against itself. GANs (like the one in the
> > article) do not learn from humans. While humans do define the success
> > criteria, that does not constitute "learning from behaviour of humans"
> any
> > more than drawing a finish line constitutes "running a race."
> >
>
> That should be enough.
>
> We are creating the AI to better reach our current end goals
>

Again, not true. Who is this "we" you're generalizing about? The people who
built AlphaZero did it for the learnings involved. Most of the people I
know in AI are not goal oriented but are instead trying to expand human
understanding.

The limitations you see are in your own mind.

-- Charles


Re: [silk] Steganography: This clever AI hid data from its creators to cheat at its appointed task

2019-01-01 Thread Srini RamaKrishnan
On Tue, Jan 1, 2019, 9:59 PM Charles Haynes wrote:

> On Tue., 1 Jan. 2019, 1:11 am Srini RamaKrishnan wrote:
> > Monkey see, monkey do. AI only learns from the behavior of humans
> >
>
> This is not true. Specifically AlphaZero learns from the rules of the game,
> and playing (randomly at first) against itself. GANs (like the one in the
> article) do not learn from humans. While humans do define the success
> criteria, that does not constitute "learning from behaviour of humans" any
> more than drawing a finish line constitutes "running a race."
>

That should be enough.

We are creating the AI to better reach our current end goals, but we don't
have abundance end goals because of our scarcity mindset.

The choices made in defining success criteria are a behaviour of one's
state of consciousness. When you practice abundance consciousness, the
success criteria look very different.

Christ, Krishna, Buddha, Rama - the archetypal examples of abundance
consciousness - led lives that were successful by their own definition, but to
most people they would not look like success.

Getting nailed to a board when there was a choice not to, losing one's
entire clan and kingdom when it could have been saved, abandoning one's
royal standing and family for an impoverished life in the forest without
duress, and giving up a privileged life at every turn to honour the wishes
of others do not sound like success to a scarcity consciousness.

They were prepared to lose, and often lost everything, because they
functioned from a consciousness where there was nothing to lose and nothing
to gain.

If we want to change the scenery, we must change where we want to go.

Nishkama karma: action without a desire for, or expectation of, results or end
goals.

>


Re: [silk] Steganography: This clever AI hid data from its creators to cheat at its appointed task

2019-01-01 Thread Charles Haynes
On Tue., 1 Jan. 2019, 1:11 am Srini RamaKrishnan wrote:

> Monkey see, monkey do. AI only learns from the behavior of humans
>

This is not true. Specifically AlphaZero learns from the rules of the game,
and playing (randomly at first) against itself. GANs (like the one in the
article) do not learn from humans. While humans do define the success
criteria, that does not constitute "learning from behaviour of humans" any
more than drawing a finish line constitutes "running a race."
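
To make that concrete, here is a toy sketch of the self-play idea, the kind of
thing you can run yourself. It is emphatically not AlphaZero - no neural
network, no tree search - just a tabular Monte Carlo value update on
tic-tac-toe, and every name in it is illustrative. The only inputs are the
rules of the game; the agent starts out playing randomly against itself and
improves purely from the outcomes it generates.

# Self-play on tic-tac-toe: learning from the rules and random play alone.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)      # (board, move) -> estimated value for the mover
ALPHA, EPSILON = 0.3, 0.1   # learning rate, exploration rate

def choose(board, moves):
    if random.random() < EPSILON:                    # explore: play randomly
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit: best known move

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves or winner(board):
            break
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        player = "O" if player == "X" else "X"
    result = winner(board)
    # Credit every move made by the winner, debit the loser, leave draws at zero.
    for state, move, mover in history:
        reward = 0.0 if result is None else (1.0 if mover == result else -1.0)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

for _ in range(50000):
    play_one_game()

print("distinct (state, move) pairs valued:", len(Q))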

-- Charles

>