Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread uurtamo .
Nice
On Mar 31, 2016 7:48 AM, "Álvaro Begué"  wrote:

> A very simple-minded way of trying to identify what a particular neuron in
> the upper layers is doing is to find the 50 positions in the database that
> make it produce the highest activation values. If the neuron is in one of
> the convolutional layers, you get a full 19x19 image of activation values,
> which would let you figure out what particular local pattern it seems to be
> detecting. If the neuron is in a fully-connected layer at the end, you only
> get one overall value, but you could still try to compute the gradient of
> its activation with respect to all the inputs, and that would tell you
> something about what parts of the board led to this activation being high.
> I think this would be a fun exercise, and you'll probably be able to
> understand something about at least some of the neurons.
>
> Álvaro.
>
>
>
> On Thu, Mar 31, 2016 at 9:55 AM, Michael Markefka <
> michael.marke...@gmail.com> wrote:
>
>> Then again DNNs also manage feature extraction on unlabeled data with
>> increasing levels of abstraction towards upper layers. Perhaps one
>> could apply such a specifically trained DNN to artificial board
>> situations that emphasize specific concepts and examine the network's
>> activation, trying to map activation patterns to human Go concepts.
>>
>> Still hard work, and questionable payoff, but just wanted to pitch
>> that in as idea.
>>
>>
>> > However, if someone was to do all the dirty work setting up all the
>> > infrastructure, hunt down the training data and then financially
>> facilitate
>> > the thousands of hours of human work and the tens to hundreds of
>> thousands
>> > of hours of automated learning work, I would become substantially more
>> > interested...and think a high quality desired outcome remains a low
>> > probability.
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread djhbrown .
On 31/03/2016, "Ingo Althöfer" <3-hirn-ver...@gmx.de> wrote:
> somehow he went into a
> "strange loop", and in the end he was asked to stop posting.

asked by the very person i was trying to help!  That was the last straw.

Ironically, whilst i was openly trying to help all montes, not just
the one i admired most, DeepMind was secretly making a new monster
that would eat it for breakfast and bury it.

It was not me that was in a strange loop, but it was my mistake to
respond to various thickheaded trolls.  That experience taught me that
the only way for children to cope with internet bullies is to ignore
them, something that BF Skinner had worked out decades ago.

an author's best friend is his critic, but only so long as the
criticism has a rational point and is not merely egotistical vented
spleen.

this is my first and last message about me.  further personal attacks
and attempts by egomaniacs to justify their antisocial behaviour will
be ignored.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Álvaro Begué
A very simple-minded way of trying to identify what a particular neuron in
the upper layers is doing is to find the 50 positions in the database that
make it produce the highest activation values. If the neuron is in one of
the convolutional layers, you get a full 19x19 image of activation values,
which would let you figure out what particular local pattern it seems to be
detecting. If the neuron is in a fully-connected layer at the end, you only
get one overall value, but you could still try to compute the gradient of
its activation with respect to all the inputs, and that would tell you
something about what parts of the board led to this activation being high.
I think this would be a fun exercise, and you'll probably be able to
understand something about at least some of the neurons.
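As a minimal sketch of that ranking step (assuming a PyTorch policy/value
net whose conv layers output (batch, channels, 19, 19) feature maps; model,
layer and the positions iterator below are hypothetical names, not any
particular program's API):

    import torch

    # Hypothetical setup: `model` is a trained net, `layer` one of its conv
    # modules, and `positions` yields (board_tensor, metadata) pairs drawn
    # from a game database.
    def top_activating_positions(model, layer, channel, positions, k=50):
        """Return the k positions that most strongly activate `channel` of
        `layer`, each with its full 19x19 activation map."""
        stash, records = {}, []
        handle = layer.register_forward_hook(
            lambda mod, inp, out: stash.__setitem__("map", out.detach()))
        model.eval()
        with torch.no_grad():
            for board, meta in positions:
                model(board.unsqueeze(0))          # fills stash["map"]
                amap = stash["map"][0, channel]    # 19x19 activation image
                records.append((amap.max().item(), meta, amap))
        handle.remove()
        records.sort(key=lambda r: r[0], reverse=True)
        return records[:k]

For a neuron in a fully-connected layer the same hook yields a single number
per position, and the gradient of that number with respect to the input
planes gives the board-shaped saliency picture mentioned above.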

Álvaro.



On Thu, Mar 31, 2016 at 9:55 AM, Michael Markefka <
michael.marke...@gmail.com> wrote:

> Then again DNNs also manage feature extraction on unlabeled data with
> increasing levels of abstraction towards upper layers. Perhaps one
> could apply such a specifically trained DNN to artificial board
> situations that emphasize specific concepts and examine the network's
> activation, trying to map activation patterns to human Go concepts.
>
> Still hard work, and questionable payoff, but just wanted to pitch
> that in as idea.
>
>
> > However, if someone was to do all the dirty work setting up all the
> > infrastructure, hunt down the training data and then financially
> facilitate
> > the thousands of hours of human work and the tens to hundreds of
> thousands
> > of hours of automated learning work, I would become substantially more
> > interested...and think a high quality desired outcome remains a low
> > probability.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Michael Markefka
Then again DNNs also manage feature extraction on unlabeled data with
increasing levels of abstraction towards upper layers. Perhaps one
could apply such a specifically trained DNN to artificial board
situations that emphasize specific concepts and examine the network's
activation, trying to map activation patterns to human Go concepts.

Still hard work, and questionable payoff, but I just wanted to pitch
that in as an idea.
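One very rough way to try that, sketched under the assumption of a PyTorch
net and hand-built position sets (concept_boards, baseline_boards and layer
are placeholder names): feed a batch of artificial positions that emphasise
one concept and a batch of neutral positions, then compare mean per-channel
activations.

    import torch

    # `concept_boards` and `baseline_boards` are assumed pre-encoded input
    # tensors (e.g. ladder positions vs. quiet positions); `layer` is a conv
    # module of an already trained network.
    def concept_response(model, layer, concept_boards, baseline_boards):
        """Per-channel difference in mean activation between the two sets;
        strongly positive channels are candidates for the probed concept."""
        stash = {}
        handle = layer.register_forward_hook(
            lambda mod, inp, out: stash.__setitem__("a", out.detach()))
        model.eval()
        with torch.no_grad():
            model(concept_boards)
            concept_mean = stash["a"].mean(dim=(0, 2, 3))
            model(baseline_boards)
            baseline_mean = stash["a"].mean(dim=(0, 2, 3))
        handle.remove()
        return concept_mean - baseline_mean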


> However, if someone was to do all the dirty work setting up all the
> infrastructure, hunt down the training data and then financially facilitate
> the thousands of hours of human work and the tens to hundreds of thousands
> of hours of automated learning work, I would become substantially more
> interested...and think a high quality desired outcome remains a low
> probability.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Jim O'Flaherty
Ingo,

That's precisely what has my knickers in a twist regarding djhbrown: his
prior behavior. I'm with you in hoping that he better manages his
participation and uses list feedback to spend a little more time filtering
his "creativity" so that it fits closer to what this specific audience is
listening for. Thus far, with some minor exceptions, he's been substantially
better this time.


Jim


On Thu, Mar 31, 2016 at 3:30 AM, "Ingo Althöfer" <3-hirn-ver...@gmx.de>
wrote:

> Hello all,
>
> "Brian Sheppard"  wrote:
> > ... This is out of line, IMO. Djhbrown asked a sensible question that has
> > valuable intentions. I would like to see responsible, thoughtful, and
> > constructive replies.
>
> there is a natural explanation why some people here react allergic
> to Djhbrown's new contributions.
>
> He had an active phase on the list already from early August 2015 to mid
> October. Things started interestingly, but somehow he went into a
> "strange loop", and in the end he was asked to stop posting.
> http://computer-go.org/pipermail/computer-go/2015-October/008051.html
>
> Perhaps all sides can help that things run better now.
>
> Ingo.
>
> PS. For my interest in computer-assisted human go
> visualisation questions on DCNNs are indeed interesting.
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Jim O'Flaherty
Robert,

This is exactly why I think the "explanation of the suggested moves"
requires being baked much more deeply into the participating ANNs (a
bottom-up approach). And given what I have read thus far, I still see the
risk as extraordinarily high and the payoff as exceedingly low, outside an
academic context.

However, if someone were to do all the dirty work of setting up the
infrastructure, hunting down the training data and then financially
facilitating the thousands of hours of human work and the tens to hundreds
of thousands of hours of automated learning work, I would become
substantially more interested... and I would still think a high-quality
outcome remains a low probability.


Jim


On Thu, Mar 31, 2016 at 7:33 AM, Robert Jasiek  wrote:

> On 31.03.2016 13:43, Bill Whig wrote:
>
>> Joseki learning requires much more than move suggestions.
>>>
>> Prove it.
>>
>
> Read my four joseki books and my two books on positional judgement for a
> proof. Hints: global context, evaluation, strategic choices, tactical
> reading, many strategic concepts etc. All these are required for good human
> joseki play and go (far) beyond move suggestions.
>
> --
> robert jasiek
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Bill Whig

Message: 3
Date: Thu, 31 Mar 2016 08:35:51 +0200
From: Robert Jasiek <jas...@snafu.de>
To: computer-go@computer-go.org
Subject: Re: [Computer-go] new challenge for Go programmers
Message-ID: <56fcc547@snafu.de>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 31.03.2016 03:52, Bill Whig wrote:


If the program merely output 3-5 suggested positions, that would 
probably suffice. Even an advanced beginner such as myself could, I believe, 
understand why they are good choices. Just having the "short list" would 
probably be quite an educational tool! It would probably even help teach 
joseki.


No. Joseki learning requires much more than move suggestions.


-- 
robert jasiek



Prove it. 


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Brian Sheppard
Yes, I recall that earlier episode. I would be happy to have a better 
relationship going forward.

I wrote some explanation generators for Scrabble and Chess AI, but these were 
much simpler systems that I could break apart. E.g., the Chess engine would 
play two moves out until a "quiet" position was reached, and then explain which 
evaluation parameters were affected. But even this was not easy, because 
sometimes after playing out for a bit the AI would "change its mind" by 
deciding that the other move was better after all. And evaluation factors were 
sometimes too numerous to list individually, even after pruning the factors 
that are too small to change the decision. And explanation by "comparing 
factors" assumed a linear evaluation function.

So I see huge challenges scaling that up.

Another approach that works well for tree-search games is just to let the human 
explore, in the style of an opening library. The engine simply responds to the 
moves that the human proposes, creating trees and assigning values to 
endpoints. MCTS programs show a "board control" visual, which might be enough 
to explain the positional evaluations. Basically: let the human absorb a lot of 
case studies.

There was a paper about postal-go where a human player used multiple programs 
in this way to construct a phenomenally strong player.
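A bare-bones sketch of that "let the human explore" loop, driving an engine
over a GTP pipe (the gnugo command line is only an example; score and
ownership commands vary by engine, so only the core commands are shown):

    import subprocess

    def gtp(proc, command):
        """Send one GTP command and return the engine's reply text."""
        proc.stdin.write(command + "\n")
        proc.stdin.flush()
        reply = []
        while True:
            line = proc.stdout.readline()
            if line.strip() == "":        # GTP replies end with a blank line
                break
            reply.append(line.strip())
        return " ".join(reply).lstrip("= ")

    engine = subprocess.Popen(["gnugo", "--mode", "gtp"],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              universal_newlines=True)
    gtp(engine, "boardsize 19")
    gtp(engine, "clear_board")
    gtp(engine, "play black D4")          # a move the human wants to try
    print(gtp(engine, "genmove white"))   # the engine's answer
    gtp(engine, "quit")

Wrapping this in a small tree of human-proposed variations, each answered
and valued by the engine, gives the opening-library style of exploration
described above.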

-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
"Ingo Althöfer"
Sent: Thursday, March 31, 2016 4:31 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] new challenge for Go programmers

Hello all,

"Brian Sheppard" <sheppar...@aol.com> wrote:
> ... This is out of line, IMO. Djhbrown asked a sensible question that 
> has valuable intentions. I would like to see responsible, thoughtful, 
> and constructive replies.

there is a natural explanation why some people here react allergic to 
Djhbrown's new contributions.

He had an active phase on the list already from early August 2015 to mid 
October. Things started interestingly, but somehow he went into a "strange 
loop", and in the end he was asked to stop posting.
http://computer-go.org/pipermail/computer-go/2015-October/008051.html

Perhaps all sides can help that things run better now.

Ingo.

PS. For my interest in computer-assisted human go visualisation questions on 
DCNNs are indeed interesting.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Ingo Althöfer
Hello all,

"Brian Sheppard"  wrote:
> ... This is out of line, IMO. Djhbrown asked a sensible question that has 
> valuable intentions. I would like to see responsible, thoughtful, and 
> constructive replies.

there is a natural explanation why some people here react allergically
to Djhbrown's new contributions.

He had an active phase on the list already from early August 2015 to mid 
October. Things started interestingly, but somehow he went into a
"strange loop", and in the end he was asked to stop posting.
http://computer-go.org/pipermail/computer-go/2015-October/008051.html

Perhaps all sides can help that things run better now.

Ingo.

PS. For my interest in computer-assisted human Go,
visualisation questions on DCNNs are indeed interesting.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Robert Jasiek

On 31.03.2016 03:52, Bill Whig wrote:

If the program merely output 3-5 suggested positions, that would probably suffice.
Even an advanced beginner such as myself could, I believe, understand why they are good
choices. Just having the "short list" would probably be quite an educational
tool! It would probably even help teach joseki.


No. Joseki learning requires much more than move suggestions.

--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Peter Kollarik
"Similar to Neuro-Science, where reverse engineering methods like fMRI
reveal structure in brain activity, we demonstrated how to describe the
agent’s policy with simple logic rules by processing the network’s neural
activity. This is important since often humans can understand the optimal
policy and therefore understand what are the agent’s weaknesses. The
ability to understand the hierarchical structure of the policy can help in
distilling it into a simpler architecture. Moreover, we can direct learning
resources to clusters with inferior performance by prioritized sampling"
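Transferred to Go, a very loose sketch of that clustering step might look
like the following; activations (one row of penultimate-layer outputs per
position) and values (the net's win estimates) are assumed to exist already
as NumPy arrays, and nothing here comes from the paper's own code.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_states(activations, values, n_clusters=20):
        """Group positions by their internal representation and report size
        and mean predicted value per cluster, weakest clusters first."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        labels = km.fit_predict(activations)
        summary = [(c,
                    int((labels == c).sum()),
                    float(values[labels == c].mean()))
                   for c in range(n_clusters)]
        return sorted(summary, key=lambda s: s[2])

The weakest clusters would then be the ones to inspect by hand, or to feed
back into training by prioritized sampling, as the quoted passage suggests.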

On Thu, Mar 31, 2016 at 8:10 AM, Peter Kollarik 
wrote:

> this is also interesting, to visualize "how the NN thinks"
>
>
> http://blog.acolyer.org/2016/03/02/graying-the-black-box-understanding-dqns/
>
> On Wed, Mar 30, 2016 at 10:38 PM, Ben  wrote:
>
>> It would be very interesting to see what these go playing neural networks
>> dream about [1]. Admittedly it does not explain any specific moves the AI
>> does - but it might show some interesting patterns that are encoded in the
>> NN and might even give some insight into "how the NN thinks".
>>
>> Put differently: select a single neuron and find a board pattern such
>> that the excitation of this neuron is maximal. With some luck you might be
>> able to give meaning to this individual neuron or to single layers of the
>> network (like how the first layers in pattern recognition basically detect
>> edges).
>>
>>
>> ~ Ben
>>
>> [1]
>> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
>>
>>
>> Am 30.03.2016 22:23, schrieb Jim O'Flaherty:
>>
>>> I agree, "cannot" is too strong. But, values close enough to
>>> "extremely difficult as to be unlikely" is why I used it.
>>>
>>> On Mar 30, 2016 11:12 AM, "Robert Jasiek"  wrote:
>>>
>>> On 30.03.2016 16:58, Jim O'Flaherty wrote:

 My own study says that we cannot top down include "English
> explanations" of
> how the ANNs (Artificial Neural Networks, of which DCNN is just
> one type)
> arrive a conclusions.
>

 "cannot" is a strong word. I would use it only if it were proven
 mathematically.

 --
 robert jasiek
 ___
 Computer-go mailing list
 Computer-go@computer-go.org
 http://computer-go.org/mailman/listinfo/computer-go

>>>
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-31 Thread Peter Kollarik
this is also interesting, to visualize "how the NN thinks"

http://blog.acolyer.org/2016/03/02/graying-the-black-box-understanding-dqns/

On Wed, Mar 30, 2016 at 10:38 PM, Ben  wrote:

> It would be very interesting to see what these go playing neural networks
> dream about [1]. Admittedly it does not explain any specific moves the AI
> does - but it might show some interesting patterns that are encoded in the
> NN and might even give some insight into "how the NN thinks".
>
> Put differently: select a single neuron and find a board pattern such that
> the excitation of this neuron is maximal. With some luck you might be able
> to give meaning to this individual neuron or to single layers of the
> network (like how the first layers in pattern recognition basically detect
> edges).
>
>
> ~ Ben
>
> [1]
> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
>
>
> Am 30.03.2016 22:23, schrieb Jim O'Flaherty:
>
>> I agree, "cannot" is too strong. But, values close enough to
>> "extremely difficult as to be unlikely" is why I used it.
>>
>> On Mar 30, 2016 11:12 AM, "Robert Jasiek"  wrote:
>>
>> On 30.03.2016 16:58, Jim O'Flaherty wrote:
>>>
>>> My own study says that we cannot top down include "English
 explanations" of
 how the ANNs (Artificial Neural Networks, of which DCNN is just
 one type)
 arrive a conclusions.

>>>
>>> "cannot" is a strong word. I would use it only if it were proven
>>> mathematically.
>>>
>>> --
>>> robert jasiek
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread uurtamo .
Fair enough
On Mar 30, 2016 5:20 PM, "Brian Sheppard" <sheppar...@aol.com> wrote:

> This is out of line, IMO. Djhbrown asked a sensible question that has
> valuable intentions. I would like to see responsible, thoughtful, and
> constructive replies.
>
>
>
> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
> Behalf Of *uurtamo .
> *Sent:* Wednesday, March 30, 2016 7:43 PM
> *To:* computer-go <computer-go@computer-go.org>
> *Subject:* Re: [Computer-go] new challenge for Go programmers
>
>
>
> He cannot possibly write code
>
> On Mar 30, 2016 4:38 PM, "Jim O'Flaherty" <jim.oflaherty...@gmail.com>
> wrote:
>
> I don't think djhbrown is a software engineer. And he seems to have the
> most fits. :)
>
>
>
> On Wed, Mar 30, 2016 at 6:37 PM, uurtamo . <uurt...@gmail.com> wrote:
>
> This is clearly the alphago final laugh; make an email list responder to
> send programmers into fits.
>
> s.
>
> On Mar 30, 2016 4:16 PM, "djhbrown ." <djhbr...@gmail.com> wrote:
>
> thank you very much Ben for sharing the inception work, which may well
> open the door to a new avenue of AI research.  i am particularly
> impressed by one pithy statement the authors make:
>
>  "We must go deeper: Iterations"
>
> i remember as an undergrad being impressed by the expressive power of
> recursive functions, and later by the iterative quality of biological
> growth and its fractal nature.
>
> seeing animals in clouds is a bit like seeing geta in a go position;
> so maybe one way to approach the problem of chatting with a CNN might
> be to seek correlations between convolution weights and successive
> stone configurations that turn up time and time again in games.
>
> it may be that some kind of iterative procedure could do this, just as
> my iterative procedure for circumscribing a group has a recursive
> quality to its definition.
>
> all you need then is to give such a correlation a name, and you will
> be on the way to discovering a new language for talking about Go.
>
>
> On 31/03/2016, Ben <ben_computer...@hemio.de> wrote:
> > It would be very interesting to see what these go playing neural
> > networks dream about [1].
> > [1]
> >
> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Bill Whig

 
If the program merely output 3-5 suggested positions, that would 
probably suffice. Even an advanced beginner such as myself could, I believe, 
understand why they are good choices. Just having the "short list" would 
probably be quite an educational tool! It would probably even help teach 
joseki.

Bill Whig
 

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Brian Sheppard
This is out of line, IMO. Djhbrown asked a sensible question that has valuable 
intentions. I would like to see responsible, thoughtful, and constructive 
replies.

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
uurtamo .
Sent: Wednesday, March 30, 2016 7:43 PM
To: computer-go <computer-go@computer-go.org>
Subject: Re: [Computer-go] new challenge for Go programmers

 

He cannot possibly write code

On Mar 30, 2016 4:38 PM, "Jim O'Flaherty" <jim.oflaherty...@gmail.com 
<mailto:jim.oflaherty...@gmail.com> > wrote:

I don't think djhbrown is a software engineer. And he seems to have the most 
fits. :)

 

On Wed, Mar 30, 2016 at 6:37 PM, uurtamo . <uurt...@gmail.com 
<mailto:uurt...@gmail.com> > wrote:

This is clearly the alphago final laugh; make an email list responder to send 
programmers into fits.

s.

On Mar 30, 2016 4:16 PM, "djhbrown ." <djhbr...@gmail.com 
<mailto:djhbr...@gmail.com> > wrote:

thank you very much Ben for sharing the inception work, which may well
open the door to a new avenue of AI research.  i am particularly
impressed by one pithy statement the authors make:

 "We must go deeper: Iterations"

i remember as an undergrad being impressed by the expressive power of
recursive functions, and later by the iterative quality of biological
growth and its fractal nature.

seeing animals in clouds is a bit like seeing geta in a go position;
so maybe one way to approach the problem of chatting with a CNN might
be to seek correlations between convolution weights and successive
stone configurations that turn up time and time again in games.

it may be that some kind of iterative procedure could do this, just as
my iterative procedure for circumscribing a group has a recursive
quality to its definition.

all you need then is to give such a correlation a name, and you will
be on the way to discovering a new language for talking about Go.


On 31/03/2016, Ben <ben_computer...@hemio.de <mailto:ben_computer...@hemio.de> 
> wrote:
> It would be very interesting to see what these go playing neural
> networks dream about [1].
> [1]
> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
___
Computer-go mailing list
Computer-go@computer-go.org <mailto:Computer-go@computer-go.org> 
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org <mailto:Computer-go@computer-go.org> 
http://computer-go.org/mailman/listinfo/computer-go

 


___
Computer-go mailing list
Computer-go@computer-go.org <mailto:Computer-go@computer-go.org> 
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread uurtamo .
He cannot possibly write code
On Mar 30, 2016 4:38 PM, "Jim O'Flaherty" 
wrote:

> I don't think djhbrown is a software engineer. And he seems to have the
> most fits. :)
>
> On Wed, Mar 30, 2016 at 6:37 PM, uurtamo .  wrote:
>
>> This is clearly the alphago final laugh; make an email list responder to
>> send programmers into fits.
>>
>> s.
>> On Mar 30, 2016 4:16 PM, "djhbrown ."  wrote:
>>
>>> thank you very much Ben for sharing the inception work, which may well
>>> open the door to a new avenue of AI research.  i am particularly
>>> impressed by one pithy statement the authors make:
>>>
>>>  "We must go deeper: Iterations"
>>>
>>> i remember as an undergrad being impressed by the expressive power of
>>> recursive functions, and later by the iterative quality of biological
>>> growth and its fractal nature.
>>>
>>> seeing animals in clouds is a bit like seeing geta in a go position;
>>> so maybe one way to approach the problem of chatting with a CNN might
>>> be to seek correlations between convolution weights and successive
>>> stone configurations that turn up time and time again in games.
>>>
>>> it may be that some kind of iterative procedure could do this, just as
>>> my iterative procedure for circumscribing a group has a recursive
>>> quality to its definition.
>>>
>>> all you need then is to give such a correlation a name, and you will
>>> be on the way to discovering a new language for talking about Go.
>>>
>>>
>>> On 31/03/2016, Ben  wrote:
>>> > It would be very interesting to see what these go playing neural
>>> > networks dream about [1].
>>> > [1]
>>> >
>>> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Jim O'Flaherty
I don't think djhbrown is a software engineer. And he seems to have the
most fits. :)

On Wed, Mar 30, 2016 at 6:37 PM, uurtamo .  wrote:

> This is clearly the alphago final laugh; make an email list responder to
> send programmers into fits.
>
> s.
> On Mar 30, 2016 4:16 PM, "djhbrown ."  wrote:
>
>> thank you very much Ben for sharing the inception work, which may well
>> open the door to a new avenue of AI research.  i am particularly
>> impressed by one pithy statement the authors make:
>>
>>  "We must go deeper: Iterations"
>>
>> i remember as an undergrad being impressed by the expressive power of
>> recursive functions, and later by the iterative quality of biological
>> growth and its fractal nature.
>>
>> seeing animals in clouds is a bit like seeing geta in a go position;
>> so maybe one way to approach the problem of chatting with a CNN might
>> be to seek correlations between convolution weights and successive
>> stone configurations that turn up time and time again in games.
>>
>> it may be that some kind of iterative procedure could do this, just as
>> my iterative procedure for circumscribing a group has a recursive
>> quality to its definition.
>>
>> all you need then is to give such a correlation a name, and you will
>> be on the way to discovering a new language for talking about Go.
>>
>>
>> On 31/03/2016, Ben  wrote:
>> > It would be very interesting to see what these go playing neural
>> > networks dream about [1].
>> > [1]
>> >
>> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread uurtamo .
This is clearly AlphaGo's final laugh: make an email list responder that
sends programmers into fits.

s.
On Mar 30, 2016 4:16 PM, "djhbrown ."  wrote:

> thank you very much Ben for sharing the inception work, which may well
> open the door to a new avenue of AI research.  i am particularly
> impressed by one pithy statement the authors make:
>
>  "We must go deeper: Iterations"
>
> i remember as an undergrad being impressed by the expressive power of
> recursive functions, and later by the iterative quality of biological
> growth and its fractal nature.
>
> seeing animals in clouds is a bit like seeing geta in a go position;
> so maybe one way to approach the problem of chatting with a CNN might
> be to seek correlations between convolution weights and successive
> stone configurations that turn up time and time again in games.
>
> it may be that some kind of iterative procedure could do this, just as
> my iterative procedure for circumscribing a group has a recursive
> quality to its definition.
>
> all you need then is to give such a correlation a name, and you will
> be on the way to discovering a new language for talking about Go.
>
>
> On 31/03/2016, Ben  wrote:
> > It would be very interesting to see what these go playing neural
> > networks dream about [1].
> > [1]
> >
> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread djhbrown .
thank you very much Ben for sharing the inception work, which may well
open the door to a new avenue of AI research.  i am particularly
impressed by one pithy statement the authors make:

 "We must go deeper: Iterations"

i remember as an undergrad being impressed by the expressive power of
recursive functions, and later by the iterative quality of biological
growth and its fractal nature.

seeing animals in clouds is a bit like seeing geta in a go position;
so maybe one way to approach the problem of chatting with a CNN might
be to seek correlations between convolution weights and successive
stone configurations that turn up time and time again in games.

it may be that some kind of iterative procedure could do this, just as
my iterative procedure for circumscribing a group has a recursive
quality to its definition.

all you need then is to give such a correlation a name, and you will
be on the way to discovering a new language for talking about Go.
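One crude way to hunt for such correlations, with every name below purely
hypothetical: take one convolution channel, look at the point where it fires
most strongly in each position, and count which 3x3 stone configurations
keep turning up there.

    import collections
    import numpy as np

    # `boards` is a list of 19x19 integer arrays (0 empty, 1 black, 2 white)
    # and `feature_maps` the matching 19x19 activation maps of one channel.
    def frequent_patterns(boards, feature_maps, top_n=10):
        counts = collections.Counter()
        for board, fmap in zip(boards, feature_maps):
            y, x = np.unravel_index(np.argmax(fmap), fmap.shape)
            if 1 <= y <= 17 and 1 <= x <= 17:           # ignore edge points
                patch = tuple(board[y - 1:y + 2, x - 1:x + 2].flatten())
                counts[patch] += 1
        return counts.most_common(top_n)                # recurring shapes

A configuration that dominates the counts is a candidate for a name, which
is roughly the "new language" step.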


On 31/03/2016, Ben  wrote:
> It would be very interesting to see what these go playing neural
> networks dream about [1].
> [1]
> http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread uurtamo .
Guys, please take a day.

steve
On Mar 30, 2016 1:52 PM, "Brian Sheppard" <sheppar...@aol.com> wrote:

> Trouble is that it is very difficult to put certain concepts into
> mathematics. For instance: “well, I tried to find parameters that did a
> better job of minimizing that error function, but eventually I lost
> patience.” :-)
>
>
>
> Neural network parameters are not directly humanly understandable. They
> just happen to minimize an error function on a sample of training cases
> that might not even be representative. So you want to reason “around” the
> NN by interrogating it in some way, and trying to explain the results.
>
>
>
> If anyone wants to pursue this research, I suggest several avenues.
>
>
>
> First, you could differentiate the output with respect to each input to
> determine the aspects of the position that weigh on the result most
> heavily. Then, assuming that you can compare the scale of the inputs in
> some way, and assuming that the inputs are something that is understandable
> in the problem domain, maybe you can construct an explanation.
>
>
>
> Second, you could construct a set of hypothetical different similar
> positions, and see how those results differ. E.g., make a set of examples
> by adding a black stone and a white stone to each empty point on the board,
> or removing each existing stone from the board, and then evaluate the NN on
> those cases, then do decision-tree induction to discover patterns.
>
>
>
> Third, in theory decision trees are just as powerful as NN (in that both
> are asymptotically optimal learning systems), and it happens that decision
> trees provide humanly understandable explanations for reasoning. So maybe
> you can replace the NN with DT and have equally impressive performance, and
> pick up human understandability as a side-effect.
>
>
>
> Actually, if anyone is interested in making computer go programs that do
> not require GPUs and super-computers, then looking into DTs is advisable.
>
>
>
> Best,
>
> Brian
>
>
>
>
>
> *From:* Computer-go [mailto:computer-go-boun...@computer-go.org] *On
> Behalf Of *Jim O'Flaherty
> *Sent:* Wednesday, March 30, 2016 4:24 PM
> *To:* computer-go@computer-go.org
> *Subject:* Re: [Computer-go] new challenge for Go programmers
>
>
>
> I agree, "cannot" is too strong. But, values close enough to "extremely
> difficult as to be unlikely" is why I used it.
>
> On Mar 30, 2016 11:12 AM, "Robert Jasiek" <jas...@snafu.de> wrote:
>
> On 30.03.2016 16:58, Jim O'Flaherty wrote:
>
> My own study says that we cannot top down include "English explanations" of
> how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
> arrive a conclusions.
>
>
> "cannot" is a strong word. I would use it only if it were proven
> mathematically.
>
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Brian Sheppard
Trouble is that it is very difficult to put certain concepts into mathematics. 
For instance: “well, I tried to find parameters that did a better job of 
minimizing that error function, but eventually I lost patience.” :-)

 

Neural network parameters are not directly humanly understandable. They just 
happen to minimize an error function on a sample of training cases that might 
not even be representative. So you want to reason “around” the NN by 
interrogating it in some way, and trying to explain the results.

 

If anyone wants to pursue this research, I suggest several avenues.

 

First, you could differentiate the output with respect to each input to 
determine the aspects of the position that weigh on the result most heavily. 
Then, assuming that you can compare the scale of the inputs in some way, and 
assuming that the inputs are something that is understandable in the problem 
domain, maybe you can construct an explanation.
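As a minimal sketch of that first avenue, assuming a differentiable PyTorch
value net called model that maps a (1, planes, 19, 19) tensor to a score
(the model and the input encoding are placeholders, not any specific
program's):

    import torch

    def input_saliency(model, board_tensor):
        """Gradient of the output with respect to every input plane and
        point: the parts of the position the evaluation is most sensitive
        to."""
        x = board_tensor.clone().requires_grad_(True)
        model.eval()
        out = model(x)
        out.sum().backward()     # .sum() in case the output is not a scalar
        return x.grad.abs()      # same shape as the input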

 

Second, you could construct a set of hypothetical different similar positions, 
and see how those results differ. E.g., make a set of examples by adding a 
black stone and a white stone to each empty point on the board, or removing 
each existing stone from the board, and then evaluate the NN on those cases, 
then do decision-tree induction to discover patterns.
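A sketch of the second avenue along the same lines; evaluate is assumed to
return the net's win estimate for a 19x19 NumPy array (0 empty, 1 black,
2 white), and the features fed to the tree are deliberately simple:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    def perturbation_tree(evaluate, board, threshold=0.02, max_depth=4):
        """Edit every point of the board, label each edit by whether the
        evaluation moves more than `threshold`, and induce a small tree."""
        base = evaluate(board)
        features, labels = [], []
        for y in range(19):
            for x in range(19):
                for colour in (0, 1, 2):            # empty, black, white
                    if board[y, x] == colour:
                        continue
                    edited = board.copy()
                    edited[y, x] = colour
                    features.append([y, x, colour])
                    labels.append(
                        int(abs(evaluate(edited) - base) > threshold))
        tree = DecisionTreeClassifier(max_depth=max_depth).fit(features,
                                                               labels)
        print(export_text(tree, feature_names=["row", "col", "colour"]))
        return tree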

 

Third, in theory decision trees are just as powerful as NNs (in that both are 
asymptotically optimal learning systems), and it happens that decision trees 
provide humanly understandable explanations for reasoning. So maybe you can 
replace the NN with DT and have equally impressive performance, and pick up 
human understandability as a side-effect.
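And a sketch of the third avenue, which in current terms amounts to
distilling the net into a tree; X (hand-crafted features per position) and
nn_values (the net's evaluations) are assumed arrays, and a serious attempt
would need far richer features than this suggests:

    from sklearn.tree import DecisionTreeRegressor

    def distil_to_tree(X, nn_values, max_depth=8):
        """Fit a depth-limited regression tree to imitate the net."""
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, nn_values)
        print("imitation R^2 on the training data:", tree.score(X, nn_values))
        return tree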

 

Actually, if anyone is interested in making computer go programs that do not 
require GPUs and super-computers, then looking into DTs is advisable.

 

Best,

Brian

 

 

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Jim 
O'Flaherty
Sent: Wednesday, March 30, 2016 4:24 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] new challenge for Go programmers

 

I agree, "cannot" is too strong. But, values close enough to "extremely 
difficult as to be unlikely" is why I used it.

On Mar 30, 2016 11:12 AM, "Robert Jasiek" <jas...@snafu.de 
<mailto:jas...@snafu.de> > wrote:

On 30.03.2016 16:58, Jim O'Flaherty wrote:

My own study says that we cannot top down include "English explanations" of
how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
arrive at conclusions.


"cannot" is a strong word. I would use it only if it were proven mathematically.

-- 
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org <mailto:Computer-go@computer-go.org> 
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Ben
It would be very interesting to see what these go playing neural 
networks dream about [1]. Admittedly it does not explain any specific 
moves the AI does - but it might show some interesting patterns that are 
encoded in the NN and might even give some insight into "how the NN 
thinks".


Put differently: select a single neuron and find a board pattern such 
that the excitation of this neuron is maximal. With some luck you might 
be able to give meaning to this individual neuron or to single layers of 
the network (like how the first layers in pattern recognition basically 
detect edges).
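A rough "inceptionism" sketch of that, with hypothetical names throughout:
gradient-ascend a free (continuous, not legal-position) input so that one
conv channel of a trained PyTorch net fires as strongly as possible.

    import torch

    def maximize_neuron(model, layer, channel, shape=(1, 4, 19, 19),
                        steps=200, lr=0.1):
        """Find an input 'board' that maximally excites one conv channel."""
        stash = {}
        handle = layer.register_forward_hook(
            lambda mod, inp, out: stash.__setitem__("a", out))
        for p in model.parameters():     # only the input should be optimised
            p.requires_grad_(False)
        x = torch.zeros(shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        model.eval()
        for _ in range(steps):
            opt.zero_grad()
            model(x)
            loss = -stash["a"][0, channel].mean()  # maximise mean activation
            loss.backward()
            opt.step()
        handle.remove()
        return x.detach()                # a dream board, not a legal position

Projecting the result back onto a 19x19 grid is then the "with some luck you
might be able to give meaning" part.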



~ Ben

[1] 
http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html


Am 30.03.2016 22:23, schrieb Jim O'Flaherty:

I agree, "cannot" is too strong. But, values close enough to
"extremely difficult as to be unlikely" is why I used it.

On Mar 30, 2016 11:12 AM, "Robert Jasiek"  wrote:


On 30.03.2016 16:58, Jim O'Flaherty wrote:


My own study says that we cannot top down include "English
explanations" of
how the ANNs (Artificial Neural Networks, of which DCNN is just
one type)
arrive at conclusions.


"cannot" is a strong word. I would use it only if it were proven
mathematically.

--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go


___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Jim O'Flaherty
I agree, "cannot" is too strong. But, values close enough to "extremely
difficult as to be unlikely" is why I used it.
On Mar 30, 2016 11:12 AM, "Robert Jasiek"  wrote:

> On 30.03.2016 16:58, Jim O'Flaherty wrote:
>
>> My own study says that we cannot top down include "English explanations"
>> of
>> how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
>> arrive a conclusions.
>>
>
> "cannot" is a strong word. I would use it only if it were proven
> mathematically.
>
> --
> robert jasiek
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread David Ongaro
On 30 Mar 2016, at 03:04, djhbrown .  wrote:
> 
> as to preconceived notions, my own notions are postconceived, having
> studied artificial intelligence and biological computation over 40
> post-doctoral years during which i have published 50 or so
> peer-reviewed scientific papers, some in respectable journals,
> including New Scientist.

And there might even be some valuable research in these papers. If so, please 
don't continue to spoil the reputation of your earlier years with pamphlets of 
the recent kind.

Btw: arguments by authority might work once or even twice, but not constantly.


> On 30/03/2016, Stefan Kaitschick  wrote:
>> Your lack of respect for task performance is misguided imo. Your
>> preconceived notions of what intelligence is, will lead you astray.
>> 
> 
> 
> -- 
> patient: "whenever i open my mouth, i get a shooting pain in my foot"
> doctor: "fire!"
> http://sites.google.com/site/djhbrown2/home
> https://www.youtube.com/user/djhbrown
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Robert Jasiek

djhbrown,

Even from a pure playing-strength perspective, it is not game over yet, 
because there is no guarantee yet of always avoiding a sudden fall into 
holes of bad play, verification by reading is missing, and there is no 
optimisation for a better score when the game is being won anyway. For other 
AI applications, this means that there is no guarantee against unexpected 
bad actions, such as accidentally killing people.


As I have explained, shortly after the sacrifice squeeze in game 5 
AlphaGo had a winning position. Therefore, currently one should not call 
the squeeze a mistake. A more careful study of the few moves after the 
squeeze is necessary.


You mention several outdated principles and concepts, whose 
insufficiency I have explained elsewhere.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Robert Jasiek

On 30.03.2016 16:58, Jim O'Flaherty wrote:

My own study says that we cannot top down include "English explanations" of
how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
arrive at conclusions.


"cannot" is a strong word. I would use it only if it were proven 
mathematically.


--
robert jasiek
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Jim O'Flaherty
My own study says that we cannot, top down, include "English explanations" of
how ANNs (Artificial Neural Networks, of which the DCNN is just one type)
arrive at conclusions. If you want to translate the computational value of
an ANN into something other than the essential operation it is performing,
that computational value has to be "grown" into the ANN at the same time the
original computational value is being baked in by reinforcement learning.
And doing that costs considerably, both in computational energy and in the
extra time needed to grow an ANN that produces the integrated "suggest a
move and then offer human-meaningful English explanations for the
suggestion". And this assumes English as the language and a single move
suggestion. Add another human language and/or make it suggest more than one
move, and you explode the resources required to converge on a solution that
would eventually beat an amateur, much less a professional.

Consider ANNs from an entirely different place: our own wetware. Our
wetware doesn't learn English and attach explanations to most of its
cognitive activities. And for those activities to which it does attach
English explanations, we have discovered that it is very prone to blind
spots and severe biases that turn into feedback loops, magnifying the size
of the blind spots and the degree of biasing. So even evolution didn't
spend the time to give us a mechanism that can self-describe all or even
most of its operation. And introspection, the one faculty we do have for
self-evaluation, is very error-prone (apologies to all egos that just got
very activated by being publicly outed as less capable than they know they
actually are).

Another way to consider this is to find out what has happened in the Chess
world, which has similar desired goals. While they have not been using ANNs
nearly as strongly, they still have the same desire to produce
"English explanations" for move suggestions. I think you will find that,
even in this vastly simpler computational space, they haven't made much
progress in this area either. In other words, it is proving to be highly
expensive for insufficient payoff; i.e. an evolutionary dead end.

You say, "Unfortunately no one has a clue on how to put into words what
DCNN "know", to produce really meaningful and useful feedback, justifying
decisions around candidates, etc. This is very much worth investigating."

I have a clue. However, for me personally, I find the investment required
to do said investigation to be WAY too high compared to the actual value
I _might_ _eventually_ get from it. There is far more low-hanging
fruit in the Go AI and ANN space that I could choose (and am
choosing) before I would choose something as highly speculative as your
investigation.


On Wed, Mar 30, 2016 at 7:49 AM, djhbrown .  wrote:

> I fully agree with Goncalo that it would be worth investigating how
> one could write an algorithm to express in English what Alpha's or
> DCNNigo's nets
> have learned, and a month ago (before her astonishing achievement in
> March) offered some ideas on how this might be approached in a
> youtube comment on Kim's review of the Fan Hui games:
> https://www.youtube.com/watch?v=NHRHUHW6HQE
>
> the relevant section of which is (abridged):
>
> "a further, "higher-level" pattern leaning algorithm might be able to
> induce correlation and/or implication relationships between
> convolutions, enabling it to begin to develop its own ontology of
> perceptions, perhaps by correlating convolution relationships with
> geometric patterns on the board image. ... i look forward to the day
> when someone can find a way to induce symbolic pattern descriptions of
> relationships between convolutions and image patterns so that betago
> (child of alpha) can explain its "thinking" in a way we can understand
> and perhaps learn from too."
>
> On 30/03/2016, Gonçalo Mendes Ferreira  wrote:
> > Come on let's all calm down please. :)
> >
> > David I think the great challenge is in having good insight with AlphaGo
> > strength. Many Faces already provides some textual move suggestions, as
> > do probably other programs. Any program that doesn't use exclusively
> > machine learning or global search, like GNU Go, should be able to
> > suggest how it came about a move.
> >
> > Unfortunately no one has a clue on how to put into words what DCNN
> > "know", to produce really meaningful and useful feedback, justifying
> > decisions around candidates, etc. This is very much worth investigating.
> >
> > - Gonçalo
> >
> >
> >
> > On 30/03/2016 12:32, Álvaro Begué wrote:
> >>> no lack of respect for DeepMind's achievement was contained in my
> >>> posting; on the contrary, i was as surprised as anyone at how well she
> >>> did and it gave me great pause for thought.
> >>>
> >>
> >> Well, you wrote this:
> >>
> >>> but convolutional neural networks and monte-carlo simulators have not
> >>> advanced the 

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread djhbrown .
I fully agree with Goncalo that it would be worth investigating how
one could write an algorithm to express in English what Alpha's or
DCNNigo's nets
have learned, and a month ago (before her astonishing achievement in
March) offered some ideas on how this might be approached in a
youtube comment on Kim's review of the Fan Hui games:
https://www.youtube.com/watch?v=NHRHUHW6HQE

the relevant section of which is (abridged):

"a further, "higher-level" pattern leaning algorithm might be able to
induce correlation and/or implication relationships between
convolutions, enabling it to begin to develop its own ontology of
perceptions, perhaps by correlating convolution relationships with
geometric patterns on the board image. ... i look forward to the day
when someone can find a way to induce symbolic pattern descriptions of
relationships between convolutions and image patterns so that betago
(child of alpha) can explain its "thinking" in a way we can understand
and perhaps learn from too."

On 30/03/2016, Gonçalo Mendes Ferreira  wrote:
> Come on let's all calm down please. :)
>
> David I think the great challenge is in having good insight with AlphaGo
> strength. Many Faces already provides some textual move suggestions, as
> do probably other programs. Any program that doesn't use exclusively
> machine learning or global search, like GNU Go, should be able to
> suggest how it came about a move.
>
> Unfortunately no one has a clue on how to put into words what DCNN
> "know", to produce really meaningful and useful feedback, justifying
> decisions around candidates, etc. This is very much worth investigating.
>
> - Gonçalo
>
>
>
> On 30/03/2016 12:32, Álvaro Begué wrote:
>>> no lack of respect for DeepMind's achievement was contained in my
>>> posting; on the contrary, i was as surprised as anyone at how well she
>>> did and it gave me great pause for thought.
>>>
>>
>> Well, you wrote this:
>>
>>> but convolutional neural networks and monte-carlo simulators have not
>>> advanced the science of artificial intelligence one whit further than
>>> being engineered empirical validations of the 1940s-era theories of
>>> McCullough & Pitts and Ulam respectively, albeit their conjunction
>>> being a seminal validation insofar as duffing up human Go players is
>>> concerned.
>>>
>>
>> That paragraph is disrespectful of AlphaGo and every important development
>> that it was built on. Theorists of the 40s didn't know jackshit about how
>> to make a strong go program or any other part of AI, for that matter.
>>
>> This is like giving credit to the pre-Socratic philosophers for atomic
>> theory, or to Genesis for the Big Bang theory. I am sure there are people
>> that see connections, but no. Just no.
>>
>> one has to expect a certain amount of abuse when going public, and to
>>> expect that eager critics will misrepresent what was said.
>>>
>>
>> Your vast experience in the field means your opinions were formed way
>> before we knew what works and what doesn't, and are essentially worthless.
>>
>> There, you like abuse?
>>
>> Álvaro.
>>
>>
>> On Wed, Mar 30, 2016 at 6:04 AM, djhbrown .  wrote:
>>
>>> one has to expect a certain amount of abuse when going public, and to
>>> expect that eager critics will misrepresent what was said.
>>>
>>> no lack of respect for DeepMind's achievement was contained in my
>>> posting; on the contrary, i was as surprised as anyone at how well she
>>> did and it gave me great pause for thought.
>>>
>>> as to preconceived notions, my own notions are postconceived, having
>>> studied artificial intelligence and biological computation over 40
>>> post-doctoral years during which i have published 50 or so
>>> peer-reviewed scientific papers, some in respectable journals,
>>> including New Scientist.
>>>
>>> On 30/03/2016, Stefan Kaitschick  wrote:
 Your lack of respect for task performance is misguided imo. Your
 preconceived notions of what intelligence is, will lead you astray.

>>>
>>>
>>> --
>>> patient: "whenever i open my mouth, i get a shooting pain in my foot"
>>> doctor: "fire!"
>>> http://sites.google.com/site/djhbrown2/home
>>> https://www.youtube.com/user/djhbrown
>>> ___
>>> Computer-go mailing list
>>> Computer-go@computer-go.org
>>> http://computer-go.org/mailman/listinfo/computer-go
>>>
>>
>>
>>
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go


-- 
patient: "whenever i open my mouth, i get a shooting pain in my foot"
doctor: "fire!"
http://sites.google.com/site/djhbrown2/home
https://www.youtube.com/user/djhbrown
___
Computer-go mailing list

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Gonçalo Mendes Ferreira
Come on let's all calm down please. :)

David, I think the great challenge is in having good insight with AlphaGo
strength. Many Faces already provides some textual move suggestions, as
do probably other programs. Any program that doesn't use exclusively
machine learning or global search, like GNU Go, should be able to
suggest how it came about a move.

Unfortunately, no one has a clue how to put into words what DCNNs
"know", to produce really meaningful and useful feedback, justifying
decisions around candidates, etc. This is very much worth investigating.

- Gonçalo



On 30/03/2016 12:32, Álvaro Begué wrote:
>> no lack of respect for DeepMind's achievement was contained in my
>> posting; on the contrary, i was as surprised as anyone at how well she
>> did and it gave me great pause for thought.
>>
> 
> Well, you wrote this:
> 
>> but convolutional neural networks and monte-carlo simulators have not
>> advanced the science of artificial intelligence one whit further than
>> being engineered empirical validations of the 1940s-era theories of
>> McCullough & Pitts and Ulam respectively, albeit their conjunction
>> being a seminal validation insofar as duffing up human Go players is
>> concerned.
>>
> 
> That paragraph is disrespectful of AlphaGo and every important development
> that it was built on. Theorists of the 40s didn't know jackshit about how
> to make a strong go program or any other part of AI, for that matter.
> 
> This is like giving credit to the pre-Socratic philosophers for atomic
> theory, or to Genesis for the Big Bang theory. I am sure there are people
> that see connections, but no. Just no.
> 
> one has to expect a certain amount of abuse when going public, and to
>> expect that eager critics will misrepresent what was said.
>>
> 
> Your vast experience in the field means your opinions were formed way
> before we knew what works and what doesn't, and are essentially worthless.
> 
> There, you like abuse?
> 
> Álvaro.
> 
> 
> On Wed, Mar 30, 2016 at 6:04 AM, djhbrown .  wrote:
> 
>> one has to expect a certain amount of abuse when going public, and to
>> expect that eager critics will misrepresent what was said.
>>
>> no lack of respect for DeepMind's achievement was contained in my
>> posting; on the contrary, i was as surprised as anyone at how well she
>> did and it gave me great pause for thought.
>>
>> as to preconceived notions, my own notions are postconceived, having
>> studied artificial intelligence and biological computation over 40
>> post-doctoral years during which i have published 50 or so
>> peer-reviewed scientific papers, some in respectable journals,
>> including New Scientist.
>>
>> On 30/03/2016, Stefan Kaitschick  wrote:
>>> Your lack of respect for task performance is misguided imo. Your
>>> preconceived notions of what intelligence is, will lead you astray.
>>>
>>
>>
>> --
>> patient: "whenever i open my mouth, i get a shooting pain in my foot"
>> doctor: "fire!"
>> http://sites.google.com/site/djhbrown2/home
>> https://www.youtube.com/user/djhbrown
>> ___
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
> 
> 
> 
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
> 
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Álvaro Begué
> no lack of respect for DeepMind's achievement was contained in my
> posting; on the contrary, i was as surprised as anyone at how well she
> did and it gave me great pause for thought.
>

Well, you wrote this:

> but convolutional neural networks and monte-carlo simulators have not
> advanced the science of artificial intelligence one whit further than
> being engineered empirical validations of the 1940s-era theories of
> McCulloch & Pitts and Ulam respectively, albeit their conjunction
> being a seminal validation insofar as duffing up human Go players is
> concerned.
>

That paragraph is disrespectful of AlphaGo and every important development
that it was built on. Theorists of the 40s didn't know jackshit about how
to make a strong go program or any other part of AI, for that matter.

This is like giving credit to the pre-Socratic philosophers for atomic
theory, or to Genesis for the Big Bang theory. I am sure there are people
that see connections, but no. Just no.

one has to expect a certain amount of abuse when going public, and to
> expect that eager critics will misrepresent what was said.
>

Your vast experience in the field means your opinions were formed way
before we knew what works and what doesn't, and are essentially worthless.

There, you like abuse?

Álvaro.


On Wed, Mar 30, 2016 at 6:04 AM, djhbrown .  wrote:

> one has to expect a certain amount of abuse when going public, and to
> expect that eager critics will misrepresent what was said.
>
> no lack of respect for DeepMind's achievement was contained in my
> posting; on the contrary, i was as surprised as anyone at how well she
> did and it gave me great pause for thought.
>
> as to preconceived notions, my own notions are postconceived, having
> studied artificial intelligence and biological computation over 40
> post-doctoral years during which i have published 50 or so
> peer-reviewed scientific papers, some in respectable journals,
> including New Scientist.
>
> On 30/03/2016, Stefan Kaitschick  wrote:
> > Your lack of respect for task performance is misguided imo. Your
> > preconceived notions of what intelligence is, will lead you astray.
> >
>
>
> --
> patient: "whenever i open my mouth, i get a shooting pain in my foot"
> doctor: "fire!"
> http://sites.google.com/site/djhbrown2/home
> https://www.youtube.com/user/djhbrown
> ___
> Computer-go mailing list
> Computer-go@computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread djhbrown .
one has to expect a certain amount of abuse when going public, and to
expect that eager critics will misrepresent what was said.

no lack of respect for DeepMind's achievement was contained in my
posting; on the contrary, i was as surprised as anyone at how well she
did and it gave me great pause for thought.

as to preconceived notions, my own notions are postconceived, having
studied artificial intelligence and biological computation over 40
post-doctoral years during which i have published 50 or so
peer-reviewed scientific papers, some in respectable journals,
including New Scientist.

On 30/03/2016, Stefan Kaitschick  wrote:
> Your lack of respect for task performance is misguided imo. Your
> preconceived notions of what intelligence is, will lead you astray.
>


-- 
patient: "whenever i open my mouth, i get a shooting pain in my foot"
doctor: "fire!"
http://sites.google.com/site/djhbrown2/home
https://www.youtube.com/user/djhbrown
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

Re: [Computer-go] new challenge for Go programmers

2016-03-30 Thread Stefan Kaitschick
Your lack of respect for task performance is misguided imo. Your
preconceived notions of what intelligence is will lead you astray.
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

[Computer-go] new challenge for Go programmers

2016-03-30 Thread djhbrown .
Now that Alpha rules the world, the usual throng of plagiarising copycat
psychopath serial killers are already busy cloning her, but there is a
faint chance that some subscribers to this list would like to contribute
something to, or investigate something in, AI by programming something
new to do with Go.

as Remi observed back in October, as far as sheer playing performance
is concerned, it is already "game over", and there's really very
little value in trying a few tweaks (like fiddling about with
different values of arbitrary constants such as the evaluation
formula's lambda) in the hope of gaining a slight advantage over what
is already good enough to beat the world's best human players.

But convolutional neural networks and Monte-Carlo simulators have not
advanced the science of artificial intelligence one whit further than
being engineered empirical validations of the 1940s-era theories of
McCulloch & Pitts and Ulam respectively, albeit their conjunction is a
seminal validation insofar as duffing up human Go players is
concerned.

So the field is still wide open for novel ideas about AI technology
using Go as an experimental testbed.

Alpha can play brilliantly, but she can't say what she's thinking in a
way that would make sense to people learning Go, so she has little
utility as a Go teacher other than being able to say "you should play
A instead of B because these are the followup move sequences i
envisage would happen in each case".

But that doesn't help the student, who is left wondering "where the
hell do A and B come from anyway?!"

Which brings me to "A Hierarchy of Agents",
https://www.youtube.com/watch?v=1SJbEWuvlMM=PL4y5WtsvtduqNW0AKlSsOdea3Hl1X_v-S=28
which could help people learn to play better, because it uses a
conceptual hierarchy and goal-directed reasoning involving all the
things that Go books talk about - groups, eyes, walls, eyeshape, aji,
etc. - to create candidate moves.  And so it can talk about what it's
thinking in plain English.

AHA could use AlphaGo as one of its agents: one to perform the vital
role of candidate move testing (ie treesearch) within
fovea-circumscribed subareas of the board, which, being smaller than
19x19, give her a better chance of avoiding garden paths that lead her
up a gumtree to eventual defeat.
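
By way of illustration, here is a minimal Python sketch of one way such a
hierarchy of agents could be wired together: concept agents propose
candidate moves with a plain-English rationale and a local fovea, and a
search agent verifies each candidate locally.  Every name in it
(ConceptAgent, WallDefenceAgent, SearchAgent, choose_move) is a
hypothetical illustration of the idea, not part of AHA or AlphaGo.

# Hypothetical sketch of the "hierarchy of agents" idea; not the author's
# implementation.  Concept agents propose moves with English rationales,
# and a stub search agent stands in for local (fovea-restricted) verification.

from dataclasses import dataclass
from typing import List, Set, Tuple

Point = Tuple[int, int]

@dataclass
class Candidate:
    move: Point
    rationale: str     # plain-English justification the teacher can show
    fovea: Set[Point]  # local region in which the move should be verified

class ConceptAgent:
    """One rung of the hierarchy; each subclass encodes one Go concept."""
    def propose(self, board) -> List[Candidate]:
        raise NotImplementedError

class WallDefenceAgent(ConceptAgent):
    """Toy agent: insists the opponent must not live inside our walls."""
    def propose(self, board) -> List[Candidate]:
        # A real agent would derive these from the position; hard-coded here.
        fovea = {(x, y) for x in range(8, 14) for y in range(12, 19)}
        return [Candidate(move=(10, 15),
                          rationale="prevent the invading stones from "
                                    "making a second eye inside our wall",
                          fovea=fovea)]

class SearchAgent:
    """Stub standing in for tree search restricted to the candidate's fovea."""
    def verify(self, board, cand: Candidate) -> float:
        return 0.9  # pretend local win-rate

def choose_move(board, agents: List[ConceptAgent], searcher: SearchAgent):
    candidates = [c for a in agents for c in a.propose(board)]
    best = max(candidates, key=lambda c: searcher.verify(board, c))
    print(f"Play {best.move}: {best.rationale}")
    return best.move

choose_move(board=None, agents=[WallDefenceAgent()], searcher=SearchAgent())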

The horizon effect in Monte-Carlo is more a width horizon than a depth
one, and it caused Alpha to overlook the not so hard to find defence
to move 78 in game 4 that Kim found even before Lee played 78.

"not so hard to find", provided you know what you're looking for.

Alpha only knows what she's looking for in the sense that she can
calculate who wins at the end of her searches, whereas Kim (and all of
us watching, no matter how weak we are as players) knew that
preventing Lee from living inside Alpha's walls - his trying to was
about as cheeky (or desperate) as anyone can get in a Go game - was
absolutely vital.

AHA does know (would know, if it were implemented) that stopping Lee
from living inside her walls was necessary, because AHA operates
using hierarchical perceptions and goal-directed reasoning, in sharp
contrast to the pattern-directed inference tried by the so-called
'expert system' generation of programs, which were no more expert than
a worm crawling around the very bottom of the iceberg of
conceptualisation - an approach i labelled "kneejerk reaction inference"
in my YouTube vid.

A fovea (ie limited region) for focussed lookahead search would have
helped Alpha cope with move 78, and would also have helped her avoid
the gravestone "two stone edge squeeze" tesuji in game 5 that Redmond
said was common knowledge to all pros.  By coincidence, in my 1979
IJCAI paper "Hierarchical Reasoning in the game of Go", i used the
2-stone edge squeeze to illustrate how goal-oriented pattern
recognition and generalised move sequences could be programmed into a
computer.

Back in October 2015, i was unsure how the fovea could be
circumscribed, and in response to my request for ideas, no-one on this
list came up with a constructive suggestion, but i think i've since
figured it out, so here goes:

Essentially, it's an algorithm that iteratively labels what i call
"controls and controlled points": a stone controls its 4 liberties,
and a point is controlled by one side if at least 3 of its neighbours
are exclusively controlled by that side and it has no enemy controls.
The edge is friendly to both sides, so a point on the edge is
controlled if 2 of its neighbours are exclusively controlled and it
has no enemy controls.  Points occupied by enemy stones can become
controlled points too (if an enemy stone becomes a controlled point it
is likely to die).

Repeated passes of the algorithm continue until the cp status of every
point is stable from one pass to the next. At this stage, groups are
identified as cps connected by transitivity (ie groups include empty
points as well as occupied ones).
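
By way of illustration, here is a minimal Python sketch of the
labelling passes and the group-by-transitivity step, under one reading
of the rules above.  Details such as whether an already-controlled
point itself exerts control, and how corner points are treated, are my
assumptions rather than part of the description above.

EMPTY, BLACK, WHITE = 0, 1, 2

def neighbours(p, size):
    """4-connected neighbours of p that lie on the board."""
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < size and 0 <= y + dy < size]

def control_source(board, control, p):
    """What a neighbour contributes: its stone colour if occupied, else its label."""
    return board[p] if board[p] != EMPTY else control[p]

def control_map(board, size):
    """Iteratively label controlled points until no pass changes anything."""
    control = {p: EMPTY for p in board}
    changed = True
    while changed:                        # repeated passes until stable
        changed = False
        for p in board:
            if control[p] != EMPTY:
                continue
            ns = neighbours(p, size)
            needed = 3 if len(ns) == 4 else 2   # edge (and, assumed, corner) points need 2
            for side, enemy in ((BLACK, WHITE), (WHITE, BLACK)):
                srcs = [control_source(board, control, n) for n in ns]
                if srcs.count(side) >= needed and enemy not in srcs:
                    control[p] = side           # enemy-occupied points may be labelled too
                    changed = True
                    break
    return control

def groups(board, control, size):
    """Connected components of equally-controlled points (transitivity),
    so groups include empty points as well as stones."""
    seen, result = set(), []
    for p in board:
        side = control[p] or board[p]
        if p in seen or side == EMPTY:
            continue
        stack, comp = [p], set()
        while stack:
            q = stack.pop()
            if q in seen:
                continue
            seen.add(q)
            comp.add(q)
            stack.extend(n for n in neighbours(q, size)
                         if (control[n] or board[n]) == side)
        result.append((side, comp))
    return result

# Tiny usage example: two black stones hugging a corner of a 5x5 board.
size = 5
board = {(x, y): EMPTY for x in range(size) for y in range(size)}
board[(0, 1)] = board[(1, 0)] = BLACK
ctrl = control_map(board, size)
print("black-controlled points:", [p for p in ctrl if ctrl[p] == BLACK])
print("groups found:", len(groups(board, ctrl, size)))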

I tested my algorithm by simulating it on a virtual