On 01.04.2016 02:22, djhbrown . wrote:
kogo is great for corner openings
Kogo contains many mistakes. Too many kyus got their hands on it.
It would be better to spend 3+ weeks using kombilo on GoGoD and create a
new joseki tree. A summary of such an effort (with some interesting,
on the subject of tools for learning josekis, i would love to have
the help of a computerised assistant who could show me a flip-through
"photo album" of how alternative paths in a joseki end up, without
having to plod along the paths (which to me are a dark and mysterious
tree of paths in a
Nice
On Mar 31, 2016 7:48 AM, "Álvaro Begué" wrote:
> A very simple-minded way of trying to identify what a particular neuron in
> the upper layers is doing is to find the 50 positions in the database that
> make it produce the highest activation values. If the neuron is
It all comes down to having a reasonable way to search the MCTS tree. An
elegant tool would be wonderful, but even something basic would allow a
determined person to find interesting things. When I was debugging SlugGo,
which had a tree as wide as 24 and as deep as 10, with nothing other than
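A tree as wide as 24 and as deep as 10 has on the order of 24^10 ≈ 6×10^13 leaves, so any browsing tool has to prune hard. A minimal sketch of one cheap approach (the function names and the scoring hook are my own assumptions, not SlugGo's actual interface): follow only the top-n children by score at each node.

```python
def walk_top_lines(root, children, score, top_n=2, depth=4):
    """Yield move sequences through a game tree, keeping only the
    top_n children by score at each node -- a cheap way to browse a
    tree far too large to enumerate exhaustively.

    children(node) -> list of (move, child_node)
    score(child_node) -> sortable value (e.g. visit count or win rate)
    """
    def rec(node, prefix, d):
        kids = children(node)
        if d == 0 or not kids:
            yield prefix
            return
        kids.sort(key=lambda mc: score(mc[1]), reverse=True)
        for move, child in kids[:top_n]:
            yield from rec(child, prefix + [move], d - 1)
    yield from rec(root, [], depth)

# Toy tree: node = (score, {move: child_node}).
leaf = lambda s: (s, {})
tree = (0.0, {"a": (0.9, {"c": leaf(0.5), "d": leaf(0.8)}),
              "b": leaf(0.6),
              "e": leaf(0.1)})
lines = list(walk_top_lines(tree,
                            children=lambda n: list(n[1].items()),
                            score=lambda child: child[0],
                            top_n=2, depth=3))
print(lines)  # → [['a', 'd'], ['a', 'c'], ['b']]
```

Even something this basic would let a determined person skim the few most-visited lines instead of the whole tree.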
On 31/03/2016, "Ingo Althöfer" <3-hirn-ver...@gmx.de> wrote:
> somehow he went into a
> "strange loop", and in the end he was asked to stop posting.
asked by the very person i was trying to help! That was the last straw.
Ironically, whilst i was openly trying to help all montes, not just
Major changes in the evaluation probability likely have a horizon of
a few moves behind them that might be interesting to evaluate more
closely. With a small window like that, a deeper/more exhaustive
search might work.
s.
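One way to sketch that idea (the threshold and window size below are arbitrary assumptions): scan the per-move win-probability trace and, wherever it jumps sharply, emit a small window of moves ending at the jump as a candidate for deeper re-search.

```python
def flag_swings(winrates, threshold=0.15, window=3):
    """Flag windows of moves worth a deeper re-search.

    winrates: win probability reported by the engine after each move.
    Returns (start, end) index pairs: `end` is the move where the
    evaluation swung by at least `threshold`, and `start` reaches back
    a few moves behind it.
    """
    spans = []
    for i in range(1, len(winrates)):
        if abs(winrates[i] - winrates[i - 1]) >= threshold:
            spans.append((max(0, i - window), i))
    return spans

# Toy trace: the evaluation collapses at move index 3.
print(flag_swings([0.50, 0.52, 0.55, 0.30, 0.28]))  # → [(0, 3)]
```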
On Mar 31, 2016 10:21 AM, "Petr Baudis" wrote:
> On Thu,
Petr,
You know, just exploring this conversation is motivating to me, even if I
am still seeing it as a huge risk with a small payoff.
I like your line of thinking in that, from a top-down approach, we start
simple and just push it as far as it can go, acknowledging we won't likely
get anywhere near
On Thu, Mar 31, 2016 at 08:51:30AM -0500, Jim O'Flaherty wrote:
> What I was addressing was more around what Robert Jasiek is describing in
> his joseki books and other materials he's produced. And it is exactly why I
> think the "explanation of the suggested moves" requires a much deeper
> baking
Petr suggests "caption Go positions based on game commentary"
Without doubt, there has to be a lot of mileage in looking for a way
for a machine to learn from expert commentaries.
i see a difference between labelling a cat in a photo and labelling a
stone configuration in a picture of a board,
no
On Thu, Mar 31, 2016 at 8:33 AM, Lucas, Simon M wrote:
> Thanks Ryan,
>
> Nice paper – did you follow up on any of the future work?
>
> Simon
>
>
>
> From: Computer-go on behalf of Ryan Hayward
On 31.03.2016 16:54, Bill Whig wrote:
> Wouldn't you agree that a lot of people (most?) might advance more swiftly
> with move suggestions rather than text that they have to work through
> like a textbook?
I say the opposite.
Move suggestions without any additional information are
A very simple-minded way of trying to identify what a particular neuron in
the upper layers is doing is to find the 50 positions in the database that
make it produce the highest activation values. If the neuron is in one of
the convolutional layers, you get a full 19x19 image of activation values,
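As a sketch of that idea (pure Python, names are mine): rank the database positions by the neuron's recorded activation and keep the top k. For a convolutional neuron, each position's 19x19 activation map could first be reduced to a scalar, e.g. by taking its maximum.

```python
def top_positions_for_neuron(activations, k=50):
    """Return indices of the k positions with the highest activation.

    activations: one scalar per database position for the neuron under
    study (for a conv neuron, e.g. the max over its 19x19 output map).
    """
    order = sorted(range(len(activations)),
                   key=lambda i: activations[i], reverse=True)
    return order[:k]

# Toy "database" of 6 positions instead of a full collection.
acts = [0.1, 0.9, 0.3, 0.7, 0.2, 0.8]
print(top_positions_for_neuron(acts, k=3))  # → [1, 5, 3]
```

Eyeballing the 50 positions side by side is then the human part of the job.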
Message: 1
Date: Thu, 31 Mar 2016 14:33:39 +0200
From: Robert Jasiek
To: computer-go@computer-go.org
Subject: Re: [Computer-go] new challenge for Go programmers
Message-ID: <56fd1923.4080...@snafu.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 31.03.2016 13:43,
Thanks Ryan,
Nice paper – did you follow up on any of the future work?
Simon
From: Computer-go on behalf of Ryan Hayward
Then again DNNs also manage feature extraction on unlabeled data with
increasing levels of abstraction towards upper layers. Perhaps one
could apply such a specifically trained DNN to artificial board
situations that emphasize specific concepts and examine the network's
activation, trying to map
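A toy illustration of that probing idea, with a single hand-made 3x3 kernel standing in for one trained filter (the kernel and the "lone stone" concept position are made-up stand-ins, not a real Go network):

```python
def conv_activation(board, kernel):
    """Valid-mode 2D cross-correlation: slide one conv 'neuron' over
    the board and record its activation at every location."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(board), len(board[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(board[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Artificial "concept" position: a lone stone on an empty 9x9 board.
board = [[0.0] * 9 for _ in range(9)]
board[4][4] = 1.0
# Hand-made kernel that fires on an isolated stone.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]
act = conv_activation(board, kernel)
peak = max(max(row) for row in act)
print(peak)  # → 8.0, right where the stone sits
```

With a real trained net the same loop would run over its learned filters, and the activation maps for hand-crafted "concept" boards would be what gets examined.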
Ingo,
That's precisely what has my knickers in a twist regarding djhbrown: his
prior behavior. I'm with you in that I hope he better manages his
participation and uses list feedback to spend a little more time filtering
his "creativity" so it fits closer to the listening of this
Robert,
This is exactly why I think the "explanation of the suggested moves"
requires a much deeper baking into the participating ANN's (bottom up
approach). And given what I have read thus far, I am still seeing the risk
as extraordinarily high and the payoff as exceedingly low, outside an academic
Message: 3
Date: Thu, 31 Mar 2016 08:35:51 +0200
From: Robert Jasiek
To: computer-go@computer-go.org
Subject: Re: [Computer-go] new challenge for Go programmers
Message-ID: <56fcc547@snafu.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 31.03.2016 03:52, Bill
On Wed, Mar 30, 2016 at 09:58:48AM -0500, Jim O'Flaherty wrote:
> My own study says that we cannot top down include "English explanations" of
> how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
> arrive at conclusions.
I don't think that's obvious at all. My current avenue
Yes, I recall that earlier episode. I would be happy to have a better
relationship going forward.
I wrote some explanation generators for Scrabble and Chess AI, but these were
much simpler systems that I could break apart. E.g., the Chess engine would
play two moves out until a "quiet"
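Roughly, that "play out until quiet" idea could look like this (the `evaluate`/`best_move`/`apply_move` hooks are placeholders for whatever the engine actually exposes, not a real API):

```python
def explain_line(position, evaluate, best_move, apply_move,
                 quiet_margin=0.05, max_plies=6):
    """Extend the principal line until the evaluation settles.

    Returns the move sequence played and the final, 'quiet' evaluation.
    """
    line = []
    prev = evaluate(position)
    for _ in range(max_plies):
        move = best_move(position)
        position = apply_move(position, move)
        line.append(move)
        cur = evaluate(position)
        if abs(cur - prev) < quiet_margin:  # evaluation stopped jumping
            break
        prev = cur
    return line, evaluate(position)

# Toy engine whose evaluation converges toward 0.7 as moves are played.
evals = [0.2, 0.5, 0.65, 0.68, 0.69]
line, final = explain_line(
    0,
    evaluate=lambda s: evals[min(s, len(evals) - 1)],
    best_move=lambda s: "m%d" % s,
    apply_move=lambda s, m: s + 1)
print(line, final)  # → ['m0', 'm1', 'm2'] 0.68
```

The resulting line plus its settled evaluation is the raw material an explanation generator can then narrate.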
Hello all,
"Brian Sheppard" wrote:
> ... This is out of line, IMO. Djhbrown asked a sensible question that has
> valuable intentions. I would like to see responsible, thoughtful, and
> constructive replies.
there is a natural explanation for why some people here react allergically
On 31.03.2016 03:52, Bill Whig wrote:
If the program merely output 3-5 suggested positions, that would probably suffice.
Even an advanced beginner, such as myself, could, I believe, understand why they
are good choices. Just having the "short list" would probably be quite an educational
"Similar to Neuro-Science, where reverse engineering methods like fMRI
reveal structure in brain activity, we demonstrated how to describe the
agent’s policy with simple logic rules by processing the network’s neural
activity. This is important since often humans can understand the optimal
policy
this is also interesting, to visualize "how the NN thinks"
http://blog.acolyer.org/2016/03/02/graying-the-black-box-understanding-dqns/
On Wed, Mar 30, 2016 at 10:38 PM, Ben wrote:
> It would be very interesting to see what these go playing neural networks
> dream