On 02.02.2016 17:29, Jim O'Flaherty wrote:
AI Software Engineers: Robert, please stop asking our AI for explanations.
We don't want to distract it with limited human understanding. And we don't
want the Herculean task of coding up that extremely frail and error prone
bridge.
Currently I do not
Hi David,
I've used a GTX 970 for training deep convnets without issue. Depending on
your budget, a GTX 980 Ti or TITAN X would be even better (we use some
TITAN X's in our lab). The main thing about using smaller GPUs for training
these networks is that depending on the implementation of the
Hi,
this is a very difficult question:
I get 100 batches (of 64 each) within 1 minute for the big Facebook
DCNN (384 filters in each of the 9 3x3 kernel layers, with two
128-filter 5x5 and two 128-filter 7x7 layers before that).
What Facebook calls an epoch is 400 of this (4 *
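The throughput above works out as follows (a quick sketch; the epoch figure of 400 batches is taken from the quoted text, whose tail is cut off):

```python
# Throughput figures as quoted: 100 batches of 64 positions per minute.
batches_per_minute = 100
batch_size = 64

positions_per_minute = batches_per_minute * batch_size  # 6400 positions/min

# The text says Facebook's "epoch" is 400 of these batches.
batches_per_epoch = 400
minutes_per_epoch = batches_per_epoch / batches_per_minute  # 4.0 minutes

print(positions_per_minute, minutes_per_epoch)
```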
>
> If AlphaGo had lost at least one game, I'd understand how people can have
>> an upper bound on its level, but with 5-0 (except for Blitz) it's hard to
>> have an upper bound on its level. After all, AlphaGo might just have played
>> well enough for crushing Fan Hui, and a weak move while the
Hi David,
I use Ubuntu 14.04 LTS with an NVIDIA GTX 970 graphics card (and an
i7-4970k, but this is not important for training, I think) and
installed cuDNN v4 (important: at least a factor of 4 in training speed).
This Ubuntu version is officially supported
2016-02-01 12:24 GMT+01:00 Olivier Teytaud :
> If AlphaGo had lost at least one game, I'd understand how people can have
> an upper bound on its level, but with 5-0 (except for Blitz) it's hard to
> have an upper bound on its level. After all, AlphaGo might just have played
> well
Robert, please consider some of this as the difference between math and
engineering. Math desires rigor. Engineering desires working solutions. When
an engineering solution is being described, you shouldn't expect the same level
of rigor as in a mathematical proof. Often all we can say is
Amazon uses deep neural nets in many, many areas. There is some overlap with
the kind of nets used in AlphaGo. I passed a link to the paper on to one of
our researchers and he found it very interesting. DNNs work very well when
there is a lot of labelled data to learn from. They can be useful
On 02.02.2016 13:05, "Ingo Althöfer" wrote:
when a student starts
studying mathematics, (s)he learns in the first two semesters that
everything has to be defined watertight. Later, in particular
when (s)he comes close to doing own research, (s)he has to make
compromises - otherwise you will never
Hi Michael,
It's an interesting idea.
Maybe I could collect many LD positions from pro games.
So an LD and semeai solver will be needed. Or just use LD
problems that come with solution sequences.
Without a difficult feature, the DCNN may find the answer.
Regards,
Hiroshi Yamashita
- Original Message -
From: "Michael
Detlef, Hiroshi, Hideki, and others,
I have caffelib integrated with Many Faces so I can evaluate a DNN. Thank you
very much Detlef for sample code to set up the input layer. Building Caffe on
Windows is painful. If anyone else is doing it and gets stuck, I might be able
to help.
What
Hi David,
I use a GTS 450 and a GTX 980. I use Caffe on Ubuntu 14.04.
Installing Caffe is difficult, so I recommend using Ubuntu 14.04.
Time for predicting a position:

          Detlef44%   Detlef54%   CUDA cores   Clock
GTS 450   17.2 ms     21 ms       192          783 MHz
GTX 980
Hi,
I would expect this to happen if the system is trained by normal games,
only. But I think the system should see actual live-and-death (LD)
sequences (from some collection) to be able to learn about them and not
soak this knowledge up from a whole game where most of the moves are
"noise"
What? You have mixed up things.
http://www.europeangodatabase.eu/EGD/Player_Card.php?=17374016
2016-02-02 20:21 GMT+01:00 Olivier Teytaud :
>>> If AlphaGo had lost at least one game, I'd understand how people can have
>>> an upper bound on its level, but with 5-0
On 02.02.2016 20:21, Olivier Teytaud wrote:
On the other hand, they have super strong people in the team (at the pro
level, maybe ? if Aja has pro level...)
A ca. 5d amateur in the team is enough, regardless of whether Myongwan Kim
thinks that only a 9p can understand it. Not so. Kim's above 5d
Hi,
I made a CGOS rating estimate that includes AlphaGo.

            CGOS    Remi's pro rating (same as paper)
Ke Jie      4642?   3620    Strongest human
Lee Sedol   4538?   3516
AlphaGo     4162?   3140    Distributed, 1202 CPUs, 176 GPUs
Fan Hui     3942?   2920
AlphaGo     3912?
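Assuming the pro-rating column behaves like a standard Elo scale (an assumption; the exact rating model behind these numbers may differ), the 220-point gap between AlphaGo Distributed (3140) and Fan Hui (2920) maps to an expected score like this:

```python
def elo_expected_score(rating_a, rating_b):
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Ratings taken from the table above ("Remi's pro rating" column).
alphago_distributed = 3140
fan_hui = 2920

p = elo_expected_score(alphago_distributed, fan_hui)
print(round(p, 3))  # ~0.78 expected score for a 220-point gap
```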
Since Zen's engine is improved solely by Yamato, I don't know the
details, but I believe Yamato has used one Mac Pro so far
(Linux and Windows).
#He has implemented DCNN by himself, not using tools.
Hideki
David Fotland: <0a0301d15de7$1180d760$34828620$@smart-games.com>:
>Detlef, Hiroshi,
I think it would be an awesome commercial product for strong Go players.
Even if the AI just shows the continuations and the score estimates
for different lines, that may give the player enough reasoning to
understand why one move is better than the other.
On 2016-02-02 8:29, Jim
But AlphaGo single machine is stronger than Fan Hui, since it won 8-2 in
all matches combined on one machine, as far as I understand they used
this version
On 2016-02-02 13:33, Hiroshi Yamashita wrote:
Hi,
I made a CGOS rating estimate that includes AlphaGo.
CGOS   Remi's pro rating (same as
>But AlphaGo single machine is stronger than Fan Hui, since it won 8-2 in
>all matches combined on one machine, as far as I understand they used
>this version
No. AlphaGo Distributed was used for the 10 games.
Hideki
>On 2016-02-02 13:33, Hiroshi Yamashita wrote:
>> Hi,
>>
>> I made CGOS
The DeepMind paper has a short section on the rollout policy they use. It
looks like they made some improvements for their rollouts; maybe they
are better at handling semeai than previous methods. The response and
non-response patterns sound similar, but they also include liberty counts.
I
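For illustration only, a hypothetical sketch of how a response pattern combined with a liberty count might be keyed and weighted in a rollout policy (the names, pattern encoding, and liberty cap here are invented, not taken from the paper):

```python
from collections import defaultdict

def feature_key(pattern, liberties):
    # Cap liberties at 3+, a common simplification in Go pattern features.
    return (pattern, min(liberties, 3))

# Weights would normally be trained from move-prediction on game records;
# this entry is a placeholder ("B" = black, "W" = white, "E" = empty).
weights = defaultdict(float)
weights[("BWEEWBEEE", 2)] = 1.5

def score_move(pattern, liberties):
    """Look up the (pattern, capped-liberties) feature weight for a move."""
    return weights[feature_key(pattern, liberties)]

print(score_move("BWEEWBEEE", 2))  # 1.5 (matching entry)
print(score_move("BWEEWBEEE", 7))  # 0.0 (capped to 3 liberties, no entry)
```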
On 02.02.2016 19:07, David Fotland wrote:
consider some of this as the difference between math and engineering. Math
desires rigor.
Engineering desires working solutions. When an engineering solution is being
described,
you shouldn't expect the same level of rigor as in a mathematical proof.
How long does it take to train one of your nets? Is it safe to assume that
training time is roughly proportional to the number of neurons in the net?
Thanks,
David
> -Original Message-
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
> Of Detlef Schmicker
>
At least with digital filters, training time increases non-linearly - you
can think of a NN as a non-linear FIR filter. And the multilayer structure
should make this harder, if you think about it. So some tricks to speed it
up might be necessary. I don't know about NNs, but with digital filters one
trick was to train the first part of the filter
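A back-of-the-envelope sketch of why training cost scales with more than the neuron count: the multiply-accumulates of a convolutional layer grow with the product of input and output filter counts, so doubling the width of every layer roughly quadruples the work while only doubling the neurons.

```python
def conv_layer_macs(board, k, c_in, c_out):
    """Multiply-accumulates for one k x k conv layer on a board x board
    input, 'same' padding, stride 1."""
    return board * board * k * k * c_in * c_out

# Example on a 19x19 Go board with 3x3 kernels:
small = conv_layer_macs(19, 3, 128, 128)
big = conv_layer_macs(19, 3, 256, 256)
print(big / small)  # 4.0: double the filters, quadruple the work
```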
On 01.02.2016 23:01, Brian Cloutier wrote:
> I had to search a lot of papers on MCTS which
> mentioned "terminal states" before finding one which defined them.
> [...] they defined it as a position where there are no more legal
> moves.
On 01.02.2016 23:15, Brian Sheppard wrote:
You play until
Hi George,
welcome, and thanks for your valuable hint on the Google-whitepaper.
Do/did you have/see any cross-relations between your research and
computer Go?
Cheers, Ingo.
Sent: Tuesday, 02 February 2016, 05:14
From: "George Dahl"
To: computer-go
And to meta this awesome short story...
AI Software Engineers: Robert, please stop asking our AI for explanations.
We don't want to distract it with limited human understanding. And we don't
want the Herculean task of coding up that extremely frail and error prone
bridge.
On Feb 1, 2016 3:03 PM,
> Without clarity, progress is delayed. Every professor at university will
> confirm this to you.
>
IMHO, Petr has contributed enough to academic research
not to need a discussion with a university professor
to learn how to do/clarify research :-)
--
On 02.02.2016 13:05, "Ingo Althöfer" wrote:
For research in general it is good to have waves:
Research is faster if informalism and formalism progress simultaneously
(by different people or in different papers).
--
robert jasiek
Hi Robert,
we met for the first time at the EGC 2000 in Berlin-Strausberg.
I know your special ways of arguing - and think that you
are an enrichment both for the go world and for the computer go scene.
But ...
> Without clarity, progress is delayed. Every
> professor at university will
On 02.02.2016 11:49, Petr Baudis wrote:
you seem to come off as perhaps a little too
aggressive in your recent few emails...
If I were not aggressively critical of inappropriate ambiguity, it
would continue for decades to come. Papers containing mathematical
content must clarify when
Hi,
This is not training time, but the mini-batch time for prediction:
the time for one batch and the time per position.

mini_batch   one batch      one position   Memory required (Caffe's log)
1            0.002330 sec   2.33 ms        4435968 (4.2 MB)
2            0.002440 sec
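The per-position column is just the batch time divided by the mini-batch size; a small sketch using the numbers from the table above:

```python
def per_position_ms(batch_seconds, mini_batch):
    """Per-position prediction time in milliseconds."""
    return batch_seconds / mini_batch * 1000.0

# Values from the table: larger mini-batches amortize per-batch overhead.
print(round(per_position_ms(0.002330, 1), 2))  # 2.33 ms, matching the table
print(round(per_position_ms(0.002440, 2), 2))  # 1.22 ms per position
```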