That's easy enough. The MFoG 12 trial is a free download, and I can provide a
registration code for the computer used in any competition to enable the 12
kyu level. The 12 kyu level has no full-board search, Monte Carlo or otherwise.
David
-Original Message-
From: Computer-go
Competitive with AlphaGo as a single developer: not possible. I do think it is
possible to make a pro-level program with one person or a small team. Look at
Deep Zen and Aya, for example. I expect I’ll get there (pro level) with Many
Faces as well.
David
From: Computer-go
Thanks to the new volunteers. I hope the new team will consider making the
journal available online.
Rémi
----- Original Message -----
From: "Ingo Althöfer" <3-hirn-ver...@gmx.de>
To: computer-go@computer-go.org
Sent: Thursday, 5 January 2017 21:32:55
Subject: [Computer-go] ICGA Journal with new Steam
During its training, AlphaGo played many handicap games against a
previous version of itself, so the team and the program are
On 05.01.2017 17:32, Jim O'Flaherty wrote:
I don't follow.
1) "For each arcane position reached, there would now be ample data for
AlphaGo to train on that particular pathway." is false. See below.
2) "two strategies. The first would be to avoid the state in the first
place." Does AlphaGo
On 06.01.2017 03:36, David Ongaro wrote:
Two amateur players were analyzing a game and a professional player happened
to come by.
So they asked him how he would assess the position. After a quick look he said
“White is leading by two points”. The two players were wondering: “You can
count
That was quite an elegant way to present the idea. Thanks for sharing.
On Jan 5, 2017 8:36 PM, "David Ongaro" wrote:
> This discussion reminds me of an incident which happened at the EGC in
> Tuchola 2004 (maybe someone can find a source for this). I don’t remember
> all
Hello,
On 2017/01/06 3:34, Xavier Combelle wrote:
>> Honestly I got a little frustrated that many people didn't think that
>> was AlphaGo. It was almost clear to me because I know the difficulty of
>> developing AlphaGo-like bots.
> thanks for this insight, if I understand correctly, developing a bot
>
Hello everybody,
as some of you know, I am Vice President of the ICGA
(= International Computer Games Association). After
a very long and successful period (over 30 years) with
Prof. Dr. Jaap van den Herik as Chief Editor, a
new team will now steer the Journal. The official statement
of the ICGA
The sheer amount of processing power required is kind of frustrating...
Not being able to use my computer for a whole month in order to train, knowing
it is only 1/100th of the training time AlphaGo was trained with...
Yet it is extremely satisfying to see it grow stronger and surpass oneself...
I hope at some
> On Jan 5, 2017, at 2:37 AM, Detlef Schmicker wrote:
>
> Hi,
>
> what makes you think the opening theory with reverse komi would be the
> same as with standard komi?
>
> I would be afraid to invest an enormous amount of time just to learn
> that you have to open differently
I mean as a company too; until this point no one has succeeded.
On 05/01/2017 at 19:35, Adrian Petrescu wrote:
> As an individual? Probably, yes.
>
> On Thu, Jan 5, 2017 at 1:34 PM, Xavier Combelle wrote:
>
> On 05/01/2017 at
It helps a lot if you have to do it as a job, as a paid researcher. I once
tried it as a volunteer job for a company I worked for at the time, but we
only got the basic infrastructure going, after half a year of work, with
two people.
We were trying a neural network approach, while everybody said
On 05/01/2017 at 02:16, Yamato wrote:
> Yes, it is AlphaGo. I am relieved that DeepMind clarified this.
>
> Honestly I got a little frustrated that many people didn't think that
> was AlphaGo. It was almost clear to me because I know the difficulty of
> developing AlphaGo-like bots.
thanks for
It's one thing to know the recipe; it's another to have an industrial-size
kitchen. Google was able to throw truly gargantuan amounts of computing
resources at this problem.
A few years back, a researcher - was it Rémi Coulom? - was able to scrounge a
few thousand cores for a tournament.
For each arcane position reached, there would now be ample data for AlphaGo
to train on that particular pathway. And two strategies would emerge.
The first would be to avoid the state in the first place. And the second
would be to optimize play in that particular state. So, the human advantage
Thanks, Horace,
On 2017-01-05 at 04:07, Horace Ho wrote:
> The players and the results (in Chinese):
>
> [..]
passing this on :-)
Greetings, Tom
___
Computer-go mailing list
Computer-go@computer-go.org
> Honestly I got a little frustrated that many people didn't think that
> was AlphaGo. It was almost clear to me because I know the difficulty of
> developing AlphaGo-like bots.
>
I feel with you. People seem to think that the Nature paper gave away the
full recipe.
On Fri, Dec 30, 2016 at 01:28:34PM -0700, Anders Kierulf wrote:
>> It would be good if your document would clearly state which properties
>> are part of the FF[4] standard and which ones you’re adding.
>> For example, JD (Japanese Date) is not in FF[4], yet is listed as
>> “standardized in this
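As an aside on the FF[4] point above: the distinction between standard and
added properties is easy to check mechanically. Here is a minimal sketch of
such a check; the property sets below are illustrative subsets (JD is the one
extension the message mentions), not a complete FF[4] listing.

```python
import re

# Small illustrative subset of FF[4] properties (not the complete standard).
FF4_PROPERTIES = {"B", "W", "C", "DT", "KM", "RE", "SZ", "PB", "PW", "GM", "FF"}

# Extensions added beyond FF[4]; JD (Japanese Date) is not in FF[4].
EXTENSIONS = {"JD"}

def classify_properties(sgf_text):
    """Split the property identifiers found in an SGF string into
    (standard, extension, unknown) sets."""
    props = set(re.findall(r"([A-Z]{1,4})\[", sgf_text))
    standard = props & FF4_PROPERTIES
    extension = props & EXTENSIONS
    unknown = props - standard - extension
    return standard, extension, unknown

# A toy SGF record using the nonstandard JD property.
sgf = "(;FF[4]GM[1]SZ[19]JD[2017-01-05];B[pd];W[dp])"
std, ext, unk = classify_properties(sgf)
```

A documentation tool built this way could emit a warning for every identifier
that lands in the extension or unknown buckets.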
Hi,
a few years ago we had agreed that for winning the bet it would be
sufficient to beat the non-deterministic 12-kyu level of MFoG 11
(or MFoG 12). This level has no Monte Carlo elements involved.
Ingo.
Sent: Thursday, 05 January 2017 at 04:15
From: fotl...@smart-games.com
To:
>
>
> what makes you think the opening theory with reverse komi would be the
> same as with standard komi?
>
> The value network only needs to know a given board position and one piece
> of information: who plays next, whether it is the green player or the red
> player. Then it tells you a winning
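The reversed-colors idea can be sketched concretely. The encoding below is a
toy three-plane scheme of my own (own stones, opponent stones, side to move),
far simpler than the actual AlphaGo feature set; it only illustrates how
swapping colors and the side to move asks the value network the same question
from the opponent's perspective.

```python
import numpy as np

BOARD = 19  # board size assumed throughout

def encode_position(black, white, black_to_move):
    """Encode a position as feature planes for a value network.

    Planes: stones of the side to move, stones of the opponent,
    and a constant plane marking whether Black plays next.
    """
    if black_to_move:
        own, opp = black, white
    else:
        own, opp = white, black
    to_move = np.full((BOARD, BOARD), 1.0 if black_to_move else 0.0)
    return np.stack([own, opp, to_move])

def reverse_colors(black, white, black_to_move):
    """Feed the network the color-swapped position with the other side
    to move, i.e. the same game seen from the opponent's perspective."""
    return encode_position(white, black, not black_to_move)

# Example: one black stone on tengen, Black to move.
black = np.zeros((BOARD, BOARD))
black[9, 9] = 1.0
white = np.zeros((BOARD, BOARD))
x = encode_position(black, white, True)
x_rev = reverse_colors(black, white, True)
```

With this relative (own/opponent) encoding, the stone planes are identical in
both views and only the to-move plane flips, which is what would let a
komi-aware input be mirrored for reverse komi games.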
Hi,
what makes you think the opening theory with reverse komi would be the
same as with standard komi?
I would be afraid to invest an enormous amount of time just to learn
that you have to open differently in reverse komi games :)
Detlef
On 05.01.2017 at 10:50, Paweł Morawiecki wrote:
>
2017-01-04 21:07 GMT+01:00 David Ongaro :
>
> [...]So my question is: is it possible to have reverse Komi games by
> feeding the value network with reverse colors?
>
In the Nature paper (subsection "Features for policy/value network"), the
authors state:
*the stone