I took a brief look at the Segal paper and it seems well written to me.

Segal describes an exponential-decay model that allows him to project a
curve (the curve is partially measured, then extrapolated) which gives an
approximate upper bound on the possible playing strength of a specific
program, Fuego.

The value he gives for the upper-bound approximation in Go for the Fuego
program is 4,150 Elo above the quality of play achieved in a 1-minute game
on the target hardware.
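As a rough illustration of what such a projection looks like (the exact functional form and decay constant in Segal's paper may differ; the 0.9 decay rate here is purely illustrative, and only the 4,150 Elo ceiling comes from the paper), assume each doubling of thinking time adds some Elo, with the per-doubling gain shrinking geometrically so that total strength approaches a fixed ceiling:

```python
# Hypothetical exponential-decay scaling sketch; the paper's actual model
# may differ.  The ceiling (4150 Elo over the 1-minute baseline) is the
# paper's projection; the decay rate of 0.9 per doubling is an assumption.

def elo_gain(doublings, ceiling=4150.0, decay=0.9):
    """Elo gained over the 1-minute baseline after `doublings` doublings
    of thinking time; per-doubling gains shrink geometrically and sum
    toward `ceiling`."""
    return ceiling * (1.0 - decay ** doublings)

for d in (1, 10, 40, 100):
    print(d, round(elo_gain(d)))
```

The point of the shape is that early doublings buy large gains while later ones buy almost nothing, which is exactly why a partially computed curve can be extrapolated to an asymptote.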

You implied that the scalability stops far short of perfect play, but I
don't see any such claim in the paper.  In fact, the 4,150 Elo projection in
the paper seems (if anything) too high.  At 100 Elo per rank, this
represents about 41 ranks of improvement over a 1-minute-per-game Fuego
player (which at 1 minute is still far better than the very weakest kyu
beginners).
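For what it's worth, the ranks figure is just the projected Elo ceiling divided by the assumed Elo-per-rank conversion:

```python
# Ranks of improvement implied by the projection, assuming the commonly
# cited figure of roughly 100 Elo per rank in Go.
elo_ceiling = 4150      # from the paper's projection
elo_per_rank = 100      # assumed conversion
ranks = elo_ceiling / elo_per_rank
print(ranks)  # 41.5
```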

When I talk about scalability, it is a given that it declines as you
approach perfect play and the curve flattens out, so the paper's conclusion
about the "limits of scaling" in no way refutes (nor proves) the idea that
scaling suddenly stops well short of perfect play.  In fact, it seems to
indicate to me that perfect play may be farther away than we think, not
that computers have a good handle on it.

Don




On Mon, Jun 20, 2011 at 1:12 PM, René van de Veerdonk <
[email protected]> wrote:

> On Mon, Jun 20, 2011 at 8:27 AM, Don Dailey <[email protected]> wrote:
>>
>> On Mon, Jun 20, 2011 at 10:09 AM, terry mcintyre <[email protected]
>> > wrote:
>>
>>> Any particular instance of a program will probably fail to scale -
>>> especially against humans who share the lessons of experience.
>>>
>>
>> That is complete nonsense.   How are you backing this up?   What proof do
>> you have that computers don't play better on better hardware?   Why are the
>> top programs being run on clusters and multi-core computers?   Are the
>> authors just complete idiots?
>>
>> Every bit of evidence we have says they are scaling very well against
>> humans.     That has also been our experience in game after game,  not just
>> in Go.
>>
>> I apologize for being so harsh on this,  but you are too smart to be
>> saying such dumb things.
>>
>
> Please read (from Darren Cook):
>
> Richard Segal (who operates Blue Fuego) has a paper on the upper limit
>> for scaling:
>> http://www.springerlink.com/content/b8p81h40129116kl/
>> (Sorry, I couldn't find an author's download link for the paper; Richard
>> is on the Fuego list but I'm not sure he is even a lurker here.)
>
>
> Another paper that mentions the topic (also Fuego specific) is here:
> http://webdocs.cs.ualberta.ca/~mmueller/ps/fuego-TCIAIG.pdf
>
> The short conclusion of these papers is that it was not worth it to scale
> Fuego to an IBM Blue Gene machine (the original purpose of the work), as it
> didn't get any better, i.e., performance didn't (appreciably) scale beyond
> 512 or so cores. Obviously, that's still a big improvement over a desktop
> PC, but far from the capacity of the supercomputer available.
>
> Zen also has a similar issue, according to its co-author Hideki Kato. That's
> evidence enough for me that there is indeed an issue with the current breed
> of programs.
>
> It appears that current specific implementations have a scaling ceiling. I
> expect, as expressed by multiple people here, that once that ceiling gets
> close enough to become a limitation, people will change their algorithms
> accordingly and scaling will continue. But it will only happen once you can
> comfortably test the program in that resource regime.
>
> René
>
> _______________________________________________
> Computer-go mailing list
> [email protected]
> http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
>