First of all, I never purchase papers of unknown quality. If I know a paper is great, or it comes highly recommended by someone I trust, I will purchase it, although the idea of purchasing papers in general is distasteful to me.
The scalability issue, as you summarize it, has to do with the number of cores. I am not surprised that more cores bring less improvement; that is true in ALL games. That's not the issue I am talking about here, although it's certainly a legitimate practical issue. To better understand the issue, forget about the specific machine it's on and substitute the phrase "scale with time." There is no scaling issue with reasonably written programs that can be tested with current hardware. Showing that it's difficult to parallelize an algorithm that is basically serial does not prove anything here. I have explained why people believe MCTS may have limits and why they are wrong. All such "evidence" that we have hit some wall is anecdotal - unless, of course, you know of someone who has run a few thousand games at 1 day per move on a single-core machine (or equivalent).

Don

On Mon, Jun 20, 2011 at 1:12 PM, René van de Veerdonk <[email protected]> wrote:

> On Mon, Jun 20, 2011 at 8:27 AM, Don Dailey <[email protected]> wrote:
>>
>> On Mon, Jun 20, 2011 at 10:09 AM, terry mcintyre <[email protected]> wrote:
>>
>>> Any particular instance of a program will probably fail to scale -
>>> especially against humans who share the lessons of experience.
>>
>> That is complete nonsense. How are you backing this up? What proof do
>> you have that computers don't play better on better hardware? Why are the
>> top programs being run on clusters and multi-core computers? Are the
>> authors just complete idiots?
>>
>> Every bit of evidence we have says they are scaling very well against
>> humans. That has also been our experience in game after game, not just
>> in Go.
>>
>> I apologize for being so harsh on this, but you are too smart to be
>> saying such dumb things.
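For what it's worth, the arithmetic behind "a few thousand games" is easy to sketch. Here is a minimal Python illustration (function names and the worked numbers are mine, not from any program discussed in this thread) of how many games a scaling experiment needs before a small Elo gain rises above the statistical noise:

```python
import math

def elo_diff(wins: int, losses: int) -> float:
    """Elo difference implied by a head-to-head score (draws ignored)."""
    p = wins / (wins + losses)
    return 400.0 * math.log10(p / (1.0 - p))

def elo_margin(games: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in Elo, for a match of `games` games.

    Uses the normal approximation to the binomial win rate and converts
    the win-rate margin to Elo via the local slope of the Elo curve,
    d(elo)/dp = 400 / (ln 10 * p * (1 - p)).
    """
    se = z * math.sqrt(p * (1.0 - p) / games)
    return 400.0 / math.log(10) * se / (p * (1.0 - p))

# A 56% score over 1000 games looks like roughly a 40-Elo gain...
print(elo_diff(560, 440))   # ~ +42 Elo
# ...but even 1000 games only resolves differences down to ~ +/- 21 Elo,
# so detecting a small per-doubling gain really does take thousands of games.
print(elo_margin(1000))
```

This is why anecdotal results from a handful of long-time-control games cannot settle the question either way: at 1 day per move, nobody has come close to playing enough games for the error bars to shrink below the effect being measured.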

Please read (from Darren Cook):

> Richard Segal (who operates Blue Fuego) has a paper on the upper limit
> for scaling:
> http://www.springerlink.com/content/b8p81h40129116kl/
> (Sorry, I couldn't find an author's download link for the paper; Richard
> is on the Fuego list but I'm not sure he is even a lurker here.)

Another paper that mentions the topic (also Fuego-specific) is here:
http://webdocs.cs.ualberta.ca/~mmueller/ps/fuego-TCIAIG.pdf

The short conclusion of these papers is that it was not worth it to scale
Fuego to an IBM Blue Gene machine (the original purpose of the work), as it
didn't get any better anymore, i.e., performance didn't (appreciably) scale
beyond 512 or so cores. Obviously, that's still a big improvement over a
desktop PC, but far from the capacity of the supercomputer available.

Zen also has a similar issue according to its co-author Hideki Kato. That's
evidence enough for me to indicate that there indeed is an issue with the
current breed of programs.

It appears that current specific implementations have a scaling ceiling. I
expect, as expressed by multiple people here, that once that ceiling gets
close enough to become a limitation, people will change their algorithms
accordingly and scaling will continue. But it will only happen once you can
comfortably test the program in that resource regime.

René
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
