> -----Original Message-----
> From: Philippe Michel [mailto:[email protected]]
> Sent: Friday, September 11, 2009 11:07 PM
> To: Michael Petch
> Cc: Ingo Macherius; [email protected]
> Subject: Re: [Bug-gnubg] How many threads can gnubg (reliably) handle?
> [...]
>
> Surely, usefulness of a larger cache depends on the number of
> positions evaluated. Maybe something like:
>
> 2 ply play               seconds    10k positions   cache almost useless
> 2 ply analysis of match  minutes    1M              default size ok
> 4 ply an./short rollout  hours      100M            max from GUI ~ok
> long rollout             days       billions        "set cache" from CLI higher
>                                                     than GUI max, useful if you
>                                                     have the RAM for it
[...]
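(For the last row of Philippe's table, raising the cache from the command line might look like the session below. This is a sketch, not a verified transcript: the exact argument units and limits of "set cache" depend on the gnubg build, so check "help set cache" in your own version first.)

```
# Start gnubg in text mode, without the GUI
gnubg -t

# Inside the CLI, raise the evaluation cache beyond the GUI maximum.
# The argument's units (cache entries vs. bytes) are build-dependent;
# consult "help set cache" before picking a value.
(gnubg) set cache 4194304
```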
Thanks to all benchmarkers, the picture is getting clearer, at least for me. But let me repeat my request/use case.

I'm using gnubg as the backend for a match-playing bot, which means I am interested in exactly one thing: the fastest possible evaluation of a single given position via the CLI. The cache in this use case, as Philippe writes, is "almost useless". OK, that makes sense.

However, threading is just as non-existent in this use case. A bot uses the CLI version and hooks into it with either the "external" or the "hint" command. Neither of the two is accelerated by threading: "external" uses only one core, regardless of how many are available. I didn't test "hint", but I'm pretty sure it behaves the same.

It's a work-partitioning issue. As far as I understand it now, the thread workload is decomposed into batches of full or partial matches, so with this approach a single move always ends up on only one core.

Question: how much effort would it be to support two thread workload partitioning strategies, "match analysis" and "real time play"?

Ingo

_______________________________________________
Bug-gnubg mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/bug-gnubg
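P.S. For what it's worth, the difference between the two partitioning strategies can be sketched outside gnubg. The Python below is a hypothetical illustration only: evaluate() is a dummy stand-in for the neural-net evaluation, and none of these function names are gnubg's.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(position):
    # Dummy stand-in for gnubg's (expensive) neural-net evaluation
    # of one position; returns a fake "equity" score.
    return sum(position) % 7

def analyse_match(positions, workers=4):
    """'Match analysis' partitioning: many independent positions,
    so farming whole positions out to worker threads scales well."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate, positions))

def best_move_serial(candidates):
    """'Real time play' as it stands: one position, all its
    candidate moves evaluated on a single core."""
    return max(candidates, key=evaluate)

def best_move_parallel(candidates, workers=4):
    """'Real time play' with finer partitioning: spread the
    candidate moves of a single decision across threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate, candidates))
    return candidates[scores.index(max(scores))]
```

Both best_move variants pick the same move; only the work partitioning differs. (In pure CPython the threads would not speed this up because of the GIL; the point holds for a C evaluator like gnubg's, where each evaluation can run on its own core.)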
