Good morning. We're looking at replacing the Australian national ranking system, and the question has come up: how many players, and how many recent games per player, are needed for Elo to generate good strength ratings?
(Questions this begs: what does a "good" set of ratings even mean? Does it matter whether the play graph (edges = games, vertices = players) is well connected or quite cliquey? Is Elo the last word in rating algorithms? Do humans behave differently from bots when they know they're being rated?) Does anybody know of a good academic paper, or ideally someone's thesis? My apologies if this is off-topic, but it's an interesting computation related to go...

Cheers,
Horatio
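For anyone following along, a minimal sketch of the standard Elo update the question is about (the K-factor of 32 here is just an illustrative choice; real systems tune it, or vary it with rating and game count):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One-game update; score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.

    Returns the pair of updated ratings (new_r_a, new_r_b).
    """
    e_a = elo_expected(r_a, r_b)
    new_r_a = r_a + k * (score_a - e_a)
    new_r_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_r_a, new_r_b

# Two equal 1500 players: the winner gains k/2 = 16 points.
print(elo_update(1500.0, 1500.0, 1.0))  # (1516.0, 1484.0)
```

How fast ratings converge under this update (and hence how many games per player you need) depends heavily on K and on how well connected the play graph is, which is exactly the question above.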
_______________________________________________
Computer-go mailing list
[email protected]
http://computer-go.org/mailman/listinfo/computer-go
