Wodzu,
There are roughly two types of approaches to improving the skill of
computer go programs: incremental and breakthrough. I think for
incremental approaches, ones where lots of work results in small gains
in go playing performance, you are correct. Any optimization of
execution speed will result in better go playing performance per
time unit. This matters if the measurement is winning current
computer-go competitions.
However, what I hear quite a few people saying is that they are looking
for "breakthrough" types of go skill improvement, at the application
level rather than in the individual lines of code. For that,
adaptability matters more, because their primary constraint is personal
time, not eventual execution speed. Their focus is on completing more
experiments per unit of personal time, in the hope of stumbling into a
new "domain" where go playing performance takes an order-of-magnitude
leap. Once in the new domain, their priorities may shift back to
incremental work as they begin to see what limits the new technique
imposes.
This is where I am at personally. I am far more interested in
probability-based techniques than in perfecting tactics/fighting
strategies. I am very happy others are more interested in those areas.
It's a good match.
There are short-term benefits to the incremental approach. However, it
likely hits the threshold of diminishing returns much sooner if one
follows the standard constraint-reduction techniques (largest first,
etc.). At that point, all the optimizations start fighting against any
attempt to change or expand into newer ideas, techniques, and
experiments in an effort to keep making progress. Once one has
prioritized code execution speed as a root value in moving a system
forward, there is almost always a large loss in adaptability, and the
eventual result is that the person abandons the bulk of the previous
code base and essentially starts from scratch. In many, if not most,
cases the error persists in the new approach: the fixation on
maximizing execution speed remains too strong, and possible avenues of
exploration get pruned too quickly.
There is no short answer to the Go problem. It is going to take lots
of investment by both the code speed optimizers (incremental) and the
new technique inventors/innovators (breakthrough), and an effective
integration of both, before we see results substantially superior to
what we are experiencing right now. Personally, I have been enjoying
the discussions around the progress being made on the UCT/Monte Carlo
techniques. I am eager to start playing in that domain.
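(For anyone who hasn't followed those threads: UCT is Monte Carlo tree
search where the next move to sample is picked by the UCB1 bandit
formula. Below is a minimal sketch in Python of just the selection and
backpropagation steps. The Node class, the ucb1 helper, and the
constant C are my own illustration, not any particular program's code;
the expansion and random-playout steps are omitted.)

    import math

    C = 1.4  # exploration constant; sqrt(2) is the textbook UCB1 value

    class Node:
        def __init__(self, move=None, parent=None):
            self.move = move          # move that led to this position
            self.parent = parent
            self.children = []
            self.visits = 0
            self.wins = 0.0

    def ucb1(child, parent_visits):
        # average reward plus an exploration bonus that shrinks as
        # the child gets visited more often
        if child.visits == 0:
            return float('inf')       # always try unvisited children first
        return (child.wins / child.visits
                + C * math.sqrt(math.log(parent_visits) / child.visits))

    def select(node):
        # walk down the tree, at each step taking the child that
        # maximizes the UCB1 score
        while node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        return node

    def backpropagate(node, result):
        # result is 1.0 for a win, 0.0 for a loss; a real engine would
        # flip the result at each ply for the side to move
        while node is not None:
            node.visits += 1
            node.wins += result
            node = node.parent

What appeals to me here is that the playouts stand in for hand-crafted
evaluation, which is exactly the kind of new "domain" I mean above.
And of course tuning C, like everything else, quickly turns back into
incremental work.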
Jim
Wodzu wrote:
Huh, why not use Pascal? It has the speed of C and
the simplicity of Java :)
heck, you could use perl. plenty of packages
available (it can even be made multithreaded!),
shared memory packages, etc.
i mean, if speed isn't your top concern...
I think speed is one of the most important things because it affects
the strength of the program ;) (if the time per move is restricted).
Anyway, choosing a proper (fastest) algorithm is of crucial importance,
and other things like the language, the data structures used, and so
on, matter less for improving speed.
That's my opinion, regards.
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/