Fulvio, your tests are simulations, but I did a real test with
a two-byte version, and the position search was two times slower.
In my test the observable time was 10 seconds, but the computation
time for the tree search was about 1 second. This means that the
computation time is negligible compared to the disk IO time. The move
decoding consumes less than 10% of the total computation time for
the tree search (this is only an estimate, but I know my code
very well). This means that it is impossible to save time with a
faster decoding algorithm. The only realistic way is to decrease
the disk IO time.

Steve says that:
> small file fetches are of virtually the same speed because of
> disk segment size i suppose.

This seems to be true on newer systems (with SATA, for example), but
on older systems (with IDE) it is indeed better to fetch smaller parts
(I think due to a smaller IO bus). Currently my tree search algorithm
uses a block file reader which reads chunks with a size of 131072
bytes. Now I've implemented a separate reader for the tree search which
reads chunks of 4096 bytes (smaller chunk sizes seem to be
unrealistic). I did a test on my old main station (with IDE), and the
observable time is 3 times faster with the new reader. I did the test
on my notebook with a SATA disk, and the observable time didn't change.
As a result: on systems with a slow bus (IDE) the position search time
can be improved in this way, but not on newer systems. It would be of
interest if someone else tried the new reader. You have to update from
the SVN repository (#266), and the following change in "Makefile.in" is
required:

  CXXFLAGS    = -Wall -DUSE_SEPARATE_SEARCH_READER

After "make clean; make; sudo make install" the new reader will be
used. Note that this implementation is experimental. It would
be of interest to have more results about the behavior of the
new reader on other systems.

As I already stated, the only realistic way is to decrease the disk
IO time. This means the only realistic way is to improve the
position search algorithm, so that fewer games will be searched.
Scidb is already using a quite different position search algorithm
than Scid. If you load a database in Scidb, you will see that the
position tree for the standard position is computed very fast;
normally it's the first move on the board which takes the longest
time. In Scid it is the starting position that takes the longest
time (if you do not move before the tree is finished). The behavior
of both algorithms is different. In some cases Scidb's algorithm is
considerably faster than Scid's, in some cases a bit slower (due to
processing time; my algorithm is more complex). And there is one secret:
Scidb's search algorithm is not yet finished. One step is missing
which will decrease the number of games to be searched once again, but
it is hard to predict whether this last step will have a significant
impact. I will implement this last step later (it's a quite complex task).

Gregor
-- 
Gregor Cramer
email: <gcra...@gmx.net>