> When I started using Scid I was amazed by the incredible speed of
> position searching (compared to ChessBase 9, which I used at that time).
> However, you had to wait until the search was completed before
> making another move, and I hate waiting for computer programs.
> So I thought: OK, I have plenty of free RAM, let's add an option
> to load the full database into RAM.
> Very clever indeed: my code made searching SLOWER!

Shane indeed made a great database program.

> However, maybe I made some errors and my measurements were wrong: any
> more tests/info is certainly welcome.
But you need to test many different positions, and you can't
disable the OS optimizations (caching, read-ahead) if you want
meaningful results.
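
That said, on Linux you can at least make each timing run start with
a cold cache. A minimal sketch (my addition, not part of the tests
discussed here; needs root):

  # flush pending writes, then drop the page cache so the next
  # run reads from disk instead of from cached pages
  sync
  sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'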

> Let's say we have a big database file (filename.big) > 200MB.
> .... Point (2) will take the same time as Point (4)

I'm not sure about all the details of your testing, though it seems OK.
Yes - small reads from the file take virtually the same time because
of the disk segment (block) size, I suppose: the disk fetches a whole
block whether you ask it for 128 or 256 bytes.

Running your tests here gives identical times.
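
A test in this spirit can be sketched as follows (the use of dd and
the random-offset scheme are assumptions of mine, not necessarily
what script.sh actually does):

  #!/bin/bash
  # Rough sketch of a record-read timing test - not the real script.sh.
  # Usage: ./script.sh <record-size-in-bytes>
  RECSIZE=${1:-128}
  FILE=filename.big
  NRECS=$(( $(wc -c < "$FILE") / RECSIZE ))

  # read 1000 records from pseudo-random offsets in the file
  for i in $(seq 1 1000); do
      REC=$(( (RANDOM * 32768 + RANDOM) % NRECS ))  # RANDOM alone tops out at 32767
      dd if="$FILE" bs="$RECSIZE" skip="$REC" count=1 of=/dev/null 2>/dev/null
  done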

But one thing you haven't considered is that the source file will
be (on the order of) twice as big: the same number of records at
256 bytes each takes twice the space that 128-byte records do.

When I make filename.big twice as large for the 256-byte test, there
is a small slowdown on my system.

filename.big = 14 GB

  #time script.sh 256

  real 2.22
  user 0.39
  sys 0.21

filename.big = 7.8 GB

  #time script.sh 128

  real 1.93
  user 0.40
  sys 0.20
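
(To reproduce this, the two files can be created with something like
the commands below - mine, not the original setup; the file content
doesn't matter for read timings:)

  dd if=/dev/zero of=filename.big bs=1M count=7800    # ~7.8 GB file
  dd if=/dev/zero of=filename.big bs=1M count=14000   # ~14 GB file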

I'm not saying your conclusions are wrong, just clarifying a
detail.

Steve



