I've been doing some work to try to identify the actual costs
associated with an index scan, with limited success so far. What's been run
so far can be seen at http://stats.distributed.net/~decibel. But there's
a couple of problems. First, I can't use the box exclusively for this
testing, which leads to some inconsistency in the results. Second, I've been
using a dataset that I can't make public, which means no one else can
run these tests on different hardware.
So what I think would be useful is some way to generate a known dataset,
and then be able to run tests against it on different machines. In the
case of testing index scans, we need to be able to vary correlation,
which so far I've been doing by ordering by different columns. I suspect
it will also be important to test with different tuple sizes. There's
also the question of whether the cache should be flushed between runs.
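To make the idea concrete, here's one way such a generator could work: build a key column that is perfectly ordered, then degrade its correlation with the physical row order by swapping random pairs. This is only a sketch of the approach; the `make_column` and `pearson` names and the swap-based knob are my own illustration, not anything proposed in this thread.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def make_column(n, swap_frac, seed=0):
    """Return n integer keys whose correlation with row order
    degrades as swap_frac rises (0.0 = perfectly ordered)."""
    rng = random.Random(seed)
    vals = list(range(n))
    for _ in range(int(n * swap_frac)):
        i, j = rng.randrange(n), rng.randrange(n)
        vals[i], vals[j] = vals[j], vals[i]
    return vals

if __name__ == "__main__":
    positions = list(range(10000))
    # Higher swap fractions should yield lower correlation values.
    for frac in (0.0, 0.1, 0.5, 2.0):
        col = make_column(10000, frac)
        print(frac, round(pearson(positions, col), 3))
```

The resulting keys could then be loaded into a table and indexed, giving a repeatable dataset whose correlation (as pg_stats would report it) is controlled by a single parameter, so the same tests can be rerun on different hardware.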
Does this sound like a good way to determine actual costs for index
scans (and hopefully other access methods in the future)? If so, what
would be a good way to implement this?
Jim C. Nasby, Database Consultant [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"