On Thu, Mar 08, 2007 at 05:35:03PM +0000, Gregory Stark wrote:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
> > "Umar Farooq Minhas" <[EMAIL PROTECTED]> writes:
> >> How can we accurately estimate the "seq_page_fetch" and
> >> "random_page_fetch" costs from outside postgres using, for example, a
> >> C routine?
> >
> > Use a test case larger than memory.  Repeat many times to average out
> > noise.  IIRC, when I did the experiments that led to the current
> > random_page_cost of 4.0, it took about a week before I had numbers I
> > trusted.
> When I was running tests I did it on a filesystem where nothing else was
> running. Between tests I unmounted and remounted it. As I understand it,
> Linux associates the cache with the filesystem rather than the block
> device, and discards all pages from cache when the filesystem is
> unmounted.
> That doesn't contradict anything Tom said, but it might be useful as an
> additional tool.

Another trick I've used in the past is to just run the machine out of
memory, using the following:

/*
 * $Id: clearmem.c,v 1.1 2003/06/29 20:41:33 decibel Exp $
 * Utility to clear out a chunk of memory and zero it. Useful for flushing
 * disk buffers.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2 || !calloc(atoi(argv[1]), 1024 * 1024))
        printf("Error allocating memory.\n");
    return 0;
}

I'll monitor top while that's running to ensure that some stuff gets
swapped out to disk. I believe this might still leave some cached data
in other areas of the kernel, but it's probably not enough to worry about.

Jim Nasby                                            [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
