On Fri, Oct 14, 2005 at 03:34:43PM -0500, Kevin Grittner wrote:
> of the two times as a reliability factor. Unfortunately, that
> means doubling the number of cache flushes, which is likely
> to be the most time-consuming part of running the tests. On
> the bright side, we would capture the top level runtimes you
> want.
Actually, if you shut down the database and run this bit of code with a high enough number, you should have a nicely cleaned cache:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2 || !calloc(atoi(argv[1]), 1024 * 1024))
            printf("Error allocating memory.\n");
        return 0;
    }

Running that on a dual Opteron (842's, I think) gives:

    [EMAIL PROTECTED]:35]~:10>time ./a.out 3300
    3.142u 8.940s 0:40.62 29.7% 5+4302498k 0+0io 2pf+0w

That was on http://stats.distributed.net and resulted in about 100MB being paged to disk. With 3000 it only took 20 seconds, but it might not have cleared 100% of memory.

-- 
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf     cell: 512-569-9461
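One caveat worth noting: on many systems a calloc() this large is satisfied with lazily zero-filled pages, so some of the memory may never actually be faulted in unless something writes to it. Below is a minimal sketch of a variant that touches one byte per page to force the allocation to become resident; the `flush_cache` name and the hard-coded 4 KB page size are my assumptions, not part of the original post:

```c
#include <stdlib.h>

/* Hypothetical variant of the allocation trick above: write one byte
 * per page so every page is actually faulted in, which is what pushes
 * cached file data out of RAM.  Assumes a 4 KB page size; on POSIX
 * systems sysconf(_SC_PAGESIZE) would be the portable choice. */
int flush_cache(size_t mb)
{
    const size_t page_size = 4096;   /* assumed page size */
    size_t bytes = mb * 1024 * 1024;
    char *buf = malloc(bytes);

    if (buf == NULL)
        return -1;
    for (size_t off = 0; off < bytes; off += page_size)
        buf[off] = 1;                /* touch each page */
    free(buf);
    return 0;
}
```

As with the original program, it only helps if the amount requested is close to (or beyond) physical RAM, so that the kernel is forced to evict cached pages.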