(from #postgresql IRC on freenode)

darkblue_b I did an interesting experiment the other day davidfetter_vmw .. 
davidfetter_vmw do tell
darkblue_b well you know I do these huge monolithic PostGIS queries on an 
otherwise idle Linux machine.. and there was a persistent thought in my head 
that PostgreSQL+PostGIS did not make good use of memory allocation >2G

darkblue_b so I had this long, Python-driven analysis.. 15 steps.. some of 
them, unusual for me, are multiple queries running at once on the same data ... 
and others are just one labor-intensive thing after the next  
    (one result table is 1.8M rows for 745M on disk, others are smaller)
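
(For illustration only: a minimal sketch of how a Python driver might run two 
of the step queries at the same time, each on its own connection and therefore 
its own backend. psycopg2, the DSN, and the table/query names are assumptions 
here, not the actual analysis code.)

    import threading
    import psycopg2

    DSN = "dbname=gisdb"  # assumed connection string

    # placeholder PostGIS-style queries standing in for two concurrent steps
    QUERIES = [
        "SELECT ST_Union(geom) FROM parcels_a;",
        "SELECT ST_Union(geom) FROM parcels_b;",
    ]

    def run_query(sql):
        # one connection per thread, so each query gets its own backend
        conn = psycopg2.connect(DSN)
        try:
            cur = conn.cursor()
            cur.execute(sql)
            conn.commit()
        finally:
            conn.close()

    threads = [threading.Thread(target=run_query, args=(q,)) for q in QUERIES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()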

darkblue_b I finally got the kinks worked out.. so I ran it twice.. 4.5 hours 
on our hardware.. once with shared_buffers set to 2400M and the second time 
with shared_buffers set to 18000M

darkblue_b work_mem was unchanged at 640M and.. the run times were within 
seconds of each other.. no improvement, no penalty
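
(For reference, the settings described above as they would appear in 
postgresql.conf for the two runs; the MB spellings and the comments are mine, 
and everything else is assumed unchanged between runs.)

    # run 1
    shared_buffers = 2400MB
    work_mem = 640MB

    # run 2 (only shared_buffers changed)
    shared_buffers = 18000MB
    work_mem = 640MB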

darkblue_b I have been wondering about this for the last two years

davidfetter_vmw darkblue_b, have you gone over any of this on -performance or 
-hackers? 
darkblue_b no - though I think I should start a blog .. I have a couple of 
things like this now 
darkblue_b good story though eh ?

davidfetter_vmw darkblue_b, um, it's a story that hasn't really gotten started 
until you've gotten some feedback from -performance 
darkblue_b ok - true...

darkblue_b    pg 9.1, PostGIS 1.5.3, Ubuntu Server Oneiric 64-bit, dual Xeons; 
one Western Digital black-label drive for pg_default; one 3-disk RAID 5 array 
for the database tablespace

==
Brian Hamlin
GeoCal
OSGeo California Chapter
415-717-4462 cell

