Hello,
I've been away from Postgres for several years, so please forgive me if I've forgotten nearly everything :-)

I've just inherited a database collecting environmental data. There's a background process continually inserting records (not that often, to tell the truth) and a web interface for querying the data. At the moment the database holds 250M records and is growing all the time. The 3 main tables have just 3 columns each.
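For what it's worth, the tables are shaped roughly like this (names here are illustrative, not the real ones, but each table is a narrow, append-only time series of the same form):

  -- Illustrative sketch only; real table/column names differ.
  CREATE TABLE measurement (
      sensor_id  integer          NOT NULL,
      ts         timestamptz      NOT NULL,
      value      double precision
  );
  -- Typical queries are per-sensor time-range scans, so an index
  -- along these lines is what I'd expect to matter:
  CREATE INDEX measurement_sensor_ts_idx ON measurement (sensor_id, ts);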

Queries run very, very slowly, taking around 20 minutes. The most obvious problem I see is that I/O wait is almost always above 90% while querying, and 30-40% when the system is "idle" (so to speak). Disk access is clearly to blame, but I'm a bit surprised, because the cluster this db runs on is not old iron at all: it's a VMware VM with 16GB RAM, 4 CPUs at 2.2GHz, and a 128GB disk (half of which is used). The storage underlying VMware is quite powerful, and this Postgres instance is the only system in the cluster that runs slowly.
I can increase resources if necessary, but...
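To confirm where the time goes, this is the kind of check I plan to run (the query and names are just a representative example, not the exact ones):

  -- BUFFERS shows how many blocks had to be read from disk ("read")
  -- versus found already cached in shared_buffers ("hit"):
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT ts, value
  FROM measurement
  WHERE sensor_id = 42
    AND ts >= now() - interval '30 days';

  -- And the memory settings that govern caching on this 16GB VM:
  SHOW shared_buffers;
  SHOW effective_cache_size;
  SHOW work_mem;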

Even before analyzing the queries (which I did), I'd like to know whether anyone has succeeded in running Postgres with 200-300M records and queries that run much faster than this. I'd like to compare the current configuration with a well-optimized one to identify which parameters need to change.
Any link to a working configuration would be much appreciated.
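To make the comparison concrete, I can post the output of something like this, which lists everything that has been changed from the built-in defaults here:

  -- Every parameter that differs from its built-in default:
  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override');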

Thanks for any help,
  Nico

