Kirk Strauser wrote:
> I have an hourly job that converts our legacy Foxpro database into
> PostgreSQL tables so that our web applications, etc. can run reports off
> the data in a reasonable amount of time.  Believe it or not, this has been
> running perfectly in production for over a year.  The only problem I'd
> still like to solve is that loading the data pegs the filesystem at 100%
> for many minutes at a time.

There's a dedicated mailing list for PostgreSQL performance:
pgsql-performance/at/, which can give you really good
advice, but here are some tips:

- Have you tuned pgsql away from the (very conservative) defaults,
i.e. increased shared_buffers, effective_cache_size, temp_buffers and
work_mem?
- It might help to keep the WAL (write-ahead log, i.e. the journal)
files on a completely separate (and fast) drive from the rest of the
database, so the sequential log writes don't compete with random data
I/O. For best results, format it with 32k blocks/8k fragments.
- If you don't specifically need the atomicity of transactions, you
might divide your import into many small transactions, for example one
for every 100,000 rows instead of doing 8 million at once.
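
On the first point, a postgresql.conf fragment along these lines is a
common starting point for an 8.1-era server. The numbers below are
purely illustrative (assuming a box with roughly 2 GB of RAM) - size
them to your own machine, they are not recommendations:

```
# postgresql.conf -- illustrative values, NOT recommendations
shared_buffers = 50000         # in 8 kB pages on 8.1, so ~400 MB;
                               # the shipped default is far smaller
effective_cache_size = 175000  # ~1.4 GB; a planner hint about the OS
                               # file cache, allocates no memory itself
work_mem = 32768               # in kB, so 32 MB per sort/hash step;
                               # multiply by concurrent sessions
temp_buffers = 16384           # in 8 kB pages, ~128 MB for temp tables
```

After editing, reload or restart the server (shared_buffers in
particular only takes effect on restart) and watch whether the load
window shrinks before raising anything further.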

If you want to be somewhat adventurous (but still within data safety
limits), you can try increasing wal_buffers, commit_delay and
checkpoint_timeout.
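
A sketch of those three settings, again with made-up numbers that you
would want to tune against your own workload:

```
# postgresql.conf -- bulk-load friendly, still crash-safe
wal_buffers = 64            # in 8 kB pages; the default (8) is tiny
                            # for multi-million-row loads
commit_delay = 10000        # microseconds; lets concurrent commits
                            # share a single WAL fsync
checkpoint_timeout = 900    # seconds (15 min); fewer, larger
                            # checkpoints during the load
```

None of these risk committed data: they only change how often and how
eagerly the WAL is flushed and checkpointed, not whether it is written.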

Also, what version of PostgreSQL are you using? As a general rule, the
newer the version, the faster it is. This is especially true if you're
using 7.x - go to 8.1.5 immediately (but don't use 8.2 until 8.2.1 gets
released).
