Thanks for the info on TOAST. We're still finding our feet with Postgres after several years on MySQL.
We do have the flexibility to adjust our data types and schema if we need to. We try to keep such changes to a minimum, but it's doable. I'm completely open to the possibility that we just have a very inefficient DB design or that we're misusing the data types.

We'll be running some more tests looking for the sweet spot between time and size. I expect we'll find a good balance somewhere.

Thanks!

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Mon 4/14/2008 7:58 PM
To: Phillip Smith
Cc: Ryan Wells; pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Slow pg_dump

"Phillip Smith" <[EMAIL PROTECTED]> writes:
>> Here's my interpretation of those results: the TOAST tables for
>> our image files are compressed by Postgres. During the backup,
>> pg_dump uncompresses them, and if compression is turned on,
>> recompresses the backup. Please correct me if I'm wrong there.

No, the TOAST tables aren't compressed; they're pretty much going to be the raw image data (plus a bit of overhead). What I think is happening is that COPY OUT is encoding the bytea data fairly inefficiently (one byte could go to \\nnn, five bytes) and the compression on the pg_dump side isn't doing very well at buying that back.

I experimented a bit and noticed that pg_dump -Fc is a great deal smarter about storing large objects than big bytea fields --- it seems to be pretty nearly one-to-one with the original data size when storing a compressed file that was put into a large object. I dunno if it's practical for you to switch from bytea to large objects, but in the near term I think that's your only option if the dump file size is a showstopper problem for you.

            regards, tom lane
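
A quick illustration of the escape-format inflation Tom describes, runnable in psql. decode() and encode() are standard functions; the final backslash doubling is done by COPY's text format itself, so it is shown here only as a comment:

    -- One raw byte (0xFF) stored as bytea:
    SELECT octet_length(decode('ff', 'hex'));      -- 1
    -- Escape encoding renders that byte as \377 (4 characters):
    SELECT encode(decode('ff', 'hex'), 'escape');  -- \377
    -- COPY's text format then doubles the backslash on output, so the
    -- dump file carries \\377: five bytes for one byte of image data.

Since already-compressed image data (JPEG, PNG, etc.) compresses poorly a second time, pg_dump's own compression buys little of that expansion back, which is why the dump can end up several times the table's on-disk size.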
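
If switching from bytea to large objects turns out to be practical, a rough migration sketch might look like the following. The images table and its data column are hypothetical stand-ins, and lo_from_bytea() is assumed to be available on the server (it does not exist on older releases):

    -- Hypothetical schema: images(id, data bytea)
    ALTER TABLE images ADD COLUMN data_oid oid;

    -- Copy each bytea value into a fresh large object and record its OID;
    -- passing 0 as the first argument lets the server choose the OID.
    UPDATE images SET data_oid = lo_from_bytea(0, data);

    -- After verifying the copies, the bytea column can be dropped:
    -- ALTER TABLE images DROP COLUMN data;

A dump taken afterwards with pg_dump -Fc should then store the image data nearly one-to-one with its original size, per Tom's experiment above.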