Hi Robert...

On Wed, Mar 11, 2015 at 11:54 AM, Robert Inder <rob...@interactive.co.uk>
wrote:

> Is our current "frequent pg_dump" approach a sensible way to go about
> things.  Or are we missing something?  Is there some other way to
> restore one database without affecting the others?
>

As you've been told before, pg_dump is the way to go, but it does hit the IO
load hard. Also, depending on where you are dumping to, you may be shooting
yourself in the foot ( dump to another disk, or to another machine ).

You may try streaming replication + pg_dump; we are currently doing this,
although not in your exact scenario.

That is: build a streaming replication slave and pg_dump from the slave. If
needed, restore into the master.
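
In case it is useful, here is a rough sketch of the commands involved ( host
names, the replication user and the /dumpdisk path are just placeholders, and
the conf values depend on your version ):

    # On the master, roughly ( postgresql.conf / pg_hba.conf ):
    #   wal_level = hot_standby
    #   max_wal_senders = 3
    #   host  replication  replicator  <slave-ip>/32  md5

    # On the slave, clone the master and start it as a standby:
    pg_basebackup -h master.example.com -U replicator -D /var/lib/pgsql/data -X stream -R

    # Dump one client's database from the slave onto the extra disk:
    pg_dump -h slave.example.com -U postgres -Fc -f /dumpdisk/clientdb.dump clientdb

    # If that client needs rolling back, restore just that database on the master:
    pg_restore -h master.example.com -U postgres -d clientdb --clean /dumpdisk/clientdb.dump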

The thing is, you can use desktop-class machines for the slave. If you do not
have spare machines, I would suggest a desktop-class machine with plenty of
RAM, whatever disks you need for the DB, plus an extra disk to pg_dump to
( so pg_dump does not compete with the DB for the DB disks, which really kills
performance ). A replication slave does not need that much RAM, as the only
queries it is going to run are the pg_dump ones, and desktop RAM is cheap
anyway. We did this with a not-so-powerful desktop with an extra SATA disk to
store the pg_dumps and it worked really well. We are presently using two
servers, with one of the spare gigabit interfaces and a crossover cable for
the replication connection plus an extra SATA disk for hourly pg_dumps, and it
works quite well.
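
For the hourly dumps it is just cron on the slave writing to that disk,
something along these lines ( the database name and path are made up for the
example ):

    # crontab on the slave: compressed dump every hour, timestamped file name
    # ( % has to be escaped inside crontab entries )
    0 * * * *  pg_dump -Fc -f /dumpdisk/clientdb_$(date +\%Y\%m\%d\%H).dump clientdb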

Francisco Olarte.
