Yes, the needs are simple.  I was also thinking about using DBI.  The most
important thing to me is that everything is kept in a transaction, so that
users can still read the data while I'm snapshotting it.  If my transaction
is isolated from all the concurrent reads, then it shouldn't matter how
long it takes to move the data over (granted, that will increase latency,
but this project isn't really latency-sensitive), and it will be
transparent to the end users.

Does anyone have any examples of using pg_dump in a transaction with a
DELETE or TRUNCATE command?  I have begun writing this to get the job done:

cat DELETE.sql COPYDATA.sql | psql -Upostgres -dMyDBName -hTestServer2

This command concatenates my two SQL files (the first one just deletes all
rows from a certain table, and the second one is a COPY command from a
previous pg_dump of that table) and pipes the result to psql to run on the
remote server.
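One way to get the single transaction I'm after, without editing either
file, might be psql's -1 (--single-transaction) switch, which wraps the
whole incoming script in one BEGIN/COMMIT (assuming my psql version has
that switch):

```shell
# Same pipeline as above, but -1 makes psql run the combined script
# inside one transaction, so readers see the old rows until COMMIT.
cat DELETE.sql COPYDATA.sql | psql -1 -U postgres -d MyDBName -h TestServer2
```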

I like what I have so far but would like to make it more dynamic.  If I
could eliminate the need for the two .sql files and make it all happen
within the command line, that would rock.  

I guess I'd need something like this... (Pseudo code...)

cat "DELETE FROM MyTable" pg_dump MyDBName -hTestServer1 -a -tMyTableName |
psql -Upostgres -dMyDBName -hTestServer2

I'm not sure how to prepend the DELETE to the COPY output coming from
pg_dump, and then pipe that complete stream to the remote server to be
executed as one transaction, so that users could still read from that
table while my command was running.
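Something like this might do it: grouping the commands in a subshell lets
me prepend an explicit BEGIN and the DELETE to pg_dump's output, then pipe
the whole stream to the remote psql as one transaction.  This is an
untested sketch; the table and server names are just the ones from my
pseudocode above:

```shell
# Emit BEGIN, the DELETE, the data-only dump of the table, and COMMIT
# as one SQL stream, then feed the whole thing to psql on the remote
# server.  Readers on TestServer2 keep seeing the old rows until the
# final COMMIT is applied.
(
  echo "BEGIN;"
  echo "DELETE FROM MyTable;"
  pg_dump -a -t MyTableName -h TestServer1 MyDBName
  echo "COMMIT;"
) | psql -U postgres -d MyDBName -h TestServer2
```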

Any ideas???

Thanks in advance,


Christopher Browne-4 wrote:
> [EMAIL PROTECTED] (Paul Silveira) writes:
>> Does anyone have any good examples of implementing "snapshot"
>> replication. I know that PostgreSQL does not have snapshot
>> replication and that Slony-I is the recommended replication scenario
>> but I've configured it and it seems rather advanced for a shop that
>> is implementing PostgreSQL for the first time.  I have an
>> application that will be mostly reads and snapshot replication would
>> probably be simple enough and would work.  I was thinking about just
>> using pg_dump to do the trick because the DB should not get very
>> large.  Does anyone have any advanced examples of doing something
>> like this? Also, does anyone have any comments they'd like to share
>> about this...
> If your database is small, and your needs simple, then using pg_dump
> to generate "snapshots" is a perfectly reasonable idea.
> I suppose the primary complication is whether or not you have multiple
> databases around on the cluster...  If you don't, or if they all need
> to be "snapshotted," you might consider using pg_dumpall, which also
> creates users and databases.
> If pg_dumpall is unsuitable, then you'll still need to grab user
> information that isn't part of pg_dump output...
> -- 
> (reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
> "This .signature is  shareware.  Send in $20 for  the fully registered
> version..."
> ---------------------------(end of broadcast)---------------------------
> TIP 1: if posting/reading through Usenet, please send an appropriate
>        subscribe-nomail command to [EMAIL PROTECTED] so that your
>        message can get through to the mailing list cleanly
