On 12/07/2010 01:22 AM, Tom Lane wrote:
> Josh Berkus <j...@agliodbs.com> writes:
>>> However, if you were doing something like parallel pg_dump you could
>>> just run the parent and child instances all against the slave, so the
>>> pg_dump scenario doesn't seem to offer much of a supporting use-case for
>>> worrying about this. When would you really need to be able to do it?
>
>> If you had several standbys, you could distribute the work of the
>> pg_dump among them. This would be a huge speedup for a large database,
>> potentially, thanks to parallelization of I/O and network. Imagine
>> doing a pg_dump of a 300GB database in 10min.
>
> That does sound kind of attractive. But to do that I think we'd have to
> go with the pass-the-snapshot-through-the-client approach. Shipping
> internal snapshot files through the WAL stream doesn't seem attractive
> to me.
This kind of functionality would also be very useful for connection poolers and load balancers that distribute load across multiple hosts; they could use it to offer at least some sort of consistency guarantee across those hosts.

Stefan
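
[Editor's note: for illustration, a minimal sketch of what the pass-the-snapshot-through-the-client approach could look like from the client side. It borrows the pg_export_snapshot() / SET TRANSACTION SNAPSHOT syntax that later shipped in PostgreSQL 9.2; at the time of this thread no such interface existed yet, and the snapshot identifier shown is purely illustrative.]

    -- session 1: the coordinating connection establishes the reference snapshot
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns an id, e.g. '00000003-0000001B-1'

    -- sessions 2..N: worker connections adopt that snapshot before their first query
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
    -- every session now sees the same committed state, so the workers can
    -- read disjoint parts of the database in parallel with a consistent result

[The feature as eventually shipped only allows a snapshot to be imported within the same cluster and database; distributing the work across several standbys, as proposed above, would additionally require each standby to have replayed far enough for the exported snapshot to be valid there.]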