Tom,
On Fri, 2006-03-17 at 23:48 -0500, Tom Lane wrote:
> "Brandon Keepers" <[EMAIL PROTECTED]> writes:
> > If it makes a difference, when I un-tar the dump file from each failed
> > dump, it always has 2937 files in it.
>
> That's pretty consistent
If it makes a difference, when I un-tar the dump file from each failed
dump, it always has 2937 files in it. I tried this using an old copy
of the data directory that had significantly fewer blobs in it and got
the same result.
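For reference, I'm counting the members with something like this (the
dump file name is just a placeholder):

    tar tf mydb.dump.tar | wc -l

That's where the 2937 comes from.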
On 3/16/06, Brandon Keepers <[EMAIL PROTECTED]> wrote:
> Th
On Tue, 2006-03-14 at 23:09 -0500, Tom Lane wrote:
> 7.0 sets the lock table size to 64 * max_connections, so if you can
> crank max_connections up to 300 or so you should be able to dump.
> I think this will work ... it's definitely worth a shot before you
> start thinking about hacking the code.
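For anyone else hitting this: 7.0 predates postgresql.conf, so as far
as I can tell, raising max_connections means restarting the postmaster
with the -N flag, and the docs say -B must be at least twice -N.
Something like this, with the data directory path as a placeholder:

    postmaster -N 300 -B 600 -D /usr/local/pgsql/data

That works out to 64 * 300 = 19200 lock table entries.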
On 3/13/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> Brandon Keepers <[EMAIL PROTECTED]> writes:
> > Thanks for your quick response! I had actually just been trying that
> > (with 7.1) and came across another error:
>
> > NOTICE: ShmemAlloc: out of memory
Tom,
On Mon, 2006-03-13 at 20:38 -0500, Tom Lane wrote:
> pg_dump should work. If using a pg_dump version older than 8.1, you
> need to use the -b switch and a non-default output format (I'd suggest -Fc).
>
> regards, tom lane
Thanks for your quick response! I had actually just been trying that
(with 7.1) and came across another error:

NOTICE: ShmemAlloc: out of memory
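In case it matters, the invocation was along these lines (the database
name is just a stand-in):

    pg_dump -Fc -b mydb > mydb.dump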
I'm trying to upgrade a PostgreSQL 7.0.3 database that uses large
objects to a more recent version, but I'm not able to export the blobs.
pg_dumplo was added in 7.1, so I tried compiling and running that
against the 7.0.3 database, but I get the following error:
./contrib/pg_dumplo/pg_dumplo: Fail
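The invocation was roughly the following; the database name and output
directory are placeholders, and I'm going from the flags in the contrib
README, so treat it as approximate:

    pg_dumplo -a -d mydb -s /tmp/lo_dump

(As I read it, -a exports all large objects, -d names the database, and
-s sets the output directory.)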