Hi. I'm just quoting the answer to a similar question from a few weeks ago:
> Hello
>
> I guess "split" and "gzip" are your friends. You can pipe the
> "pg_dumpall" output to "split" with an option to cut the file into
> pieces. It is also recommended to use "gzip" or similar to compress
> the files.
>
> Hope this helps
>
> --
> Andreas Hödle (Systemadministration)
>
> Kühn & Weyh Software GmbH
> Linnestr. 1-3
> 79110 Freiburg
>
> WWW.KWSOFT.DE

Cheers,
Florian

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Andreas Hödle
> Sent: Friday, December 07, 2001 5:48 PM
> Cc: [EMAIL PROTECTED]
> Subject: Re: [ADMIN] pgdumpall_file is bigger than 2 Gigabyte
>
> "David M. Richter" schrieb:
> >
> > Hello!
> >
> > I've got a problem!
> > My database is almost 5 gigabytes in size,
> > so the dump will take at least 2 GB of hard disk.
> > But my kernel only supports files up to 2 GB!
> >
> > Any experiences with big dump files?
> >
> > Thanks a lot
> >
> > David
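The quoted advice boils down to one pipeline. Here is a minimal sketch; a generated file stands in for the `pg_dumpall` output so it can be tried without a database, and the 16 kB piece size is a toy value (for a real 5 GB dump you would use something like `-b 1024m` to stay under the 2 GB limit):

```shell
# Stand-in for the dump; in practice this would be `pg_dumpall > dump.sql`
head -c 100000 /dev/urandom > dump.sql

# Compress and split into pieces (dump.sql.gz.aa, dump.sql.gz.ab, ...),
# each piece below the kernel's file-size limit
gzip -c dump.sql | split -b 16k - dump.sql.gz.

# Restore: concatenate the pieces in shell-glob order (which matches the
# order split created them), decompress, and compare against the original
# (in practice you would pipe `gunzip` output into `psql` instead)
cat dump.sql.gz.* | gunzip > restored.sql
cmp dump.sql restored.sql && echo "round-trip OK"
```

With a real database the first and last steps become `pg_dumpall | gzip | split -b 1024m - dump.sql.gz.` and `cat dump.sql.gz.* | gunzip | psql`; no single file ever exceeds the chunk size.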