On Thu, Jul 3, 2014 at 4:50 PM, Ljubomir Ljubojevic cen...@plnet.rs wrote:
Whatever we do, we need the ability to create a point-in-time history.
We commonly use our archival dumps for audit, testing, and debugging
purposes. I don't think PG + WAL provides this type of capability. So at
On 07/07/2014 02:35 PM, SilverTip257 wrote:
On Thu, Jul 3, 2014 at 4:50 PM, Ljubomir Ljubojevic cen...@plnet.rs wrote:
Whatever we do, we need the ability to create a point-in-time history.
We commonly use our archival dumps for audit, testing, and debugging
purposes. I don't think PG + WAL
On 07/07/2014 02:56 PM, Reindl Harald wrote:
Am 07.07.2014 14:53, schrieb Ljubomir Ljubojevic:
Also, a check needs to be made whether xz supports multithreading like p7zip does
entering "xz --help" would have answered this:
--threads=NUM
    use at most NUM threads; the default is 1;
    set to 0 to use as many threads as there are processor cores
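For reference, a minimal sketch of a threaded invocation (the sample file is made up here; real multithreaded compression needs xz >= 5.2, older builds accept --threads but still compress on one core):

```shell
# make a dummy 1 MiB archive to compress
head -c 1048576 /dev/zero > sample.tar

# compress with up to 4 threads, keeping the input file;
# --threads=0 would use one thread per CPU core
xz --threads=4 --keep sample.tar

ls -l sample.tar.xz
```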
On 07.Jul.2014, at 14:53, Ljubomir Ljubojevic cen...@plnet.rs wrote:
On 07/07/2014 02:35 PM, SilverTip257 wrote:
On Thu, Jul 3, 2014 at 4:50 PM, Ljubomir Ljubojevic cen...@plnet.rs wrote:
I am inclined to use xz utils as opposed to 7zip since 7zip comes from a
3rd party repo.
check
On 07/07/2014 10:54 PM, Markus Falb wrote:
On 07.Jul.2014, at 14:53, Ljubomir Ljubojevic cen...@plnet.rs wrote:
On 07/07/2014 02:35 PM, SilverTip257 wrote:
On Thu, Jul 3, 2014 at 4:50 PM, Ljubomir Ljubojevic cen...@plnet.rs wrote:
I am inclined to use xz utils as opposed to 7zip since
On Mon, Jul 07, 2014 at 11:56:08PM +0200, Ljubomir Ljubojevic wrote:
On 07/07/2014 10:54 PM, Markus Falb wrote:
On 07.Jul.2014, at 14:53, Ljubomir Ljubojevic cen...@plnet.rs wrote:
On 07/07/2014 02:35 PM, SilverTip257 wrote:
On Thu, Jul 3, 2014 at 4:50 PM, Ljubomir Ljubojevic
On 07/08/2014 12:48 AM, Fred Smith wrote:
On Mon, Jul 07, 2014 at 11:56:08PM +0200, Ljubomir Ljubojevic wrote:
On 07/07/2014 10:54 PM, Markus Falb wrote:
On 07.Jul.2014, at 14:53, Ljubomir Ljubojevic cen...@plnet.rs wrote:
On 07/07/2014 02:35 PM, SilverTip257 wrote:
On Thu, Jul 3, 2014 at
Perhaps there is a file system that supports compression and would do a
good job with the snapshots transparently. Maybe even ZFS or btrfs do?
Hi,
I agree with Lee.
Btrfs actually does sport built-in compression as a mount argument/flag, and
the delta snapshots work beautifully well, but I
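A minimal sketch of what that looks like (device, mount point, and snapshot name are hypothetical; btrfs of that era supported zlib and lzo compression — not testable here since it needs root and a btrfs volume):

```shell
# mount a btrfs volume with transparent lzo compression
mount -o compress=lzo /dev/sdb1 /backup

# take a cheap read-only copy-on-write snapshot after each dump run;
# unchanged extents are shared between snapshots
btrfs subvolume snapshot -r /backup /backup/snap-2014-07-08
```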
--On Thursday, July 03, 2014 04:47:30 PM -0400 Stephen Harris li...@spuddy.org wrote:
On Thu, Jul 03, 2014 at 12:48:34PM -0700, Lists wrote:
Whatever we do, we need the ability to create a point-in-time history.
We commonly use our archival dumps for audit, testing, and debugging
purposes. I
Ljubomir Ljubojevic cen...@plnet.rs writes:
7za a -t7z $YearNum-$MonthNum.7z -i...@include.lst -mx$CompressionMetod -mmt$ThreadNumber -mtc=on
So, 742 files that uncompressed have 179 MB, compressed occupy only 452
KB, which is about 0.2% of the original size, 442 TIMES smaller:
Perhaps there is
On 07/02/2014 12:57 PM, m.r...@5-cent.us wrote:
I think the buzzword you want is dedup.
dedup works at the file level. Here we're talking about files that are
highly similar but not identical. I don't want to rewrite an entire file
that's 99% identical to the new file form, I just want to write
I think the buzzword you want is dedup.
dedup works at the file level. Here we're talking about files that are
highly similar but not identical. I don't want to rewrite an entire file
that's 99% identical to the new file form, I just want to write a small
set of changes. I'd use ZFS to keep
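For the record, the ZFS side of that is a couple of properties plus a snapshot per dump run, a sketch (pool and dataset names are hypothetical; needs a real zpool, so not runnable as-is):

```shell
# enable block-level dedup and compression on the backup dataset
zfs set dedup=on tank/pgdumps
zfs set compression=on tank/pgdumps

# snapshot after each dump; snapshots share all unchanged blocks
zfs snapshot tank/pgdumps@2014-07-08
zfs list -t snapshot -r tank/pgdumps
```

Note that dedup is block-level, so it only helps if identical blocks land at the same alignment — which is exactly the problem raised later in the thread.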
Lists wrote:
On 07/02/2014 12:57 PM, m.r...@5-cent.us wrote:
I think the buzzword you want is dedup.
dedup works at the file level. Here we're talking about files that are
highly similar but not identical. I don't want to rewrite an entire file
that's 99% identical to the new file form, I
On 7/2/2014 12:53 PM, Lists wrote:
I'm trying to streamline a backup system using ZFS. In our situation,
we're writing pg_dump files repeatedly, each file being highly similar
to the previous file. Is there a file system (EG: ext4? xfs?) that, when
re-writing a similar file, will write only
On Thu, Jul 3, 2014 at 2:06 PM, m.r...@5-cent.us wrote:
Lists wrote:
On 07/02/2014 12:57 PM, m.r...@5-cent.us wrote:
I think the buzzword you want is dedup.
dedup works at the file level. Here we're talking about files that are
highly similar but not identical. I don't want to rewrite an
Am 03.07.2014 um 21:19 schrieb John R Pierce pie...@hogranch.com:
On 7/2/2014 12:53 PM, Lists wrote:
I'm trying to streamline a backup system using ZFS. In our situation,
we're writing pg_dump files repeatedly, each file being highly similar
to the previous file. Is there a file system (EG:
On 07/03/2014 12:19 PM, John R Pierce wrote:
you do realize, adding/removing or even changing the length of a single
line in a block of that pg_dump file will change every block after it as
the data will be offset?
Yes. And I guess this is probably where the conversation should end. I'm
used
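John's point can be shown with coreutils alone — a toy example with 4-byte "blocks":

```shell
# two files differing only by one byte inserted at the front
printf 'AAAABBBBCCCC' > old.dat
printf 'XAAAABBBBCCCC' > new.dat

# every fixed-size block after the insertion point shifts, so
# block-aligned dedup finds nothing in common between the files
head -c 4 old.dat; echo    # AAAA
head -c 4 new.dat; echo    # XAAA
```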
On Thu, Jul 3, 2014 at 2:48 PM, Lists li...@benjamindsmith.com wrote:
On 07/03/2014 12:23 PM, Les Mikesell wrote:
But, since this is about postgresql, the right way is probably just to
set up replication and let it send the changes itself instead of doing
frequent dumps.
Whatever we do, we
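For reference, the replication route Les suggests is mostly configuration on the primary — a sketch of 9.x-era settings written to a sample file (the archive path is a placeholder; these lines would really go in the server's postgresql.conf):

```shell
# minimal streaming-replication / WAL-archiving settings for the
# primary (PostgreSQL 9.x era)
cat > postgresql.conf.sample <<'EOF'
wal_level = hot_standby
max_wal_senders = 3
archive_mode = on
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
EOF

grep wal_level postgresql.conf.sample
```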
On Thu, Jul 03, 2014 at 12:48:34PM -0700, Lists wrote:
Whatever we do, we need the ability to create a point-in-time history.
We commonly use our archival dumps for audit, testing, and debugging
purposes. I don't think PG + WAL provides this type of capability. So at
the moment we're down
On 07/03/2014 09:48 PM, Lists wrote:
On 07/03/2014 12:19 PM, John R Pierce wrote:
you do realize, adding/removing or even changing the length of a single
line in a block of that pg_dump file will change every block after it as
the data will be offset?
Yes. And I guess this is probably
Lists wrote:
I'm trying to streamline a backup system using ZFS. In our situation,
we're writing pg_dump files repeatedly, each file being highly similar
to the previous file. Is there a file system (EG: ext4? xfs?) that, when
re-writing a similar file, will write only the changed blocks and
On Wed, Jul 2, 2014 at 2:53 PM, Lists li...@benjamindsmith.com wrote:
I'm trying to streamline a backup system using ZFS. In our situation,
we're writing pg_dump files repeatedly, each file being highly similar
to the previous file. Is there a file system (EG: ext4? xfs?) that, when
re-writing