Thanks for the tip!
On 9/12/19 3:03 PM, Robert Trevellyan wrote:
> Your TL;DR for ashift refers to the ZFS record size, but ashift
> actually determines the block size (the smallest I/O unit; 12 means
> 4096-byte blocks, matching the common modern hard drive sector size),
> not the record size.
Your TL;DR for ashift refers to the ZFS record size, but ashift
actually determines the block size (the smallest I/O unit; 12 means
4096-byte blocks, matching the common modern hard drive sector size),
not the record size.
Robert Trevellyan
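A small sketch of the arithmetic behind that, with an illustrative pool-creation command (pool and device names here are examples, not from the thread):

```shell
# ashift is a base-2 exponent: ashift=12 selects 2^12 = 4096-byte blocks,
# matching 4K-sector ("Advanced Format") drives.
echo $((1 << 12))

# Hypothetical pool creation pinning 4K blocks (names are examples):
# zpool create -o ashift=12 tank /dev/sdb
```

ashift is fixed per vdev at creation time, so it is worth getting right up front.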
On Thu, Sep 12, 2019 at 8:22 AM Alwyn Kik wrote:
I did some research into this a while ago, and currently use the
following setup:
https://gist.github.com/Alveel/c8e80aeef208a7e27c9bd50d0023420c
Of course this is still far from perfect, and I welcome more ideas or
criticism, but perhaps it can help you out :)
On 9/11/19 10:03 PM, Carl Soderstrom wrote:
On 09/11 09:40 , Alexander Moisseev via BackupPC-users wrote:
> On 11.09.2019 18:19, Robert Trevellyan wrote:
> > I'm letting ZFS do the compression (using the default of LZ4) with BackupPC
> > handling deduplication. I think you'll find a reasonable consensus that ZFS
> > compression is always a win for storage space (it will store un-compressible
> > data unmodified).
Yeah, 'noatime' is a good idea for BackupPC in general.
Thanks for the advice on compression, it's good to know.
On 09/11 11:26 , Robert Trevellyan wrote:
> One more thing about ZFS in general - I always set noatime.
--
Carl Soderstrom
Systems Administrator
Real-Time Enterprises
One more thing about ZFS in general - I always set noatime.
Robert Trevellyan
On Tue, Sep 10, 2019 at 10:50 PM Carl Soderstrom <carl.soderst...@real-time.com> wrote:
> Thanks for the advice. Anyone else care to share their experience?
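A hedged sketch of that noatime suggestion (the dataset name tank/backuppc is hypothetical; these are live-pool admin commands, so adjust to your layout):

```shell
# Turn off access-time updates on the BackupPC dataset; BackupPC's
# pool traversal otherwise generates a metadata write per file read.
zfs set atime=off tank/backuppc

# Confirm the property took effect:
zfs get atime tank/backuppc
```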
I'm letting ZFS do the compression (using the default of LZ4) with BackupPC
handling deduplication. I think you'll find a reasonable consensus that ZFS
compression is always a win for storage space (it will store
un-compressible data unmodified), whereas ZFS deduplication is best avoided
in most cases.
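A minimal sketch of that division of labor on the ZFS side (dataset name is an example; dedup is off by default on ZFS, shown here only for emphasis):

```shell
# Let ZFS compress with LZ4 and leave deduplication disabled.
zfs set compression=lz4 tank/backuppc
zfs set dedup=off tank/backuppc

# After some backups land, check how well compression is doing:
zfs get compression,compressratio,dedup tank/backuppc
```

Unlike compression, ZFS dedup keeps an in-core dedup table that can consume large amounts of RAM, which is why it is commonly avoided.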
Thanks for the advice. Anyone else care to share their experience?
On 09/10 02:57 , Ray Frush wrote:
> We backup to a ZFS based appliance, and we allow ZFS to do compression and
> disable compression in BackupPC. We do not allow ZFS to de-duplicate.
We back up to a ZFS based appliance, and we allow ZFS to do compression and
disable compression in BackupPC. We do not allow ZFS to de-duplicate.
However, since you’re looking at doing ZFS on the same box that’s running
BackupPC, it probably doesn’t matter which one you have compression turned
on in.
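For anyone following this approach, BackupPC's own compression is controlled by $Conf{CompressLevel} in config.pl, where 0 disables it. A quick check of the current setting (the config path below is a common packaged default and may differ on your install):

```shell
# Show BackupPC's compression setting; $Conf{CompressLevel} = 0 means
# BackupPC stores files uncompressed and leaves compression to ZFS.
# (Path is an example; your config.pl may live elsewhere.)
grep -n 'CompressLevel' /etc/backuppc/config.pl
```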
We've been a BackupPC v3 shop since there's been a v3, and we're looking at
building our first v4 BackupPC server. The boss wants to put it on ZFS and a
JBOD controller.
I believe that for BackupPC v3 the advice was to turn off ZFS
filesystem-level deduplication and compression.
Is that still the case?