On Tue, 12 Dec 2017 12:18:23 +0000, Wols Lists wrote:
> > That means every write has to be encrypted 4 times, whereas using
> > encryption in the filesystem means it only has to be done once. I
> > tried setting encrypted BTRFS this way and there was a significant
> > performance hit. I'm seriously considering going back…
On 12/12/17 10:15, Neil Bothwick wrote:
> That means every write has to be encrypted 4 times, whereas using
> encryption in the filesystem means it only has to be done once. I tried
> setting encrypted BTRFS this way and there was a significant performance
> hit. I'm seriously considering going back…
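The 4x cost comes from stacking the array on top of one dm-crypt mapping per
member disk: every block the filesystem writes, plus its mirror or parity
copies, is ciphered once on each device it lands on. A rough sketch of the two
stackings, with hypothetical device names and md used for illustration:

    # cipher below the array: one LUKS mapping per disk, so each
    # physical write is encrypted separately on every member
    for d in sda1 sdb1 sdc1 sdd1; do
        cryptsetup luksOpen /dev/$d crypt-$d
    done
    mdadm --create /dev/md0 --level=6 --raid-devices=4 \
          /dev/mapper/crypt-sd[abcd]1

    # cipher above the array: encryption runs exactly once per write
    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]1
    cryptsetup luksFormat /dev/md0
    cryptsetup luksOpen /dev/md0 cryptarray

The flip side, which the rest of the thread wrestles with, is that the single
mapping leaves raw md metadata and parity structure on every pulled disk,
while per-disk LUKS leaves nothing readable.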
On Tue, 12 Dec 2017 00:20:48 +0100, Frank Steinmetzger wrote:
> My new drives are finally here. One of them turned out to be an OEM. -_-
> The shop says it will cover any warranty claims and it’s not a backyard
> seller either, so methinks I’ll keep it.
>
> To evaluate LUKS, I created the following…
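Frank's actual test layout is cut off by the archive snippet. For what it's
worth, a minimal way to gauge the LUKS overhead on a scratch device might look
like this (device name hypothetical; the dd pass overwrites the mapping):

    cryptsetup benchmark                  # raw cipher throughput in RAM
    cryptsetup luksFormat /dev/sdX2
    cryptsetup open /dev/sdX2 lukstest
    dd if=/dev/zero of=/dev/mapper/lukstest bs=1M count=4096 oflag=direct

Comparing the dd figure against the same write to the bare partition isolates
the encryption cost from the disk's own limits.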
On Thu, Dec 07, 2017 at 09:49:29PM +0000, Wols Lists wrote:
> On 07/12/17 21:37, Frank Steinmetzger wrote:
> > Ooooh, I just came up with another good reason for raidz over mirror:
> > I don't encrypt my drives because they don't hold sensitive stuff. (AFAIK
> > native ZFS encryption is available in Oracle ZFS, so it might eventually
> > come to the Linux world).
On Sun, Dec 10, 2017 at 4:00 PM, Wols Lists wrote:
>
> So the OP needs to be aware that, if his file is smaller than the chunk
> size, then it *will* be recoverable from a disk pulled from an array, be
> it md-raid or zfs.
>
> The question is, then, how big is a chunk? And if zfs is anything like
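Both halves of that question are queryable. md reports its chunk size
directly (the mdadm default has been 512 KiB for a long time), and the
nearest ZFS analogue is the per-dataset recordsize, 128 KiB by default.
Array and pool names hypothetical:

    mdadm --detail /dev/md0 | grep 'Chunk Size'
    zfs get recordsize tank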
On 10/12/17 15:07, Rich Freeman wrote:
>> Is that how ZFS works?
>>
> I doubt it, hence why I wrote "most parity RAID systems seem to
> operate just as you describe."

So the OP needs to be aware that, if his file is smaller than the chunk
size, then it *will* be recoverable from a disk pulled from an array, be
it md-raid or zfs.
On Sun, Dec 10, 2017 at 4:45 AM, Wols Lists wrote:
> On 09/12/17 23:36, Rich Freeman wrote:
>> you instead compute 5 sets of parity so that now you have 9 sets of
>> data that can tolerate the loss of any 5, then throw away the sets
>> containing the original 4 sets of data and store the remaining 5 sets
>> of parity data across the 5 drives.
On 09/12/17 23:36, Rich Freeman wrote:
> you instead compute 5 sets of parity so that now you have 9 sets of
> data that can tolerate the loss of any 5, then throw away the sets
> containing the original 4 sets of data and store the remaining 5 sets
> of parity data across the 5 drives. You can still…
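What Rich describes is the defining property of a maximum-distance-separable
(MDS) erasure code (Reed-Solomon is the standard construction), rather than
anything ZFS-specific. As a sketch: encode the $k = 4$ data blocks
$d = (d_1, \dots, d_4)$ into $n = 9$ coded blocks $c = Gd$ over
$\mathrm{GF}(2^8)$, where $G$ is a $9 \times 4$ generator matrix every
$4 \times 4$ submatrix of which is invertible (a Vandermonde matrix with
distinct evaluation points has this property). Any 4 surviving entries of $c$
then determine $d$ by inverting the corresponding submatrix, so the erasure
of any $n - k = 5$ blocks is survivable, and it is irrelevant whether the
blocks you keep are labelled "data" or "parity".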
On Sat, Dec 9, 2017 at 1:28 PM, Wols Lists wrote:
> On 09/12/17 16:58, J. Roeleveld wrote:
>> On Friday, December 8, 2017 12:48:45 AM CET Wols Lists wrote:
>>> On 07/12/17 22:35, Frank Steinmetzger wrote:
> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
> there.)
On 09/12/17 16:58, J. Roeleveld wrote:
> On Friday, December 8, 2017 12:48:45 AM CET Wols Lists wrote:
>> On 07/12/17 22:35, Frank Steinmetzger wrote:
>>>> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
>>>> there.)
>>>
>>> Ok, wasn’t aware of that. I thought I read in a ZFS article that this was
>>> a special thing.
On Friday, December 8, 2017 12:48:45 AM CET Wols Lists wrote:
> On 07/12/17 22:35, Frank Steinmetzger wrote:
> >> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
> >> there.)
> >
> > Ok, wasn’t aware of that. I thought I read in a ZFS article that this was
> > a special thing.
On 07/12/17 22:35, Frank Steinmetzger wrote:
>> (Oh - and md raid-5/6 also mix data and parity, so the same holds true
>> there.)
> Ok, wasn’t aware of that. I thought I read in a ZFS article that this was a
> special thing.

Say you've got a four-drive raid-6, it'll be something like
data1…
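The layout sketch is truncated here, but the pattern being drawn is the usual
rotating raid-6 stripe, in which parity (P) and syndrome (Q) shift one disk
per stripe so that every drive ends up holding a mix of data and parity. One
common rotation, purely for illustration:

              disk1  disk2  disk3  disk4
    stripe 0  D1     D2     P      Q
    stripe 1  D3     P      Q      D4
    stripe 2  P      Q      D5     D6
    stripe 3  Q      D7     D8     P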
On Thu, Dec 7, 2017 at 11:04 AM, Frank Steinmetzger wrote:
> On Thu, Dec 07, 2017 at 10:26:34AM -0500, Rich Freeman wrote:
>
>> […] They want 1GB/TB RAM, which rules out a lot of the cheap ARM-based
>> solutions. Maybe you can get by with less, but finding ARM systems with
>> even 4GB of RAM is
On Thu, Dec 07, 2017 at 09:49:29PM +0000, Wols Lists wrote:
> > So in case I ever need to send in a drive for repair/replacement, no one can
> > read from it (or only in tiny bits'n'pieces from a hexdump), because each
> > disk contains a mix of data and parity blocks.
> >
> > I think I'm finally s
On 07/12/17 21:37, Frank Steinmetzger wrote:
> Ooooh, I just came up with another good reason for raidz over mirror:
> I don't encrypt my drives because they don't hold sensitive stuff. (AFAIK
> native ZFS encryption is available in Oracle ZFS, so it might eventually
> come to the Linux world).
>
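For the record, native encryption did later arrive in the Linux port, with
OpenZFS 0.8. A minimal sketch, pool and dataset names hypothetical:

    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
        -o keylocation=prompt tank/private

Note that unlike whole-disk LUKS it leaves the pool structure and dataset
names readable, which is worth knowing for the warranty-return scenario
discussed elsewhere in the thread.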
On Wed, Dec 06, 2017 at 07:29:08PM -0500, Rich Freeman wrote:
> On Wed, Dec 6, 2017 at 7:13 PM, Frank Steinmetzger wrote:
> > On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> >>
> >> IMO the cost savings for parity RAID trumps everything unless money
> >> just isn't a factor.
> >
>
On 07/12/17 20:17, Richard Bradfield wrote:
> On Thu, Dec 07, 2017 at 06:35:16PM +0000, Wols Lists wrote:
>> On 07/12/17 09:52, Richard Bradfield wrote:
>>> I did also investigate USB3 external enclosures, they're pretty
>>> fast these days.
>>
>> AARRGGH !!!
>>
>> If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!
On Thu, Dec 07, 2017 at 06:35:16PM +0000, Wols Lists wrote:
> On 07/12/17 09:52, Richard Bradfield wrote:
>> I did also investigate USB3 external enclosures, they're pretty
>> fast these days.
>
> AARRGGH !!!
>
> If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!
>
> I don't know the details, but I gather the problems are very similar to
> the timeout problems…
On 07/12/17 14:53, Frank Steinmetzger wrote:
> When I configured my kernel the other day, I discovered network block
> devices as an option. My PC has a hotswap bay[0]. Problem solved. :) Then I
> can do zpool replace with the drive-to-be-replaced still in the pool, which
> improves resilver read d
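That is indeed how zpool replace behaves when both disks are attached: the
outgoing drive stays in the vdev and keeps serving reads until the resilver
completes. Roughly, with hypothetical pool and device names:

    zpool replace tank ata-OLD-DISK ata-NEW-DISK
    zpool status tank     # watch the resilver run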
On 07/12/17 09:52, Richard Bradfield wrote:
> I did also investigate USB3 external enclosures, they're pretty
> fast these days.
AARRGGH !!!
If you're using mdadm, DO NOT TOUCH USB WITH A BARGE POLE !!!
I don't know the details, but I gather the problems are very similar to
the timeout problems…
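The timeout problem, for context: a desktop drive can spend minutes retrying
an unreadable sector internally, the kernel's SCSI layer gives up after 30
seconds, and md then fails the entire drive out of the array. On directly
attached SATA disks the usual mitigations are below; many USB bridges don't
pass these commands through at all, which is a large part of why USB and
mdadm mix so badly. Device name hypothetical:

    smartctl -l scterc /dev/sda           # query SCT error-recovery control
    smartctl -l scterc,70,70 /dev/sda     # cap recovery at 7 s read/write
    echo 180 > /sys/block/sda/device/timeout   # or raise the kernel timeout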
On Thu, Dec 07, 2017 at 10:26:34AM -0500, Rich Freeman wrote:
> On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger wrote:
> >
> > I see. I'm always looking for ways to optimise expenses and cut down on
> > environmental footprint by keeping stuff around until it really breaks. In
> > order to increase capacity, I would have to replace all four drives,
> > whereas with a…
On Thu, Dec 7, 2017 at 9:53 AM, Frank Steinmetzger wrote:
>
> I see. I'm always looking for ways to optimise expenses and cut down on
> environmental footprint by keeping stuff around until it really breaks. In
> order to increase capacity, I would have to replace all four drives, whereas
> with a
On Thu, Dec 07, 2017 at 09:52:55AM +0000, Richard Bradfield wrote:
> On Thu, 7 Dec 2017, at 09:28, Frank Steinmetzger wrote:
> > > I incorporated ZFS' expansion inflexibility into my planned
> > > maintenance/servicing budget.
> >
> > What was the conclusion? That having no more free slots meant that you
> > can just as well use the inflexible Raidz, otherwise you would have gone…
On Thu, 7 Dec 2017, at 09:28, Frank Steinmetzger wrote:
> > I incorporated ZFS' expansion inflexibility into my planned
> > maintenance/servicing budget.
>
> What was the conclusion? That having no more free slots meant that you
> can just as well use the inflexible Raidz, otherwise you would have gone…
On Thu, Dec 07, 2017 at 07:54:41AM +0000, Richard Bradfield wrote:
> On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> >On Wed, Dec 6, 2017 at 6:28 PM, Frank Steinmetzger wrote:
> >>
> >>I don’t really care about performance. It’s a simple media archive powered
> >>by the cheapest Haswell Celeron I could get…
On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> On Wed, Dec 6, 2017 at 6:28 PM, Frank Steinmetzger wrote:
> > I don’t really care about performance. It’s a simple media archive powered
> > by the cheapest Haswell Celeron I could get (with 16 Gigs of ECC RAM though
> > ^^). Sorry if I more or less stole the thread…
On Wed, Dec 6, 2017 at 7:13 PM, Frank Steinmetzger wrote:
> On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
>>
>> IMO the cost savings for parity RAID trumps everything unless money
>> just isn't a factor.
>
> Cost saving compared to what? In my four-bay scenario, mirror and raidz2…
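The arithmetic Frank is presumably driving at: with four drives of capacity
$s$, the two layouts cost the same in space, since striped mirrors give
$4s \cdot \tfrac12 = 2s$ usable and raidz2 gives $(4 - 2)\,s = 2s$ usable.
The choice then comes down to failure modes: mirrors survive one loss per
pair (and resilver faster), raidz2 survives any two losses.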
On Wed, Dec 06, 2017 at 06:35:10PM -0500, Rich Freeman wrote:
> On Wed, Dec 6, 2017 at 6:28 PM, Frank Steinmetzger wrote:
> >
> > I don’t really care about performance. It’s a simple media archive powered
> > by the cheapest Haswell Celeron I could get (with 16 Gigs of ECC RAM though
> > ^^). Sorry if I more or less stole the thread…
On Wed, Dec 6, 2017 at 6:28 PM, Frank Steinmetzger wrote:
>
> I don’t really care about performance. It’s a simple media archive powered
> by the cheapest Haswell Celeron I could get (with 16 Gigs of ECC RAM though
> ^^). Sorry if I more or less stole the thread, but this is almost the same
> topic…
On Fri, Dec 01, 2017 at 12:14:12PM -0500, Rich Freeman wrote:
> On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists wrote:
> > On 27/11/17 22:30, Bill Kenworthy wrote:
> >> […]
> >> Is anyone here successfully using btrfs raid 5/6? What is the status of
>> scrub and self healing? The btrfs wiki is woefully…
On 01/12/17 17:14, Rich Freeman wrote:
> You could run btrfs over md-raid, but other than the snapshots I think
> this loses a lot of the benefit of btrfs in the first place. You are
> vulnerable to the write hole,
The write hole is now "fixed".
In quotes because, although journalling has now be
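The fix referred to is md's write journal: stripes are logged to a dedicated
device before being committed to the array, so an unclean shutdown can no
longer leave a stripe half-written. A sketch, assuming a hypothetical SSD
partition as the journal:

    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sd[abcd]1 --write-journal /dev/nvme0n1p2

Every stripe write passes through the journal device, so it needs to be fast
and reliable.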
On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists wrote:
> On 27/11/17 22:30, Bill Kenworthy wrote:
>> Hi all,
>> I need to expand two bcache fronted 4xdisk btrfs raid 10's - this
>> requires purchasing 4 drives (and one system does not have room for two
>> more drives) so I am trying to see if using raid 5 is an option…
On 27/11/17 22:30, Bill Kenworthy wrote:
> Hi all,
> I need to expand two bcache fronted 4xdisk btrfs raid 10's - this
> requires purchasing 4 drives (and one system does not have room for two
> more drives) so I am trying to see if using raid 5 is an option
>
> I have been trying to find if btrfs raid 5/6 is stable enough to use but…
On Monday, November 27, 2017 11:30:13 PM CET Bill Kenworthy wrote:
> Hi all,
> I need to expand two bcache fronted 4xdisk btrfs raid 10's - this
> requires purchasing 4 drives (and one system does not have room for two
> more drives) so I am trying to see if using raid 5 is an option
>
> I have been trying to find if btrfs raid 5/6 is stable enough to use but…
Hi all,
I need to expand two bcache fronted 4xdisk btrfs raid 10's - this
requires purchasing 4 drives (and one system does not have room for two
more drives) so I am trying to see if using raid 5 is an option
I have been trying to find if btrfs raid 5/6 is stable enough to use but
while t
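For anyone weighing the same move, the commonly suggested compromise at the
time was raid5 for data only, with metadata kept at raid1, plus regular
scrubbing and a watch on the per-device error counters. A hedged sketch on
scratch devices, paths hypothetical:

    mkfs.btrfs -d raid5 -m raid1 /dev/sd[bcde]
    btrfs scrub start -B /mnt/test     # -B: run in foreground
    btrfs device stats /mnt/test       # cumulative per-device error counts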