I've looked into SnapRAID and it seems well suited to my needs as
most of the data is static. I'm planning on using it in conjunction
with mhddfs so all drives are seen as a single storage pool. Is there
then any benefit in using Btrfs as the underlying filesystem on each
of the drives?
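For what it's worth, the SnapRAID + mhddfs combination sketched above might look something like the following. All paths and mount points here are illustrative, not a recommendation for any particular layout:

```shell
# Hypothetical layout: /mnt/disk1../mnt/disk3 hold data, /mnt/parity holds parity.
# Minimal /etc/snapraid.conf sketch -- adjust paths to your own mounts:
#
#   parity  /mnt/parity/snapraid.parity
#   content /var/snapraid/snapraid.content
#   content /mnt/disk1/snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/
#   data d3 /mnt/disk3/
#
# Pool the data disks (not the parity disk) into one mount point:
mhddfs /mnt/disk1,/mnt/disk2,/mnt/disk3 /mnt/pool -o allow_other

# SnapRAID parity is updated on demand, not in real time, so after
# adding or changing files run (manually or from cron):
snapraid sync
snapraid scrub   # optionally re-verify existing data against parity
```

Note that SnapRAID should be pointed at the individual disk mounts, not at the mhddfs pool, since parity is computed per underlying disk.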
On Thu, Oct 15, 2015 at 07:39:20AM +0200, audio muze wrote:
> Thanks Chris
>
> I should've browsed recent threads, my apologies. Terribly
> frustrating though that the issues you refer to aren't documented in
> the btrfs wiki. Reading the wiki one is led to believe that the only
> real issue is the write hole that can occur as a result of a power loss.
audio muze writes:
> It seems to me that the simplest option at present is probably to use
> each disk separately, formatted btrfs, and backed up to other drives.
> The data to be stored on these drives is largely static - video and
> audio library.
In that case this might be applicable to your
Thanks Roman, but I don't have the appetite to use mdadm and have the
array take forever to build or get yet another set of risks to
ultimately migrate from mdadm to btrfs when raid6 is stable. It seems
to me that the simplest option at present is probably to use each disk
separately, formatted btrfs, and backed up to other drives. The data
to be stored on these drives is largely static - video and audio
library.
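A minimal sketch of that independent-disks-plus-backup approach, assuming hypothetical device names and mount points:

```shell
# Illustrative devices only: one media disk, one backup disk.
mkfs.btrfs /dev/sdb && mount /dev/sdb /mnt/media1
mkfs.btrfs /dev/sdc && mount /dev/sdc /mnt/backup1

# Keep the library in a subvolume so it can be snapshotted:
btrfs subvolume create /mnt/media1/library

# Periodic backup: take a read-only snapshot and replicate it with
# send/receive (read-only is required on the send side):
btrfs subvolume snapshot -r /mnt/media1/library /mnt/media1/library.snap
btrfs send /mnt/media1/library.snap | btrfs receive /mnt/backup1/
```

For largely static data this works well, since subsequent runs can use `btrfs send -p <previous-snapshot>` to transfer only the incremental changes.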
Thanks Chris
I should've browsed recent threads, my apologies. Terribly
frustrating though that the issues you refer to aren't documented in
the btrfs wiki. Reading the wiki one is led to believe that the only
real issue is the write hole that can occur as a result of a power
loss. There I was
On Thu, 15 Oct 2015 06:11:49 +0200
audio muze wrote:
> Before I go down this road I'd appreciate thoughts/ suggestions/
> alternatives? Have I left anything out? Most importantly is btrfs
> raid6 now stable enough to use in this fashion?
I would suggest going with Btrfs on top of mdadm RAID6,
See the other recent thread on the list "RAID6 stable enough for production?"
A lot of your questions have already been answered in recent previous threads.
While there are advantages to Btrfs raid56, there are some missing
parts that make it incomplete and possibly unworkable for certain use
cases.
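The mdadm-under-btrfs suggestion above could be sketched roughly as follows; the device names are placeholders for the six drives, and the mount options are illustrative:

```shell
# Build a RAID6 array from the six disks (placeholder device names):
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

# Single-device btrfs on top of the array: mdadm provides the
# redundancy, while btrfs still contributes checksums, snapshots and
# compression. Note btrfs can detect data corruption here but cannot
# self-heal it, since it holds only one copy of the data.
mkfs.btrfs -L pool /dev/md0
mount -o compress=lzo /dev/md0 /mnt/pool

# Later growth: add a seventh disk, let md reshape, then grow the fs:
mdadm --add /dev/md0 /dev/sdh
mdadm --grow /dev/md0 --raid-devices=7
btrfs filesystem resize max /mnt/pool
```

The reshape step is where the "waiting days to expand" cost mentioned elsewhere in the thread comes in: md rewrites the whole array, regardless of how much of it holds data.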
On Thu, Oct 15, 2015 at 3:11 PM, audio muze wrote:
> Rebuilds and/or expanding the array should be pretty quick given only
> actual data blocks are written on rebuild or expansion as opposed to
> traditional raid systems that write out the entire array.
While that might be the intended final functionality
Hi
I've 6 x 3TB SATA drives available with a view to consolidating long
term storage to a single raid array intended to be operational more or
less 24/7. I've done this a few times too many and run into the
inevitable issues like waiting days to expand raid arrays whilst
running the risk of disk failure.