On Oct 27, 2014, at 3:26 AM, Stephan Alz <stephan...@gmx.com> wrote:
> 
> My question is where to go from here? What I'm going to do right now is copy 
> the most important data to another separate XFS drive.
> What I'm planning to do is:
> 
> 1. Upgrade the kernel
> 2. Upgrade BTRFS
> 3. Continue the balancing.

Definitely upgrade the kernel and see how that goes; there have been many, many 
changes since 3.13. I would also upgrade the user space tools, but that's not as 
important.
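
For reference, here's a quick sketch of how to confirm what you're actually 
running before and after the upgrade (exact package names depend on your 
distro):

    uname -r           # running kernel version
    btrfs --version    # btrfs-progs (user space tools) version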

FYI, you can mount with the skip_balance mount option to keep the balance from 
resuming; sometimes pausing the balance isn't fast enough when there are balance 
problems.
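
If it comes to that, something like this would do it (assuming a mount point of 
/mnt and /dev/sdb as one of the member devices; both are just placeholders):

    umount /mnt
    mount -o skip_balance /dev/sdb /mnt   # balance stays paused, won't auto-resume
    btrfs balance status /mnt             # confirm nothing is running
    btrfs balance cancel /mnt             # optionally drop the balance entirely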

> 
> 
> Could someone please also explain how exactly the raid10 setup works with an 
> ODD number of drives in btrfs? 
> Raid10 should be a stripe of mirrors. So is this sdf drive mirrored, striped, 
> or what?

Honestly, I have no idea. Btrfs is very tolerant of adding odd numbers and sizes 
of devices, but things can get a bit nutty in actual operation. This might be 
one of those cases, because traditional raid10 always uses an even number of 
drives; odd numbers just don't make sense there. But Btrfs allows the addition; 
I think the expectation is that you'd have added two drives before doing the 
balance, though.
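
If you want to see how the allocator actually laid things out across the five 
devices, these will show it (again assuming /mnt; the usage subcommand needs 
reasonably recent btrfs-progs):

    btrfs fi show /mnt     # per-device sizes and used space
    btrfs fi df /mnt       # which profiles (raid10 etc.) the chunks use
    btrfs fi usage /mnt    # newer progs: allocation broken down per device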

> Could some btrfs gurus tell me whether I should be worried about data loss 
> because of this or not?

Anything is possible, so hopefully you have backups. My expectation for the 
worst-case scenario is that the fs gets confused and you can't mount rw anymore, 
in which case you won't be able to make it an even-drive raid10. But even in 
that case, mounted ro, you can update your backups, blow away the Btrfs volume, 
and start from scratch with an even number of drives, right?
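
Roughly, the recreate path would look like this (device names and paths are 
just placeholders, and of course only after the backups are verified):

    mount -o ro /dev/sdb /mnt                       # read-only salvage mount
    rsync -aHAX /mnt/ /backup/                      # refresh the backups
    umount /mnt
    mkfs.btrfs -f -d raid10 -m raid10 /dev/sd[b-e]  # recreate with four devices
    mount /dev/sdb /mnt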

> Would I need even more free space just to add a 5th drive? If so, how much 
> more?

I'm gonna guess you'd need to add a drive that's at least 2.83 TiB in size if 
you want to keep it raid10.
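
If you do add it, the sequence is just a device add followed by a full balance 
to restripe onto the new drive (hypothetical device /dev/sdg, mount point /mnt):

    btrfs device add /dev/sdg /mnt
    btrfs balance start /mnt    # restripes existing raid10 chunks across all devices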

Chris Murphy