On Wed, Nov 16, 2011 at 12:03:10PM -0800, Mark Fasheh wrote:
On Wed, Nov 16, 2011 at 11:35:40AM -0800, Mark Fasheh wrote:
We should do it per FS though, I'll patch up btrfs.
I agree about doing it per FS. Ocfs2 just needs a one-liner to mark the
journal transaction as synchronous.
Original Message
Date: Thu, 17 Nov 2011 08:42:48 +0100
From: Arne Jansen sensi...@gmx.net
To: Thomas Schmidt schmid...@gmx.de
CC: linux-btrfs@vger.kernel.org
Subject: Re: [RFC] improve space utilization on off-sized raid devices
On 17.11.2011 12:53, Thomas Schmidt wrote:
On 17.11.2011 01:27, Thomas Schmidt wrote:
In your setup, it should stripe to all 8 devices until the 5 smaller ones
are full, and from then on stripe to the 3 remaining devices.
AFAIK, the behavior you describe is exactly the problem.
It wants to
Original Message
Date: Thu, 17 Nov 2011 13:59:26 +0100
From: Arne Jansen sensi...@gmx.net
To: Thomas Schmidt schmid...@gmx.de
CC: linux-btrfs@vger.kernel.org
Subject: Re: [RFC] improve space utilization on off-sized raid devices
Right you are. So you want to sacrifice stripe size for space efficiency.
My previous patch introduced u64 for some failed_mirror variables; this one
makes them consistent again.
Signed-off-by: Jan Schmidt list.bt...@jan-o-sch.net
---
fs/btrfs/disk-io.c |2 +-
fs/btrfs/extent_io.c |4 ++--
fs/btrfs/extent_io.h |2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
We've been hitting BUG()'s in btrfs_cont_expand and btrfs_fallocate and anywhere
else that calls btrfs_get_extent while running xfstests 13 in a loop. This is
because fiemap is calling btrfs_get_extent with non-sectorsize aligned offsets,
which will end up adding mappings that are not sectorsize aligned.
Hiya,
Before setting up a new RAID10 btrfs array with 6 drives, I
wanted to check how well it behaved in case of disk failure.
I've not been too impressed. Is btrfs RAID10 support only
meant to improve read performance?
My test method was:
Use the device-mapper to have devices mapped
On 11/17/2011 7:59 AM, Arne Jansen wrote:
Right you are. So you want to sacrifice stripe size for space efficiency.
Why don't you just use RAID1?
Instead of reducing the stripe size for the majority of writes, I'd prefer
to allow RAID10 to go down to 2 disks. This should also solve it.
Yes, it
It seems overly harsh to fail a resize of a btrfs file system to the
same size when a shrink or grow would succeed. User app GParted trips
over this error. Allow it by bypassing the shrink or grow operation.
Signed-off-by: Mike Fleetwood mike.fleetw...@googlemail.com
---
Example failed resize:
2011-11-17 17:09:25 +, Stephane CHAZELAS:
[...]
Before setting up a new RAID10 btrfs array with 6 drives, I
wanted to check how well it behaved in case of disk failure.
I've not been too impressed. Is btrfs RAID10 support only
meant to improve read performance?
My test method
The usage is 'btrfs subvolume set-default id path', not
'path path'.
The code happens to work because strtoull(path) returns 0, but it
will fail if in the future we check the arguments more strictly in
btrfs-progs.
Signed-off-by: Li Zefan l...@cn.fujitsu.com
---
254 |4 ++--
1 files changed,
On 18/11/11 08:04, Mike Fleetwood wrote:
It seems overly harsh to fail a resize of a btrfs file system to the
same size when a shrink or grow would succeed. User app GParted trips
over this error. Allow it by bypassing the shrink or grow operation.
OK - I'm a newbie with the code (and I'm