Re: raid6: rmw writes all the time?

2013-05-23 Thread Bob Marley
On 23/05/2013 15:22, Bernd Schubert wrote: Yeah, I know, and I'm using iostat already. md raid6 does not do rmw, but it does not fill the device queue; afaik it flushes the underlying devices quickly, as it does not have barrier support. That is another topic, but it was the reason why I started to
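
Read-modify-write shows up in iostat as reads arriving on the member devices during a pure write workload. A quick way to watch for it, assuming the array members are sdb through sde (device names are illustrative):

    # rkB/s > 0 on the members during a sequential write
    # suggests read-modify-write cycles on the parity stripes
    iostat -x 1 sdb sdc sdd sde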

Re: Especially broken btrfs

2014-03-31 Thread Bob Marley
Hi, I hadn't noticed this post. I think I know the reason this time: you have used USB, you bad guy! I think USB does not support flush/barrier, which is mandatory for BTRFS to work correctly in case of power loss. For most filesystems actually, but the damage suffered by COW filesystems
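
For what it's worth, whether a USB bridge passes flushes through is hard to prove from userspace, but the drive's volatile write cache can at least be queried and turned off. A sketch, with an illustrative device name:

    # Query the drive's volatile write-cache state
    hdparm -W /dev/sdX
    # Disable the write cache, so acknowledged writes survive power loss
    # even if the bridge drops FLUSH CACHE commands
    hdparm -W0 /dev/sdX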

System call for offline deduplication

2012-10-15 Thread Bob Marley
Hello all btrfs developers, I would really appreciate a system call (or ioctl or the like) to allow deduplication of a block of one file against a block of another file (OK if blocks need to be aligned to filesystem blocks), so that if I know that bytes 32768...65536 of FileA are identical to
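
This request was later answered in mainline by the BTRFS_IOC_FILE_EXTENT_SAME ioctl (since generalized as FIDEDUPERANGE), which xfs_io exposes as its dedupe command. The FileA/FileB example above would read roughly as follows, assuming the matching range in FileB starts at the same offset:

    # Share bytes 32768..65536 of FileA into FileB at offset 32768;
    # the kernel verifies the ranges are identical before linking extents
    xfs_io -c "dedupe FileA 32768 32768 32768" FileB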

High-sensitivity fs checker (not repairer) for btrfs

2012-11-10 Thread Bob Marley
Hello all, I would like to know if there exists a tool to check a btrfs filesystem very thoroughly. It's OK if it needs the FS unmounted to operate; mounted is also OK. It does not need repair capability. It needs very good checking capability: it has to return a Good/Bad status, with the Bad
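
For reference, the two checkers that exist can both be driven read-only; a sketch with placeholder device and mountpoint:

    # Offline, unmounted: btrfs check is read-only unless --repair is given
    btrfs check /dev/sdX
    # Online, mounted: scrub verifies every checksum as the data is read
    btrfs scrub start -B /mnt
    btrfs scrub status /mnt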

Re: High-sensitivity fs checker (not repairer) for btrfs

2012-11-10 Thread Bob Marley
On 11/10/12 22:23, Hugo Mills wrote: The closest thing is btrfsck. That's about as picky as we've got to date. What exactly is your use-case for this requirement? We need a decently available system. We can roll back the filesystem to the last known good state if the test detects an inconsistency
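
The rollback-to-last-known-good pattern is usually built from read-only snapshots plus set-default; a minimal sketch, assuming the root filesystem lives in a subvolume named @:

    # After a clean check, keep a read-only known-good snapshot
    btrfs subvolume snapshot -r /mnt/@ /mnt/@known-good
    # If a later check fails, boot from the snapshot instead
    btrfs subvolume list /mnt            # note the ID of @known-good
    btrfs subvolume set-default <ID> /mnt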

Re: BTRFS, getting darn slower everyday

2012-12-09 Thread Bob Marley
On 12/09/12 12:38, Hugo Mills wrote: On Sun, Dec 09, 2012 at 12:20:46PM +0100, Swâmi Petaramesh wrote: On 09/12/2012 11:41, Roman Mamedov wrote: A CoW filesystem incurs fragmentation by its nature, not specifically from snapshots. Even without snapshots, rewriting portions of existing files will
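
Fragmentation from in-place rewrites on a CoW filesystem is easy to observe directly; for example, with a placeholder file name:

    # Count extents before and after rewriting parts of an existing file
    filefrag -v somefile
    # autodefrag makes the kernel queue such files for background defrag
    mount -o remount,autodefrag /mnt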

Re: 1 week to rebuild 4x 3TB raid10 is a long time!

2014-07-20 Thread Bob Marley
On 20/07/2014 10:45, TM wrote: Hi, I have a raid10 with 4x 3TB disks on a microserver (http://n40l.wikia.com/wiki/Base_Hardware_N54L), 8 GB RAM. Recently one disk started to fail (SMART errors), so I replaced it: mounted as degraded, added the new disk, removed the old. Started yesterday; I am monitoring
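
For reference, the add-then-delete sequence described here rebalances the whole array; btrfs replace rewrites only the failed device's chunks and is normally much faster. A sketch with illustrative device names:

    # Replace the failing disk in one pass instead of add + delete;
    # -r avoids reading from the failing source unless no mirror exists
    btrfs replace start -r /dev/sdb /dev/sdf /mnt
    btrfs replace status /mnt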

Performance reduces with nodatasum

2014-10-04 Thread Bob Marley
Hello, apparently I have found an issue with btrfs: performance reduces with nodatasum and multi-device raid0 or single. I was testing with a series of 8 LIO ramdisks, with btrfs on them in multi-device single mode, writing zeroes to the filesystem with 16 dd processes in parallel. Performance
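
A minimal reproduction of the test as described, assuming the multi-device filesystem is created on the ramdisks and mounted at /mnt:

    # Mount without data checksums, then run 16 parallel writers of zeroes
    mount -o nodatasum /dev/sdb /mnt
    for i in $(seq 1 16); do
        dd if=/dev/zero of=/mnt/file$i bs=1M count=1024 &
    done
    wait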

Re: Performance reduces with nodatasum

2014-10-04 Thread Bob Marley
On 04/10/2014 12:26, Bob Marley wrote: Hello, apparently I have found an issue with btrfs. Sorry, I forgot to mention the kernel version: 3.14.19; not tested with higher versions.

Re: Performance reduces with nodatasum

2014-10-04 Thread Bob Marley
On 04/10/2014 12:36, Bob Marley wrote: On 04/10/2014 12:26, Bob Marley wrote: Hello, apparently I have found an issue with btrfs. Sorry, I forgot to mention the kernel version: 3.14.19; not tested with higher versions. I just noticed that the page I have linked, which also reports the problem

Re: What is the vision for btrfs fs repair?

2014-10-10 Thread Bob Marley
On 10/10/2014 03:58, Chris Murphy wrote: * mount -o recovery: Enable autorecovery attempts if a bad tree root is found at mount time. I'm confused why it's not the default yet. Maybe it's continuing to evolve at a pace that suggests something could sneak in that makes things worse?
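
For reference, the option under discussion is a one-off mount option that falls back to older tree roots only when the current one is unreadable (device and mountpoint are placeholders; kernels from 4.6 on spell it usebackuproot):

    # Attempt autorecovery from backup tree roots at mount time
    mount -o recovery /dev/sdX /mnt
    # Equivalent spelling on 4.6+ kernels
    mount -o usebackuproot /dev/sdX /mnt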

Re: What is the vision for btrfs fs repair?

2014-10-10 Thread Bob Marley
On 10/10/2014 12:59, Roman Mamedov wrote: On Fri, 10 Oct 2014 12:53:38 +0200 Bob Marley bobmar...@shiftmail.org wrote: On 10/10/2014 03:58, Chris Murphy wrote: * mount -o recovery: Enable autorecovery attempts if a bad tree root is found at mount time. I'm confused why it's

Re: What is the vision for btrfs fs repair?

2014-10-10 Thread Bob Marley
On 10/10/2014 16:37, Chris Murphy wrote: The fail-safe behavior is to treat the known good tree root as the default tree root, and bypass the bad tree root if it cannot be repaired, so that the volume can be mounted with default mount options (i.e. the ones in fstab). Otherwise it's a

Re: device balance times

2014-10-22 Thread Bob Marley
On 22/10/2014 14:40, Piotr Pawłow wrote: On 22.10.2014 03:43, Chris Murphy wrote: On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote: Looks normal to me. Last time I started a balance after adding a 6th device to my FS, it took 4 days to move 25 GB of data. It's long term
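
When the goal is just to spread existing data onto a newly added device, a filtered balance moves far less than a full one; a hedged example:

    # Only rewrite data block groups that are at most 50% full,
    # typically enough to start using the new device
    btrfs balance start -dusage=50 /mnt
    btrfs balance status /mnt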

Re: [PATCH] Btrfs: fix race condition between writing and scrubbing supers

2013-10-20 Thread Bob Marley
On 19/10/2013 16:03, Stefan Behrens wrote: On 10/19/2013 12:32, Shilong Wang wrote: Yeah, it does not hurt, but it may output a checksum mismatch. For example: writing a 4k superblock is not totally finished, but we are trying to scrub it. Have you ever seen this issue? ... If this is

Re: btrfs and ECC RAM

2014-01-20 Thread Bob Marley
On 20/01/2014 15:57, Ian Hinder wrote: i.e. that there is parity information stored with every piece of data, and ZFS will correct errors automatically from the parity information. So this is not just parity data to check correctness, but there are many more additional bits to actually

Re: I need to P. are we almost there yet?

2015-01-03 Thread Bob Marley
On 03/01/2015 14:11, Duncan wrote: Bob Marley posted on Sat, 03 Jan 2015 12:34:41 +0100 as excerpted: On 29/12/2014 19:56, sys.syphus wrote: Specifically (P)arity, very specifically n+2. When will raid5/raid6 be at least as safe to run as raid1 currently is? I don't like the idea of being 2

Re: I need to P. are we almost there yet?

2015-01-03 Thread Bob Marley
On 29/12/2014 19:56, sys.syphus wrote: Specifically (P)arity, very specifically n+2. When will raid5/raid6 be at least as safe to run as raid1 currently is? I don't like the idea of being 2 bad drives away from total catastrophe. (And yes, I back up; it just wouldn't be fun to go down that