Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900: On 2016/04/21 20:58, Qu Wenruo wrote: On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be

About fi du and reflink/dedupe

2016-04-21 Thread Qu Wenruo
Hi Mark, Thanks for your contribution to the btrfs filesystem du command. However, there seems to be some strange behavior related to reflink (and, further, in-band dedupe). (And the root cause lies quite deep in the kernel backref resolving code.) ["Exclusive" value not really exclusive] When a
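
A rough sketch of the reflink case under discussion, assuming a scratch btrfs filesystem mounted at /mnt with made-up file names: after a reflink copy the two files share extents, which is exactly what makes the "Exclusive" column hard to compute.

  $ dd if=/dev/zero of=/mnt/a bs=1M count=128    # write some data
  $ cp --reflink=always /mnt/a /mnt/b            # second file shares the same extents
  $ btrfs filesystem du -s /mnt                  # compare the "Exclusive" and "Shared" columns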

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/21 20:58, Qu Wenruo wrote: On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be better to post the dmesg for better debug. So here

Re: Raid5 replace disk problems

2016-04-21 Thread Duncan
Jussi Kansanen posted on Thu, 21 Apr 2016 18:09:31 +0300 as excerpted: > The replace operation is super slow (no other load) with avg. 3x20MB/s > (old disks) reads and 1.4MB/s write (new disk) with CFQ scheduler. Using > deadline schd. the performance is better with avg. 3x40MB/s reads and >
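
A side note on the scheduler comparison above: the IO scheduler is switched per block device through sysfs; a minimal example, with the device name purely illustrative:

  $ cat /sys/block/sda/queue/scheduler             # available schedulers, current one in brackets
  $ echo deadline > /sys/block/sda/queue/scheduler # switch this device to deadline (needs root)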

Re: [PATCH] btrfs: Test that qgroup counts are valid after snapshot creation

2016-04-21 Thread Qu Wenruo
Mark Fasheh wrote on 2016/04/21 16:53 -0700: Thank you for the review, comments are below. On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote: On 2016/04/20 7:25, Mark Fasheh wrote: +# Force a small leaf size to make it easier to blow out our root +# subvolume tree

[PATCH v2] btrfs: Test that qgroup counts are valid after snapshot creation

2016-04-21 Thread Mark Fasheh
This has been broken since Linux v4.1. We may have worked out a solution on the btrfs list but in the meantime sending a test to expose the issue seems like a good idea. Changes from v1-v2: - cleanups - added 122.out Signed-off-by: Mark Fasheh --- tests/btrfs/122 | 88
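
Outside xfstests, the same situation can be probed by hand; a hedged sketch, assuming a freshly made filesystem mounted at /mnt (paths and subvolume names are only illustrative):

  $ btrfs quota enable /mnt
  $ btrfs subvolume create /mnt/subvol
  $ # ... create enough files in /mnt/subvol to grow its subvolume tree ...
  $ btrfs subvolume snapshot /mnt/subvol /mnt/snap
  $ btrfs qgroup show /mnt                         # compare rfer/excl for subvol and snap
  $ umount /mnt; btrfs check --qgroup-report /dev/XXX   # offline qgroup consistency check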

Re: [PATCH] btrfs: Test that qgroup counts are valid after snapshot creation

2016-04-21 Thread Mark Fasheh
Thank you for the review, comments are below. On Wed, Apr 20, 2016 at 09:48:54AM +0900, Satoru Takeuchi wrote: > On 2016/04/20 7:25, Mark Fasheh wrote: > >+# Force a small leaf size to make it easier to blow out our root > >+# subvolume tree > >+_scratch_mkfs "--nodesize 16384" > > nodesize

Re: btrfs forced readonly + errno=-28 No space left

2016-04-21 Thread Chris Murphy
On Thu, Apr 21, 2016 at 6:53 AM, Martin Svec wrote: > Hello, > > we use btrfs subvolumes for rsync-based backups. During backups btrfs often > fails with "No space > left" error and goes to readonly mode (dmesg output is below) while there's > still plenty of > unallocated

Client databases, tel. +79133913837 (whatsapp,viber,telegram) Skype: prodawez389 Email: ammanakuw-7...@yopmail.com We will compile for you, from the internet, a database of potential clients for your business. Using the

2016-04-21 Thread ammanakuw-7...@yopmail.com
We will compile for you, from the internet, a database of potential clients for your business. Using the database you can call, write, send faxes and email, and run any kind of direct active sales of your goods and services. Find out more by phone: +79133913837 (whatsapp,viber,telegram) Skype: prodawez389 Email:

RE: btrfs forced readonly + errno=-28 No space left

2016-04-21 Thread E V
>we use btrfs subvolumes for rsync-based backups. During backups btrfs often >fails with "No >space >left" error and goes to readonly mode (dmesg output is below) while there's >still plenty of >unallocated space I have the same use case and the same issue with no real solution that I've found.

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Matthias Bodenbinder
Am 21.04.2016 um 07:43 schrieb Qu Wenruo: > There are already unmerged patches which will partly do the mdadm level > behavior, like automatically change to degraded mode without making the fs RO. > > The original patchset: > http://comments.gmane.org/gmane.comp.file-systems.btrfs/48335 The
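
Until such patches land, the manual equivalent is an explicit degraded mount; a minimal example (device and mount point names are illustrative):

  $ mount -o degraded /dev/sdb1 /mnt
  $ btrfs filesystem show /mnt                     # reports the device that is missing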

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Matthias Bodenbinder
Am 21.04.2016 um 13:28 schrieb Henk Slager: >> Can anyone explain this behavior? > > All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in > this test. What is on WD20 is unclear to me, but the raid1 array is > {WD75, WD50, SP2504C} > So the test as described by Matthias is not what

Re: btrfs-progs confusing message

2016-04-21 Thread Konstantin Svist
On 04/21/2016 04:02 AM, Austin S. Hemmelgarn wrote: > On 2016-04-20 16:23, Konstantin Svist wrote: >> Pretty much all commands print out the usage message when no device is >> specified: >> >> [root@host ~]# btrfs scrub start >> btrfs scrub start: too few arguments >> usage: btrfs scrub start

Raid5 replace disk problems

2016-04-21 Thread Jussi Kansanen
Hello, I have a 4x 2TB HDD raid5 array and one of the disks started going bad (according to SMART; no read/write errors seen by btrfs). After replacing the disk with a new one I ran "btrfs replace", which resulted in a kernel crash at about 0.5% done: BTRFS info (device dm-10): dev_replace from
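
For reference, the replace operation follows the usual form; the device names and mount point below are illustrative, not the ones from this report:

  $ btrfs replace start /dev/old-disk /dev/new-disk /mnt
  $ btrfs replace status /mnt                      # prints percentage done, e.g. "0.5%"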

Re: "/tmp/mnt.", and not honouring compression

2016-04-21 Thread Chris Murray
On Thu, 2016-03-31 at 23:43 +0100, Duncan wrote: > Chris Murray posted on Thu, 31 Mar 2016 21:49:29 +0100 as excerpted: > > > I'm using Proxmox, based on Debian. Kernel version 4.2.8-1-pve. > Btrfs > > v3.17. > > The problem itself is beyond my level, but aiming for the obvious low- > hanging

btrfs forced readonly + errno=-28 No space left

2016-04-21 Thread Martin Svec
Hello, we use btrfs subvolumes for rsync-based backups. During backups btrfs often fails with "No space left" error and goes to readonly mode (dmesg output is below) while there's still plenty of unallocated space: $ btrfs fi df /backup Data, single: total=15.75TiB, used=15.72TiB System, DUP:
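
When chasing this kind of premature ENOSPC it usually helps to look at the full allocation picture and, if data chunks are over-allocated, to compact them; a hedged sketch using the mount point from the report:

  $ btrfs filesystem usage /backup                 # size, allocated, unallocated, per-profile breakdown
  $ btrfs balance start -dusage=5 /backup          # optionally reclaim nearly-empty data chunks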

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be better to post the dmesg for better debug. So here we. I did the same test again. Here is

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Henk Slager
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi wrote: > On 2016/04/20 14:17, Matthias Bodenbinder wrote: >> >> Am 18.04.2016 um 09:22 schrieb Qu Wenruo: >>> >>> BTW, it would be better to post the dmesg for better debug. >> >> >> So here we. I did the same test

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Austin S. Hemmelgarn
On 2016-04-21 02:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be better to post the dmesg for better debug. So here we. I did the same test again. Here is a full log of what I did. It seems to me like

Re: btrfs-progs confusing message

2016-04-21 Thread Austin S. Hemmelgarn
On 2016-04-20 16:23, Konstantin Svist wrote: Pretty much all commands print out the usage message when no device is specified: [root@host ~]# btrfs scrub start btrfs scrub start: too few arguments usage: btrfs scrub start [-BdqrRf] [-c ioprio_class -n ioprio_classdata] | ... However, balance

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be better to post the dmesg for better debug. So here we. I did the same test again. Here is a full log of what I did. It seems to me

Re: KERNEL PANIC + CORRUPTED BTRFS?

2016-04-21 Thread Qu Wenruo
lenovomi wrote on 2016/04/21 08:53 +0200: Hello Liu, please find both files stored here: https://drive.google.com/folderview?id=0B6RZ_9vVuTEcMDV6eGNmRlZ0ZjQ=sharing Thanks On Thu, Apr 21, 2016 at 7:27 AM, Liu Bo wrote: On Wed, Apr 20, 2016 at 10:09:07PM +0200,

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Anand Jain
On 04/21/2016 01:15 PM, Matthias Bodenbinder wrote: Am 20.04.2016 um 15:32 schrieb Anand Jain: 1. mount the raid1 (2 disc with different size) 2. unplug the biggest drive (hotplug) Btrfs won't know that you have plugged-out a disk. Though it experiences IO failures, it won't close
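
Those IO failures are at least visible in the per-device error counters; a small example, with the mount point illustrative:

  $ btrfs device stats /mnt                        # write_io_errs, read_io_errs, flush_io_errs, ...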

Re: KERNEL PANIC + CORRUPTED BTRFS?

2016-04-21 Thread lenovomi
I ran: od -x /tmp/corrupt-dev2.txt > a; od -x /tmp/corrupt-dev1.txt > b; cmp a b; diff a b; it looks like both files are identical, which means that both metadata files got corrupted? thanks On Thu, Apr 21, 2016 at 8:53 AM, lenovomi wrote: > Hello Liu, > > please find both files
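
The same check works directly on the raw dumps, without going through od (file names as in the message above):

  $ cmp /tmp/corrupt-dev1.txt /tmp/corrupt-dev2.txt && echo identical
  $ cmp -l /tmp/corrupt-dev1.txt /tmp/corrupt-dev2.txt | wc -l   # count of differing bytes, 0 if identical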

Re: KERNEL PANIC + CORRUPTED BTRFS?

2016-04-21 Thread lenovomi
Hello Liu, please find both files stored here: https://drive.google.com/folderview?id=0B6RZ_9vVuTEcMDV6eGNmRlZ0ZjQ=sharing Thanks On Thu, Apr 21, 2016 at 7:27 AM, Liu Bo wrote: > On Wed, Apr 20, 2016 at 10:09:07PM +0200, lenovomi wrote: >> Hi Chris, >> >> please

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/20 14:17, Matthias Bodenbinder wrote: Am 18.04.2016 um 09:22 schrieb Qu Wenruo: BTW, it would be better to post the dmesg for better debug. So here we. I did the same test again. Here is a full log of what I did. It seems to me like a bug in btrfs. Sequence of events: 1. mount

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
Liu Bo wrote on 2016/04/20 23:02 -0700: On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote: Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200: Am 20.04.2016 um 09:25 schrieb Qu Wenruo: Unfortunately, this is the designed behavior. The fs is rw just because it doesn't hit any

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Liu Bo
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote: > > > Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200: > >Am 20.04.2016 um 09:25 schrieb Qu Wenruo: > > > >> > >>Unfortunately, this is the designed behavior. > >> > >>The fs is rw just because it doesn't hit any critical problem. >