Re: Question: raid1 behaviour on failure

2016-04-28 Thread Henk Slager
On Thu, Apr 28, 2016 at 7:09 AM, Matthias Bodenbinder wrote: > On 26.04.2016 at 18:19, Henk Slager wrote: >> It looks like a JMS567 + SATA port multipliers behind it are used in >> this drivebay. The command lsusb -v could show that. So your HW >> setup is like JBOD,
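
For readers checking their own enclosure, a minimal sketch of the inspection Henk suggests; the 152d:0567 ID is the JMicron bridge reported later in this thread, other enclosures will differ:

   lsusb -t                       # USB topology, including whether the uas or usb-storage driver is bound
   lsusb -v -d 152d:0567 | less   # full descriptors for the JMS567 bridge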

Re: Question: raid1 behaviour on failure

2016-04-28 Thread Duncan
Gareth Pye posted on Thu, 28 Apr 2016 15:24:51 +1000 as excerpted: > PDF doc info dates it at 23/1/2013, which is the best guess that can > easily be found. Well, "easily" is relative, but motivated by your observation I first confirmed it, then decided to see what google had to say about the

Re: Question: raid1 behaviour on failure

2016-04-27 Thread Gareth Pye
PDF doc info dates it at 23/1/2013, which is the best guess that can easily be found.

Re: Question: raid1 behaviour on failure

2016-04-27 Thread Matthias Bodenbinder
On 26.04.2016 at 18:42, Holger Hoffstätte wrote: > On 04/26/16 18:19, Henk Slager wrote: >> It looks like a JMS567 + SATA port multipliers behind it are used in >> this drivebay. The command lsusb -v could show that. So your HW >> setup is like JBOD, not RAID. > > I hate to quote the

Re: Question: raid1 behaviour on failure

2016-04-27 Thread Matthias Bodenbinder
On 26.04.2016 at 18:19, Henk Slager wrote: > It looks like a JMS567 + SATA port multipliers behind it are used in > this drivebay. The command lsusb -v could show that. So your HW > setup is like JBOD, not RAID. Here is the output of lsusb -v: Bus 003 Device 004: ID 152d:0567 JMicron

Re: Question: raid1 behaviour on failure

2016-04-26 Thread Holger Hoffstätte
On 04/26/16 18:19, Henk Slager wrote: > It looks like a JMS567 + SATA port multipliers behind it are used in > this drivebay. The command lsusb -v could show that. So your HW > setup is like JBOD, not RAID. I hate to quote the "harmful" trope, but.. SATA Port Multipliers Considered Harmful

Re: Question: raid1 behaviour on failure

2016-04-26 Thread Henk Slager
On Thu, Apr 21, 2016 at 7:27 PM, Matthias Bodenbinder wrote: > On 21.04.2016 at 13:28, Henk Slager wrote: >>> Can anyone explain this behavior? >> >> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in >> this test. What is on WD20 is unclear to me, but

Re: Question: raid1 behaviour on failure

2016-04-26 Thread Henk Slager
On Sat, Apr 23, 2016 at 9:07 AM, Matthias Bodenbinder wrote: > > Here is my newest test. The backports provide a 4.5 kernel: > > > kernel: 4.5.0-0.bpo.1-amd64 > btrfs-tools: 4.4-1~bpo8+1 > > > This time the raid1 is automatically unmounted after I unplug the

Re: Question: raid1 behaviour on failure

2016-04-26 Thread Satoru Takeuchi
On 2016/04/23 16:07, Matthias Bodenbinder wrote: Here is my newest test. The backports provide a 4.5 kernel: kernel: 4.5.0-0.bpo.1-amd64 btrfs-tools: 4.4-1~bpo8+1 This time the raid1 is automatically unmounted after I unplug the device and it can not be mounted while the device is

Re: Question: raid1 behaviour on failure

2016-04-23 Thread Matthias Bodenbinder
On 23.04.2016 at 09:07, Matthias Bodenbinder wrote: > 14# mount /mnt/raid1/ > mount: wrong fs type, bad option, bad superblock on /dev/sdh, > missing codepage or helper program, or other error > > In some cases useful info is found in syslog - try > dmesg | tail or so. >
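
A hedged sketch of the usual next steps when a btrfs raid1 member is missing; the mount point comes from the quoted test, the device name is a placeholder:

   dmesg | tail -n 30                        # the btrfs error behind the generic mount failure
   mount -o degraded /dev/sdX /mnt/raid1/    # mount via any remaining member; a plain mount refuses while a device is missing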

Re: Question: raid1 behaviour on failure

2016-04-23 Thread Matthias Bodenbinder
Here is my newest test. The backports provide a 4.5 kernel: kernel: 4.5.0-0.bpo.1-amd64 btrfs-tools: 4.4-1~bpo8+1 This time the raid1 is automatically unmounted after I unplug the device and it can not be mounted while the device is missing. See below. Matthias 1) turn on the
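
To reproduce this test, it helps to confirm the versions in play first; the expected output below is simply the versions quoted above:

   uname -r          # 4.5.0-0.bpo.1-amd64 in this test
   btrfs --version   # btrfs-progs v4.4 here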

Re: Question: raid1 behaviour on failure

2016-04-22 Thread Satoru Takeuchi
On 2016/04/22 14:32, Qu Wenruo wrote: Satoru Takeuchi wrote on 2016/04/22 11:21 +0900: On 2016/04/21 20:58, Qu Wenruo wrote: On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22

Re: Question: raid1 behaviour on failure

2016-04-22 Thread Qu Wenruo
Matthias Bodenbinder wrote on 2016/04/21 19:40 +0200: On 21.04.2016 at 07:43, Qu Wenruo wrote: There are already unmerged patches which will partly do the mdadm-level behavior, like automatically changing to degraded mode without making the fs RO. The original patchset:

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900: On 2016/04/21 20:58, Qu Wenruo wrote: On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/21 20:58, Qu Wenruo wrote: On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Matthias Bodenbinder
On 21.04.2016 at 07:43, Qu Wenruo wrote: > There are already unmerged patches which will partly do the mdadm-level > behavior, like automatically changing to degraded mode without making the fs RO. > > The original patchset: > http://comments.gmane.org/gmane.comp.file-systems.btrfs/48335 The

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Matthias Bodenbinder
On 21.04.2016 at 13:28, Henk Slager wrote: >> Can anyone explain this behavior? > > All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in > this test. What is on WD20 is unclear to me, but the raid1 array is > {WD75, WD50, SP2504C} > So the test as described by Matthias is not what
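
To spot this kind of repeated disconnect in one's own logs, a rough sketch; the grep patterns are generic kernel messages, not taken from Matthias's log:

   dmesg -T | grep -iE 'usb.*disconnect|i/o error'   # USB disconnects and the resulting block-layer errors
   journalctl -k | grep -i disconnect                # the same, via the journal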

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote: On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Henk Slager
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi wrote: > On 2016/04/20 14:17, Matthias Bodenbinder wrote: >> >> On 18.04.2016 at 09:22, Qu Wenruo wrote: >>> >>> BTW, it would be better to post the dmesg for better debugging. >> >> >> So here we go. I did the same test

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Austin S. Hemmelgarn
On 2016-04-21 02:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is a full log of what I did. It seems to me like

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/21 15:23, Satoru Takeuchi wrote: On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is a full log of what I did. It seems to me

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Anand Jain
On 04/21/2016 01:15 PM, Matthias Bodenbinder wrote: On 20.04.2016 at 15:32, Anand Jain wrote: 1. mount the raid1 (2 discs with different sizes) 2. unplug the biggest drive (hotplug) Btrfs won't know that you have unplugged a disk. Though it experiences IO failures, it won't close
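
A hedged way to see Anand's point in practice: after the unplug, the kernel's view of the filesystem can still list the device. Paths are placeholders:

   btrfs filesystem show /mnt/raid1      # for a mounted fs this queries the kernel, which may still list the unplugged device
   ls -l /sys/fs/btrfs/<FSID>/devices/   # btrfs's internal device list for that filesystem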

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Satoru Takeuchi
On 2016/04/20 14:17, Matthias Bodenbinder wrote: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is a full log of what I did. It seems to me like a bug in btrfs. Sequence of events: 1. mount

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Qu Wenruo
Liu Bo wrote on 2016/04/20 23:02 -0700: On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote: Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200: On 20.04.2016 at 09:25, Qu Wenruo wrote: Unfortunately, this is the designed behavior. The fs is rw just because it doesn't hit any

Re: Question: raid1 behaviour on failure

2016-04-21 Thread Liu Bo
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote: > > > Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200: > > On 20.04.2016 at 09:25, Qu Wenruo wrote: > > > >> > >> Unfortunately, this is the designed behavior. > >> > >> The fs is rw just because it doesn't hit any critical problem. >

Re: Question: raid1 behaviour on failure

2016-04-20 Thread Qu Wenruo
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200: On 20.04.2016 at 09:25, Qu Wenruo wrote: Unfortunately, this is the designed behavior. The fs is rw just because it doesn't hit any critical problem. If you try to touch a file and then sync the fs, btrfs will become RO immediately.

Re: Question: raid1 behaviour on failure

2016-04-20 Thread Matthias Bodenbinder
On 20.04.2016 at 09:25, Qu Wenruo wrote: > > Unfortunately, this is the designed behavior. > > The fs is rw just because it doesn't hit any critical problem. > > If you try to touch a file and then sync the fs, btrfs will become RO > immediately. > > Btrfs fails to read space cache,
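
A minimal way to reproduce what Qu describes, assuming the degraded raid1 is still mounted at /mnt/raid1; the file name is arbitrary:

   touch /mnt/raid1/ro-test   # first write after the device loss
   sync                       # the forced transaction commit is what flips the fs to read-only
   mount | grep raid1         # the mount should now show up as ro
   dmesg | tail -n 20         # look for 'forced readonly' / transaction abort messages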

Re: Question: raid1 behaviour on failure

2016-04-20 Thread Matthias Bodenbinder
On 20.04.2016 at 15:32, Anand Jain wrote: >> 1. mount the raid1 (2 discs with different sizes) > >> 2. unplug the biggest drive (hotplug) > > Btrfs won't know that you have unplugged a disk. > Though it experiences IO failures, it won't close the bdev. Well, as far as I can tell mdadm can

Re: Question: raid1 behaviour on failure

2016-04-20 Thread Anand Jain
1. mount the raid1 (2 discs with different sizes) 2. unplug the biggest drive (hotplug) Btrfs won't know that you have unplugged a disk. Though it experiences IO failures, it won't close the bdev. 3. try to copy something to the degraded raid1. This will work as long as you do
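
A rough sketch for watching the IO failures Anand mentions accumulate while copying; the mount point is a placeholder:

   btrfs device stats /mnt/raid1              # write_io_errs / flush_io_errs climb on the unplugged member
   watch -n 5 btrfs device stats /mnt/raid1   # repeat while data is being copied to the degraded array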

Re: Question: raid1 behaviour on failure

2016-04-20 Thread Qu Wenruo
Matthias Bodenbinder wrote on 2016/04/20 07:17 +0200: On 18.04.2016 at 09:22, Qu Wenruo wrote: BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is a full log of what I did. It seems to me like a bug in btrfs. Sequence of events:

Re: Question: raid1 behaviour on failure

2016-04-19 Thread Matthias Bodenbinder
On 18.04.2016 at 09:22, Qu Wenruo wrote: > BTW, it would be better to post the dmesg for better debugging. So here we go. I did the same test again. Here is a full log of what I did. It seems to me like a bug in btrfs. Sequence of events: 1. mount the raid1 (2 discs with different sizes) 2. unplug

Re: Question: raid1 behaviour on failure

2016-04-18 Thread Qu Wenruo
Not quite sure about raid1 behavior. But your "hotplug" seems to be the problem. IIRC Btrfs is known to have problems with re-appearing devices. If the hot-removed device is fully wiped before being re-plugged, it should not cause the RO mount (abort transaction). BTW, it would be better to post the
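
A hedged sketch of the workaround Qu hints at, plus one plausible way to re-add the disk afterwards (the re-add steps are an assumption, not from this mail); /dev/sdX is a placeholder and wipefs destroys the filesystem signature on that device:

   wipefs -a /dev/sdX                     # clear the btrfs signature on the pulled disk before re-plugging it
   btrfs device add /dev/sdX /mnt/raid1   # then re-add it as a fresh device
   btrfs balance start /mnt/raid1         # and rebalance so both raid1 copies exist again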

Question: raid1 behaviour on failure

2016-04-17 Thread Matthias Bodenbinder
Hi, I have a raid1 with 3 drives: 698, 465 and 232 GB. I copied 1.7 GB of data to that raid1, balanced the filesystem and then removed the biggest drive (hotplug). The data was still available. Now I copied the /root directory to the raid1. It showed up via ls -l. Then I plugged in the missing
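
For context, a hedged sketch of how such a three-device btrfs raid1 is typically created and exercised; device names and paths are placeholders, not taken from the report:

   mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY /dev/sdZ   # metadata and data both raid1 across three drives
   mount /dev/sdX /mnt/raid1
   cp -a /some/data /mnt/raid1/
   btrfs balance start /mnt/raid1                            # redistribute chunks across the devices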