On Thu, Apr 28, 2016 at 7:09 AM, Matthias Bodenbinder wrote:
> On 26.04.2016 at 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
>> this drivebay. The command lsusb -v could show that. So your HW
>> setup is like JBOD,
Gareth Pye posted on Thu, 28 Apr 2016 15:24:51 +1000 as excerpted:
> PDF doc info dates it at 23/1/2013, which is the best guess that can
> easily be found.
Well, "easily" is relative, but motivated by your observation I first
confirmed it, then decided to see what google had to say about the
On 26.04.2016 at 18:42, Holger Hoffstätte wrote:
> On 04/26/16 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
>> this drivebay. The command lsusb -v could show that. So your HW
>> setup is like JBOD, not RAID.
>
> I hate to quote the
On 26.04.2016 at 18:19, Henk Slager wrote:
> It looks like a JMS567 + SATA port multipliers behind it are used in
> this drivebay. The command lsusb -v could show that. So your HW
> setup is like JBOD, not RAID.
Here is the output of lsusb -v:
Bus 003 Device 004: ID 152d:0567 JMicron
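For reference, a quick way to dump just that bridge's descriptors and to map the bay's slots to kernel device names; a minimal sketch using the 152d:0567 ID from the output above (the column sets are standard util-linux lsblk options):

# Full descriptors for the JMicron bridge only
lsusb -v -d 152d:0567

# Match kernel names against drive models and transports
lsblk -o NAME,MODEL,SIZE,TRAN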
On 04/26/16 18:19, Henk Slager wrote:
>> It looks like a JMS567 + SATA port multipliers behind it are used in
> this drivebay. The command lsusb -v could show that. So your HW
> setup is like JBOD, not RAID.
I hate to quote the "harmful" trope, but...
SATA Port Multipliers Considered Harmful
On Thu, Apr 21, 2016 at 7:27 PM, Matthias Bodenbinder wrote:
> On 21.04.2016 at 13:28, Henk Slager wrote:
>>> Can anyone explain this behavior?
>>
>> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
>> this test. What is on WD20 is unclear to me, but
On Sat, Apr 23, 2016 at 9:07 AM, Matthias Bodenbinder wrote:
>
> Here is my newest test. The backports provide a 4.5 kernel:
>
>
> kernel: 4.5.0-0.bpo.1-amd64
> btrfs-tools: 4.4-1~bpo8+1
>
>
> This time the raid1 is automatically unmounted after I unplug the
On 2016/04/23 16:07, Matthias Bodenbinder wrote:
Here is my newest test. The backports provide a 4.5 kernel:
kernel: 4.5.0-0.bpo.1-amd64
btrfs-tools: 4.4-1~bpo8+1
This time the raid1 is automatically unmounted after I unplug the device and it
cannot be mounted while the device is
On 23.04.2016 at 09:07, Matthias Bodenbinder wrote:
> 14# mount /mnt/raid1/
> mount: wrong fs type, bad option, bad superblock on /dev/sdh,
>        missing codepage or helper program, or other error
>
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so.
>
Here is my newest test. The backports provide a 4.5 kernel:
kernel: 4.5.0-0.bpo.1-amd64
btrfs-tools: 4.4-1~bpo8+1
This time the raid1 is automatically unmounted after I unplug the device and it
cannot be mounted while the device is missing. See below.
Matthias
1) turn on the
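For what it's worth, a degraded btrfs raid1 generally has to be mounted with -o degraded rather than a plain mount; a minimal sketch reusing the /dev/sdh and /mnt/raid1 paths from the log above:

# A plain mount fails while a member is missing; request degraded mode explicitly
mount -o degraded /dev/sdh /mnt/raid1

# Then check what btrfs reports for the array, and the kernel log
btrfs filesystem show /mnt/raid1
dmesg | tail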
On 2016/04/22 14:32, Qu Wenruo wrote:
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900:
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22
Matthias Bodenbinder wrote on 2016/04/21 19:40 +0200:
On 21.04.2016 at 07:43, Qu Wenruo wrote:
There are already unmerged patches which will partly implement the mdadm-level
behavior, like automatically changing to degraded mode without making the fs RO.
The original patchset:
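Until such patches are merged, the manual recovery path looks roughly like the sketch below (device paths and the devid are placeholders; it assumes the filesystem is already mounted with -o degraded):

# Either drop the missing member, letting btrfs restore redundancy
# on the remaining devices...
btrfs device remove missing /mnt/raid1

# ...or replace it in place, passing the missing device's devid
# (look it up with 'btrfs filesystem show')
btrfs replace start -B 2 /dev/sdd /mnt/raid1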
Satoru Takeuchi wrote on 2016/04/22 11:21 +0900:
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be
On 2016/04/21 20:58, Qu Wenruo wrote:
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here
On 21.04.2016 at 07:43, Qu Wenruo wrote:
> There are already unmerged patches which will partly implement the mdadm-level
> behavior, like automatically changing to degraded mode without making the fs RO.
>
> The original patchset:
> http://comments.gmane.org/gmane.comp.file-systems.btrfs/48335
The
On 21.04.2016 at 13:28, Henk Slager wrote:
>> Can anyone explain this behavior?
>
> All 4 drives (WD20, WD75, WD50, SP2504C) get a disconnect twice in
> this test. What is on WD20 is unclear to me, but the raid1 array is
> {WD75, WD50, SP2504C}
> So the test as described by Matthias is not what
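One way to double-check which kernel name belongs to which of those drive models during such a test; a small sketch (nothing here is specific to this setup):

# Map kernel names to drive models and serial numbers
lsblk -o NAME,MODEL,SERIAL,SIZE

# Or read the stable by-id symlinks, which encode model and serial
ls -l /dev/disk/by-id/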
On 04/21/2016 03:45 PM, Satoru Takeuchi wrote:
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is
On Thu, Apr 21, 2016 at 8:23 AM, Satoru Takeuchi wrote:
> On 2016/04/20 14:17, Matthias Bodenbinder wrote:
>>
>> On 18.04.2016 at 09:22, Qu Wenruo wrote:
>>>
>>> BTW, it would be better to post the dmesg for better debug.
>>
>>
>> So here we go. I did the same test
On 2016-04-21 02:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I
did. It seems to me like
On 2016/04/21 15:23, Satoru Takeuchi wrote:
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me
On 04/21/2016 01:15 PM, Matthias Bodenbinder wrote:
On 20.04.2016 at 15:32, Anand Jain wrote:
1. mount the raid1 (2 discs with different sizes)
2. unplug the biggest drive (hotplug)
Btrfs won't know that you have unplugged a disk.
Though it experiences IO failures, it won't close
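The contrast is easier to see when the disk is detached through sysfs instead of physically yanked; a sketch, with sdX standing in for the drive to be pulled:

# Ask the kernel to detach the disk cleanly; unlike a surprise unplug,
# this tears down the block device in an orderly way
echo 1 > /sys/block/sdX/device/delete

# Then compare what btrfs reports for the array
btrfs filesystem show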
On 2016/04/20 14:17, Matthias Bodenbinder wrote:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
1. mount
Liu Bo wrote on 2016/04/20 23:02 -0700:
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
Am 20.04.2016 um 09:25 schrieb Qu Wenruo:
Unfortunately, this is the designed behavior.
The fs is rw just because it doesn't hit any
On Thu, Apr 21, 2016 at 01:43:56PM +0800, Qu Wenruo wrote:
>
>
> Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
> > On 20.04.2016 at 09:25, Qu Wenruo wrote:
> >
> >> Unfortunately, this is the designed behavior.
> >>
> >> The fs is rw just because it doesn't hit any critical problem.
>
Matthias Bodenbinder wrote on 2016/04/21 07:22 +0200:
Am 20.04.2016 um 09:25 schrieb Qu Wenruo:
Unfortunately, this is the designed behavior.
The fs is rw just because it doesn't hit any critical problem.
If you try to touch a file and then sync the fs, btrfs will become RO
immediately.
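That sequence is easy to reproduce; a minimal sketch against the thread's mount point:

# Trigger a write and force the transaction to commit
touch /mnt/raid1/probe
sync

# The aborted transaction should flip the fs read-only; check both
dmesg | tail
mount | grep raid1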
On 20.04.2016 at 09:25, Qu Wenruo wrote:
>
> Unfortunately, this is the designed behavior.
>
> The fs is rw just because it doesn't hit any critical problem.
>
> If you try to touch a file and then sync the fs, btrfs will become RO
> immediately.
>
> Btrfs fails to read space cache,
On 20.04.2016 at 15:32, Anand Jain wrote:
>> 1. mount the raid1 (2 discs with different sizes)
>
>> 2. unplug the biggest drive (hotplug)
>
> Btrfs won't know that you have unplugged a disk.
> Though it experiences IO failures, it won't close the bdev.
Well, as far as I can tell mdadm can
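For comparison, the mdadm behavior referred to here looks roughly like this (the md device name and member paths are placeholders):

# md keeps running degraded and reports the failed member
mdadm --detail /dev/md0

# The failed disk can be dropped and a replacement added online
mdadm /dev/md0 --fail /dev/sdX --remove /dev/sdX
mdadm /dev/md0 --add /dev/sdY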
1. mount the raid1 (2 discs with different sizes)
2. unplug the biggest drive (hotplug)
Btrfs won't know that you have unplugged a disk.
Though it experiences IO failures, it won't close the bdev.
3. try to copy something to the degraded raid1
This will work as long as you do
Matthias Bodenbinder wrote on 2016/04/20 07:17 +0200:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
On 18.04.2016 at 09:22, Qu Wenruo wrote:
> BTW, it would be better to post the dmesg for better debug.
So here we go. I did the same test again. Here is a full log of what I did. It
seems to me like a bug in btrfs.
Sequence of events:
1. mount the raid1 (2 discs with different sizes)
2. unplug
Not quite sure about raid1 behavior.
But your "hotplug" seems to be the problem.
IIRC Btrfs is known to have problems with re-appearing devices.
If the hot-removed device is fully wiped before being re-plugged, it should
not cause the RO mount (abort transaction).
BTW, it would be better to post the
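Fully wiping the member before re-plugging it, as suggested, could be done along these lines (sdX is a placeholder; this destroys all filesystem signatures on that disk):

# Remove the btrfs signature so the disk cannot re-appear as a stale member
wipefs -a /dev/sdX

# Verify nothing is left
wipefs /dev/sdX
blkid /dev/sdX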
Hi,
I have a raid1 with 3 drives: 698, 465 and 232 GB. I copied 1.7 GB of data to that
raid1, balanced the filesystem and then removed the biggest drive (hotplug).
The data was still available. Now I copied the /root directory to the raid1. It
showed up via ls -l. Then I plugged in the missing
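For anyone trying to reproduce, the setup described boils down to roughly this sketch (device names are invented for illustration):

# Three-disk btrfs raid1 for data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sdf /dev/sdg /dev/sdh
mount /dev/sdf /mnt/raid1

# Copy some data, then rebalance it across the members
cp -a /root /mnt/raid1/
btrfs balance start /mnt/raid1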