Mhmhm, I'm replying to myself. Again.
https://kerneltrap.org/mailarchive/linux-kernel/2007/4/10/75875

NeilBrown:
"...
dm doesn't know that md/raid1 has just changed max_sectors and there
is no convenient way for it to find out.  So when the filesystem tries
to get the max_sectors for the dm device, it gets the value that dm set
up when it was created, which was somewhat larger than one page.
When the request gets down to the raid1 layer, it caused a problem.
..."

This seems to be exactly the issue I see. For whatever reason, Reiser queries
dm for max_sectors and receives the value that was still valid back when the
disk was around. (248 seems to have been chosen because it is < 256 and
divisible by 8; see this ancient thread:
http://lkml.indiana.edu/hypermail/linux/kernel/0303.1/0880.html )
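For what it's worth, the 248 follows directly from that rationale (stay below 256 sectors, stay aligned to 8 sectors / 4 KiB). A tiny sketch of the arithmetic; the helper name is made up:

#include <stdio.h>

/* Hypothetical helper: round a sector limit down to a multiple of 8
 * sectors (4 KiB). 255 rounds down to 248, the value in the bug title. */
static unsigned int usable_max_sectors(unsigned int queue_limit)
{
	return queue_limit & ~7u;
}

int main(void)
{
	printf("%u\n", usable_max_sectors(255)); /* prints 248 */
	printf("%u\n", usable_max_sectors(200)); /* prints 200 */
	return 0;
}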

Various solutions come to mind:
a) Gracefully handle the issue, i.e. split the request once it no longer fits
within the limit (a rough sketch of a1 follows below).
  a1) at the caller
  a2) at the callee
b) Have LVM query max_sectors before every (?) request it sends through to the
device below.
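To make option a1 concrete, here is a sketch of splitting at the caller: an oversized request is chopped into pieces that each fit the current limit. Purely illustrative userspace code, not a proposed kernel patch; names and structure are assumptions:

#include <stdio.h>

/* Sketch of option a1: instead of failing, the caller splits an
 * oversized request into chunks no larger than the device's current
 * max_sectors. */
static void submit_split(unsigned int start_sector, unsigned int nr_sectors,
			 unsigned int max_sectors)
{
	while (nr_sectors) {
		unsigned int chunk = nr_sectors < max_sectors ?
				     nr_sectors : max_sectors;
		printf("submit bio: sector %u, %u sectors\n",
		       start_sector, chunk);
		start_sector += chunk;
		nr_sectors -= chunk;
	}
}

int main(void)
{
	/* The 248-sector request from the bug, split for a 200-sector limit. */
	submit_split(0, 248, 200);
	return 0;
}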

I feel that something is really wrong here.

Please fix.

-- 
Raid1 HDD and SD card -> data corruption (bio too big device md0 (248 > 200))
https://bugs.launchpad.net/bugs/320638