Károly Pados posted on Thu, 23 Mar 2017 14:07:31 + as excerpted:
[Kernel 4.9.13, progs 4.9.1:
1) mkfs.btrfs a two-device raid1 data/metadata btrfs and mount it.
Don't put any data on it.
2) Remove a device physically or at the block level.
3) Remount degraded and balance-convert data to single.
In the following situation, scrub will calculate wrong parity to
overwrite correct one:
RAID5 full stripe:

Before
|     Dev 1     |     Dev 2     |     Dev 3     |
| Data stripe 1 | Data stripe 2 | Parity Stripe |
------------------------------------------------- 0
|   0x (Bad)    |      0x
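The invariant scrub must preserve is that RAID5 parity is the XOR of the data stripes. A minimal Python sketch (the byte values are illustrative, not taken from a real reproducer) shows why recomputing parity from a corrupted data stripe destroys the good copy:

```python
# RAID5 parity sketch: parity is the byte-wise XOR of the data stripes.
def parity(*stripes: bytes) -> bytes:
    out = bytearray(len(stripes[0]))
    for stripe in stripes:
        for i, b in enumerate(stripe):
            out[i] ^= b
    return bytes(out)

good_d1 = bytes([0xcd] * 4)
good_d2 = bytes([0xcd] * 4)
good_p = parity(good_d1, good_d2)      # correct parity as written on disk

bad_d1 = bytes([0x00] * 4)             # data stripe 1 gets corrupted

# A buggy scrub that recomputes parity from the corrupted data stripe
# overwrites the correct parity, making the corruption unrecoverable:
wrong_p = parity(bad_d1, good_d2)
assert wrong_p != good_p

# While the correct parity is still intact, the bad stripe is recoverable
# by XOR-ing parity with the surviving data stripe:
recovered_d1 = parity(good_p, good_d2)
assert recovered_d1 == good_d1
```

This is only the single-parity (RAID5) case; RAID6 adds a second, Galois-field parity, but the overwrite-good-parity failure mode is the same.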
This patchset can be fetched from my github repo:
https://github.com/adam900710/linux.git raid56_fixes
It's based on v4.11-rc2, the last two patches get modified according to
the advice from Liu Bo.
The patchset fixes the following bugs:
1) False alert or wrong csum error number when scrubbing RAID5/6
When scrubbing a RAID5 which has recoverable data corruption (only one
data stripe is corrupted), sometimes scrub will report more csum errors
than expected. Sometimes even unrecoverable error will be reported.
The problem can be easily reproduced by the following steps:
1) Create a btrfs with RAID5
Unlike other places that call btrfs_map_block(), raid56 scrub does not
use bio_counter to protect against races with dev replace.
This patch uses bio_counter for protection, from the call to
btrfs_map_sblock() until rbio endio.
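The bio_counter pattern described above is, at heart, a reference count that dev replace drains before freeing the target device. A rough userspace analogy in Python (class and method names are invented for illustration, not kernel API):

```python
import threading

class BioCounter:
    """Sketch of the kernel's bio_counter idea: I/O paths take a
    reference before mapping/submitting, drop it at endio, and the
    dev-replace path blocks until all in-flight I/O has drained."""

    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def inc(self):                  # before btrfs_map_sblock()
        with self._cond:
            self._count += 1

    def dec(self):                  # at rbio endio
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._cond.notify_all()

    def wait_for_zero(self):        # dev replace: now safe to free target
        with self._cond:
            self._cond.wait_for(lambda: self._count == 0)

ctr = BioCounter()
ctr.inc()                           # scrub submits an rbio
t = threading.Thread(target=ctr.dec)  # endio fires on another thread
t.start()
ctr.wait_for_zero()                 # returns only after the dec
t.join()
```

The point of holding the reference across the whole btrfs_map_sblock()-to-endio window is that the device pointers captured at mapping time stay valid for the life of the I/O.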
Liu Bo
Signed-off-by: Qu Wenruo
---
fs/btrfs/raid56.c
When dev replace is cancelled by a running scrub, we free the target
device without waiting for in-flight bios, causing the following NULL
pointer dereference or general protection fault.
BUG: unable to handle kernel NULL pointer dereference at 05e0
IP: generic_make_request_checks+0x4d/0x610
CPU
Unlike mirror based profiles, RAID5/6 recovery needs to read out the
whole full stripe.
Without proper protection, this can easily cause a race condition.
Introduce two new functions for RAID5/6, lock_full_stripe() and
unlock_full_stripe(), which keep an rb_tree of mutexes for full
stripes, so s
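The locking scheme described above can be sketched in userspace Python; a dict keyed by full-stripe start stands in for the kernel's rb_tree of mutexes, and all names here are invented for illustration:

```python
import threading

class FullStripeLocks:
    """Per-full-stripe locking sketch: every logical address inside one
    full stripe maps to the same mutex, so scrub and writers serialize
    on whole full stripes rather than on single blocks."""

    def __init__(self, full_stripe_len: int):
        self.full_stripe_len = full_stripe_len
        self._table_lock = threading.Lock()   # protects the lookup table
        self._locks = {}                      # full-stripe start -> mutex

    def _mutex_for(self, logical: int) -> threading.Lock:
        start = logical - (logical % self.full_stripe_len)
        with self._table_lock:
            return self._locks.setdefault(start, threading.Lock())

    def lock_full_stripe(self, logical: int):
        self._mutex_for(logical).acquire()

    def unlock_full_stripe(self, logical: int):
        self._mutex_for(logical).release()

# Two data stripes of 64K each -> a 128K full stripe (assumed geometry).
locks = FullStripeLocks(full_stripe_len=128 * 1024)
locks.lock_full_stripe(4096)
# 8192 falls inside the same full stripe, so this releases the same mutex:
locks.unlock_full_stripe(8192)
```

The kernel version additionally has to handle blocking vs. non-blocking callers and free table entries when the last holder drops out; the sketch keeps only the address-to-mutex mapping.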
On Thu, Mar 23, 2017 at 07:23:40AM -0400, Austin S. Hemmelgarn wrote:
> On 2017-03-23 06:09, Hugo Mills wrote:
> >Direct rename (using rename(2)) isn't possible across subvols,
> > which is what the EXDEV result indicates. The solution is exactly what
> > mv does, which is reflink-and-delete (w
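The fallback mv performs can be mimicked directly: try rename(2) first, and on EXDEV fall back to copy-and-delete. A Python sketch (ordinary directories stand in for subvolumes here, so the demo exercises the rename path):

```python
import errno
import os
import shutil
import tempfile

def move(src: str, dst: str) -> None:
    """Try rename(2) first; on EXDEV (crossing a subvolume or
    filesystem boundary) fall back to copy-and-delete, which
    reflink-aware tools on btrfs implement as a clone plus unlink."""
    try:
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dst)   # a reflink-capable copy would clone here
        os.unlink(src)

# Demo on one filesystem (same directory), so plain rename succeeds:
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "a.txt"), os.path.join(d, "b.txt")
with open(src, "w") as f:
    f.write("data")
move(src, dst)
```

shutil.move in the standard library already implements exactly this try-rename-then-copy strategy, which is why `mv` across subvolumes works even though rename(2) alone returns EXDEV.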
On 3/19/17 11:00 AM, Ilan Schwarts wrote:
> Hi,
> sorry if this is a newbie question. I am newbie.
>
> In my kernel driver, I get the device id by converting a struct inode
> to a struct btrfs_inode, using:
> struct btrfs_inode *btrfsInode;
> btrfsInode = BTRFS_I(inode);
>
> I usually download k
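For comparison with the in-kernel BTRFS_I() approach above, the userspace analogue of a per-file device id is st_dev from stat(2); note (a btrfs-specific wrinkle relevant to this thread) that each subvolume reports its own anonymous st_dev, so two files on one btrfs filesystem can return different device numbers:

```python
import os

# Userspace view of a file's device id: stat(2) returns it in st_dev.
# On btrfs, each subvolume gets its own anonymous device number, so
# st_dev alone does not identify the underlying filesystem.
st = os.stat(".")
major, minor = os.major(st.st_dev), os.minor(st.st_dev)
print(f"device {major}:{minor}")
```

This is only the userspace side; inside the kernel the thread's BTRFS_I(inode) conversion is the way to reach btrfs-specific inode state.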
Hi Linus
We have a small set of fixes for the next RC:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.11
Zygo tracked down a very old bug with inline compressed extents.
I didn't tag this one for stable because I want to do individual tested
backports. It's a
Summary: Trying to remove a missing device tells me there are no missing
devices, but obviously there is one
kernel: 4.9.13
btrfs-progs: 4.9.1
Steps to reproduce:
1) Create a raid1 array:
mkfs.btrfs -f -d raid1 -m raid1 /dev/sda /dev/sdb
mkdir /mnt/volatile
mount -t btrfs /dev/sda /mnt/volatile
On Wed, Mar 22, 2017 at 10:37:23PM -0700, Sean Greenslade wrote:
> Hello, all. I'm currently tracking down the source of some strange
> behavior in my setup. I recognize that this isn't strictly a btrfs
> issue, but I figured I'd start at the bottom of the stack and work my
> way up.
>
> I have a
Hello, all. I'm currently tracking down the source of some strange
behavior in my setup. I recognize that this isn't strictly a btrfs
issue, but I figured I'd start at the bottom of the stack and work my
way up.
I have a server with a btrfs filesystem on it that I remotely access on
several system