Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Andrei Borzenkov
On 14.08.2016 19:20, Chris Murphy wrote:
>
> As an aside, I'm finding the size information for the data chunk in 'fi us' confusing...
>
> The sample file system contains one file:
> [root@f24s ~]# ls -lh /mnt/0
> total 1.4G
> -rw-r--r--. 1 root root 1.4G Aug 13 19:24
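
For context, 'fi us' is the abbreviated form of the usage summary command. A typical invocation, assuming the filesystem is mounted at /mnt, would be:

    # btrfs filesystem usage /mnt
    # btrfs fi us /mnt          (abbreviated form used in the thread)

It reports allocated versus used space per chunk type (Data, Metadata, System) and per device, which is where the data-chunk size figures under discussion come from.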

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Wolfgang Mader
On Sunday, August 14, 2016 8:04:14 PM CEST you wrote:
> On Sunday, August 14, 2016 10:20:39 AM CEST you wrote:
> > On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader wrote:
> > > Hi,
> > >
> > > I have two questions
> > >
> > > 1) Layout of raid10 in btrfs

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Andrei Borzenkov
On 14.08.2016 19:20, Chris Murphy wrote:
...
> This volume now has about a dozen chunks created by kernel code, and the stripe X to devid Y mapping is identical. Using dd and hexdump, I'm finding that stripes 0 and 1 are mirrored pairs; they contain identical information. And stripes 2 and 3
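
A minimal sketch of that kind of comparison, with placeholder device names and a placeholder offset (the real physical offsets have to be read from the chunk tree first, e.g. with btrfs inspect-internal dump-tree):

    # dd if=/dev/sdb bs=1M skip=1024 count=4 2>/dev/null | hexdump -C > stripe0.hex
    # dd if=/dev/sdc bs=1M skip=1024 count=4 2>/dev/null | hexdump -C > stripe1.hex
    # diff -q stripe0.hex stripe1.hex    (no difference reported if the two copies are mirrored)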

[PATCH v3.1 1/3] btrfs: qgroup: Refactor btrfs_qgroup_insert_dirty_extent()

2016-08-14 Thread Qu Wenruo
Refactor the btrfs_qgroup_insert_dirty_extent() function into two functions:
1. btrfs_qgroup_insert_dirty_extent_nolock()
   Almost the same as the original code; for delayed_ref usage, which already has the delayed refs locked. Change the return value type to int, since the caller never needs the pointer, but
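
The locked/_nolock split follows a common pattern. Below is a small self-contained userspace sketch of that pattern only, with hypothetical names and a pthread mutex standing in for the kernel locking; it is not the actual btrfs code:

    /*
     * Sketch of the locked/_nolock split: the _nolock variant assumes the
     * caller already holds the lock (the delayed-ref path), the plain
     * variant takes the lock itself. Hypothetical illustration only.
     */
    #include <pthread.h>
    #include <stdio.h>

    struct dirty_extent_set {
            pthread_mutex_t lock;
            unsigned long long bytenrs[64];
            int count;
    };

    /* Caller must hold set->lock. Returns 1 if the extent was already recorded. */
    static int insert_dirty_extent_nolock(struct dirty_extent_set *set,
                                          unsigned long long bytenr)
    {
            int i;

            for (i = 0; i < set->count; i++)
                    if (set->bytenrs[i] == bytenr)
                            return 1;
            set->bytenrs[set->count++] = bytenr;
            return 0;
    }

    /* Wrapper for callers that do not hold the lock. */
    static int insert_dirty_extent(struct dirty_extent_set *set,
                                   unsigned long long bytenr)
    {
            int ret;

            pthread_mutex_lock(&set->lock);
            ret = insert_dirty_extent_nolock(set, bytenr);
            pthread_mutex_unlock(&set->lock);
            return ret;
    }

    int main(void)
    {
            struct dirty_extent_set set = { .lock = PTHREAD_MUTEX_INITIALIZER };

            printf("%d\n", insert_dirty_extent(&set, 4096)); /* 0: newly inserted */
            printf("%d\n", insert_dirty_extent(&set, 4096)); /* 1: already present */
            return 0;
    }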

[PATCH v3.1 0/3] Qgroup fix for dirty hack routines

2016-08-14 Thread Qu Wenruo
This patchset contains fixes for a REGRESSION introduced in 4.2. It introduces two fixes for the data extent owner hacks. One can be triggered by balance, the other by log replay after a power loss. The root causes are all similar: the EXTENT_DATA owner is changed by dirty hacks, from

[PATCH v3.1 3/3] btrfs: qgroup: Fix qgroup incorrectness caused by log replay

2016-08-14 Thread Qu Wenruo
When doing log replay at mount time (after a power loss), qgroup will leak the numbers of replayed data extents. The cause is almost the same as with balance, so fix it by manually informing qgroup about owner-changed extents. The bug can be detected by the btrfs/119 test case. Cc: Mark Fasheh

[PATCH v3.1 2/3] btrfs: relocation: Fix leaking qgroups numbers on data extents

2016-08-14 Thread Qu Wenruo
This patch fixes a REGRESSION introduced in 4.2, caused by the big quota rework. When balancing data extents, qgroup will leak all its numbers for relocated data extents. The relocation is done in the following steps for data extents:
1) Create data reloc tree and inode
2) Copy all data extents

Re: btrfs quota issues

2016-08-14 Thread Qu Wenruo
At 08/12/2016 01:32 AM, Rakesh Sankeshi wrote:
> I set a 200GB limit for one user and a 100GB limit for another user. As soon as they reached 139GB and 53GB respectively, they started hitting quota errors. Is there any way to work around the quota functionality on an LZO-compressed btrfs filesystem?
Please paste "btrfs qgroup show -prce "
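
For reference, the requested command prints per-qgroup referenced and exclusive usage; -p/-c add the parent/child qgroup relations and -r/-e add the configured referenced/exclusive limits. A typical invocation, assuming the filesystem is mounted at /mnt:

    # btrfs qgroup show -prce /mnt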

Re: [PATCH v3 2/3] btrfs: relocation: Fix leaking qgroups numbers on data extents

2016-08-14 Thread Qu Wenruo
At 08/12/2016 09:33 PM, Filipe Manana wrote:
> On Tue, Aug 9, 2016 at 9:30 AM, Qu Wenruo wrote:
> > When balancing data extents, qgroup will leak all its numbers for relocated data extents. The relocation is done in the following steps for data extents:
> > 1) Create data reloc tree and inode
> > 2) Copy all data

[PATCH] code cleanup

2016-08-14 Thread Harinath Nampally
This patch checks the ret value and jumps to cleanup in case the btrfs_add_system_chunk call fails.
Signed-off-by: Harinath Nampally
---
 fs/btrfs/volumes.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
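
The change follows the usual check-the-return-value-and-goto-cleanup idiom. A compact, self-contained sketch of that idiom only, with illustrative stand-ins rather than the actual fs/btrfs/volumes.c hunk:

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    /* Stand-in for btrfs_add_system_chunk(): fails on request. */
    static int add_system_chunk(int should_fail)
    {
            return should_fail ? -ENOMEM : 0;
    }

    static int build_chunk(int should_fail)
    {
            void *buf = malloc(4096);       /* resource that must be released */
            int ret;

            if (!buf)
                    return -ENOMEM;

            ret = add_system_chunk(should_fail);
            if (ret)
                    goto out;               /* propagate the error via the cleanup path */

            /* ... further work that uses buf ... */
            ret = 0;
    out:
            free(buf);                      /* single cleanup point for all exit paths */
            return ret;
    }

    int main(void)
    {
            printf("success path: %d\n", build_chunk(0));
            printf("failure path: %d\n", build_chunk(1));
            return 0;
    }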

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Wolfgang Mader
On Sunday, August 14, 2016 10:20:39 AM CEST you wrote:
> On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader wrote:
> > Hi,
> >
> > I have two questions
> >
> > 1) Layout of raid10 in btrfs
> > btrfs pools all devices and then stripes and mirrors across this pool.

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Chris Murphy
On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader wrote:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs
> btrfs pools all devices and then stripes and mirrors across this pool. Is it therefore correct, that a raid10 layout consisting of 4 devices
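
For anyone wanting to reproduce the layout inspection, a setup matching the question (device names are placeholders):

    # mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # mount /dev/sda /mnt
    # btrfs filesystem usage /mnt

Note that btrfs raid10 works at the chunk level: each chunk is striped across mirrored pairs of device extents chosen at allocation time, rather than the devices being bound into fixed mirror pairs as in a conventional raid10.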

Re: memory overflow or undeflow in free space tree / space_info?

2016-08-14 Thread Stefan Priebe - Profihost AG
Hi Josef, is there anything I could do or test? Results with a vanilla next branch are the same. Stefan
On 11.08.2016 at 08:09, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> the backtrace and info on umount look the same:
>
> [241910.341124] [ cut here ]
> [241910.379991]

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Hugo Mills
On Sat, Aug 13, 2016 at 05:39:18PM +0200, Wolfgang Mader wrote:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs
> btrfs pools all devices and then stripes and mirrors across this pool. Is it therefore correct, that a raid10 layout consisting of 4 devices a,b,c,d is _not_

Re: Likelihood of read error, recover device failure raid10

2016-08-14 Thread Duncan
Wolfgang Mader posted on Sat, 13 Aug 2016 17:39:18 +0200 as excerpted:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs
> btrfs pools all devices and then stripes and mirrors across this pool. Is it therefore correct, that a raid10 layout consisting of 4 devices a,b,c,d is

[PATCH v4 10/26] fs: btrfs: Use ktime_get_real_ts for root ctime

2016-08-14 Thread Deepa Dinamani
btrfs_root_item maintains the ctime for root updates. This is not part of vfs_inode. Since current_time() takes a struct inode* as an argument, as Linus suggested, it cannot be used to update root times unless we modify the signature to use an inode. Since btrfs uses nanosecond time granularity, it
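
Schematically, the change described amounts to a fragment along these lines (kernel context, simplified; the btrfs_set_stack_timespec_* setters and the root_item variable are assumed here, so treat this as an illustration rather than the actual hunk):

    struct timespec now;

    ktime_get_real_ts(&now);
    btrfs_set_stack_timespec_sec(&root_item->ctime, now.tv_sec);
    btrfs_set_stack_timespec_nsec(&root_item->ctime, now.tv_nsec);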

Re: btrfs quota issues

2016-08-14 Thread Duncan
Rakesh Sankeshi posted on Fri, 12 Aug 2016 08:47:13 -0700 as excerpted:
> Another question I had was, is there any way to check what the directory/file sizes were prior to compression and how much compression btrfs did, etc? Basically, some stats around compression and/or dedupe from btrfs.

[GIT PULL] [PATCH v4 00/26] Delete CURRENT_TIME and CURRENT_TIME_SEC macros

2016-08-14 Thread Deepa Dinamani
The series is aimed at getting rid of the CURRENT_TIME and CURRENT_TIME_SEC macros. The macros are not y2038 safe, and there is no plan to transition them into being y2038 safe. The ktime_get_* APIs can be used in their place, and these are y2038 safe. Thanks to Arnd Bergmann for all the guidance and
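
For the common in-tree case the conversion is mechanical; a schematic example (illustrative only, not a specific hunk from the series):

    -	inode->i_mtime = CURRENT_TIME;
    +	inode->i_mtime = current_time(inode);

current_time() takes the inode so that the filesystem's time granularity can be handled in one place (cf. the struct inode* argument mentioned in the btrfs patch above).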