On Sun, Aug 14, 2016 at 04:11:31PM -0400, Harinath Nampally wrote:
> This patch checks the ret value and jumps to clean up in case the
> btrfs_add_system_chunk call fails
>
> Signed-off-by: Harinath Nampally
> ---
> fs/btrfs/volumes.c | 11 +++
> 1 file changed, 7 insertions(+), 4 deletions(-)
On 14.08.2016 19:20, Chris Murphy wrote:
>
> As an aside, I'm finding the size information for the data chunk in
> 'fi us' confusing...
>
> The sample file system contains one file:
> [root@f24s ~]# ls -lh /mnt/0
> total 1.4G
> -rw-r--r--. 1 root root 1.4G Aug 13 19:24
> Fedora-Workstation-Live-x86_
On Sunday, August 14, 2016 8:04:14 PM CEST you wrote:
> On Sunday, August 14, 2016 10:20:39 AM CEST you wrote:
> > On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader
> >
> > wrote:
> > > Hi,
> > >
> > > I have two questions
> > >
> > > 1) Layout of raid10 in btrfs
> > > btrfs pools all devices and
On 14.08.2016 19:20, Chris Murphy wrote:
...
>
> This volume now has about a dozen chunks created by kernel code, and
> the stripe X to devid Y mapping is identical. Using dd and hexdump,
> I'm finding that stripes 0 and 1 are mirrored pairs; they contain
> identical information. And stripes 2 and 3 ar
Refactor the btrfs_qgroup_insert_dirty_extent() function into two functions:
1. btrfs_qgroup_insert_dirty_extent_nolock()
Almost the same as the original code.
For delayed_ref usage, where the delayed refs are already locked.
Change the return value type to int, since the caller never needs the
pointer, but
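A minimal sketch of the lock/nolock split described above, using illustrative types and a pthread mutex standing in for the kernel lock; only the naming idea follows the refactor, none of this is the actual qgroup code:

#include <pthread.h>

/* Illustrative stand-in for the dirty-extent tree; not btrfs code. */
struct dirty_extent_tree {
	pthread_mutex_t lock;
	/* ... records keyed by bytenr ... */
};

/* _nolock variant: the caller (e.g. delayed-ref processing) already
 * holds tree->lock, so no locking is done here. Returns 0 on success. */
static int insert_dirty_extent_nolock(struct dirty_extent_tree *tree,
				      unsigned long long bytenr,
				      unsigned long long num_bytes)
{
	(void)tree; (void)bytenr; (void)num_bytes;
	/* insert or merge the record under the held lock */
	return 0;
}

/* Locked variant for callers that do not hold the lock themselves. */
static int insert_dirty_extent(struct dirty_extent_tree *tree,
			       unsigned long long bytenr,
			       unsigned long long num_bytes)
{
	int ret;

	pthread_mutex_lock(&tree->lock);
	ret = insert_dirty_extent_nolock(tree, bytenr, num_bytes);
	pthread_mutex_unlock(&tree->lock);
	return ret;
}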
This patchset contains fixes for a REGRESSION introduced in 4.2.
It introduces 2 fixes for data extent owner hacks.
One can be triggered by balance, the other can be triggered by log replay
after power loss.
The root causes are all similar: the EXTENT_DATA owner is changed by dirty
hacks, from sw
When doing log replay at mount time (after power loss), qgroup will leak
the numbers of replayed data extents.
The cause is almost the same as for balance.
So fix it by manually informing qgroup about owner-changed extents.
The bug can be detected by the btrfs/119 test case.
Cc: Mark Fasheh
Signed-off-by: Qu
This patch fixes a REGRESSION introduced in 4.2, caused by the big quota
rework.
When balancing data extents, qgroup will leak all its numbers for
relocated data extents.
The relocation is done in the following steps for data extents:
1) Create data reloc tree and inode
2) Copy all data extents t
At 08/12/2016 01:32 AM, Rakesh Sankeshi wrote:
I set a 200GB limit for one user and a 100GB limit for another user.
As soon as they reached 139GB and 53GB respectively, I started hitting the quota errors.
Is there any way to work around the quota functionality on a btrfs LZO-compressed
filesystem?
Please paste "btrfs qgroup show -prce " ou
At 08/12/2016 09:33 PM, Filipe Manana wrote:
On Tue, Aug 9, 2016 at 9:30 AM, Qu Wenruo wrote:
When balancing data extents, qgroup will leak all its numbers for
relocated data extents.
The relocation is done in the following steps for data extents:
1) Create data reloc tree and inode
2) Copy
This patch checks the ret value and jumps to clean up in case the
btrfs_add_system_chunk call fails
Signed-off-by: Harinath Nampally
---
fs/btrfs/volumes.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 366b335..fedb301 10
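A minimal sketch of the check-and-goto-cleanup pattern the patch applies, with a hypothetical stand-in for btrfs_add_system_chunk() rather than the real volumes.c code:

#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-in for btrfs_add_system_chunk(); always fails here
 * so the cleanup path is exercised. */
static int add_system_chunk_stub(void *chunk)
{
	(void)chunk;
	return -EIO;
}

static int finish_chunk_alloc_example(void)
{
	void *chunk;
	int ret;

	chunk = malloc(64);
	if (!chunk)
		return -ENOMEM;

	ret = add_system_chunk_stub(chunk);
	if (ret)
		goto out;	/* propagate the failure instead of ignoring it */

	/* ... further work that also ends at the shared cleanup ... */
out:
	free(chunk);
	return ret;
}

int main(void)
{
	return finish_chunk_alloc_example() ? 1 : 0;
}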
On Sunday, August 14, 2016 10:20:39 AM CEST you wrote:
> On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader
>
> wrote:
> > Hi,
> >
> > I have two questions
> >
> > 1) Layout of raid10 in btrfs
> > btrfs pools all devices and then stripes and mirrors across this pool. Is
> > it therefore correct, t
On Sat, Aug 13, 2016 at 9:39 AM, Wolfgang Mader
wrote:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs
> btrfs pools all devices and then stripes and mirrors across this pool. Is it
> therefore correct that a raid10 layout consisting of 4 devices a,b,c,d is
> _not_
>
>
Hi Josef,
Anything I could do or test? Results with a vanilla next branch are the
same.
Stefan
On 11.08.2016 at 08:09, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> the backtrace and info on umount looks the same:
>
> [241910.341124] ------------[ cut here ]------------
> [241910.379991] W
On Sat, Aug 13, 2016 at 05:39:18PM +0200, Wolfgang Mader wrote:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs
> btrfs pools all devices and then stripes and mirrors across this pool. Is it
> therefore correct that a raid10 layout consisting of 4 devices a,b,c,d is
> _not_
>
Wolfgang Mader posted on Sat, 13 Aug 2016 17:39:18 +0200 as excerpted:
> Hi,
>
> I have two questions
>
> 1) Layout of raid10 in btrfs btrfs pools all devices and then stripes
> and mirrors across this pool. Is it therefore correct that a raid10
> layout consisting of 4 devices a,b,c,d is _not_
btrfs_root_item maintains the ctime for root updates.
This is not part of vfs_inode.
Since current_time() takes struct inode * as an argument,
as Linus suggested, it cannot be used to update root
times unless we modify the signature to take an inode.
Since btrfs uses nanosecond time granularity, it c
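A rough sketch of one way root times could be stamped without a VFS inode, assuming a y2038-safe clock read and the btrfs stack timespec setters from ctree.h; this is an illustration, not the actual patch:

/*
 * Assumed sketch (kernel context, needs the btrfs headers): stamp the
 * root item's ctime from a direct clock read, since there is no VFS
 * inode to hand to current_time() when updating a btrfs_root_item.
 */
#include <linux/ktime.h>
#include <linux/timekeeping.h>

static void example_set_root_ctime(struct btrfs_root_item *item)
{
	struct timespec64 now;

	ktime_get_real_ts64(&now);	/* wall-clock time, 64-bit seconds */

	/* btrfs stores seconds and nanoseconds separately in the item. */
	btrfs_set_stack_timespec_sec(&item->ctime, now.tv_sec);
	btrfs_set_stack_timespec_nsec(&item->ctime, now.tv_nsec);
}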
Rakesh Sankeshi posted on Fri, 12 Aug 2016 08:47:13 -0700 as excerpted:
> Another question I had was: is there any way to check what the
> directory/file sizes are prior to compression and how much compression btrfs
> did, etc? Basically some stats around compression and/or dedupe from
> btrfs.
There
The series is aimed at getting rid of CURRENT_TIME and CURRENT_TIME_SEC macros.
The macros are not y2038 safe. There is no plan to transition them into being
y2038 safe.
The ktime_get_* APIs can be used in their place, and these are y2038 safe.
Thanks to Arnd Bergmann for all the guidance and discus
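For illustration, an assumed example of the kind of replacement the series describes, reading the wall clock into a 64-bit-seconds timespec64 instead of using the old macro:

/*
 * Assumed illustration, not a specific patch from the series: CURRENT_TIME
 * yielded a struct timespec whose seconds field is 32-bit on 32-bit
 * systems and overflows in 2038; ktime_get_real_ts64() fills a
 * struct timespec64 and stays valid past 2038.
 */
#include <linux/timekeeping.h>

static void example_timestamp(struct timespec64 *ts)
{
	/* previously: struct timespec t = CURRENT_TIME; */
	ktime_get_real_ts64(ts);
}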