From: Jeff Mahoney
Subject: btrfs: Ensure proper sector alignment for btrfs_free_reserved_data_space
References: bsc#1005666
Patch-mainline: Submitted 18 Nov 2016, linux-btrfs
This fixes the WARN_ON on BTRFS_I(inode)->reserved_extents in
btrfs_destroy_inode and the WARN_ON on nonzero delalloc by
Hi,
On 11/19/2016 01:57 AM, Qu Wenruo wrote:
> On 11/18/2016 11:08 PM, Hans van Kranenburg wrote:
>> On 11/18/2016 03:08 AM, Qu Wenruo wrote:
> I don't see what displaying a blockgroup-level aggregate usage number
> has to do with multi-device, except that the same %usage will appear
>
On Wed, Nov 16, 2016 at 01:52:08PM +0100, Christoph Hellwig wrote:
> Pass the full bio to the decompression routines and use bio iterators
> to iterate over the data in the bio.
One question below,
>
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/compression.c | 122
> +--
On 11/18/2016 11:08 PM, Hans van Kranenburg wrote:
On 11/18/2016 03:08 AM, Qu Wenruo wrote:
Just found one small problem.
After specifying --size 16 to output a given block group (a small block
group; I need a large size to make the output visible), it consumes a full
CPU and takes a very long time
On Fri, Nov 18, 2016 at 07:09:34PM +0100, Goffredo Baroncelli wrote:
> Hi Zygo
> On 2016-11-18 00:13, Zygo Blaxell wrote:
> > On Tue, Nov 15, 2016 at 10:50:22AM +0800, Qu Wenruo wrote:
> >> Fix the so-called famous RAID5/6 scrub error.
> >>
> >> Thanks to Goffredo Baroncelli for reporting the bug, and
Yes, I don't think one could find any NAND based SSDs with <4k page
size on the market right now (even =4k is hard to get) and 4k is
becoming the new norm for HDDs. However, some HDD manufacturers
continue to offer drives with 512 byte sectors (I think it's possible
to get new ones in sizable quant
On 11/16/2016 11:10 AM, David Sterba wrote:
On Mon, Nov 14, 2016 at 09:55:34AM +0800, Qu Wenruo wrote:
At 11/12/2016 04:22 AM, Liu Bo wrote:
On Tue, Oct 11, 2016 at 02:47:42PM +0800, Wang Xiaoguang wrote:
If we use mount option "-o max_inline=sectorsize", say 4096, indeed
even for a fresh fs
2016-11-18 23:32 GMT+03:00 Janos Toth F. :
> Based on the comments of this patch, stripe size could theoretically
> go as low as 512 byte:
> https://mail-archive.com/linux-btrfs@vger.kernel.org/msg56011.html
> If these very small (0.5k-2k) stripe sizes could really work (it's
> possible to implemen
On Wed, Nov 16, 2016 at 01:52:15PM +0100, Christoph Hellwig wrote:
> And remove the bogus check for a NULL return value from kmap, which
> can't happen. While we're at it: I don't think that kmapping up to 256
> will work without deadlocks on highmem machines; a better idea would
> be to use vm_ma
2016-11-18 21:15 GMT+03:00 Goffredo Baroncelli :
> Hello,
>
> these are only my thoughts; no code here, but I would like to share them hoping
> that they could be useful.
>
> As reported several times by Zygo (and others), one of the problems of raid5/6
> is the write hole. Today BTRFS is not capable
Based on the comments of this patch, stripe size could theoretically
go as low as 512 byte:
https://mail-archive.com/linux-btrfs@vger.kernel.org/msg56011.html
If these very small (0.5k-2k) stripe sizes could really work (it's
possible to implement such changes and it does not degrade performance
to
On Wed, Nov 16, 2016 at 01:52:14PM +0100, Christoph Hellwig wrote:
> Rework the loop a little bit to use the generic bio_for_each_segment_all
> helper for iterating over the bio.
One minor nit. Besides that,
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/file-i
On Wed, Nov 16, 2016 at 01:52:13PM +0100, Christoph Hellwig wrote:
> Use the bvec offset and len members to prepare for multipage bvecs.
>
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/compression.c | 10 --
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btr
On Wed, Nov 16, 2016 at 01:52:12PM +0100, Christoph Hellwig wrote:
> Instead of using bi_vcnt to calculate it.
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/compression.c | 7 ++-
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/fs/btrfs/
On Wed, Nov 16, 2016 at 01:52:11PM +0100, Christoph Hellwig wrote:
> Use bio_for_each_segment_all to iterate over the segments instead.
> This requires a bit of reshuffling so that we only look up the ordered
> item once inside the bio_for_each_segment_all loop.
Reviewed-by: Omar Sandoval
> Si
On Wed, Nov 16, 2016 at 01:52:10PM +0100, Christoph Hellwig wrote:
> Just use bio_for_each_segment_all to iterate over all segments.
The subject seems to be copied from patch 2 for this one. Otherwise,
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/inode.c | 7
On Wed, Nov 16, 2016 at 01:52:09PM +0100, Christoph Hellwig wrote:
> Just use bio_for_each_segment_all to iterate over all segments.
>
Besides the minor whitespace issue below,
Reviewed-by: Omar Sandoval
> Signed-off-by: Christoph Hellwig
> ---
> fs/btrfs/raid56.c | 16 ++--
> 1
On Wed, Nov 16, 2016 at 12:54:32PM -0800, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Also, the other progress messages go to stderr, so "checking extents"
> probably should, as well.
>
> Fixes: c7a1f66a205f ("btrfs-progs: check: switch some messages to common
> helpers")
> Signed-off-by: Om
On Thu, Nov 17, 2016 at 02:01:31PM +0900, Tsutomu Itoh wrote:
> Option -f, -F and --sort don't work because a conditional expression
> of ASSERT is wrong.
>
> Signed-off-by: Tsutomu Itoh
Applied, thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
On Fri, Nov 18, 2016 at 02:44:12PM +0900, Tsutomu Itoh wrote:
> Add test-cli to test target of Makefile.
>
> Signed-off-by: Tsutomu Itoh
Applied, thanks.
On Fri, Nov 18, 2016 at 02:36:28PM +0800, Qu Wenruo wrote:
>
>
> On 11/18/2016 01:47 PM, Tsutomu Itoh wrote:
> > In my test environment, the size of /lib/modules/`uname -r`/* is
> > larger than 1GB, so the test data cannot be copied to testdev.
> > Therefore we need to expand the size of testdev.
>
> IMHO the
Hello,
these are only my thoughts; no code here, but I would like to share them hoping
that they could be useful.
As reported several times by Zygo (and others), one of the problems of raid5/6
is the write hole. Today BTRFS is not capable of addressing it.
The problem is that the stripe size is bigger
On Fri, Nov 18, 2016 at 02:49:51PM +0900, Tsutomu Itoh wrote:
> convert-tests 004 failed because the argument to check_all_images
> is not specified.
>
> # make test-convert
> [TEST] convert-tests.sh
> [TEST/conv] 004-ext2-backup-superblock-ranges
> find: '': No such file or directory
On Mon, Nov 14, 2016 at 10:43:17AM -0800, Omar Sandoval wrote:
> Cover letter from v1:
>
> This series implements some support for space_cache=v2 in btrfs-progs.
> In particular, this adds support for `btrfs check --clear-space-cache v2`,
> proper printing of the free space tree flags for `btrfs i
Hi Zygo
On 2016-11-18 00:13, Zygo Blaxell wrote:
> On Tue, Nov 15, 2016 at 10:50:22AM +0800, Qu Wenruo wrote:
>> Fix the so-called famous RAID5/6 scrub error.
>>
>> Thanks to Goffredo Baroncelli for reporting the bug and bringing it to
>> our attention.
>> (Yes, without the Phoronix report on this,
>> http
On Thu, Nov 17, 2016 at 11:24:41AM -0800, Omar Sandoval wrote:
> On Tue, Nov 15, 2016 at 05:16:27PM +0900, Tsutomu Itoh wrote:
> > The following patch was imperfect, so xfstests btrfs/038 failed.
> >
> > 6d4fb3d btrfs-progs: send: fix handling of multiple snapshots (-p option)
> >
> > [bef
On 11/18/2016 04:30 PM, Austin S. Hemmelgarn wrote:
>
> Now, I personally have no issue with the Hilbert ordering, but if there
> were an option to use a linear ordering, I would almost certainly use
> that instead, simply because I could more easily explain the data to
> people.
It's in there, b
On 2016-11-18 09:37, Hans van Kranenburg wrote:
Ha,
On 11/18/2016 01:36 PM, Austin S. Hemmelgarn wrote:
On 2016-11-17 16:08, Hans van Kranenburg wrote:
On 11/17/2016 08:27 PM, Austin S. Hemmelgarn wrote:
On 2016-11-17 13:51, Hans van Kranenburg wrote:
But, the fun with visualizations of data
On 2016-11-18 10:08, Hans van Kranenburg wrote:
On 11/18/2016 03:08 AM, Qu Wenruo wrote:
When generating a picture of a file system with multiple devices,
boundaries between the separate devices are not visible now.
If someone has a brilliant idea about how to do this without throwing
out actua
On 11/18/2016 03:08 AM, Qu Wenruo wrote:
>
> Just found one small problem.
> After specifying --size 16 to output a given block group (a small block
> group; I need a large size to make the output visible), it consumes a full
> CPU and takes a very long time to run.
> So long that I don't even want to wait.
Yes, the short window between the stalls and the panic makes it
difficult to manually check much. I could set up a cron every 5 minutes
or so if you want. Also, I see the OOMs in 4.8, but it has yet to
panic on me. Whereas 4.9rc has panicked both times I've booted it, so
depending on what you want
On Fri, Nov 18, 2016 at 09:38:44AM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> After the last big change in the delayed references code that was needed
> for the last qgroups rework, the red black tree node field of struct
> btrfs_delayed_ref_node is no longer used, so just remove
Ha,
On 11/18/2016 01:36 PM, Austin S. Hemmelgarn wrote:
> On 2016-11-17 16:08, Hans van Kranenburg wrote:
>> On 11/17/2016 08:27 PM, Austin S. Hemmelgarn wrote:
>>> On 2016-11-17 13:51, Hans van Kranenburg wrote:
>> But, the fun with visualizations of data is that you learn whether they
>> just wo
On 11/18/2016 04:37 AM, fdman...@kernel.org wrote:
From: Filipe Manana
In commit 5bc7247ac47c (Btrfs: fix broken nocow after balance) we started
abusing the rtransid and otransid fields of root items from relocation
trees to fix some issues with nodatacow mode. However later in commit
ba8b02893
On 11/18/2016 04:37 AM, fdman...@kernel.org wrote:
From: Filipe Manana
During relocation of a data block group we create a relocation tree
for each fs/subvol tree by making a snapshot of each tree using
btrfs_copy_root() and the tree's commit root, and then setting the last
snapshot field for t
Hello,
I use bcache under a btrfs RAID with 4 hard drives.
Normal use seems to work well. I have no problems.
I have the btrfs RAID mounted through "/dev/bcache3".
But when I remove a disk (simulating that it's broken), btrfs doesn't inform me
about a "missing disk":
# btrfs fi show
Label: 'RAID' uuid: d0
On 2016-11-17 16:08, Hans van Kranenburg wrote:
On 11/17/2016 08:27 PM, Austin S. Hemmelgarn wrote:
On 2016-11-17 13:51, Hans van Kranenburg wrote:
When generating a picture of a file system with multiple devices,
boundaries between the separate devices are not visible now.
If someone has a b
It could be totally unrelated but I have a similar problem: processes
get randomly OOM'd when I am doing anything "sort of heavy" on my
Btrfs filesystems.
I did some "evil tuning", so I assumed that must be the problem (even
if the values looked sane for my system). Thus, I kept cutting back on
the
On 2016/11/18 6:49, Vlastimil Babka wrote:
> On 11/16/2016 02:39 PM, E V wrote:
>> System panicked overnight running 4.9rc5 & rsync. Attached a photo of
>> the stack trace, and the 38 call traces in a 2-minute window shortly
>> before, to the bugzilla case for those not on its e-mail list:
>>
>> ht
On Thursday, 17 November 2016 04:01:52 CET, Zygo Blaxell wrote:
Duperemove does use a lot of memory, but the logs at that URL only show
2G of RAM in duperemove--not nearly enough to trigger OOM under normal
conditions on an 8G machine. There's another process with 6G of virtual
address space (alth
On Fri, Nov 18, 2016 at 9:37 AM, wrote:
> From: Filipe Manana
>
> On openSUSE/SLE systems where balance is triggered periodically in the
> background, snapshotting happens when doing package installations and
> upgrades, and (by default) the root system is organized with multiple
> subvolumes, t
On Fri, Nov 18, 2016 at 04:06:22AM +0200, Marcus Sundman wrote:
> On 18.11.2016 02:52, Hugo Mills wrote:
> >On Fri, Nov 18, 2016 at 02:38:25AM +0200, Marcus Sundman wrote:
> >>The FAQ says that "the best solution for small devices (under about
> >>16 GB) is to reformat the FS with the --mixed optio
From: Filipe Manana
After the last big change in the delayed references code that was needed
for the last qgroups rework, the red black tree node field of struct
btrfs_delayed_ref_node is no longer used, so just remove it; this helps
us save some memory (since struct rb_node is 24 bytes on x86_64
From: Filipe Manana
During relocation of a data block group we create a relocation tree
for each fs/subvol tree by making a snapshot of each tree using
btrfs_copy_root() and the tree's commit root, and then setting the last
snapshot field for the fs/subvol tree's root to the value of the current
From: Filipe Manana
On openSUSE/SLE systems where balance is triggered periodically in the
background, snapshotting happens when doing package installations and
upgrades, and (by default) the root system is organized with multiple
subvolumes, the following warning was triggered often:
[ 630.773
From: Filipe Manana
In commit 5bc7247ac47c (Btrfs: fix broken nocow after balance) we started
abusing the rtransid and otransid fields of root items from relocation
trees to fix some issues with nodatacow mode. However later in commit
ba8b0289333a (Btrfs: do not reset last_snapshot after relocati