e would be implemented should
be very similar to just reading the chunk tree. (remove the block group
lookup from bg_via_chunks and run that).
Now what's still missing is changing the bg_via_chunks one to start
kicking off the block group searches in parallel, and then you can
predic
>> + * dropping it. It is unsafe to mess with the fs tree while it's being
>> + * dropped as we unlock the root node and parent nodes as we walk down
>> + * the tree, assuming nothing will change. If something does change
>> + * then we'll have stale information and drop references to blocks we've
>> + * already dropped.
>> + */
>> +set_bit(BTRFS_ROOT_DELETING, &root->state);
>> if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) {
>> level = btrfs_header_level(root->node);
>> path->nodes[level] = btrfs_lock_root_node(root);
>>
>
--
Hans van Kranenburg
On 11/13/18 4:03 PM, David Sterba wrote:
> On Thu, Oct 11, 2018 at 07:40:22PM +0000, Hans van Kranenburg wrote:
>> On 10/11/2018 05:13 PM, David Sterba wrote:
>>> On Thu, Oct 04, 2018 at 11:24:37PM +0200, Hans van Kranenburg wrote:
>>>> This patch set contains an addi
On 10/26/18 2:16 PM, Nikolay Borisov wrote:
>
> (Adding Chris to CC since he is the original author of the code)
>
> On 26.10.2018 15:09, Hans van Kranenburg wrote:
>> On 10/26/18 1:43 PM, Nikolay Borisov wrote:
>>> The first part of balance operation is to shrink ev
btrfs_device_get_total_bytes(device) -
> btrfs_device_get_bytes_used(device) > size_to_free ||
>
On 10/11/2018 09:40 PM, Hans van Kranenburg wrote:
> On 10/11/2018 05:13 PM, David Sterba wrote:
>> On Thu, Oct 04, 2018 at 11:24:37PM +0200, Hans van Kranenburg wrote:
>>> This patch set contains an additional fix for a newly exposed bug after
>>> the previous attempt t
On 10/11/2018 05:13 PM, David Sterba wrote:
> On Thu, Oct 04, 2018 at 11:24:37PM +0200, Hans van Kranenburg wrote:
>> This patch set contains an additional fix for a newly exposed bug after
>> the previous attempt to fix a chunk allocator bug for new DUP chunks:
>>
>> ht
ven know
> what's causing the problem.
>
> a. Freezing means there's a kernel bug. Hands down.
> b. Is it freezing on the rebuild? Or something else?
> c. I think the devs would like to see the output from btrfs-progs
> v4.17.1, 'btrfs check --mode=lowmem' and see if it finds anything
On 10/09/2018 03:14 AM, Qu Wenruo wrote:
>
>
> On 2018/10/9 6:20 AM, Hans van Kranenburg wrote:
>> On 10/08/2018 02:30 PM, Qu Wenruo wrote:
>>> Obviously, used bytes can't be larger than total bytes.
>>>
>>> Signed-off-by: Qu Wenruo
>>> ---
>
s wiki with this good news :-)
[...]
> P.S. Please let me know if you'd prefer for me to shift this
> documentation effort to btrfs.wiki.kernel.org.
Yes, absolutely. This is not specific to how we do things for Debian.
Upstream documentation can help all distros.
On 10/08/2018 03:19 PM, Hans van Kranenburg wrote:
> On 10/08/2018 08:43 AM, Qu Wenruo wrote:
>>
>>
>> On 2018/10/5 6:58 PM, Hans van Kranenburg wrote:
>>> On 10/05/2018 09:51 AM, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2018/10/
which is another type of issue
which produces used > total with correct accounting logic.
> + }
> key.objectid = dev_id;
> key.type = BTRFS_DEV_EXTENT_KEY;
> key.offset = 0;
>
On 10/08/2018 06:37 PM, Holger Hoffstätte wrote:
> On 10/08/18 17:46, Hans van Kranenburg wrote:
>
>> fs.devices() also looks for dev_items in the chunk tree:
>>
>> https://github.com/knorrie/python-btrfs/blob/master/btrfs/ctree.py#L481
>>
>> So, BOOM! yo
On 10/08/2018 05:29 PM, Holger Hoffstätte wrote:
> On 10/08/18 16:40, Hans van Kranenburg wrote:
>>> Looking at the kernel side of things in fs/btrfs/ioctl.c I see both
>>> BTRFS_IOC_TREE_SEARCH[_V2] unconditionally require CAP_SYS_ADMIN.
>>
>> That's the tree se
On 10/08/2018 04:40 PM, Hans van Kranenburg wrote:
> On 10/08/2018 04:27 PM, Holger Hoffstätte wrote:
>> (moving the discussion here from GH [1])
>>
>> Apparently there is something weird going on with the device stats
>> ioctls. I cannot get them to work as re
s: 0
flush_errs: 0
generation_errs: 0
corruption_errs: 0
> So why can Dave get his dev stats as unprivileged user?
> Does this work for anybody else? And why? :)
>
> cheers
> Holger
>
> [1]
> https://github.com/prometheus/node_exporter/issues/1100#issuecomment-427823190
>
Hi,
On 09/24/2018 01:19 AM, Adam Borowski wrote:
> On Sun, Sep 23, 2018 at 11:54:12PM +0200, Hans van Kranenburg wrote:
>> Two examples have been added, which use the new code. I would appreciate
>> extra testing. Please try them and see if the reported num
On 10/05/2018 04:42 PM, Nikolay Borisov wrote:
>
>
> On 5.10.2018 00:24, Hans van Kranenburg wrote:
>> Instead of hardcoding exceptions for RAID5 and RAID6 in the code, use an
>> nparity field in raid_attr.
>>
>> Signed-off-by: Hans van Kranenburg
On 10/05/2018 09:51 AM, Qu Wenruo wrote:
>
>
> On 2018/10/5 5:24 AM, Hans van Kranenburg wrote:
>> This patch set contains an additional fix for a newly exposed bug after
>> the previous attempt to fix a chunk allocator bug for new DUP chunks:
>>
>> https://lor
On 10/04/2018 11:24 PM, Hans van Kranenburg wrote:
> Instead of hardcoding exceptions for RAID5 and RAID6 in the code, use an
> nparity field in raid_attr.
>
> Signed-off-by: Hans van Kranenburg
> ---
> fs/btrfs/volumes.c | 18 +++---
> fs/btrfs/volumes.h | 2
On 10/04/2018 11:24 PM, Hans van Kranenburg wrote:
> num_bytes is really a way too generic name for a variable in this
> function. There are a dozen other variables that hold a number of bytes
> as value.
>
> Give it a name that actually describes what it does, which is hol
it to stripe_size at some point.
This removes the whole problematic if block.
Signed-off-by: Hans van Kranenburg
---
fs/btrfs/volumes.c | 46 +-
fs/btrfs/volumes.h | 2 +-
2 files changed, 22 insertions(+), 26 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs
very much in a learning stage regarding kernel development.
The stable patches handling workflow is not 100% clear to me yet. I
guess I have to add a Fixes: in the DUP patch which points to the
previous commit 92e222df7b.
Moo!,
Knorrie
Hans van Kranenburg (6):
btrfs: alloc_chunk: do not refurbish
The RAID5 and RAID6 profiles store one copy of the data, not 2 or 3. These
values are not used anywhere by the way.
Signed-off-by: Hans van Kranenburg
---
fs/btrfs/volumes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index
d calculation is not needed any more.
Signed-off-by: Hans van Kranenburg
---
fs/btrfs/volumes.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 40fa85e68b1f..7045814fc98d 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/bt
num_bytes is really a way too generic name for a variable in this
function. There are a dozen other variables that hold a number of bytes
as value.
Give it a name that actually describes what it does, which is holding
the size of the chunk that we're allocating.
Signed-off-by: Hans van
num_bytes is used to store the chunk length of the chunk that we're
allocating. Do not reuse it for something really different in the same
function.
Signed-off-by: Hans van Kranenburg
---
fs/btrfs/volumes.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs
Instead of hardcoding exceptions for RAID5 and RAID6 in the code, use an
nparity field in raid_attr.
Signed-off-by: Hans van Kranenburg
---
fs/btrfs/volumes.c | 18 +++---
fs/btrfs/volumes.h | 2 ++
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/volumes.c
On 09/29/2018 01:30 AM, Hans van Kranenburg wrote:
> [...]
>
> I didn't try filling it up and see what happens yet. Also, this can
> probably be done with a DUP chunk, but it's a bit harder to quickly prove.
DUP metadata chunk ^^
On 09/25/2018 02:05 AM, Hans van Kranenburg wrote:
> (I'm using v4.19-rc5 code here.)
>
> Imagine allocating a DATA|DUP chunk.
>
> [blub, see previous message]
Steps to reproduce DUP chunk beyond end of device:
First create a 6302M block device and fill it up.
mkdir bork
cd
back, so that's ok.
For the DUP thing, I sent an explanation ("DUP dev_extent might overlap
something next to it"), which doesn't seem to attract much attention
yet. I'm preparing a pile of patches to volumes.[ch] to fix this, clean
up things that I ran into and make the logic a bit less convoluted.
used for very different purposes
throughout the function. And it still is, all the time.
So, while it may seem a very logical fix (again), I guess this needs more
eyes, since we missed this line the previous time. D:
logic". It would probably be nice to do the same
in the kernel code, which would also solve the mentioned bugs and
prevent new similar ones from happening.
Have fun,
On 09/19/2018 10:04 PM, Martin Steigerwald wrote:
> Hans van Kranenburg - 19.09.18, 19:58:
>> However, as soon as we remount the filesystem with space_cache=v2 -
>>
>>> writes drop to just around 3-10 MB/s to each disk. If we remount to
>>> space_cache - lots of w
omething to conveniently use them to live show what's happening.
I'm still using the "Thanks to a bug, solved in [2]" in the above
mailing list post way of combining extent allocators in btrfs now to
keep things workable on the larger filesystem.
On 09/18/2018 08:10 PM, Marc Joliet wrote:
> Am Sonntag, 16. September 2018, 14:50:04 CEST schrieb Hans van Kranenburg:
>> The last example, where you make a subvolume and move everything into
>> it, will not do what you want. Since a subvolume is a separate new
>> directo
chive.com/linux-btrfs@vger.kernel.org/msg80664.html
If it does, then the message reached the list, but your own incoming
mail server might be throwing away email with your own address in the
From, but forwarded via a mailing list?
On 09/16/2018 02:37 PM, Hans van Kranenburg wrote:
> On 09/16/2018 01:14 PM, Rory Campbell-Lange wrote:
>> Hi
>>
>> We have a backup machine that has been happily running its backup
>> partitions on btrfs (on top of a luks encrypted disks) for a few years.
>>
combination with using Linux 4.9, I suspect there's also 'ssd' in
your mount options (not in fstab, but enabled by btrfs while mounting,
see /proc/mounts or mount command output)?
If so, this is a nice starting point for more info about what might also
be happening to your filesystem:
https://www.spinics.net/lists/linux-btrfs/msg70622.html
can be changed on each individual mount (like the atime
options), and when omitting them you get the non-optimal default again.
en, just see what happens, and only
if you start seeing things slow down a lot, start worrying about what to
do, and let us know how far you got.
Have fun,
P.S. Here's an unfinished page from a tutorial that I'm writing that is
still heavily under construction, which touches the subject of
snapshotting data and metadata. Maybe it might help to explain
"complexity starts when changing things" more:
https://github.com/knorrie/python-btrfs/blob/tutorial/tutorial/cows.md
e it
> for further examination, or does BTRFS handle that on its own?
It's no different than any other data stored in your filesystem.
So when just reading things from the snapshot, or when using the btrfs
scrub functionality, it will tell you if data that is read back matches
the checksums.
On 08/14/2018 07:09 PM, Andrei Borzenkov wrote:
> 14.08.2018 18:16, Hans van Kranenburg wrote:
>> On 08/14/2018 03:00 PM, Dmitrii Tcvetkov wrote:
>>>> Scott E. Blomquist writes:
>>>> > Hi All,
>>>> >
>>>> > [...]
>>>
> struct btrfs_trans_handle *trans,
> }
>
> again:
> - ret = btrfs_search_slot(trans, root, , path, extra_size, 1);
> + ret = btrfs_search_slot(trans, root, , path,
> + extra_size + sizeof(struct btrfs_item), 1);
> if (ret < 0) {
en overwritten
already, even the ones in distant corners of trees. A full check / scrub
/ etc would be needed to find out.
op of that, or single?
At least if you go the mkfs route (I read the other replies) then also
find out what happened. If your storage is losing data in situations
like this while it told btrfs that the data was safe, you're running a
dangerous operation.
age looks pretty interesting and
> have something in common with planned priority-aware extent allocator.
Priority-aware allocator? Is someone actually working on that, or is it
planned like everything is 'planned' (i.e. nice idea, and might happen
or might as well not happen ever, SIYH)?
> on device B and C, incremental
> backups will not work from A to C without setting received UUID. I have
> seen python-btrfs
> mentioned in a couple of emails; but have anyone of you used it in a
> production environment ?
>
> This is my first post to this email. Plea
on /dev/mapper/foo--vg-root
> Could not open root, trying backup super
>
> We are pretty sure that no unexpected power cuts have happened.
>
> At this point I don't know what information I should supply.
>
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
ly also
ask you to help testing the fix.
> btrfs-heatmap properly uses the python-exec wrapper and therefore works
> regardless of currently selected default python version. :)
>
> I hope this is useful to someone.
I bet it will. ;-]
nt [150850146304 17522688]
> ERROR: extent[156909494272, 55320576] referencer count mismatch (root: 22911,
> owner: 374857, offset: 235175936) wanted: 555, have: 1449
> Deleted root 2 item[156909494272, 178, 5476627808561673095]
> ERROR: extent[156909494272, 55320576] referencer cou
On 06/22/2018 06:25 PM, Nikolay Borisov wrote:
>
>
> On 22.06.2018 19:17, Su Yue wrote:
>>
>>
>>
>>> Sent: Friday, June 22, 2018 at 11:26 PM
>>> From: "Hans van Kranenburg"
>>> To: "Nikolay Borisov" , "Su Yue"
found_level = btrfs_header_level(eb);
>> if (found_level >= BTRFS_MAX_LEVEL) {
>> -btrfs_err(fs_info, "bad tree block level %d",
>> - (int)btrfs_header_level(eb));
>> +btrfs_err(fs_info, "bad tree blo
ns.
>
> The biggest problem is, the behavior isn't even consistent across
> btrfs-progs.
> mkfs.btrfs accepts such out-of-order parameters while btrfs does not.
>
> And most common tools, like commands provided by coreutils, don't
> care about the order.
> The only daily exception is 'scp', which I find pretty unhandy.
>
> And just as Paul and Hugo, I think there are quite some users preferring
> out-of-order parameter/options.
>
>
> I also understand the maintenance burden, but at least let's see if
> there is a better solution for this.
>
> Thanks,
> Qu
>
r changed. But somehow it gets corrupted.
>
> The first possibility I considered was that SE Linux code might be at fault.
> I asked on the SE Linux mailing list (I haven't been involved in SE Linux
> kernel code for about 15 years) and was informed that this isn't likely at
> al
to be divided by 3, and not 6 to get the size of each of
those device extents.
Signed-off-by: Hans van Kranenburg
---
cmds-fi-usage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-fi-usage.c b/cmds-fi-usage.c
index b9a2b1c8..3bd2ccdf 100644
--- a/cmds-fi-usage.c
+++ b/cmds
tried to find out why, but apparently it wasn't a really
obvious error somewhere.
On 05/03/2018 20:47, Marc MERLIN wrote:
> On Mon, Mar 05, 2018 at 10:38:16PM +0300, Andrei Borzenkov wrote:
>>> If I absolutely know that the data is the same on both sides, how do I
>>> either
>>> 1) force back in a 'Received UUID' value on the destination
>>
>> I suppose the most simple is to
> "parent_uuid=014fc004-ae04-0148-9525-1bf556fd4d10". Not really sure
> where that comes from, but disk B has the same, so maybe that's the
> UUID of the original snapshot on disk A?
>
> Is it possible to continue to send incremental snapshots between these
> two file systems, or must I do a full sync?
ssd_spread also includes ssd,
but doesn't show it.
I personally don't like all of them at all, and I should really finish
and send my proposal to get them replaced by options that can choose
extent allocator for data and metadata individually (instead of some
setting that changes them both at the same ti
On 02/21/2018 04:19 PM, Ellis H. Wilson III wrote:
> On 02/21/2018 10:03 AM, Hans van Kranenburg wrote:
>> On 02/21/2018 03:49 PM, Ellis H. Wilson III wrote:
>>> On 02/20/2018 08:49 PM, Qu Wenruo wrote:
>>>> My suggestion is to use balance to reduce number of block gr
h command "the balance" means.
And what does this tell you?
https://github.com/knorrie/python-btrfs/blob/develop/examples/show_free_space_fragmentation.py
Just to make sure you're not pointlessly shovelling data around on a
filesystem that is already in bad shape.
>> BTW, if O
so they will have a lingering effect if you don't balance
>>> everything.
>>
>> According to the wiki, 4.14 does indeed have the ssd changes.
>>
>> According to the bug, he's running 4.13.x on one server and 4.14.x on
>> two. So upgrading the one to 4.14.x shou
od to know, thanks!
>
>>> Snapshot creation
>>> and deletion both operate at between 0.25s to 0.5s.
>>
>> IIRC snapshot deletion is delayed, so the real work doesn't happen when
>> "btrfs sub del" returns.
>
> I was using btrfs sub del -C for the deletions, so I believe (if that
> command truly waits for the subvolume to be utterly gone) it captures
> the entirety of the snapshot.
>
> Best,
>
> ellis
> Thanks,
> Qu
>
>>
>>
>>>
>>> Note that I'm not sensitive to multi-second mount delays. I am
>>> sensitive to multi-minute mount delays, hence why I'm bringing this up.
>>>
>>> FWIW: I am currently populating a machine we have with 6T
On 02/14/2018 03:49 PM, David Sterba wrote:
> On Mon, Feb 05, 2018 at 05:45:11PM +0100, Hans van Kranenburg wrote:
>> In case of using DUP, we search for enough unallocated disk space on a
>> device to hold two stripes.
>>
>> The devices_info[ndevs-1].max_avail that hold
On 02/12/2018 03:45 PM, Ellis H. Wilson III wrote:
> On 02/11/2018 01:03 PM, Hans van Kranenburg wrote:
>>> 3. I need to look at the code to understand the interplay between
>>> qgroups, snapshots, and foreground I/O performance as there isn't
>>> existing architec
otas would be more accurate (no fragmentation _between_
>> snapshots) and you'll have some decent performance with snapshots.
>> If that is all you care.
>
> CoW is still valuable for us as we're shooting to support on the order
> of hundreds of snapshots per subvolume,
have time to write-up my findings for #3 I will similarly share that.
>
> Thanks to all for your input on this issue.
Have fun!
14 remained, so I estimate 10 were
> deleted as part of snapper's cleaning algorithm. I quickly also ran
> dstat during the slow-down, and after 5s it finally started and reported
> only about 3-6MB/s in terms of read and write to the drive in question.
>
> I have since run top and dsta
nics.net/lists/linux-btrfs/msg69752.html
Fixes: 73c5de0051 ("btrfs: quasi-round-robin for chunk allocation")
Fixes: 86db25785a ("Btrfs: fix max chunk size on raid5/6")
Signed-off-by: Hans van Kranenburg <hans.van.kranenb...@mendix.com>
Cc: Naohiro Aota <naohiro.a...@wdc.
0, );
> +
> [...]
When you have enough subvolumes in a filesystem, let's say 10 (yes,
that sometimes happens), the current btrfs sub list is quite unusable,
which is kind of expected. But, currently, sub show is also unusable
because it still starts loading a list of
list part looks like a great improvement.
>> These patches are also available on my GitHub:
>> https://github.com/osandov/btrfs-progs/tree/libbtrfsutil. That branch
>> will rebase as I update this series.
>>
>> Please share feedback regarding the API, implementation,
On 01/24/2018 07:54 PM, waxhead wrote:
> Hans van Kranenburg wrote:
>> On 01/23/2018 08:51 PM, waxhead wrote:
>>> Nikolay Borisov wrote:
>>>> On 23.01.2018 16:20, Hans van Kranenburg wrote:
>>
>> [...]
>>
>>>>>
>>>>> W
On 01/23/2018 08:51 PM, waxhead wrote:
> Nikolay Borisov wrote:
>> On 23.01.2018 16:20, Hans van Kranenburg wrote:
[...]
>>>
>>> We also had a discussion about the "backup roots" that are stored
>>> besides the superblock, and that they are
modes.
We also had a discussion about the "backup roots" that are stored
besides the superblock, and that they are "better than nothing" to help
maybe recover something from a borken fs, but never ever guarantee you
will get a working filesystem back.
The same holds for superblocks from
ssd
> changes.
Most probably, yes.
> Maybe it's due to other patches. Either way, it's an
> interesting and useful change. =:^)
of replacing a bad disk every now and then :)
I don't think these kind of things will ever end up in kernel code.
[0] There's a version in the devel branch in git that also works without
free space tree, taking a slower detour via the extent tree.
btrfs balance at all?
:)
> Are you using any filters
> whatsoever? The documentation
> [https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-balance] has the
> following warning:
>
> Warning: running balance without filters will take a lot of time as it
> basically rewrit
-super' failed
Signed-off-by: Hans van Kranenburg <h...@knorrie.org>
---
Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 30a0ee22..390b138f 100644
--- a/Makefile
+++ b/Makefile
@@ -220,7 +220,7 @@ cmds_restore_cflags =
-DBTRFSRESTOR
Build would fail because it couldn't find the usage function.
Signed-off-by: Hans van Kranenburg <h...@knorrie.org>
---
btrfs-calc-size.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/btrfs-calc-size.c b/btrfs-calc-size.c
index 1ac7c785..d2d68ab2 100644
--- a/btrfs-calc-size.c
+++ b
Alternatively, it might be a better idea to just remove the deprecated source
files, since this is not the first time build failures in them went unnoticed.
Hans van Kranenburg (2):
btrfs-progs: Fix progs_extra build dependencies
btrfs-progs: Fix build of btrfs-calc-size
Makefile
, but
>> I'm pretty sure it is the nightly balance.
>>
>> I've run btrfs check on / with no issues recently.
>> And as expected:
>> cat /sys/block/sdb/queue/rotational
>> 0
>>
>>> I wonder if it's the same old "ssd allocation scheme" problem, and no
>>> balancing done in a long time or at all.
Looks like it. So, yay, you're on 4.14 already. Now just do a full
o error_bdev_put;
>> +}
>> +disk_super = (struct btrfs_super_block *) sb_bh->b_data;
>>
>> devid = btrfs_stack_device_id(&disk_super->dev_item);
>> transid = btrfs_super_generation(disk_super);
>> @@ -1413,7 +1417,7 @@ int btrfs_scan_one_device(const char *path, fmode_t
>> flags, void *holder,
>> if (!ret && fs_devices_ret)
>> (*fs_devices_ret)->total_devices = total_devices;
>>
>> -btrfs_release_disk_super(page);
>> +brelse(sb_bh);
>>
>> error_bdev_put:
>> blkdev_put(bdev, flags);
>>
>
th btrfs because the allocator has to work really hard to find
> free space for COWing. Really consider deleting stuff or adding more space.
s (forgot to ask for your /proc/mounts before):
* Use the noatime mount option, so that merely accessing files does not
lead to changes in metadata, which lead to writes, which lead to cowing
and writes in a new place, which lead to updates of the free space
administration etc...
ng X MB/s to disk?
2) How big is this filesystem? What does your `btrfs fi df /mountpoint` say?
3) What kind of workload are you running? E.g. how can you describe it
within a range from "big files which just sit there" to "small writes
and deletes all over the place all the time"?
races of the
kernel thread.
watch 'cat /proc/<pid>/stack'
where <pid> is the pid of the btrfs-transaction process.
In there, you will see a pattern of recurring things, like, it's
searching for free space, it's writing out free space cache, or other
things. Correlate this with the disk write traffic and see i
Ok, just want to add one more thing :)
On 11/28/2017 08:12 PM, David Sterba wrote:
> On Tue, Nov 28, 2017 at 07:00:28PM +0100, Hans van Kranenburg wrote:
>> On 11/28/2017 06:34 PM, David Sterba wrote:
>>> On Fri, Nov 24, 2017 at 08:16:05PM +0100, Hans van Kranenburg wrote:
&g
On 11/28/2017 06:34 PM, David Sterba wrote:
> On Fri, Nov 24, 2017 at 08:16:05PM +0100, Hans van Kranenburg wrote:
>> Last week, when implementing the automatic classifier to dynamically
>> create tree item data objects by key type in python-btrfs, I ran into
>> the follo
-# btrfs inspect-internal dump-tree -t fs /dev/block/device
ERROR: unrecognized tree id: fs
Without this fix I can't dump-tree fs, but I can dump-tree fs_tree and
also fs_tree_tree, which is a bit silly.
---
cmds-inspect-dump-tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
all three to determine what's inside the item data?
So, the code in print_tree.c would also need to know about the tree
number and pass that into the different functions.
Am I missing something, or is my observation correct?
Thanks,
allows us to stream a search query into it to
output a full dump of a metadata tree. See the examples/dump_tree.py
about this!
Tomorrow I'll update pypi and will prepare debian packages for unstable
and stretch-backports.
Have fun,
On 11/18/2017 12:48 PM, Hans van Kranenburg wrote:
>
> So, who wants to help?
>
> 1. Find a test system that you can crash.
> 2. Create a test filesystem with some data.
> 3. Run with 4.14? (makes the most sense I think)
> 4. Continuously feed the data to balance and send ev
f 44 00 00 55 48 89 e5 41 57 41
> [ 3498.167690] RIP: read_node_slot+0xd7/0xe0 [btrfs] RSP: b4ee47d5fb88
> [ 3498.167892] ---[ end trace 6a751a3020dd3086 ]---
> [ 3499.572729] BTRFS info (device sdb3): relocating block group
> 304972038144 flags data|raid1
> [ 3504.068432]
On 11/13/2017 01:41 AM, Qu Wenruo wrote:
>
> On 2017-11-13 06:01, Hans van Kranenburg wrote:
>> On 11/12/2017 09:58 PM, Robert White wrote:
>>> Is the commit interval monotonic, or is it seconds after sync?
>>>
>>> What I mean is that if I manually call
g a sync() the commit=N guarantee is still
> being met for the whole system for any N, but applications could tend to
> avoid mid-write commits by planning their sync()s.
>
> Just a thought.
RFS_UUID_SIZE);
Shouldn't we also wipe the other related fields here, like stime, rtime,
stransid, rtransid?
> + }
> + }
> +
> ret = btrfs_update_root(trans, fs_info->tree_root,
> &root->root_key, &root->root_item);
> if (ret < 0) {
>
1 - 100 of 308 matches