metadata balance and a subvolume delete at the same time, which is
something that btrfs definitely needs to be able to handle.
Doing a btrfs check on the block device now; will follow up with the output.
--
Hans van Kranenburg - System / Network Engineer
T +31 (0)10 2760434 | hans.van.kranenb...@mendix.com
On 07/31/2016 01:18 AM, Hans van Kranenburg wrote:
> blahblahblahblahblalahba
>
> Doing a btrfs check on the block device now; will follow up with the output.
>
Output so far:
-# btrfs check /dev/xvdc 2>&1 | tee btrfs-check-xvdc
checking extents
ref mismatch on [3649516437504 1
On 07/31/2016 02:13 AM, Hans van Kranenburg wrote:
> On 07/31/2016 01:18 AM, Hans van Kranenburg wrote:
>> blahblahblahblahblalahba
>>
>> Doing a btrfs check on the block device now; will follow up with the output.
>>
>
> Output so far:
>
> -# btrfs chec
btrfs-progs 4.7, check reports many "incorrect local backref count" messages
lower level, on iSCSI storage) meant to be used for upgrade-testing
and performance testing, so if anything goes wrong in whatever way,
there will be no panicking involved.
when the bugs caused
corruption, using a fixed kernel will not retroactively fix the corrupt
data.
Hint: "this was fixed in 4.x.y, so run that version or later" is not
always the whole answer here, because you'll see that fixes like these
even show up in kernels like 3.16.y
But maybe
On 09/09/2016 11:37 PM, Hans van Kranenburg wrote:
>
> While trying to enable skinny metadata on a filesystem, I got this error
> (after minutes of reading from disk by the program):
>
> -# btrfstune -x /dev/xvdb
> extent-tree.c:2688: btrfs_reserve_extent: Assertion `ret` f
On 09/12/2016 02:39 PM, David Sterba wrote:
> On Fri, Sep 09, 2016 at 11:37:21PM +0200, Hans van Kranenburg wrote:
>> Hi,
>>
>> While trying to enable skinny metadata on a filesystem, I got this error
>> (after minutes of reading from disk by the program):
>>
On 09/12/2016 02:46 PM, Hans van Kranenburg wrote:
> On 09/12/2016 02:39 PM, David Sterba wrote:
>> On Fri, Sep 09, 2016 at 11:37:21PM +0200, Hans van Kranenburg wrote:
>>> Hi,
>>>
>>> While trying to enable skinny metadata on a filesystem, I got this error
>
gger
them, like the excellent commit messages of Filipe in the commits
mentioned above. This helps with setting up and maintaining the bug page,
and helps advanced users decide whether they're hitting the edge case
with their usage pattern.
I'd like to help with creating/maintaining this bug overv
ept in the IRC-hive-mind yet it still needs
> some other way to actually make it appear on wiki. Edit with courage!
Oh, right there at the end, I expected: Join #btrfs on freenode IRC! :-D
greeted with hours of filesystem downtime during mount?
What should the workflow be now, for a user who wants to use FST
and any btrfs-progs r/w functionality?
1) umount
2) mount with clear_cache,space_cache=v1
3) use progs, e.g. btrfstune
4) mount with clear_cache,space_cache
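A sketch of those steps as shell commands, assuming (hypothetically) the filesystem is /dev/xvdb mounted at /mnt, that the progs tool in step 3 is btrfstune, and that step 4 remounts with the v2 (free space tree) cache again; note btrfstune needs the filesystem unmounted, hence the extra umount:

```shell
# 1) unmount the filesystem
umount /mnt
# 2) mount once with the v1 space cache, dropping the free space tree
mount -o clear_cache,space_cache=v1 /dev/xvdb /mnt
umount /mnt
# 3) now the read/write progs functionality can be used, e.g. btrfstune
btrfstune -x /dev/xvdb
# 4) mount again, rebuilding the free space tree
mount -o clear_cache,space_cache=v2 /dev/xvdb /mnt
```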
systems produced broken free space tree
> bitmaps,
> + * and btrfs-progs also used to corrupt the free space tree. If this bit is
> + * clear, then the free space tree cannot be trusted. btrfs-progs can also
> + * intentionally clear this bit to ask the kernel to rebuild the free space
> + * tree.
> +
On 09/26/2016 07:39 PM, Omar Sandoval wrote:
> On Sat, Sep 24, 2016 at 09:50:53PM +0200, Hans van Kranenburg wrote:
>> On 09/23/2016 02:24 AM, Omar Sandoval wrote:
>>> From: Omar Sandoval
>>>
>>> There are two separate issues that can lead to corrupted free
e a file that didn't exist, but what if the referenced file is
> there but contains different data? Are there checks for this sort of
> thing, or is it always assumed that the parent subvols are identical and
> if they're not, you're in undefined behavior land?
btrfs send/
On 10/13/2016 01:47 AM, Sean Greenslade wrote:
> On Thu, Oct 13, 2016 at 01:14:51AM +0200, Hans van Kranenburg wrote:
>> On 10/13/2016 12:29 AM, Sean Greenslade wrote:
>>> And while we're at it, what are the failure modes for incremental sends?
>>> Will it throw an
this!"
Also, if you're using IRC, #btrfs on freenode is a good place to hang out.
Have fun,
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi,
On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>
> cp --reflink=always takes sometimes very long. (i.e. 25-35 minutes)
>
> An example:
>
> source file:
> # ls -la vm-279-disk-1.img
> -rw-r--r-- 1 root root 204010946560 Oct 14 12:15 vm-279-disk-1.img
>
> target file after aroun
On 10/16/2016 08:54 PM, Stefan Priebe - Profihost AG wrote:
> Am 16.10.2016 um 00:37 schrieb Hans van Kranenburg:
>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>
>>> cp --reflink=always takes sometimes very long. (i.e. 25-35 minutes)
>>>
On 10/16/2016 09:48 PM, Hans van Kranenburg wrote:
> On 10/16/2016 08:54 PM, Stefan Priebe - Profihost AG wrote:
>> Am 16.10.2016 um 00:37 schrieb Hans van Kranenburg:
>>> On 10/15/2016 10:49 PM, Stefan Priebe - Profihost AG wrote:
>>>>
>>>> cp --reflink=
5 parent_uuid - path bar
ID 3488 gen 3370 parent 5 top level 5 parent_uuid
8de7ab74-4654-e542-a29b-169848ee73b3 path bar-snap
and there's the parent_uuid...
On 11/13/18 4:03 PM, David Sterba wrote:
> On Thu, Oct 11, 2018 at 07:40:22PM +0000, Hans van Kranenburg wrote:
>> On 10/11/2018 05:13 PM, David Sterba wrote:
>>> On Thu, Oct 04, 2018 at 11:24:37PM +0200, Hans van Kranenburg wrote:
>>>> This patch set contains an addi
're
>> + * dropping it. It is unsafe to mess with the fs tree while it's being
>> + * dropped as we unlock the root node and parent nodes as we walk down
>> + * the tree, assuming nothing will change. If something does change
>> + * then we'll have stale information and drop references to blocks we've
>> + * already dropped.
>> + */
>> +set_bit(BTRFS_ROOT_DELETING, &root->state);
>> if (btrfs_disk_key_objectid(&root_item->drop_progress) == 0) {
>> level = btrfs_header_level(root->node);
>> path->nodes[level] = btrfs_lock_root_node(root);
>>
>
The time it would take once option 2 above is implemented should be
very similar to just reading the chunk tree (remove the block group
lookup from bg_via_chunks and run that).
Now what's still missing is changing the bg_via_chunks one to start
kicking off the block group searc
struct btrfs_root *root;
[...]
+ if (btrfs_fs_incompat(fs_info, BG_TREE))
+ root = fs_info->bg_root;
+ else
+ root = fs_info->extent_root;
...but creating a new different struct and key type would cause much
more invasive code changes and duplication (and bugs) all over the
place, or wrappers to handle either scenario.
I mean, who cares about some unused chunk_objectid field on a multi-TiB
filesystem...
I'd vote for getting things done, and for more "design for today".
Otherwise the same thing might happen that happens with some other
topics every time... it ends up with the idea of rewriting half of
btrfs, and then in the end nothing happens at all, and the users are
still unhappy. ;-)
Even if the extent tree is ever split into multiple trees, it would
still be a good idea to have this BG_TREE.
31146901504 used 31146905600
>>> warning, bad space info total_bytes 32220643328 used 32220647424
>
> I'm not sure what this means.
> I thought it's free space cache, but code doesn't prove that.
I was thinking about a 'space', which, in btrfs, is the collection of
all block groups that have a specific combination of type/profile, like
'Data, RAID1'.
Note that the difference between the two numbers in every line seems to
be 4096 bytes.
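The 4096-byte difference is easy to verify from the two warning lines quoted above (numbers copied verbatim; plain arithmetic, no btrfs involved):

```python
# (total_bytes, used) pairs from the two "bad space info" warning lines
spaces = [
    (31146901504, 31146905600),
    (32220643328, 32220647424),
]
for total_bytes, used in spaces:
    # in both lines, 'used' exceeds 'total_bytes' by exactly one 4 KiB block
    print(used - total_bytes)  # → 4096
```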
> [...]
Hi Sasha,
On 1/8/19 8:25 PM, Sasha Levin wrote:
> From: Hans van Kranenburg
>
> [ Upstream commit baf92114c7e6dd6124aa3d506e4bc4b694da3bc3 ]
>
> Commit 92e222df7b "btrfs: alloc_chunk: fix DUP stripe size handling"
> fixed calculating the stripe_size for a new DUP ch
mentation.
I will move the current work on tutorial-style documentation as
referenced above into the Sphinx stuff, so I can cross-reference
everything. No ETA, since this is just my hobby project. ;]
Have fun, and don't hesitate to talk to me on IRC (Knorrie) or email me
if you run into problems using all of this. o/
is still also a use case.
>> or even through filesystem corruption
>> (which I experienced).
>>
>
> And if corruption happened after applying changes? End result in the
> same. Of course it would be perfect if btrfs could notice and warn
> you, I just do not see how it can realistically be implemented.
>
On 1/23/19 3:37 PM, Sasha Levin wrote:
> On Tue, Jan 08, 2019 at 11:52:02PM +0000, Hans van Kranenburg wrote:
>> Hi Sasha,
>>
>> On 1/8/19 8:25 PM, Sasha Levin wrote:
>>> From: Hans van Kranenburg
>>>
>>> [ Upstream commit baf92114c7e6dd6124aa
On 1/23/19 4:32 PM, Nikolay Borisov wrote:
>
>
> On 23.01.19 г. 17:25 ч., Hans van Kranenburg wrote:
>> On 1/23/19 12:25 PM, Andrei Borzenkov wrote:
>>> On Wed, Jan 23, 2019 at 1:45 PM Dennis Katsonis
>>> wrote:
>>>> I think my previous e-
On 1/23/19 4:40 PM, Remi Gauvin wrote:
> On 2019-01-23 10:25 a.m., Hans van Kranenburg wrote:
>
>>
>> But then only disallow if the subvol has a value in the received_uuid
>> field, I'd say.
>>
>
> That would only solve half the self-harm. User can sti
(Add to Cc: Ben Hutchings)
On 1/23/19 7:18 PM, Sasha Levin wrote:
> On Wed, Jan 23, 2019 at 03:54:00PM +0000, Hans van Kranenburg wrote:
>> On 1/23/19 3:37 PM, Sasha Levin wrote:
>>> On Tue, Jan 08, 2019 at 11:52:02PM +0000, Hans van Kranenburg wrote:
>>>> Hi Sash
print("Searching from {} to {}".format(min_key, max_key))
for header, data in btrfs.ioctl.search_v2(fs.fd, CSUM_TREE_OBJECTID,
                                          min_key, max_key):
    print(Key(header.objectid, header.type, header.offset))
-# ./show_csum_keys.py
Searching from (EXTENT_CSUM EXTENT_CSUM 0) to (EXTENT_CSUM EXTENT_CSUM -1)
(EXTENT_CSUM EXTENT_CSUM 5700059136)
(EXTENT_CSUM EXTENT_CSUM 5700321280)
(EXTENT_CSUM EXTENT_CSUM 5700583424)
(EXTENT_CSUM EXTENT_CSUM 5700845568)
(EXTENT_CSUM EXTENT_CSUM 5701107712)
(EXTENT_CSUM EXTENT_CSUM 5704646656)
(EXTENT_CSUM EXTENT_CSUM 5705039872)
(EXTENT_CSUM EXTENT_CSUM 5706350592)
[...]
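As an aside, the offsets of the first few keys in that output are evenly spaced; a quick check with the values copied from above (whether one csum item really covers exactly that range depends on csum and leaf size, so the comment is only an observation about the gaps):

```python
# EXTENT_CSUM key offsets from the first lines of the output above
offsets = [5700059136, 5700321280, 5700583424, 5700845568, 5701107712]
gaps = [b - a for a, b in zip(offsets, offsets[1:])]
print(gaps)  # 262144 bytes, i.e. 256 KiB between consecutive keys
```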
On 1/25/19 9:45 PM, Tobias Reinhard wrote:
> Am 25.01.2019 um 19:05 schrieb Hans van Kranenburg:
>> On 1/25/19 5:59 PM, Tobias Reinhard wrote:
>>> Am 13.01.2019 um 12:02 schrieb Qu Wenruo:
>>>> On 2019/1/13 6:19 PM, Tobias Reinhard wrote:
>>>>> Hi,
>
Make sorting on virtual address actually produce correct output
for RAID0, RAID10, RAID5 and RAID6.
* Change license to MIT (Expat)
estion: If I use btrfs everywhere,
can I run dm-integrity without a journal?
As far as I can reason about it... I could. As long as there's no
'nocow' happening, the only thing that needs to happen correctly is
superblock writes, right?
On 1/30/19 4:26 PM, Christoph Anton Mitterer wrote:
> On Wed, 2019-01-30 at 07:58 -0500, Austin S. Hemmelgarn wrote:
>> Running dm-integrity without a journal is roughly equivalent to
>> using
>> the nobarrier mount option (the journal is used to provide the same
>> guarantees that barriers do).
On 1/30/19 5:38 PM, Hans van Kranenburg wrote:
> On 1/30/19 4:26 PM, Christoph Anton Mitterer wrote:
>> On Wed, 2019-01-30 at 07:58 -0500, Austin S. Hemmelgarn wrote:
>>> Running dm-integrity without a journal is roughly equivalent to
>>> using
>>> the nobarri
"single" block groups left (using btrfs
balance), which might have been created for new writes when the
filesystem was running in degraded mode.
On 2/13/19 3:26 PM, Johannes Thumshirn wrote:
> We recently had a customer issue with a corrupted filesystem. When trying
> to mount this image btrfs panicked with a division by zero in
> calc_stripe_length().
>
> The corrupt chunk had a 'num_stripes' value of 1. calc_stripe_length()
> takes this
On 2/13/19 3:37 PM, Johannes Thumshirn wrote:
> On 13/02/2019 15:32, Hans van Kranenburg wrote:
> [...]
>
>>> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
>>> index 03f223aa7194..b40cc7c830f4 100644
>>> --- a/fs/btrfs/volumes.c
>>> +++ b/fs/btr
+--
> fs/f2fs/super.c| 2 +-
> fs/utimes.c| 86 +-
> fs/xfs/libxfs/xfs_format.h | 2 +-
> fs/xfs/libxfs/xfs_log_format.h | 2 +-
> fs/xfs/xfs_iops.c | 11 -
> fs/xfs/xfs_super.c | 2 +-
> include/linux/fs.h | 4 ++
> include/uapi/linux/fcntl.h | 2 +
> 14 files changed, 111 insertions(+), 48 deletions(-)
>
On 2/15/19 6:39 AM, Omar Sandoval wrote:
> On Fri, Feb 15, 2019 at 01:57:39AM +0000, Hans van Kranenburg wrote:
>> Hi,
>>
>> On 2/14/19 11:00 AM, Omar Sandoval wrote:
>>> From: Omar Sandoval
>>>
>>> Since statx was added in 4.11, userspace has had a
Hi,
On 6/25/19 6:09 PM, David Sterba wrote:
> Hi,
>
> I'd like to get some feedback on the json output, the overall structure
> of the information and naming.
>
> Code: git://github.com/kdave/btrfs-progs.git preview-json
>
> The one example command that supports it is
>
> $ ./btrfs --format j
On 6/26/19 6:47 PM, David Sterba wrote:
> On Tue, Jun 25, 2019 at 08:07:25PM +0200, Hans van Kranenburg wrote:
>> Hi,
>>
>> On 6/25/19 6:09 PM, David Sterba wrote:
>>> Hi,
>>>
>>> I'd like to get some feedback on the json output, th
Hi,
On 7/17/19 1:24 AM, Ulli Horlacher wrote:
>
> I thought, I can recognize a snapshot when it has a Parent UUID, but this
> is not true for snapshots of toplevel subvolumes:
>
> root@trulla:/# btrfs version
> btrfs-progs v4.5.3+20160729
>
> root@trulla:/# btrfs subvolume show /mnt/tmp
> /mnt
Hi,
I was just looking at btrfs property and what it can do.
Now, I notice that the man page contains:
label: label of device
When I look at a device and ask what properties I can set, I see:
-# btrfs property list -t device /dev/xvdb
label Set/get label of device.
But, when I
Hi,
On 8/2/19 2:54 PM, Anand Jain wrote:
>
> So at both, btrfs fi label and btrfs prop set the label works on the
> mount-point or the device path if its unmounted. And even if the device
> path is used the label is for the whole filesystem.
Aha, clear.
Thanks!
Hans
From: Hans van Kranenburg
Recently, commit c9da5695b2 improved the description for the label
property, to clarify it's a filesystem property, and not a device
property. Follow this change in the man page for btrfs-property.
Also add a little hint about what to specify as object.
Signed-o
From: Hans van Kranenburg
In commit c11d2c236cc26 the get_dev_stats ioctl was added.
Shortly thereafter, in commit b27f7c0c150f7, the flags field was added.
However, the calculation for unused padding space was not updated, which
also invalidated the comment.
Clarify what happened to reduce
Hi,
When climbing some metadata trees for fun, I ran into a set of
suspicious otime values on inode objects.
I found that a bunch of inodes have values for the seconds.nseconds
fields that are either 422212465065984.0 or even much higher values like
16811597680319950858.1387412042.
So, I wrote a
On 8/5/19 12:20 PM, Holger Hoffstätte wrote:
> On 8/2/19 6:10 PM, Josef Bacik wrote:
>> In testing block group removal it's sometimes handy to be able to create
>> block groups on demand. Add an ioctl to allow us to force allocation
>> from userspace.
>
> Gave this a try and it works as advertise
On 8/5/19 12:56 PM, Holger Hoffstätte wrote:
> On 8/5/19 12:31 PM, Hans van Kranenburg wrote:
>> On 8/5/19 12:20 PM, Holger Hoffstätte wrote:
>>> On 8/2/19 6:10 PM, Josef Bacik wrote:
>>>> In testing block group removal it's sometimes handy to be able to
>
On 8/27/19 11:14 AM, Swâmi Petaramesh wrote:
> On 8/27/19 8:52 AM, Qu Wenruo wrote:
>>> or to use the V2 space
>>> cache generally speaking, on any machine that I use (I had understood it
>>> was useful only on multi-TB filesystems...)
>> 10GiB is enough to create large enough block groups to utili
On 9/30/19 12:23 PM, Andrey Ivanov wrote:
>
> # /home/andrey/devel/src/btrfs/btrfs-progs-dirty_fix/btrfs-corrupt-block -X
> /dev/sdc1
> key (613019873280 EXTENT_ITEM 1048576)slot end outside of leaf 1073755934 >
> 16283
> Open ctree failed
>
> [...]
bin(1073755934)
'0b1110111000
n up that code. Especially the whole subvolume
list part looks like a great improvement.
>> These patches are also available on my GitHub:
>> https://github.com/osandov/btrfs-progs/tree/libbtrfsutil. That branch
>> will rebase as I update this series.
>>
>> Please share fe
0, &iter);
> +
> [...]
When you have enough subvolumes in a filesystem, let's say 10 (yes,
that sometimes happens), the current btrfs sub list is quite unusable,
which is kind of expected. But, currently, sub show is also unusable
because it still starts loading a list of all
et/lists/linux-btrfs/msg69752.html
Fixes: 73c5de0051 ("btrfs: quasi-round-robin for chunk allocation")
Fixes: 86db25785a ("Btrfs: fix max chunk size on raid5/6")
Signed-off-by: Hans van Kranenburg
Cc: Naohiro Aota
Cc: Arne Jansen
Cc: Chris Mason
---
fs/btrfs/volumes.c | 4 +---
s into the cleaner.
* is it using 100% cpu?
* is it showing 100% disk read I/O utilization?
* is it showing 100% disk write I/O utilization? (is it writing lots and
lots of data to disk?)
Since you could be looking at any combination of answers on all those
things, there's not much specific t
ots prior to this, and after about
> 60s when the pain subsided only about 14 remained, so I estimate 10 were
> deleted as part of snapper's cleaning algorithm. I quickly also ran
> dstat during the slow-down, and after 5s it finally started and reported
> only about 3-6MB/s in
'm done. If I
> have time to write-up my findings for #3 I will similarly share that.
>
> Thanks to all for your input on this issue.
Have fun!
locks, you may want to mount all the btrfs with nocow by default.
>> This way the quotas would be more accurate (no fragmentation _between_
>> snapshots) and you'll have some decent performance with snapshots.
>> If that is all you care.
>
> CoW is still valuable for us
On 02/12/2018 03:45 PM, Ellis H. Wilson III wrote:
> On 02/11/2018 01:03 PM, Hans van Kranenburg wrote:
>>> 3. I need to look at the code to understand the interplay between
>>> qgroups, snapshots, and foreground I/O performance as there isn't
>>> existing archit
On 02/14/2018 03:49 PM, David Sterba wrote:
> On Mon, Feb 05, 2018 at 05:45:11PM +0100, Hans van Kranenburg wrote:
>> In case of using DUP, we search for enough unallocated disk space on a
>> device to hold two stripes.
>>
>> The devices_info[ndevs-1].max_avail that hold
TREE16.00KiB 0( 1)
>
> Thanks,
> Qu
>
>>
>>
>>>
>>> Note that I'm not sensitive to multi-second mount delays. I am
>>> sensitive to multi-minute mount delays, hence why I'm bringing this up.
>>>
>>> FWIW: I am c
all
>> unmodified.
>
> Good to know, thanks!
>
>>> Snapshot creation
>>> and deletion both operate at between 0.25s to 0.5s.
>>
>> IIRC snapshot deletion is delayed, so the real work doesn't happen when
>> "btrfs sub del" returns.
>
> I was using btrfs sub del -C for the deletions, so I believe (if that
> command truly waits for the subvolume to be utterly gone) it captures
> the entirety of the snapshot.
>
> Best,
>
> ellis
have an
>>> impact on how data is organized on-disk (which is ultimately what causes
>>> the issues), so they will have a lingering effect if you don't balance
>>> everything.
>>
>> According to the wiki, 4.14 does indeed have the ssd changes.
>>
>
subthreads in this
thread) I also can't find in the threads which command "the balance" means.
And what does this tell you?
https://github.com/knorrie/python-btrfs/blob/develop/examples/show_free_space_fragmentation.py
Just to make sure you're not pointlessly shovelling data a
On 02/21/2018 04:19 PM, Ellis H. Wilson III wrote:
> On 02/21/2018 10:03 AM, Hans van Kranenburg wrote:
>> On 02/21/2018 03:49 PM, Ellis H. Wilson III wrote:
>>> On 02/20/2018 08:49 PM, Qu Wenruo wrote:
>>>> My suggestion is to use balance to reduce number of block gr
sd,
but doesn't show it.
I personally don't like any of them at all, and I should really finish
and send my proposal to get them replaced by options that can choose the
extent allocator for data and metadata individually (instead of one
setting that changes them both at the same time)
Note that the receiver says
> "parent_uuid=014fc004-ae04-0148-9525-1bf556fd4d10". Not really sure
> where that comes from, but disk B has the same, so maybe that's the
> UUID of the original snapshot on disk A?
>
> Is it possible to continue to send incremental snapshots between these
> two file systems, or must I do a full sync?
On 05/03/2018 20:47, Marc MERLIN wrote:
> On Mon, Mar 05, 2018 at 10:38:16PM +0300, Andrei Borzenkov wrote:
>>> If I absolutely know that the data is the same on both sides, how do I
>>> either
>>> 1) force back in a 'Received UUID' value on the destination
>>
>> I suppose the most simple is to wri
k traces of the
kernel thread.
watch 'cat /proc/<pid>/stack'
where <pid> is the pid of the btrfs-transaction process.
In there, you will see a pattern of recurring things, like: it's
searching for free space, it's writing out free space cache, or other
things. Correlate this with the dis
's writing X MB/s to disk?
2) How big is this filesystem? What does your `btrfs fi df /mountpoint` say?
3) What kind of workload are you running? E.g. how can you describe it
within a range from "big files which just sit there" to "small writes
and deletes all over the place all the ti
nerate the tree again on next mount.
Additional tips (forgot to ask for your /proc/mounts before):
* Use the noatime mount option, so that merely accessing files does not
lead to changes in metadata, which lead to writes, which lead to cowing
and writes in a new place, which lead to updates of the free space
administration, etc...
*especially* with btrfs because the allocator has to work really hard to find
> free space for COWing. Really consider deleting stuff or adding more space.
>> goto error_bdev_put;
>> +}
>> + disk_super = (struct btrfs_super_block *) sb_bh->b_data;
>>
>> devid = btrfs_stack_device_id(&disk_super->dev_item);
>> transid = btrfs_super_generation(disk_super);
>> @@ -1413,7 +1417,7 @@ int btrfs_scan_one_device(const char *path, fmode_t
>> flags, void *holder,
>> if (!ret && fs_devices_ret)
>> (*fs_devices_ret)->total_devices = total_devices;
>>
>> -btrfs_release_disk_super(page);
>> +brelse(sb_bh);
>>
>> error_bdev_put:
>> blkdev_put(bdev, flags);
>>
>
/block/sdb/queue/rotational
>> 0
>>
>>> I wonder if it's the same old "ssd allocation scheme" problem, and no
>>> balancing done in a long time or at all.
Looks like it. So, yay, you're on 4.14 already. Now just do a full
balance of your entire filesystem, only once (data only, metadata not
needed) and then you can forget about this again.
>> I had something similar happen on a laptop a while ago - took a while
>> before i could get it back in order
>> (in that case i think it was actually a oops --- it kept saying "no
>> space left" and switched to read only even
>> if you removed a lot of data, invalidated the space cache and so on)
, but
>> I'm pretty sure it is the nightly balance.
>>
>> I've run btrfs check on / with no issues recently.
-show-super' failed
Signed-off-by: Hans van Kranenburg
---
Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Makefile b/Makefile
index 30a0ee22..390b138f 100644
--- a/Makefile
+++ b/Makefile
@@ -220,7 +220,7 @@ cmds_restore_cflags =
-DBTRFSRESTORE_ZSTD=$(BTRFSRE
Build would fail because it couldn't find the usage function.
Signed-off-by: Hans van Kranenburg
---
btrfs-calc-size.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/btrfs-calc-size.c b/btrfs-calc-size.c
index 1ac7c785..d2d68ab2 100644
--- a/btrfs-calc-size.c
+++ b/btrfs-calc-s
Alternatively, it might be a better idea to just remove the deprecated source
files, since this is not the first time build failures in them went unnoticed.
Hans van Kranenburg (2):
btrfs-progs: Fix progs_extra build dependencies
btrfs-progs: Fix build of btrfs-calc-size
Makefile
s balance at all?
:)
> Are you using any filters
> whatsoever? The documentation
> [https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-balance] has the
> following warning:
>
> Warning: running balance without filters will take a lot of time as it
> basically rewrites th
works and only annoys you with the
> need of replacing a bad disk every now and then :)
I don't think these kinds of things will ever end up in kernel code.
[0] There's a version in the devel branch in git that also works without
free space tree, taking a slower detour via the extent tr
avior is due to the ssd
> changes.
Most probably, yes.
> Maybe it's due to other patches. Either way, it's an
> interesting and useful change. =:^)
no notion of "SSD vs HDD" modes.
We also had a discussion about the "backup roots" that are stored
beside the superblock, and that they are "better than nothing" to help
maybe recover something from a broken fs, but never ever guarantee you
will get a working filesystem bac
On 01/23/2018 08:51 PM, waxhead wrote:
> Nikolay Borisov wrote:
>> On 23.01.2018 16:20, Hans van Kranenburg wrote:
[...]
>>>
>>> We also had a discussion about the "backup roots" that are stored
>>> besides the superblock, and that they are &
On 01/24/2018 07:54 PM, waxhead wrote:
> Hans van Kranenburg wrote:
>> On 01/23/2018 08:51 PM, waxhead wrote:
>>> Nikolay Borisov wrote:
>>>> On 23.01.2018 16:20, Hans van Kranenburg wrote:
>>
>> [...]
>>
>>>>>
>>>>> W
t it and will probably also
ask you to help testing the fix.
> btrfs-heatmap properly uses the python-exec wrapper and therefore works
> regardless of currently selected default python version. :)
>
> I hope this is useful to someone.
I bet it will. ;-]
found on /dev/mapper/foo--vg-root
> Could not open root, trying backup super
>
> We are pretty sure that no unexpected electric cuts has been happened.
>
> At this point I don't know what information I should supply.
>
pected to differ for the snapshot
> on device B and C, incremental
> backups will not work from A to C without setting received UUID. I have
> seen python-btrfs
> mentioned in a couple of emails; but have anyone of you used it in a
> production environment ?
>
> This i
age looks pretty interesting and
> have something in common with planned priority-aware extent allocator.
Priority-aware allocator? Is someone actually working on that, or is it
planned like everything is 'planned' (i.e. nice idea, and might happen
or might as well not happen ever, SIYH)?
op of that, or single?
At least if you go the mkfs route (I read the other replies) then also
find out what happened. If your storage is losing data in situations
like this while it told btrfs that the data was safe, you're running a
dangerous operation.
used back then have been overwritten
already, even the ones in distant corners of trees. A full check / scrub
/ etc would be needed to find out.
fs_item), 1);
> if (ret < 0) {
> err = ret;
> goto out;
> diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
> index f9dd6d1..0b6c581 100644
> --- a/fs/btrfs/file-item.c
> +++ b/fs/btrfs/file-item.c
> @@ -804,7 +804,7 @@ int btrfs_csum_file_blocks(struct btrfs_trans_handle
> *trans,
>*/
> btrfs_release_path(path);
> ret = btrfs_search_slot(trans, root, &file_key, path,
> - csum_size, 1);
> + csum_size + sizeof(struct btrfs_item), 1);
> if (ret < 0)
> goto fail_unlock;
>
>
On 08/14/2018 07:09 PM, Andrei Borzenkov wrote:
> 14.08.2018 18:16, Hans van Kranenburg wrote:
>> On 08/14/2018 03:00 PM, Dmitrii Tcvetkov wrote:
>>>> Scott E. Blomquist writes:
>>>> > Hi All,
>>>> >
>>>> > [...]
>>>
e it
> for further examination, or does BTRFS handle that on its own?
It's no different than any other data stored in your filesystem.
So when just reading things from the snapshot, or when using the btrfs
scrub functionality, it will tell you if data that is read back matches
the checksums.
ule in your btrbk config, and
set it to never expire older ones. Then, just see what happens, and only
if you start seeing things slow down a lot, start worrying about what to
do, and let us know how far you got.
Have fun,
P.S. Here's an unfinished page from a tutorial I'm writing, still
heavily under construction, which touches on the subject of snapshotting
data and metadata. Maybe it helps to explain "complexity starts when
changing things" a bit more:
https://github.com/knorrie/python-btrfs/blob/tutorial/tutorial/cows.md
others can be changed on each individual mount (like the atime
options), and when omitting them you get the non-optimal default again.