On 16 May 2016 at 07:36, Austin S. Hemmelgarn <[email protected]> wrote:
> On 2016-05-16 02:20, Qu Wenruo wrote:
>>
>>
>>
>> Duncan wrote on 2016/05/16 05:59 +0000:
>>>
>>> Qu Wenruo posted on Mon, 16 May 2016 10:24:23 +0800 as excerpted:
>>>
>>>> IIRC clear_cache option is fs level option.
>>>> So the first mount with clear_cache, then all subvolume will have
>>>> clear_cache.
>>>
>>>
>>> Question:  Does clear_cache work with a read-only mount?
>>
>> Good question.
>>
>> But easy to check.
>> I just checked, and even though that's possible, it doesn't work.

+1  I had to use my USB flash rescue disk to mount with clear_cache.
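
For anyone following along, that was just something like this from the
rescue environment (device name and mount point here are placeholders):

  mount -o clear_cache /dev/sdX1 /mnt
  # whatever cache rebuilding clear_cache does happens while mounted;
  # then unmount and reboot into the normal system
  umount /mnt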

>> Free space cache inode bytenr doesn't change and no generation change.
>> While without ro, it did rebuild free space cache for *SOME* chunks, not
>> *ALL* chunks.
>>
>> And that's the problem I'm just chasing today.

+1  Unfortunately, it didn't fix the affected free space cache files.

>> Short conclusion: the clear_cache mount option only rebuilds the
>> free space cache for chunks we allocated space from while mounted
>> with clear_cache.
>> (Maybe I'm just out of date and some other devs already know that)

Does this mean that creating a huge file filled with zeros while
mounted with clear_cache would solve this?  I think that would be
faster than a full rebalance, but I'm not convinced it would work,
because of #1 (see below).
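
Roughly what I had in mind, assuming the filesystem is already mounted
with clear_cache (the file name is made up, and I haven't tested
whether this actually touches every chunk):

  # fill the filesystem so new allocations hit as many chunks as possible
  dd if=/dev/zero of=/btrfs-admin/zerofill bs=1M status=progress
  sync
  # then give the space back
  rm /btrfs-admin/zerofill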

One of the following might have caused this situation:

1) I created a new subvolume and tested a full restore from
(non-btrfs-aware) backups into it; some time later, verification
(rsync -c with other options) completed and the backup was confirmed
to be usable.  I then deleted the subvolume and watched the cleaner
get to work and reduce the allocated space, recalling a time when a
balance was needed for that.  If this is what caused it, could there
be a bug in the cleaner and/or the cleaner<->space_cache interaction?
(Roughly what I did is sketched after #2 below.)

2) The same week I reorganised several hundred gigabytes of short-term
backups, moving them from one subvolume to another.  I used cp -ar
--reflink=always from within /btrfs-admin, which is where I mount the
whole volume (because / is subvol=rootfs).  After the copy I removed
the source, then used the checksums I keep in my short-term backup
directory to verify everything was OK (I could have restored from a
fresh long-term backup if this had failed).  As expected, everything
was OK.  If this is what caused the free space cache bug, could there
be a bug in the inter-subvolume reflink code?
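
For reference, the two sequences were roughly the following (paths,
subvolume names, rsync flags and the checksum tool shown here are
placeholders, not the exact ones I used):

  # 1) scratch restore test, verify, then delete the subvolume
  btrfs subvolume create /btrfs-admin/restore-test
  # ...restore from the non-btrfs-aware backups into it...
  rsync -acni /backups/latest/ /btrfs-admin/restore-test/  # checksum compare, dry run
  btrfs subvolume delete /btrfs-admin/restore-test
  # the cleaner then reclaims the space in the background

  # 2) reflink copy between subvolumes, remove the source, verify
  cp -ar --reflink=always /btrfs-admin/oldsubvol/backups /btrfs-admin/newsubvol/
  rm -rf /btrfs-admin/oldsubvol/backups
  (cd /btrfs-admin/newsubvol/backups && sha256sum -c CHECKSUMS)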

>> That behavior makes things a little confusing: users may keep
>> hitting the annoying free space cache warning from the kernel even
>> when they try to use the clear_cache mount option.

So this warning is totally harmless and will not cause future problems?

>> Anyway, I'll add the ability to manually wipe out all (or a given)
>> free space cache to btrfsck, at least creating a way to really
>> rebuild all of the v1 free space cache.

Nice!  Am I correct in understanding that this is as safe as a full
balance, and not dangerous like btrfs check --repair?  Is it likely
that the v2 free space cache will be the default by linux-4.10?

> FWIW, I think it's possible to do this by mounting with clear_cache and then
> running a full balance on the filesystem.  Having an option to do this on an
> unmounted FS would be preferred of course, as that would almost certainly
> be more efficient on any reasonably sized filesystem.

I'm running a balance now, like it's 2015! ;-)
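
For the record, by "a balance" I mean an unfiltered balance of the
whole mounted filesystem, i.e. something like:

  btrfs balance start /btrfs-admin
  # and to keep an eye on it:
  btrfs balance status /btrfs-admin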

Thank you,
Nicholas