(resending to list, I don't know why but I messed up the reply
directly to Nikolay)

On Tue, Oct 22, 2019 at 11:16 AM Nikolay Borisov <nbori...@suse.com> wrote:
>
> On 22.10.19 г. 12:00 ч., Chris Murphy wrote:
> > Hi,
> >
> > So XFS has these
> >
> > [49621.415203] XFS (loop0): Mounting V5 Filesystem
> > [49621.444458] XFS (loop0): Ending clean mount
> > ...
> > [49621.444458] XFS (loop0): Ending clean mount
> > [49641.459463] XFS (loop0): Unmounting Filesystem
> >
> > It seems to me linguistically those last two should be reversed, but
> > whatever.
> >
> > The Btrfs mount equivalent messages are:
> > [49896.176646] BTRFS: device fsid f7972e8c-b58a-4b95-9f03-1a08bbcb62a7
> > devid 1 transid 5 /dev/loop0
> > [49901.739591] BTRFS info (device loop0): disk space caching is enabled
> > [49901.739595] BTRFS info (device loop0): has skinny extents
> > [49901.767447] BTRFS info (device loop0): enabling ssd optimizations
> > [49901.767851] BTRFS info (device loop0): checking UUID tree
> >
> > So is it true that for sure there is nothing happening after the UUID
> > tree is checked, that the file system is definitely mounted at this
> > point? And always it's the UUID tree being checked that's the last
> > thing that happens? Or is it actually already mounted just prior to
> > disk space caching enabled message, and the subsequent messages are
> > not at all related to the mount process? See? I can't tell.
> >
> > For umount, zero messages at all.
>
> You are doing it wrong.

I'm doing what wrong?

> Those messages are sent from the given subsys to
> the console and printed whenever. You can never rely on the fact that
> those messages won't race with some code.

That possibility is implicit in all of the questions I asked.


> For example the checking UUID tree happens _before_
> btrfs_check_uuid_tree is called and there is no guarantee when it's
> finished.

Are these messages useful for developers? I don't see them as being
useful for users; for them they're largely superfluous.


> > The feature request is something like what XFS does, so that we know
> > exactly when the file system is mounted and unmounted as far as Btrfs
> > code is concerned.
> >
> > I don't know that it needs the start and end of the mount and
> > unmounted (i.e. two messages). I'm mainly interested in having a
> > notification for "mount completed successfully" and "unmount completed
> > successfully". i.e. the end of each process, not the start of each.
>
> mount is a blocking syscall, and the same goes for umount; your
> notifications are when the respective syscalls / system utilities return.

Right. Here is the example bug from 2015, that I just became aware of
as the impetus for posting the request; but I've wanted this explicit
notification for a while.

https://bugzilla.redhat.com/show_bug.cgi?id=1206874#c7

In that example, there's one Btrfs info message at
[    2.727784] localhost.localdomain kernel: BTRFS info (device sda3):
disk space caching is enabled

And yet systemd times out on the mount unit. If the only possible cause
is the mount syscall blocking systemd, then this is a Btrfs, VFS, or
mount related bug (however old it is by now; that only matters
conceptually). But there isn't enough granularity in the kernel messages
to understand why the mount is taking so long. If there were a Btrfs
"mount succeeded" message, we'd know whether the Btrfs portion of the
mount process completed successfully, and perhaps have a better idea
where the hang is happening.

On Tue, Oct 22, 2019 at 11:16 AM Nikolay Borisov <nbori...@suse.com> wrote:
>
> [snip; quoted above]
>
> > In particular the unmount notice is somewhat important because as far
> > as I know there's no Btrfs dirty flag from which to infer whether it
> > was really unmounted cleanly. But I'm also not sure what the insertion
> > point for these messages would be. Looking at the mount code in
> > particular, it's a little complicated. And maybe with some of the
> > sanity checking and debug options it could get more complicated, and
> > wouldn't want to conflict with that - or any multiple device use case
> > either.



-- 
Chris Murphy
