Re: Activating space_cache after read-only snapshots without space_cache have been taken

2013-04-16 Thread Sander
Liu Bo wrote (ao):
 On Tue, Apr 16, 2013 at 02:28:51AM +0200, Ochi wrote:
  The situation is the following: I have created a backup-volume to
  which I regularly rsync a backup of my system into a subvolume.
  After rsync'ing, I take a _read-only_ snapshot of that subvolume
  with a timestamp added to its name.
  
  Now at the time I started using this backup volume, I was _not_
  using the space_cache mount option and two read-only snapshots were
  taken during this time. Then I started using the space_cache option
  and continued doing snapshots.
  
  A bit later, I started having very long lags when unmounting the
  backup volume (both during shutdown and when unmounting manually). I
  scrubbed and fsck'd the volume but this didn't show any errors.
  Defragmenting the root and subvolumes took a long time but didn't
  improve the situation much.
 
 So are you using '-o nospace_cache' when creating two RO snapshots?

No, he first created two ro snapshots, then (some time later) mounted
with space_cache, and then continued to take ro snapshots.

  Now I started having the suspicion that maybe the space cache
  possibly couldn't be written to disk for the readonly
  subvolumes/snapshots that were created during the time when I wasn't
  using the space_cache option, forcing the cache to be rebuilt every
  time.
  
  Clearing the cache didn't help. But when I deleted the two snapshots
  that I think were taken during the time without the mount option,
  the unmounting time seems to have improved considerably.
 
 I don't know why this happens, but maybe you can observe the umount
 process's very slow behaviour by using 'cat /proc/{umount-pid}/stack'
 or 'perf top'.

AFAIUI the problem is not there anymore, but this is a good tip for the
future.

Sander
--
To unsubscribe from this list: send the line unsubscribe linux-btrfs in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Activating space_cache after read-only snapshots without space_cache have been taken

2013-04-16 Thread Ochi

On 04/16/2013 10:10 AM, Sander wrote:
 Liu Bo wrote (ao):
  On Tue, Apr 16, 2013 at 02:28:51AM +0200, Ochi wrote:
   The situation is the following: I have created a backup-volume to
   which I regularly rsync a backup of my system into a subvolume.
   After rsync'ing, I take a _read-only_ snapshot of that subvolume
   with a timestamp added to its name.

   Now at the time I started using this backup volume, I was _not_
   using the space_cache mount option and two read-only snapshots were
   taken during this time. Then I started using the space_cache option
   and continued doing snapshots.

   A bit later, I started having very long lags when unmounting the
   backup volume (both during shutdown and when unmounting manually). I
   scrubbed and fsck'd the volume but this didn't show any errors.
   Defragmenting the root and subvolumes took a long time but didn't
   improve the situation much.

  So are you using '-o nospace_cache' when creating two RO snapshots?

 No, he first created two ro snapshots, then (some time later) mounted
 with space_cache, and then continued to take ro snapshots.

I need to clarify this: The NOspace_cache option was never used, I just 
didn't explicitly activate space_cache in the beginning. However, I was 
not aware that space_cache is the default anyway (at least on Arch, 
which is the distro I'm using). I reviewed old system logs and it 
actually looks like space caching was always in use right from the 
beginning, even when I didn't explicitly pass the space_cache mount 
option. So I guess this wasn't the problem after all :\
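For reference, this kind of thing can also be checked without digging
through old logs. A sketch (the mountpoint is hypothetical): findmnt
prints the mount options actually in effect, and btrfs writes a line to
the kernel log when the free-space cache is in use.

```shell
# Check whether the free-space cache is actually in use.
findmnt -no OPTIONS /mnt/backup      # effective mount options of the volume
dmesg | grep -i 'space caching'      # btrfs logs e.g. "disk space caching is enabled"
```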



   Now I started having the suspicion that maybe the space cache
   possibly couldn't be written to disk for the readonly
   subvolumes/snapshots that were created during the time when I wasn't
   using the space_cache option, forcing the cache to be rebuilt every
   time.

   Clearing the cache didn't help. But when I deleted the two snapshots
   that I think were taken during the time without the mount option,
   the unmounting time seems to have improved considerably.

  I don't know why this happens, but maybe you can observe the umount
  process's very slow behaviour by using 'cat /proc/{umount-pid}/stack'
  or 'perf top'.

 AFAIUI the problem is not there anymore, but this is a good tip for the
 future.

 Sander


That's correct, the problem vanished after deleting the two oldest 
snapshots. Mounting and unmounting is reasonably fast now. I will just 
continue to use the volume normally (i.e. making regular backups and 
snapshots) and report back if the problem appears again.


Just for the record: The btrfs volume and the first snapshots were 
originally created under kernel 3.7.10. I then updated to 3.8.3. I don't 
know if this information is useful - just in case... :)


Thanks,
Sebastian



Activating space_cache after read-only snapshots without space_cache have been taken

2013-04-15 Thread Ochi

Hello everyone,

I've run into problems with _very_ slow unmounting of my btrfs-formatted 
backup volume. I have a suspicion about what might be the cause, but maybe 
someone with more experience with the btrfs code could enlighten me as to 
whether it is actually correct.


The situation is the following: I have created a backup-volume to which 
I regularly rsync a backup of my system into a subvolume. After 
rsync'ing, I take a _read-only_ snapshot of that subvolume with a 
timestamp added to its name.
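The workflow might look roughly like the following shell sketch; all
paths and the snapshot naming scheme here are assumptions, not taken
from the actual setup.

```shell
#!/bin/sh
# Sketch of the backup workflow: rsync into a subvolume, then take a
# read-only, timestamped snapshot of it. Paths are hypothetical.
SRC=/                         # system root to back up
VOL=/mnt/backup               # btrfs backup volume
SUBVOL="$VOL/current"         # subvolume that rsync writes into

rsync -aHAX --delete \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt \
    "$SRC" "$SUBVOL/"

# Read-only snapshot with a timestamp added to its name (-r = read-only).
STAMP=$(date +%Y-%m-%d_%H%M%S)
btrfs subvolume snapshot -r "$SUBVOL" "$VOL/snap-$STAMP"
```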


Now at the time I started using this backup volume, I was _not_ using 
the space_cache mount option and two read-only snapshots were taken 
during this time. Then I started using the space_cache option and 
continued doing snapshots.


A bit later, I started having very long lags when unmounting the backup 
volume (both during shutdown and when unmounting manually). I scrubbed 
and fsck'd the volume but this didn't show any errors. Defragmenting the 
root and subvolumes took a long time but didn't improve the situation much.
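The scrub and fsck steps would have been something along these lines
(device name assumed; in 2013-era btrfs-progs the offline checker was
still called btrfsck).

```shell
btrfs scrub start -B /mnt/backup   # -B: run in foreground, report when done
umount /mnt/backup                 # the offline check needs the volume unmounted
btrfsck /dev/sdX1                  # offline consistency check ('btrfs check' in newer tools)
```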


Now I started having the suspicion that maybe the space cache possibly 
couldn't be written to disk for the readonly subvolumes/snapshots that 
were created during the time when I wasn't using the space_cache option, 
forcing the cache to be rebuilt every time.


Clearing the cache didn't help. But when I deleted the two snapshots 
that I think were taken during the time without the mount option, the 
unmounting time seems to have improved considerably.
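"Clearing the cache" here presumably means a one-time mount with the
clear_cache option, which discards and rebuilds the free-space cache; a
sketch, with device and mountpoint names as assumptions:

```shell
# One-time rebuild of the free-space cache. clear_cache is only needed
# once; subsequent mounts can go back to the normal options.
mount -o clear_cache /dev/sdX1 /mnt/backup
sync                       # let btrfs commit the rewritten cache
umount /mnt/backup
mount -o space_cache /dev/sdX1 /mnt/backup
```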


I will have to observe whether unmounting stays quick now, but my 
question is whether it is possible that the read-only snapshots taken 
during the time when I wasn't using space_cache might actually have been 
the culprits.


Best,
Sebastian


Re: Activating space_cache after read-only snapshots without space_cache have been taken

2013-04-15 Thread Liu Bo
On Tue, Apr 16, 2013 at 02:28:51AM +0200, Ochi wrote:
 Hello everyone,
 
 I've run into problems with _very_ slow unmounting of my
 btrfs-formatted backup volume. I have a suspicion about what might be
 the cause, but maybe someone with more experience with the btrfs code
 could enlighten me as to whether it is actually correct.
 
 The situation is the following: I have created a backup-volume to
 which I regularly rsync a backup of my system into a subvolume.
 After rsync'ing, I take a _read-only_ snapshot of that subvolume
 with a timestamp added to its name.
 
 Now at the time I started using this backup volume, I was _not_
 using the space_cache mount option and two read-only snapshots were
 taken during this time. Then I started using the space_cache option
 and continued doing snapshots.
 
 A bit later, I started having very long lags when unmounting the
 backup volume (both during shutdown and when unmounting manually). I
 scrubbed and fsck'd the volume but this didn't show any errors.
 Defragmenting the root and subvolumes took a long time but didn't
 improve the situation much.

So are you using '-o nospace_cache' when creating two RO snapshots?

 
 Now I started having the suspicion that maybe the space cache
 possibly couldn't be written to disk for the readonly
 subvolumes/snapshots that were created during the time when I wasn't
 using the space_cache option, forcing the cache to be rebuilt every
 time.
 
 Clearing the cache didn't help. But when I deleted the two snapshots
 that I think were taken during the time without the mount option,
 the unmounting time seems to have improved considerably.

I don't know why this happens, but maybe you can observe the umount process's
very slow behaviour by using 'cat /proc/{umount-pid}/stack' or 'perf top'.
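That sampling could be scripted like this (mountpoint hypothetical;
reading /proc/<pid>/stack needs root):

```shell
#!/bin/sh
# Start the slow umount in the background, then repeatedly sample the
# kernel stack of the umount process to see where it blocks.
umount /mnt/backup &
UMOUNT_PID=$!

while kill -0 "$UMOUNT_PID" 2>/dev/null; do
    cat "/proc/$UMOUNT_PID/stack"    # kernel call chain of the blocked task
    sleep 1
done
```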

thanks,
liubo

 
 I will have to observe whether unmounting stays quick now but my
 question is whether it is possible that the read-only snapshots
 taken during the time when I wasn't using space_cache might actually
 have been the culprits.
 
 Best,
 Sebastian