On Monday, June 16, 2014 11:54:40 PM UTC+2, Nikolaus Rath wrote:
> On 06/16/2014 10:27 AM, PA Nilsson wrote:
> > On Sunday, June 15, 2014 11:01:12 PM UTC+2, Nikolaus Rath wrote:
> > > PA Nilsson <[email protected]> writes:
> > > > On Sunday, June 15, 2014 9:02:42 PM UTC+2, Nikolaus Rath wrote:
> > > > > On 06/15/2014 08:21 AM, PA Nilsson wrote:
> > > > > > Hi,
> > > > > >
> > > > > > I am using a script to mount an s3ql filesystem.
> > > > > > Before mounting the filesystem I am running:
> > > > > >
> > > > > >     fsck.s3ql --batch $storage_url
> > > > > >
> > > > > > If the file system was not unmounted cleanly this will fail and exit.
> > > > >
> > > > > This should only happen if the file system was not unmounted cleanly
> > > > > *and* you lost the local metadata. Otherwise it's a bug.
> > > > >
> > > > > If you lost the local metadata, I really don't think you want to force
> > > > > fsck to continue. Are you sure that's what you want?
> > > >
> > > > The system lost power, so the file system was not unmounted.
> > > > How do I know if I have lost the local metadata?
> > >
> > > The fsck.s3ql message should say something about that. What message are
> > > you getting when it "fails and exits"?
> >
> > Finally had time to reproduce this.
> > The system was power-cycled in the middle of running a backup with the
> > FS mounted.
> >
> > Running fsck:
> >
> >     fsck.s3ql --ssl-ca-path ${capath} --cachedir ${s3ql_cachedir} --log $log_file --authfile ${auth_file} $storage_url
> >
> > gives:
> >
> >     Starting fsck of xxxxxxxxx
> >     Ignoring locally cached metadata (outdated).
> >     Backend reports that file system is still mounted elsewhere. Either
> >     the file system has not been unmounted cleanly or the data has not yet
> >     propagated through the backend. In the later case, waiting for a while
> >     should fix the problem, in the former case you should try to run fsck
> >     on the computer where the file system has been mounted most recently.
> >     Enter "continue" to use the outdated data anyway:
> >
> > In this case, it is true that the file system was not cleanly unmounted,
> > but what are my options here?
>
> You should find out why you are losing your local metadata copy.
>
> Is your $s3ql_cachedir on a journaling file system? What happened to
> this file system on the power cycle? Did it lose data?
>
> What are the contents of $s3ql_cachedir when you run fsck.s3ql?
>
> Are you running fsck.s3ql with the same $s3ql_cachedir as mount.s3ql?
> Are you *absolutely* sure about that?

I can only trigger this when the system is powered off during an actual
transfer of data. If I let the data transfer finish and then power-cycle
with the FS mounted, the FS recovers when running fsck.

The system is running on an ext4 filesystem. The filesystem does not seem
to have lost any data. The cachedir is read from the same config file and
works otherwise, so yes, I am sure about that.

The contents of the cachedir when fsck fails are:

    -rw-r--r-- 1 root root     0 Jun 16 13:06 mount.s3ql_crit.log
    -rw------- 1 root root 77824 Jun 17 07:21 s3c:=2F=2Fstorage.openproducts.com=2F3feae012-2dfc-4aee-972a-532f15e99009.db
    -rw-r--r-- 1 root root   217 Jun 17 07:21 s3c:=2F=2Fstorage.openproducts.com=2F3feae012-2dfc-4aee-972a-532f15e99009.params

/PA

--
You received this message because you are subscribed to the Google Groups "s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
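For what it's worth, the fsck message itself suggests one workaround for the propagation-delay case: wait and try again. A pre-mount wrapper along these lines might be a reasonable sketch. This is not the poster's actual script; the function name, the FSCK_CMD/MOUNT_CMD/RETRY_DELAY knobs, and the retry count of 3 are all assumptions for illustration, and the only invocation taken from the thread is `fsck.s3ql --batch`.

```shell
# Hedged sketch of a pre-mount fsck wrapper, assuming s3ql-style tools.
# FSCK_CMD, MOUNT_CMD and RETRY_DELAY are hypothetical overrides (handy
# for testing); they default to the real s3ql commands and a 60 s wait.
mount_s3ql() {
    storage_url=$1
    mountpoint=$2
    tries=0
    # Retry fsck a few times: per the fsck.s3ql message, a "still mounted
    # elsewhere" error can be mere backend propagation delay that clears
    # after a wait. It deliberately does NOT answer "continue", since
    # forcing fsck onto outdated metadata risks losing data.
    until ${FSCK_CMD:-fsck.s3ql} --batch "$storage_url"; do
        tries=$((tries + 1))
        if [ "$tries" -ge 3 ]; then
            echo "fsck.s3ql still failing after $tries attempts" >&2
            return 1
        fi
        sleep "${RETRY_DELAY:-60}"
    done
    ${MOUNT_CMD:-mount.s3ql} "$storage_url" "$mountpoint"
}
```

This only papers over the propagation-delay case; the "file system not unmounted cleanly" case still needs manual attention on the machine that last mounted it, as Nikolaus says above.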
