On Dec 17 2015, Jamie Fargen <[email protected]> wrote:
> On Thursday, December 17, 2015 at 11:03:54 AM UTC-5, Nikolaus Rath wrote:
>>
>> On Dec 17 2015, Jamie Fargen <[email protected]> wrote:
>> > I noticed that an s3ql file system was down and when I tried to
>> > mount it the mount failed and the error stated that s3ql.fsck
>> > needed to be executed. While running the fsck many messages like
>> > "Deleted spurious object 245444" were received.
>> >
>> > This is the final output of the fsck:
>> > Deleted spurious object 245442
>> > Deleted spurious object 245443
>> > Deleted spurious object 245444
>> [...]
>> >
>> > After the fsck finished I remounted the file system and all the
>> > files are now missing.
>> >
>> > Looking at the object container it had 2T of data before the fsck
>> > and now only has a few megabytes.
>>
>> Sounds like the mount crashed before any metadata was uploaded, and
>> you then ran fsck.s3ql with a different --cachedir (or you also lost
>> the cache directory, or you ran it on a different computer).
>
> How can one improve metadata consistency and ensure that it is
> uploaded regularly?
mount.s3ql --help will tell you.

> Why isn't metadata atomic?

Data cannot be atomic, only a process can be atomic. But I guess what
you are really asking is why the metadata isn't uploaded on every
change. The answer is that this would be far too slow, cf.
http://www.rath.org/s3ql-docs/impl_details.html.

> When I mount the volume I do specify a --cachedir, but when I ran
> fsck I did not specify the cachedir.

That was not a good idea. If you had specified the cachedir, you would
not have lost any of the data. Note that you must have seen the
following warning from fsck.s3ql:

    Backend reports that file system is still mounted elsewhere. Either
    the file system has not been unmounted cleanly or the data has not
    yet propagated through the backend. In the latter case, waiting for
    a while should fix the problem; in the former case, you should try
    to run fsck on the computer where the file system has been mounted
    most recently.

    Enter "continue" to use the outdated data anyway:

I'm not sure how to make it more explicit that this is a dangerous
operation. Note that you had to type "continue" - it wasn't enough to
just press Enter or 'y'.

> The cachedir only has about 7.5GB of data vs 2TB of data lost. Why
> would missing cached data result in losing 100% of the file system?

Because there is no way to determine which of the 2 TB of data belongs
to which files, and in what order, without the 7.5 GB of metadata. See
http://www.rath.org/s3ql-docs/impl_details.html.

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

             »Time flies like an arrow, fruit flies like a Banana.«

--
You received this message because you are subscribed to the Google
Groups "s3ql" group. To unsubscribe from this group and stop receiving
emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/d/optout.
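[Editor's note: for readers of the archive, the two options discussed
above can be combined as follows. This is a sketch only — the bucket
URL, mount point, cache path, and interval are placeholders; check
`mount.s3ql --help` for the exact options in your S3QL version.]

```shell
# Upload metadata every hour instead of the 24-hour default, and pin
# the cache to a known location (paths and URL are illustrative):
mount.s3ql --cachedir /var/cache/s3ql \
           --metadata-upload-interval 3600 \
           s3://my-bucket /mnt/s3ql

# After a crash, run fsck.s3ql on the same machine with the SAME
# cachedir, so the local (newer) copy of the metadata is found:
fsck.s3ql --cachedir /var/cache/s3ql s3://my-bucket
```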
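[Editor's note: the last answer — that 2 TB of objects are worthless
without the metadata — can be illustrated with a toy model. This is
NOT S3QL's actual schema; the table names and ids below are invented
purely to show why the block map is indispensable.]

```python
# The backend bucket: an unordered bag of opaque blobs, keyed only by
# an object id. Nothing in an object says which file it belongs to.
objects = {
    245442: b"block-A",
    245443: b"block-B",
}

# Hypothetical metadata tables (as in any block-mapping filesystem):
# an inode table, plus a map from (inode, block number) to object id.
inodes = {1: "/report.pdf"}
block_map = {(1, 0): 245443, (1, 1): 245442}

def read_file(block_map, objects, inode):
    """Reassemble a file from its blocks, in order, via the block map."""
    data, blockno = b"", 0
    while (inode, blockno) in block_map:
        data += objects[block_map[(inode, blockno)]]
        blockno += 1
    return data

# With the metadata present, the file can be reassembled:
print(read_file(block_map, objects, 1))  # -> b'block-Bblock-A'

# Without it, the objects are unrecoverable noise:
print(read_file({}, objects, 1))         # -> b''
```

The same asymmetry explains the 7.5 GB vs 2 TB figures: the map is a
tiny fraction of the data, but losing it loses everything.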
