Hello,

I'm using S3QL with OVH Cloud Storage (no, that's not Hubic) and it's 
mostly working very nicely, thank you. I'm running the S3QL package from 
Debian unstable (version "2.17.1+hg2+dfsg-3").

Unfortunately I'm getting two or three filesystem crashes each day. 
strace shows the python process stuck on a futex (I can't offer more 
detail than that without some hand-holding, sorry; see the sketch below 
for what I could try next time) and the only solution is to kill the 
process and run fsck.s3ql. It's possibly due to the amount of data I'm 
writing to the store:

Directory entries:    6017319
Inodes:               5991966
Data blocks:          761943
Total data size:      4.79 TB
After de-duplication: 1.12 TB (23.38% of total)
After compression:    972 GiB (19.81% of total, 84.75% of de-duplicated)
Database size:        944 MiB (uncompressed)
Cache size:           10.00 GiB, 1030 entries
Cache size (dirty):   10.00 GiB, 1030 entries

I only mount the S3QL filesystem from one client.
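
In case it helps with diagnosis, next time it hangs I could wrap or 
patch my local copy to dump Python-level tracebacks on a signal. 
Something like this (faulthandler is in the standard library; hooking 
it into mount.s3ql this way is just my assumption, not anything S3QL 
does itself):

    import faulthandler
    import signal

    # On SIGUSR1, dump the traceback of every thread to stderr, so a
    # hung futex wait shows which lock each thread is blocked on.
    faulthandler.register(signal.SIGUSR1, all_threads=True)

Then "kill -USR1 <pid>" would print the stacks without killing the 
mount.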

Given that I want to remount the filesystem as soon as the fsck has 
completed, would it make sense (and be possible) to skip the 
"committing block..." phase of the fsck? The blocks could stay marked 
dirty in the cache and be swept up later, once the filesystem is (more) 
idle. I'm running with a 10 GB cache, and the crash usually occurs when 
the cache is completely full of dirty blocks, so flushing them out 
during the filesystem check takes an hour or two.
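
To illustrate what I'm asking about (these names are all made up, not 
from the real S3QL code base):

    # Hypothetical sketch of a "defer commit" option for fsck; none of
    # these names exist in S3QL.
    from dataclasses import dataclass, field

    @dataclass
    class Block:
        blockno: int
        dirty: bool = True

    @dataclass
    class Cache:
        blocks: list[Block] = field(default_factory=list)

        def dirty_blocks(self):
            return [b for b in self.blocks if b.dirty]

    def upload(block):
        print("committing block", block.blockno)

    def fsck(cache, defer_commit=False):
        # ...metadata checks would happen here...
        for block in cache.dirty_blocks():
            if defer_commit:
                # Leave the block dirty in the local cache; the mounted
                # filesystem flushes it later, when (more) idle.
                continue
            upload(block)  # current behaviour: slow with 10 GB dirty
            block.dirty = False

    cache = Cache([Block(i) for i in range(3)])
    fsck(cache, defer_commit=True)
    print(len(cache.dirty_blocks()), "blocks left dirty for later flush")

The only change I'm asking about is the defer_commit branch; everything 
else would stay as it is now.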

Thanks,
Chris
