Hello Chris,

> I'm using S3QL with OVH Cloud Storage (no, that's not Hubic) and it's
> working mostly very nicely, thank you. This is running S3QL from
> Debian unstable (version "2.17.1+hg2+dfsg-3").
>
> Unfortunately I'm getting around two or three filesystem crashes each
> day. strace shows the python process stuck on a futex (I can't help
> you more than that without some hand-holding, sorry) and the only
> solution is to kill the process and run fsck.s3ql. It's possibly due
> to the amount of data I'm writing to the store
Since you use OVH Swift storage, you might want to install a local
caching DNS resolver (on Debian, `apt-get install unbound` should
suffice). OVH always closes the connection after each response (see
https://bitbucket.org/nikratio/s3ql/issues/178/connection-close-in-response-results-in
). S3QL can handle this, but it has to reconnect constantly, and each
reconnect triggers a DNS lookup. This results in quite a high number of
DNS requests from your machine – for me it was ~150 DNS requests per
second. At that rate, your upstream DNS resolver might throttle or drop
your requests (this happened to me with Google Public DNS, 8.8.8.8). A
local cache answers the repeated lookups without hitting the network.
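In case it helps, here is a minimal sketch of that setup on Debian.
Note the resolv.conf edit is an assumption – on systems running
resolvconf, the unbound package usually registers itself as the local
nameserver automatically:

```shell
# Install unbound; the Debian default config listens on 127.0.0.1
# and acts as a caching resolver out of the box.
apt-get install unbound

# Point the system resolver at the local cache. With resolvconf
# installed the package normally does this for you; otherwise add
# this line to /etc/resolv.conf by hand:
#   nameserver 127.0.0.1

# Sanity check (needs the dnsutils package): the second lookup
# should come back from the cache, with a much lower query time
# reported by dig.
dig @127.0.0.1 example.com
dig @127.0.0.1 example.com
```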
>
> Directory entries:    6017319
> Inodes:               5991966
> Data blocks:          761943
> Total data size:      4.79 TB
> After de-duplication: 1.12 TB (23.38% of total)
> After compression:    972 GiB (19.81% of total, 84.75% of de-duplicated)
> Database size:        944 MiB (uncompressed)
> Cache size:           10.00 GiB, 1030 entries
> Cache size (dirty):   10.00 GiB, 1030 entries
Six million files on an S3QL file system might be a bit much. For me,
problems (e.g. very slow directory listings) started at around 2 million
files, but that threshold probably depends on the hardware of your
machine (CPU speed, amount of RAM, and speed of the local disk).
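If you want a rough number for how bad listings have become, a quick
sketch – it times a listing of a scratch directory here; running the
same `time ls -1 ... | wc -l` against a large directory on the S3QL
mountpoint gives you a comparison point:

```shell
#!/bin/sh
# Create a scratch directory with many entries to get a baseline
# for raw listing speed on a local disk.
dir=$(mktemp -d)
i=0
while [ "$i" -lt 1000 ]; do
    : > "$dir/f$i"
    i=$((i + 1))
done

# -1 prints one entry per line; wc -l counts them. Wrap the same
# pipeline in 'time' on the S3QL mount to compare latencies.
count=$(ls -1 "$dir" | wc -l)
echo "entries: $count"

rm -rf "$dir"
```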


-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.