This is the current output of s3qlstat:

Directory entries:    20015509
Inodes:               20015512
Data blocks:          2101939
Total data size:      3.25 TB
After de-duplication: 442 GiB (13.28% of total)
After compression:    320 GiB (9.62% of total, 72.47% of de-duplicated)
Database size:        3.02 GiB (uncompressed)
Cache size:           4.00 GiB, 3045 entries
Cache size (dirty):   0 bytes, 0 entries
Queued object removals: 0

Is this too big to be sensible? I keep the s3ql filesystem mounted and back up to it regularly: some things daily, others every few hours.

Both s3qlstat and df take a very long time to respond after the filesystem has been mounted for a while: several minutes for the first invocation. If I rerun the same command immediately afterwards, it returns in 1 to 4 seconds.

If the mount crashes, fsck.s3ql, working from the local database, takes over 45 minutes to run.

I'm wondering what the best way to deal with this is.

1. Break the directory down into separate s3ql mounts for each backup type?

2. Move the .s3ql directory to local SSD? (Less secure than the Ceph drive.)

3. Run s3qlstat regularly via cron to keep the metadata cache warm?

4. Set a lower metadata-upload-interval (currently at the default of 24 hours)? A sketch of what I mean by 3 and 4 follows.
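
For 3 and 4, something like this is what I have in mind. The half-hourly schedule and the six-hour interval (21600 seconds) are just illustrative values, and the s3qlstat path may differ depending on the install:

    # crontab entry: query the mount point every 30 minutes so that
    # interactive s3qlstat/df calls don't pay the cold-cache cost
    */30 * * * * /usr/bin/s3qlstat /opt/s3ql > /dev/null 2>&1

and, for the upload interval, mounting with:

    mount.s3ql --metadata-upload-interval 21600 \
        s3://eu-west-1/backups.might.be /opt/s3ql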

In case it's useful, here is the startup info:

MainThread s3ql.mount.determine_threads: Using 10 upload threads.
MainThread s3ql.mount.main: Autodetected 65938 file descriptors available for cache entries
MainThread s3ql.mount.get_metadata: Using cached metadata.
MainThread s3ql.mount.main: Setting cache size to 141192 MB
MainThread s3ql.mount.main: Mounting s3://eu-west-1/backups.might.be at /opt/s3ql...

Any advice appreciated.

Cliff.

--
Cliff Stanford
London:    +44 20 0222 1666               Swansea: +44 1792 469666
Spain:     +34  603 777 666               Estonia: +372  5308 9666
UK Mobile: +44 7973 616 666
