Hello all,

I've been using S3QL for a couple of years now, always with the 
local-storage backend. Everything has been going very well, including 
upgrades. Recently I deleted a lot of files in my S3QL file system, 
knowing beforehand that not all of the physical data would necessarily 
be wiped, since other files may still reference those blocks due to 
deduplication.

While rsyncing my whole S3QL backend to an off-site storage location, I 
noticed something odd during the transfer.

Within the directories holding my s3ql_data_* objects, I see:
112M    ./130
2,3G    ./268
1,1M    ./904
95M     ./473
9,8G    ./281
62M     ./567
121M    ./751
243M    ./758
28M     ./609
1,4M    ./628
239M    ./633
417M    ./961
8,4M    ./501
2,1M    ./787

With, for example, the contents of ./628:
-rw-rwxr-x. 1 root root  15704 11 feb  2015 s3ql_data_6288
-rw-rwxr-x. 1 root root    563 11 feb  2015 s3ql_data_62880
-rw-rwxr-x. 1 root root    432 11 feb  2015 s3ql_data_62881
-rw-rwxr-x. 1 root root    507 11 feb  2015 s3ql_data_62882
-rw-rwxr-x. 1 root root    670 11 feb  2015 s3ql_data_62883
-rw-rwxr-x. 1 root root    514 11 feb  2015 s3ql_data_62884
-rw-rwxr-x. 1 root root    453 11 feb  2015 s3ql_data_62885
-rw-rwxr-x. 1 root root    515 11 feb  2015 s3ql_data_62886
-rw-rwxr-x. 1 root root    498 11 feb  2015 s3ql_data_62887
-rw-rwxr-x. 1 root root    511 11 feb  2015 s3ql_data_62888
-rw-rwxr-x. 1 root root    413 11 feb  2015 s3ql_data_62889
-rw-rwxr-x. 1 root root  27013 11 feb  2015 s3ql_data_6289

There are lots of really tiny files, which made me wonder: if these are 
the left-overs of what used to be fully filled blocks, is there a way to 
"defragment" the file system (maybe not the correct term, but I think the 
idea is comparable) so that these small chunks get consolidated?
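For reference, this is roughly how I tallied the tiny objects — just a 
quick sketch that walks the backend directory tree; the path and bucket 
thresholds are arbitrary placeholders, not anything S3QL-specific:

```python
import os
from collections import Counter

def size_histogram(root, buckets=(1024, 4096, 16384, 65536)):
    """Bucket all s3ql_data_* object files under `root` by file size.

    Returns a Counter mapping a size-range label to the number of
    objects in that range. Non-object files are ignored.
    """
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.startswith('s3ql_data_'):
                continue
            size = os.path.getsize(os.path.join(dirpath, name))
            # First bucket the size fits under, else the catch-all.
            label = next((f'< {b}' for b in buckets if size < b),
                         f'>= {buckets[-1]}')
            counts[label] += 1
    return counts

if __name__ == '__main__':
    # '/srv/s3ql-backend' is a made-up example path; point it at
    # the directory you pass to mount.s3ql as local://...
    for label, n in sorted(size_histogram('/srv/s3ql-backend').items()):
        print(f'{label:>10}: {n}')
```

On my backend the sub-1-KiB buckets dominate, which is what prompted the 
question above.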

Thank you!

-- 
You received this message because you are subscribed to the Google Groups 
"s3ql" group.
