Hello intelfx,

> does s3ql have any practical/recommended limits on amount of blocks in
> a single filesystem? In other words, if I have, say, 10 TiB of data,
> what would be the minimum recommended block size?
>
> My use-case (for this specific filesystem) is storing consolidated
> Macrium Reflect backups — a large disk image which is updated in place
> daily. If I'm using a large block size (say, 100 or 500 MiB), this
> basically causes the entire disk image to be retransmitted almost
> entirely every time it is changed.
>
> Can this be made to work at all?

There is a theoretical limit of 2^64 blocks, because that is the limit
that SQLite imposes <https://sqlite.org/limits.html>. Long before you get
there, the database would probably become too big to handle. But 10 TiB
of data consisting of big files is not enough that you need to worry
about the database size.

I have file systems for Bareos backups ranging from 5 TiB to 20 TiB. They
use a maximum block size of 300 MiB because I feared that the database
would become unmanageably big. That fear was unwarranted: the database is
< 10 MiB (uncompressed). If I were to redo these file systems, I would
probably choose a smaller maximum block size of 25 MiB to better utilize
the CPU cores.
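To get a feeling for the numbers, here is a rough back-of-the-envelope
sketch (assuming the file system is filled with large files, so nearly
every block reaches the maximum size; the actual per-block metadata
overhead depends on the s3ql schema and on deduplication):

    # Rough estimate of the block count for 10 TiB of large files
    # at different maximum block sizes.
    data_size = 10 * 2**40                      # 10 TiB in bytes
    for max_block_mib in (300, 25, 5, 1):
        blocks = data_size // (max_block_mib * 2**20)
        print(f"{max_block_mib:>3} MiB -> ~{blocks:,} blocks")

    # 300 MiB -> ~34,952 blocks
    #  25 MiB -> ~419,430 blocks
    #   5 MiB -> ~2,097,152 blocks
    #   1 MiB -> ~10,485,760 blocks

Even at 1 MiB the block count stays far below the SQLite limit; the
practical concern is only how big the metadata database gets.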
For your use case (only small parts of a big file change between backups),
lowering the maximum block size to somewhere between 1 MiB and 5 MiB would
be better. I guess this would result in an (uncompressed) database size of
roughly 1 GiB. I probably would not go below 1 MiB.
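That rough 1 GiB guess is consistent with the numbers above: from my
file systems (< 10 MiB of uncompressed database for roughly 70,000
blocks) one gets on the order of 100 bytes of metadata per block. A
rough sketch, not an exact figure, since the real overhead varies with
file count, file names and deduplication:

    # Back-of-the-envelope estimate of the uncompressed database size,
    # assuming ~100 bytes of metadata per block (a guess derived from
    # "< 10 MiB for roughly 70,000 blocks" above).
    bytes_per_block = 100
    blocks = (10 * 2**40) // (1 * 2**20)        # 10 TiB at 1 MiB blocks
    print(f"~{blocks * bytes_per_block / 2**30:.1f} GiB")   # ~1.0 GiB

Note that, as far as I know, the maximum block size can only be chosen
when the file system is created; if I remember correctly the relevant
mkfs.s3ql option is --max-obj-size, given in KiB (so 5120 for 5 MiB).
Please double-check with mkfs.s3ql --help for your version.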
