Quoting Kyriazis, George (george.kyria...@intel.com):

> The source pool has mainly big files, but there are quite a few
> smaller (<4KB) files that I’m afraid will create waste if I create the
> destination zpool with ashift > 12 (>4K blocks).  I am not sure,
> though, if ZFS will actually write big files in consecutive blocks
> (through a send/receive), so maybe the blocking factor is not the
> actual file size, but rather the zfs block size.  I am planning on
> using zfs gzip-9 compression on the destination pool, if it matters.

You might want to consider Zstandard for compression:
https://engineering.fb.com/core-data/smaller-and-faster-data-compression-with-zstandard/
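
As a rough sketch (assuming OpenZFS 2.0 or later, where zstd is available; "tank/backup" is just a placeholder dataset name), enabling it on the destination looks like:

  # zstd gets close to gzip-9 ratios at a much lower CPU cost;
  # an explicit level can be given too, e.g. compression=zstd-9
  zfs set compression=zstd tank/backup

  # check the achieved ratio once the data has landed
  zfs get compressratio tank/backup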

You can optimize a ZFS fs to use larger blocks for those files that are
small ... and use large block sizes for other fs ... if it's easy to
split them.
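
For example (sketch only; pool and dataset names are placeholders,
and recordsize only sets an upper bound per file):

  # dataset tuned for the many small (<4K) files
  zfs create -o recordsize=4K tank/small-files

  # dataset for the big files, where larger records help compression
  # and sequential throughput
  zfs create -o recordsize=1M tank/big-files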

-- 
| BIT BV  https://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / i...@bit.nl