> Care to explain that in more detail? Why shouldn't it work on spinning
> disks?

Hashes are random, so they introduce random read accesses.

With a QCOW2 cluster size of 4KB, the deduplication code will do one random read
per 4KB block when writing duplicated data.

A server-grade hard disk is rated for about 250 IOPS. That translates to 1MB/s of
deduplicated data. Not very usable.

By contrast, a Samsung 840 Pro SSD is rated for 80k random read IOPS. That
should translate to 320MB/s of potentially deduplicated data.
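For reference, a minimal Python sketch of the arithmetic above (not part of the
original mail; the IOPS ratings are the figures quoted here, and the MB/s
values come out slightly different depending on whether you count in MB or MiB):

    # One random metadata read per 4KB cluster written means the achievable
    # deduplicated write throughput is bounded by iops * cluster_size.
    CLUSTER_SIZE = 4 * 1024  # QCOW2 cluster size in bytes, as assumed above

    def dedup_throughput_mb(iops):
        """Deduplicated write throughput in MB/s for a given random read IOPS rating."""
        return iops * CLUSTER_SIZE / 1e6

    print(dedup_throughput_mb(250))     # spinning disk: ~1 MB/s
    print(dedup_throughput_mb(80_000))  # Samsung 840 Pro: ~328 MB/s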

Having the dedup metadata on SSD and the actual data on spinning disk would
solve the problem, but it would need a block backend.

Benoît
