On 03/12/2021 00:05, Adrian Vovk wrote:
>> The only thing I don't understand is why layering dm-integrity in a loop
>> device on top of dm-integrity on a real disk should necessarily hammer
>> write performance. I can understand it chewing up ram cache and cpu, but
>> it shouldn't magnify real writes that much.
>
> Well a write in the mounted home dir = 2 writes to the loopback file,
> and a write to the loopback file is 2 writes to the block device. Thus
> a write to the home dir is 4 writes to the block device. Am I
> mistaken?

No, I'd say you're right. But if it's a personal system, like my home server, I'd be more worried about read speed. Certainly on my system (did I say I have raid-5? :-) it's boot times (configuring lvm) and reading from disk that I notice.

Given that I have 16GB ram (with 32GB waiting to be installed :-) and am not hammering the system, four (or more) physical writes via write amplification for every write I send from my app go almost unnoticed (apart from xosview telling me my cpu cores are working hard). And if I have a short burst of writes, there's plenty of cache, so as pressure on the i/o path goes up the elevator has more scope to optimise and disk efficiency increases.

And if I was using a VM on a big server with lots of VMs, again, making plenty of ram available for caching smooths out the writes and reduces the actual pressure on the real physical disks.
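Incidentally, the 4x figure falls out of the layers compounding multiplicatively: each journalling layer roughly doubles the writes it hands to the layer below. A minimal sketch of that arithmetic (the function and names are illustrative, and the 2x-per-layer factor assumes dm-integrity is journalling every write):

```python
from functools import reduce

def stacked_amplification(factors):
    """Total write amplification of a stack of block layers.

    Each layer multiplies the writes it passes down to the layer
    below it, so the per-layer factors compound multiplicatively.
    """
    return reduce(lambda total, f: total * f, factors, 1)

# Assumed model: dm-integrity in journal mode turns one logical
# write into ~2 physical writes (data + journal) at that layer.
DM_INTEGRITY = 2

# The stack under discussion: dm-integrity inside the loop file,
# then dm-integrity again on the real disk underneath it.
layers = [DM_INTEGRITY, DM_INTEGRITY]
print(stacked_amplification(layers))  # 1 app write -> 4 disk writes
```

Adding raid-5 parity updates or filesystem journalling underneath would just append more factors to the list.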

Cheers,
Wol
