On 21. 07. 25 at 18:42, matthew patton wrote:
> don't use LVM striping. Use MD/DM striping. Better yet, stripe at the hardware

lvm handles both --type striped and --type raid0.
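Either maps to a single lvcreate call where you pick the stripe count and stripe size; a minimal sketch (the VG/LV names and sizes are made-up examples):

  # classic DM striping across 4 PVs with a 64k stripe size
  lvcreate --type striped -i 4 -I 64k -L 100G -n lv_stripe vg0

  # same geometry, but as an MD-backed raid0 LV
  lvcreate --type raid0 -i 4 -I 64k -L 100G -n lv_raid0 vg0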

> controller level if you can. Generally speaking a 32kb or 64kb interleave or "strip size" is ~optimal for any use case (yes, even SSD) that isn't LARGE

Many/most SSDs do internal striping - thus they report relatively large optimal I/O sizes (256K, 512K).
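You can check what a given device actually advertises, e.g. (the device name is just an example):

  # minimum/optimal I/O sizes as reported by the kernel
  lsblk -o NAME,MIN-IO,OPT-IO,PHY-SEC,LOG-SEC /dev/nvme0n1
  cat /sys/block/nvme0n1/queue/optimal_io_size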

> streaming reads. If the hypervisor is VMware, it allocates in 1MB extents and it will also interfere with I/O scheduling.
> 
> If you want to cut the hypervisor out of the loop, PCIe pass-thru is your only recourse. And again, you can attach no less than 3 up to about 5 (SSD) drives

This purely depends on individual hw capabilities.

> to a controller before you saturate the PCIe bus. Also consumer-grade (aka junk) SSDs have miserable write endurance and write speed after a preliminary (ie fake) boost. If you care about your data and speed that doesn't go to zero ONLY buy enterprise grade SSDs and if you're doing a lot of writes, minimum 3x DWPD class though if you're going to attempt parity-RAID, you'll want 10x DWPD disks.

For boosting write performance, a probably better option is to simply use some NVMe caching with dm-writecache or something similar.
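A minimal sketch of attaching a writecache through lvm (VG/LV names and sizes are made up - see lvmcache(7) for details):

  # carve a fast LV out of the NVMe PV to serve as the write cache
  lvcreate -n fastcache -L 20G vg /dev/nvme0n1

  # attach it as a dm-writecache in front of the slow LV
  lvconvert --type writecache --cachevol fastcache vg/slowlv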

But as said - trying to do any kind of storage optimization inside a VM is largely destined to fail, as there is no good enough knowledge about the disk topology - the VM fakes whatever it can...

Regards

Zdenek

