Would you mind sharing a little bit of information about your setup? Your local 
network setup appears to be performing much better than mine...

I am using LUKSv2 (not the nbdkit LUKSv1 plugin) on top of an nbdfuse mount 
point on the client side.
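
For reference, here is roughly how the client-side stack is assembled. This is 
a sketch only; the server name, export name, and mapper name below are 
placeholders rather than my actual values:

  mkdir -p /mnt/nbd
  # nbdfuse exposes the export as a regular file, /mnt/nbd/nbd
  nbdfuse /mnt/nbd nbds://server/export &
  # cryptsetup sets up a loop device over that file by itself
  cryptsetup luksFormat --type luks2 /mnt/nbd/nbd
  cryptsetup open /mnt/nbd/nbd nbdcrypt
  # the writes under test then go to /dev/mapper/nbdcrypt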

Below are some high-level, non-scientific observations of the write speeds 
achieved while dd'ing /dev/zero to my target device from a client on a Gigabit 
LAN.
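
For the record, the measurements were taken roughly like this, writing either 
to the nbdfuse file or to the dm-crypt mapping depending on the case. The 
block size and count here are illustrative rather than the exact values I used:

  # conv=fsync flushes at the end so the page cache doesn't inflate the figure
  dd if=/dev/zero of=/mnt/nbd/nbd bs=1M count=512 conv=fsync status=progress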

nbdkit using the memory plugin (1G) as the storage:
  * 810 MiB/s - LUKS + nbdfuse + nbdkit, w/ TLS (not a typo; a weird outlier result)
  * ~15 MiB/s - nbdfuse + nbdkit, w/ TLS
  * ~60 MiB/s - nbdfuse + nbdkit, no TLS

nbdkit with a SW RAID5 device (HDDs attached via a USB3 SCSI controller):
  * ~1 MiB/s - LUKS + nbdfuse + nbdkit, w/ TLS
  * ~12 MiB/s - nbdfuse + nbdkit, w/ TLS
  * ~29 MiB/s - nbdfuse + nbdkit, no TLS

Server-side dd of /dev/zero directly to the SW RAID5 target (HDDs attached via 
a USB3 SCSI controller):
  * ~64 MiB/s

CPU and memory load on both my client and server seem quite low... almost 
negligible. The disk activity lights on the backing storage suggest that the 
disks are not being written to constantly: they appear idle for long stretches, 
followed by a short burst of activity. The disk write column in htop's IO tab 
shows a bunch of nbdkit threads with values of ~50 K/s.
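
In case anyone wants to see the same thing numerically, the burstiness also 
shows up with sysstat's iostat on the server:

  # one-second extended stats for all devices; the write-throughput, average
  # request-size, and %util columns for the RAID members show the bursts
  iostat -xm 1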

Any thoughts welcome... I'm trying to decide where to focus first when taking 
deeper measurements and trying some tuning.
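
One candidate first step: nbdkit ships a stats filter that records 
per-operation counts, bytes, and rates, which should show whether those small 
~50 K/s writes are what actually arrives over the wire. A sketch only, with 
the memory plugin and statsfile path standing in for my real command line:

  nbdkit --filter=stats memory 1G statsfile=/tmp/nbdkit-stats.txt
  # after the run, /tmp/nbdkit-stats.txt summarizes reads/writes/flushes
  # with total bytes and average rates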

Thanks,
Jon