> -----Original Message-----
> From: [email protected] <[email protected]>
> Sent: Thursday, December 7, 2023 7:55 PM
> 
> On Tue, Dec 05, 2023 at 02:06:44PM +0000, Steven Surdock wrote:
> >
> > Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is
> > primarily used to store VMWare VM backups from HOST01, so VMWare is
> > the NFS client.  I'm seeing transfers of about 1.2 MB/s.
> 
> Sounds about right.  On a single (magnetic) disk, assume 200 ops/sec
> maximum, or about 5 kbyte per write op.
> 
> Remember that NFS is synchronous.  It is based on RPC, remote procedure
> calls.  The call has to return a result to the client before the next call
> can happen.  So your client (ESXi) is stuck at the synchronous write rate
> of your disk, which is governed by seek time and rotation rate.
> 
> To confirm, run systat and note the "sec" measurement for your disk.
> It will likely be in the 0.5 to 1.0 range.  This means your disk is 50% to
> 100% busy.  And the speed is about 1MB/s.
> 
> For improvement, use "-o noatime" on your exported partition mount.  This
> reduces inode update IO.
> 
> Or, try "-o async" if you want to live dangerously.
> 
> Or, you could even try ext2 instead of ffs.....rumour has it that
> ext2 is faster.  I don't know, never having tried it.
> 
> Or use an SSD for your export partition.
> 
> Or, crank up a copy of Linux and run NFS v4 server.  That will definitely
> be faster than any NFS v3 server.  V4 streams writes, to be very
> simplistic about it.
> 
> (I think you already confirmed it's NFS v3 with TCP, not NFS v2.
> You should turn UDP off for reliability reasons, not performance.)

I thought disk I/O might be the bottleneck as well, but scp rips along at 800+ Mbps 
(95+ MBps).

I did end up trying async and noatime on the filesystem.  'async' offered the 
best improvement, at about 75 Mbps (9.3 MBps).  Still not what I was hoping 
for, and nowhere close to scp.
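For anyone following along, the options go into the fstab entry roughly like this (a sketch; the device name and mount point are examples, not my actual layout, and async means unflushed data can be lost on a crash):

```shell
# /etc/fstab entry for the exported FFS partition with noatime + async
# (device /dev/sd1a and mount point /export are example names)
/dev/sd1a /export ffs rw,nodev,nosuid,noatime,async 1 2

# Or flip the options on a live mount without editing fstab:
mount -u -o noatime,async /export
```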

I did confirm NFS v3 (via tcpdump); besides, ESXi only supports v3 and v4.
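For reference, checking the NFS version on the wire looks something like this (a sketch; the interface name is an example):

```shell
# Watch NFS traffic on the server; tcpdump's RPC decoding shows the
# NFS version and procedure names in each request/reply
# (vio0 is an example interface name)
tcpdump -n -i vio0 -v port 2049
```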

I also experimented with netbsd-iscsi-target-20111006p6, but I could not get 
ESXi to connect reliably.

You are correct on the disk performance during the NFS write:

Disks   sd0   sd1   
seeks              
xfers     9    92  
speed  110K 5915K  
  sec   0.0   1.0  

For the sake of completeness, here is the disk performance for the scp:

Disks   sd0   sd1
seeks            
xfers    11  1559
speed  131K   97M
  sec   0.0   1.0

This is with /home mounted as 'ffs rw,nodev,nosuid 1 2'.

Thanks!
