Re: NFS Server performance

2023-12-07 Thread Abel Abraham Camarillo Ojeda
On Tue, Dec 5, 2023 at 9:25 AM Zé Loff  wrote:

>
> On Tue, Dec 05, 2023 at 02:06:44PM +, Steven Surdock wrote:
> > Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is
> primarily used to store VMWare VM backups from HOST01, so VMWare is the NFS
> client.  I'm seeing transfers of about 1.2 MB/s.
> >
> > SCP from HOST01 to OBSD VM (same filesystem) copies at 110 MB/s.
> > Iperf3 from a VM on HOST01 to OBSD on HOST02 gives me 900+ mbps.
> > OBSD is a stock install running -stable.
> > NFS is using v3 (according to VMWare) and using TCP
> > During the NFS transfer the RECV-Q on the OBSD interface runs either
> 64000+ or 0.
> > I tried both em and vmx interface types.
> >
> > /etc/rc.conf.local:
> > mountd_flags="" # for normal use: ""
> > nfsd_flags="-tun 4" # Crank the 4 for a busy NFS fileserver
> > ntpd_flags=""   # enabled during install
> > portmap_flags=""# for normal use: ""
> >
> > Any clues on where to look to (greatly) improve NFS performance would be
> appreciated.
>
> Increasing write size, read size and the read-ahead count on the client
> has helped me.
>
> E.g., on the client's fstab:
>
>   10.17.18.10:/shared/stuff  /nfs/stuff  nfs
> rw,nodev,nosuid,intr,tcp,bg,noatime,-a=4,-r=32768,-w=32768 0 0
>
>
With this I got around a 2.5x improvement (9-10 MB/s to 25-30 MB/s).

> Cheers
> Zé
>
> >
> > -Steve S.
> >
>
> --
>
>
>


Re: NFS Server performance

2023-12-07 Thread Steven Surdock
> -Original Message-
> From: j...@bitminer.ca 
> Sent: Thursday, December 7, 2023 7:55 PM
> 
> On Tue, Dec 05, 2023 at 02:06:44PM +, Steven Surdock wrote:
> >
> > Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is
> > primarily used to store VMWare VM backups from HOST01, so VMWare is
> > the NFS client.  I'm seeing transfers of about 1.2 MB/s.
> 
> Sounds about right.  On a single (magnetic) disk, assume 200 ops/sec
> maximum, or about 5 kbyte per write op.
> 
> Remember that NFS is synchronous.  It is based on RPC, remote procedure
> calls.  The call has to return a result to the client before the next call
> can happen.  So your client (ESXi) is stuck at the synchronous write rate
> of your disk, which is governed by seek time and rotation rate.
> 
> To confirm, run systat and note the "sec" measurement for your disk.
> It will likely be in the 0.5 to 1.0 range.  This means your disk is 50% to
> 100% busy.  And the speed is about 1MB/s.
> 
> For improvement, use "-o noatime" on your exported partition mount.  This
> reduces inode update IO.
> 
> Or, try "-o async" if you want to live dangerously.
> 
> Or, you could even try ext2 instead of ffs.  Rumour has it that
> ext2 is faster.  I don't know, never having tried it.
> 
> Or use an SSD for your export partition.
> 
> Or, crank up a copy of Linux and run NFS v4 server.  That will definitely
> be faster than any NFS v3 server.  V4 streams writes, to be very
> simplistic about it.
> 
> (I think you already confirmed it's NFS v3 with TCP, not NFS v2.
> You should turn UDP off for reliability reasons, not performance.)

So I thought that disk I/O might be an issue as well, but SCP rips at 800+ Mbps 
(95+ MBps).

I did end up trying async and noatime on the filesystem.  'async' offered the 
best improvement with about 75 Mbps (or 9.3 MBps).  Still not what I was hoping 
for, or even close to SCP.

I did confirm NFS V3 (via tcpdump); besides, ESXi only supports V3 and V4.

I also experimented with netbsd-iscsi-target-20111006p6, but I could not get 
ESXi to connect reliably.

You are correct on the disk performance during the NFS write:

Disks   sd0   sd1   
seeks  
xfers 992  
speed  110K 5915K  
  sec   0.0   1.0  

For the sake of completeness, here is the disk performance for the scp:

Disks   sd0   sd1
seeks
xfers11  1559
speed  131K   97M
  sec   0.0   1.0

This is with /home mounted with 'ffs rw,nodev,nosuid 1 2'

Thanks!



Re: NFS Server performance

2023-12-07 Thread j

On Tue, Dec 05, 2023 at 02:06:44PM +, Steven Surdock wrote:


Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is
primarily used to store VMWare VM backups from HOST01, so VMWare
is the NFS client.  I'm seeing transfers of about 1.2 MB/s.


Sounds about right.  On a single (magnetic) disk, assume 200 ops/sec
maximum, or about 5 kbyte per write op.
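(Rough arithmetic: ~200 ops/sec times ~5-6 kbyte per synchronous write is
about 1-1.2 MB/s, which is exactly the rate reported above.)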

Remember that NFS is synchronous.  It is based on RPC, remote
procedure calls.  The call has to return a result to the client
before the next call can happen.  So your client (ESXi) is stuck
at the synchronous write rate of your disk, which is governed
by seek time and rotation rate.

To confirm, run systat and note the "sec" measurement for your disk.
It will likely be in the 0.5 to 1.0 range.  This means your
disk is 50% to 100% busy.  And the speed is about 1MB/s.
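Concretely, something along these lines, where the drive name is just an
example:

  systat iostat 2
  iostat -w 1 sd1

The first is the live systat view; the second prints plain numbers once a
second.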

For improvement, use "-o noatime" on your exported partition
mount.  This reduces inode update IO.

Or, try "-o async" if you want to live dangerously.

Or, you could even try ext2 instead of ffs.  Rumour has it that
ext2 is faster.  I don't know, never having tried it.

Or use an SSD for your export partition.

Or, crank up a copy of Linux and run NFS v4 server.  That will
definitely be faster than any NFS v3 server.  V4 streams
writes, to be very simplistic about it.

(I think you already confirmed it's NFS v3 with TCP, not NFS v2.
You should turn UDP off for reliability reasons, not performance.)



J



Re: NFS Server performance

2023-12-06 Thread Steven Surdock
No confusion.  The read and write buffer sizes would be above layer 3.  VMware 
offers little ability to modify read and write sizes.  It did inspire me to 
find this:  https://kb.vmware.com/s/article/1007909

NFS.ReceiveBufferSize

This is the size of the receive buffer for NFS sockets. This value is chosen 
based on internal performance testing. VMware does not recommend adjusting this 
value.
 
NFS.SendBufferSize

The size of the send buffer for NFS sockets. This value is chosen based on 
internal performance testing. VMware does not recommend adjusting this value.

...

ESXi 6.0, 6.5, 6.7:
Default Net.TcpipHeapMax is 512MB. Default send/receive socket buffer size of 
NFS is 256K each. So each socket consumes ~512K+. For 256 shares, that would be 
~128M. The default TcpipHeapMax is sufficient even for 256 mounts. It's not 
required to increase it.
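For the record, those show up as ordinary ESXi advanced settings, so
something along these lines should at least list them (I have not tried
changing them, and VMware's note above says not to):

  esxcli system settings advanced list -o /NFS/ReceiveBufferSize
  esxcli system settings advanced list -o /NFS/SendBufferSize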

Also,  the man page for mount_nfs implies -w is useful for UDP mounts.  I have 
verified that this mount is using TCP. 

  -w writesize
 Set the write data size to the specified value.  Ditto the
 comments w.r.t. the -r option, but using the "fragments dropped
 after timeout" value on the server instead of the client.  Note
 that both the -r and -w options should only be used as a last
 ditch effort at improving performance when mounting servers that
 do not support TCP mounts.

-Steve S.

-Original Message-
From: owner-m...@openbsd.org  On Behalf Of Carsten Reith
Sent: Wednesday, December 6, 2023 11:41 AM
To: misc@openbsd.org
Subject: Re: NFS Server performance


Steven Surdock  writes:

> The client is VMWare ESXi, so my options are limited.  I tried 
> enabling jumbo frames (used 9000) and this made very little 
> difference.
>

Is it possible that you are confusing the network layers here?  Jumbo frames
are layer 2; the read and write sizes referred to are at layer 3.  You can
try to set them as suggested, independently of the frame size.



Re: NFS Server performance

2023-12-06 Thread Carsten Reith


Steven Surdock  writes:

> The client is VMWare ESXi, so my options are limited.  I tried
> enabling jumbo frames (used 9000) and this made very little
> difference.
>

Is it possible that you are confusing the network layers here?  Jumbo frames
are layer 2; the read and write sizes referred to are at layer 3.  You can
try to set them as suggested, independently of the frame size.



Re: NFS Server performance

2023-12-06 Thread Steven Surdock
The client is VMWare ESXi, so my options are limited.  I tried enabling jumbo 
frames (used 9000) and this made very little difference.
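For reference, the OpenBSD side of that is just an MTU bump on the
interface, e.g. in /etc/hostname.vmx0 (the address here is a placeholder):

  inet 192.0.2.10 255.255.255.0 NONE mtu 9000

The vSwitch and VMkernel port on the ESXi side have to be set to 9000 as
well.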

-Original Message-
From: Zé Loff  
Sent: Tuesday, December 5, 2023 10:12 AM
To: Steven Surdock 
Cc: misc@openbsd.org
Subject: Re: NFS Server performance


On Tue, Dec 05, 2023 at 02:06:44PM +, Steven Surdock wrote:
> Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is primarily 
> used to store VMWare VM backups from HOST01, so VMWare is the NFS client.  
> I'm seeing transfers of about 1.2 MB/s.  
> 
> SCP from HOST01 to OBSD VM (same filesystem) copies at 110 MB/s.  
> Iperf3 from a VM on HOST01 to OBSD on HOST02 gives me 900+ mbps.  
> OBSD is a stock install running -stable.
> NFS is using v3 (according to VMWare) and using TCP.  During the NFS 
> transfer the RECV-Q on the OBSD interface runs either 64000+ or 0.
> I tried both em and vmx interface types.
> 
> /etc/rc.conf.local:
> mountd_flags="" # for normal use: ""
> nfsd_flags="-tun 4" # Crank the 4 for a busy NFS fileserver
> ntpd_flags=""   # enabled during install
> portmap_flags=""# for normal use: ""
> 
> Any clues on where to look to (greatly) improve NFS performance would be 
> appreciated.

Increasing write size, read size and the read-ahead count on the client has 
helped me.

E.g., on the client's fstab:

  10.17.18.10:/shared/stuff  /nfs/stuff  nfs  
rw,nodev,nosuid,intr,tcp,bg,noatime,-a=4,-r=32768,-w=32768 0 0

Cheers
Zé

-- 
 



Re: NFS Server performance

2023-12-05 Thread Zé Loff


On Tue, Dec 05, 2023 at 02:06:44PM +, Steven Surdock wrote:
> Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is primarily 
> used to store VMWare VM backups from HOST01, so VMWare is the NFS client.  
> I'm seeing transfers of about 1.2 MB/s.  
> 
> SCP from HOST01 to OBSD VM (same filesystem) copies at 110 MB/s.  
> Iperf3 from a VM on HOST01 to OBSD on HOST02 gives me 900+ mbps.  
> OBSD is a stock install running -stable.
> NFS is using v3 (according to VMWare) and using TCP
> During the NFS transfer the RECV-Q on the OBSD interface runs either 64000+ 
> or 0.
> I tried both em and vmx interface types.
> 
> /etc/rc.conf.local:
> mountd_flags="" # for normal use: ""
> nfsd_flags="-tun 4" # Crank the 4 for a busy NFS fileserver
> ntpd_flags=""   # enabled during install
> portmap_flags=""# for normal use: ""
> 
> Any clues on where to look to (greatly) improve NFS performance would be 
> appreciated.

Increasing write size, read size and the read-ahead count on the client has 
helped me.

E.g., on the client's fstab:

  10.17.18.10:/shared/stuff  /nfs/stuff  nfs  
rw,nodev,nosuid,intr,tcp,bg,noatime,-a=4,-r=32768,-w=32768 0 0
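The same options as a one-off mount_nfs(8) invocation, for reference (paths
as in the example above):

  mount_nfs -T -b -i -a 4 -r 32768 -w 32768 -o nodev,nosuid,noatime \
      10.17.18.10:/shared/stuff /nfs/stuff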

Cheers
Zé

> 
> -Steve S.
> 

-- 
 



NFS Server performance

2023-12-05 Thread Steven Surdock
Using an OBSD 7.4 VM on VMware as an NFS server on HOST02.   It is primarily 
used to store VMWare VM backups from HOST01, so VMWare is the NFS client.  I'm 
seeing transfers of about 1.2 MB/s.  

SCP from HOST01 to OBSD VM (same filesystem) copies at 110 MB/s.  
Iperf3 from a VM on HOST01 to OBSD on HOST02 gives me 900+ mbps.  
OBSD is a stock install running -stable.
NFS is using v3 (according to VMWare) and using TCP
During the NFS transfer the RECV-Q on the OBSD interface runs either 64000+ or 
0.
I tried both em and vmx interface types.
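(For anyone reproducing the RECV-Q observation: the socket queues show up in
netstat, e.g.

  netstat -an | grep 2049

2049 being the nfsd port.)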

/etc/rc.conf.local:
mountd_flags="" # for normal use: ""
nfsd_flags="-tun 4" # Crank the 4 for a busy NFS fileserver
ntpd_flags=""   # enabled during install
portmap_flags=""# for normal use: ""
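The corresponding /etc/exports entry is the usual one-liner along these
lines (path and network here are placeholders, not the real ones):

  /backups -maproot=root -network=192.0.2.0 -mask=255.255.255.0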

Any clues on where to look to (greatly) improve NFS performance would be 
appreciated.

-Steve S.



Re: ESXi client / NFS server performance

2010-12-01 Thread Steven Surdock
The upgrade to ESXi 4.1 went well (after several mis-starts) and
performance from the console to the OBSD NFS server is improved, but not
great.  I see about 120 Mbps on the network.  This is roughly double the
ESXi 3.5 performance for me.

-Steve S.



Re: ESXi client / NFS server performance

2010-11-26 Thread Steven Surdock
 -Original Message-
 From: owner-m...@openbsd.org [mailto:owner-m...@openbsd.org] On Behalf
Of
 Steve Shockley
 Sent: Tuesday, November 23, 2010 8:45 AM
 To: misc@openbsd.org
 Subject: Re: ESXi client / NFS server performance

 On 11/14/2010 1:04 PM, Steven Surdock wrote:
  Greetings, I'm attempting to use an OBSD 4.8-stable machine as an
NFS
  server for storing snapshots from an ESXi 3.5 server.  Unfortunately
  my NFS performance seems relatively poor at about 55 Mbps (6 MBps).

 I've found ESX performance over NFS is horrible unless you're doing
async
 mounts or using an NFS server that cheats with sync mounts (like a
 NetApp filer where it writes to NVRAM and sends the response before
it's
 actually on disk).

Is there a way to force async mounts to an OBSD NFS server?  There
doesn't seem to be a way to force async on an ESXi client.  It appears
other OSs allow forcing through an /etc/exports option (example below).
Thanks.
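Linux, for instance, takes it as an export option in its /etc/exports,
roughly:

  /export   192.0.2.0/24(rw,async)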

-Steve S.



Re: ESXi client / NFS server performance

2010-11-23 Thread Steve Shockley

On 11/14/2010 1:04 PM, Steven Surdock wrote:

Greetings, I'm attempting to use an OBSD 4.8-stable machine as an NFS
server for storing snapshots from an ESXi 3.5 server.  Unfortunately my
NFS performance seems relatively poor at about 55 Mbps (6 MBps).


I've found ESX performance over NFS is horrible unless you're doing 
async mounts or using an NFS server that cheats with sync mounts (like 
a NetApp filer where it writes to NVRAM and sends the response before 
it's actually on disk).




Re: ESXi client / NFS server performance

2010-11-17 Thread Stuart Henderson
On 2010-11-14, Steven Surdock ssurd...@engineered-net.com wrote:
 Greetings, I'm attempting to use an OBSD 4.8-stable machine as an NFS
 server for storing snapshots from an ESXi 3.5 server.  Unfortunately my
 NFS performance seems relatively poor at about 55 Mbps (6 MBps).  Both
 machines are linked up at 1 Gbps via an HP ProCurve 1850G and I'm
 writing to wd0.  I've looked at disk i/o, upped
 net.inet.tcp.recvspace/sendspace (NFS session is using TCP) and done my
 share of googling, but I'm at a bit of a loss on how to figure out where
 my problem lies.  Any pointers would be appreciated.

When I ran ESXi 3.5 I found i/o was painfully slow, even with local
disk (and NFS performance with an OpenBSD server was appalling).
In general i/o performance in ESXi 4.1 seems a lot better, but
I haven't tried it with OpenBSD as NFS server yet.



Re: ESXi client / NFS server performance

2010-11-17 Thread Steven Surdock
 -Original Message-
 From: owner-m...@openbsd.org [mailto:owner-m...@openbsd.org] On Behalf
Of
 Stuart Henderson
 Subject: Re: ESXi client / NFS server performance

 On 2010-11-14, Steven Surdock ssurd...@engineered-net.com wrote:
...
  where my problem lies.  Any pointers would be appreciated.

 When I ran ESXi 3.5 I found i/o was painfully slow, even with local
disk
 (and NFS performance with an OpenBSD server was appalling).
 In general i/o performance in ESXi 4.1 seems a lot better, but I
haven't
 tried it with OpenBSD as NFS server yet.

Thanks.  I've performed some other testing and found other clients work
acceptably, so I'll assume it's an ESXi issue.  Iperf tops out at 300
Mbps on my network, which sucks but it's better than the 50 Mbps NFS was
doing.  I'll try upgrading to 4.1 this weekend and see if that helps.

-Steve S.



Re: ESXi client / NFS server performance

2010-11-17 Thread Emille Blanc
This may be a bit late, but for what it's worth, here is 4.8-release as an 
ESXi 4.1 client, without any knob tweaking and with pf running the default 
ruleset.  I haven't done anything with ESXi 3.5 though, so I'm not sure what 
to say on that front.


---...@memnarch:/home $ uname -a
OpenBSD memnarch.sarlok.com 4.8 GENERIC#136 i386
---...@memnarch:/home $ sudo dd if=/dev/zero of=/home/test.dat bs=16k 
count=32000

32000+0 records in
32000+0 records out
524288000 bytes transferred in 15.163 secs (34576797 bytes/sec)
---...@memnarch:/home $ sudo dd of=/dev/null if=/home/test.dat bs=16k 
count=32000

32000+0 records in
32000+0 records out
524288000 bytes transferred in 13.937 secs (37617959 bytes/sec)
---...@memnarch:/home $ mount | grep exports
172.16.100.250:/exports/home on /home type nfs (v3, udp, timeo=100, 
retrans=101)

---...@memnarch:/home $ dmesg | head -10
OpenBSD 4.8 (GENERIC) #136: Mon Aug 16 09:06:23 MDT 2010
dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (GenuineIntel 686-class) 2 GHz
cpu0: 
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,SSE3,SSSE3,SSE4.1,SSE4.2,POPCNT

real mem  = 536375296 (511MB)
avail mem = 517644288 (493MB)
mainbus0 at root
bios0 at mainbus0: AT/286+ BIOS, date 09/22/09, BIOS32 rev. 0 @ 0xfd780, 
SMBIOS rev. 2.4 @ 0xe0010 (98 entries)

bios0: vendor Phoenix Technologies LTD version 6.00 date 09/22/2009
bios0: VMware, Inc. VMware Virtual Platform


On 10-11-17 7:16 AM, Steven Surdock wrote:

-Original Message-
From: owner-m...@openbsd.org [mailto:owner-m...@openbsd.org] On Behalf Of
Stuart Henderson
Subject: Re: ESXi client / NFS server performance

On 2010-11-14, Steven Surdock ssurd...@engineered-net.com wrote:
...
where my problem lies.  Any pointers would be appreciated.

When I ran ESXi 3.5 I found i/o was painfully slow, even with local disk
(and NFS performance with an OpenBSD server was appalling).
In general i/o performance in ESXi 4.1 seems a lot better, but I haven't
tried it with OpenBSD as NFS server yet.

Thanks.  I've performed some other testing and found other clients work
acceptably, so I'll assume it's an ESXi issue.  Iperf tops out at 300
Mbps on my network, which sucks but it's better than the 50 Mbps NFS was
doing.  I'll try upgrading to 4.1 this weekend and see if that helps.

-Steve S.




--
http://blog.sarlok.com/
Sometimes all the left hand needs to know is where the right hand is, so 
it knows where to point the blame.




Re: ESXi client / NFS server performance

2010-11-17 Thread Steven Surdock
My OBSD (4.8-stable) virtual machine (ESXi 3.5) to OBSD (4.8-stable)
physical machine isn't too bad:

ssurd...@builder03$ sudo dd if=/dev/zero of=/mnt/VMware/test.dat bs=16k
count=
32000+0 records in
32000+0 records out
524288000 bytes transferred in 19.466 secs (2691 bytes/sec)

But, from the ESXi console it sucks.

Was your NFS server physical or virtual?

-Steve S.

 -Original Message-
 From: owner-m...@openbsd.org [mailto:owner-m...@openbsd.org] On Behalf Of
 Emille Blanc
 Sent: Wednesday, November 17, 2010 11:59 PM
 To: misc@openbsd.org
 Subject: Re: ESXi client / NFS server performance

 This may be a bit late, but for what it's worth, 4.8-release as an ESXi
 4.1 client without any knob tweaking and pf running the default ruleset.
 Haven't done anything with ESXi 3.5 though, so I'm not sure what to say
 on that front.

 ---...@memnarch:/home $ uname -a
 OpenBSD memnarch.sarlok.com 4.8 GENERIC#136 i386
 ---...@memnarch:/home $ sudo dd if=/dev/zero of=/home/test.dat bs=16k count=32000
 32000+0 records in
 32000+0 records out
 524288000 bytes transferred in 15.163 secs (34576797 bytes/sec)
 ---...@memnarch:/home $ sudo dd of=/dev/null if=/home/test.dat bs=16k count=32000
 32000+0 records in
 32000+0 records out
 524288000 bytes transferred in 13.937 secs (37617959 bytes/sec)
 ---...@memnarch:/home $ mount | grep exports
 172.16.100.250:/exports/home on /home type nfs (v3, udp, timeo=100, retrans=101)
 ---...@memnarch:/home $ dmesg | head -10
 OpenBSD 4.8 (GENERIC) #136: Mon Aug 16 09:06:23 MDT 2010
 dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC
 cpu0: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (GenuineIntel 686-class) 2 GHz
 cpu0: FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,SSE3,SSSE3,SSE4.1,SSE4.2,POPCNT
 real mem  = 536375296 (511MB)
 avail mem = 517644288 (493MB)
 mainbus0 at root
 bios0 at mainbus0: AT/286+ BIOS, date 09/22/09, BIOS32 rev. 0 @ 0xfd780,
 SMBIOS rev. 2.4 @ 0xe0010 (98 entries)
 bios0: vendor Phoenix Technologies LTD version 6.00 date 09/22/2009
 bios0: VMware, Inc. VMware Virtual Platform



Re: ESXi client / NFS server performance

2010-11-17 Thread Emille Blanc
The NFS server (4.8 release) from which the exports are mounted is a 
virtual guest on the ESXi 4.1 host.
Transfer rates between the virtual NFS server guest and another guest are 
a little faster on average (which makes some modicum of sense in my 
mind...) than between the NFS guest and a physically separate machine.


The VM guests are using em(4) if that's any help.

On 10-11-17 9:28 PM, Steven Surdock wrote:

My OBSD (4.8-stable) virtual machine (ESXi 3.5) to OBSD (4.8-stable)
physical machine isn't too bad:

ssurd...@builder03$ sudo dd if=/dev/zero of=/mnt/VMware/test.dat bs=16k
count=
32000+0 records in
32000+0 records out
524288000 bytes transferred in 19.466 secs (2691 bytes/sec)

But, from the ESXi console it sucks.

Was your NFS server physical or virtual?

-Steve S.







--
http://blog.sarlok.com/
Sometimes all the left hand needs to know is where the right hand is, so 
it knows where to point the blame.




ESXi client / NFS server performance

2010-11-14 Thread Steven Surdock
Greetings, I'm attempting to use an OBSD 4.8-stable machine as an NFS
server for storing snapshots from an ESXi 3.5 server.  Unfortunately my
NFS performance seems relatively poor at about 55 Mbps (6 MBps).  Both
machines are linked up at 1 Gbps via an HP ProCurve 1850G and I'm
writing to wd0.  I've looked at disk i/o, upped
net.inet.tcp.recvspace/sendspace (NFS session is using TCP) and done my
share of googling, but I'm at a bit of a loss on how to figure out where
my problem lies.  Any pointers would be appreciated.
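For reference, the recvspace/sendspace bump is just a pair of sysctl.conf
lines of this shape (the values here are placeholders, not necessarily the
ones used):

  net.inet.tcp.recvspace=262144
  net.inet.tcp.sendspace=262144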

##/etc/fstab:
...
/dev/wd0g /home ffs rw,nodev,nosuid,softdep 1 2

##dmesg:

OpenBSD 4.8-stable (GENERIC) #0: Mon Oct 25 13:02:04 EDT 2010
r...@builder03:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Pentium(R) D CPU 2.80GHz (GenuineIntel 686-class) 2.80
GHz
cpu0:
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,
CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID
,CX16,xTPR
real mem  = 1064726528 (1015MB)
avail mem = 1037348864 (989MB)
mainbus0 at root
bios0 at mainbus0: AT/286+ BIOS, date 08/02/07, BIOS32 rev. 0 @ 0xf0010,
SMBIOS rev. 2.5 @ 0xfd5f0 (21 entries)
bios0: vendor American Megatrends Inc. version V5.6 date 08/02/2007
bios0: MICRO-STAR INTERNATIONAL CO.,LTD MS-7267
acpi0 at bios0: rev 0
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP APIC MCFG SLIC OEMB
acpi0: wakeup devices PS2K(S4) PS2M(S4) EUSB(S4) MC97(S4) P0P4(S4)
P0P5(S4) P0P6(S4) P0P7(S4) P0P8(S4) P0P9(S4) USB0(S1) USB1(S1) USB2(S1)
USB3(S1) P0P2(S4) P0P1(S4) SLPB(S4)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: apic clock running at 199MHz
cpu at mainbus0: not configured
ioapic0 at mainbus0: apid 2 pa 0xfec0, version 20, 24 pins
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 1 (P0P4)
acpiprt2 at acpi0: bus -1 (P0P5)
acpiprt3 at acpi0: bus -1 (P0P6)
acpiprt4 at acpi0: bus -1 (P0P7)
acpiprt5 at acpi0: bus -1 (P0P8)
acpiprt6 at acpi0: bus -1 (P0P9)
acpiprt7 at acpi0: bus 2 (P0P2)
acpicpu0 at acpi0
acpibtn0 at acpi0: SLPB
acpibtn1 at acpi0: PWRB
bios0: ROM list: 0xc/0xaa00! 0xcb000/0x1200
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 Intel 82945G Host rev 0x02
vga1 at pci0 dev 2 function 0 Intel 82945G Video rev 0x02
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
intagp0 at vga1
agp0 at intagp0: aperture at 0xe000, size 0x1000
inteldrm0 at vga1: apic 2 int 16 (irq 10)
drm0 at inteldrm0
azalia0 at pci0 dev 27 function 0 Intel 82801GB HD Audio rev 0x01:
apic 2 int 16 (irq 10)
azalia0: codecs: Realtek ALC883
audio0 at azalia0
ppb0 at pci0 dev 28 function 0 Intel 82801GB PCIE rev 0x01: apic 2 int
16 (irq 10)
pci1 at ppb0 bus 1
uhci0 at pci0 dev 29 function 0 Intel 82801GB USB rev 0x01: apic 2 int
23 (irq 5)
uhci1 at pci0 dev 29 function 1 Intel 82801GB USB rev 0x01: apic 2 int
19 (irq 11)
uhci2 at pci0 dev 29 function 2 Intel 82801GB USB rev 0x01: apic 2 int
18 (irq 7)
uhci3 at pci0 dev 29 function 3 Intel 82801GB USB rev 0x01: apic 2 int
16 (irq 10)
ehci0 at pci0 dev 29 function 7 Intel 82801GB USB rev 0x01: apic 2 int
23 (irq 5)
usb0 at ehci0: USB revision 2.0
uhub0 at usb0 Intel EHCI root hub rev 2.00/1.00 addr 1
ppb1 at pci0 dev 30 function 0 Intel 82801BA Hub-to-PCI rev 0xe1
pci2 at ppb1 bus 2
dc0 at pci2 dev 1 function 0 ADMtek AN983 rev 0x11: apic 2 int 17 (irq
10), address 00:04:5a:54:a7:5b
acphy0 at dc0 phy 1: AC_UNKNOWN 10/100 PHY, rev. 0
ami0 at pci2 dev 2 function 0 AMI MegaRAID rev 0x02: apic 2 int 18
(irq 7)
ami0: AMI MegaRAID i4, 64b/lhc, FW N661, BIOS v1.01, 16MB RAM
ami0: 4 channels, 0 FC loops, 1 logical drives
scsibus0 at ami0: 1 targets
sd0 at scsibus0 targ 0 lun 0: AMI, Host drive #00,  SCSI2 0/direct
fixed
sd0: 476938MB, 512 bytes/sec, 976769024 sec total
re0 at pci2 dev 4 function 0 Realtek 8169SC rev 0x10: RTL8169/8110SCd
(0x1800), apic 2 int 20 (irq 11), address 00:16:17:d9:9f:a3
rgephy0 at re0 phy 7: RTL8169S/8110S PHY, rev. 2
ichpcib0 at pci0 dev 31 function 0 Intel 82801GB LPC rev 0x01: PM
disabled
pciide0 at pci0 dev 31 function 1 Intel 82801GB IDE rev 0x01: DMA,
channel 0 configured to compatibility, channel 1 configured to
compatibility
wd0 at pciide0 channel 0 drive 1: WDC WD5000AAKB-00H8A0
wd0: 16-sector PIO, LBA48, 476940MB, 976773168 sectors
wd0(pciide0:0:1): using PIO mode 4, Ultra-DMA mode 5
pciide0: channel 1 disabled (no drives)
pciide1 at pci0 dev 31 function 2 Intel 82801GB SATA rev 0x01: DMA,
channel 0 configured to native-PCI, channel 1 configured to native-PCI
pciide1: using apic 2 int 19 (irq 11) for native-PCI interrupt
wd1 at pciide1 channel 0 drive 0: WDC WD1600JS-00MHB0
wd1: 16-sector PIO, LBA48, 152627MB, 312581808 sectors
wd1(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 6
wd2 at pciide1 channel 1 drive 1: WDC WD4000AAKS-00TMA0
wd2: 16-sector PIO, LBA48, 381554MB, 781422768 sectors