Can anybody help me out here?  I've checked what I can, but I'm still
relatively new to Solaris and I'm stumped as to what's going on.

I have two Solaris machines connected via gigabit ethernet, but network transfers
between the two seem to be limited to around 10MB/s.  I first spotted it during a
zfs send / receive, but have since tried mbuffer with similar results.  It's as if
the network is running at 100Mb/s, yet both machines and the switch appear to have
everything negotiated at gigabit speeds.
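
Just to show my working there (the gigabit ceiling is my rough assumption for a
single TCP stream, not something I've measured on this hardware):

10MB/s x 8        =  ~80Mbit/s   -- about what a saturated 100Mb/s link delivers
1000Mbit/s / 8    =  125MB/s raw, so maybe 110-115MB/s once TCP/IP and ethernet
                     overheads are taken off

So I'm seeing roughly a tenth of what I'd hope for from these links.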

On both machines, Solaris believes the link is running at gigabit full duplex:
server 1 # dladm show-linkprop rge0
LINK         PROPERTY        VALUE          DEFAULT        POSSIBLE
rge0         speed           1000           --             -- 
rge0         duplex          full           --             half,full 

server 2 # dladm show-linkprop nge0
LINK         PROPERTY        VALUE          DEFAULT        POSSIBLE
nge0         speed           1000           --             -- 
nge0         duplex          full           --             half,full 
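
In case it's relevant, the next thing I was going to compare on the two boxes is
the rest of the link properties, something like this (I'm assuming the rge and nge
drivers both expose the mtu and flowctrl properties here; I haven't checked):

server 1 # dladm show-linkprop -p mtu,flowctrl rge0
server 2 # dladm show-linkprop -p mtu,flowctrl nge0

My thinking is that a flow control or MTU mismatch with the switch could pull
throughput right down even with both ends negotiated at gigabit.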

The switch is a 3Com 2948-SFP Plus.  It reports that both ports have 
auto-negotiated to gigabit full duplex, and are currently running at 9% of 
capacity.

I tried to use mbuffer to eliminate disk bottlenecks and test the raw network 
speed with:
server 1 (receiving):  /usr/local/bin/mbuffer -I server2:10001 -s 128k -m 512M > /dev/null
server 2 (sending):    cat /dev/zero | /usr/local/bin/mbuffer -s 128k -m 512M -O server1:10001

That averaged around 10MB/s with a peak rate of just over 11MB/s.
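
The other thing I was planning to watch while one of these transfers runs is the
interface error counters, along these lines (I'm assuming netstat's -I flag and
the interval argument behave the same on both machines; rge0 on server 1, nge0 on
server 2):

server 1 # netstat -i -I rge0 10
server 2 # netstat -i -I nge0 10

If the input/output error or collision counts climb while the transfer is running,
I'd start suspecting a duplex or cabling problem rather than anything in ZFS or
mbuffer.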

What else should I be looking at to find out why I'm getting such poor 
performance between these servers?

thanks,

Ross
