2012-10-31 13:58, Sebastian Gabler wrote:
2012-10-30 19:21, Sebastian Gabler wrote:
>Whereas that's relative: performance is still at a quite miserable 62
>MB/s through a gigabit link. Apparently, my environment has room for
>improvement.
Does your gigabit ethernet use Jumbo Frames (like 9000 or up to 16KB,
depending on your NICs, switches and other networking gear) for
unrouted (L2) storage links? It is said that traditional MTU=1500
carries too much per-packet overhead and preamble delay between
packets, which effectively limits a gigabit link to 700-800 Mbps...


> The MTU is 1500 on both source and target system, and there is no
> fragmentation happening.

The point of Jumbo frames (in unrouted L2 ethernet segments) is to
remove many of those overheads - CSMA/CD delays being a large
contributor - and send unfragmented chunks of 9-16KB in size,
increasing the local network efficiency.
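
For reference, on OpenIndiana the current and allowed MTU values can be
checked and raised with dladm; the link name e1000g0 below is just an
example, and the switch and the peer host have to allow the larger MTU
as well:

  # show the current MTU and the values the driver allows
  dladm show-linkprop -p mtu e1000g0

  # raise the MTU to 9000 (repeat on the other host; some drivers want
  # the link unplumbed before the change is accepted)
  dladm set-linkprop -p mtu=9000 e1000g0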

> On the target system I am seeing writes up to
> 160 MB/s with frequent zpool iostat probes. When iostat probes are up to
> 5s+, there is a steady stream of 62 MB/s.

I believe this *may* mean that your networking buffer receives data
into memory (ZFS cache) at 62 MB/s, then every 5s the dirty cache
is sent to disks during a TXG commit at whatever speed it can burst
(160 MB/s in your case).
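
A quick way to check whether that is what happens is to sample the
pool every second instead of every few seconds; "tank" below is just
a placeholder for your pool name:

  # 1-second samples should show mostly-idle periods followed by short
  # bursts near 160 MB/s if writes are batched into periodic TXG commits
  zpool iostat tank 1

  # on illumos the commit interval (in seconds, 5 by default on current
  # builds) is the kernel tunable zfs_txg_timeout, readable with mdb
  echo "zfs_txg_timeout/D" | mdb -k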

> At this time I am not sure if
> that is indeed a networking issue. I am also not sure how jumbo frames
> could provide an interesting benefit here. The usually alleged 15%
> (which is already on the high side) would not make or break the
> use case.

Mostly elaborated above.

Other ways to reduce networking lags were discussed by other
responders, including using netcat to pipe the stream quickly, or
ssh without encryption, with a cheap cipher, or with the HPC patches.
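
Roughly, the netcat and cheap-cipher ssh variants look like the lines
below; the pool, dataset, snapshot and host names and the port number
are only placeholders:

  # receiving host: listen and feed the stream into zfs receive
  # (some netcat flavors want "nc -l -p 9000" instead)
  nc -l 9000 | zfs receive -F tank/backup

  # sending host, plain TCP through netcat
  zfs send tank/data@snap | nc recv-host 9000

  # or over ssh with a cheap cipher instead of the default
  zfs send tank/data@snap | ssh -c arcfour recv-host "zfs receive -F tank/backup"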

Based on some experience with NFS and OpenVPN I might also
suggest trying UDP vs. TCP (i.e. with netcat), though this would
probably be on the unsafe side: UDP-based programs implement their
own retries (like NFS) or accept the loss of data (like VoIP) as
they deem necessary, and zfs send probably does neither; it is
rather fragile already.
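
If someone does want to try the UDP side of that comparison, netcat's
-u switch does it, but as said above nothing retransmits lost
datagrams, so a single drop ruins the stream; treat it as an
experiment only (names and port are placeholders again):

  # receiver; a lost or reordered datagram will corrupt the stream
  nc -u -l 9000 | zfs receive -F tank/backup
  # sender
  zfs send tank/data@snap | nc -u recv-host 9000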

//Jim

