Note that it’s important to consider the difference between throughput and
latency when discussing “fast”.  (Ideally we’d have very low latency and
very high bandwidth.  However, in the real world there is usually some
tradeoff between the two.)

For throughput, I’d imagine it would be impossible to beat the VNIC +
etherstub solution that has already been discussed.

However, because of the architecture of VNICs, which rely upon software
rings (two, I think, with an etherstub; Robert, please correct me if I’m
wrong), I would not be surprised to learn that you can achieve lower
latency with two separate physical NICs, one dedicated to each zone.

Experimentally, I’m finding that the VNIC architecture roughly doubles the
latency I see compared with the best-performing (in terms of latency) NIC
I can find.  (That best-performing NIC is the SolarFlare NIC, the driver
for which I’ll be submitting for review soon; savvy folks can probably
find it in my GitHub repos, I just don’t want to publicly ask people to
start reviewing it yet.  Soon, though.)

What do those latencies look like?

Well, it’s hard for me to be sure at this point, as I’ve got a rather
complex configuration that includes multiple TCP traversals, both in the
kernel and on my test client.  Under severe load, my latencies with the
revised sfxge driver are about half what I see with ixgbe when both
drivers are tuned for latency.  I consider that a substantial improvement.
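
(If anyone wants a rough way to compare two configurations themselves, a
trivial TCP ping-pong under load is usually enough to see the difference.
The sketch below is only an illustration, not my actual harness; the
host/port arguments, the 200-byte message size, the iteration count, and
the assumption of a plain echo server on the far side are all placeholders.)

/*
 * Rough TCP ping-pong latency sketch (client side).  Point it at any
 * simple echo server on the peer; host, port, message size, and
 * iteration count here are just placeholders.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/time.h>

#define MSG_SIZE    200     /* itty-bitty frames, like our real traffic */
#define ITERS       100000

int
main(int argc, char **argv)
{
    struct addrinfo hints, *res;
    struct timeval start, end;
    char buf[MSG_SIZE];
    int fd, one = 1;

    if (argc != 3) {
        (void) fprintf(stderr, "usage: %s host port\n", argv[0]);
        return (1);
    }

    (void) memset(&hints, 0, sizeof (hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(argv[1], argv[2], &hints, &res) != 0) {
        (void) fprintf(stderr, "getaddrinfo failed\n");
        return (1);
    }

    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        return (1);
    }

    /* Disable Nagle so every small write goes out immediately. */
    (void) setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof (one));

    (void) memset(buf, 'x', sizeof (buf));
    (void) gettimeofday(&start, NULL);

    for (int i = 0; i < ITERS; i++) {
        size_t got = 0;

        if (write(fd, buf, sizeof (buf)) != sizeof (buf)) {
            perror("write");
            return (1);
        }
        /* Wait for the full echo before sending the next message. */
        while (got < sizeof (buf)) {
            ssize_t n = read(fd, buf + got, sizeof (buf) - got);
            if (n <= 0) {
                perror("read");
                return (1);
            }
            got += (size_t)n;
        }
    }

    (void) gettimeofday(&end, NULL);

    (void) printf("mean round trip: %.2f usec\n",
        ((end.tv_sec - start.tv_sec) * 1e6 +
        (end.tv_usec - start.tv_usec)) / ITERS);

    (void) close(fd);
    freeaddrinfo(res);
    return (0);
}

Run it once across the etherstub VNICs and once across the back-to-back
NICs (ideally with the machine otherwise quiet), and compare the
per-round-trip numbers.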

(Sadly, the current sfxge drivers are less well suited to shared use in
virtualization, as they do not expose more than a single ring group.
SolarFlare and I plan to fix that later, but probably after the first
integration of the drivers in the coming weeks.)

Anyway, you might try the back-to-back NICs and measure the performance if
latency is king.  (In my world, HFT, it definitely is.  And a dirty
secret: we actually don’t care at all about bandwidth.  We rarely see more
than 1 Gbit/s of consumption, but it’s all itty-bitty 200-byte frames that
have to be dealt with as quickly as possible; I count every microsecond.)

  - Garrett

On Tue, Jan 26, 2016 at 11:43 AM, Robert Mustacchi <r...@joyent.com> wrote:

> On 1/26/16 8:21, Humberto Ramirez wrote:
> > "Practically, the limits of link speed for a VNIC are based on the
> > underlying device or the kernel data path, so it can saturate a 10
> > Gbit/s device. On the flip side, due to how the hardware virtualization
> > is currently implemented, it is unlikely that you will see speeds much
> > higher than 1 Gbit/s."
> >
> > Robert, so based on your experience I will not see 2 VMs talking faster
> > than 1 Gbit/s? (At least not in SmartOS)
> > Did I understand you correctly?
> 
> That's only true for *KVM* guests. Though it may vary.
> 
> Traditional zones or lx zones can easily saturate 10+ Gbit/s.
> 
> Robert
> 


