Ilpo Järvinen:
> > Great find Ilpo! Did you have to do some iptables-trickery for this
> > testing? I have ping working between proxy and appvm, but iperf and nc
> > both tell me no route to host?
>
> Yes, I did (it replies with ICMP by default). You'll need to fill in the
> vif IP-address to t
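For reference, a Qubes proxyvm rejects unsolicited inbound connections with an ICMP error by default, which is what iperf and nc report as "no route to host" while ping still works. A minimal sketch of the kind of firewall rule involved, assuming iperf2's default TCP port 5001 (the interface wildcard and port are assumptions, not from the thread):

```shell
# In the proxyvm running the iperf server: accept inbound iperf traffic
# arriving on any vif interface (vif+ is an iptables interface wildcard).
# Port 5001 is iperf2's default; iperf3 listens on 5201 instead.
sudo iptables -I INPUT -i vif+ -p tcp --dport 5001 -j ACCEPT
```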
On Fri, 2 Feb 2018, Jarle Thorsen wrote:
> Ilpo Järvinen:
> > Can you try if you get better throughput between a proxy vm and an appvm
> > using this kind of topology?
> >
> > sys-net <-> iperf-srv (proxyvm) <-> iperf-cli (appvm)
> >
> > I could push ~10Gbps with one flow and slightly more with more
> > parallel flows between them.
Ilpo Järvinen:
> Can you try if you get better throughput between a proxy vm and an appvm
> using this kind of topology?
>
> sys-net <-> iperf-srv (proxyvm) <-> iperf-cli (appvm)
>
> I could push ~10Gbps with one flow and slightly more with more parallel
> flows between them.
Great find Ilpo! Did you have to do some iptables-trickery for this
testing? I have ping working between proxy and appvm, but iperf and nc
both tell me no route to host?
Can you try if you get better throughput between a proxy vm and an appvm
using this kind of topology?
sys-net <-> iperf-srv (proxyvm) <-> iperf-cli (appvm)
I could push ~10Gbps with one flow and slightly more with more parallel
flows between them. But between sys-net and iperf-srv vms I've a lo
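The suggested test can be sketched with plain iperf commands (the address is a placeholder for the proxyvm's IP as seen from the appvm; Qubes internal addresses are typically in 10.137.0.0/16):

```shell
# In the proxyvm (iperf-srv): start the server.
iperf -s

# In the appvm (iperf-cli): one flow for 30 seconds...
iperf -c 10.137.2.10 -t 30
# ...then several parallel flows to compare aggregate throughput.
iperf -c 10.137.2.10 -t 30 -P 4
```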
Ilpo Järvinen:
> I found this:
> https://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
Thanks, I'll have a look.
> It might be that roughly 4Gbps might be what you can get for cross-vm with
> one flow (but those results are quite old).
>
> I guess that th
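The wiki page above concerns xen-netfront/netback multi-queue. A quick, hedged way to see how many queues a device actually got (standard Linux sysfs; kernel thread naming varies by kernel version):

```shell
# In the appvm: one rx-N/tx-N directory pair per queue on the frontend.
ls /sys/class/net/eth0/queues/

# In the netvm: netback kernel threads usually carry the vif name,
# one per queue (exact names depend on the kernel version).
ps ax | grep '[v]if'
```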
On Thu, 1 Feb 2018, Jarle Thorsen wrote:
> Ilpo Järvinen:
> > I'd next try to tweak the txqueuelen (at the netvm side):
> > sudo ifconfig vifxx.0 txqueuelen
> >
> > Appvm side (eth0) seems to have 1000 but the other side (vifxx.0) has
> > only 64 by default, which seems a bit small for high-performance
> > transfers.
Ilpo Järvinen:
> I'd next try to tweak the txqueuelen (at the netvm side):
> sudo ifconfig vifxx.0 txqueuelen
>
> Appvm side (eth0) seems to have 1000 but the other side (vifxx.0) has
> only 64 by default, which seems a bit small for high-performance transfers.
Thanks a lot for your help so
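Note that the quoted ifconfig command takes the new length as its final argument. A sketch with both the legacy ifconfig form and the current iproute2 equivalent (vif12.0 is a placeholder for the actual vif name; 1000 matches the appvm side's default):

```shell
# In the netvm: raise the vif's transmit queue from 64 to 1000.
sudo ip link set dev vif12.0 txqueuelen 1000
# Legacy equivalent of the quoted command:
sudo ifconfig vif12.0 txqueuelen 1000
# Verify (the qlen value appears in the first line of output):
ip link show dev vif12.0
```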
On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> Mike Keehan:
>
> > It sounds a bit ambitious to run 10gb per sec from one VM through
> > another and onto the wire. I suspect you are memory speed limited
> > if you are using a straightforward desktop pc.
>
> I'm not sure what is the limiting factor
Mike Keehan:
> It sounds a bit ambitious to run 10gb per sec from one VM through
> another and onto the wire. I suspect you are memory speed limited
> if you are using a straightforward desktop pc.
I'm not sure what is the limiting factor (memory speed, xen overhead?), but I
just did an iperf
On Wednesday, 31 January 2018 12:15:01 UTC, Jarle Thorsen wrote:
> Alex Duboise:
> > Interested to find out too. Have you tried from FirewallVM?
>
> Same slow performance when iperf is run from a FirewallVM that is connected
> to the netvm.
>
> > Could you also test what happens when during the
On Wed, 31 Jan 2018 05:23:19 -0800 (PST)
Jarle Thorsen wrote:
> Ilpo Järvinen:
> > Please also check that GSO (generic-segmentation-offload) is on at
> > the sending appvm eth0 (I don't remember if the dependency logic
> > causes it to get toggled off when SG was off'ed and cannot check it
> > ATM
Ilpo Järvinen:
> Please also check that GSO (generic-segmentation-offload) is on at
> the sending appvm eth0 (I don't remember if the dependency logic causes it to
> get toggled off when SG was off'ed and cannot check it ATM myself).
Yes, GSO is automatically turned on when SG is enabled.
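For completeness, the offload state can be checked directly with ethtool (feature names as printed by `ethtool -k`):

```shell
# In the appvm: confirm SG and GSO are both "on" for eth0.
sudo ethtool -k eth0 | grep -E 'scatter-gather:|generic-segmentation-offload:'
```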
> > I'm
On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> Ilpo Järvinen:
> > Scatter-Gather.
> >
> > > Are you talking about enabling sg on the virtual network device in the
> > > netvm?
> > >
> > > Something like "sudo ethtool -K vif12.0 sg on" ?
> >
> > Yes. For both that and the eth0 in appvm.
>
> Thi
Ilpo Järvinen:
> Scatter-Gather.
>
> > Are you talking about enabling sg on the virtual network device in the
> > netvm?
> >
> > Something like "sudo ethtool -K vif12.0 sg on" ?
>
> Yes. For both that and the eth0 in appvm.
This made a huge performance boost! (single threaded iperf went from
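The fix above, spelled out for both ends of the link (vif12.0 is the example name from the quoted message; the actual vif index differs per VM):

```shell
# In the netvm: enable scatter-gather on the backend vif.
sudo ethtool -K vif12.0 sg on
# In the appvm: enable it on the frontend as well.
sudo ethtool -K eth0 sg on
```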
Alex Duboise:
> Interested to find out too. Have you tried from FirewallVM?
Same slow performance when iperf is run from a FirewallVM that is connected to
the netvm.
> Could you also test what happens when during the load test you start a
> disposable VM? Does it drop
Running iperf in the netv
On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> On Wednesday, 31 January 2018 11:12:33 UTC+1, Ilpo Järvinen wrote:
> > On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> > > On Wednesday, 31 January 2018 10:50:09 UTC+1, Jarle Thorsen wrote:
> > > > My netvm (Fedora 26 template) has a 10gbe network card, an
On Wednesday, 31 January 2018 11:12:33 UTC+1, Ilpo Järvinen wrote:
> On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> > On Wednesday, 31 January 2018 10:50:09 UTC+1, Jarle Thorsen wrote:
> > > My netvm (Fedora 26 template) has a 10gbe network card, and from
> > > within the netvm I have no problem sa
On Wednesday, 31 January 2018 09:50:09 UTC, Jarle Thorsen wrote:
> My netvm (Fedora 26 template) has a 10gbe network card, and from within the
> netvm I have no problem saturating the 10Gbit link using iperf to an external
> server.
>
> However, in any vm sending traffic through this netvm I ca
On Wed, 31 Jan 2018, Jarle Thorsen wrote:
> On Wednesday, 31 January 2018 10:50:09 UTC+1, Jarle Thorsen wrote:
> > My netvm (Fedora 26 template) has a 10gbe network card, and from
> > within the netvm I have no problem saturating the 10Gbit link using
> > iperf to an external server.
> > However
On Wednesday, 31 January 2018 10:50:09 UTC+1, Jarle Thorsen wrote:
> My netvm (Fedora 26 template) has a 10gbe network card, and from within the
> netvm I have no problem saturating the 10Gbit link using iperf to an external
> server.
>
> However, in any vm sending traffic through this netvm I