Hi Alexandre
Many thanks for your prompt reply

And please, if you can, let me know the results of your tests.

----- Original Message ----- From: "Alexandre DERUMIER" <aderum...@odiso.com>
To: "Cesar Peschiera" <br...@click.com.py>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, August 23, 2015 6:50 AM
Subject: Re: [pve-devel] The network performance future for VMs


Hi Cesar,

I will try to run the tests again next week with qemu 2.4 and the latest virtio driver,
with different Windows versions.





----- Original Message -----
From: "Cesar Peschiera" <br...@click.com.py>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Sunday, August 23, 2015 09:10:42
Subject: Fw: [pve-devel] The network performance future for VMs

Hi again, Alexandre

Thanks for your prompt reply, and please let me understand this better...

Just as a memory refresher, please see this link:
http://forum.proxmox.com/threads/20523-Quorum-problems-with-PVE-2-3-and-3-3?p=104808#post104808
2 queues; more don't improve performance (queues are only for inbound
traffic).
For outbound traffic, as far as I remember, the difference is huge between
2008r2 and 2012r2 (something like 1.5 vs 6 Gbit/s).
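(For context, figures like these are normally measured with several parallel streams; a minimal sketch of such a test, where iperf and the address 10.0.0.1 for the receiving side are only assumptions, not your real setup:)

# receiving side (placeholder address 10.0.0.1)
iperf -s

# inside the VM: 4 parallel TCP streams for 30 seconds
iperf -c 10.0.0.1 -P 4 -t 30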

Now, is the speed of the network different with win2k8r2?...
... If that is correct, let me ask you: what is your setup?

Moreover, in this link (Virtio Win Drivers):
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG

NEWS!!!!:
I see something that may be very interesting:
* Thu Jun 04 2015 Cole Robinson <crobi...@redhat.com> - 0.1.105-1
- Update to virtio-win-prewhql-0.1-105
- BZ 1223426 NetKVM Fix for performance degradation with multi-queue

Maybe with this version of the driver, the network driver for Win2k8r2 will be
faster (and for any Windows system).
I would like to hear your opinion, especially about Win2k8r2.

Best regards
Cesar

----- Original Message -----
From: "Alexandre DERUMIER" <aderum...@odiso.com>
To: "Cesar Peschiera" <br...@click.com.py>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, August 19, 2015 5:08 AM
Subject: Re: [pve-devel] The network performance future for VMs


Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional for QEMU, but I think not for
Windows VMs... is that correct???

I really don't know ;) I thought that qemu support was enough, maybe not
...


Moreover, all these questions are because I want to improve the network
speed in a Win2k8r2 VM
... is there anything we can do to get better performance?
(I would like to reach a maximum of 20 Gb/s, since I have 2 ports of 10
Gb/s each, configured with LACP and with the Linux stack)
Note: I know that I will need two simultaneous network connections to get
10 Gb/s on each link.

From my tests, I get a lot better performance with win2012r2 than with win2K8r2.

Also, you can try the NIC multiqueue feature; it should improve performance
with multiple streams.
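(A minimal sketch of how it can be enabled on a PVE guest NIC; VM ID 100, vmbr0 and 4 queues are only example values:)

qm set 100 -net0 virtio,bridge=vmbr0,queues=4

# a Linux guest also has to enable the extra queues on the virtio NIC:
ethtool -L eth0 combined 4

(For Windows guests, as far as I understand, the multiqueue support sits in the NetKVM driver settings, and the gain only shows up with several parallel streams, not with a single connection.)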

I don't remember exactly, but I think I was around 7-8 Gbit/s for 1 Windows VM.
But still a lot lower than a Linux VM.


Now, it's clear that DPDK should improve performance for high packet rates (pps).



----- Original Message ----- From: "Cesar Peschiera" <br...@click.com.py>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, August 19, 2015 10:50:37
Subject: Re: [pve-devel] The network performance future for VMs

Thanks Alexandre for your prompt response.

It seems to be easy with the vhost-user virtual network card, and this one
can't work with the Linux bridge, because it's userland

I forgot to say that OVS was configured on two ports for the LAN link, and
two other ports with the Linux stack for DRBD in balance-rr NIC-to-NIC (OVS
does not have the balance-rr option).
It was with that setup that I had problems with DRBD, so I preferred to
disable OVS completely on my servers.
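(For reference, this is roughly what the Linux-stack balance-rr bond for the NIC-to-NIC DRBD link looks like in /etc/network/interfaces; eth2/eth3 and the 10.10.10.x address are only placeholders, not my real values:)

auto bond1
iface bond1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode balance-rr
        bond-miimon 100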

I'm not sure, but maybe dpkg on the Linux stack can only work with host
physical interfaces and not qemu virtual interfaces.
dpkg? I assume you mean DPDK.

Please see in this link:
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

Page 18:
Config I - DPDK with VFIO device assignment (future)
According to the graph, it is functional for QEMU, but I think not for
Windows VMs... is that correct???

Moreover, all these questions are because I want to improve the network
speed in a Win2k8r2 VM
... is there anything we can do to get better performance?
(I would like to reach a maximum of 20 Gb/s, since I have 2 ports of 10
Gb/s each, configured with LACP and with the Linux stack)
Note: I know that I will need two simultaneous network connections to get
10 Gb/s on each link.
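(To illustrate what I mean by LACP with the Linux stack, a sketch of the /etc/network/interfaces setup; eth0/eth1, the addresses and the hash policy are only placeholders and assumptions, and the switch ports must of course be configured for LACP as well:)

auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2
        netmask 255.255.255.0
        gateway 192.168.0.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

(The layer3+4 hash policy is only an assumption so that different connections can be spread over both links; a single TCP stream still stays on one 10 Gb/s port, which is why the two simultaneous connections are needed.)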


----- Original Message ----- From: "Alexandre DERUMIER" <aderum...@odiso.com>
To: "Cesar Peschiera" <br...@click.com.py>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Wednesday, August 19, 2015 1:39 AM
Subject: Re: [pve-devel] The network performance future for VMs


So now my question is: can DPDK also be activated with the Linux
stack?

I need to dig a little more into this.
Intel seems to push ovs-dpdk in every conference I have seen.
(It seems to be easy with the vhost-user virtual network card, and this one
can't work with the Linux bridge, because it's userland)
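(Just to illustrate the idea, a rough sketch of a vhost-user port on an OVS bridge with the DPDK/netdev datapath; this assumes ovs-vswitchd was already started with DPDK support, the bridge/port names are made up, and the exact options can differ between OVS and qemu versions:)

# userspace (DPDK) datapath bridge with a vhost-user port
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser

# qemu then attaches to the socket created by OVS; guest RAM has to be
# shared, hugepage-backed memory for vhost-user to work:
qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user0 \
  -netdev type=vhost-user,id=net0,chardev=char0 \
  -device virtio-net-pci,netdev=net0

(That is also why it can't go through the in-kernel Linux bridge: the whole datapath lives in userland.)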


I'm not sure, but maybe dpkg on the Linux stack can only work with host
physical interfaces and not qemu virtual interfaces.



----- Original Message ----- From: "Cesar Peschiera" <br...@click.com.py>
To: "aderumier" <aderum...@odiso.com>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, August 18, 2015 21:25:46
Subject: Re: [pve-devel] The network performance future for VMs

Oh, ok.

In the past, I had problems with DRBD 8.4.5 when OVS was enabled, so I had
to change my setup from OVS to the Linux stack; after that, I had no more
problems with DRBD.

About the problem with OVS and DRBD: I did not test the problem in depth
(that was during the preproduction phase), but if I remember correctly, maybe
the problem appears when an "OVS IntPort" is enabled, or maybe only when OVS
is enabled in the setup.

I was using PVE 3.3

So now my question is: can DPDK also be activated with the Linux stack?

----- Original Message ----- From: "Alexandre DERUMIER" <aderum...@odiso.com>
To: "Cesar Peschiera" <br...@click.com.py>
Cc: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, August 18, 2015 8:57 AM
Subject: Re: [pve-devel] The network performance future for VMs


So, I would like to ask about the future of PVE in terms of network
performance.

DPDK will be implemented in openvswitch through vhost-user;
I'm waiting for ovs 2.4 to look at this.


----- Original Message ----- From: "Cesar Peschiera" <br...@click.com.py>
To: "pve-devel" <pve-devel@pve.proxmox.com>
Sent: Tuesday, August 18, 2015 13:00:59
Subject: [pve-devel] The network performance future for VMs

Hi developers of PVE

I would like to talk about the network speed for VMs:

I saw on this link (official Red Hat website):
https://videos.cdn.redhat.com/summit2015/presentations/12752_red-hat-enterprise-virtualization-hypervisor-kvm-now-in-the-future.pdf

On page 19 of this PDF, I see some interesting info:
Network Function Virtualization (NFV)
Throughput and packets/sec, "RHEL7.x + DPDK (Data Plane Development Kit)":

Millions of packets per second:
KVM = 208
Docker = 215
Bare-metal = 218
HW maximum = 225

Between KVM and bare-metal, the difference is small: only 10 Mpps.

I also see a list of compatible HW NICs on this link:
http://dpdk.org/doc/nics

So, I would like to ask about the future of PVE in terms of network
performance.

Best regards
Cesar

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


