Hi,
On 14/05/19 13:47, mich...@fritscher.net wrote:
Am 2019-05-14 09:37, schrieb mich...@fritscher.net:
Am 2019-05-13 23:51, schrieb Michael Fritscher:
On 13.05.19 17:05, Gert Doering wrote:
Hi,
On Mon, May 13, 2019 at 04:20:41PM +0200, mich...@fritscher.net wrote:
I experienced a high system cpu usage of OpenVPN in qemu on Windows.
Both with hax and whpx (kvm-like accelerators). Apparently, it makes
calls to hpet_read (or acpi_pm_read if hpet is disabled) at 1 kHz.
This amounts to an overhead of over 85% according to the "perf" program.
Is that normal? It seems to only happen on the server, which also
streams UDP packets with about 1 MB/sec.
If you have rate-limiting configured on the server, it needs to
check time vs. buckets - that could be one of the reasons.
Or if you have too much debugging configured and it needs to timestamp
(lots of) debug lines written out...
gert
Hi,
I'm not using the shaper functionality of OpenVPN, and verb is set
to 3.
But we are indeed using tc-htb... Btw, on VMware I don't see these calls
even though I'm using exactly the same image.
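(For reference, the active shaping setup and its packet/byte counters can be
inspected with `tc`; `eth0` below is a placeholder for whatever interface the
OpenVPN traffic actually traverses:)

```shell
# Show the configured qdiscs and their statistics for the interface
# carrying the tunnelled traffic ('eth0' is an assumed device name).
tc -s qdisc show dev eth0
```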
I've uploaded the data of the perf run on
https://mifritscher.de/austausch/openvpn/ .
Best regards,
Michael Fritscher
Good morning,
I've uploaded the perf file from VMware for comparison under
https://mifritscher.de/austausch/openvpn/vmware/ . Additionally, I've
uploaded the dmesg output for VMware and Qemu.
One interesting thing: it does not always happen, and when it doesn't,
I can provoke it by playing with the priority, CPU affinity etc. of
the qemu process. So it really seems to be a timing problem that can
be triggered by scheduling issues on the host?
Best regards,
Michael Fritscher
Hello again,
I've uploaded a "good" case with qemu. In this good case, there are 2
additional lines in dmesg:
[ 0.028000] tsc: Fast TSC calibration failed
[ 4.544469] tsc: Refined TSC clocksource calibration: 2808.001
And the perf log shows no hpet_read calls at all!
Somehow it looks like a severe timer problem...
You're running qemu+hax on Windows and inside the VM you're running
Linux, right?
What happens if you disable tc-htb inside the VM? Does the hpet_read
overhead disappear? If so, then you know it's the 'tc' that is causing
the CPU load.
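(A minimal sketch of disabling the shaping for this test — assuming the htb
qdisc is attached as root on `eth0`; adjust the device name to match the VM:)

```shell
# Confirm what is currently attached, then delete the root qdisc;
# the kernel falls back to its default qdisc (e.g. pfifo_fast/fq_codel).
# 'eth0' is an assumed device name. Requires root.
tc -s qdisc show dev eth0
tc qdisc del dev eth0 root
```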
Another thing to test is to run a UDP iperf scan inside the VM with
tc-htb enabled to a host outside, e.g.
iperf -u -s                  (on the outside host)
iperf -u -b 1G -c <server>   (inside the VM)
and then see if you ALSO get a high hpet_read overhead - if you do, then
you know it's not OpenVPN-related at all.
Finally, I am not too surprised that you don't see this behavior with
VMware, as qemu/kvm is much worse at timing than VMware is. You can play
with some of the Linux kernel options to turn the hi-res timers on or
off - but for tc to work, you need high-precision timing routines.
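(For reference, the clocksource the guest kernel actually uses can be checked
— and, with root, switched — via sysfs; whether a usable tsc is offered at all
depends on the hypervisor:)

```shell
# List the clocksources the guest kernel detected, and the one in use.
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# Switching to tsc (root required) avoids the expensive hpet_read /
# acpi_pm_read paths, provided the hypervisor exposes a stable TSC:
# echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource
```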
HTH,
JJK
_______________________________________________
Openvpn-users mailing list
Openvpn-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openvpn-users