On 13-05-04 08:23 AM, Stephan von Krawczynski wrote:
> On Sat, 04 May 2013 08:00:53 -0700
> Geoff Nordli <geo...@gnaa.net> wrote:
>
>> On 13-05-04 02:35 AM, Stephan von Krawczynski wrote:
>>> Sorry guys, but I cannot let go this topic.
>>> Maybe it helps to understand the reason why I am very interested in a 
>>> solution:
>>>
>>> Imagine you have two guests, one server of some network file system and one
>>> client for it.
>>> If the client file-stats some 10,000 files (which creates a small single
>>> packet for every file), there is a big difference between having latencies
>>> around 0.100 ms and 10-100 ms (which is quite a normal value while using
>>> virtio-net).
>>> So bandwidth does not help you a lot here.
>>> If anybody has an idea what to patch in the OSE virtio-net driver, feel
>>> free to make suggestions.
>>> If even the devs are not interested in this topic I'll probably end up with
>>> qemu, because this question is a real show-stopper.
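To put rough numbers on the 10,000-stat scenario above (a back-of-the-envelope sketch; the 0.1 ms figure is taken from the quoted paragraph, and 50 ms is just a mid-range pick from the quoted 10-100 ms):

```python
# Back-of-the-envelope: N sequential stat() round trips are dominated
# by per-request latency, not bandwidth.
n_files = 10_000                 # files to stat, one small packet each

fast_rtt_s = 0.100 / 1000.0      # 0.100 ms per round trip
slow_rtt_s = 50.0 / 1000.0       # 50 ms, mid-range of the quoted 10-100 ms

print(f"fast: {n_files * fast_rtt_s:.0f} s total")   # fast: 1 s total
print(f"slow: {n_files * slow_rtt_s:.0f} s total")   # slow: 500 s total
```

So at these latencies the same workload goes from about a second to over eight minutes, regardless of link bandwidth.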
>>>
>> There is definitely something not right there.
>>
>> Have you tried other network drivers?  How about the Intel drivers?
>>
>> As well, are you doing host-only or bridged network configuration?
>>
>> For example, I just spun up two Ubuntu 12.04 guests using the Intel 1000
>> MT (82540EM) driver on the host-only network with this output.
>>
>> --- 10.10.64.101 ping statistics ---
>> 82 packets transmitted, 82 received, 0% packet loss, time 81110ms
>> rtt min/avg/max/mdev = 0.294/0.921/1.562/0.176 ms
>>
>>
>>
>> Geoff
> Ah, thanks Geoff, someone to talk to :-)
>
> Well, I did try all kinds of setups. The basic setup is openSUSE 12.3, kernel
> is 3.8.11, vbox is 4.2.12.
> In terms of network I am generally talking about bridged mode.
> Regarding drivers I tried:
>
> PCNet PCI-II  : works, but bad performance, around 150 MBit/s between two
>                 guests
> PCNet FAST III: exactly like above; no real wonder, as this is the same
>                 driver on the guest side
>
> Intel Desktop e1000 MT: works, the performance is around 400-500 MBit/s
> Intel Server Adapters : all broken; I can shoot them down with iperf in a
>                         minute, after which the guest's network goes offline
>                         as if all cables were disconnected
>
> All these have in common that the latency looks quite like physical, around
> 0.150-0.300 ms.
>
> virtio-net: works, the performance is around 800-900 MBit/s, but the latency
> hops around from 0.000 ms to 800-900 ms (no kidding) _during the same ping
> command_. Almost every ping has a completely different time.
>
> All tested with guest kernels 3.2.44 and 3.4.42. There seems to be no
> difference between them.
> The host does not swap, btw, and iperfs above 940 MBit/s on its physical
> network.
>
> Maybe things are related to some timer or scheduler configuration in the
> host's kernel setup, I don't know. I have not found any hints about how best
> to configure a kernel for VirtualBox, for either host or guest.
>
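One way to make the jitter comparison concrete: the mdev figure ping prints is just the (population) standard deviation of the RTT samples, so a stable NIC and a jittery virtio-net run separate cleanly even if their averages look similar. A sketch with made-up sample values (the RTT lists below are illustrative, not actual measurements):

```python
import math

def ping_stats(rtts_ms):
    """min/avg/max/mdev the way iputils ping reports them.

    mdev = sqrt(mean(rtt^2) - mean(rtt)^2), i.e. the population
    standard deviation of the samples.
    """
    n = len(rtts_ms)
    avg = sum(rtts_ms) / n
    mdev = math.sqrt(sum(r * r for r in rtts_ms) / n - avg * avg)
    return min(rtts_ms), avg, max(rtts_ms), mdev

# Illustrative samples: a stable emulated NIC vs. a jittery virtio-net run
stable = [0.29, 0.31, 0.30, 0.32, 0.28, 0.30]
jittery = [0.2, 120.0, 0.3, 850.0, 0.1, 45.0]

for name, samples in (("stable", stable), ("jittery", jittery)):
    mn, avg, mx, mdev = ping_stats(samples)
    print(f"{name}: rtt min/avg/max/mdev = "
          f"{mn:.3f}/{avg:.3f}/{mx:.3f}/{mdev:.3f} ms")
```

For the stable samples mdev comes out around a hundredth of a millisecond; for the jittery ones it is in the hundreds of milliseconds, which matches the "almost every ping is different" symptom described above.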

I would file a bug:  https://www.virtualbox.org/wiki/Bugtracker

and post the bug reference and description on the dev list:

https://www.virtualbox.org/mailman/listinfo/vbox-dev

Good luck with this; I see you poke around on the glusterfs list as well.

Post back with the results you find.

Geoff
