Hi list,
Given that the virtio interfaces are stated as achieving 5-8Gb
throughput now with vhost, as opposed to 1Gb without, how should their
link speed be defined when the choices are 2500M or 1000M?
I have them plotted out to make a 10Gb bond out of a pair, counting on
5Gb max each, which
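As far as I can tell, the configured speed is only what the guest
advertises and doesn't actually throttle virtio, so it may be largely
cosmetic. A quick way to see what a guest currently reports (eth0 here
is just an example; older virtio drivers may report nothing at all,
which would itself suggest the value is cosmetic):

  ethtool eth0 | grep -i -e speed -e duplex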
Interesting, indeed. Looking forward to it as well.
On Wed, 24 Nov 2010 20:56 +0100, André Weidemann
andre.weidem...@web.de wrote:
Hi,
On 24.11.2010 15:06, Prasad Joshi wrote:
I have been following the KVM mailing list for the last few months and
have learned that KVM does not have the GPU
if you had infinitely fast processors, every virtual network would be
infinitely fast.
I see on a Vyatta VM that an interface's link speed attribute can be
explicitly defined, along with duplex.
Possible values are 10, 100, and 1000 Mb, and they are configured
independently of the NIC driver/model.
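For reference, setting those on Vyatta looks roughly like this (eth0 is
illustrative, and the exact syntax may vary between releases):

  configure
  set interfaces ethernet eth0 speed 1000
  set interfaces ethernet eth0 duplex full
  commit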
Hi Everyone,
I'm impressed with all the activity I see here since joining the list
this year.
It helps to reinforce that I chose the right technology. Thanks.
The -device method's vhost=on option recently became available to us at
the ProxmoxVE project, and I'm preparing to start making use of it.
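For anyone else experimenting, the invocation is roughly as follows
(the tap name, MAC, and image path are illustrative, and the host needs
the vhost_net module loaded first):

  modprobe vhost_net
  qemu-system-x86_64 -enable-kvm -m 1024 \
    -drive file=disk.img,if=virtio \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56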
On Fri, 29 Oct 2010 13:26 +0200, Michael S. Tsirkin m...@redhat.com
wrote:
On Thu, Oct 28, 2010 at 12:48:57PM +0530, Krishna Kumar2 wrote:
Krishna Kumar2/India/IBM wrote on 10/28/2010 10:44:14 AM:
In practice users are very unlikely to pin threads to CPUs.
I may be misunderstanding what
On Thu, 14 Oct 2010 14:07 +0200, Avi Kivity a...@redhat.com wrote:
On 10/14/2010 12:54 AM, Anthony Liguori wrote:
On 10/13/2010 05:32 PM, Anjali Kulkarni wrote:
What's the motivation for such a huge number of interfaces?
Ultimately to bring multiple 10Gb bonds into a Vyatta guest.
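Concretely, the plan on the Vyatta side is something along these lines
(eth0/eth1 as the two virtio members; syntax from memory, so check it
against your release):

  configure
  set interfaces bonding bond0 mode 802.3ad
  set interfaces ethernet eth0 bond-group bond0
  set interfaces ethernet eth1 bond-group bond0
  commit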
---
The PCI bus has only 32 slots (devices), 3 taken by chipset + vga, and
a 4th if you have, for example, a virtio disk. Are you sure these are
33 PCI devices and not 33 PCI functions?
No, not sure.
Apparently my statement was based on an uninformed assumption.
I tested using a VM that had 30
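For the record, one easy way to tell from inside a Linux guest: lspci
labels each entry as bus:slot.function, so the function count and the
occupied-slot count can be compared directly (requires pciutils):

  lspci | wc -l                                          # PCI functions
  lspci | cut -d' ' -f1 | cut -d. -f1 | sort -u | wc -l  # occupied slots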
Hi again everybody,
One of the admins at the ProxmoxVE project was gracious enough to
quickly release a package including the previously discussed change to
allow up to 32 NICs in qemu.
For future reference, the .deb is here:
It's 8 otherwise, and after the patch is applied it still only goes to
28 for some reason (presumably the 32 PCI slots minus the four taken by
the chipset, VGA, and a virtio disk, as noted above).
28 is acceptable for my needs, so I'll step aside from here and leave
it to the experts.
As for the new -device method, that's all fine and good, but AFAIK it's
not implemented on my platform, so this was the
Hello list:
I'm working on a project that calls for the creation of a firewall in
KVM.
While adding a 20-interface trunk of virtio adapters to bring in a dual
10Gb bond, I've discovered an 8-NIC limit in QEMU.
I found the following thread in the list archives detailing a similar
problem:
Forgot to cc list, forwarding.
In this case, I think you're going to want to send your patch to the
qemu-devel (on CC) mailing list (perhaps in addition to sending it
here, to the kvm list).
Will do, thanks for the pointer.
Before I do so, I'd like to bring up one thing that comes to mind.
Have you tried creating NICs with -device?
I'm not sure what that is, will look into it, thanks.
I'm using ProxmoxVE, and currently add them via a web interface.
Someone happens to host a screenshot of that part here:
http://c-nergy.be/blog/wp-content/uploads/Proxmox_Net2.png
On Tue, 05 Oct
Attached is a patch that allows qemu to have up to 32 NICs, without
using the qdev -device method.
max_nics.patch
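For list readers skimming later: the attached patch is authoritative,
but as I understand it the essence of the change is raising a
compile-time limit in qemu's net.h, roughly:

  #define MAX_NICS 8   /* before */
  #define MAX_NICS 32  /* after the patch */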