3:53 AM
To: Kinsella, Ray <ray.kinse...@intel.com>; Kevin O'Connor <ke...@koconnor.net>
Cc: Tan, Jianfeng <jianfeng@intel.com>; seab...@seabios.org; Michael Tsirkin <m...@redhat.com>; qemu-devel@nongnu.org; Gerd Hoffmann <kra...@redhat.com>
Subject: Re: [Qemu-devel] >256 Virtio-net-pci hotplug
On 25/07/2017 21:00, Kinsella, Ray wrote:
Hi Marcel,
Hi Ray,
On 24/07/2017 00:14, Marcel Apfelbaum wrote:
On 24/07/2017 7:53, Kinsella, Ray wrote:
Even if I am not aware of how much time it would take to init a bare-metal
PCIe Root Port, it seems too much.
So I repeated the testing for 64, 128, 256 and 512 ports. I ensured the
configuration
On 24/07/2017 7:53, Kinsella, Ray wrote:
Hi Ray,
Thank you for the details,
So as it turns out, at 512 devices it is nothing to do with SeaBIOS; it was the
kernel again.
It is taking quite a while to start up, a little over two hours (7489 seconds).
The main culprits appear to be enumerating/initializing the PCI Express ports
and enabling interrupts.
The PCI Express Root
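One way to attribute boot time like this is to diff the dmesg timestamps around the kernel's PCI probe lines. A minimal sketch, using a fabricated two-line sample log (the device IDs and timestamps are made up for illustration, not taken from the measurements above):

```shell
# Estimate how long the kernel spent in PCI enumeration by diffing the
# dmesg timestamps of the first and last probe lines. The sample log
# below is fabricated for illustration.
log='[    1.200000] pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
[ 7200.500000] pcieport 0000:00:02.0: enabling device (0000 -> 0003)'
# Extract the leading "[ seconds ]" timestamp from the first/last lines.
first=$(printf '%s\n' "$log" | head -n1 | sed 's/^\[ *\([0-9.]*\)\].*/\1/')
last=$(printf '%s\n' "$log" | tail -n1 | sed 's/^\[ *\([0-9.]*\)\].*/\1/')
awk -v a="$first" -v b="$last" 'BEGIN { printf "PCI window: %.1f seconds\n", b - a }'
# prints: PCI window: 7199.3 seconds
```

On a real guest the same extraction can be pointed at `dmesg` output filtered for `pci`/`pcieport` lines to see how much of the wall-clock boot went to enumeration versus interrupt setup.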
On Sun, Jul 23, 2017 at 07:28:01PM +0300, Marcel Apfelbaum wrote:
> On 22/07/2017 2:57, Kinsella, Ray wrote:
> > When scaling up to 512 Virtio-net devices SeaBIOS appears to really slow
> > down when configuring PCI Config space - haven't managed to get this to
> > work yet.
If there is a slowdown
On 22/07/2017 2:57, Kinsella, Ray wrote:
Hi Marcel
Hi Ray,
On 21/07/2017 01:33, Marcel Apfelbaum wrote:
On 20/07/2017 3:44, Kinsella, Ray wrote:
That's strange. Please ensure the virtio devices are working in
virtio 1.0 mode (disable-modern=0,disable-legacy=1).
Let us know any problems you see.
Not sure what yet, I will try scaling it with
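The flags quoted above can be put on a QEMU command line roughly as follows; this is a sketch only, and the `id`/`bus` names (`rp0`, `net0`) and the q35 machine type are placeholders I chose, not taken from the thread:

```shell
# Sketch: a virtio-net-pci device forced into virtio 1.0 (modern-only)
# mode via disable-modern=0,disable-legacy=1, attached to a PCIe Root
# Port. All ids and the machine type are illustrative assumptions.
DEV="virtio-net-pci,id=net0,bus=rp0,disable-modern=0,disable-legacy=1"
echo "qemu-system-x86_64 -M q35 -device pcie-root-port,id=rp0,bus=pcie.0,chassis=1 -device $DEV"
```

With `disable-legacy=1` the device exposes only the virtio 1.0 (modern) interface, which is what the suggestion above asks to verify.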
On 20/07/2017 3:44, Kinsella, Ray wrote:
Hi Marcel,
Hi Ray,
You can use multi-function PCIe Root Ports, this will give you 8 ports
per slot, if you have 16 empty slots (I think we have more) you reach
128 root ports.
Then you can use multi-function virtio-net-pci devices, this will
give you 8 functions per port, so you reach the target of
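The "8 ports per slot" layout above can be sketched as a small generator for the `-device` arguments. Everything not stated in the thread, the `rp$i` ids, the chassis numbering, and the starting slot address `0x2`, is an assumption for illustration:

```shell
# Sketch: emit -device arguments for N multi-function PCIe Root Ports,
# packing 8 functions per slot on pcie.0. Function 0 of each slot gets
# multifunction=on. Ids, chassis numbers and the starting slot address
# (0x2) are illustrative assumptions, not from the thread.
N=16
args=""
i=0
while [ "$i" -lt "$N" ]; do
  slot=$((2 + i / 8))              # device (slot) number on pcie.0
  fn=$((i % 8))                    # function 0..7 within that slot
  mf=""
  [ "$fn" -eq 0 ] && mf=",multifunction=on"
  args="$args -device pcie-root-port,id=rp$i,bus=pcie.0,chassis=$((i + 1)),addr=$(printf '0x%x' "$slot").$fn$mf"
  i=$((i + 1))
done
echo "$args"
```

Setting `N=128` would fill 16 slots and give the 128 root ports mentioned above; each port can then host a (possibly multi-function) virtio-net-pci device.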
On 18/07/2017 0:50, Kinsella, Ray wrote:
Hi folks,
Hi Ray,
I am trying to create a VM that supports hot-plugging a large number of
virtio-net-pci devices, up to 1000 devices initially.
From the docs (see below) and from playing with QEMU, it looks like there are
two options.
Both with limitations.
PCI Express switch
It looks like using a
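For reference, the capacity arithmetic the thread converges on (16 slots of 8 root-port functions, then 8 virtio-net-pci functions per port) comfortably covers the 1000-device target. A trivial sketch of that arithmetic:

```shell
# Sketch: the capacity arithmetic from the thread -- 16 slots x 8
# root-port functions, then 8 virtio-net-pci functions per root port.
slots=16
ports=$((slots * 8))          # multi-function PCIe Root Ports
devices=$((ports * 8))        # multi-function virtio-net-pci devices
echo "$ports root ports -> $devices hot-pluggable devices"
# prints: 128 root ports -> 1024 hot-pluggable devices
```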