On Thu, May 23, 2013 at 09:46:07AM +0200, Stefan Hajnoczi wrote:
> On Wed, May 22, 2013 at 09:48:21PM +0800, Amos Kong wrote:
> > On Wed, May 22, 2013 at 11:32:27AM +0200, Stefan Hajnoczi wrote:
> > > On Wed, May 22, 2013 at 12:57:35PM +0800, Amos Kong wrote:
> > > > When I try to hotplug 28 * 8 multiple-function devices into a guest
> > > > on an old host kernel, the ioeventfds in the host kernel are
> > > > exhausted and qemu fails to allocate ioeventfds for blk/nic devices.
> > > >
> > > > It's better to report a detailed error here.
> > > >
> > > > Signed-off-by: Amos Kong <[email protected]>
> > > > ---
> > > > kvm-all.c | 4 ++++
> > > > 1 files changed, 4 insertions(+), 0 deletions(-)
> > >
> > > It would be nice to make kvm bus scalable so that the hardcoded
> > > in-kernel I/O device limit can be lifted.
> >
> > Last March I increased the kernel's NR_IOBUS_DEVS to 1000 (a limit is
> > still needed for security) and made the kvm_io_range array resize dynamically.
>
> The maximum should not be hardcoded. File descriptors, maximum memory,
> etc. are all controlled by rlimits. And since ioeventfds are file
> descriptors, they are already limited by the maximum number of file
> descriptors.
To implement dynamic resizing of the kvm_io_range array, I allocate a
new array with the new size and free the old one whenever the array
needs to grow or shrink. The array is only resized when ioeventfds are
added or removed, so it will not affect performance.
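
Roughly, the idea looks like the sketch below. This is only an
illustration, not the actual kvm_io_bus code: the struct and function
names (io_bus, io_range, io_bus_add_range) are hypothetical, and in the
kernel the old/new array swap would additionally have to be published
under the bus's SRCU protection so readers never see a stale pointer.

    /* Hypothetical sketch of resize-on-register; not real KVM code. */
    #include <stdlib.h>
    #include <string.h>

    struct io_range {
        unsigned long addr;
        int len;
        void *dev;
    };

    struct io_bus {
        int count;
        struct io_range *ranges;   /* resized only on add/remove */
    };

    static int io_bus_add_range(struct io_bus *bus, const struct io_range *r)
    {
        /* Allocate a larger array, copy the old entries, append the new
         * one, then free the old array.  The slow path runs only when a
         * device is registered, so the I/O dispatch path is unaffected. */
        struct io_range *new_ranges;

        new_ranges = malloc((bus->count + 1) * sizeof(*new_ranges));
        if (!new_ranges)
            return -1;

        memcpy(new_ranges, bus->ranges, bus->count * sizeof(*new_ranges));
        new_ranges[bus->count] = *r;   /* real code keeps the array sorted */

        free(bus->ranges);
        bus->ranges = new_ranges;
        bus->count++;
        return 0;
    }
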
> Why is there a need to impose a hardcoded limit?
I will send a patch to fix it.
> Stefan
--
Amos.