On Mon, Aug 13, 2012 at 01:58:21PM +0300, Avi Kivity wrote:
> On 08/13/2012 01:38 PM, Michael S. Tsirkin wrote:
> > On Mon, Aug 13, 2012 at 01:31:36PM +0300, Avi Kivity wrote:
> >> On 08/13/2012 01:24 PM, Gleb Natapov wrote:
> >> > On Mon, Aug 13, 2012 at 01:21:33PM +0300, Avi Kivity wrote:
> >> >> On 08/13/2012 01:16 PM, Gleb Natapov wrote:
> >> >> > On Mon, Aug 13, 2012 at 01:12:46PM +0300, Michael S. Tsirkin wrote:
> >> >> >> On Mon, Aug 13, 2012 at 12:36:41PM +0300, Avi Kivity wrote:
> >> >> >> > On 08/13/2012 12:16 PM, Gleb Natapov wrote:
> >> >> >> > > Here is a quick prototype of what we discussed yesterday. This
> >> >> >> > > one caches only MSI interrupts for now. The obvious problem is
> >> >> >> > > that not all interrupts (namely IPIs and MSIs using
> >> >> >> > > KVM_CAP_SIGNAL_MSI) use the irq routing table, so they cannot
> >> >> >> > > be cached.
> >> >> >> >
> >> >> >> > We can have a small rcu-managed hash table to look those up.
> >> >> >>
> >> >> >> Yes but how small? We probably need at least one entry
> >> >> >> per vcpu, no?
> >> >> >>
> >> >> > One entry? We will spend more time managing it than injecting
> >> >> > interrupts :) Ideally we need an entry for each IPI sent and for
> >> >> > each potential MSI from userspace. What happens when the hash
> >> >> > table is full?
> >> >>
> >> >> Drop the entire cache.
> >> >>
> >> > OK. Then it should be big enough to not do it frequently.
> >>
> >> Should be sized N * vcpus, where N is several dozen (a generous
> >> allowance for non-device vectors, though multicast will break it
> >> since it's essentially random).
> >
> > KVM_MAX_VCPUS is 256; multiply by what, 50? That is over 10K entries
> > already. You cannot allocate that much in a single chunk, right?
>
> Actually this is overkill. Suppose we do an apicid->vcpu translation
> cache? Then we retain O(1) behaviour, no need for a huge cache.
>
Not sure I follow.
--
Gleb.