On Fri, 14 Nov 2014 14:35:00 +0100
Radim Krčmář <rkrc...@redhat.com> wrote:

> 2014-11-14 12:12+0100, Paolo Bonzini:
> > This completes the optimization from the previous patch, by
> > removing the KVM_MEM_SLOTS_NUM-iteration loop from insert_memslot.
> > 
> > Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
> > ---
> >  virt/kvm/kvm_main.c | 39 +++++++++++++++++++--------------------
> >  1 file changed, 19 insertions(+), 20 deletions(-)
> > 
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index c0c2202e6c4f..c8ff99cc0ccb 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -677,31 +677,30 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> >  static void insert_memslot(struct kvm_memslots *slots,
> >                             struct kvm_memory_slot *new)
> >  {
> > -   int i = slots->id_to_index[new->id];
> > -   struct kvm_memory_slot *old = id_to_memslot(slots, new->id);
> > +   int id = new->id;
> > +   int i = slots->id_to_index[id];
> >     struct kvm_memory_slot *mslots = slots->memslots;
> >  
> > -   if (new->npages == old->npages) {
> > -           *old = *new;
> > -           return;
> > -   }
> > -
> > -   while (1) {
> > -           if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> > -                   new->npages < mslots[i + 1].npages) {
> > -                   mslots[i] = mslots[i + 1];
> > -                   i++;
> > -           } else if (i > 0 && new->npages > mslots[i - 1].npages) {
> > -                   mslots[i] = mslots[i - 1];
> > -                   i--;
> > -           } else {
> > -                   mslots[i] = *new;
> > -                   break;
> > +   WARN_ON(mslots[i].id != id);
> > +   if (new->npages != mslots[i].npages) {
> > +           while (1) {
> > +                   if (i < (KVM_MEM_SLOTS_NUM - 1) &&
> > +                               new->npages < mslots[i + 1].npages) {
>   (^^^^ whitespace error)
> > +                           mslots[i] = mslots[i + 1];
> > +                           slots->id_to_index[mslots[i].id] = i;
> > +                           i++;
> > +                   } else if (i > 0 &&
> > +                              new->npages > mslots[i - 1].npages) {
> > +                           mslots[i] = mslots[i - 1];
> > +                           slots->id_to_index[mslots[i].id] = i;
> > +                           i--;
> > +                   } else
> > +                           break;
> 
> We are replacing in a sorted array, so the direction of our
> traversal doesn't change (and we could lose one tab level here):
> 
>       if (new->npages < mslots[i].npages) {
>               while (i < (KVM_MEM_SLOTS_NUM - 1) &&
>                      new->npages < mslots[i + 1].npages) {
>                       mslots[i] = mslots[i + 1];
>                       slots->id_to_index[mslots[i].id] = i;
>                       i++;
>               }
>       } else if (new->npages > mslots[i].npages) {
>               while (i > 0 &&
>                      new->npages > mslots[i - 1].npages) {
>                       mslots[i] = mslots[i - 1];
>                       slots->id_to_index[mslots[i].id] = i;
>                       i--;
>               }
>       }
> 
> (I guess you don't want me to abstract these two loops further :)
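
For anyone who wants to play with the ordering and the id_to_index
bookkeeping outside the kernel, here is a stand-alone user-space mock of
the one-directional variant above.  The struct definitions, the tiny
KVM_MEM_SLOTS_NUM, and the final placement of *new (which the quoted hunk
cuts off before) are simplified stand-ins, not the kernel code:

/*
 * User-space mock, for illustration only: simplified types, not the
 * kernel's.  The array is kept sorted by decreasing npages and
 * id_to_index maps a slot id back to its array index.
 */
#include <assert.h>
#include <stdio.h>

#define KVM_MEM_SLOTS_NUM 8

struct kvm_memory_slot {
        int id;
        unsigned long npages;
};

struct kvm_memslots {
        struct kvm_memory_slot memslots[KVM_MEM_SLOTS_NUM];
        int id_to_index[KVM_MEM_SLOTS_NUM];
};

static void insert_memslot(struct kvm_memslots *slots,
                           struct kvm_memory_slot *new)
{
        int id = new->id;
        int i = slots->id_to_index[id];
        struct kvm_memory_slot *mslots = slots->memslots;

        assert(mslots[i].id == id);     /* stands in for WARN_ON() */

        if (new->npages < mslots[i].npages) {
                /* Slot shrank: it can only move towards the end. */
                while (i < (KVM_MEM_SLOTS_NUM - 1) &&
                       new->npages < mslots[i + 1].npages) {
                        mslots[i] = mslots[i + 1];
                        slots->id_to_index[mslots[i].id] = i;
                        i++;
                }
        } else if (new->npages > mslots[i].npages) {
                /* Slot grew: it can only move towards the beginning. */
                while (i > 0 && new->npages > mslots[i - 1].npages) {
                        mslots[i] = mslots[i - 1];
                        slots->id_to_index[mslots[i].id] = i;
                        i--;
                }
        }

        /* Place the updated slot and record its new index. */
        mslots[i] = *new;
        slots->id_to_index[id] = i;
}

int main(void)
{
        struct kvm_memslots slots;
        int i;

        /* Descending npages 80..10, ids and id_to_index matching indices. */
        for (i = 0; i < KVM_MEM_SLOTS_NUM; i++) {
                slots.memslots[i].id = i;
                slots.memslots[i].npages = (KVM_MEM_SLOTS_NUM - i) * 10;
                slots.id_to_index[i] = i;
        }

        /* Grow slot 6 so it has to bubble towards the front. */
        struct kvm_memory_slot new = { .id = 6, .npages = 65 };
        insert_memslot(&slots, &new);

        for (i = 0; i < KVM_MEM_SLOTS_NUM; i++) {
                printf("index %d: id %d, npages %lu\n",
                       i, slots.memslots[i].id, slots.memslots[i].npages);
                assert(slots.id_to_index[slots.memslots[i].id] == i);
        }
        return 0;
}

With plain gcc this should print slot 6 bubbled up to index 2, with the
asserts confirming that id_to_index still matches the array.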
> 
> If the probability of slots with the same npages were high, we could
> also move just the last one from each group, but I think that the
> current algorithm is already faster than we need.
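
Just to make the "last one from each group" idea concrete, the grow
direction of the mock above could hypothetically be rewritten along these
lines (a sketch only, not a proposal):

        /*
         * Hypothetical replacement for the "slot grew" branch of the
         * mock above.  A run of slots with equal npages only needs one
         * of its members moved into the hole at i, because the array is
         * sorted by npages alone, so the order inside the run does not
         * matter.  The shrink direction would be symmetric.
         */
        while (i > 0 && new->npages > mslots[i - 1].npages) {
                int j = i - 1;

                /* Walk to the first member of the equal-npages run. */
                while (j > 0 && mslots[j - 1].npages == mslots[j].npages)
                        j--;

                /* Move that single member into the hole... */
                mslots[i] = mslots[j];
                slots->id_to_index[mslots[i].id] = i;

                /* ...and continue with the hole at the start of the run. */
                i = j;
        }

Whether that ever wins anything depends on how common equal-sized slots
actually are, which is exactly the caveat above.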
> 
> (We'll have to change it into an interval tree, or something, if the
>  number of slots rises anyway.)
Only if it rises to a huge amount.  I've played with the proposed 512
memslots and it takes ~10000 cycles, which is 5% of the current heapsort
overhead.  Taking into account that it's a slow path anyway, it's
unlikely that there will be a need to speed up this case even more.
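
For completeness, the rough shape of that kind of user-space measurement
is sketched below.  This is only an illustration of the methodology, not
the exact test behind the numbers above: it reuses the simplified
insert_memslot() mock from earlier with its KVM_MEM_SLOTS_NUM raised to
512 and replaces its main(), timing the worst case (the smallest slot
growing past all 511 others) with the TSC; the cycle counts will of
course vary by machine.

/*
 * Rough measurement sketch: time a worst-case insertion into a
 * 512-entry array, taking the minimum over many runs to filter out
 * cache-cold noise.  Assumes the mock above with KVM_MEM_SLOTS_NUM 512.
 */
#include <x86intrin.h>
#include <stdio.h>

#define RUNS 1000

int main(void)
{
        static struct kvm_memslots slots;
        unsigned long long best = ~0ULL;
        int run, i;

        for (run = 0; run < RUNS; run++) {
                /* Rebuild a fully sorted array, ids matching indices. */
                for (i = 0; i < KVM_MEM_SLOTS_NUM; i++) {
                        slots.memslots[i].id = i;
                        slots.memslots[i].npages = KVM_MEM_SLOTS_NUM - i;
                        slots.id_to_index[i] = i;
                }

                /* Grow the last (smallest) slot past everything else. */
                struct kvm_memory_slot new = {
                        .id = KVM_MEM_SLOTS_NUM - 1,
                        .npages = KVM_MEM_SLOTS_NUM + 1,
                };

                unsigned long long t0 = __rdtsc();
                insert_memslot(&slots, &new);
                unsigned long long t1 = __rdtsc();

                if (t1 - t0 < best)
                        best = t1 - t0;
        }

        printf("worst-case insertion, best of %d runs: %llu cycles\n",
               RUNS, best);
        return 0;
}

Even the minimum over many runs is only a ballpark figure, but it is
enough to see how the insertion compares with a full re-sort.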
