Gregory Haskins wrote:
> The VCPU executes synchronously w.r.t. userspace today, and therefore
> interrupt injection is pretty straightforward.  However, we will soon need
> to be able to inject interrupts asynchronous to the execution of the VCPU
> due to the introduction of SMP, paravirtualized drivers, and asynchronous
> hypercalls.  This patch adds support to the interrupt mechanism to force
> a VCPU to VMEXIT when a new interrupt is pending.
>
>   

Comments below are fairly minor, but worthwhile IMO.



> Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
> ---
>
>  drivers/kvm/kvm.h      |    2 ++
>  drivers/kvm/kvm_main.c |   59 +++++++++++++++++++++++++++++++++++++++++++++++-
>  drivers/kvm/svm.c      |   43 +++++++++++++++++++++++++++++++++++
>  drivers/kvm/vmx.c      |   43 +++++++++++++++++++++++++++++++++++
>  4 files changed, 146 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
> index 059f074..0f6cc32 100644
> --- a/drivers/kvm/kvm.h
> +++ b/drivers/kvm/kvm.h
> @@ -329,6 +329,8 @@ struct kvm_vcpu_irq {
>       struct kvm_irqdevice dev;
>       int                  pending;
>       int                  deferred;
> +     struct task_struct  *task;
> +     int                  guest_mode;
>   

->guest_mode can be folded into ->task, by specifying that ->task != 
NULL is equivalent to ->guest_mode != 0.  This will make the rest of the 
code easier to read.

>  };
>  
>  struct kvm_vcpu {
> diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
> index 199489b..a160638 100644
> --- a/drivers/kvm/kvm_main.c
> +++ b/drivers/kvm/kvm_main.c
> @@ -1868,6 +1868,9 @@ static int kvm_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>               kvm_arch_ops->decache_regs(vcpu);
>       }
>  
> +     vcpu->irq.task = current;
> +     smp_wmb();
> +
>   

This is best moved to where ->guest_mode is set.

> +/*
>   * This function will be invoked whenever the vcpu->irq.dev raises its INTR
>   * line
>   */
> @@ -2318,10 +2335,50 @@ static void kvm_vcpu_intr(struct kvm_irqsink *this,
>  {
>       struct kvm_vcpu *vcpu = (struct kvm_vcpu*)this->private;
>       unsigned long flags;
> +     int direct_ipi = -1;
>  
>       spin_lock_irqsave(&vcpu->irq.lock, flags);
>   

irqs are always enabled here, so spin_lock_irq() (and a corresponding 
spin_unlock_irq) is sufficient.
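I.e., something like (untested, using the names from the patch):

```c
spin_lock_irq(&vcpu->irq.lock);
/* ... deliver or queue the interrupt ... */
spin_unlock_irq(&vcpu->irq.lock);
```

Since we know irqs are enabled on entry, there is no flags word to save and restore.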

>  static void kvm_vcpu_irqsink_init(struct kvm_vcpu *vcpu)
> diff --git a/drivers/kvm/svm.c b/drivers/kvm/svm.c
> index 4c03881..91546ae 100644
> --- a/drivers/kvm/svm.c
> +++ b/drivers/kvm/svm.c
> @@ -1542,11 +1542,40 @@ static int svm_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>       u16 gs_selector;
>       u16 ldt_selector;
>       int r;
> +     unsigned long irq_flags;
>  
>  again:
> +     /*
> +      * We disable interrupts until the next VMEXIT to eliminate a race
> +      * condition for delivery of virtual interrupts.  Note that this is
> +      * probably not as bad as it sounds, as interrupts will still invoke
> +      * a VMEXIT once transitioned to GUEST mode (and thus exit this lock
> +      * scope) even if they are disabled.
> +      *
> +      * FIXME: Do we need to do anything additional to mask IPI/NMIs?
>   

You can remove the FIXME.

> +      */
> +     local_irq_save(irq_flags);
>   

Interrupts are always enabled here, so local_irq_disable() suffices.
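I.e. (untested):

```c
local_irq_disable();
/* ... enter the guest ... */
local_irq_enable();	/* instead of local_irq_restore() after the VMEXIT */
```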

> @@ -1688,6 +1717,13 @@ again:
>  #endif
>               : "cc", "memory" );
>  
> +     /*
> +      * FIXME: We'd like to turn on interrupts ASAP, but is this so early
> +      * that we will mess up the state of the CPU before we fully
> +      * transition from guest to host?
> +      */
>   

You can remove the FIXME.  Before this patch, interrupts were enabled much earlier, so this is not too early.

> +     local_irq_restore(irq_flags);
> +
>       if (vcpu->fpu_active) {
>               fx_save(vcpu->guest_fx_image);
>               fx_restore(vcpu->host_fx_image);
> @@ -1710,6 +1746,13 @@ again:
>       reload_tss(vcpu);
>  
>       /*
> +      * Signal that we have transitioned back to host mode
> +      */
> +     spin_lock_irqsave(&vcpu->irq.lock, irq_flags);
> +     vcpu->irq.guest_mode = 0;
> +     spin_unlock_irqrestore(&vcpu->irq.lock, irq_flags);
>   

 >> Don't you need to check interrupts here?
 > No, we assume that host userspace won't sleep.
Right, I forgot again.


> (prof_on == KVM_PROFILING))
> diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
> index ca858cb..7b81fff 100644
> --- a/drivers/kvm/vmx.c
> +++ b/drivers/kvm/vmx.c
> @@ -1895,6 +1895,7 @@ static int vmx_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>       u16 fs_sel, gs_sel, ldt_sel;
>       int fs_gs_ldt_reload_needed;
>       int r;
> +     unsigned long irq_flags;
>  
>  preempted:
>       /*
> @@ -1929,9 +1930,37 @@ preempted:
>       if (vcpu->guest_debug.enabled)
>               kvm_guest_debug_pre(vcpu);
>  
> +     /*
> +      * We disable interrupts until the next VMEXIT to eliminate a race
> +      * condition for delivery of virtual interrupts.  Note that this is
> +      * probably not as bad as it sounds, as interrupts will still invoke
> +      * a VMEXIT once transitioned to GUEST mode (and thus exit this lock
> +      * scope) even if they are disabled.
> +      *
> +      * FIXME: Do we need to do anything additional to mask IPI/NMIs?
> +      */
> +     local_irq_save(irq_flags);
> +
>   

Pretty much the same comments apply here.  One day we'll unify some of this 
code.


[...]


-- 
error compiling committee.c: too many arguments to function


_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
