Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-22 Thread Wanpeng Li

On 10/21/15 2:46 PM, Paolo Bonzini wrote:


On 21/10/2015 00:57, Wanpeng Li wrote:

kvm_sched_out and kvm_sched_in are part of KVM's preemption hooks.  The
hooks are registered only between vcpu_load and vcpu_put, therefore they
know that the mutex is taken.  The sequence will go like this:

  vcpu_load
  kvm_sched_out
  kvm_sched_in
  kvm_sched_out
  kvm_sched_in
  ...
  vcpu_put

Should this be:

vcpu_load
kvm_sched_in
kvm_sched_out
kvm_sched_in
kvm_sched_out
...
vcpu_put

No, because vcpu_load is called while the thread is running.  Therefore,
the first preempt notifier call will be a sched_out notification, which
calls kvm_arch_vcpu_put.  Extending the picture above:

   vcpu_load     -> kvm_arch_vcpu_load
   kvm_sched_out -> kvm_arch_vcpu_put
   kvm_sched_in  -> kvm_arch_vcpu_load
   kvm_sched_out -> kvm_arch_vcpu_put
   kvm_sched_in  -> kvm_arch_vcpu_load
   ...
   kvm_sched_out -> kvm_arch_vcpu_put
   kvm_sched_in  -> kvm_arch_vcpu_load
   vcpu_put      -> kvm_arch_vcpu_put


Got it, thanks. :-)

Regards,
Wanpeng Li
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-22 Thread Paolo Bonzini


On 21/10/2015 19:21, Yacine HEBBAL wrote:
> If I correctly understood your last paragraph, it is better to use vm_ioctl
> to do generic processing that doesn't rely on a given VCPU and hence I won't
> need to use "CPU_FOREACH, run_on_cpu and current_cpu".

Right.  On the other hand, you definitely want a vcpu_ioctl if you need
to call vcpu_load.

Thanks,

Paolo


Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-21 Thread Yacine HEBBAL
Paolo Bonzini writes:

> 
> 
> On 21/10/2015 12:17, Hebbal Yacine wrote:
> > Thanks for the explanation, it's very clear.
> > I tried that, but I didn't succeed in sending the ioctl from the "run_on_cpu"
> > function; I didn't find how to set the right CPUState.
> > I've tried "current_cpu"
> 
> current_cpu is always NULL outside the VCPU thread.
> 
> > 
> > kvm_main.c:
> > 
> > // yacine.begin
> > 
> > static void do_vmi_start_kvm_ioctl(void *type) {
> > printf("do_vmi_start_kvm_ioctl\n");
> > kvm_vm_ioctl(kvm_state, type);


//yacine.begin
int hmp_vmi_op_result = 0;

static void do_vmi_kvm_ioctl(void *type_ioctl) {
    int *type = (int *) type_ioctl;
    hmp_vmi_op_result = kvm_vcpu_ioctl(current_cpu, *type);
    //hmp_vmi_start_result = kvm_vm_ioctl(kvm_state, *type);
}

int vmi_kvm_ioctl(int type) {
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        run_on_cpu(cpu, do_vmi_kvm_ioctl, &type);
    }
    return hmp_vmi_op_result;
}
//yacine.end

Yes, it works perfectly this way even when running multiple VCPUs, thank you
very much :)
In fact, I was using an old version of QEMU (1.5.x), which doesn't have
CPU_FOREACH. I searched a little for a replacement, but without any luck, so
I upgraded my working version and now everything is fine.

> Are you sure you want a VM ioctl and not a VCPU ioctl?  Or perhaps a VM
> ioctl to do generic processing, and a VCPU ioctl that is then sent to
> all VCPUs?
> If you use a VCPU ioctl, you can use CPU_FOREACH or a for loop to
> iterate over all VCPUs.

In fact, I get the same result when using vm_ioctl or vcpu_ioctl.
If I correctly understood your last paragraph, it is better to use vm_ioctl
to do generic processing that doesn't rely on a given VCPU and hence I won't
need to use "CPU_FOREACH, run_on_cpu and current_cpu".

Thanks again :)

> 
> Paolo






Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-21 Thread Paolo Bonzini


On 21/10/2015 00:57, Wanpeng Li wrote:
>> kvm_sched_out and kvm_sched_in are part of KVM's preemption hooks.  The
>> hooks are registered only between vcpu_load and vcpu_put, therefore they
>> know that the mutex is taken.  The sequence will go like this:
>>
>>  vcpu_load
>>  kvm_sched_out
>>  kvm_sched_in
>>  kvm_sched_out
>>  kvm_sched_in
>>  ...
>>  vcpu_put
> 
> Should this be:
> 
> vcpu_load
> kvm_sched_in
> kvm_sched_out
> kvm_sched_in
> kvm_sched_out
> ...
> vcpu_put

No, because vcpu_load is called while the thread is running.  Therefore,
the first preempt notifier call will be a sched_out notification, which
calls kvm_arch_vcpu_put.  Extending the picture above:

  vcpu_load     -> kvm_arch_vcpu_load
  kvm_sched_out -> kvm_arch_vcpu_put
  kvm_sched_in  -> kvm_arch_vcpu_load
  kvm_sched_out -> kvm_arch_vcpu_put
  kvm_sched_in  -> kvm_arch_vcpu_load
  ...
  kvm_sched_out -> kvm_arch_vcpu_put
  kvm_sched_in  -> kvm_arch_vcpu_load
  vcpu_put      -> kvm_arch_vcpu_put

Thanks,

Paolo


Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-21 Thread Yacine HEBBAL
Paolo Bonzini writes:

> 
> 
> On 21/10/2015 00:57, Wanpeng Li wrote:
> >> kvm_sched_out and kvm_sched_in are part of KVM's preemption hooks.  The
> >> hooks are registered only between vcpu_load and vcpu_put, therefore they
> >> know that the mutex is taken.  The sequence will go like this:
> >>
> >>  vcpu_load
> >>  kvm_sched_out
> >>  kvm_sched_in
> >>  kvm_sched_out
> >>  kvm_sched_in
> >>  ...
> >>  vcpu_put
> > 
> > Should this be:
> > 
> > vcpu_load
> > kvm_sched_in
> > kvm_sched_out
> > kvm_sched_in
> > kvm_sched_out
> > ...
> > vcpu_put
> 
> No, because vcpu_load is called while the thread is running.  Therefore,
> the first preempt notifier call will be a sched_out notification, which
> calls kvm_arch_vcpu_put.  Extending the picture above:
> 
>   vcpu_load     -> kvm_arch_vcpu_load
>   kvm_sched_out -> kvm_arch_vcpu_put
>   kvm_sched_in  -> kvm_arch_vcpu_load
>   kvm_sched_out -> kvm_arch_vcpu_put
>   kvm_sched_in  -> kvm_arch_vcpu_load
>   ...
>   kvm_sched_out -> kvm_arch_vcpu_put
>   kvm_sched_in  -> kvm_arch_vcpu_load
>   vcpu_put      -> kvm_arch_vcpu_put
> 
> Thanks,
> 
> Paolo

Thanks for the explanation, it's very clear.
I tried that, but I didn't succeed in sending the ioctl from the "run_on_cpu"
function; I didn't find how to set the right CPUState.
I've tried "current_cpu":

kvm_main.c:

// yacine.begin

static void do_vmi_start_kvm_ioctl(void *type) {
    printf("do_vmi_start_kvm_ioctl\n");
    kvm_vm_ioctl(kvm_state, type);
}

int vmi_start_kvm_ioctl(int type) { /* called from hmp.c */
    printf("vmi_start_kvm_ioctl\n");
    run_on_cpu(current_cpu, do_vmi_start_kvm_ioctl, (void *) &type);
    return 0;
}
// yacine.end

This gives me a segmentation fault.
Then I tried replacing current_cpu with ENV_GET_CPU(mon_get_cpu()); that
didn't work either: I get no error, but nothing happens.
I also tried to pass mon->mon_cpu through int vmi_start_kvm_ioctl(int type)
by adding a first parameter as CPUState, but I get the compiler error
"dereference pointer to incomplete type".
I'm a beginner in the QEMU and KVM code; can you please orient me to fix this
problem? Thanks in advance.

Yacine






Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-21 Thread Paolo Bonzini


On 21/10/2015 12:17, Hebbal Yacine wrote:
> Thanks for the explanation, it's very clear.
> I tried that, but I didn't succeed in sending the ioctl from the "run_on_cpu"
> function; I didn't find how to set the right CPUState.
> I've tried "current_cpu"

current_cpu is always NULL outside the VCPU thread.

> 
> kvm_main.c:
> 
> // yacine.begin
> 
> static void do_vmi_start_kvm_ioctl(void *type) {
> printf("do_vmi_start_kvm_ioctl\n");
> kvm_vm_ioctl(kvm_state, type);

Are you sure you want a VM ioctl and not a VCPU ioctl?  Or perhaps a VM
ioctl to do generic processing, and a VCPU ioctl that is then sent to
all VCPUs?

If you use a VCPU ioctl, you can use CPU_FOREACH or a for loop to
iterate over all VCPUs.

Paolo


Difference between vcpu_load and kvm_sched_in ?

2015-10-20 Thread Yacine
Hi, I'm a student working on virtual machine introspection.

I'm trying to implement an application on top of KVM in which I need to trap
writes to CR3 (host with 8 cores and guest with one vcpu).

When I do this while handling a VM exit, using
vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL, CPU_BASED_CR3_LOAD_EXITING), it
works correctly and I can see the traps in my log file.

Now when I do the same thing after receiving a command from QEMU (the command
is handled in kvm_vm_ioctl by calling a function I added to kvm_x86_ops /
vmx_x86_ops), I get a vmwrite error. I found out that the problem is that the
host logical processor handling the ioctl command is not the same one that is
running the VM and holding its state, so I must do the vmwrite on the one
executing the VM.

To change the logical CPU executing the VM, I tried this:

vcpu_load; start cr3 trapping; vcpu_put

it worked correctly (in my logs I see that vcpu.cpu becomes equal to "cpu =
raw_smp_processor_id();"), but the VM blocks for a long time due to the mutex
in vcpu_load (up to several seconds and sometimes minutes!)

I replaced vcpu_load with kvm_sched_in, now everything works perfectly and
the VM doesn't block at all (logs here: http://pastebin.com/h5XNNMcb).

So, what I want to know is: what is the difference between vcpu_load and
kvm_sched_in? Both of these functions call kvm_arch_vcpu_load, but the latter
does it without taking the mutex.

Is there a problem with using kvm_sched_in instead of vcpu_load for my use case?




Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-20 Thread Paolo Bonzini


On 20/10/2015 11:57, Yacine wrote:
> vcpu_load; start cr3 trapping; vcpu_put
> 
> it worked correctly (in my logs I see that vcpu.cpu becomes equal to "cpu =
> raw_smp_processor_id();"), but the VM blocks for a long time due to the mutex
> in vcpu_load (up to several seconds and sometimes minutes!)

Right, that's because the mutex is taken while the VCPU is running.  If
the VCPU doesn't exit, the mutex stays held.

> I replaced vcpu_load with kvm_sched_in, now everything works perfectly and
> the VM doesn't block at all (logs here: http://pastebin.com/h5XNNMcb).
> 
> So, what I want to know is: what is the difference between vcpu_load and
> kvm_sched_in? Both of these functions call kvm_arch_vcpu_load, but the latter
> does it without taking the mutex.

kvm_sched_out and kvm_sched_in are part of KVM's preemption hooks.  The
hooks are registered only between vcpu_load and vcpu_put, therefore they
know that the mutex is taken.  The sequence will go like this:

vcpu_load
kvm_sched_out
kvm_sched_in
kvm_sched_out
kvm_sched_in
...
vcpu_put

and it will all happen with the mutex held.

> Is there a problem with using kvm_sched_in instead of vcpu_load for my use case?

Yes, unfortunately it is a problem: you are loading the same VMCS on two
processors, which has undefined results.

To fix the problem, wrap the ioctl into a function and pass the function
to QEMU's "run_on_cpu" function.  It will send the ioctl from the right
thread, so that the kernel will not be holding the vcpu mutex.

Paolo


Re: Difference between vcpu_load and kvm_sched_in ?

2015-10-20 Thread Wanpeng Li

On 10/20/15 11:44 PM, Paolo Bonzini wrote:


On 20/10/2015 11:57, Yacine wrote:

vcpu_load; start cr3 trapping; vcpu_put

it worked correctly (in my logs I see that vcpu.cpu becomes equal to "cpu =
raw_smp_processor_id();"), but the VM blocks for a long time due to the mutex
in vcpu_load (up to several seconds and sometimes minutes!)

Right, that's because the mutex is taken while the VCPU is running.  If
the VCPU doesn't exit, the mutex stays held.


I replaced vcpu_load with kvm_sched_in, now everything works perfectly and
the VM doesn't block at all (logs here: http://pastebin.com/h5XNNMcb).

So, what I want to know is: what is the difference between vcpu_load and
kvm_sched_in? Both of these functions call kvm_arch_vcpu_load, but the latter
does it without taking the mutex.

kvm_sched_out and kvm_sched_in are part of KVM's preemption hooks.  The
hooks are registered only between vcpu_load and vcpu_put, therefore they
know that the mutex is taken.  The sequence will go like this:

 vcpu_load
 kvm_sched_out
 kvm_sched_in
 kvm_sched_out
 kvm_sched_in
 ...
 vcpu_put


Should this be:

vcpu_load
kvm_sched_in
kvm_sched_out
kvm_sched_in
kvm_sched_out
...
vcpu_put

Regards,
Wanpeng Li