Re: [Xen-devel] [XEN PATCH v2] x86/vm_event: add short-circuit for breakpoints (aka, "fast single step")

2019-12-18 Thread Jan Beulich
On 18.12.2019 12:34, Sergey Kovalev wrote:
> On 18.12.2019 13:55, Jan Beulich wrote:
>> On 18.12.2019 06:53, Sergey Kovalev wrote:
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -5126,6 +5126,18 @@ void hvm_toggle_singlestep(struct vcpu *v)
>>>     v->arch.hvm.single_step = !v->arch.hvm.single_step;
>>>  }
>>>
>>> +void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
>>> +{
>>> +    ASSERT(atomic_read(&v->pause_count));
>>> +
>>> +    if ( !hvm_is_singlestep_supported() )
>>> +        return;
>>> +
>>> +    v->arch.hvm.single_step = true;
>>> +    v->arch.hvm.fast_single_step.enabled = true;
>>> +    v->arch.hvm.fast_single_step.p2midx = p2midx;
>>
>> Perhaps better at least range check p2midx before storing?
> What is the valid range?

The size of the array you use it to index into.
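I.e. something along these lines at the top of hvm_fast_singlestep(), before
any state is latched (sketch only; assuming MAX_ALTP2M is the bound that
applies to the per-domain altp2m view arrays):

    if ( p2midx >= MAX_ALTP2M )
        return;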

>> Also your patch has come through mangled, reminding me of a problem
>> I had with Thunderbird after initially having switched to it. There
>> are line length / wrapping settings you may need to play with to
>> avoid it inserting extra blanks (I'm sorry, I don't really recall
>> which one(s) it was.).
> Thank You! I used Thunderbird too :) I will re-check it.
> Though I have setup line wrap at 300.

I think you want it set to zero.
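(For reference, the preference in question is typically mailnews.wraplength in
about:config; setting it to 0 disables re-wrapping of outgoing plain-text
mail. The Linux kernel's Documentation/process/email-clients.rst has a fuller
recipe for sending patches with Thunderbird.)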

Jan


Re: [Xen-devel] [XEN PATCH v2] x86/vm_event: add short-circuit for breakpoints (aka, "fast single step")

2019-12-18 Thread Sergey Kovalev

On 18.12.2019 13:55, Jan Beulich wrote:

> On 18.12.2019 06:53, Sergey Kovalev wrote:
>> When using DRAKVUF (or another system using altp2m with shadow pages similar
>> to what is described in
>> https://xenproject.org/2016/04/13/stealthy-monitoring-with-xen-altp2m),
>> after a breakpoint is hit the system switches to the default
>> unrestricted altp2m view with singlestep enabled. When the singlestep
>> traps to Xen another vm_event is sent to the monitor agent, which then
>> normally disables singlestepping and switches the altp2m view back to
>> the restricted view.
>>
>> This patch short-circuits that last part so that Xen doesn't need to send the
>> vm_event out for the singlestep event and switches back to the restricted
>> view automatically.
>>
>> This optimization yields about a 35% speed-up.
>>
>> Tested on the Debian branch of Xen 4.12; see:
>> https://github.com/skvl/xen/tree/debian/knorrie/4.12/fast-singlestep
>>
>> Rebased on master:
>> https://github.com/skvl/xen/tree/fast-singlestep
>>
>> Signed-off-by: Sergey Kovalev 
>> ---
>>  xen/arch/x86/hvm/hvm.c         | 12 ++++++++++++
>>  xen/arch/x86/hvm/monitor.c     |  9 +++++++++
>>  xen/arch/x86/vm_event.c        |  8 ++++++--
>>  xen/include/asm-x86/hvm/hvm.h  |  1 +
>>  xen/include/asm-x86/hvm/vcpu.h |  4 ++++
>>  xen/include/public/vm_event.h  | 10 ++++++++++
>>  6 files changed, 42 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 47573f71b8..4999569503 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -5126,6 +5126,18 @@ void hvm_toggle_singlestep(struct vcpu *v)
>>      v->arch.hvm.single_step = !v->arch.hvm.single_step;
>>  }
>>
>> +void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
>> +{
>> +    ASSERT(atomic_read(&v->pause_count));
>> +
>> +    if ( !hvm_is_singlestep_supported() )
>> +        return;
>> +
>> +    v->arch.hvm.single_step = true;
>> +    v->arch.hvm.fast_single_step.enabled = true;
>> +    v->arch.hvm.fast_single_step.p2midx = p2midx;
>
> Perhaps better at least range check p2midx before storing?

What is the valid range?

>> --- a/xen/arch/x86/vm_event.c
>> +++ b/xen/arch/x86/vm_event.c
>> @@ -61,7 +61,8 @@ void vm_event_cleanup_domain(struct domain *d)
>>  void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
>>                                  vm_event_response_t *rsp)
>>  {
>> -    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
>> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP ||
>> +           rsp->flags & VM_EVENT_FLAG_FAST_SINGLESTEP) )
>
> This wants parentheses added, or re-writing as
>
>     if ( !(rsp->flags & (VM_EVENT_FLAG_TOGGLE_SINGLESTEP |
>                          VM_EVENT_FLAG_FAST_SINGLESTEP)) )

Thank You very much! I didn't notice that...

> Also your patch has come through mangled, reminding me of a problem
> I had with Thunderbird after initially having switched to it. There
> are line length / wrapping settings you may need to play with to
> avoid it inserting extra blanks (I'm sorry, I don't really recall
> which one(s) it was.).

Thank You! I used Thunderbird too :) I will re-check it.
Though I have setup line wrap at 300.


Re: [Xen-devel] [XEN PATCH v2] x86/vm_event: add short-circuit for breakpoints (aka, "fast single step")

2019-12-18 Thread Jan Beulich
On 18.12.2019 06:53, Sergey Kovalev wrote:
> When using DRAKVUF (or another system using altp2m with shadow pages similar
> to what is described in
> https://xenproject.org/2016/04/13/stealthy-monitoring-with-xen-altp2m),
> after a breakpoint is hit the system switches to the default
> unrestricted altp2m view with singlestep enabled. When the singlestep
> traps to Xen another vm_event is sent to the monitor agent, which then
> normally disables singlestepping and switches the altp2m view back to
> the restricted view.
> 
> This patch short-circuits that last part so that Xen doesn't need to send the
> vm_event out for the singlestep event and switches back to the restricted
> view automatically.
> 
> This optimization yields about a 35% speed-up.
> 
> Tested on the Debian branch of Xen 4.12; see:
> https://github.com/skvl/xen/tree/debian/knorrie/4.12/fast-singlestep
> 
> Rebased on master:
> https://github.com/skvl/xen/tree/fast-singlestep
> 
> Signed-off-by: Sergey Kovalev 
> ---
>  xen/arch/x86/hvm/hvm.c         | 12 ++++++++++++
>  xen/arch/x86/hvm/monitor.c     |  9 +++++++++
>  xen/arch/x86/vm_event.c        |  8 ++++++--
>  xen/include/asm-x86/hvm/hvm.h  |  1 +
>  xen/include/asm-x86/hvm/vcpu.h |  4 ++++
>  xen/include/public/vm_event.h  | 10 ++++++++++
>  6 files changed, 42 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 47573f71b8..4999569503 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5126,6 +5126,18 @@ void hvm_toggle_singlestep(struct vcpu *v)
>      v->arch.hvm.single_step = !v->arch.hvm.single_step;
>  }
> 
> +void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
> +{
> +    ASSERT(atomic_read(&v->pause_count));
> +
> +    if ( !hvm_is_singlestep_supported() )
> +        return;
> +
> +    v->arch.hvm.single_step = true;
> +    v->arch.hvm.fast_single_step.enabled = true;
> +    v->arch.hvm.fast_single_step.p2midx = p2midx;

Perhaps better at least range check p2midx before storing?

> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -61,7 +61,8 @@ void vm_event_cleanup_domain(struct domain *d)
>  void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
>                                  vm_event_response_t *rsp)
>  {
> -    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
> +    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP ||
> +           rsp->flags & VM_EVENT_FLAG_FAST_SINGLESTEP) )

This wants parentheses added, or re-writing as

    if ( !(rsp->flags & (VM_EVENT_FLAG_TOGGLE_SINGLESTEP |
                         VM_EVENT_FLAG_FAST_SINGLESTEP)) )

Also your patch has come through mangled, reminding me of a problem
I had with Thunderbird after initially having switched to it. There
are line length / wrapping settings you may need to play with to
avoid it inserting extra blanks (I'm sorry, I don't really recall
which one(s) it was.).

Jan


[Xen-devel] [XEN PATCH v2] x86/vm_event: add short-circuit for breakpoints (aka, "fast single step")

2019-12-17 Thread Sergey Kovalev

When using DRAKVUF (or another system using altp2m with shadow pages similar
to what is described in
https://xenproject.org/2016/04/13/stealthy-monitoring-with-xen-altp2m),
after a breakpoint is hit the system switches to the default
unrestricted altp2m view with singlestep enabled. When the singlestep
traps to Xen another vm_event is sent to the monitor agent, which then
normally disables singlestepping and switches the altp2m view back to
the restricted view.

This patch short-circuits that last part so that Xen doesn't need to send the
vm_event out for the singlestep event and switches back to the restricted
view automatically.

This optimization yields about a 35% speed-up.

Tested on the Debian branch of Xen 4.12; see:
https://github.com/skvl/xen/tree/debian/knorrie/4.12/fast-singlestep

Rebased on master:
https://github.com/skvl/xen/tree/fast-singlestep
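
For illustration (not part of the patch): with this in place a monitor agent
can handle a breakpoint with a single response, switching to the unrestricted
view and arming the in-Xen switch back to the restricted (shadow) view, rather
than waiting for a second, synchronous singlestep event. A rough sketch, with
ring-buffer plumbing, register fixup and the breakpoint handling itself
omitted, and "restricted_view" standing for whatever altp2m view index the
agent created for its shadow pages:

    #include <string.h>
    #include <xenctrl.h>
    #include <xen/vm_event.h>

    static void respond_breakpoint(const vm_event_request_t *req,
                                   vm_event_response_t *rsp,
                                   uint16_t restricted_view)
    {
        memset(rsp, 0, sizeof(*rsp));
        rsp->version = VM_EVENT_INTERFACE_VERSION;
        rsp->vcpu_id = req->vcpu_id;
        rsp->reason  = req->reason;

        rsp->flags = VM_EVENT_FLAG_VCPU_PAUSED |     /* unpause the vCPU      */
                     VM_EVENT_FLAG_ALTERNATE_P2M |   /* go to view altp2m_idx */
                     VM_EVENT_FLAG_FAST_SINGLESTEP;  /* Xen flips back itself */
        rsp->altp2m_idx = 0;                         /* unrestricted view     */
        rsp->u.fast_singlestep.p2midx = restricted_view;
    }

Previously the same handler would have set VM_EVENT_FLAG_TOGGLE_SINGLESTEP
here and then needed a second response, on the singlestep event, to toggle it
off and switch the view back.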

Signed-off-by: Sergey Kovalev 
---
 xen/arch/x86/hvm/hvm.c         | 12 ++++++++++++
 xen/arch/x86/hvm/monitor.c     |  9 +++++++++
 xen/arch/x86/vm_event.c        |  8 ++++++--
 xen/include/asm-x86/hvm/hvm.h  |  1 +
 xen/include/asm-x86/hvm/vcpu.h |  4 ++++
 xen/include/public/vm_event.h  | 10 ++++++++++
 6 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 47573f71b8..4999569503 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5126,6 +5126,18 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm.single_step = !v->arch.hvm.single_step;
 }
 
+void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
+{
+    ASSERT(atomic_read(&v->pause_count));
+
+    if ( !hvm_is_singlestep_supported() )
+        return;
+
+    v->arch.hvm.single_step = true;
+    v->arch.hvm.fast_single_step.enabled = true;
+    v->arch.hvm.fast_single_step.p2midx = p2midx;
+}
+
 /*
  * Segment caches in VMCB/VMCS are inconsistent about which bits are checked,
  * important, and preserved across vmentry/exit.  Cook the values to make them
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index 1f23fe25e8..85996a3edd 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -159,6 +160,14 @@ int hvm_monitor_debug(unsigned long rip, enum hvm_monitor_debug_type type,
     case HVM_MONITOR_SINGLESTEP_BREAKPOINT:
         if ( !ad->monitor.singlestep_enabled )
             return 0;
+        if ( curr->arch.hvm.fast_single_step.enabled )
+        {
+            p2m_altp2m_check(curr, curr->arch.hvm.fast_single_step.p2midx);
+            curr->arch.hvm.single_step = false;
+            curr->arch.hvm.fast_single_step.enabled = false;
+            curr->arch.hvm.fast_single_step.p2midx = 0;
+            return 0;
+        }
         req.reason = VM_EVENT_REASON_SINGLESTEP;
         req.u.singlestep.gfn = gfn_of_rip(rip);
         sync = true;
diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 52c2a71fa0..3788d103f9 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -61,7 +61,8 @@ void vm_event_cleanup_domain(struct domain *d)
 void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
                                 vm_event_response_t *rsp)
 {
-    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP) )
+    if ( !(rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP ||
+           rsp->flags & VM_EVENT_FLAG_FAST_SINGLESTEP) )
         return;
 
     if ( !is_hvm_domain(d) )
@@ -69,7 +70,10 @@ void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
 
     ASSERT(atomic_read(&v->vm_event_pause_count));
 
-    hvm_toggle_singlestep(v);
+    if ( rsp->flags & VM_EVENT_FLAG_TOGGLE_SINGLESTEP )
+        hvm_toggle_singlestep(v);
+    else
+        hvm_fast_singlestep(v, rsp->u.fast_singlestep.p2midx);
 }
 
 void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1d7b66f927..09793c12e9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -323,6 +323,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op);

 /* Caller should pause vcpu before calling this function */
 void hvm_toggle_singlestep(struct vcpu *v);
+void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx);

 int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
                               struct npfec npfec);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 38f5c2bb9b..8b8494 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -172,6 +172,10 @@ struct hvm_vcpu {
     bool                 flag_dr_dirty;
     bool                 debug_state_latch;
     bool                 single_step;
+    struct {
+        bool     enabled;
+        uint16_t p2midx;
+    } fast_single_step;

 struct hvm_vcpu_asid n1asid;

diff --git a/xen/include/public/vm_event.h