[PATCH V1] Fix for Coverity ID: 1461759

2020-04-15 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 

---
CC: Jan Beulich 
CC: Andrew Cooper 
CC: Wei Liu 
CC: "Roger Pau Monné" 
---
 xen/arch/x86/hvm/hvm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6f6f3f73a8..45959d3412 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4798,6 +4798,7 @@ static int do_altp2m_op(
 else
 rc = p2m_set_altp2m_view_visibility(d, idx,
 a.u.set_visibility.visible);
+break;
 }
 
 default:
-- 
2.17.1
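
The defect class behind this fix is a plain switch fall-through: without the added break, control runs out of the HVMOP_altp2m_set_visibility block into the next label and clobbers rc. A minimal stand-alone illustration of the same pattern (hypothetical code, not from the patch):

    #include <stdio.h>

    static int handle(int op)
    {
        int rc = 0;

        switch ( op )
        {
        case 1:
            rc = 42;        /* without the break, execution continues... */
            break;          /* ...into the default label below */

        default:
            rc = -95;       /* e.g. -EOPNOTSUPP: overwrites the real result */
        }

        return rc;
    }

    int main(void)
    {
        printf("%d\n", handle(1)); /* prints 42; -95 if the break is removed */
        return 0;
    }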




[PATCH V8] x86/altp2m: Hypercall to set altp2m view visibility

2020-04-13 Thread Alexandru Isaila
At this moment a guest can call vmfunc to change the altp2m view. This
should be limited in order to avoid any unwanted view switch.

The new xc_altp2m_set_visibility() solves this by making views invisible
to vmfunc.
This is done by having a separate arch.altp2m_visible_eptp that is
populated and made invalid in the same places as altp2m_eptp. This is
written to EPTP_LIST_ADDR.
A view is made invisible by marking its entry with INVALID_MFN, and
visible again by copying the entry back from altp2m_eptp.
For consistency, the visibility restriction also applies to
p2m_switch_domain_altp2m_by_id().

This hypercall is aimed at a dom0 agent that has created a number of
views and, at some point, needs to guarantee that only a subset of them
can be switched to, hiding the rest and making them visible again when
the time is right.

Note: If altp2m mode is set to mixed the guest is able to change the view
visibility and then call vmfunc.
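
A plausible dom0-side usage sketch, assuming an open xc_interface handle and an already-created view 1 (hypothetical helper, error handling trimmed):

    #include <xenctrl.h>

    /* Hide view 1 from vmfunc/switch, reconfigure it, then re-expose it. */
    static int fence_view(xc_interface *xch, uint32_t domid)
    {
        if ( xc_altp2m_set_visibility(xch, domid, 1, false) )
            return -1;

        /* ... adjust the view while the guest cannot switch to it ... */

        return xc_altp2m_set_visibility(xch, domid, 1, true);
    }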

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 
Reviewed-by: Kevin Tian 
---
CC: Ian Jackson 
CC: Wei Liu 
CC: Andrew Cooper 
CC: George Dunlap 
CC: Jan Beulich 
CC: Julien Grall 
CC: Stefano Stabellini 
CC: "Roger Pau Monné" 
CC: Jun Nakajima 
CC: Kevin Tian 
---
Changes since V7:
- Change altp2m_working_eptp to altp2m_visible_eptp
- Rebase.

Changes since V6:
- Update commit message.

Changes since V5:
- Change idx type from uint16_t to unsigned int
- Add rc var and dropped the err return from p2m_get_suppress_ve().

Changes since V4:
- Move p2m specific things from hvm to p2m.c
- Add comment for altp2m_idx bounds check
- Add altp2m_list_lock/unlock().

Changes since V3:
- Change var name from altp2m_idx to idx to shorten line length
- Add bounds check for idx
- Update commit message
- Add comment in xenctrl.h.

Changes since V2:
- Drop hap_enabled() check
- Reduce the indentation depth in hvm.c
- Fix assignment indentation
- Drop pad2.

Changes since V1:
- Drop double view from title.
---
 tools/libxc/include/xenctrl.h   |  7 +++
 tools/libxc/xc_altp2m.c | 24 +++
 xen/arch/x86/hvm/hvm.c  | 14 ++
 xen/arch/x86/hvm/vmx/vmx.c  |  2 +-
 xen/arch/x86/mm/hap/hap.c   | 15 +++
 xen/arch/x86/mm/p2m-ept.c   |  1 +
 xen/arch/x86/mm/p2m.c   | 34 +++--
 xen/include/asm-x86/domain.h|  1 +
 xen/include/asm-x86/p2m.h   |  4 
 xen/include/public/hvm/hvm_op.h |  9 +
 10 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 58fa931de1..5f25c5a6d4 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1943,6 +1943,13 @@ int xc_altp2m_change_gfn(xc_interface *handle, uint32_t 
domid,
  xen_pfn_t new_gfn);
 int xc_altp2m_get_vcpu_p2m_idx(xc_interface *handle, uint32_t domid,
uint32_t vcpuid, uint16_t *p2midx);
+/*
+ * Set view visibility for xc_altp2m_switch_to_view and vmfunc.
+ * Note: If altp2m mode is set to mixed the guest is able to change the view
+ * visibility and then call vmfunc.
+ */
+int xc_altp2m_set_visibility(xc_interface *handle, uint32_t domid,
+ uint16_t view_id, bool visible);
 
 /** 
  * Mem paging operations.
diff --git a/tools/libxc/xc_altp2m.c b/tools/libxc/xc_altp2m.c
index 46fb725806..6987c9541f 100644
--- a/tools/libxc/xc_altp2m.c
+++ b/tools/libxc/xc_altp2m.c
@@ -410,3 +410,27 @@ int xc_altp2m_get_vcpu_p2m_idx(xc_interface *handle, 
uint32_t domid,
 xc_hypercall_buffer_free(handle, arg);
 return rc;
 }
+
+int xc_altp2m_set_visibility(xc_interface *handle, uint32_t domid,
+ uint16_t view_id, bool visible)
+{
+int rc;
+
+DECLARE_HYPERCALL_BUFFER(xen_hvm_altp2m_op_t, arg);
+
+arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
+if ( arg == NULL )
+return -1;
+
+arg->version = HVMOP_ALTP2M_INTERFACE_VERSION;
+arg->cmd = HVMOP_altp2m_set_visibility;
+arg->domain = domid;
+arg->u.set_visibility.altp2m_idx = view_id;
+arg->u.set_visibility.visible = visible;
+
+rc = xencall2(handle->xcall, __HYPERVISOR_hvm_op, HVMOP_altp2m,
+  HYPERCALL_BUFFER_AS_ARG(arg));
+
+xc_hypercall_buffer_free(handle, arg);
+return rc;
+}
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 827c5fa89d..6f6f3f73a8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4509,6 +4509,7 @@ static int do_altp2m_op(
 case HVMOP_altp2m_get_mem_access:
 case HVMOP_altp2m_change_gfn:
 case HVMOP_altp2m_get_p2m_idx:
+case HVMOP_altp2m_set_visibility:
 break;
 
 default:
@@ -4786,6 +4787,19 @@ static int do_altp2m_op(
 break;
 }
 
+case HVMOP_altp2m_set_visibility:
+{
+unsigned int idx = a.u.set_visibility.altp2m_idx;
+
+if ( a.u.set_visibility.pad )
+rc = -EINVAL;
+else
+rc = p2m_set_altp2m_view_visibility(d, idx,
+a.u.set_visibility.visible);
+}

[Xen-devel] [PATCH V7] x86/altp2m: Hypercall to set altp2m view visibility

2020-03-30 Thread Alexandru Isaila
At this moment a guest can call vmfunc to change the altp2m view. This
should be limited in order to avoid any unwanted view switch.

The new xc_altp2m_set_visibility() solves this by making views invisible
to vmfunc.
This is done by having a separate arch.altp2m_working_eptp that is
populated and made invalid in the same places as altp2m_eptp. This is
written to EPTP_LIST_ADDR.
A view is made invisible by marking its entry with INVALID_MFN, and
visible again by copying the entry back from altp2m_eptp.
For consistency, the visibility restriction also applies to
p2m_switch_domain_altp2m_by_id().

This hypercall is aimed at a dom0 agent that has created a number of
views and, at some point, needs to guarantee that only a subset of them
can be switched to, hiding the rest and making them visible again when
the time is right.

Note: If altp2m mode is set to mixed the guest is able to change the view
visibility and then call vmfunc.

Signed-off-by: Alexandru Isaila 
---
CC: Ian Jackson 
CC: Wei Liu 
CC: Andrew Cooper 
CC: George Dunlap 
CC: Jan Beulich 
CC: Julien Grall 
CC: Konrad Rzeszutek Wilk 
CC: Stefano Stabellini 
CC: "Roger Pau Monné" 
CC: Jun Nakajima 
CC: Kevin Tian 
---
Changes since V6:
- Update commit message.

Changes since V5:
- Change idx type from uint16_t to unsigned int
- Add rc var and dropped the err return from p2m_get_suppress_ve().

Changes since V4:
- Move p2m specific things from hvm to p2m.c
- Add comment for altp2m_idx bounds check
- Add altp2m_list_lock/unlock().

Changes since V3:
- Change var name from altp2m_idx to idx to shorten line length
- Add bounds check for idx
- Update commit message
- Add comment in xenctrl.h.

Changes since V2:
- Drop hap_enabled() check
- Reduce the indentation depth in hvm.c
- Fix assignment indentation
- Drop pad2.

Changes since V1:
- Drop double view from title.
---
 tools/libxc/include/xenctrl.h   |  7 +++
 tools/libxc/xc_altp2m.c | 24 +++
 xen/arch/x86/hvm/hvm.c  | 14 ++
 xen/arch/x86/hvm/vmx/vmx.c  |  2 +-
 xen/arch/x86/mm/hap/hap.c   | 15 +++
 xen/arch/x86/mm/p2m-ept.c   |  1 +
 xen/arch/x86/mm/p2m.c   | 34 +++--
 xen/include/asm-x86/domain.h|  1 +
 xen/include/asm-x86/p2m.h   |  4 
 xen/include/public/hvm/hvm_op.h |  9 +
 10 files changed, 108 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index fc6e57a1a0..2e6e652678 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1943,6 +1943,13 @@ int xc_altp2m_change_gfn(xc_interface *handle, uint32_t 
domid,
  xen_pfn_t new_gfn);
 int xc_altp2m_get_vcpu_p2m_idx(xc_interface *handle, uint32_t domid,
uint32_t vcpuid, uint16_t *p2midx);
+/*
+ * Set view visibility for xc_altp2m_switch_to_view and vmfunc.
+ * Note: If altp2m mode is set to mixed the guest is able to change the view
+ * visibility and then call vmfunc.
+ */
+int xc_altp2m_set_visibility(xc_interface *handle, uint32_t domid,
+ uint16_t view_id, bool visible);
 
 /** 
  * Mem paging operations.
diff --git a/tools/libxc/xc_altp2m.c b/tools/libxc/xc_altp2m.c
index 46fb725806..6987c9541f 100644
--- a/tools/libxc/xc_altp2m.c
+++ b/tools/libxc/xc_altp2m.c
@@ -410,3 +410,27 @@ int xc_altp2m_get_vcpu_p2m_idx(xc_interface *handle, 
uint32_t domid,
 xc_hypercall_buffer_free(handle, arg);
 return rc;
 }
+
+int xc_altp2m_set_visibility(xc_interface *handle, uint32_t domid,
+ uint16_t view_id, bool visible)
+{
+int rc;
+
+DECLARE_HYPERCALL_BUFFER(xen_hvm_altp2m_op_t, arg);
+
+arg = xc_hypercall_buffer_alloc(handle, arg, sizeof(*arg));
+if ( arg == NULL )
+return -1;
+
+arg->version = HVMOP_ALTP2M_INTERFACE_VERSION;
+arg->cmd = HVMOP_altp2m_set_visibility;
+arg->domain = domid;
+arg->u.set_visibility.altp2m_idx = view_id;
+arg->u.set_visibility.visible = visible;
+
+rc = xencall2(handle->xcall, __HYPERVISOR_hvm_op, HVMOP_altp2m,
+  HYPERCALL_BUFFER_AS_ARG(arg));
+
+xc_hypercall_buffer_free(handle, arg);
+return rc;
+}
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a3d115b650..375e9cf368 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4511,6 +4511,7 @@ static int do_altp2m_op(
 case HVMOP_altp2m_get_mem_access:
 case HVMOP_altp2m_change_gfn:
 case HVMOP_altp2m_get_p2m_idx:
+case HVMOP_altp2m_set_visibility:
 break;
 
 default:
@@ -4788,6 +4789,19 @@ static int do_altp2m_op(
 break;
 }
 
+case HVMOP_altp2m_set_visibility:
+{
+unsigned int idx = a.u.set_visibility.altp2m_idx;
+
+if ( a.u.set_visibility.pad )
+rc = -EINVAL;
+else
+rc = p2m_set_altp2m_view_visibility(d, idx,
+a.u.set_visibility.visible);
+}

[Xen-devel] [PATCH v1] x86/hvm: Add check for cpu_has_vmx_virt_exceptions

2018-09-25 Thread Alexandru Isaila
This is useful so that HVMOP_altp2m_vcpu_enable_notify fails explicitly
instead of silently succeeding on hardware without #VE support. It also
saves a call to HVMOP_altp2m_set_suppress_ve.
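
From the caller's side the benefit is an explicit error instead of a silently ignored #VE setup; a hedged sketch (signature of xc_altp2m_vcpu_enable_notify assumed from libxc):

    #include <errno.h>
    #include <stdio.h>
    #include <xenctrl.h>

    /* The hypercall now fails loudly on hardware without #VE support. */
    static int enable_ve(xc_interface *xch, uint32_t domid, uint32_t vcpuid,
                         xen_pfn_t gfn)
    {
        if ( xc_altp2m_vcpu_enable_notify(xch, domid, vcpuid, gfn) < 0 )
        {
            if ( errno == EOPNOTSUPP )
                fprintf(stderr, "#VE not supported by this CPU\n");
            return -1;
        }

        return 0;
    }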

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/hvm/hvm.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9a490ef68c..51fc3ec07f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4561,6 +4561,12 @@ static int do_altp2m_op(
 break;
 }
 
+if ( !cpu_has_vmx_virt_exceptions )
+{
+rc = -EOPNOTSUPP;
+break;
+}
+
 v = d->vcpu[a.u.enable_notify.vcpu_id];
 
 if ( !gfn_eq(vcpu_altp2m(v).veinfo_gfn, INVALID_GFN) ||
-- 
2.17.1



[Xen-devel] [PATCH v2] x86/hvm: Change return error for offline vcpus

2018-09-21 Thread Alexandru Isaila
This patch is needed in order to have a different return error for an
invalid vcpu and an offline vcpu on the per-vcpu kind.

Signed-off-by: Alexandru Isaila 

---
Changes since V1:
- Add conditional statement in order to have a difference between
per_vcpu and per_dom return error.
---
 xen/arch/x86/hvm/save.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d520898843..1764fb0918 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -165,7 +165,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
-else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+else if ( rv = hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU ?
+  -ENODATA : -ENOENT, ctxt.cur >= sizeof(*desc) )
 {
 uint32_t off;
 
-- 
2.17.1
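
The rewritten else-if leans on C's comma operator: the ternary picks the tentative error, the assignment to rv happens for its side effect, and only ctxt.cur >= sizeof(*desc) decides the branch. A stand-alone illustration of the idiom:

    #include <stdio.h>

    int main(void)
    {
        int rv = 0, cur = 16, need = 8, per_vcpu = 1;

        /* rv keeps the tentative error only when the condition is false
         * and the body is skipped; -61/-2 stand in for -ENODATA/-ENOENT. */
        if ( 0 )
            ;
        else if ( rv = per_vcpu ? -61 : -2, cur >= need )
            rv = 0; /* enough data: the success path runs and clears rv */

        printf("rv = %d\n", rv);
        return 0;
    }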



[Xen-devel] [PATCH v1] x86/hvm: Change return error for offline vcpus

2018-09-20 Thread Alexandru Isaila
This patch is needed in order to have a different return error for an
invalid vcpu and an offline vcpu.

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/hvm/save.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d520898843..465eb82bc6 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -165,7 +165,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
-else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+else if ( rv = -ENODATA, ctxt.cur >= sizeof(*desc) )
 {
 uint32_t off;
 
-- 
2.17.1



[Xen-devel] [PATCH v2] x86/mm: Suppresses vm_events caused by page-walks

2018-09-12 Thread Alexandru Isaila
The original version of the patch emulated the current instruction
(which, as a side-effect, emulated the page-walk as well), however we
need finer-grained control. We want to emulate the page-walk, but still
get an EPT violation event if the current instruction would trigger one.
This patch performs just the page-walk emulation.
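
A compact stand-alone sketch of the accessed/dirty rule implemented below: set the A bit at every level, remember whether any A bit was newly set, and only then derive the leaf's D bit from the write intent (condition mirrored from the guest_walk_tables() hunk; no locking or real page tables):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_A (1u << 5)
    #define PAGE_D (1u << 6)

    /* Returns true when it newly set the requested bit(s). */
    static bool set_ad_bits(uint32_t *pte, bool set_dirty)
    {
        uint32_t want = PAGE_A | (set_dirty ? PAGE_D : 0);

        if ( (*pte & want) == want )
            return false;
        *pte |= want;
        return true;
    }

    int main(void)
    {
        uint32_t walk[4] = { 0, PAGE_A, 0, 0 }; /* l4..l1 entries */
        bool accessed = false, write = true, wp_enabled = false;
        int lvl;

        for ( lvl = 0; lvl < 3; lvl++ )         /* non-leaf levels: A only */
            if ( set_ad_bits(&walk[lvl], false) )
                accessed = true;

        /* Leaf: D only for a write when no A bit was newly set and
         * CR0.WP is clear -- the same condition as in the patch. */
        set_ad_bits(&walk[3], write && !accessed && !wp_enabled);

        printf("l1 dirty: %s\n", (walk[3] & PAGE_D) ? "yes" : "no");
        return 0;
    }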

Signed-off-by: Alexandru Isaila 

---
Changes since V1:
- Changed guest_walk_tables() to set the A bit on each level and
  check whether any A bit was newly set. If none was, it will set the
  D bit according to the write flags and CR0.WP
---
 xen/arch/x86/mm/guest_walk.c | 23 ++-
 xen/arch/x86/mm/hap/guest_walk.c | 32 +++-
 xen/arch/x86/mm/hap/hap.c| 12 
 xen/arch/x86/mm/hap/private.h| 10 ++
 xen/arch/x86/mm/mem_access.c |  5 -
 xen/arch/x86/mm/shadow/multi.c   |  6 +++---
 xen/include/asm-x86/guest_pt.h   |  3 ++-
 xen/include/asm-x86/paging.h |  5 -
 8 files changed, 84 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index f67aeda3d0..c99c48fa8a 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -82,7 +82,8 @@ static bool set_ad_bits(guest_intpte_t *guest_p, 
guest_intpte_t *walk_p,
 bool
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
   unsigned long va, walk_t *gw,
-  uint32_t walk, mfn_t top_mfn, void *top_map)
+  uint32_t walk, mfn_t top_mfn, void *top_map,
+  bool set_ad)
 {
 struct domain *d = v->domain;
 p2m_type_t p2mt;
@@ -95,6 +96,7 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
 uint32_t gflags, rc;
 unsigned int leaf_level;
 p2m_query_t qt = P2M_ALLOC | P2M_UNSHARE;
+bool accessed = false;
 
 #define AR_ACCUM_AND (_PAGE_USER | _PAGE_RW)
 #define AR_ACCUM_OR  (_PAGE_NX_BIT)
@@ -149,6 +151,10 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
 ar_and &= gflags;
 ar_or  |= gflags;
 
+if ( set_ad && set_ad_bits(&l4p[guest_l4_table_offset(va)].l4,
+   &gw->l4e.l4, false) )
+accessed = true;
+
 /* Map the l3 table */
 l3p = map_domain_gfn(p2m,
  guest_l4e_get_gfn(gw->l4e),
@@ -179,6 +185,10 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
 ar_and &= gflags;
 ar_or  |= gflags;
 
+if ( set_ad && set_ad_bits(&l3p[guest_l3_table_offset(va)].l3,
+   &gw->l3e.l3, false) )
+accessed = true;
+
 if ( gflags & _PAGE_PSE )
 {
 /*
@@ -278,6 +288,10 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
 ar_and &= gflags;
 ar_or  |= gflags;
 
+if ( set_ad && set_ad_bits(&l2p[guest_l2_table_offset(va)].l2,
+   &gw->l2e.l2, false) )
+accessed = true;
+
 if ( gflags & _PAGE_PSE )
 {
 /*
@@ -362,6 +376,13 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
  */
 ar = (ar_and & AR_ACCUM_AND) | (ar_or & AR_ACCUM_OR);
 
+if ( set_ad )
+{
+set_ad_bits(&l1p[guest_l1_table_offset(va)].l1, &gw->l1e.l1,
+(ar & _PAGE_RW) && !accessed && !guest_wp_enabled(v));
+goto out;
+}
+
 /*
  * Sanity check.  If EFER.NX is disabled, _PAGE_NX_BIT is reserved and
  * should have caused a translation failure before we get here.
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index 3b8ee2efce..4cbbf69095 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -29,6 +29,10 @@ asm(".file \"" __OBJECT_FILE__ "\"");
 #define _hap_gva_to_gfn(levels) hap_gva_to_gfn_##levels##_levels
 #define hap_gva_to_gfn(levels) _hap_gva_to_gfn(levels)
 
+#define _hap_page_walk_set_ad_bits(levels) \
+hap_page_walk_set_ad_bits_##levels##_levels
+#define hap_page_walk_set_ad_bits(levels) _hap_page_walk_set_ad_bits(levels)
+
 #define _hap_p2m_ga_to_gfn(levels) hap_p2m_ga_to_gfn_##levels##_levels
 #define hap_p2m_ga_to_gfn(levels) _hap_p2m_ga_to_gfn(levels)
 
@@ -39,6 +43,32 @@ asm(".file \"" __OBJECT_FILE__ "\"");
 #include 
 #include 
 
+void hap_page_walk_set_ad_bits(GUEST_PAGING_LEVELS)(
+struct vcpu *v, struct p2m_domain *p2m,
+unsigned long va, uint32_t walk, unsigned long cr3)
+{
+walk_t gw;
+mfn_t top_mfn;
+void *top_map;
+gfn_t top_gfn;
+struct page_info *top_page;
+p2m_type_t p2mt;
+
+top_gfn = _gfn(cr3 >> PAGE_SHIFT);
+top_page = p2m_get_page_from_gfn(p2m, top_gfn, &p2mt, NULL,
+ P2M_ALLOC | P2M_UNSHARE);
+top_mfn = page_to_mfn(top_page);
+
+/* Map the top-level table and call the tree-walker */
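
The hap_page_walk_set_ad_bits() name above is generated with the same two-step token-pasting trick as the existing hap_gva_to_gfn() wrappers, so the file can be compiled once per GUEST_PAGING_LEVELS value. A stand-alone illustration of the pattern:

    #include <stdio.h>

    /* Two macros are needed so GUEST_PAGING_LEVELS expands *before*
     * the ## pasting happens. */
    #define _walker(levels) page_walk_set_ad_bits_##levels##_levels
    #define walker(levels)  _walker(levels)

    #define GUEST_PAGING_LEVELS 4

    static void walker(GUEST_PAGING_LEVELS)(void)
    {
        printf("A/D walker instantiated for %d levels\n",
               GUEST_PAGING_LEVELS);
    }

    int main(void)
    {
        page_walk_set_ad_bits_4_levels(); /* the pasted name */
        return 0;
    }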

[Xen-devel] [PATCH v20 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-09-10 Thread Alexandru Isaila
This patch is aimed at using the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V17:
- Remove double ;
- Move struct vcpu *v to reduce scope
- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 870042b27f..e059ab4e13 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+struct vcpu *v;
+
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.17.1



[Xen-devel] [PATCH v20 10/13] x86/hvm: Add handler for save_one funcs

2018-09-10 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to 
hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/emul-i8254.c  | 2 +-
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 7 +--
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 8 
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c2b2b6623c..71afc06f9a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -397,6 +397,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index 7f1ded2623..a85dfcccbc 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index cbd1efbc9f..4d8f6da2d9 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -695,7 +695,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1669957f1c..58c03bed15 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -776,6 +776,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1156,8 +1157,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
-  1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt, 1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
save_area) + \
@@ -1508,6 +1509,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1520,6 +1522,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index fe2c2fa06c..9502bae645 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -773,9 +773,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index f3
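
All of these registrations feed one handler table; a minimal sketch of the shape implied by hvm_register_savevm()'s argument list above (field names assumed, not Xen's exact definition):

    /* Per-typecode slot: HVMSR_PER_DOM entries register NULL for save_one
     * (PIT, HPET, IRQ above); HVMSR_PER_VCPU entries provide both. */
    struct sr_handler_sketch {
        const char            *name;
        hvm_save_handler       save;      /* whole-domain entry point */
        hvm_save_vcpu_handler  save_one;  /* new per-vcpu slot */
        hvm_load_handler       load;
        size_t                 size;
        int                    kind;      /* HVMSR_PER_DOM / HVMSR_PER_VCPU */
    };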

[Xen-devel] [PATCH v20 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from a single
vcpu; now that the save functions get their data from a single vcpu, we
can pause that specific vcpu instead of the whole domain.
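
Caller-visible effect, sketched with the libxc wrapper that reaches this domctl (signature of xc_domain_hvm_getcontext_partial assumed from libxc; error handling trimmed): reading one vcpu's record no longer stalls every other vcpu in the domain.

    #include <xenctrl.h>
    #include <xen/hvm/save.h>

    /* With this patch only the targeted vcpu is paused while the
     * hypervisor copies its CPU record out. */
    static int get_vcpu_cpu_ctxt(xc_interface *xch, uint32_t domid,
                                 uint16_t vcpu, struct hvm_hw_cpu *ctxt)
    {
        return xc_domain_hvm_getcontext_partial(xch, domid,
                                                HVM_SAVE_CODE(CPU), vcpu,
                                                ctxt, sizeof(*ctxt));
    }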

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V19:
- Replace d->vcpu[instance] with local variable v.
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 797841e803..2284128e93 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -599,12 +599,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
 &domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 96e77c9e4a..f06c0b31c1 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -156,6 +156,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(v);
+else
+domain_pause(d);
+
 if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -187,6 +192,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(v);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.17.1



[Xen-devel] [PATCH v20 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* to save. It then changes the domain param to vcpu in the
save funcs and adapts print messages in order to match the format of the
other save related messages.

Signed-off-by: Alexandru Isaila 

---
Changes since V19:
- Move v initialization after bound check
- Moved the conditional expression inside the square brackets.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
 xen/arch/x86/emul-i8254.c  |  5 ++-
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 75 +++---
 xen/arch/x86/hvm/irq.c | 15 ---
 xen/arch/x86/hvm/mtrr.c| 22 ++
 xen/arch/x86/hvm/pmtimer.c |  5 ++-
 xen/arch/x86/hvm/rtc.c |  5 ++-
 xen/arch/x86/hvm/save.c| 29 +++--
 xen/arch/x86/hvm/vioapic.c |  5 ++-
 xen/arch/x86/hvm/viridian.c| 23 ++-
 xen/arch/x86/hvm/vlapic.c  | 38 ++---
 xen/arch/x86/hvm/vpic.c|  5 ++-
 xen/include/asm-x86/hvm/save.h |  8 +---
 14 files changed, 64 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 71afc06f9a..f15835e9f6 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,7 +350,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -362,21 +362,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -397,7 +382,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index a85dfcccbc..73be4188ad 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -391,8 +391,9 @@ void pit_stop_channel0_irq(PITState *pit)
 spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 PITState *pit = domain_vpit(d);
 int rc;
 
@@ -438,7 +439,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4d8f6da2d9..be371ecc0b 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -570,16 +570,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+const struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(&hp->lock);
 guest_time = (v->arch.hvm.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -695,7 +696,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 58c03bed15..43145586c5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,7 +731,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm.msr_tsc_adjust,
@@ -740,21 +740,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)

[Xen-devel] [PATCH v20 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index a23d0876c4..2df0127a46 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1030,24 +1030,32 @@ static int viridian_load_domain_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v20 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c198c9190a..b0cf3a836f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,16 +731,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1



[Xen-devel] [PATCH v20 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1013b6ecc4..1669957f1c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1339,69 +1339,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1
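
The reserve-then-shrink dance in hvm_save_cpu_msrs_one() is the generic pattern for variable-length save records: reserve the worst case, fill in only what exists, then either rewrite the descriptor's length or rewind past an empty record. A stand-alone sketch of the same idea on a plain byte buffer:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct desc { uint32_t length; };

    static size_t emit_record(uint8_t *buf, size_t cur,
                              const uint64_t *vals, unsigned int n)
    {
        struct desc *d = (struct desc *)&buf[cur];
        uint8_t *payload = buf + cur + sizeof(*d);
        uint32_t used = 0;
        unsigned int i;

        d->length = n * sizeof(uint64_t);   /* reserve worst case */
        cur += sizeof(*d);

        for ( i = 0; i < n; i++ )
            if ( vals[i] )                  /* skip empty entries */
            {
                memcpy(payload + used, &vals[i], sizeof(vals[i]));
                used += sizeof(vals[i]);
            }

        if ( used )
            d->length = used;               /* shrink to what was used */
        else
            return cur - sizeof(*d);        /* rewind: drop the record */

        return cur + used;
    }

    int main(void)
    {
        uint8_t buf[128];
        uint64_t vals[4] = { 0, 7, 0, 9 };

        printf("stream ends at %zu\n", emit_record(buf, 0, vals, 4));
        return 0;
    }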



[Xen-devel] [PATCH v20 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index de1b5c4614..f3dd972b4a 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -690,52 +690,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+   err = hvm_save_mtrr_msr_one(v, h);
+   if ( err )
+   break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v20 00/13] x86/domctl: Save info for one vcpu instance

2018-09-10 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance. It starts by adding *save_one functions, then introduces a
handler for the new save_one* funcs and makes use of it in the hvm_save
and hvm_save_one funcs. The final patches clean things up and rework
hvm_save_one() while switching domain_pause to vcpu_pause. The common
shape of the refactoring is sketched after the patch list below.

Cheers,

NOTE: Tested with tools/misc/xen-hvmctx, tools/xentrace/xenctx, xl save/restore,
custom hvm_getcontext/partial code and debug the getcontext part for guest boot.

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state
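
The common shape referred to above, with a hypothetical "foo" record (the real patches repeat exactly this split for vmce, tsc_adjust, cpu_ctxt, xsave, msrs, mtrr, viridian and the two lapic records):

    /* Patches 1-9: split out a per-vcpu body... */
    static int foo_save_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_foo ctxt = { /* ... gather v's state ... */ };

        return hvm_save_entry(FOO, v->vcpu_id, h, &ctxt);
    }

    /* ...leaving the old entry point as a thin loop. */
    static int foo_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            err = foo_save_one(v, h);
            if ( err )
                break;
        }

        return err;
    }

    /* Patches 10-13: register foo_save_one alongside foo_save, have
     * hvm_save()/hvm_save_one() call it directly, then drop the loop. */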



[Xen-devel] [PATCH v20 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b0cf3a836f..e1133f64d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -778,119 +778,126 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
+
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
+{
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
+
+return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_hw_cpu ctxt;
-struct segment_register seg;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
-
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-   

[Xen-devel] [PATCH v20 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 ++
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e1133f64d7..1013b6ecc4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1163,35 +1163,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, 
hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t 
*h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.17.1



[Xen-devel] [PATCH v20 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 04702e96c9..31c7a66d01 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1399,23 +1399,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v20 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 31c7a66d01..8b2955365f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1422,26 +1422,30 @@ static int lapic_save_hidden(struct domain *d, 
hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.17.1



[Xen-devel] [PATCH v19 12/13] x86/hvm: Remove redundant save functions

2018-09-10 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* to save. It then changes the domain param to vcpu in the
save funcs and adapts print messages in order to match the format of the
other save related messages.

Signed-off-by: Alexandru Isaila 

---
Changes since V18:
- Add const struct domain to rtc_save and hpet_save
- Latched the vCPU into a local variable in hvm_save_one()
- Add HVMSR_PER_VCPU kind check to the bounds if.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +---
 xen/arch/x86/emul-i8254.c  |  5 ++-
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 75 +++---
 xen/arch/x86/hvm/irq.c | 15 ---
 xen/arch/x86/hvm/mtrr.c| 22 ++
 xen/arch/x86/hvm/pmtimer.c |  5 ++-
 xen/arch/x86/hvm/rtc.c |  5 ++-
 xen/arch/x86/hvm/save.c| 28 +++--
 xen/arch/x86/hvm/vioapic.c |  5 ++-
 xen/arch/x86/hvm/viridian.c| 23 ++-
 xen/arch/x86/hvm/vlapic.c  | 38 ++---
 xen/arch/x86/hvm/vpic.c|  5 ++-
 xen/include/asm-x86/hvm/save.h |  8 +---
 14 files changed, 63 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 71afc06f9a..f15835e9f6 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,7 +350,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -362,21 +362,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -397,7 +382,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index a85dfcccbc..73be4188ad 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -391,8 +391,9 @@ void pit_stop_channel0_irq(PITState *pit)
 spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 PITState *pit = domain_vpit(d);
 int rc;
 
@@ -438,7 +439,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4d8f6da2d9..be371ecc0b 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -570,16 +570,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+const struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(&hp->lock);
 guest_time = (v->arch.hvm.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -695,7 +696,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 58c03bed15..43145586c5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,7 +731,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm.msr_tsc_adjust,
@@ -740,21 +740,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);

[Xen-devel] [PATCH v19 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index a23d0876c4..2df0127a46 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1030,24 +1030,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-10 Thread Alexandru Isaila
This patch changes hvm_save_one() to save a single typecode for a single
vcpu. Now that the save handlers get their data from a single vcpu, we can
pause that specific vcpu instead of the whole domain.
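
The dom0-visible path that benefits is the partial-context domctl reached
through libxc. A minimal caller sketch (the helper name is made up for
illustration; it assumes an open xc_interface handle and an HVM guest, and
trims error handling):

#include <xenctrl.h>
#include <xen/hvm/save.h>

/* Hypothetical example: fetch only vcpu 1's CPU record. With this patch,
 * only vcpu 1 is paused while the record is produced. */
static int dump_vcpu1_cpu_ctxt(xc_interface *xch, uint32_t domid)
{
    struct hvm_hw_cpu ctxt;

    return xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                            1, &ctxt, sizeof(ctxt));
}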

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V15:
- Moved pause/unpause calls into hvm_save_one()
- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 797841e803..2284128e93 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -599,12 +599,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
 &domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index c7e2ecdb9f..403c84da73 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -155,6 +155,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
 if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -186,6 +191,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(d->vcpu[instance]);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1013b6ecc4..1669957f1c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1339,69 +1339,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index de1b5c4614..f3dd972b4a 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -690,52 +690,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+   err = hvm_save_mtrr_msr_one(v, h);
+   if ( err )
+   break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 302e13a14d..c2b2b6623c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -350,6 +350,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 }
 
 #if CONFIG_HVM
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -357,14 +369,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c198c9190a..b0cf3a836f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -731,16 +731,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 10/13] x86/hvm: Add handler for save_one funcs

2018-09-10 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/emul-i8254.c  | 2 +-
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 7 +--
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 8 
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 31 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c2b2b6623c..71afc06f9a 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -397,6 +397,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 #endif
 
diff --git a/xen/arch/x86/emul-i8254.c b/xen/arch/x86/emul-i8254.c
index 7f1ded2623..a85dfcccbc 100644
--- a/xen/arch/x86/emul-i8254.c
+++ b/xen/arch/x86/emul-i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 #endif
 
 void pit_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index cbd1efbc9f..4d8f6da2d9 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -695,7 +695,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1669957f1c..58c03bed15 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -776,6 +776,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1156,8 +1157,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
-  1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt, 1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
save_area) + \
@@ -1508,6 +1509,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1520,6 +1522,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index fe2c2fa06c..9502bae645 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -773,9 +773,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index f3

[Xen-devel] [PATCH v19 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 31c7a66d01..8b2955365f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1422,26 +1422,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.17.1



[Xen-devel] [PATCH v19 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-09-10 Thread Alexandru Isaila
This patch is aimed at using the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V17:
- Remove double ;
- Move struct vcpu *v to reduce scope
- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 870042b27f..e059ab4e13 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+struct vcpu *v;
+
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.17.1



[Xen-devel] [PATCH v19 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b0cf3a836f..e1133f64d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -778,119 +778,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
+
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
+{
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
+
+return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_hw_cpu ctxt;
-struct segment_register seg;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
-
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-ctxt.idtr_limit = seg.limit;

[Xen-devel] [PATCH v19 00/13] x86/domctl: Save info for one vcpu instance

2018-09-10 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance. It starts by adding *save_one functions, then introduces a handler
for the new save_one* funcs and uses it in the hvm_save and hvm_save_one
functions. The final patches clean up the now-redundant code and rework
hvm_save_one() to pause a single vcpu instead of the whole domain.
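
The resulting shape, sketched in miniature (FOO, struct hvm_foo and
foo_save*() are placeholders for illustration, not real Xen symbols; the
real handlers live in the individual device models):

static int foo_save_one(struct vcpu *v, hvm_domain_context_t *h)
{
    struct hvm_foo ctxt = {
        /* ... gather this vcpu's state ... */
    };

    /* One record per vcpu, keyed by vcpu_id. */
    return hvm_save_entry(FOO, v->vcpu_id, h, &ctxt);
}

static int foo_save(struct domain *d, hvm_domain_context_t *h)
{
    struct vcpu *v;
    int err = 0;

    /* The domain-wide handler becomes a thin loop over save_one. */
    for_each_vcpu ( d, v )
    {
        err = foo_save_one(v, h);
        if ( err )
            break;
    }

    return err;
}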

Cheers,

NOTE: Tested with tools/misc/xen-hvmctx, tools/xentrace/xenctx, xl save/restore,
custom hvm_getcontext/partial code and debug the getcontext part for guest boot.

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state



[Xen-devel] [PATCH v19 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 04702e96c9..31c7a66d01 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1399,23 +1399,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.17.1



[Xen-devel] [PATCH v19 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-09-10 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 ++
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e1133f64d7..1013b6ecc4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1163,35 +1163,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.17.1



[Xen-devel] [PATCH v18 12/13] x86/hvm: Remove redundant save functions

2018-09-03 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the save_one*
variants to save. It then changes the domain parameter to vcpu in the save
functions and adapts the print messages to match the format of the other
save-related messages.
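
The net effect on the handler types can be sketched as below (intended
signatures only; the authoritative change is the xen/include/asm-x86/hvm/save.h
hunk listed in the diffstat):

/* Save handlers now take the vcpu; per-domain types recover the domain via
 * v->domain. Load handlers keep their domain-wide signature. */
typedef int (*hvm_save_handler)(struct vcpu *v, hvm_domain_context_t *h);
typedef int (*hvm_load_handler)(struct domain *d, hvm_domain_context_t *h);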

Signed-off-by: Alexandru Isaila 

---
Changes since V17:
- Refit HVM_REGISTER_SAVE_RESTORE(CPU)
- Add const to the added struct domain *d
- Changed the instance bound check from hvm_save_one()
- Update ctxt.size for save_one instance.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +-
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 76 --
 xen/arch/x86/hvm/i8254.c   |  5 +--
 xen/arch/x86/hvm/irq.c | 15 +
 xen/arch/x86/hvm/mtrr.c| 20 ++-
 xen/arch/x86/hvm/pmtimer.c |  5 +--
 xen/arch/x86/hvm/rtc.c |  5 +--
 xen/arch/x86/hvm/save.c| 25 +++---
 xen/arch/x86/hvm/vioapic.c |  5 +--
 xen/arch/x86/hvm/viridian.c| 23 +++--
 xen/arch/x86/hvm/vlapic.c  | 38 +++--
 xen/arch/x86/hvm/vpic.c|  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 59 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(&hp->lock);
 guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..35192ee 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_tsc_adjust_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ct

[Xen-devel] [PATCH v18 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v18 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
-struct hvm_hw_cpu ctxt;
 struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
 
-for_each_vcpu ( d, v )
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
 
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segment_register(v, x86_seg_gdtr, &seg);

[Xen-devel] [PATCH v18 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 +--
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.7.4



[Xen-devel] [PATCH v18 10/13] x86/hvm: Add handler for save_one funcs

2018-09-03 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to
  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 6 +-
 xen/arch/x86/hvm/i8254.c   | 2 +-
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 4 ++--
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 298d7ee..1c4e731 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -822,8 +822,8 @@ static int hvm_

[Xen-devel] [PATCH v18 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 429ffb5..2e73615 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1458,26 +1458,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.7.4



[Xen-devel] [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-09-03 Thread Alexandru Isaila
This patch changes hvm_save_one() to save a single typecode for a single
vcpu. Now that the save handlers get their data from a single vcpu, we can
pause that specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Moved pause/unpause calls into hvm_save_one()
- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..cb53980 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
 &domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d66eb62..fcda226 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -152,6 +152,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
 if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -183,6 +188,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(d->vcpu[instance]);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.7.4



[Xen-devel] [PATCH v18 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-09-03 Thread Alexandru Isaila
This patch is aimed at using the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 

---
Changes since V17:
- Remove double ;
- Move struct vcpu *v to reduce scope
- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..3d08aab 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+struct vcpu *v;
+
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.7.4



[Xen-devel] [PATCH v18 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v18 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..298d7ee 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+err = hvm_save_mtrr_msr_one(v, h);
+if ( err )
+break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v18 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance

2018-09-03 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *save_one functions, then introduces a handler for the
new save_one* funcs and makes use of it in the hvm_save and hvm_save_one funcs.
The final patches clean things up and change hvm_save_one() to use
vcpu_pause instead of domain_pause (see the sketch after the patch list
below).

Cheers,

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state
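
For reference, every *save_one conversion in the series follows the same
shape. A minimal sketch of the pattern (EXAMPLE and hvm_example_ctxt are
hypothetical placeholder names, not code taken from any of the patches):

static int example_save_one(struct vcpu *v, hvm_domain_context_t *h)
{
    struct hvm_example_ctxt ctxt = {
        /* gather this vcpu's state into 'ctxt' */
    };

    return hvm_save_entry(EXAMPLE, v->vcpu_id, h, &ctxt);
}

static int example_save(struct domain *d, hvm_domain_context_t *h)
{
    struct vcpu *v;
    int err = 0;

    for_each_vcpu ( d, v )
    {
        err = example_save_one(v, h);
        if ( err )
            break;
    }

    return err;
}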


[Xen-devel] [PATCH v18 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..429ffb5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v18 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-09-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v1] x86/mm: Suppresses vm_events caused by page-walks

2018-08-24 Thread Alexandru Isaila
The original version of the patch emulated the current instruction
(which, as a side-effect, emulated the page-walk as well); however, we
need finer-grained control. We want to emulate the page-walk, but still
get an EPT violation event if the current instruction would trigger one.
This patch performs just the page-walk emulation.
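
To illustrate, a hypothetical caller sketch (the variable names and the
route through paging_get_hostmode() are assumptions for illustration,
not code from this patch):

    uint32_t ar = 0;

    /* Fills 'ar' with the accumulated access rights of the walk and
     * returns before the usual permission checks are applied. */
    paging_get_hostmode(v)->pte_flags(v, p2m, va, walk,
                                      v->arch.hvm_vcpu.guest_cr[3], &ar);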

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/mm/guest_walk.c |  7 ++-
 xen/arch/x86/mm/hap/guest_walk.c | 32 +++-
 xen/arch/x86/mm/hap/hap.c| 12 
 xen/arch/x86/mm/hap/private.h| 10 ++
 xen/arch/x86/mm/mem_access.c | 15 ++-
 xen/arch/x86/mm/shadow/multi.c   |  6 +++---
 xen/include/asm-x86/guest_pt.h   |  3 ++-
 xen/include/asm-x86/paging.h |  5 -
 8 files changed, 78 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/mm/guest_walk.c b/xen/arch/x86/mm/guest_walk.c
index f67aeda..54140b9 100644
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -82,7 +82,7 @@ static bool set_ad_bits(guest_intpte_t *guest_p, 
guest_intpte_t *walk_p,
 bool
 guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
   unsigned long va, walk_t *gw,
-  uint32_t walk, mfn_t top_mfn, void *top_map)
+  uint32_t walk, mfn_t top_mfn, void *top_map, uint32_t *gf)
 {
 struct domain *d = v->domain;
 p2m_type_t p2mt;
@@ -361,6 +361,11 @@ guest_walk_tables(struct vcpu *v, struct p2m_domain *p2m,
  * see whether the access should succeed.
  */
 ar = (ar_and & AR_ACCUM_AND) | (ar_or & AR_ACCUM_OR);
+if ( gf )
+{
+*gf = ar;
+goto out;
+}
 
 /*
  * Sanity check.  If EFER.NX is disabled, _PAGE_NX_BIT is reserved and
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index cb3f9ce..c916b67 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -29,6 +29,9 @@ asm(".file \"" __OBJECT_FILE__ "\"");
 #define _hap_gva_to_gfn(levels) hap_gva_to_gfn_##levels##_levels
 #define hap_gva_to_gfn(levels) _hap_gva_to_gfn(levels)
 
+#define _hap_pte_flags(levels) hap_pte_flags_##levels##_levels
+#define hap_pte_flags(levels) _hap_pte_flags(levels)
+
 #define _hap_p2m_ga_to_gfn(levels) hap_p2m_ga_to_gfn_##levels##_levels
 #define hap_p2m_ga_to_gfn(levels) _hap_p2m_ga_to_gfn(levels)
 
@@ -39,6 +42,33 @@ asm(".file \"" __OBJECT_FILE__ "\"");
 #include 
 #include 
 
+bool hap_pte_flags(GUEST_PAGING_LEVELS)(
+struct vcpu *v, struct p2m_domain *p2m,
+unsigned long va, uint32_t walk, unsigned long cr3,
+uint32_t *gf)
+{
+walk_t gw;
+mfn_t top_mfn;
+void *top_map;
+gfn_t top_gfn;
+struct page_info *top_page;
+p2m_type_t p2mt;
+
+top_gfn = _gfn(cr3 >> PAGE_SHIFT);
+top_page = p2m_get_page_from_gfn(p2m, top_gfn, &p2mt, NULL,
+ P2M_ALLOC | P2M_UNSHARE);
+top_mfn = page_to_mfn(top_page);
+
+/* Map the top-level table and call the tree-walker */
+ASSERT(mfn_valid(top_mfn));
+top_map = map_domain_page(top_mfn);
+#if GUEST_PAGING_LEVELS == 3
+top_map += (cr3 & ~(PAGE_MASK | 31));
+#endif
+
+return guest_walk_tables(v, p2m, va, &gw, walk, top_mfn, top_map, gf);
+}
+
 unsigned long hap_gva_to_gfn(GUEST_PAGING_LEVELS)(
 struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
 {
@@ -91,7 +121,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(
 #if GUEST_PAGING_LEVELS == 3
 top_map += (cr3 & ~(PAGE_MASK | 31));
 #endif
-walk_ok = guest_walk_tables(v, p2m, ga, &gw, *pfec, top_mfn, top_map);
+walk_ok = guest_walk_tables(v, p2m, ga, &gw, *pfec, top_mfn, top_map,
NULL);
 unmap_domain_page(top_map);
 put_page(top_page);
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 812a840..2da7b63 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -767,7 +767,8 @@ static const struct paging_mode hap_paging_real_mode = {
 .update_cr3 = hap_update_cr3,
 .update_paging_modes= hap_update_paging_modes,
 .write_p2m_entry= hap_write_p2m_entry,
-.guest_levels   = 1
+.guest_levels   = 1,
+.pte_flags  = hap_pte_flags_2_levels
 };
 
 static const struct paging_mode hap_paging_protected_mode = {
@@ -778,7 +779,8 @@ static const struct paging_mode hap_paging_protected_mode = 
{
 .update_cr3 = hap_update_cr3,
 .update_paging_modes= hap_update_paging_modes,
 .write_p2m_entry= hap_write_p2m_entry,
-.guest_levels   = 2
+.guest_levels   = 2,
+.pte_flags  = hap_pte_flags_2_levels
 };
 
 static const struct paging_mode hap_paging_pae_mode = {
@@ -789,7 +791,8 @@ static const struct paging_mode hap_paging_pae_mode = {
 .update_cr3 = hap_upda

[Xen-devel] [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-08-22 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from a single
vcpu. Now that the save functions get data from a single vcpu, we can
pause that specific vcpu instead of the whole domain.
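
In essence, the pause scope now follows the handler kind (a condensed
view of the save.c hunk below, not additional code):

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
        vcpu_pause(d->vcpu[instance]);
    else
        domain_pause(d);

    rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
    /* ... copy the requested instance back to the caller ... */

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
        vcpu_unpause(d->vcpu[instance]);
    else
        domain_unpause(d);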

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Moved pause/unpause calls into hvm_save_one()
- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 12 ++--
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..cb53980 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
&domctl->u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 49741e0..2d35f17 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -149,12 +149,15 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 instance >= d->max_vcpus )
 return -ENOENT;
 ctxt.size = hvm_sr_handlers[typecode].size;
-if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
-ctxt.size *= d->max_vcpus;
 ctxt.data = xmalloc_bytes(ctxt.size);
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
+if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -186,6 +189,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(d->vcpu[instance]);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.7.4



[Xen-devel] [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v16:
- Address style comments.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 80 ++---
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..298d7ee 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+}
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+   sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+err = hvm_save_mtrr_msr_one(v, h);
+if ( err )
+break;
 }
-return 0;
+
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-08-22 Thread Alexandru Isaila
This patch aims to use the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Moved declarations into their scopes
- Remove redundant NULL check
- Remove rc variable
- Change fault return to -ENODATA.
---
 xen/arch/x86/hvm/save.c | 27 +++
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..1eb2b01 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,17 +222,37 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+struct vcpu *v;
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
+
 if ( handler(d, h) != 0 )
 {
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.7.4



[Xen-devel] [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 +--
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, 
hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t 
*h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.7.4



[Xen-devel] [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++---
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 429ffb5..2e73615 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1458,26 +1458,30 @@ static int lapic_save_hidden(struct domain *d, 
hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.7.4



[Xen-devel] [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs

2018-08-22 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to
  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 6 +-
 xen/arch/x86/hvm/i8254.c   | 2 +-
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 4 ++--
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 298d7ee..1c4e731 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -822,8 +822,8 @@ static int hvm_

[Xen-devel] [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v17 12/13] x86/hvm: Remove redundant save functions

2018-08-22 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* functions to save*. It then changes the domain parameter to
vcpu in the save functions.
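
The net effect on each handler's signature (sketch, using the vmce
handler below as the example):

    /* Before: per-domain entry point that loops over all vcpus. */
    static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h);

    /* After: the former save_one variant becomes the handler itself. */
    static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h);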

Signed-off-by: Alexandru Isaila 

---
Changes since V16:
- Drop the "instance = 0" from hvm_save_one
- Changed if ( handler ) to if ( !handler )
- Call handler(d->vcpu[0] for the HVMSR_PER_DOM case.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +--
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 73 +++---
 xen/arch/x86/hvm/i8254.c   |  5 +--
 xen/arch/x86/hvm/irq.c | 15 +
 xen/arch/x86/hvm/mtrr.c| 20 ++--
 xen/arch/x86/hvm/pmtimer.c |  5 +--
 xen/arch/x86/hvm/rtc.c |  5 +--
 xen/arch/x86/hvm/save.c| 26 ---
 xen/arch/x86/hvm/vioapic.c |  5 +--
 xen/arch/x86/hvm/viridian.c| 23 +++--
 xen/arch/x86/hvm/vlapic.c  | 38 +++---
 xen/arch/x86/hvm/vpic.c|  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 60 insertions(+), 193 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(&hp->lock);
 guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..831f86b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_tsc_adjust_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *

[Xen-devel] [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one

2018-08-22 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since v15:
- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..429ffb5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,27 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+if ( !has_vlapic(v->domain) )
+return 0;
+
+return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
-
-if ( !has_vlapic(d) )
-return 0;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance

2018-08-22 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *save_one functions, then introduces a handler for the
new save_one* funcs and makes use of it in the hvm_save and hvm_save_one funcs.
The final patches clean things up and change hvm_save_one() to use
vcpu_pause instead of domain_pause.

Cheers,

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state


[Xen-devel] [PATCH v16 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v16 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v16 11/13] x86/domctl: Use hvm_save_vcpu_handler

2018-08-09 Thread Alexandru Isaila
This patch aims to use the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Moved declarations into their scopes
- Remove redundant NULL check
- Remove rc variable
- Change fault return to -ENODATA.
---
 xen/arch/x86/hvm/save.c | 27 +++
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..1eb2b01 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
 unsigned int i;
 
 if ( d->is_dying )
@@ -223,17 +222,37 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+struct vcpu *v;
+hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+if ( save_one_handler )
+{
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+
+if ( save_one_handler(v, h) != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -ENODATA;
+}
+}
+}
+else if ( handler )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
+
 if ( handler(d, h) != 0 )
 {
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
d->domain_id, i);
-return -EFAULT;
+return -ENODATA;
 }
 }
 }
-- 
2.7.4



[Xen-devel] [PATCH v16 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since v15:
- Drop comments
- Add BUILD_BUG_ON
- memcpy for sizeof().

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 78 ++---
 1 file changed, 41 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..ea0b3f8 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,56 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+}
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+ sizeof(mtrr_state->fixed_ranges));
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges, 
sizeof(hw_mtrr.msr_mtrr_fixed));
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+err = hvm_save_mtrr_msr_one(v, h);
+if ( err )
+break;
 }
-return 0;
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v16 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return
- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 +--
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, 
hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t 
*h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
-for_each_vcpu ( d, v )
-{
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
 
-if ( !xsave_enabled(v) )
-continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)>data[h->cur];
-h->cur += size;
+ctxt = (struct hvm_hw_cpu_xsave *)>data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, >save_area,
-size - offsetof(typeof(*ctxt), save_area));
-}
+expand_xsave_states(v, >save_area,
+size - offsetof(typeof(*ctxt), save_area));
 
 return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
+}
+
+return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.7.4



[Xen-devel] [PATCH v16 00/13] x86/domctl: Save info for one vcpu instance

2018-08-09 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *_save_one functions, then introduces a handler for
the new save_one funcs and uses it in hvm_save() and hvm_save_one().
The final patches clean up, rework hvm_save_one(), and switch
domain_pause() to vcpu_pause(). (A sketch of the resulting handler shape
follows the patch list below.)

Cheers,

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only
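For orientation, the shape of the change across the series, sketched with
the series' own names (the real declarations live in
xen/include/asm-x86/hvm/save.h; this is a sketch, not a verbatim quote):

/* Before: each save handler walks every vcpu of the domain itself. */
typedef int (*hvm_save_handler)(struct domain *d, hvm_domain_context_t *h);

/* After: a handler saves a single vcpu's instance, and hvm_save() /
 * hvm_save_one() do the iteration -- or pause just the one vcpu. */
typedef int (*hvm_save_vcpu_handler)(struct vcpu *v, hvm_domain_context_t *h);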


[Xen-devel] [PATCH v16 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(>data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)>data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(>data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], );
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)>data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], );
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4
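
The shrink-or-rewind bookkeeping above is easy to miss on first read. A
minimal, self-contained model of it in plain C (illustrative types and
names, not the Xen ones; no bounds checking on the stream):

#include <stdint.h>
#include <string.h>

struct desc   { uint32_t length; };
struct stream { uint8_t data[256]; size_t cur; };

static void save_vals(struct stream *h, const uint64_t *vals, unsigned int n)
{
    struct desc *d = (struct desc *)&h->data[h->cur];
    uint8_t *payload;
    unsigned int i, count = 0;

    h->cur += sizeof(*d);          /* reserve a worst-case header up front */
    payload = &h->data[h->cur];

    for ( i = 0; i < n; i++ )
        if ( vals[i] )             /* skip zero entries, like empty MSRs */
            memcpy(payload + 8 * count++, &vals[i], 8);

    if ( count )
    {
        d->length = 8 * count;     /* shrink to the space actually used */
        h->cur += d->length;
    }
    else
        h->cur -= sizeof(*d);      /* rewind: drop the empty record */
}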



[Xen-devel] [PATCH v16 10/13] x86/hvm: Add handler for save_one funcs

2018-08-09 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to
  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 6 +-
 xen/arch/x86/hvm/i8254.c   | 2 +-
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 4 ++--
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index ea0b3f8..3b1de15 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -820,8 +820,8 @@ static int hvm_

[Xen-devel] [PATCH v16 12/13] x86/hvm: Remove redundant save functions

2018-08-09 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* functions to save*. It then changes the domain parameter to
vcpu in the save functions.

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Add if for hvm_sr_handlers[i].kind to separate HVMSR_PER_VCPU
  from HVMSR_PER_DOM
- Add bounds check for instance.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +--
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 73 +++---
 xen/arch/x86/hvm/i8254.c   |  5 +--
 xen/arch/x86/hvm/irq.c | 15 +
 xen/arch/x86/hvm/mtrr.c| 19 ++-
 xen/arch/x86/hvm/pmtimer.c |  5 +--
 xen/arch/x86/hvm/rtc.c |  5 +--
 xen/arch/x86/hvm/save.c| 51 ++---
 xen/arch/x86/hvm/vioapic.c |  5 +--
 xen/arch/x86/hvm/viridian.c| 23 +++--
 xen/arch/x86/hvm/vlapic.c  | 41 +++-
 xen/arch/x86/hvm/vpic.c|  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 72 insertions(+), 208 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, );
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(>lock);
 guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..831f86b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, );
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_tsc_adjust_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct

[Xen-devel] [PATCH v16 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-08-09 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Jan Beulich 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
-struct hvm_hw_cpu ctxt;
 struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
 
-for_each_vcpu ( d, v )
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, );
+
+hvm_get_segment_register(v, x86_seg_idtr, );
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, );
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, );
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, );
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, );
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, );
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, );
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, );
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, );
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, );
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
 
-memset(, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, );
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, );
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segme

[Xen-devel] [PATCH v16 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-08-09 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from one vcpu;
now that the save functions get data from a single vcpu, we can pause
that specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila 

---
Changes since V15:
- Moved pause/unpause calls into hvm_save_one()
- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 12 ++--
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..cb53980 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,10 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
>u.hvmcontext_partial.bufsz);
-domain_unpause(d);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 54abed4..e86430a 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -151,12 +151,15 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_DOM )
 instance = 0;
 ctxt.size = hvm_sr_handlers[typecode].size;
-if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
-ctxt.size *= d->max_vcpus;
 ctxt.data = xmalloc_bytes(ctxt.size);
 if ( !ctxt.data )
 return -ENOMEM;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
 if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], )) != 0 )
 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
d->domain_id, typecode, rv);
@@ -188,6 +191,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 }
 }
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+vcpu_unpause(d->vcpu[instance]);
+else
+domain_unpause(d);
+
 xfree(ctxt.data);
 return rv;
 }
-- 
2.7.4



[Xen-devel] [PATCH v15 09/14] x86/hvm: Introduce lapic_save_regs_one func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/hvm/vlapic.c | 27 +++
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 0795161..d35810e 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1460,26 +1460,37 @@ static int lapic_save_hidden(struct domain *d, 
hvm_domain_context_t *h)
 return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct vlapic *s;
+
+if ( !has_vlapic(v->domain) )
+return 0;
+
+if ( hvm_funcs.sync_pir_to_irr )
+hvm_funcs.sync_pir_to_irr(v);
+
+s = vcpu_vlapic(v);
+
+return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
+int err = 0;
 
 if ( !has_vlapic(d) )
 return 0;
 
 for_each_vcpu ( d, v )
 {
-if ( hvm_funcs.sync_pir_to_irr )
-hvm_funcs.sync_pir_to_irr(v);
-
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+err = lapic_save_regs_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 /*
-- 
2.7.4



[Xen-devel] [PATCH v15 02/14] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, );
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, );
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v15 13/14] x86/hvm: Remove redundant save functions

2018-08-03 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* functions to save*. It then changes the domain parameter to
vcpu in the save functions.

Signed-off-by: Alexandru Isaila 

---
Changes since V14:
- Change vcpu to v
- Remove extra space
- Rename save_one handler to save
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +--
 xen/arch/x86/hvm/hpet.c|  7 ++--
 xen/arch/x86/hvm/hvm.c | 73 +++---
 xen/arch/x86/hvm/i8254.c   |  5 +--
 xen/arch/x86/hvm/irq.c | 15 +
 xen/arch/x86/hvm/mtrr.c| 19 ++-
 xen/arch/x86/hvm/pmtimer.c |  5 +--
 xen/arch/x86/hvm/rtc.c |  5 +--
 xen/arch/x86/hvm/save.c|  9 ++
 xen/arch/x86/hvm/vioapic.c |  5 +--
 xen/arch/x86/hvm/viridian.c| 23 +++--
 xen/arch/x86/hvm/vlapic.c  | 41 +++-
 xen/arch/x86/hvm/vpic.c|  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 49 insertions(+), 189 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, );
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+struct domain *d = v->domain;
 HPETState *hp = domain_vhpet(d);
-struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
 uint64_t guest_time;
 
 if ( !has_vhpet(d) )
 return 0;
 
+v = pt_global_vcpu_target(d);
 write_lock(>lock);
 guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..831f86b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, 
hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, );
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_tsc_adjust_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct segment_register seg;
 struct hvm_hw_cpu ctxt = {
@@ -895,21

[Xen-devel] [PATCH v15 10/14] x86/hvm: Add handler for save_one funcs

2018-08-03 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 

---
Changes since V14:
- Change handler name from hvm_save_one_handler to
  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 6 +-
 xen/arch/x86/hvm/i8254.c   | 2 +-
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 3 +++
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 4 ++--
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t 
*h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, 
hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 2d5af72..d4aa026 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -819,8 +819,8 @@ static int hvm_load_mtrr_

[Xen-devel] [PATCH v15 05/14] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Remove err init
- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++--
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(>data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)>data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(>data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], );
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)>data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], );
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+return 0;
+}
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v15 01/14] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, );
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, 
hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, );
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4
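
The changelog's "Removed the memset and added init with {}" works because
a C designated initializer zero-fills every member it does not name. A
standalone illustration (simplified fields, not the Xen struct):

#include <stdint.h>

struct ctxt { uint64_t caps, bank0, bank1, ext; };

void demo(uint64_t caps)
{
    struct ctxt c = { .caps = caps };  /* bank0, bank1, ext are all 0 */
    (void)c;                           /* no memset(&c, 0, sizeof(c)) needed */
}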



[Xen-devel] [PATCH v15 00/14] x86/domctl: Save info for one vcpu instance

2018-08-03 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *_save_one functions, then introduces a handler for
the new save_one funcs and uses it in hvm_save() and hvm_save_one().
The final patches clean up, rework hvm_save_one(), and switch
domain_pause() to vcpu_pause().

Cheers,

Alexandru Isaila (14):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Drop the use of save functions
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only


[Xen-devel] [PATCH v15 06/14] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since v14:
- Fix style violations
- Use structure fields over cast
- Use memcpy for fixed_ranges.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 77 +
 1 file changed, 40 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..2d5af72 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,55 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = >arch.hvm_vcpu.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = >arch.hvm_vcpu.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, _mtrr.msr_pat_cr);
+
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+/* save physbase */
+hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
+/* save physmask */
+hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
+}
 
-hvm_get_guest_pat(v, _mtrr.msr_pat_cr);
+memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges, sizeof(hw_mtrr.msr_mtrr_fixed));
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+return hvm_save_entry(MTRR, v->vcpu_id, h, _mtrr);
+}
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, _mtrr) != 0 )
-return 1;
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+err = hvm_save_mtrr_msr_one(v, h);
+if ( err )
+break;
 }
-return 0;
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4
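
For reference, the pairwise layout the variable-range loop relies on, in
reduced form (illustrative types; Xen's real definitions live in the mtrr
headers): variable-range MTRR i is the MSR pair (PHYSBASE(i), PHYSMASK(i)),
stored flat in the save record at slots 2*i and 2*i + 1.

#include <stdint.h>

struct var_range { uint64_t base, mask; };

static void pack(uint64_t *msr_var, const struct var_range *r,
                 unsigned int vcnt)
{
    unsigned int i;

    for ( i = 0; i < vcnt; i++ )
    {
        msr_var[i * 2]     = r[i].base;   /* PHYSBASE(i) */
        msr_var[i * 2 + 1] = r[i].mask;   /* PHYSMASK(i) */
    }
}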



[Xen-devel] [PATCH v15 11/14] x86/domctl: Use hvm_save_vcpu_handler

2018-08-03 Thread Alexandru Isaila
This patch makes hvm_save() use the new save_one functions.

Signed-off-by: Alexandru Isaila 

---
Changes since V14:
- Removed the modification from the hvm_save_one
- Removed vcpu init
- Declared rc as int
- Add vcpu id to the log print.
---
 xen/arch/x86/hvm/save.c | 28 ++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..61565fe 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -196,7 +196,10 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 struct hvm_save_header hdr;
 struct hvm_save_end end;
 hvm_save_handler handler;
+hvm_save_vcpu_handler save_one_handler;
 unsigned int i;
+int rc;
+struct vcpu *v;
 
 if ( d->is_dying )
 return -EINVAL;
@@ -224,11 +227,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
 handler = hvm_sr_handlers[i].save;
-if ( handler != NULL )
+save_one_handler = hvm_sr_handlers[i].save_one;
+if ( save_one_handler != NULL )
+{
+for_each_vcpu ( d, v )
+{
+printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+   v, hvm_sr_handlers[i].name);
+rc = save_one_handler(v, h);
+
+if ( rc != 0 )
+{
+printk(XENLOG_G_ERR
+   "HVM %pv save: failed to save type %"PRIu16"\n",
+   v, i);
+return -EFAULT;
+}
+}
+}
+else if ( handler != NULL )
 {
 printk(XENLOG_G_INFO "HVM%d save: %s\n",
d->domain_id, hvm_sr_handlers[i].name);
-if ( handler(d, h) != 0 )
+
+rc = handler(d, h);
+
+if ( rc != 0 )
 {
 printk(XENLOG_G_ERR
"HVM%d save: failed to save type %"PRIu16"\n",
-- 
2.7.4



[Xen-devel] [PATCH v15 08/14] x86/hvm: Introduce lapic_save_hidden_one

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/hvm/vlapic.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..0795161 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,29 @@ static void lapic_rearm(struct vlapic *s)
 s->timer_last_update = s->pt.last_plt_gtime;
 }
 
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
-struct vlapic *s;
-int rc = 0;
+struct vlapic *s = vcpu_vlapic(v);
 
-if ( !has_vlapic(d) )
+if ( !has_vlapic(v->domain) )
 return 0;
 
+return hvm_save_entry(LAPIC, v->vcpu_id, h, >hw);
+}
+
+static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
 for_each_vcpu ( d, v )
 {
-s = vcpu_vlapic(v);
-if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, >hw)) != 0 )
+err = lapic_save_hidden_one(v, h);
+if ( err )
 break;
 }
 
-return rc;
+return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v15 12/14] x86/hvm: Drop the use of save functions

2018-08-03 Thread Alexandru Isaila
This patch drops the use of the per-domain save functions in hvm_save(),
leaving only the per-vcpu save_one handlers.

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/hvm/save.c | 25 -
 1 file changed, 4 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 61565fe..363695c 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,8 +195,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 char *c;
 struct hvm_save_header hdr;
 struct hvm_save_end end;
-hvm_save_handler handler;
-hvm_save_vcpu_handler save_one_handler;
+hvm_save_vcpu_handler handler;
 unsigned int i;
 int rc;
 struct vcpu *v;
@@ -226,15 +225,14 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 /* Save all available kinds of state */
 for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
 {
-handler = hvm_sr_handlers[i].save;
-save_one_handler = hvm_sr_handlers[i].save_one;
-if ( save_one_handler != NULL )
+handler = hvm_sr_handlers[i].save_one;
+if ( handler != NULL )
 {
 for_each_vcpu ( d, v )
 {
 printk(XENLOG_G_INFO "HVM %pv save: %s\n",
v, hvm_sr_handlers[i].name);
-rc = save_one_handler(v, h);
+rc = handler(v, h);
 
 if ( rc != 0 )
 {
@@ -245,21 +243,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
 }
 }
 }
-else if ( handler != NULL )
-{
-printk(XENLOG_G_INFO "HVM%d save: %s\n",
-   d->domain_id, hvm_sr_handlers[i].name);
-
-rc = handler(d, h);
-
-if ( rc != 0 )
-{
-printk(XENLOG_G_ERR
-   "HVM%d save: failed to save type %"PRIu16"\n",
-   d->domain_id, i);
-return -EFAULT;
-}
-}
 }
 
 /* Save an end-of-file marker */
-- 
2.7.4



[Xen-devel] [PATCH v15 03/14] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V14:
- Move all free fields to the initializer
- Add blank line to before the return
- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, 
hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
-struct hvm_hw_cpu ctxt;
 struct segment_register seg;
+struct hvm_hw_cpu ctxt = {
+.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+.msr_tsc_aux = hvm_msr_tsc_aux(v),
+.rax = v->arch.user_regs.rax,
+.rbx = v->arch.user_regs.rbx,
+.rcx = v->arch.user_regs.rcx,
+.rdx = v->arch.user_regs.rdx,
+.rbp = v->arch.user_regs.rbp,
+.rsi = v->arch.user_regs.rsi,
+.rdi = v->arch.user_regs.rdi,
+.rsp = v->arch.user_regs.rsp,
+.rip = v->arch.user_regs.rip,
+.rflags = v->arch.user_regs.rflags,
+.r8  = v->arch.user_regs.r8,
+.r9  = v->arch.user_regs.r9,
+.r10 = v->arch.user_regs.r10,
+.r11 = v->arch.user_regs.r11,
+.r12 = v->arch.user_regs.r12,
+.r13 = v->arch.user_regs.r13,
+.r14 = v->arch.user_regs.r14,
+.r15 = v->arch.user_regs.r15,
+.dr0 = v->arch.debugreg[0],
+.dr1 = v->arch.debugreg[1],
+.dr2 = v->arch.debugreg[2],
+.dr3 = v->arch.debugreg[3],
+.dr6 = v->arch.debugreg[6],
+.dr7 = v->arch.debugreg[7],
+};
 
-for_each_vcpu ( d, v )
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, );
+
+hvm_get_segment_register(v, x86_seg_idtr, );
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, );
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, );
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, );
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, );
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, );
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, );
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, );
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, );
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, );
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
-if ( v->pause_flags & VPF_down )
-continue;
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
 
-memset(, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, );
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, );
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segment_register(v, x86_seg_gdtr, );
-  

[Xen-devel] [PATCH v15 14/14] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-08-03 Thread Alexandru Isaila
This patch changes hvm_save_one() to save one typecode from one vcpu;
now that the save functions get data from a single vcpu, we can pause
that specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/domctl.c   |  4 ++--
 xen/arch/x86/hvm/save.c | 41 +
 2 files changed, 19 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..bd6ba62 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,12 @@ long arch_do_domctl(
  !is_hvm_domain(d) )
 break;
 
-domain_pause(d);
+vcpu_pause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
 ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
domctl->u.hvmcontext_partial.instance,
domctl->u.hvmcontext_partial.buffer,
>u.hvmcontext_partial.bufsz);
-domain_unpause(d);
+vcpu_unpause(d->vcpu[domctl->u.hvmcontext_partial.instance]);
 
 if ( !ret )
 copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 43eb582..28f3b57 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,6 +138,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 int rv;
 hvm_domain_context_t ctxt = { };
 const struct hvm_save_descriptor *desc;
+uint32_t off = 0;
 
 if ( d->is_dying ||
  typecode > HVM_SAVE_CODE_MAX ||
@@ -146,8 +147,6 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
 return -EINVAL;
 
 ctxt.size = hvm_sr_handlers[typecode].size;
-if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
-ctxt.size *= d->max_vcpus;
 ctxt.data = xmalloc_bytes(ctxt.size);
 if ( !ctxt.data )
 return -ENOMEM;
@@ -157,29 +156,23 @@ int hvm_save_one(struct domain *d, unsigned int typecode, 
unsigned int instance,
d->domain_id, typecode, rv);
 else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
 {
-uint32_t off;
-
-for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+desc = (void *)(ctxt.data + off);
+/* Move past header */
+off += sizeof(*desc);
+if ( ctxt.cur < desc->length ||
+ off > ctxt.cur - desc->length )
+rv = -EFAULT;
+if ( instance == desc->instance )
 {
-desc = (void *)(ctxt.data + off);
-/* Move past header */
-off += sizeof(*desc);
-if ( ctxt.cur < desc->length ||
- off > ctxt.cur - desc->length )
-break;
-if ( instance == desc->instance )
-{
-rv = 0;
-if ( guest_handle_is_null(handle) )
-*bufsz = desc->length;
-else if ( *bufsz < desc->length )
-rv = -ENOBUFS;
-else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
-rv = -EFAULT;
-else
-*bufsz = desc->length;
-break;
-}
+rv = 0;
+if ( guest_handle_is_null(handle) )
+*bufsz = desc->length;
+else if ( *bufsz < desc->length )
+rv = -ENOBUFS;
+else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+rv = -EFAULT;
+else
+*bufsz = desc->length;
 }
 }
 
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH v15 07/14] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-08-03 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V14:
- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {
+.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-};
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v1] x86/hvm: Drop hvm_sr_handlers initializer

2018-08-03 Thread Alexandru Isaila
This initializer is flawed and only sets .name of array entry 0
to a non-NULL string.

Signed-off-by: Alexandru Isaila 
Suggested-by: Jan Beulich 
---
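Note: the initializer is also redundant, by standard C
aggregate-initialization rules; a minimal standalone illustration
(hypothetical struct, not the Xen one):

    struct handler { const char *name; size_t size; int kind; };
    static struct handler tbl[4] = { { "" }, };
    /* Only tbl[0].name is set (to the empty string); tbl[0].size,
     * tbl[0].kind and all of tbl[1..3] are zero-initialized -- exactly
     * what plain static storage provides, so the initializer can be
     * dropped without changing behaviour. */
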
 xen/arch/x86/hvm/save.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 8984a23..422b96c 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -89,7 +89,7 @@ static struct {
 const char *name;
 size_t size;
 int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, ""}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1];
 
 /* Init-time function to add entries to that list */
 void __init hvm_register_savevm(uint16_t typecode,
-- 
2.7.4



[Xen-devel] [PATCH v14 11/11] x86/hvm: Remove save_one handler

2018-07-25 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 
---
 xen/arch/x86/cpu/mcheck/vmce.c |  1 -
 xen/arch/x86/hvm/hpet.c|  2 +-
 xen/arch/x86/hvm/hvm.c |  5 +
 xen/arch/x86/hvm/i8254.c   |  2 +-
 xen/arch/x86/hvm/irq.c |  6 +++---
 xen/arch/x86/hvm/mtrr.c|  2 +-
 xen/arch/x86/hvm/pmtimer.c |  2 +-
 xen/arch/x86/hvm/rtc.c |  2 +-
 xen/arch/x86/hvm/save.c| 14 +++---
 xen/arch/x86/hvm/vioapic.c |  2 +-
 xen/arch/x86/hvm/viridian.c|  3 +--
 xen/arch/x86/hvm/vlapic.c  |  4 ++--
 xen/arch/x86/hvm/vpic.c|  2 +-
 xen/include/asm-x86/hvm/save.h |  6 +-
 14 files changed, 18 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index b53ad7c..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -381,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  NULL,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 6e7f744..3ed6547 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -641,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 122552e..6f1dbd8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -770,7 +770,6 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  NULL,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
@@ -1154,7 +1153,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
   hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
@@ -1476,7 +1475,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
-NULL,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1489,7 +1487,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
-NULL,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index d51463d..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -438,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index a405e7f..b37275c 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -767,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index b7c9bd6..d9a4532 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -808,7 +808,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
   hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
diff --gi

[Xen-devel] [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state

2018-07-25 Thread Alexandru Isaila
This patch moves the for loop to the caller, so we can now save info
for a single vcpu instance with the save_one handlers.

Signed-off-by: Alexandru Isaila 

---
Changes since V11:
- Changed the CONTINUE return to return 0.
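
Note: the resulting shape, as a sketch (generic foo_* names; each
per-vcpu save handler in the series follows this split):

    /* worker: saves state for exactly one vcpu */
    static int foo_save_one(struct vcpu *v, hvm_domain_context_t *h);

    /* caller keeps the loop and stops on the first error */
    static int foo_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            err = foo_save_one(v, h);
            if ( err )
                break;
        }

        return err;
    }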
---
 xen/arch/x86/hvm/hvm.c  |  19 ---
 xen/arch/x86/hvm/save.c | 137 +---
 2 files changed, 116 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1246ed5..f140305 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -793,6 +793,14 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 struct segment_register seg;
 struct hvm_hw_cpu ctxt = {};
 
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
+if ( v->pause_flags & VPF_down )
+return 0;
+
+
 /* Architecture-specific vmcs/vmcb bits */
 hvm_funcs.save_cpu_ctxt(v, &ctxt);
 
@@ -897,13 +905,6 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-/*
- * We don't need to save state for a vcpu that is down; the restore
- * code will leave it down if there is nothing saved.
- */
-if ( v->pause_flags & VPF_down )
-continue;
-
 err = hvm_save_cpu_ctxt_one(v, h);
 if ( err )
 break;
@@ -1196,7 +1197,7 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
 int err = 0;
 
-if ( !cpu_has_xsave )
+if ( !cpu_has_xsave || !xsave_enabled(v) )
 return 0;   /* do nothing */
 
 err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
@@ -1221,8 +1222,6 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-if ( !xsave_enabled(v) )
-continue;
 err = hvm_save_cpu_xsave_states_one(v, h);
 if ( err )
 break;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index b674937..d57648d 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
 int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
  XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
 {
-int rv;
+int rv = 0;
 hvm_domain_context_t ctxt = { };
 const struct hvm_save_descriptor *desc;
+bool is_single_instance = false;
+uint32_t off = 0;
+struct vcpu *v;
 
 if ( d->is_dying ||
  typecode > HVM_SAVE_CODE_MAX ||
@@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
  !hvm_sr_handlers[typecode].save )
 return -EINVAL;
 
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+instance < d->max_vcpus )
+is_single_instance = true;
+
 ctxt.size = hvm_sr_handlers[typecode].size;
-if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+instance == d->max_vcpus )
 ctxt.size *= d->max_vcpus;
 ctxt.data = xmalloc_bytes(ctxt.size);
 if ( !ctxt.data )
 return -ENOMEM;
 
-if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
-printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
-   d->domain_id, typecode, rv);
-else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+if ( is_single_instance )
+vcpu_pause(d->vcpu[instance]);
+else
+domain_pause(d);
+
+if ( is_single_instance )
 {
-uint32_t off;
+if ( hvm_sr_handlers[typecode].save_one != NULL )
+rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance], &ctxt);
+else
+rv = hvm_sr_handlers[typecode].save(d, &ctxt);
 
-for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+if ( rv != 0 )
 {
-desc = (void *)(ctxt.data + off);
-/* Move past header */
-off += sizeof(*desc);
-if ( ctxt.cur < desc->length ||
- off > ctxt.cur - desc->length )
-break;
-if ( instance == desc->instance )
-{
-rv = 0;
-if ( guest_handle_is_null(handle) )
-*bufsz = desc->length;
-else if ( *bufsz < desc->length )
-rv = -ENOBUFS;
-else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
-rv = -EFAULT;
-else
-*bu

[Xen-devel] [PATCH v14 10/11] x86/hvm: Remove redundant save functions

2018-07-25 Thread Alexandru Isaila
This patch removes the redundant save functions and renames the
save_one* variants to save. It also changes the domain parameter to
vcpu in the save functions.

Signed-off-by: Alexandru Isaila 

---
Changes since V11:
- Remove enum return type for save funcs.
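
Note: the net effect on the handler signature, in sketch form (the
typedef lives in xen/include/asm-x86/hvm/save.h):

    /* before: the save handler iterated over all vcpus itself */
    typedef int (*hvm_save_handler)(struct domain *d, hvm_domain_context_t *h);

    /* after: it saves exactly one vcpu; callers own the for_each_vcpu() loop */
    typedef int (*hvm_save_handler)(struct vcpu *v, hvm_domain_context_t *h);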
---
 xen/arch/x86/cpu/mcheck/vmce.c | 19 ++-
 xen/arch/x86/hvm/hpet.c|  3 +-
 xen/arch/x86/hvm/hvm.c | 75 +-
 xen/arch/x86/hvm/i8254.c   |  3 +-
 xen/arch/x86/hvm/irq.c |  9 +++--
 xen/arch/x86/hvm/mtrr.c| 19 ++-
 xen/arch/x86/hvm/pmtimer.c |  3 +-
 xen/arch/x86/hvm/rtc.c |  3 +-
 xen/arch/x86/hvm/save.c| 26 +++
 xen/arch/x86/hvm/vioapic.c |  3 +-
 xen/arch/x86/hvm/viridian.c| 22 +++--
 xen/arch/x86/hvm/vlapic.c  | 34 ++-
 xen/arch/x86/hvm/vpic.c|  3 +-
 xen/include/asm-x86/hvm/save.h |  2 +-
 14 files changed, 50 insertions(+), 174 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..b53ad7c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_vmce_vcpu ctxt = {
 .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = vmce_save_vcpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-  vmce_save_vcpu_ctxt_one,
+  NULL,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..6e7f744 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
 {
+struct domain *d = vcpu->domain;
 HPETState *hp = domain_vhpet(d);
 struct vcpu *v = pt_global_vcpu_target(d);
 int rc;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f140305..122552e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct hvm_tsc_adjust ctxt = {
 .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
 return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_tsc_adjust_one(v, h);
-if ( err )
-break;
-}
-
-return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-  hvm_save_tsc_adjust_one,
+  NULL,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
 struct segment_register seg;
 struct hvm_hw_cpu ctxt = {};
@@ -898,20 +883,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-struct vcpu *v;
-int err = 0;
-
-for_each_vcpu ( d, v )
-{
-err = hvm_save_cpu_ctxt_one(v, h);
-if ( err )
-break;
-}
-return err;
-}
-
 /* Return a string indicating the error, or NULL for valid. */
 const char *hvm_efer_valid(const struct vcpu 

[Xen-devel] [PATCH v14 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V12:
- Changed memset to {} init.
---
 xen/arch/x86/hvm/hvm.c | 214 +
 1 file changed, 111 insertions(+), 103 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..720204c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,127 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct segment_register seg;
+struct hvm_hw_cpu ctxt = {};
+
+/* Architecture-specific vmcs/vmcb bits */
+hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+
+ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
+
+hvm_get_segment_register(v, x86_seg_idtr, &seg);
+ctxt.idtr_limit = seg.limit;
+ctxt.idtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+ctxt.gdtr_limit = seg.limit;
+ctxt.gdtr_base = seg.base;
+
+hvm_get_segment_register(v, x86_seg_cs, &seg);
+ctxt.cs_sel = seg.sel;
+ctxt.cs_limit = seg.limit;
+ctxt.cs_base = seg.base;
+ctxt.cs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ds, &seg);
+ctxt.ds_sel = seg.sel;
+ctxt.ds_limit = seg.limit;
+ctxt.ds_base = seg.base;
+ctxt.ds_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_es, &seg);
+ctxt.es_sel = seg.sel;
+ctxt.es_limit = seg.limit;
+ctxt.es_base = seg.base;
+ctxt.es_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ss, &seg);
+ctxt.ss_sel = seg.sel;
+ctxt.ss_limit = seg.limit;
+ctxt.ss_base = seg.base;
+ctxt.ss_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_fs, &seg);
+ctxt.fs_sel = seg.sel;
+ctxt.fs_limit = seg.limit;
+ctxt.fs_base = seg.base;
+ctxt.fs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_gs, &seg);
+ctxt.gs_sel = seg.sel;
+ctxt.gs_limit = seg.limit;
+ctxt.gs_base = seg.base;
+ctxt.gs_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_tr, &seg);
+ctxt.tr_sel = seg.sel;
+ctxt.tr_limit = seg.limit;
+ctxt.tr_base = seg.base;
+ctxt.tr_arbytes = seg.attr;
+
+hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+ctxt.ldtr_sel = seg.sel;
+ctxt.ldtr_limit = seg.limit;
+ctxt.ldtr_base = seg.base;
+ctxt.ldtr_arbytes = seg.attr;
+
+if ( v->fpu_initialised )
+{
+memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+ctxt.flags = XEN_X86_FPU_INITIALISED;
+}
+
+ctxt.rax = v->arch.user_regs.rax;
+ctxt.rbx = v->arch.user_regs.rbx;
+ctxt.rcx = v->arch.user_regs.rcx;
+ctxt.rdx = v->arch.user_regs.rdx;
+ctxt.rbp = v->arch.user_regs.rbp;
+ctxt.rsi = v->arch.user_regs.rsi;
+ctxt.rdi = v->arch.user_regs.rdi;
+ctxt.rsp = v->arch.user_regs.rsp;
+ctxt.rip = v->arch.user_regs.rip;
+ctxt.rflags = v->arch.user_regs.rflags;
+ctxt.r8  = v->arch.user_regs.r8;
+ctxt.r9  = v->arch.user_regs.r9;
+ctxt.r10 = v->arch.user_regs.r10;
+ctxt.r11 = v->arch.user_regs.r11;
+ctxt.r12 = v->arch.user_regs.r12;
+ctxt.r13 = v->arch.user_regs.r13;
+ctxt.r14 = v->arch.user_regs.r14;
+ctxt.r15 = v->arch.user_regs.r15;
+ctxt.dr0 = v->arch.debugreg[0];
+ctxt.dr1 = v->arch.debugreg[1];
+ctxt.dr2 = v->arch.debugreg[2];
+ctxt.dr3 = v->arch.debugreg[3];
+ctxt.dr6 = v->arch.debugreg[6];
+ctxt.dr7 = v->arch.debugreg[7];
+
+return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_hw_cpu ctxt;
-struct segment_register seg;
+int err = 0;
 
 for_each_vcpu ( d, v )
 {
-/* We don't need to save state for a vcpu that is down; the restore 
- * code will leave it down if there is nothing saved. */
+/*
+ * We don't need to save state for a vcpu that is down; the restore
+ * code will leave it down if there is nothing saved.
+ */
 if ( v->pause_flags & VPF_down )
 continue;
 
-memset(&ctxt, 0, sizeof(ctxt));
-
-/* Architecture-specific vmcs/vmcb bits */
-hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-hvm_get_segment_register(v, x86_seg_idtr, &seg);
-ctxt.idtr_limit = seg.limit;
-ctxt.idtr_base = seg.base;
-
-hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-ctxt.gdtr_limit = seg.limit;
-ctxt.gdtr_base

[Xen-devel] [PATCH v14 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since v11:
- hvm_save_mtrr_msr() now returns err from hvm_save_mtrr_msr_one().

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 81 +++--
 1 file changed, 44 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..47a5c29 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,59 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+struct hvm_hw_mtrr hw_mtrr = {
+.msr_mtrr_def_type = mtrr_state->def_type |
+ MASK_INSR(mtrr_state->fixed_enabled,
+   MTRRdefType_FE) |
+ MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+.msr_mtrr_cap  = mtrr_state->mtrr_cap,
+};
+unsigned int i;
 
-/* save mtrr */
-for_each_vcpu(d, v)
+if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+ (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
 {
-const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-struct hvm_hw_mtrr hw_mtrr = {
-.msr_mtrr_def_type = mtrr_state->def_type |
- MASK_INSR(mtrr_state->fixed_enabled,
-   MTRRdefType_FE) |
- MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-.msr_mtrr_cap  = mtrr_state->mtrr_cap,
-};
-unsigned int i;
+dprintk(XENLOG_G_ERR,
+"HVM save: %pv: too many (%lu) variable range MTRRs\n",
+v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+return -EINVAL;
+}
 
-if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
- (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-{
-dprintk(XENLOG_G_ERR,
-"HVM save: %pv: too many (%lu) variable range MTRRs\n",
-v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-return -EINVAL;
-}
+hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
 
-hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+{
+/* save physbase */
+hw_mtrr.msr_mtrr_var[i*2] =
+((uint64_t*)mtrr_state->var_ranges)[i*2];
+/* save physmask */
+hw_mtrr.msr_mtrr_var[i*2+1] =
+((uint64_t*)mtrr_state->var_ranges)[i*2+1];
+}
 
-for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-{
-/* save physbase */
-hw_mtrr.msr_mtrr_var[i*2] =
-((uint64_t*)mtrr_state->var_ranges)[i*2];
-/* save physmask */
-hw_mtrr.msr_mtrr_var[i*2+1] =
-((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-}
+for ( i = 0; i < NUM_FIXED_MSR; i++ )
+hw_mtrr.msr_mtrr_fixed[i] =
+((uint64_t*)mtrr_state->fixed_ranges)[i];
 
-for ( i = 0; i < NUM_FIXED_MSR; i++ )
-hw_mtrr.msr_mtrr_fixed[i] =
-((uint64_t*)mtrr_state->fixed_ranges)[i];
+return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
 
-if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-return 1;
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
+/* save mtrr */
+for_each_vcpu(d, v)
+{
+   err = hvm_save_mtrr_msr_one(v, h);
+   if ( err )
+   break;
 }
-return 0;
+return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs

2018-07-25 Thread Alexandru Isaila
Signed-off-by: Alexandru Isaila 

---
Changes since V8:
- Add comment for the handler return values.
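
Note: a sketch of a registration once the save_one slot exists
(arguments follow the hunks below; HVMSR_PER_DOM entries simply pass
NULL for the per-vcpu handler):

    hvm_register_savevm(CPU_XSAVE_CODE,
                        "CPU_XSAVE",
                        hvm_save_cpu_xsave_states,      /* all vcpus */
                        hvm_save_cpu_xsave_states_one,  /* one vcpu, or NULL */
                        hvm_load_cpu_xsave_states,
                        HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                        sizeof(struct hvm_save_descriptor),
                        HVMSR_PER_VCPU);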
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c| 2 +-
 xen/arch/x86/hvm/hvm.c | 6 +-
 xen/arch/x86/hvm/i8254.c   | 2 +-
 xen/arch/x86/hvm/irq.c | 6 +++---
 xen/arch/x86/hvm/mtrr.c| 4 ++--
 xen/arch/x86/hvm/pmtimer.c | 2 +-
 xen/arch/x86/hvm/rtc.c | 2 +-
 xen/arch/x86/hvm/save.c| 5 -
 xen/arch/x86/hvm/vioapic.c | 2 +-
 xen/arch/x86/hvm/viridian.c| 3 ++-
 xen/arch/x86/hvm/vlapic.c  | 4 ++--
 xen/arch/x86/hvm/vpic.c| 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +-
 14 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+  vmce_save_vcpu_ctxt_one,
   vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a18b9a6..1246ed5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+  hvm_save_tsc_adjust_one,
   hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1181,7 +1182,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+  hvm_load_cpu_ctxt,
   1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1534,6 +1536,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_XSAVE_CODE,
 "CPU_XSAVE",
 hvm_save_cpu_xsave_states,
+hvm_save_cpu_xsave_states_one,
 hvm_load_cpu_xsave_states,
 HVM_CPU_XSAVE_SIZE(xfeature_mask) +
 sizeof(struct hvm_save_descriptor),
@@ -1546,6 +1549,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
 hvm_register_savevm(CPU_MSR_CODE,
 "CPU_MSR",
 hvm_save_cpu_msrs,
+hvm_save_cpu_msrs_one,
 hvm_load_cpu_msrs,
 HVM_CPU_MSR_SIZE(msr_count_max) +
 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
 return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
   1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
   1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 47a5c29..1cb2a2e 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -823,8 +823,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)

[Xen-devel] [PATCH v14 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V13:
- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
 destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_tsc_adjust ctxt = {
+.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+};
+
+return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
-struct hvm_tsc_adjust ctxt;
 int err = 0;
 
 for_each_vcpu ( d, v )
 {
-ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+err = hvm_save_tsc_adjust_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v14 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V11:
- hvm_save_cpu_xsave_states_one() returns the err from
  _hvm_init_entry().
- hvm_save_cpu_xsave_states() returns err from
  hvm_save_cpu_xsave_states_one();
---
 xen/arch/x86/hvm/hvm.c | 42 +++---
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 720204c..a6708f5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1188,33 +1188,45 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
save_area) + \
   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
 struct hvm_hw_cpu_xsave *ctxt;
+unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+int err = 0;
 
 if ( !cpu_has_xsave )
 return 0;   /* do nothing */
 
+err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+if ( err )
+return err;
+
+ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+h->cur += size;
+ctxt->xfeature_mask = xfeature_mask;
+ctxt->xcr0 = v->arch.xcr0;
+ctxt->xcr0_accum = v->arch.xcr0_accum;
+
+expand_xsave_states(v, &ctxt->save_area,
+size - offsetof(typeof(*ctxt), save_area));
+return 0;
+}
+
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
+
 for_each_vcpu ( d, v )
 {
-unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
-
 if ( !xsave_enabled(v) )
 continue;
-if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-return 1;
-ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-h->cur += size;
-
-ctxt->xfeature_mask = xfeature_mask;
-ctxt->xcr0 = v->arch.xcr0;
-ctxt->xcr0_accum = v->arch.xcr0_accum;
-expand_xsave_states(v, &ctxt->save_area,
-size - offsetof(typeof(*ctxt), save_area));
+err = hvm_save_cpu_xsave_states_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 /*
-- 
2.7.4



[Xen-devel] [PATCH v14 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V11:
- hvm_save_cpu_msrs() returns err from hvm_save_cpu_msrs_one().
---
 xen/arch/x86/hvm/hvm.c | 105 +++--
 1 file changed, 58 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a6708f5..a18b9a6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1366,69 +1366,80 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+struct hvm_msr *ctxt;
+unsigned int i;
+int err = 0;
 
-for_each_vcpu ( d, v )
+err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+ HVM_CPU_MSR_SIZE(msr_count_max));
+if ( err )
+return err;
+ctxt = (struct hvm_msr *)&h->data[h->cur];
+ctxt->count = 0;
+
+for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
 {
-struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-struct hvm_msr *ctxt;
-unsigned int i;
+uint64_t val;
+int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
- HVM_CPU_MSR_SIZE(msr_count_max)) )
-return 1;
-ctxt = (struct hvm_msr *)&h->data[h->cur];
-ctxt->count = 0;
+/*
+ * It is the programmers responsibility to ensure that
+ * msrs_to_send[] contain generally-read/write MSRs.
+ * X86EMUL_EXCEPTION here implies a missing feature, and that the
+ * guest doesn't have access to the MSR.
+ */
+if ( rc == X86EMUL_EXCEPTION )
+continue;
 
-for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+if ( rc != X86EMUL_OKAY )
 {
-uint64_t val;
-int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+ASSERT_UNREACHABLE();
+return -ENXIO;
+}
 
-/*
- * It is the programmers responsibility to ensure that
- * msrs_to_send[] contain generally-read/write MSRs.
- * X86EMUL_EXCEPTION here implies a missing feature, and that the
- * guest doesn't have access to the MSR.
- */
-if ( rc == X86EMUL_EXCEPTION )
-continue;
+if ( !val )
+continue; /* Skip empty MSRs. */
 
-if ( rc != X86EMUL_OKAY )
-{
-ASSERT_UNREACHABLE();
-return -ENXIO;
-}
+ctxt->msr[ctxt->count].index = msrs_to_send[i];
+ctxt->msr[ctxt->count++].val = val;
+}
 
-if ( !val )
-continue; /* Skip empty MSRs. */
+if ( hvm_funcs.save_msr )
+hvm_funcs.save_msr(v, ctxt);
 
-ctxt->msr[ctxt->count].index = msrs_to_send[i];
-ctxt->msr[ctxt->count++].val = val;
-}
+ASSERT(ctxt->count <= msr_count_max);
 
-if ( hvm_funcs.save_msr )
-hvm_funcs.save_msr(v, ctxt);
+for ( i = 0; i < ctxt->count; ++i )
+ctxt->msr[i]._rsvd = 0;
 
-ASSERT(ctxt->count <= msr_count_max);
+if ( ctxt->count )
+{
+/* Rewrite length to indicate how much space we actually used. */
+desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+}
+else
+/* or rewind and remove the descriptor from the stream. */
+h->cur -= sizeof(struct hvm_save_descriptor);
+return 0;
+}
 
-for ( i = 0; i < ctxt->count; ++i )
-ctxt->msr[i]._rsvd = 0;
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( ctxt->count )
-{
-/* Rewrite length to indicate how much space we actually used. */
-desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-}
-else
-/* or rewind and remove the descriptor from the stream. */
-h->cur -= sizeof(struct hvm_save_descriptor);
+for_each_vcpu ( d, v )
+{
+err = hvm_save_cpu_msrs_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v14 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 

---
Changes since V11:
- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
 return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+struct hvm_vmce_vcpu ctxt = {
+.caps = v->arch.vmce.mcg_cap,
+.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+};
+
+return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
 struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
 for_each_vcpu ( d, v )
 {
-struct hvm_vmce_vcpu ctxt = {
-.caps = v->arch.vmce.mcg_cap,
-.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-};
-
-err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+err = vmce_save_vcpu_ctxt_one(v, h);
 if ( err )
 break;
 }
-- 
2.7.4



[Xen-devel] [PATCH v14 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func

2018-07-25 Thread Alexandru Isaila
This is used to save data from a single instance.

Signed-off-by: Alexandru Isaila 
Reviewed-by: Paul Durrant 

---
Changes since V13:
- Fixed style.
---
 xen/arch/x86/hvm/viridian.c | 30 +++---
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..5e3b8ac 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
   viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-struct vcpu *v;
+struct hvm_viridian_vcpu_context ctxt = {};
 
-if ( !is_viridian_domain(d) )
+if ( !is_viridian_domain(v->domain) )
 return 0;
 
-for_each_vcpu( d, v ) {
-struct hvm_viridian_vcpu_context ctxt = {
-.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-};
+ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
+
+return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+struct vcpu *v;
+int err = 0;
 
-if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-return 1;
+for_each_vcpu ( d, v )
+{
+err = viridian_save_vcpu_ctxt_one(v, h);
+if ( err )
+break;
 }
 
-return 0;
+return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



[Xen-devel] [PATCH v14 00/11] x86/domctl: Save info for one vcpu instance

2018-07-25 Thread Alexandru Isaila
Hi all,

This patch series addresses the idea of saving data from a single vcpu
instance.
It starts by adding *_save_one functions, then introduces a handler for
the new save_one funcs and makes use of it in the hvm_save and
hvm_save_one funcs.
The final two patches are clean-up: the first removes the old save*
funcs and renames the save_one* variants to save; the last removes the
save_one handler, which is no longer used.
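
As a sketch of the end state in hvm_save_one() (simplified from patch 9;
error handling and the fallback for handlers without a save_one func are
elided):

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance < d->max_vcpus )
    {
        vcpu_pause(d->vcpu[instance]);  /* pause one vcpu, not the domain */
        rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance], &ctxt);
        vcpu_unpause(d->vcpu[instance]);
    }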

Cheers,

Alexandru Isaila (11):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
x86/hvm: Add handler for save_one funcs
x86/domctl: Don't pause the whole domain if only getting vcpu state
x86/hvm: Remove redundant save functions
x86/hvm: Remove save_one handler

