Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-26 Thread Jan Beulich
>>> On 17.10.17 at 15:24,  wrote:
> @@ -777,6 +887,52 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>      return rc;
>  }
>  
> +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,

There's another silent truncation issue here: Iirc ioservid_t is 16 bits.
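
To make the concern concrete, here is a minimal, hypothetical C sketch (not
part of the patch) of the silent narrowing: a 32-bit id implicitly truncates
when passed to a 16-bit ioservid_t parameter.

#include <stdint.h>
#include <stdio.h>

typedef uint16_t ioservid_t;          /* 16 bits, as in the public headers */

/* Hypothetical stand-in for hvm_get_ioreq_server_frame(); body irrelevant. */
static int get_frame(ioservid_t id)
{
    printf("looked up ioreq server %u\n", id);
    return 0;
}

int main(void)
{
    unsigned int id = 0x10002;        /* upper half non-zero */

    /* Implicit conversion truncates to 0x0002 -- a different server id. */
    return get_frame(id);
}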

> @@ -3866,6 +3867,27 @@ int xenmem_add_to_physmap_one(
>      return rc;
>  }
>  
> +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> +                                unsigned long frame,
> +                                unsigned int nr_frames,
> +                                unsigned long mfn_list[])
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < nr_frames; i++ )
> +    {
> +        mfn_t mfn;
> +        int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);

The caller here holds a 32-bit quantity in its hands, though. With
the upper half suitably checked in either place:

Reviewed-by: Jan Beulich 

Jan




Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-19 Thread Paul Durrant
> -Original Message-
> From: Julien Grall [mailto:julien.gr...@linaro.org]
> Sent: 19 October 2017 14:08
> To: Paul Durrant <paul.durr...@citrix.com>; xen-de...@lists.xenproject.org
> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> <wei.l...@citrix.com>; Konrad Rzeszutek Wilk <konrad.w...@oracle.com>;
> George Dunlap <george.dun...@citrix.com>; Andrew Cooper
> <andrew.coop...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>; Tim
> (Xen.org) <t...@xen.org>; Jan Beulich <jbeul...@suse.com>
> Subject: Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> Hi Paul,
> 
> On 10/19/2017 01:58 PM, Paul Durrant wrote:
> >> -Original Message-
> >> From: Julien Grall [mailto:julien.gr...@linaro.org]
> >> Sent: 19 October 2017 13:31
> >> To: Paul Durrant <paul.durr...@citrix.com>; xen-
> de...@lists.xenproject.org
> >> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> >> <wei.l...@citrix.com>; Konrad Rzeszutek Wilk
> <konrad.w...@oracle.com>;
> >> George Dunlap <george.dun...@citrix.com>; Andrew Cooper
> >> <andrew.coop...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>;
> Tim
> >> (Xen.org) <t...@xen.org>; Jan Beulich <jbeul...@suse.com>
> >> Subject: Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new
> >> mappable resource type...
> >>
> >> Hi,
> >>
> >> On 17/10/17 14:24, Paul Durrant wrote:
> >>> diff --git a/xen/common/memory.c b/xen/common/memory.c
> >>> index cdd2e030cf..b27a71c4f1 100644
> >>> --- a/xen/common/memory.c
> >>> +++ b/xen/common/memory.c
> >>> @@ -1011,6 +1011,11 @@ static int acquire_resource(
> >>>
> >>>      switch ( xmar.type )
> >>>      {
> >>> +    case XENMEM_resource_ioreq_server:
> >>> +        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
> >>> +                                         xmar.nr_frames, mfn_list);
> >>> +        break;
> >>
> >> I fully appreciate you are not able to test on x86. However, I would
> >> have expected you to at least build test it.
> >>
> >
> > I don't actually know how to set up cross-compilation, which is why I'd
> originally #ifdef-ed this.
> 
> It is quite trivial. See
> https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions#
> Cross_Compiling
> 

Oh, that's way more trivial than I'd imagined. Thanks!

  Paul

> --
> Julien Grall


Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-19 Thread Julien Grall

Hi Paul,

On 10/19/2017 01:58 PM, Paul Durrant wrote:

>> -Original Message-
>> From: Julien Grall [mailto:julien.gr...@linaro.org]
>> Sent: 19 October 2017 13:31
>> To: Paul Durrant <paul.durr...@citrix.com>; xen-de...@lists.xenproject.org
>> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
>> <wei.l...@citrix.com>; Konrad Rzeszutek Wilk <konrad.w...@oracle.com>;
>> George Dunlap <george.dun...@citrix.com>; Andrew Cooper
>> <andrew.coop...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>; Tim
>> (Xen.org) <t...@xen.org>; Jan Beulich <jbeul...@suse.com>
>> Subject: Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new
>> mappable resource type...
>>
>> Hi,
>>
>> On 17/10/17 14:24, Paul Durrant wrote:
>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>> index cdd2e030cf..b27a71c4f1 100644
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -1011,6 +1011,11 @@ static int acquire_resource(
>>>
>>>      switch ( xmar.type )
>>>      {
>>> +    case XENMEM_resource_ioreq_server:
>>> +        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
>>> +                                         xmar.nr_frames, mfn_list);
>>> +        break;
>>
>> I fully appreciate you are not able to test on x86. However, I would
>> have expected you to at least build test it.
>>
>
> I don't actually know how to set up cross-compilation, which is why I'd
> originally #ifdef-ed this.


It is quite trivial. See 
https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions#Cross_Compiling


--
Julien Grall



Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-19 Thread Paul Durrant
> -Original Message-
> From: Julien Grall [mailto:julien.gr...@linaro.org]
> Sent: 19 October 2017 13:31
> To: Paul Durrant <paul.durr...@citrix.com>; xen-de...@lists.xenproject.org
> Cc: Stefano Stabellini <sstabell...@kernel.org>; Wei Liu
> <wei.l...@citrix.com>; Konrad Rzeszutek Wilk <konrad.w...@oracle.com>;
> George Dunlap <george.dun...@citrix.com>; Andrew Cooper
> <andrew.coop...@citrix.com>; Ian Jackson <ian.jack...@citrix.com>; Tim
> (Xen.org) <t...@xen.org>; Jan Beulich <jbeul...@suse.com>
> Subject: Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
> 
> Hi,
> 
> On 17/10/17 14:24, Paul Durrant wrote:
> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> > index cdd2e030cf..b27a71c4f1 100644
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -1011,6 +1011,11 @@ static int acquire_resource(
> >
> >      switch ( xmar.type )
> >      {
> > +    case XENMEM_resource_ioreq_server:
> > +        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
> > +                                         xmar.nr_frames, mfn_list);
> > +        break;
> 
> I fully appreciate you are not able to test on x86. However, I would
> have expected you to at least build test it.
> 

I don't actually know how to set up cross-compilation, which is why I'd 
originally #ifdef-ed this.

> For instance, here you introduced this function on x86, call in common
> code but does not introduce it on Arm.
> 
> Although, I don't think we should introduce it for Arm. Instead we
> should provide arch helpers as we do for other memory operations.
> 

That would be cleaner in this case.
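
For illustration, a rough sketch of the kind of per-arch helper split being
suggested; the names and signatures below are hypothetical, not what the
series currently implements:

#include <errno.h>

struct domain;  /* opaque here; defined by the hypervisor */

/*
 * Hypothetical per-arch hook: each architecture decides which resource
 * types it can hand out, so common code no longer calls an x86-only
 * function directly.
 */
int arch_acquire_resource(struct domain *d, unsigned int type,
                          unsigned int id, unsigned long frame,
                          unsigned int nr_frames, unsigned long mfn_list[]);

/* Common code (e.g. xen/common/memory.c) would then just dispatch: */
static int acquire_resource_frames(struct domain *d, unsigned int type,
                                   unsigned int id, unsigned long frame,
                                   unsigned int nr_frames,
                                   unsigned long mfn_list[])
{
    return arch_acquire_resource(d, type, id, frame, nr_frames, mfn_list);
}

/* Arm stub: no mappable resource types supported, so fail cleanly. */
int arch_acquire_resource(struct domain *d, unsigned int type,
                          unsigned int id, unsigned long frame,
                          unsigned int nr_frames, unsigned long mfn_list[])
{
    (void)d; (void)type; (void)id; (void)frame;
    (void)nr_frames; (void)mfn_list;
    return -EOPNOTSUPP;
}

The x86 implementation of such a hook would forward to
hvm_get_ioreq_server_frame(), while other architectures keep a trivial stub.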

  Paul

> Cheers,
> 
> --
> Julien Grall


Re: [Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-19 Thread Julien Grall

Hi,

On 17/10/17 14:24, Paul Durrant wrote:

> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index cdd2e030cf..b27a71c4f1 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1011,6 +1011,11 @@ static int acquire_resource(
>
>      switch ( xmar.type )
>      {
> +    case XENMEM_resource_ioreq_server:
> +        rc = xenmem_acquire_ioreq_server(d, xmar.id, xmar.frame,
> +                                         xmar.nr_frames, mfn_list);
> +        break;


I fully appreciate you are not able to test on x86. However, I would 
have expected you to at least build test it.


For instance, here you introduced this function on x86, call in common 
code but does not introduce it on Arm.


Although, I don't think we should introduce it for Arm. Instead we 
should provide arch helpers as we do for other memory operations.


Cheers,

--
Julien Grall



[Xen-devel] [PATCH v12 06/11] x86/hvm/ioreq: add a new mappable resource type...

2017-10-17 Thread Paul Durrant
... XENMEM_resource_ioreq_server

This patch adds support for a new resource type that can be mapped using
the XENMEM_acquire_resource memory op.

If an emulator makes use of this resource type then, instead of mapping
gfns, the IOREQ server will allocate pages from the heap. These pages
will never be present in the P2M of the guest at any point and so are
not vulnerable to any direct attack by the guest. They are only ever
accessible by Xen and any domain that has mapping privilege over the
guest (which may or may not be limited to the domain running the emulator).

NOTE: Use of the new resource type is not compatible with use of
  XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
  set.
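
For illustration only, a sketch of how an emulator might consume this
resource via the companion libxenforeignmemory wrapper added elsewhere in
this series; the exact function signature, header names and frame-index
layout below are assumptions, not the authoritative interface:

/* Sketch only: xenforeignmemory_map_resource()'s signature and the frame
 * numbering are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xen/memory.h>                 /* XENMEM_resource_ioreq_server */

static void *map_ioreq_pages(xenforeignmemory_handle *fmem, uint16_t domid,
                             uint16_t ioservid, unsigned long nr_frames,
                             xenforeignmemory_resource_handle **res)
{
    void *addr = NULL;

    /* Assumed layout: frame 0 is the bufioreq page (if any), followed by
     * the synchronous ioreq page(s). The pages come from Xen's heap and
     * are never present in the guest's P2M. */
    *res = xenforeignmemory_map_resource(fmem, domid,
                                         XENMEM_resource_ioreq_server,
                                         ioservid, 0 /* first frame */,
                                         nr_frames, &addr,
                                         PROT_READ | PROT_WRITE, 0);
    if ( !*res )
    {
        perror("xenforeignmemory_map_resource");
        return NULL;
    }

    return addr;
}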

Signed-off-by: Paul Durrant 
---
Cc: George Dunlap 
Cc: Wei Liu 
Cc: Jan Beulich 
Cc: Andrew Cooper 
Cc: Ian Jackson 
Cc: Konrad Rzeszutek Wilk 
Cc: Stefano Stabellini 
Cc: Tim Deegan 

v12:
 - Addressed more comments from Jan.
 - Dropped George's A-b and Wei's R-b because of material change.

v11:
 - Addressed more comments from Jan.

v10:
 - Addressed comments from Jan.

v8:
 - Re-base on new boilerplate.
 - Adjust function signature of hvm_get_ioreq_server_frame(), and test
   whether the bufioreq page is present.

v5:
 - Use get_ioreq_server() function rather than indexing array directly.
 - Add more explanation into comments to state that mapping guest frames
   and allocation of pages for ioreq servers are not simultaneously
   permitted.
 - Add a comment into asm/ioreq.h stating the meaning of the index
   value passed to hvm_get_ioreq_server_frame().
---
 xen/arch/x86/hvm/ioreq.c        | 156 
 xen/arch/x86/mm.c               |  22 ++
 xen/common/memory.c             |   5 ++
 xen/include/asm-x86/hvm/ioreq.h |   2 +
 xen/include/asm-x86/mm.h        |   5 ++
 xen/include/public/hvm/dm_op.h  |   4 ++
 xen/include/public/memory.h     |   9 +++
 7 files changed, 203 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f654e7796c..2c611fbffa 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -281,6 +294,70 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *currd = current->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    /*
+     * Allocated IOREQ server pages are assigned to the emulating
+     * domain, not the target domain. This is because the emulator is
+     * likely to be destroyed after the target domain has been torn
+     * down, and we must use MEMF_no_refcount otherwise page allocation
+     * could fail if the emulating domain has already reached its
+     * maximum allocation.
+     */
+    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
+    if ( !iorp->page )
+        return -ENOMEM;
+
+    if ( !get_page_type(iorp->page, PGT_writable_page) )
+    {
+        ASSERT_UNREACHABLE();
+        put_page(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    iorp->va = __map_domain_page_global(iorp->page);
+    if ( !iorp->va )
+    {
+        put_page_and_type(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    clear_page(iorp->va);
+    return 0;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( !iorp->page )
+        return;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_and_type(iorp->page);
+    iorp->page = NULL;
+}
+
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
@@ -484,6 +561,27 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)