Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new mappable resource type...
>>> On 16.10.17 at 16:17, wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 16 October 2017 15:07
>>
>> >>> On 12.10.17 at 18:25, wrote:
>> > ... XENMEM_resource_ioreq_server
>> >
>> > This patch adds support for a new resource type that can be mapped using
>> > the XENMEM_acquire_resource memory op.
>> >
>> > If an emulator makes use of this resource type then, instead of mapping
>> > gfns, the IOREQ server will allocate pages from the heap. These pages
>> > will never be present in the P2M of the guest at any point and so are
>> > not vulnerable to any direct attack by the guest. They are only ever
>> > accessible by Xen and any domain that has mapping privilege over the
>> > guest (which may or may not be limited to the domain running the
>> > emulator).
>> >
>> > NOTE: Use of the new resource type is not compatible with use of
>> >       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns
>> >       flag is set.
>> >
>> > Signed-off-by: Paul Durrant
>> > Acked-by: George Dunlap
>> > Reviewed-by: Wei Liu
>>
>> Can you have validly retained this?
>
> I didn't think the structure of this particular patch had changed that
> fundamentally.

The structure didn't change that much, yes, but the page type ref
acquiring which you now do alters behavior meaningfully.
>> > @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>> >      return rc;
>> >  }
>> >
>> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
>> > +                               unsigned long idx, mfn_t *mfn)
>> > +{
>> > +    struct hvm_ioreq_server *s;
>> > +    int rc;
>> > +
>> > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
>> > +
>> > +    if ( id == DEFAULT_IOSERVID )
>> > +        return -EOPNOTSUPP;
>> > +
>> > +    s = get_ioreq_server(d, id);
>> > +
>> > +    ASSERT(!IS_DEFAULT(s));
>> > +
>> > +    rc = hvm_ioreq_server_alloc_pages(s);
>> > +    if ( rc )
>> > +        goto out;
>> > +
>> > +    switch ( idx )
>> > +    {
>> > +    case XENMEM_resource_ioreq_server_frame_bufioreq:
>> > +        rc = -ENOENT;
>> > +        if ( !HANDLE_BUFIOREQ(s) )
>> > +            goto out;
>> > +
>> > +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>> > +        rc = 0;
>> > +        break;
>>
>> How about
>>
>>     if ( HANDLE_BUFIOREQ(s) )
>>         *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>>     else
>>         rc = -ENOENT;
>>     break;
>>
>
> Looking at the overall structure I prefer it as it is. If I could have got
> rid of the out label by doing this then it might have been worth the change.

Okay, you're the maintainer. Just to clarify - what I find particularly odd
is the setting of rc to zero above, while the other case block relies on it
already being zero when entering the switch().

>> > @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
>> >       * is optional if nr_frames is 0.
>> >       */
>> >      uint64_aligned_t frame;
>> > +
>> > +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
>> > +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))
>>
>> I don't see what you need the trailing underscore for. This is
>> normally only needed on local variables defined in (gcc extended)
>> macros, which we generally can't use in a public header anyway.
>
> I thought it was generally desirable to attempt to distinguish macro
> arguments from variables to avoid name clashes. What do you prefer I
> should do in a public header?
There are various cases to be considered here, but in the case here there
is no risk of a name clash at all: Regardless of the name of the parameter,
any instance of it will be expanded exactly once. Even if the expansion
matches the parameter name exactly, no issue will arise. There are
certainly forms of macros where some care is needed in how to name the
parameters. Trailing underscores to disambiguate names, however, should -
as said - rarely if ever be needed for anything other than local variables
inside the macro body (because _then_ there indeed can be name conflicts
with outer-scope variables).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new mappable resource type...
> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 16 October 2017 15:07
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: Andrew Cooper <andrew.coop...@citrix.com>; Ian Jackson
> <ian.jack...@citrix.com>; Stefano Stabellini <sstabell...@kernel.org>;
> xen-de...@lists.xenproject.org; Konrad Rzeszutek Wilk
> <konrad.w...@oracle.com>; Tim (Xen.org) <t...@xen.org>
> Subject: Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new
> mappable resource type...
>
> >>> On 12.10.17 at 18:25, <paul.durr...@citrix.com> wrote:
> > ... XENMEM_resource_ioreq_server
> >
> > This patch adds support for a new resource type that can be mapped using
> > the XENMEM_acquire_resource memory op.
> >
> > If an emulator makes use of this resource type then, instead of mapping
> > gfns, the IOREQ server will allocate pages from the heap. These pages
> > will never be present in the P2M of the guest at any point and so are
> > not vulnerable to any direct attack by the guest. They are only ever
> > accessible by Xen and any domain that has mapping privilege over the
> > guest (which may or may not be limited to the domain running the
> > emulator).
> >
> > NOTE: Use of the new resource type is not compatible with use of
> >       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns
> >       flag is set.
> >
> > Signed-off-by: Paul Durrant <paul.durr...@citrix.com>
> > Acked-by: George Dunlap <george.dun...@eu.citrix.com>
> > Reviewed-by: Wei Liu <wei.l...@citrix.com>
>
> Can you have validly retained this?

I didn't think the structure of this particular patch had changed that
fundamentally.

> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> >      return rc;
> >  }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > +    struct domain *currd = current->domain;
> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > +    if ( iorp->page )
> > +    {
> > +        /*
> > +         * If a guest frame has already been mapped (which may happen
> > +         * on demand if hvm_get_ioreq_server_info() is called), then
> > +         * allocating a page is not permitted.
> > +         */
> > +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > +            return -EPERM;
> > +
> > +        return 0;
> > +    }
> > +
> > +    /*
> > +     * Allocated IOREQ server pages are assigned to the emulating
> > +     * domain, not the target domain. This is because the emulator is
> > +     * likely to be destroyed after the target domain has been torn
> > +     * down, and we must use MEMF_no_refcount otherwise page allocation
> > +     * could fail if the emulating domain has already reached its
> > +     * maximum allocation.
> > +     */
> > +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> > +    if ( !iorp->page )
> > +        return -ENOMEM;
> > +
> > +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> > +    {
>
> ASSERT_UNREACHABLE() ?

Ok.

> > @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
> >      return rc;
> >  }
> >
> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> > +                               unsigned long idx, mfn_t *mfn)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > +
> > +    if ( id == DEFAULT_IOSERVID )
> > +        return -EOPNOTSUPP;
> > +
> > +    s = get_ioreq_server(d, id);
> > +
> > +    ASSERT(!IS_DEFAULT(s));
> > +
> > +    rc = hvm_ioreq_server_alloc_pages(s);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    switch ( idx )
> > +    {
> > +    case XENMEM_resource_ioreq_server_frame_bufioreq:
> > +        rc = -ENOENT;
> > +        if ( !HANDLE_BUFIOREQ(s) )
> > +            goto out;
> > +
> > +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> > +        rc = 0;
> > +        break;
>
> How about
>
>     if ( HANDLE_BUFIOREQ(s) )
>         *mfn = _mfn(page_to_mfn(s->bufioreq.page));
>     else
>         rc = -ENOENT;
>     break;
>

Looking at the overall structure I prefer it as it is. If I could have got
rid of the out label by doing this then it might have been worth the change.
Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new mappable resource type...
>>> On 12.10.17 at 18:25, wrote:
> ... XENMEM_resource_ioreq_server
>
> This patch adds support for a new resource type that can be mapped using
> the XENMEM_acquire_resource memory op.
>
> If an emulator makes use of this resource type then, instead of mapping
> gfns, the IOREQ server will allocate pages from the heap. These pages
> will never be present in the P2M of the guest at any point and so are
> not vulnerable to any direct attack by the guest. They are only ever
> accessible by Xen and any domain that has mapping privilege over the
> guest (which may or may not be limited to the domain running the emulator).
>
> NOTE: Use of the new resource type is not compatible with use of
>       XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
>       set.
>
> Signed-off-by: Paul Durrant
> Acked-by: George Dunlap
> Reviewed-by: Wei Liu

Can you have validly retained this?

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
>
> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( iorp->page )
> +    {
> +        /*
> +         * If a guest frame has already been mapped (which may happen
> +         * on demand if hvm_get_ioreq_server_info() is called), then
> +         * allocating a page is not permitted.
> +         */
> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +            return -EPERM;
> +
> +        return 0;
> +    }
> +
> +    /*
> +     * Allocated IOREQ server pages are assigned to the emulating
> +     * domain, not the target domain. This is because the emulator is
> +     * likely to be destroyed after the target domain has been torn
> +     * down, and we must use MEMF_no_refcount otherwise page allocation
> +     * could fail if the emulating domain has already reached its
> +     * maximum allocation.
> +     */
> +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> +    if ( !iorp->page )
> +        return -ENOMEM;
> +
> +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> +    {

ASSERT_UNREACHABLE() ?

> @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>      return rc;
>  }
>
> +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> +                               unsigned long idx, mfn_t *mfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +
> +    if ( id == DEFAULT_IOSERVID )
> +        return -EOPNOTSUPP;
> +
> +    s = get_ioreq_server(d, id);
> +
> +    ASSERT(!IS_DEFAULT(s));
> +
> +    rc = hvm_ioreq_server_alloc_pages(s);
> +    if ( rc )
> +        goto out;
> +
> +    switch ( idx )
> +    {
> +    case XENMEM_resource_ioreq_server_frame_bufioreq:
> +        rc = -ENOENT;
> +        if ( !HANDLE_BUFIOREQ(s) )
> +            goto out;
> +
> +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> +        rc = 0;
> +        break;

How about

    if ( HANDLE_BUFIOREQ(s) )
        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
    else
        rc = -ENOENT;
    break;

?

> +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> +                                unsigned long frame,
> +                                unsigned long nr_frames,
> +                                unsigned long mfn_list[])
> +{
> +    unsigned int i;

This now doesn't match up with the upper bound's type.

> @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
>       * is optional if nr_frames is 0.
>       */
>      uint64_aligned_t frame;
> +
> +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))

I don't see what you need the trailing underscore for. This is
normally only needed on local variables defined in (gcc extended)
macros, which we generally can't use in a public header anyway.

Jan
[Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new mappable resource type...
... XENMEM_resource_ioreq_server

This patch adds support for a new resource type that can be mapped using
the XENMEM_acquire_resource memory op.

If an emulator makes use of this resource type then, instead of mapping
gfns, the IOREQ server will allocate pages from the heap. These pages
will never be present in the P2M of the guest at any point and so are
not vulnerable to any direct attack by the guest. They are only ever
accessible by Xen and any domain that has mapping privilege over the
guest (which may or may not be limited to the domain running the emulator).

NOTE: Use of the new resource type is not compatible with use of
      XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
      set.

Signed-off-by: Paul Durrant
Acked-by: George Dunlap
Reviewed-by: Wei Liu
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan

v11:
 - Addressed more comments from Jan.

v10:
 - Addressed comments from Jan.

v8:
 - Re-base on new boilerplate.
 - Adjust function signature of hvm_get_ioreq_server_frame(), and test
   whether the bufioreq page is present.

v5:
 - Use get_ioreq_server() function rather than indexing array directly.
 - Add more explanation into comments to state that mapping guest frames
   and allocation of pages for ioreq servers are not simultaneously
   permitted.
 - Add a comment into asm/ioreq.h stating the meaning of the index value
   passed to hvm_get_ioreq_server_frame().
---
 xen/arch/x86/hvm/ioreq.c        | 154 ++++++++++++++++++++++++++++++++
 xen/arch/x86/mm.c               |  22 +++++
 xen/common/memory.c             |   5 ++
 xen/include/asm-x86/hvm/ioreq.h |   2 +
 xen/include/asm-x86/mm.h        |   5 ++
 xen/include/public/hvm/dm_op.h  |   4 ++
 xen/include/public/memory.h     |   9 +++
 7 files changed, 201 insertions(+)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index f654e7796c..ff41312455 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -259,6 +259,19 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
     if ( d->is_dying )
         return -EINVAL;
 
@@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *currd = current->domain;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    /*
+     * Allocated IOREQ server pages are assigned to the emulating
+     * domain, not the target domain. This is because the emulator is
+     * likely to be destroyed after the target domain has been torn
+     * down, and we must use MEMF_no_refcount otherwise page allocation
+     * could fail if the emulating domain has already reached its
+     * maximum allocation.
+     */
+    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
+    if ( !iorp->page )
+        return -ENOMEM;
+
+    if ( !get_page_type(iorp->page, PGT_writable_page) )
+    {
+        put_page(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    iorp->va = __map_domain_page_global(iorp->page);
+    if ( !iorp->va )
+    {
+        put_page_and_type(iorp->page);
+        iorp->page = NULL;
+        return -ENOMEM;
+    }
+
+    clear_page(iorp->va);
+    return 0;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( !iorp->page )
+        return;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_and_type(iorp->page);
+    iorp->page = NULL;
+}
+
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
     const struct hvm_ioreq_server *s;
@@ -484,6 +560,27 @@ static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
     hvm_unmap_ioreq_gfn(s, false);
 }
 
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc;