Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
>>> On 19.03.18 at 16:36, wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 19 March 2018 15:14
>>
>> >>> On 12.02.18 at 11:47, wrote:
>> > --- a/xen/arch/x86/iommu_op.c
>> > +++ b/xen/arch/x86/iommu_op.c
>> > @@ -22,6 +22,58 @@
>> >  #include
>> >  #include
>> >  #include
>> > +#include
>> > +
>> > +struct get_rdm_ctxt {
>> > +    unsigned int max_entries;
>> > +    unsigned int nr_entries;
>> > +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
>> > +};
>> > +
>> > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
>> > +{
>> > +    struct get_rdm_ctxt *ctxt = arg;
>> > +
>> > +    if ( ctxt->nr_entries < ctxt->max_entries )
>> > +    {
>> > +        xen_iommu_reserved_region_t region = {
>> > +            .start_bfn = start,
>> > +            .nr_frames = nr,
>> > +        };
>> > +
>> > +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries,
>> > +                                  &region, 1) )
>> > +            return -EFAULT;
>> > +    }
>> > +
>> > +    ctxt->nr_entries++;
>> > +
>> > +    return 1;
>> > +}
>> > +
>> > +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
>> > +{
>> > +    struct get_rdm_ctxt ctxt = {
>> > +        .max_entries = op->nr_entries,
>> > +        .regions = op->regions,
>> > +    };
>> > +    int rc;
>> > +
>> > +    if (op->pad != 0)
>> > +        return -EINVAL;
>> > +
>> > +    rc = iommu_get_reserved_device_memory(get_rdm, &ctxt);
>> > +    if ( rc )
>> > +        return rc;
>> > +
>> > +    /* Pass back the actual number of reserved regions */
>> > +    op->nr_entries = ctxt.nr_entries;
>> > +
>> > +    if ( ctxt.nr_entries > ctxt.max_entries )
>> > +        return -ENOBUFS;
>> > +
>> > +    return 0;
>> > +}
>>
>> One more note here: As it looks we can only hope there won't be
>> too many RMRRs, as the number of entries that can be requested
>> here is basically unbounded.
>>
> The caller has to be able to allocate a buffer large enough but, yes, there
> is no explicit limit. I'll add pre-empt checks.

Thing is - a preempt check probably won't be easy with the way
iommu_get_reserved_device_memory() works.
Jan ___ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
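The behaviour being discussed above - get_rdm() keeps incrementing nr_entries even after the caller's buffer is full, so the hypercall can report the total required size - can be exercised outside Xen with stub types. Everything below (struct region, enumerate_rdm(), the three sample ranges) is an invented stand-in for illustration, not Xen code:

```c
#include <stddef.h>

/* Stand-ins for xen_iommu_reserved_region_t and the guest handle. */
struct region { unsigned long start_bfn; unsigned int nr_frames; };

struct get_rdm_ctxt {
    unsigned int max_entries;   /* capacity of the caller's buffer */
    unsigned int nr_entries;    /* regions seen so far */
    struct region *regions;     /* output buffer */
};

/* Mirrors get_rdm(): copy only while there is room, but keep counting. */
static int get_rdm(unsigned long start, unsigned long nr, void *arg)
{
    struct get_rdm_ctxt *ctxt = arg;

    if ( ctxt->nr_entries < ctxt->max_entries )
    {
        struct region r = { .start_bfn = start,
                            .nr_frames = (unsigned int)nr };
        ctxt->regions[ctxt->nr_entries] = r;
    }

    ctxt->nr_entries++;
    return 1;
}

/* Stub enumerator standing in for iommu_get_reserved_device_memory(). */
static int enumerate_rdm(int (*cb)(unsigned long, unsigned long, void *),
                         void *arg)
{
    static const unsigned long starts[] = { 0xad000, 0xbf000, 0xe8000 };
    for ( size_t i = 0; i < 3; i++ )
        if ( cb(starts[i], 1, arg) < 0 )
            return -1;
    return 0;
}

/* Returns total region count; *truncated is the -ENOBUFS condition. */
unsigned int query_reserved(struct region *buf, unsigned int max,
                            int *truncated)
{
    struct get_rdm_ctxt ctxt = { .max_entries = max, .regions = buf };

    enumerate_rdm(get_rdm, &ctxt);
    *truncated = ctxt.nr_entries > ctxt.max_entries;
    return ctxt.nr_entries;
}
```

A caller that sees `truncated` set knows the returned count is the size to reallocate to, which is exactly the protocol iommuop_query_reserved() implements with -ENOBUFS.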
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
>>> On 19.03.18 at 16:13, wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 19 March 2018 14:10
>>
>> >>> On 12.02.18 at 11:47, wrote:
>> > +            for ( j = 0; \
>> > +                  j < min_t(unsigned int, (_d_)->nr_entries, \
>> > +                            *nr_entries); \
>>
>> Do you really need min_t() here (rather than the more safe min())?
>>
> I've been asked to preferentially use min_t() before (although I don't think
> it was by you) so I'm not sure what the expectation is. I'm happy to use
> min().

I'd be curious who that was and why. The type check in min() makes it
preferable to use whenever the two types are (supposed to be)
compatible.

>> > +struct xen_iommu_reserved_region {
>> > +    xen_bfn_t start_bfn;
>> > +    unsigned int nr_frames;
>> > +    unsigned int pad;
>>
>> Fixed width types (i.e. uint32_t) in the public interface please.
>> Also, this not being the main MMU, page granularity needs to be
>> specified somehow (also for the conversion between xen_bfn_t
>> and a bus address).
>>
> Do you think it would be better to have a separate query call to get the
> IOMMU page size back, or are you anticipating heterogeneous ranges (in which
> case I'm going to need to adjust the map and unmap functions to allow for
> that)?

Fundamentally (on x86) I can't see why we couldn't eventually permit
2M and 1G mappings to be established this way. For RMRRs I don't know
how large they can get. I think I've seen larger than single page ones
for graphics devices, but I don't recall how big they were, or whether
they were suitably aligned to allow large page mappings. But we should
also have ARM (and ideally make this abstract enough to even fit other
architectures) in mind. Also remember that someone already has a
series somewhere to extend the iommu_{,un}map_page() functions with an
order parameter.

Jan
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> -Original Message- > From: Jan Beulich [mailto:jbeul...@suse.com] > Sent: 19 March 2018 15:14 > To: Paul Durrant > Cc: Andrew Cooper ; Wei Liu > ; George Dunlap ; Ian > Jackson ; Stefano Stabellini > ; xen-devel@lists.xenproject.org; Konrad Rzeszutek > Wilk ; Tim (Xen.org) > Subject: Re: [PATCH 6/7] x86: add iommu_op to query reserved ranges > > >>> On 12.02.18 at 11:47, wrote: > > --- a/xen/arch/x86/iommu_op.c > > +++ b/xen/arch/x86/iommu_op.c > > @@ -22,6 +22,58 @@ > > #include > > #include > > #include > > +#include > > + > > +struct get_rdm_ctxt { > > +unsigned int max_entries; > > +unsigned int nr_entries; > > +XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions; > > +}; > > + > > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg) > > +{ > > +struct get_rdm_ctxt *ctxt = arg; > > + > > +if ( ctxt->nr_entries < ctxt->max_entries ) > > +{ > > +xen_iommu_reserved_region_t region = { > > +.start_bfn = start, > > +.nr_frames = nr, > > +}; > > + > > +if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, ®ion, > > + 1) ) > > +return -EFAULT; > > +} > > + > > +ctxt->nr_entries++; > > + > > +return 1; > > +} > > + > > +static int iommuop_query_reserved(struct > xen_iommu_op_query_reserved *op) > > +{ > > +struct get_rdm_ctxt ctxt = { > > +.max_entries = op->nr_entries, > > +.regions = op->regions, > > +}; > > +int rc; > > + > > +if (op->pad != 0) > > +return -EINVAL; > > + > > +rc = iommu_get_reserved_device_memory(get_rdm, &ctxt); > > +if ( rc ) > > +return rc; > > + > > +/* Pass back the actual number of reserved regions */ > > +op->nr_entries = ctxt.nr_entries; > > + > > +if ( ctxt.nr_entries > ctxt.max_entries ) > > +return -ENOBUFS; > > + > > +return 0; > > +} > > One more note here: As it looks we can only hope there won't be > too many RMRRs, as the number of entries that can be requested > here is basically unbounded. > The caller has to be able to allocate a buffer large enough but, yes there is no explicit limit. 
I'll add pre-empt checks. Paul > Jan ___ Xen-devel mailing list Xen-devel@lists.xenproject.org https://lists.xenproject.org/mailman/listinfo/xen-devel
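The pre-empt checks Paul mentions typically follow Xen's continuation pattern: do a bounded amount of work, test hypercall_preempt_check(), and return early with a resume point if it fires so the call can be re-issued. A stand-alone sketch of that shape (the preempt predicate and the tally handler here are stubs, not the real Xen API):

```c
enum { ERESTART = -1 };

/* Stub predicate standing in for hypercall_preempt_check(). */
static int fake_preempt_after;
static int preempt_check(void) { return fake_preempt_after-- <= 0; }

/* Records how far processing has progressed, for demonstration. */
static unsigned int handled;
static void tally(unsigned int i) { handled = i + 1; }

/*
 * Process entries [*start, nr); returns 0 when done, or ERESTART with
 * *start updated so the caller can re-issue the call as a continuation.
 */
int process_entries(unsigned int *start, unsigned int nr,
                    void (*handle)(unsigned int))
{
    for ( unsigned int i = *start; i < nr; i++ )
    {
        handle(i);
        if ( i + 1 < nr && preempt_check() )
        {
            *start = i + 1; /* resume point for the continuation */
            return ERESTART;
        }
    }
    return 0;
}
```

Jan's follow-up objection (in the later message above) is that iommu_get_reserved_device_memory() drives the iteration internally via a callback, so there is no obvious `*start` to thread through it - which is what makes this pattern awkward to retrofit here.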
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> -Original Message- > From: Jan Beulich [mailto:jbeul...@suse.com] > Sent: 19 March 2018 14:10 > To: Paul Durrant > Cc: Andrew Cooper ; Wei Liu > ; George Dunlap ; Ian > Jackson ; Stefano Stabellini > ; xen-devel@lists.xenproject.org; Konrad Rzeszutek > Wilk ; Tim (Xen.org) > Subject: Re: [PATCH 6/7] x86: add iommu_op to query reserved ranges > > >>> On 12.02.18 at 11:47, wrote: > > --- a/xen/arch/x86/iommu_op.c > > +++ b/xen/arch/x86/iommu_op.c > > @@ -22,6 +22,58 @@ > > #include > > #include > > #include > > +#include > > + > > +struct get_rdm_ctxt { > > +unsigned int max_entries; > > +unsigned int nr_entries; > > +XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions; > > +}; > > + > > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg) > > uint32_t please in new code. Ok. > > > +static int iommuop_query_reserved(struct > xen_iommu_op_query_reserved *op) > > +{ > > +struct get_rdm_ctxt ctxt = { > > +.max_entries = op->nr_entries, > > +.regions = op->regions, > > +}; > > +int rc; > > + > > +if (op->pad != 0) > > Missing blanks. Perhaps also drop the " != 0". > Indeed. > > +return -EINVAL; > > + > > +rc = iommu_get_reserved_device_memory(get_rdm, &ctxt); > > +if ( rc ) > > +return rc; > > + > > +/* Pass back the actual number of reserved regions */ > > +op->nr_entries = ctxt.nr_entries; > > + > > +if ( ctxt.nr_entries > ctxt.max_entries ) > > +return -ENOBUFS; > > Perhaps unless the handle is null? > Hmm. I'll re-work my Linux code and try that. > > @@ -132,12 +190,75 @@ int > compat_iommu_op(XEN_GUEST_HANDLE_PARAM(compat_iommu_op_t) > uops, > > break; > > } > > > > +/* > > + * The xlat magic doesn't quite know how to handle the union so > > + * we need to fix things up here. > > + */ > > That's quite sad, as this is the second instance in a relatively short > period of time. We really should see whether the translation code > can't be adjusted suitably. 
> > > +#define XLAT_iommu_op_u_query_reserved > XEN_IOMMUOP_query_reserved > > +u = cmp.op; > > + > > +#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \ > > +do \ > > +{ \ > > +if ( !compat_handle_is_null((_s_)->regions) ) \ > > In the context of the earlier missing null handle check I find this > a little surprising (but correct). > > > +{ \ > > +unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \ > > +xen_iommu_reserved_region_t *regions = \ > > +(void *)(nr_entries + 1); \ > > +\ > > +if ( sizeof(*nr_entries) + \ > > + (sizeof(*regions) * (_s_)->nr_entries) > \ > > + COMPAT_ARG_XLAT_SIZE ) \ > > +return -E2BIG; \ > > +\ > > +*nr_entries = (_s_)->nr_entries; \ > > +set_xen_guest_handle((_d_)->regions, regions); \ > > I don't understand why nr_entries has to be a pointer into the > translation area. Can't this be a simple local variable? > Probably. On the face of it it looks a stack variable should be fine. I'll check. > > +} \ > > +else \ > > +set_xen_guest_handle((_d_)->regions, NULL); \ > > +} while (false) > > + > > XLAT_iommu_op(&nat, &cmp); > > > > +#undef XLAT_iommu_op_query_reserved_HNDL_regions > > + > > iommu_op(&nat); > > > > +status = nat.status; > > + > > +#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \ > > +do \ > > +{ \ > > +if ( !compat_handle_is_null((_d_)->regions) ) \ > > +{ \ > > +unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \ > > +xen_iommu_reserved_region_t *regions = \ > > +(void *)(nr_entries + 1); \ > > +unsigned int j; \ > > Without any i in an outer scope, using j is a little unusual (but of > course okay). > Oh, that may have been a hangover from a previous incarnation of the code. I'll change it if there's no clash. > > +\ > > +for ( j = 0; \ > > + j < min_t(unsigned int, (_d_)->nr_entries, \ > > +*nr_entries); \ > > Do you really need min_t() here (rather than the more safe min())? 
> I've been asked to preferentially use min_t() before (although I don't think it was by you) so I'm not sure what the expectation is. I'm happy to use min(). > > + j++ ) \ > > +{ \ > > +compat_iommu_reserved_region_t region; \ > > +\ > > +XLAT_iommu_reserved_region(®ion, ®ions[j]); \ > > +\ > > +if ( __copy_to_compat_offset((_d_)->regions, j, \ > > +
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
>>> On 12.02.18 at 11:47, wrote:
> --- a/xen/arch/x86/iommu_op.c
> +++ b/xen/arch/x86/iommu_op.c
> @@ -22,6 +22,58 @@
>  #include
>  #include
>  #include
> +#include
> +
> +struct get_rdm_ctxt {
> +    unsigned int max_entries;
> +    unsigned int nr_entries;
> +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
> +};
> +
> +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
> +{
> +    struct get_rdm_ctxt *ctxt = arg;
> +
> +    if ( ctxt->nr_entries < ctxt->max_entries )
> +    {
> +        xen_iommu_reserved_region_t region = {
> +            .start_bfn = start,
> +            .nr_frames = nr,
> +        };
> +
> +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
> +                                  1) )
> +            return -EFAULT;
> +    }
> +
> +    ctxt->nr_entries++;
> +
> +    return 1;
> +}
> +
> +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
> +{
> +    struct get_rdm_ctxt ctxt = {
> +        .max_entries = op->nr_entries,
> +        .regions = op->regions,
> +    };
> +    int rc;
> +
> +    if (op->pad != 0)
> +        return -EINVAL;
> +
> +    rc = iommu_get_reserved_device_memory(get_rdm, &ctxt);
> +    if ( rc )
> +        return rc;
> +
> +    /* Pass back the actual number of reserved regions */
> +    op->nr_entries = ctxt.nr_entries;
> +
> +    if ( ctxt.nr_entries > ctxt.max_entries )
> +        return -ENOBUFS;
> +
> +    return 0;
> +}

One more note here: As it looks we can only hope there won't be
too many RMRRs, as the number of entries that can be requested
here is basically unbounded.

Jan
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
>>> On 12.02.18 at 11:47, wrote:
> --- a/xen/arch/x86/iommu_op.c
> +++ b/xen/arch/x86/iommu_op.c
> @@ -22,6 +22,58 @@
>  #include
>  #include
>  #include
> +#include
> +
> +struct get_rdm_ctxt {
> +    unsigned int max_entries;
> +    unsigned int nr_entries;
> +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
> +};
> +
> +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)

uint32_t please in new code.

> +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
> +{
> +    struct get_rdm_ctxt ctxt = {
> +        .max_entries = op->nr_entries,
> +        .regions = op->regions,
> +    };
> +    int rc;
> +
> +    if (op->pad != 0)

Missing blanks. Perhaps also drop the " != 0".

> +        return -EINVAL;
> +
> +    rc = iommu_get_reserved_device_memory(get_rdm, &ctxt);
> +    if ( rc )
> +        return rc;
> +
> +    /* Pass back the actual number of reserved regions */
> +    op->nr_entries = ctxt.nr_entries;
> +
> +    if ( ctxt.nr_entries > ctxt.max_entries )
> +        return -ENOBUFS;

Perhaps unless the handle is null?

> @@ -132,12 +190,75 @@ int compat_iommu_op(XEN_GUEST_HANDLE_PARAM(compat_iommu_op_t) uops,
>         break;
>     }
>
> +    /*
> +     * The xlat magic doesn't quite know how to handle the union so
> +     * we need to fix things up here.
> +     */

That's quite sad, as this is the second instance in a relatively short
period of time. We really should see whether the translation code
can't be adjusted suitably.

> +#define XLAT_iommu_op_u_query_reserved XEN_IOMMUOP_query_reserved
> +    u = cmp.op;
> +
> +#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \
> +    do \
> +    { \
> +        if ( !compat_handle_is_null((_s_)->regions) ) \

In the context of the earlier missing null handle check I find this
a little surprising (but correct).

> +        { \
> +            unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \
> +            xen_iommu_reserved_region_t *regions = \
> +                (void *)(nr_entries + 1); \
> +\
> +            if ( sizeof(*nr_entries) + \
> +                 (sizeof(*regions) * (_s_)->nr_entries) > \
> +                 COMPAT_ARG_XLAT_SIZE ) \
> +                return -E2BIG; \
> +\
> +            *nr_entries = (_s_)->nr_entries; \
> +            set_xen_guest_handle((_d_)->regions, regions); \

I don't understand why nr_entries has to be a pointer into the
translation area. Can't this be a simple local variable?

> +        } \
> +        else \
> +            set_xen_guest_handle((_d_)->regions, NULL); \
> +    } while (false)
> +
>     XLAT_iommu_op(&nat, &cmp);
>
> +#undef XLAT_iommu_op_query_reserved_HNDL_regions
> +
>     iommu_op(&nat);
>
> +    status = nat.status;
> +
> +#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \
> +    do \
> +    { \
> +        if ( !compat_handle_is_null((_d_)->regions) ) \
> +        { \
> +            unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \
> +            xen_iommu_reserved_region_t *regions = \
> +                (void *)(nr_entries + 1); \
> +            unsigned int j; \

Without any i in an outer scope, using j is a little unusual (but of
course okay).

> +\
> +            for ( j = 0; \
> +                  j < min_t(unsigned int, (_d_)->nr_entries, \
> +                            *nr_entries); \

Do you really need min_t() here (rather than the more safe min())?

> +                  j++ ) \
> +            { \
> +                compat_iommu_reserved_region_t region; \
> +\
> +                XLAT_iommu_reserved_region(&region, &regions[j]); \
> +\
> +                if ( __copy_to_compat_offset((_d_)->regions, j, \
> +                                             &region, 1) ) \

If you use the __-prefixed variant here, where's the address validity
check?

> --- a/xen/include/public/iommu_op.h
> +++ b/xen/include/public/iommu_op.h
> @@ -25,11 +25,46 @@
>
>  #include "xen.h"
>
> +typedef unsigned long xen_bfn_t;

Is this suitable for e.g. ARM, who don't use unsigned long for e.g.
xen_pfn_t? Is there in fact any reason not to re-use the generic
xen_pfn_t here (also see your get_rdm() above)? Otoh this is an
opportunity to not widen the problem of limited addressability in
32-bit guests - the type could be 64-bit wide across the board.

> +struct xen_iommu_reserved_region {
> +    xen_bfn_t start_bfn;
> +    unsigned int nr_frames;
> +    unsigned int pad;

Fixed width types (i.e. uint32_t) in the public interface please.
Also, this not being the main MMU, page granularity needs to be
specified somehow (also for the conversion between xen_bfn_t
and a bus address).

> +struct xen_iommu_op_query_reserved {
> +    /*
> +     * IN/OUT - On entries thi
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
>>> On 23.02.18 at 06:23, wrote:
>> From: Paul Durrant [mailto:paul.durr...@citrix.com]
>> Sent: Tuesday, February 13, 2018 5:25 PM
>> > From: Tian, Kevin [mailto:kevin.t...@intel.com]
>> > Sent: 13 February 2018 06:52
>> > > From: Paul Durrant
>> > > Sent: Monday, February 12, 2018 6:47 PM
>> > > +    }
>> > > +
>> > > +    ctxt->nr_entries++;
>> > > +
>> > > +    return 1;
>> > > +}
>> > > +
>> > > +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
>> >
>> > I didn't get why we cannot reuse existing XENMEM_reserved_
>> > device_memory_map?
>> >
>> This hypercall is not intended to be tools-only. That one is, unless I
>> misread the #ifdefs.
>>
> I didn't realize it. Curious how Xen enforces such tools-only policy? What
> would happen if calling it from Dom0 kernel? I just felt not good of
> creating a new interface just for duplicated purpose...

It's not enforced for Dom0; Dom0 (including its kernel) is trusted.
How would Xen know whether a request came from user land (through
the privcmd driver) or directly from some kernel component?

Jan
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Tuesday, February 13, 2018 5:25 PM
>
> > From: Tian, Kevin [mailto:kevin.t...@intel.com]
> > Sent: 13 February 2018 06:52
> >
> > > From: Paul Durrant
> > > Sent: Monday, February 12, 2018 6:47 PM
> > >
> > > Certain areas of memory, such as RMRRs, must be mapped 1:1
> > > (i.e. BFN == MFN) through the IOMMU.
> > >
> > > This patch adds an iommu_op to allow these ranges to be queried.
> > >
> > > Signed-off-by: Paul Durrant
> > > ---
> > > diff --git a/xen/arch/x86/iommu_op.c b/xen/arch/x86/iommu_op.c
> > > index edd8a384b3..ac81b98b7a 100644
> > > --- a/xen/arch/x86/iommu_op.c
> > > +++ b/xen/arch/x86/iommu_op.c
> > > @@ -22,6 +22,58 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include
> > > +
> > > +struct get_rdm_ctxt {
> > > +    unsigned int max_entries;
> > > +    unsigned int nr_entries;
> > > +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
> > > +};
> > > +
> > > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
> > > +{
> > > +    struct get_rdm_ctxt *ctxt = arg;
> > > +
> > > +    if ( ctxt->nr_entries < ctxt->max_entries )
> > > +    {
> > > +        xen_iommu_reserved_region_t region = {
> > > +            .start_bfn = start,
> > > +            .nr_frames = nr,
> > > +        };
> > > +
> > > +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries,
> > > +                                  &region, 1) )
> > > +            return -EFAULT;
> >
> > RMRR entries are device specific. it's why a 'id' (i.e. sbdf) field
> > is introduced for such check.
>
> What I want here is the union of all RMRRs for all devices in the domain. I
> believe that is what the code will currently query, but I could be wrong.

RMRR is per-device. I'm not sure why we want to restrict it for
every device if not related.

> > > +    }
> > > +
> > > +    ctxt->nr_entries++;
> > > +
> > > +    return 1;
> > > +}
> > > +
> > > +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
> >
> > I didn't get why we cannot reuse existing XENMEM_reserved_
> > device_memory_map?
> >
> This hypercall is not intended to be tools-only. That one is, unless I
> misread the #ifdefs.
>
I didn't realize it. Curious how Xen enforces such tools-only policy?
What would happen if calling it from Dom0 kernel? I just felt not good
of creating a new interface just for duplicated purpose...

Thanks
Kevin
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.t...@intel.com]
> Sent: 13 February 2018 06:52
>
> > From: Paul Durrant
> > Sent: Monday, February 12, 2018 6:47 PM
> >
> > +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
> > +{
> > +    struct get_rdm_ctxt *ctxt = arg;
> > +
> > +    if ( ctxt->nr_entries < ctxt->max_entries )
> > +    {
> > +        xen_iommu_reserved_region_t region = {
> > +            .start_bfn = start,
> > +            .nr_frames = nr,
> > +        };
> > +
> > +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
> > +                                  1) )
> > +            return -EFAULT;
>
> RMRR entries are device specific. it's why a 'id' (i.e. sbdf) field
> is introduced for such check.

What I want here is the union of all RMRRs for all devices in the domain.
I believe that is what the code will currently query, but I could be wrong.

> > +    }
> > +
> > +    ctxt->nr_entries++;
> > +
> > +    return 1;
> > +}
> > +
> > +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
>
> I didn't get why we cannot reuse existing XENMEM_reserved_
> device_memory_map?
>
This hypercall is not intended to be tools-only. That one is, unless I
misread the #ifdefs.

Paul
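Kevin's objection is that the enumeration callback receives a per-device `id` (an sbdf), so a query could in principle return only the regions belonging to the domain's assigned devices rather than the union of all of them. A stand-alone sketch of that filtering (the table and sbdf values are invented for illustration; this is not how Xen stores RMRRs):

```c
#include <stddef.h>
#include <stdint.h>

struct rmrr { uint64_t start_bfn; uint32_t nr_frames; uint32_t sbdf; };

/* Invented sample table: two devices, three reserved ranges. */
static const struct rmrr table[] = {
    { 0xad000, 16, 0x0010 },   /* hypothetical device A */
    { 0xbf000,  4, 0x0010 },
    { 0xe8000,  8, 0x00f8 },   /* hypothetical device B */
};

/*
 * Copy into 'out' only the regions whose sbdf matches one of the
 * domain's devices; returns the number of matches (which may exceed
 * 'max', mirroring the -ENOBUFS protocol).
 */
size_t regions_for_devices(const uint32_t *devs, size_t ndevs,
                           struct rmrr *out, size_t max)
{
    size_t n = 0;

    for ( size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++ )
        for ( size_t d = 0; d < ndevs; d++ )
            if ( table[i].sbdf == devs[d] )
            {
                if ( n < max )
                    out[n] = table[i];
                n++;
                break;   /* each region counted at most once */
            }
    return n;
}
```

Paul's position, by contrast, is that the union across all of the domain's devices is the useful answer for his use case, which is what ignoring `id` in get_rdm() gives him.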
Re: [Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
> From: Paul Durrant
> Sent: Monday, February 12, 2018 6:47 PM
>
> Certain areas of memory, such as RMRRs, must be mapped 1:1
> (i.e. BFN == MFN) through the IOMMU.
>
> This patch adds an iommu_op to allow these ranges to be queried.
>
> Signed-off-by: Paul Durrant
> ---
> Cc: Jan Beulich
> Cc: Andrew Cooper
> Cc: George Dunlap
> Cc: Ian Jackson
> Cc: Konrad Rzeszutek Wilk
> Cc: Stefano Stabellini
> Cc: Tim Deegan
> Cc: Wei Liu
> ---
>  xen/arch/x86/iommu_op.c       | 121 ++
>  xen/include/public/iommu_op.h |  35 
>  xen/include/xlat.lst          |   2 +
>  3 files changed, 158 insertions(+)
>
> diff --git a/xen/arch/x86/iommu_op.c b/xen/arch/x86/iommu_op.c
> index edd8a384b3..ac81b98b7a 100644
> --- a/xen/arch/x86/iommu_op.c
> +++ b/xen/arch/x86/iommu_op.c
> @@ -22,6 +22,58 @@
>  #include
>  #include
>  #include
> +#include
> +
> +struct get_rdm_ctxt {
> +    unsigned int max_entries;
> +    unsigned int nr_entries;
> +    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
> +};
> +
> +static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
> +{
> +    struct get_rdm_ctxt *ctxt = arg;
> +
> +    if ( ctxt->nr_entries < ctxt->max_entries )
> +    {
> +        xen_iommu_reserved_region_t region = {
> +            .start_bfn = start,
> +            .nr_frames = nr,
> +        };
> +
> +        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
> +                                  1) )
> +            return -EFAULT;

RMRR entries are device specific. it's why a 'id' (i.e. sbdf) field
is introduced for such check.

> +    }
> +
> +    ctxt->nr_entries++;
> +
> +    return 1;
> +}
> +
> +static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)

I didn't get why we cannot reuse existing XENMEM_reserved_
device_memory_map?

Thanks
Kevin
[Xen-devel] [PATCH 6/7] x86: add iommu_op to query reserved ranges
Certain areas of memory, such as RMRRs, must be mapped 1:1
(i.e. BFN == MFN) through the IOMMU.

This patch adds an iommu_op to allow these ranges to be queried.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Konrad Rzeszutek Wilk
Cc: Stefano Stabellini
Cc: Tim Deegan
Cc: Wei Liu
---
 xen/arch/x86/iommu_op.c       | 121 ++
 xen/include/public/iommu_op.h |  35 
 xen/include/xlat.lst          |   2 +
 3 files changed, 158 insertions(+)

diff --git a/xen/arch/x86/iommu_op.c b/xen/arch/x86/iommu_op.c
index edd8a384b3..ac81b98b7a 100644
--- a/xen/arch/x86/iommu_op.c
+++ b/xen/arch/x86/iommu_op.c
@@ -22,6 +22,58 @@
 #include
 #include
 #include
+#include
+
+struct get_rdm_ctxt {
+    unsigned int max_entries;
+    unsigned int nr_entries;
+    XEN_GUEST_HANDLE(xen_iommu_reserved_region_t) regions;
+};
+
+static int get_rdm(xen_pfn_t start, xen_ulong_t nr, u32 id, void *arg)
+{
+    struct get_rdm_ctxt *ctxt = arg;
+
+    if ( ctxt->nr_entries < ctxt->max_entries )
+    {
+        xen_iommu_reserved_region_t region = {
+            .start_bfn = start,
+            .nr_frames = nr,
+        };
+
+        if ( copy_to_guest_offset(ctxt->regions, ctxt->nr_entries, &region,
+                                  1) )
+            return -EFAULT;
+    }
+
+    ctxt->nr_entries++;
+
+    return 1;
+}
+
+static int iommuop_query_reserved(struct xen_iommu_op_query_reserved *op)
+{
+    struct get_rdm_ctxt ctxt = {
+        .max_entries = op->nr_entries,
+        .regions = op->regions,
+    };
+    int rc;
+
+    if (op->pad != 0)
+        return -EINVAL;
+
+    rc = iommu_get_reserved_device_memory(get_rdm, &ctxt);
+    if ( rc )
+        return rc;
+
+    /* Pass back the actual number of reserved regions */
+    op->nr_entries = ctxt.nr_entries;
+
+    if ( ctxt.nr_entries > ctxt.max_entries )
+        return -ENOBUFS;
+
+    return 0;
+}
 
 static bool can_control_iommu(void)
 {
@@ -45,6 +97,10 @@ static void iommu_op(xen_iommu_op_t *op)
 {
     switch ( op->op )
     {
+    case XEN_IOMMUOP_query_reserved:
+        op->status = iommuop_query_reserved(&op->u.query_reserved);
+        break;
+
     default:
         op->status = -EOPNOTSUPP;
         break;
@@ -119,6 +175,8 @@ int compat_iommu_op(XEN_GUEST_HANDLE_PARAM(compat_iommu_op_t) uops,
 {
     compat_iommu_op_t cmp;
     xen_iommu_op_t nat;
+    unsigned int u;
+    int32_t status;
 
     if ( ((i & 0xff) == 0xff) && hypercall_preempt_check() )
     {
@@ -132,12 +190,75 @@ int compat_iommu_op(XEN_GUEST_HANDLE_PARAM(compat_iommu_op_t) uops,
         break;
     }
 
+    /*
+     * The xlat magic doesn't quite know how to handle the union so
+     * we need to fix things up here.
+     */
+#define XLAT_iommu_op_u_query_reserved XEN_IOMMUOP_query_reserved
+    u = cmp.op;
+
+#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \
+    do \
+    { \
+        if ( !compat_handle_is_null((_s_)->regions) ) \
+        { \
+            unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \
+            xen_iommu_reserved_region_t *regions = \
+                (void *)(nr_entries + 1); \
+\
+            if ( sizeof(*nr_entries) + \
+                 (sizeof(*regions) * (_s_)->nr_entries) > \
+                 COMPAT_ARG_XLAT_SIZE ) \
+                return -E2BIG; \
+\
+            *nr_entries = (_s_)->nr_entries; \
+            set_xen_guest_handle((_d_)->regions, regions); \
+        } \
+        else \
+            set_xen_guest_handle((_d_)->regions, NULL); \
+    } while (false)
 
     XLAT_iommu_op(&nat, &cmp);
 
+#undef XLAT_iommu_op_query_reserved_HNDL_regions
+
     iommu_op(&nat);
 
+    status = nat.status;
+
+#define XLAT_iommu_op_query_reserved_HNDL_regions(_d_, _s_) \
+    do \
+    { \
+        if ( !compat_handle_is_null((_d_)->regions) ) \
+        { \
+            unsigned int *nr_entries = COMPAT_ARG_XLAT_VIRT_BASE; \
+            xen_iommu_reserved_region_t *regions = \
+                (void *)(nr_entries + 1); \
+            unsigned int j; \
+\
+            for ( j = 0; \
+                  j < min_t(unsigned int, (_d_)->nr_entries, \
+                            *nr_entries); \
+                  j++ ) \
+            { \
+                compat_iommu_reserved_region_t region; \
+\
+                XLAT_iommu_reserved_region(&region, &regions[j]); \
+\
+                if ( __copy_to_compat_offset((_d_)->regions, j, \
+                                             &region, 1) ) \
+                    status = -EFAULT; \
+            } \
+        } \
+    } while (false)
 
     XLAT_iommu_op(&cmp, &nat);
 
+    /* The status
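The -ENOBUFS protocol the patch implements implies a two-call usage pattern on the caller's side: probe with nr_entries = 0 to learn the count, allocate a buffer, then query again. A sketch of that pattern with a local simulation of the hypercall (query_reserved() and its three fixed regions are stand-ins for illustration, not the real guest interface):

```c
#include <stdint.h>
#include <stdlib.h>

struct region { uint64_t start_bfn; uint32_t nr_frames; };

#define ENOBUFS_ERR (-105)   /* stand-in for -ENOBUFS */

/* Local simulation of the hypercall: three fixed reserved regions. */
static int query_reserved(struct region *buf, uint32_t *nr_entries)
{
    static const struct region rsvd[] = {
        { 0xad000, 16 }, { 0xbf000, 4 }, { 0xe8000, 8 },
    };
    const uint32_t total = 3;
    uint32_t have = *nr_entries;

    for ( uint32_t i = 0; i < total && i < have; i++ )
        buf[i] = rsvd[i];

    *nr_entries = total;              /* pass back the real count */
    return total > have ? ENOBUFS_ERR : 0;
}

/* Two-call pattern: probe, allocate, re-query. Caller frees *out. */
int get_all_reserved(struct region **out, uint32_t *count)
{
    uint32_t nr = 0;

    if ( query_reserved(NULL, &nr) != ENOBUFS_ERR && nr == 0 )
    {
        *out = NULL;                  /* no reserved regions at all */
        *count = 0;
        return 0;
    }

    *out = calloc(nr, sizeof(**out));
    if ( !*out )
        return -1;

    if ( query_reserved(*out, &nr) != 0 )
    {
        free(*out);                   /* count changed underneath us;  */
        return -1;                    /* a real caller would retry     */
    }

    *count = nr;
    return 0;
}
```

This is also why the reviewers suggest skipping the -ENOBUFS return when the handle is null: the probing first call passes a null buffer deliberately and only wants the count back.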