Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
On Tue, 12 Dec 2017, Julien Grall wrote:
> Hi Stefano,
>
> On 12/08/2017 10:43 PM, Stefano Stabellini wrote:
> > On Fri, 8 Dec 2017, Julien Grall wrote:
> > > [...]
> > > Thinking a bit more about this, it might be better/safer to pass
> > > either a domain or a vCPU to copy_guest. I can see 2 solutions:
> > > 1# Introduce a union that uses the same parameter: [...]
> > > 2# Have 2 parameters, vcpu and domain.
> > > Any opinions?
> >
> > I think that would be clearer. You could also add a paddr_t/vaddr_t
> > correspondingly inside the union maybe.
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
Hi Stefano,

On 12/08/2017 10:43 PM, Stefano Stabellini wrote:
> On Fri, 8 Dec 2017, Julien Grall wrote:
> > [...]
> > > > Thinking a bit more about this, it might be better/safer to pass
> > > > either a domain or a vCPU to copy_guest. [...]
> > > I think that would be clearer. You could also add a paddr_t/vaddr_t
> > > correspondingly inside the union maybe.
> > Well, you will have a name clash happening, I think. And vaddr_t and
> > paddr_t are confusing because you don't know which address you are
> > speaking about (guest vs hypervisor). At least ipa/gpa and gva are
> > well-known names.
>
> That's not what I meant. ipa and gva are good names. I was suggesting to
> put an additional address field inside the union to avoid the issue with
> paddr_t and vaddr_t discussed elsewhere
> (alpine.DEB.2.10.1712081313070.8052@sstabellini-ThinkPad-X260). I am
> happy either way, it was just a suggestion.

Actually, looking at it, it will not be that handy to have the paddr_t/vaddr_t inside the union, mostly because of the common code using the address parameter.
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
On 8 Dec 2017 22:26, "Stefano Stabellini" wrote:
> On Fri, 8 Dec 2017, Julien Grall wrote:
> > [...]
> > Thinking a bit more about this, it might be better/safer to pass either
> > a domain or a vCPU to copy_guest. I can see 2 solutions:
> > 1# Introduce a union that uses the same parameter: [...]
> > 2# Have 2 parameters, vcpu and domain.
> > Any opinions?
>
> I think that would be clearer. You could also add a paddr_t/vaddr_t
> correspondingly inside the union maybe.

Well, you will have a name clash happening, I think. And vaddr_t and paddr_t are confusing because you don't know which address you are speaking about (guest vs hypervisor). At least ipa/gpa and gva are well-known names.

Cheers,

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
On Fri, 8 Dec 2017, Julien Grall wrote:
> On 06/12/17 12:27, Julien Grall wrote:
> > [...]
>
> Thinking a bit more about this, it might be better/safer to pass either
> a domain or a vCPU to copy_guest. I can see 2 solutions:
>
> 1# Introduce a union that uses the same parameter:
>        union
>        {
>            struct
>            {
>                struct domain *d;
>            } ipa;
>            struct
>            {
>                struct vcpu *v;
>            } gva;
>        };
>    The structure here would be to ensure that it is clear that only a
>    domain (resp. vcpu) should be used with ipa (resp. gva).
>
> 2# Have 2 parameters, vcpu and domain.
>
> Any opinions?

I think that would be clearer. You could also add a paddr_t/vaddr_t correspondingly inside the union maybe.
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
Hi,

On 06/12/17 12:27, Julien Grall wrote:
> On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> > [...]
> > Since this is a new function, would it make sense to take a struct
> > vcpu * as parameter, instead of a struct domain *?
>
> Well, I suggested this patch this way because likely everyone will use
> it with d->vcpus[0]. And then you would have to wonder why d->vcpus[0]
> and not d->vcpus[1]...

Thinking a bit more about this, it might be better/safer to pass either a domain or a vCPU to copy_guest. I can see 2 solutions:

1# Introduce a union that uses the same parameter:

       union
       {
           struct
           {
               struct domain *d;
           } ipa;
           struct
           {
               struct vcpu *v;
           } gva;
       };

   The structure here would be to ensure that it is clear that only a
   domain (resp. vcpu) should be used with ipa (resp. gva).

2# Have 2 parameters, vcpu and domain.

Any opinions?

Cheers,

--
Julien Grall
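[Editorial sketch] Option 1# can be illustrated outside of Xen with a small standalone example; the struct layouts, the flag names, and the toy copy_guest() below are stand-ins for illustration, not the actual Xen definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the real Xen types -- for illustration only. */
struct domain { int domain_id; };
struct vcpu   { int vcpu_id; struct domain *domain; };

/* Flags mirroring the COPY_ipa / COPY_linear distinction in copy_guest(). */
#define COPY_ipa    (1 << 0)  /* address is a guest physical address (IPA) */
#define COPY_linear (1 << 1)  /* address is a guest virtual address (GVA)  */

/* Option 1#: one parameter slot, but the union makes it explicit that an
 * IPA-based copy needs only a domain while a GVA-based copy needs a vcpu. */
union copy_target
{
    struct { struct domain *d; } ipa;
    struct { struct vcpu *v;   } gva;
};

/* A toy copy_guest() that only checks it was handed the right member:
 * the IPA path touches only .ipa.d, the GVA path only .gva.v. */
static int copy_guest(union copy_target t, unsigned int flags)
{
    if ( flags & COPY_ipa )
        return t.ipa.d != NULL;                         /* IPA path */
    return t.gva.v != NULL && t.gva.v->domain != NULL;  /* GVA path */
}
```

The point of the nested structs is purely documentary: a caller writing `t.gva.v` next to a `COPY_ipa` flag is visibly suspicious in review, which a bare `void *` or two separate parameters would not be.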
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
Hi Stefano,

On 12/06/2017 01:26 AM, Stefano Stabellini wrote:
> On Thu, 23 Nov 2017, Julien Grall wrote:
> > [...]
> > It is not important because the underlying call is get_page_from_gfn,
> > which does not care about the alternative view (that function takes a
> > domain as parameter). I can update the comment.
>
> Since this is a new function, would it make sense to take a struct
> vcpu * as parameter, instead of a struct domain *?

Well, I suggested this patch this way because likely everyone will use it with d->vcpus[0]. And then you would have to wonder why d->vcpus[0] and not d->vcpus[1]...

Cheers,

--
Julien Grall
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
On Thu, 23 Nov 2017, Julien Grall wrote:
> Hi Andrew,
>
> On 23/11/17 18:49, Andrew Cooper wrote:
> > [...]
> > Also, what about vcpus configured with alternative views?
>
> It is not important because the underlying call is get_page_from_gfn,
> which does not care about the alternative view (that function takes a
> domain as parameter). I can update the comment.

Since this is a new function, would it make sense to take a struct vcpu * as parameter, instead of a struct domain *?
Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
Hi Andrew,

On 23/11/17 18:49, Andrew Cooper wrote:
> On 23/11/17 18:32, Julien Grall wrote:
> > This new function will be used in a follow-up patch to copy data to
> > the guest using the IPA (aka guest physical address) and then clean
> > the cache.
> > [...]
> > +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> > +                                              paddr_t gpa,
> > +                                              void *buf,
> > +                                              unsigned int len)
> > +{
> > +    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
>
> Be very careful with this line of thinking. It only works after
> DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
> NULL pointer dereference.

I really don't expect that function to be used before DOMCTL_max_vcpus is set. It is only used for hardware emulation or Xen loading an image into the hardware domain's memory. I could add a check on d->vcpus to be safe.

> Also, what about vcpus configured with alternative views?

It is not important because the underlying call is get_page_from_gfn, which does not care about the alternative view (that function takes a domain as parameter). I can update the comment.

Cheers,

--
Julien Grall
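[Editorial sketch] The defensive check discussed above could look like the following; the structure layout and the return convention (returning the full length to report all bytes as uncopied, as the guest-copy helpers do) are assumptions for illustration, not the actual patch.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the Xen structures -- illustration only. */
struct vcpu { int vcpu_id; };
struct domain
{
    unsigned int max_vcpus;   /* set by DOMCTL_max_vcpus       */
    struct vcpu **vcpu;       /* allocated by DOMCTL_max_vcpus */
};

/* Hypothetical guard: before DOMCTL_max_vcpus has run, d->vcpu[0] does
 * not exist, so bail out, reporting all 'len' bytes as uncopied instead
 * of dereferencing a NULL pointer. */
static unsigned long copy_to_guest_phys_checked(struct domain *d,
                                                void *buf, unsigned int len)
{
    if ( d->max_vcpus == 0 || d->vcpu == NULL || d->vcpu[0] == NULL )
        return len;           /* nothing copied */

    /* Real code would call copy_guest(buf, ..., d->vcpu[0], ...) here. */
    (void)buf;
    return 0;                 /* pretend the copy succeeded */
}
```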
[Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache
This new function will be used in a follow-up patch to copy data to the guest using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall
---
 xen/arch/arm/guestcopy.c           | 10 ++++++++++
 xen/include/asm-arm/guest_access.h |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index be53bee559..7958663970 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
                           COPY_from_guest | COPY_linear);
 }
 
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    /* P2M is shared between all vCPUs, so the vCPU used does not matter. */
+    return copy_guest(buf, gpa, len, d->vcpu[0],
+                      COPY_to_guest | COPY_ipa | COPY_flush_dcache);
+}
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
                                uint32_t size, bool is_write)
 {
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 6796801cfe..224d2a033b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
                                uint32_t size, bool is_write);
-- 
2.11.0