Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-24 Thread Julien Grall




On 24/05/2021 11:10, Penny Zheng wrote:

Hi Julien


Hi Penny,


+if ( !pg )
+return NULL;
+
+for ( i = 0; i < nr_pfns; i++)
+{
+/*
+ * Reference count must continuously be zero for free pages
+ * of static memory(PGC_reserved).
+ */
+ASSERT(pg[i].count_info & PGC_reserved);
+if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+{
+printk(XENLOG_ERR
+"Reference count must continuously be zero for free pages"
+"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+i, mfn_x(page_to_mfn(pg + i)),
+pg[i].count_info, pg[i].tlbflush_timestamp);
+BUG();


So we would crash Xen if the caller passes a wrong range. Is that what we want?

Also, who is going to prevent concurrent access?



Sure, to fix the concurrency issue, I may need to add one spinlock like
`static DEFINE_SPINLOCK(staticmem_lock);`.

The current alloc_heap_pages does a similar check: pages in the free state
MUST have a zero reference count. I guess, if the condition is not met,
there is no need to proceed.



Another thought on the concurrency problem: when constructing patch v2, do we
need to consider concurrency here? heap_lock takes care of concurrent
allocation on the heap, but static memory is always reserved for only one
specific domain.

In theory yes, but you are relying on the admin to correctly write the
device-tree nodes.


You are probably not going to hit the problem today because the domains 
are created one by one. But, as you may want to allocate memory at 
runtime, it is quite important to protect the code from concurrent 
access.


Here, you will likely want to use the heap_lock rather than a new lock. 
That way you are also protected against concurrent access to count_info 
from other parts of Xen.
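
To make this concrete, here is a rough sketch (illustration only, not part of
the posted patch) of how the allocation loop could take the existing heap_lock.
The nr_mfns/mfn_t signature follows the rename discussed later in the thread,
and bailing out with NULL instead of BUG() is just one possible policy:

static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
                                               mfn_t smfn,
                                               unsigned int memflags)
{
    struct page_info *pg = mfn_to_page(smfn);
    unsigned long i;

    /* Re-use the heap_lock so count_info cannot change under our feet. */
    spin_lock(&heap_lock);

    for ( i = 0; i < nr_mfns; i++ )
    {
        /* Refuse (rather than BUG()) if a page is not a free, reserved page. */
        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
        {
            spin_unlock(&heap_lock);
            return NULL;
        }

        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
        /* ... rest of the per-page initialisation as in the patch ... */
    }

    spin_unlock(&heap_lock);

    return pg;
}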



Cheers,

--
Julien Grall



RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-24 Thread Penny Zheng
Hi Julien

> -Original Message-
> From: Penny Zheng
> Sent: Wednesday, May 19, 2021 1:24 PM
> To: Julien Grall ; xen-devel@lists.xenproject.org;
> sstabell...@kernel.org
> Cc: Bertrand Marquis ; Wei Chen
> ; nd 
> Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Julien
> 
> > -Original Message-
> > From: Julien Grall 
> > Sent: Tuesday, May 18, 2021 6:15 PM
> > To: Penny Zheng ; xen-devel@lists.xenproject.org;
> > sstabell...@kernel.org
> > Cc: Bertrand Marquis ; Wei Chen
> > ; nd 
> > Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> >
> > Hi Penny,
> >
> > On 18/05/2021 06:21, Penny Zheng wrote:
> > > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > > pages of static memory. And it is the equivalent of alloc_heap_pages
> > > for static memory.
> > > This commit only covers allocating at specified starting address.
> > >
> > > For each page, it shall check if the page is reserved
> > > (PGC_reserved) and free. It shall also do a set of necessary
> > > initialization, which are mostly the same ones in alloc_heap_pages,
> > > like, following the same cache-coherency policy and turning page
> > > status into PGC_state_used, etc.
> > >
> > > Signed-off-by: Penny Zheng 
> > > ---
> > >   xen/common/page_alloc.c | 64
> > +
> > >   1 file changed, 64 insertions(+)
> > >
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > > 58b53c6ac2..adf2889e76 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> > >   return pg;
> > >   }
> > >
> > > +/*
> > > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > > + * It is the equivalent of alloc_heap_pages for static memory
> > > + */
> > > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> >
> > This wants to be nr_mfns.
> >
> > > +paddr_t start,
> >
> > I would prefer if this helper takes an mfn_t in parameter.
> >
> 
> Sure, I will change both.
> 
> > > +unsigned int memflags)
> > > +{
> > > +bool need_tlbflush = false;
> > > +uint32_t tlbflush_timestamp = 0;
> > > +unsigned int i;
> > > +struct page_info *pg;
> > > +mfn_t s_mfn;
> > > +
> > > +/* For now, it only supports allocating at specified address. */
> > > +s_mfn = maddr_to_mfn(start);
> > > +pg = mfn_to_page(s_mfn);
> >
> > We should avoid making the assumption that the start address will be valid.
> > So you want to call mfn_valid() first.
> >
> > At the same time, there is no guarantee that if the first page is
> > valid, then the next nr_pfns will be. So the check should be performed
> > for all of them.
> >
> 
> Ok. I'll do validation check on both of them.
> 
> > > +if ( !pg )
> > > +return NULL;
> > > +
> > > +for ( i = 0; i < nr_pfns; i++)
> > > +{
> > > +/*
> > > + * Reference count must continuously be zero for free pages
> > > + * of static memory(PGC_reserved).
> > > + */
> > > +ASSERT(pg[i].count_info & PGC_reserved);
> > > +if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > > +{
> > > +printk(XENLOG_ERR
> > > +"Reference count must continuously be zero for free 
> > > pages"
> > > +"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > > +i, mfn_x(page_to_mfn(pg + i)),
> > > +pg[i].count_info, pg[i].tlbflush_timestamp);
> > > +BUG();
> >
> > So we would crash Xen if the caller passes a wrong range. Is that what we want?
> >
> > Also, who is going to prevent concurrent access?
> >
> 
> Sure, to fix concurrency issue, I may need to add one spinlock like `static
> DEFINE_SPINLOCK(staticmem_lock);`
> 
> In current alloc_heap_pages, it will do similar check, that pages in free 
> state
> MUST have zero reference count. I guess, if condition not met, ther

RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-18 Thread Penny Zheng
Hi Julien

> -Original Message-
> From: Julien Grall 
> Sent: Tuesday, May 18, 2021 6:15 PM
> To: Penny Zheng ; xen-devel@lists.xenproject.org;
> sstabell...@kernel.org
> Cc: Bertrand Marquis ; Wei Chen
> ; nd 
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng 
> > ---
> >   xen/common/page_alloc.c | 64
> +
> >   1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >   return pg;
> >   }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory
> > + */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> 
> This wants to be nr_mfns.
> 
> > +paddr_t start,
> 
> I would prefer if this helper takes an mfn_t in parameter.
> 

Sure, I will change both.
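For reference, the renamed prototype might then look roughly like this (just a
sketch of the naming change, not the final code):

static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
                                               mfn_t smfn,
                                               unsigned int memflags);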

> > +unsigned int memflags)
> > +{
> > +bool need_tlbflush = false;
> > +uint32_t tlbflush_timestamp = 0;
> > +unsigned int i;
> > +struct page_info *pg;
> > +mfn_t s_mfn;
> > +
> > +/* For now, it only supports allocating at specified address. */
> > +s_mfn = maddr_to_mfn(start);
> > +pg = mfn_to_page(s_mfn);
> 
> We should avoid making the assumption that the start address will be valid.
> So you want to call mfn_valid() first.
> 
> At the same time, there is no guarantee that if the first page is valid, then
> the next nr_pfns will be. So the check should be performed for all of them.
> 

Ok. I'll do validation check on both of them.

> > +if ( !pg )
> > +return NULL;
> > +
> > +for ( i = 0; i < nr_pfns; i++)
> > +{
> > +/*
> > + * Reference count must continuously be zero for free pages
> > + * of static memory(PGC_reserved).
> > + */
> > +ASSERT(pg[i].count_info & PGC_reserved);
> > +if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +{
> > +printk(XENLOG_ERR
> > +"Reference count must continuously be zero for free 
> > pages"
> > +"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +i, mfn_x(page_to_mfn(pg + i)),
> > +pg[i].count_info, pg[i].tlbflush_timestamp);
> > +BUG();
> 
> > So we would crash Xen if the caller passes a wrong range. Is that what we want?
> 
> Also, who is going to prevent concurrent access?
> 

Sure, to fix the concurrency issue, I may need to add one spinlock like
`static DEFINE_SPINLOCK(staticmem_lock);`.

The current alloc_heap_pages does a similar check: pages in the free state
MUST have a zero reference count. I guess, if the condition is not met,
there is no need to proceed.

> > +}
> > +
> > +if ( !(memflags & MEMF_no_tlbflush) )
> > +accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +&tlbflush_timestamp);
> > +
> > +/*
> > + * Reserve flag PGC_reserved and change page state
> > + * to PGC_state_inuse.
> > + */
> > +pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> > +/* Initialise fields which have other uses for free pages. */
> > +pg[i].u.inuse.type_info = 0;
> > +page_set_owner(&pg[i], NULL);
> > +
> > +/*
> > + * Ensure cache and RAM are consistent for platforms where the
> > + * guest can control its own visibility of/through the cache.
> > + */
> > +flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +!(memflags & MEMF_no_icache_flush));
> > +}
> > +
> > +if ( need_tlbflush )
> > +filtered_flush_tlb_mask(tlbflush_timestamp);
> > +
> > +return pg;
> > +}
> > +
> >   /* Remove any offlined page in the buddy pointed to by head. */
> >   static int reserve_offlined_page(struct page_info *head)
> >   {
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers,

Penny Zheng


Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-18 Thread Julien Grall

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:

alloc_staticmem_pages is designated to allocate nr_pfns contiguous
pages of static memory. And it is the equivalent of alloc_heap_pages
for static memory.
This commit only covers allocating at specified starting address.

For each page, it shall check if the page is reserved
(PGC_reserved) and free. It shall also do a set of necessary
initialization, which are mostly the same ones in alloc_heap_pages,
like, following the same cache-coherency policy and turning page
status into PGC_state_used, etc.

Signed-off-by: Penny Zheng 
---
  xen/common/page_alloc.c | 64 +
  1 file changed, 64 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 58b53c6ac2..adf2889e76 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
  return pg;
  }
  
+/*

+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,


This wants to be nr_mfns.


+paddr_t start,


I would prefer if this helper takes an mfn_t in parameter.


+unsigned int memflags)
+{
+bool need_tlbflush = false;
+uint32_t tlbflush_timestamp = 0;
+unsigned int i;
+struct page_info *pg;
+mfn_t s_mfn;
+
+/* For now, it only supports allocating at specified address. */
+s_mfn = maddr_to_mfn(start);
+pg = mfn_to_page(s_mfn);


We should avoid making the assumption that the start address will be valid. 
So you want to call mfn_valid() first.


At the same time, there is no guarantee that if the first page is valid, 
then the next nr_pfns will be. So the check should be performed for all 
of them.
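
As a purely illustrative sketch (assuming the mfn_t/nr_mfns parameters
suggested above; mfn_add() is the usual helper for stepping through a range),
the validation could look like:

    /* Validate every frame in the requested range up front. */
    for ( i = 0; i < nr_mfns; i++ )
    {
        if ( !mfn_valid(mfn_add(smfn, i)) )
            return NULL;
    }

    pg = mfn_to_page(smfn);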



+if ( !pg )
+return NULL;
+
+for ( i = 0; i < nr_pfns; i++)
+{
+/*
+ * Reference count must continuously be zero for free pages
+ * of static memory(PGC_reserved).
+ */
+ASSERT(pg[i].count_info & PGC_reserved);
+if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+{
+printk(XENLOG_ERR
+"Reference count must continuously be zero for free pages"
+"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+i, mfn_x(page_to_mfn(pg + i)),
+pg[i].count_info, pg[i].tlbflush_timestamp);
+BUG();


So we would crash Xen if the caller passes a wrong range. Is that what we want?

Also, who is going to prevent concurrent access?


+}
+
+if ( !(memflags & MEMF_no_tlbflush) )
+accumulate_tlbflush(&need_tlbflush, &pg[i],
+&tlbflush_timestamp);
+
+/*
+ * Reserve flag PGC_reserved and change page state
+ * to PGC_state_inuse.
+ */
+pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+/* Initialise fields which have other uses for free pages. */
+pg[i].u.inuse.type_info = 0;
+page_set_owner(&pg[i], NULL);
+
+/*
+ * Ensure cache and RAM are consistent for platforms where the
+ * guest can control its own visibility of/through the cache.
+ */
+flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+!(memflags & MEMF_no_icache_flush));
+}
+
+if ( need_tlbflush )
+filtered_flush_tlb_mask(tlbflush_timestamp);
+
+return pg;
+}
+
  /* Remove any offlined page in the buddy pointed to by head. */
  static int reserve_offlined_page(struct page_info *head)
  {



Cheers,

--
Julien Grall



Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-18 Thread Julien Grall

Hi Jan,

On 18/05/2021 08:24, Jan Beulich wrote:

On 18.05.2021 07:21, Penny Zheng wrote:

+ * to PGC_state_inuse.
+ */
+pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+/* Initialise fields which have other uses for free pages. */
+pg[i].u.inuse.type_info = 0;
+page_set_owner(&pg[i], NULL);
+
+/*
+ * Ensure cache and RAM are consistent for platforms where the
+ * guest can control its own visibility of/through the cache.
+ */
+flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+!(memflags & MEMF_no_icache_flush));
+}
+
+if ( need_tlbflush )
+filtered_flush_tlb_mask(tlbflush_timestamp);


With reserved pages dedicated to a specific domain, in how far is it
possible that stale mappings from a prior use can still be around,
making such TLB flushing necessary?


I would rather not make the assumption. I can see a future where we just 
want to allocate memory from a static pool that may be shared with 
multiple domains.


Cheers,

--
Julien Grall



RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-18 Thread Penny Zheng
Hi Jan

> -Original Message-
> From: Jan Beulich 
> Sent: Tuesday, May 18, 2021 3:24 PM
> To: Penny Zheng 
> Cc: Bertrand Marquis ; Wei Chen
> ; nd ; xen-devel@lists.xenproject.org;
> sstabell...@kernel.org; jul...@xen.org
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng 
> > ---
> >  xen/common/page_alloc.c | 64
> > +
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >  return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory
> > + */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> > +paddr_t start,
> > +unsigned int memflags)
> 
> This is surely breaking the build (at this point in the series - recall that 
> a series
> should build fine at every patch boundary), for introducing an unused static
> function, which most compilers will warn about.
>

Sure, I'll combine it with other commits

> Also again - please avoid introducing code that's always dead for certain
> architectures. Quite likely you want a Kconfig option to put a suitable #ifdef
> around such functions.
> 

Sure, sorry for all the missing #ifdefs.

> And a nit: Please correct the apparently off-by-one indentation.
>

Sure, I'll check through the code more carefully.

> > +{
> > +bool need_tlbflush = false;
> > +uint32_t tlbflush_timestamp = 0;
> > +unsigned int i;
> 
> This variable's type should (again) match nr_pfns'es (albeit I think that
> parameter really wants to be nr_mfns).
> 

Correct me if I understand you wrongly: you mean the parameter in
alloc_staticmem_pages is better named nr_mfns (of type unsigned long), right?

> > +struct page_info *pg;
> > +mfn_t s_mfn;
> > +
> > +/* For now, it only supports allocating at specified address. */
> > +s_mfn = maddr_to_mfn(start);
> > +pg = mfn_to_page(s_mfn);
> > +if ( !pg )
> > +return NULL;
> 
> Under what conditions would mfn_to_page() return NULL?

Right, my mistake.

>
> > +for ( i = 0; i < nr_pfns; i++)
> > +{
> > +/*
> > + * Reference count must continuously be zero for free pages
> > + * of static memory(PGC_reserved).
> > + */
> > +ASSERT(pg[i].count_info & PGC_reserved);
> > +if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +{
> > +printk(XENLOG_ERR
> > +"Reference count must continuously be zero for free 
> > pages"
> > +"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +i, mfn_x(page_to_mfn(pg + i)),
> > +pg[i].count_info, pg[i].tlbflush_timestamp);
> 
> Nit: Indentation again.
>
 
Thx

> > +BUG();
> > +}
> > +
> > +if ( !(memflags & MEMF_no_tlbflush) )
> > +accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +&tlbflush_timestamp);
> > +
> > +/*
> > + * Reserve flag PGC_reserved and change page state
> 
> DYM "Preserve ..."?
> 

Sure, thx

> > + * to PGC_state_inuse.
> > + */
> > +pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> > +/* Initialise fields which have other uses for free pages. */
> > +pg[i].u.inuse.type_info = 0;
> > +page_set_owner(&pg[i], NULL);
> > +
> > +/*
> > + * Ensure cache and RAM are consistent for platforms where the
> > + * guest can control its own visibility of/through the cache.
> > + */
> > +flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +!(memflags & MEMF_no_icache_flush));
> > +}
> > +
> > +if ( need_tlbflush )
> > +filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it possible
> that stale mappings from a prior use can still be around, making such TLB
> flushing necessary?
> 

Yes, you're right.

> Jan


Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-18 Thread Jan Beulich
On 18.05.2021 07:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng 
> ---
>  xen/common/page_alloc.c | 64 +
>  1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>  return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> +paddr_t start,
> +unsigned int memflags)

This is surely breaking the build (at this point in the series -
recall that a series should build fine at every patch boundary),
for introducing an unused static function, which most compilers
will warn about.

Also again - please avoid introducing code that's always dead for
certain architectures. Quite likely you want a Kconfig option to
put a suitable #ifdef around such functions.
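
For instance, something along these lines (only a sketch — CONFIG_STATIC_MEMORY
is a placeholder name for whatever Kconfig option the series introduces):

#ifdef CONFIG_STATIC_MEMORY
static struct page_info *alloc_staticmem_pages(unsigned long nr_mfns,
                                               mfn_t smfn,
                                               unsigned int memflags)
{
    /* ... body as in this patch ... */
}
#endif /* CONFIG_STATIC_MEMORY */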

And a nit: Please correct the apparently off-by-one indentation.

> +{
> +bool need_tlbflush = false;
> +uint32_t tlbflush_timestamp = 0;
> +unsigned int i;

This variable's type should (again) match nr_pfns'es (albeit I
think that parameter really wants to be nr_mfns).

> +struct page_info *pg;
> +mfn_t s_mfn;
> +
> +/* For now, it only supports allocating at specified address. */
> +s_mfn = maddr_to_mfn(start);
> +pg = mfn_to_page(s_mfn);
> +if ( !pg )
> +return NULL;

Under what conditions would mfn_to_page() return NULL?

> +for ( i = 0; i < nr_pfns; i++)
> +{
> +/*
> + * Reference count must continuously be zero for free pages
> + * of static memory(PGC_reserved).
> + */
> +ASSERT(pg[i].count_info & PGC_reserved);
> +if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +{
> +printk(XENLOG_ERR
> +"Reference count must continuously be zero for free 
> pages"
> +"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +i, mfn_x(page_to_mfn(pg + i)),
> +pg[i].count_info, pg[i].tlbflush_timestamp);

Nit: Indentation again.

> +BUG();
> +}
> +
> +if ( !(memflags & MEMF_no_tlbflush) )
> +accumulate_tlbflush(&need_tlbflush, &pg[i],
> +&tlbflush_timestamp);
> +
> +/*
> + * Reserve flag PGC_reserved and change page state

DYM "Preserve ..."?

> + * to PGC_state_inuse.
> + */
> +pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +/* Initialise fields which have other uses for free pages. */
> +pg[i].u.inuse.type_info = 0;
> +page_set_owner(&pg[i], NULL);
> +
> +/*
> + * Ensure cache and RAM are consistent for platforms where the
> + * guest can control its own visibility of/through the cache.
> + */
> +flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +!(memflags & MEMF_no_icache_flush));
> +}
> +
> +if ( need_tlbflush )
> +filtered_flush_tlb_mask(tlbflush_timestamp);

With reserved pages dedicated to a specific domain, in how far is it
possible that stale mappings from a prior use can still be around,
making such TLB flushing necessary?

Jan



[PATCH 05/10] xen/arm: introduce alloc_staticmem_pages

2021-05-17 Thread Penny Zheng
alloc_staticmem_pages is designated to allocate nr_pfns contiguous
pages of static memory. And it is the equivalent of alloc_heap_pages
for static memory.
This commit only covers allocating at specified starting address.

For each page, it shall check if the page is reserved
(PGC_reserved) and free. It shall also do a set of necessary
initialization, which are mostly the same ones in alloc_heap_pages,
like, following the same cache-coherency policy and turning page
status into PGC_state_used, etc.

Signed-off-by: Penny Zheng 
---
 xen/common/page_alloc.c | 64 +
 1 file changed, 64 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 58b53c6ac2..adf2889e76 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
 return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
+paddr_t start,
+unsigned int memflags)
+{
+bool need_tlbflush = false;
+uint32_t tlbflush_timestamp = 0;
+unsigned int i;
+struct page_info *pg;
+mfn_t s_mfn;
+
+/* For now, it only supports allocating at specified address. */
+s_mfn = maddr_to_mfn(start);
+pg = mfn_to_page(s_mfn);
+if ( !pg )
+return NULL;
+
+for ( i = 0; i < nr_pfns; i++)
+{
+/*
+ * Reference count must continuously be zero for free pages
+ * of static memory(PGC_reserved).
+ */
+ASSERT(pg[i].count_info & PGC_reserved);
+if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+{
+printk(XENLOG_ERR
+"Reference count must continuously be zero for free pages"
+"pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+i, mfn_x(page_to_mfn(pg + i)),
+pg[i].count_info, pg[i].tlbflush_timestamp);
+BUG();
+}
+
+if ( !(memflags & MEMF_no_tlbflush) )
+accumulate_tlbflush(&need_tlbflush, &pg[i],
+&tlbflush_timestamp);
+
+/*
+ * Reserve flag PGC_reserved and change page state
+ * to PGC_state_inuse.
+ */
+pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+/* Initialise fields which have other uses for free pages. */
+pg[i].u.inuse.type_info = 0;
+page_set_owner(&pg[i], NULL);
+
+/*
+ * Ensure cache and RAM are consistent for platforms where the
+ * guest can control its own visibility of/through the cache.
+ */
+flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+!(memflags & MEMF_no_icache_flush));
+}
+
+if ( need_tlbflush )
+filtered_flush_tlb_mask(tlbflush_timestamp);
+
+return pg;
+}
+
 /* Remove any offlined page in the buddy pointed to by head. */
 static int reserve_offlined_page(struct page_info *head)
 {
-- 
2.25.1