Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-07-18 Thread Stefano Stabellini
On Tue, 18 Jul 2017, Zhongze Liu wrote:
> Hi Julien,
> 
> After our discussion during the summit, I have revised my plan, but
> I'm still working on it and haven't sent it to the ML yet.
> I'm planning to send a new version of my proposal together with the
> parsing code later so that I could reference the
> proposal in the commit message.
> But here is what's related to our discussion about the granularity in
> my current draft:
> 
>   @granularity can be a number with an optional unit: k, m, kb or mb;
>   the final result should be a multiple of 4k.
> 
> The actual addresses of begin/end will then be calculated by multiplying them
> by @granularity. For example, if begin=0x100 and granularity=4k then the
> shared space will begin at the address 0x100000.

I would remove "granularity" from the interface and just use full
addresses for begin and end (or begin and size).
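
For illustration only, with that suggestion the entry might end up looking
something like the following (the addresses are made-up values and the exact
key names are still to be decided):

static_shared_mem = ["id = ID1, begin = 0x100000, size = 0x80000,
  prot = RO, master"]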

 
> Cheers,
> 
> Zhongze Liu
> 
> 2017-07-18 20:10 GMT+08:00 Julien Grall :
> > Hi,
> >
> >
> > On 20/06/17 18:18, Zhongze Liu wrote:
> >>
> >> 
> >> 1. Motivation and Description
> >> 
> >> Virtual machines use grant table hypercalls to setup a share page for
> >> inter-VMs communications. These hypercalls are used by all PV
> >> protocols today. However, very simple guests, such as baremetal
> >> applications, might not have the infrastructure to handle the grant table.
> >> This project is about setting up several shared memory areas for inter-VMs
> >> communications directly from the VM config file.
> >> So that the guest kernel doesn't have to have grant table support (in the
> >> embedded space, this is not unusual) to be able to communicate with
> >> other guests.
> >>
> >> 
> >> 2. Implementation Plan:
> >> 
> >>
> >> ==
> >> 2.1 Introduce a new VM config option in xl:
> >> ==
> >> The shared areas should be shareable among several (>=2) VMs, so
> >> every shared physical memory area is assigned to a set of VMs.
> >> Therefore, a “token” or “identifier” should be used here to uniquely
> >> identify a backing memory area.
> >>
> >> The backing area would be taken from one domain, which we will regard
> >> as the "master domain", and this domain should be created prior to any
> >> other "slave domain"s. Again, we have to use some kind of tag to tell who
> >> is the "master domain".
> >>
> >> And the ability to specify the attributes of the pages (say, WO/RO/X)
> >> to be shared should be also given to the user. For the master domain,
> >> these attributes often describes the maximum permission allowed for the
> >> shared pages, and for the slave domains, these attributes are often used
> >> to describe with what permissions this area will be mapped.
> >> This information should also be specified in the xl config entry.
> >>
> >> To handle all these, I would suggest using an unsigned integer to serve as
> >> the
> >> identifier, and using a "master" tag in the master domain's xl config
> >> entry
> >> to announce that she will provide the backing memory pages. A separate
> >> entry would be used to describe the attributes of the shared memory area,
> >> of
> >> the form "prot=RW".
> >> For example:
> >>
> >> In xl config file of vm1:
> >>
> >> static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
> >>   granularity = 4k, prot = RO, master”,
> >>  "id = ID2, begin = gmfn3, end = gmfn4,
> >>  granularity = 4k, prot = RW, master”]
> >
> >
> > Replying here regarding the discussion we had during the summit. AArch64 is
> > supporting multiple page granularities (4KB, 16KB, 64KB).
> >
> > Each guest and the Hypervisor are free to use different page granularity. To
> > go further, if I am not mistaken, an OS is free to use different page
> > granularity on each processor.
> >
> > In reality, I have only seen OS using the same granularity across all the
> > processors.
> >
> > At the moment, Xen is only supporting 4KB page granularity. Although, there
> > are plan to also support 64KB because this is the only way to support above
> > 48-bit physical address.
> >
> > With that in mind, this interface is a bit confusing. What does the
> > "granularity" refers to? Hypervisor? Guest A? Guest B?
> >
> > Similarly, gmfn* are frames. But what is its granularity?
> >
> > I think it would make sense to start using the full address on the toolstack
> > side, avoiding confusion for the user what is the page granularity to be
> > used here.
> >
> > Cheers,
> >
> > --
> > Julien Grall
> ___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-07-18 Thread Zhongze Liu
Hi Julien,

After our discussion during the summit, I have revised my plan, but
I'm still working on it and haven't sent it to the ML yet.
I'm planning to send a new version of my proposal together with the
parsing code later so that I could reference the
proposal in the commit message.
But here is what's related to our discussion about the granularity in
my current draft:

  @granularity can be a number with an optional unit: k, m, kb or mb;
  the final result should be a multiple of 4k.

The actual addresses of begin/end will then be calculated by multiplying them
by @granularity. For example, if begin=0x100 and granularity=4k then the
shared space will begin at the address 0x100000.
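
Just to make the unit handling concrete, a parser for such a value might look
roughly like the sketch below (a hypothetical helper, not existing toolstack
code):

    /* Hypothetical sketch: parse "<number>[k|kb|m|mb]" into bytes and
     * check that the result is a multiple of 4k. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <strings.h>

    static int parse_granularity(const char *s, uint64_t *out)
    {
        char *end;
        uint64_t v = strtoull(s, &end, 0);

        if (end == s)
            return -1;                      /* no digits at all */
        if (!strcasecmp(end, "k") || !strcasecmp(end, "kb"))
            v <<= 10;
        else if (!strcasecmp(end, "m") || !strcasecmp(end, "mb"))
            v <<= 20;
        else if (*end != '\0')
            return -1;                      /* unknown unit */
        if (v == 0 || (v & 0xfff))
            return -1;                      /* must be a multiple of 4k */
        *out = v;
        return 0;
    }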


Cheers,

Zhongze Liu

2017-07-18 20:10 GMT+08:00 Julien Grall :
> Hi,
>
>
> On 20/06/17 18:18, Zhongze Liu wrote:
>>
>> 
>> 1. Motivation and Description
>> 
>> Virtual machines use grant table hypercalls to setup a share page for
>> inter-VMs communications. These hypercalls are used by all PV
>> protocols today. However, very simple guests, such as baremetal
>> applications, might not have the infrastructure to handle the grant table.
>> This project is about setting up several shared memory areas for inter-VMs
>> communications directly from the VM config file.
>> So that the guest kernel doesn't have to have grant table support (in the
>> embedded space, this is not unusual) to be able to communicate with
>> other guests.
>>
>> 
>> 2. Implementation Plan:
>> 
>>
>> ==
>> 2.1 Introduce a new VM config option in xl:
>> ==
>> The shared areas should be shareable among several (>=2) VMs, so
>> every shared physical memory area is assigned to a set of VMs.
>> Therefore, a “token” or “identifier” should be used here to uniquely
>> identify a backing memory area.
>>
>> The backing area would be taken from one domain, which we will regard
>> as the "master domain", and this domain should be created prior to any
>> other "slave domain"s. Again, we have to use some kind of tag to tell who
>> is the "master domain".
>>
>> And the ability to specify the attributes of the pages (say, WO/RO/X)
>> to be shared should be also given to the user. For the master domain,
>> these attributes often describes the maximum permission allowed for the
>> shared pages, and for the slave domains, these attributes are often used
>> to describe with what permissions this area will be mapped.
>> This information should also be specified in the xl config entry.
>>
>> To handle all these, I would suggest using an unsigned integer to serve as
>> the
>> identifier, and using a "master" tag in the master domain's xl config
>> entry
>> to announce that she will provide the backing memory pages. A separate
>> entry would be used to describe the attributes of the shared memory area,
>> of
>> the form "prot=RW".
>> For example:
>>
>> In xl config file of vm1:
>>
>> static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>>   granularity = 4k, prot = RO, master”,
>>  "id = ID2, begin = gmfn3, end = gmfn4,
>>  granularity = 4k, prot = RW, master”]
>
>
> Replying here regarding the discussion we had during the summit. AArch64 is
> supporting multiple page granularities (4KB, 16KB, 64KB).
>
> Each guest and the Hypervisor are free to use different page granularity. To
> go further, if I am not mistaken, an OS is free to use different page
> granularity on each processor.
>
> In reality, I have only seen OS using the same granularity across all the
> processors.
>
> At the moment, Xen is only supporting 4KB page granularity. Although, there
> are plan to also support 64KB because this is the only way to support above
> 48-bit physical address.
>
> With that in mind, this interface is a bit confusing. What does the
> "granularity" refers to? Hypervisor? Guest A? Guest B?
>
> Similarly, gmfn* are frames. But what is its granularity?
>
> I think it would make sense to start using the full address on the toolstack
> side, avoiding confusion for the user what is the page granularity to be
> used here.
>
> Cheers,
>
> --
> Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-07-18 Thread Julien Grall

Hi,

On 20/06/17 18:18, Zhongze Liu wrote:


1. Motivation and Description

Virtual machines use grant table hypercalls to setup a share page for
inter-VMs communications. These hypercalls are used by all PV
protocols today. However, very simple guests, such as baremetal
applications, might not have the infrastructure to handle the grant table.
This project is about setting up several shared memory areas for inter-VMs
communications directly from the VM config file.
So that the guest kernel doesn't have to have grant table support (in the
embedded space, this is not unusual) to be able to communicate with
other guests.


2. Implementation Plan:


==
2.1 Introduce a new VM config option in xl:
==
The shared areas should be shareable among several (>=2) VMs, so
every shared physical memory area is assigned to a set of VMs.
Therefore, a “token” or “identifier” should be used here to uniquely
identify a backing memory area.

The backing area would be taken from one domain, which we will regard
as the "master domain", and this domain should be created prior to any
other "slave domain"s. Again, we have to use some kind of tag to tell who
is the "master domain".

And the ability to specify the attributes of the pages (say, WO/RO/X)
to be shared should be also given to the user. For the master domain,
these attributes often describes the maximum permission allowed for the
shared pages, and for the slave domains, these attributes are often used
to describe with what permissions this area will be mapped.
This information should also be specified in the xl config entry.

To handle all these, I would suggest using an unsigned integer to serve as the
identifier, and using a "master" tag in the master domain's xl config entry
to announce that she will provide the backing memory pages. A separate
entry would be used to describe the attributes of the shared memory area, of
the form "prot=RW".
For example:

In xl config file of vm1:

static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
  granularity = 4k, prot = RO, master”,
 "id = ID2, begin = gmfn3, end = gmfn4,
 granularity = 4k, prot = RW, master”]


Replying here regarding the discussion we had during the summit. AArch64 
supports multiple page granularities (4KB, 16KB, 64KB).


Each guest and the hypervisor are free to use a different page 
granularity. To go further, if I am not mistaken, an OS is free to use 
a different page granularity on each processor.


In reality, I have only seen OSes using the same granularity across all 
the processors.


At the moment, Xen only supports 4KB page granularity, although there 
are plans to also support 64KB because this is the only way to support 
physical addresses above 48 bits.


With that in mind, this interface is a bit confusing. What does the 
"granularity" refer to? The hypervisor? Guest A? Guest B?


Similarly, the gmfn* values are frame numbers. But what is their granularity?

I think it would make sense to start using full addresses on the 
toolstack side, avoiding any confusion for the user about which page 
granularity is to be used here.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-28 Thread Wei Liu
Sorry for the late reply.

I can see the thread already contains answers to some of my questions so
I will just reply to the bits that are still relevant.

On Fri, Jun 23, 2017 at 01:27:24AM +0800, Zhongze Liu wrote:
> Hi Wei,
> 
> Thank you for your valuable comments.
> 
> 2017-06-21 23:09 GMT+08:00 Wei Liu :
[...]
> >> To handle all these, I would suggest using an unsigned integer to serve as 
> >> the
> >> identifier, and using a "master" tag in the master domain's xl config entry
> >> to announce that she will provide the backing memory pages. A separate
> >> entry would be used to describe the attributes of the shared memory area, 
> >> of
> >> the form "prot=RW".
> >
> > I think using an integer is too limiting. You would need the user to
> > know if a particular number is already used. Maybe using a number is
> > good enough for the use case you have in mind, but it is not future
> > proof. I don't know how sophisticated we want this to be, though.
> >
> 
> Sounds reasonable. I chose integers because I think integers are fast
> and easy to
> manipulate. But integers are somewhat hard to memorize and this isn't
> a good thing
> from a user's point of view. So maybe I'll make it a string with a
> maximum size of 32
> or longer.
> 

Sounds reasonable.

[...]
> >>  granularity = 4k, prot = RW, master”]
> >>
> >> In xl config file of vm2:
> >>
> >> static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
> >>   granularity = 4k, prot = RO”]
> >>
> >> In xl config file of vm3:
> >>
> >> static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
> >>   granularity = 4k, prot = RW”]
> >>
> >> gmfn's above are all hex of the form "0x2".
> >>
> >> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> >> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> >> The parameter "prot=RO" means that this memory area are offered with 
> >> read-only
> >> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> >> gmfn5~gmfn6.
> >> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
> >> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
> >> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> >>
> >> The "granularity" is optional in the slaves' config entries. But if it's
> >> presented in the slaves' config entry, it has to be the same with its 
> >> master's.
> >> Besides, the size of the gmfn range must also match. And overlapping 
> >> backing
> >> memory areas are well defined.
> >>
> >
> > What do you mean by "well defined"?
> 
> Em...I think I should have put it in a more clear way. In fact, I mean
> that overlapping
> areas are allowed, and when two areas overlap with each other, any
> operations done
> on the overlapping area will be seen on both sides. Besides this, they
> just act like two
> independent areas. And the job of serializing the access to the
> overlapping area is
> left to the user.
> 

OK. "Well defined" means "clearly defined or described", but I didn't see
any definition or description of it. Just using "allowed" should be OK.

> >
> > Why is inserting a sub-range not allowed?
> >
> 
> This is also a feature under consideration.Maybe the use cases that I have
> in mind is not that complicated, so I chose to keep it simple. But
> after giving it
> a second thought, I found this will not add too much complexity to the code 
> and
> will be useful in some cases. So I think I'll allow this in my next
> version of the proposal.
> 

That's what I thought as well. Essentially it is not any harder than
implementing the overlapping case.
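
For what it's worth, the toolstack-side check for a slave mapping only a
sub-range really is trivial; a sketch of the containment test (hypothetical
helper, comparing sizes rather than addresses since master and slave ranges
live in different guest address spaces):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical sketch: a slave range fits the master's backing area
     * if its size does not exceed the master's.  Both ranges use the
     * same unit (pages or addresses). */
    static bool slave_range_fits(uint64_t master_begin, uint64_t master_end,
                                 uint64_t slave_begin, uint64_t slave_end)
    {
        return slave_end >= slave_begin &&
               (slave_end - slave_begin) <= (master_end - master_begin);
    }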

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Jarvis Roach


> -Original Message-
> From: Stefano Stabellini [mailto:sstabell...@kernel.org]
> Sent: Friday, June 23, 2017 4:09 PM
> To: Jarvis Roach 
> Cc: Stefano Stabellini ; Julien Grall
> ; Zhongze Liu ; xen-
> de...@lists.xenproject.org; Wei Liu ; Ian Jackson
> ; edg...@xilinx.com; Edgar E. Iglesias
> 
> Subject: RE: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > > -Original Message-
> > > From: Stefano Stabellini [mailto:sstabell...@kernel.org]
> > > Sent: Friday, June 23, 2017 2:21 PM
> > > To: Julien Grall 
> > > Cc: Stefano Stabellini ; Zhongze Liu
> > > ; xen-de...@lists.xenproject.org; Wei Liu
> > > ; Ian Jackson ;
> > > Jarvis Roach ; edg...@xilinx.com;
> > > Edgar E. Iglesias 
> > > Subject: Re: [RFC v2]Proposal to allow setting up shared memory
> > > areas between VMs from xl config file
> > >
> > > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > > Hi,
> > > >
> > > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > > When we encounter an id IDx during "xl create":
> > > > > >
> > > > > >   + If it’s not under /local/shared_mem:
> > > > > > + If the corresponding entry has a "master" tag, create the
> > > > > >   corresponding entries for IDx in xenstore
> > > > > > + If there isn't a "master" tag, say error.
> > > > > >
> > > > > >   + If it’s found under /local/shared_mem:
> > > > > > + If the corresponding entry has a "master" tag, say error
> > > > > > + If there isn't a "master" tag, map the pages to the newly
> > > > > >   created domain, and add the current domain and necessary
> > > information
> > > > > >   under /local/shared_mem/IDx/slaves.
> > > > >
> > > > > Aside from using "gfn" instead of gmfn everywhere, I think it
> > > > > looks pretty good.
> > > > >
> > > > > I would leave out permissions and cacheability attributes from
> > > > > this version of the work. I would just add a note saying that
> > > > > memory will be mapped as RW regular cacheable RAM. Other
> > > > > permissions and cacheability will be possible, but they are not
> implemented yet.
> > > >
> > > > Well, I think we should design the interface correctly from the
> > > > beginning to facilitate future extension.
> > >
> > > Which interface are you speaking about?
> > >
> > > I don't think we should attemp to write how the hypercall interface
> > > might look like in the future to support setting permissions and
> > > cacheability attributes.
> > >
> > >
> > > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > > Are they write-through, write-back...? But, on ARM, this would
> > > > only be the caching attribute in stage-2 page table. The final
> > > > caching, memory type, shareability would be a combination of stage-2
> and stage-1 attributes.
> > >
> > > The very same that is used today for the ram of virtual machines, do
> > > we need to say any more than that? (For ARM, p2m_ram_rw and
> > > MATTR_MEM, LPAE_SH_INNER. For stage1, we should refer to
> > > xen/include/public/arch-arm.h.)
> >
> > I have customers who need some buffers LPAE_SH_OUTER and others who need
> > NORMAL non-cacheable or inner-cacheable buffers, so my suggestion is to
> > provide a way to support the full combination of configurations.
> >
> > While the stage 1/stage 2 combination allows guests (via the stage 1
> > translation regime) to force the two combinations I specifically mentioned,
> > in the first case the customers want LPAE_SH_OUTER for cache coherency with
> > a DMA-capable I/O device. In that case, Xen needs to set the shareability
> > attribute to OUTER in the stage 2 table since that's what is used for the
> > SMMU. In the second case, NORMAL non-cacheable or inner-cacheable, the
> > customers are in a position where they can't trust the guests to disable
> > their cache or set it to inner-cacheable, so it would be good to have a way
> > for Xen or a privileged/trusted domain to do so.
> 
> Let me premise that I would be happy to see the whole set of configurations
> implemented in the long run, we might just not get there on day1. We could
> spec out how the VM config option should look like, but leave the
> cacheability and shareability parameteres unimplemented for now (also to
> address Julien't comment on defining future proof interfaces).
> 
> I understand the need for cache-coherent buffers for dma to/from devices,
> but I think that problem should be solved with the iomem config option. This
> project was meant to setup shared memory regions for VM-to-VM
> communications. It doesn't look like that is the kind of requirement that 

Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Julien Grall

Hi Stefano,

On 06/23/2017 07:21 PM, Stefano Stabellini wrote:

On Fri, 23 Jun 2017, Julien Grall wrote:

Hi,

On 22/06/17 22:05, Stefano Stabellini wrote:

When we encounter an id IDx during "xl create":

   + If it’s not under /local/shared_mem:
 + If the corresponding entry has a "master" tag, create the
   corresponding entries for IDx in xenstore
 + If there isn't a "master" tag, say error.

   + If it’s found under /local/shared_mem:
 + If the corresponding entry has a "master" tag, say error
 + If there isn't a "master" tag, map the pages to the newly
   created domain, and add the current domain and necessary information
   under /local/shared_mem/IDx/slaves.


Aside from using "gfn" instead of gmfn everywhere, I think it looks
pretty good.

I would leave out permissions and cacheability attributes from this
version of the work. I would just add a note saying that memory will be
mapped as RW regular cacheable RAM. Other permissions and cacheability
will be possible, but they are not implemented yet.


Well, I think we should design the interface correctly from the beginning to
facilitate future extension.


Which interface are you speaking about?


The interface with the user, i.e. libxl and xl. The hypercall can be 
added later if necessary, as this could be a DOMCTL and therefore not 
part of a stable ABI.




I don't think we should attemp to write how the hypercall interface
might look like in the future to support setting permissions and
cacheability attributes.



Also, you need to clarify what you mean by "regular cacheable RAM". Are they
write-through, write-back...? But, on ARM, this would only be the caching
attribute in stage-2 page table. The final caching, memory type, shareability
would be a combination of stage-2 and stage-1 attributes.


The very same that is used today for the ram of virtual machines, do we
need to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
LPAE_SH_INNER. For stage1, we should refer to
xen/include/public/arch-arm.h.)


 * All memory which is shared with other entities in the system
 * (including the hypervisor and other guests) must reside in memory
 * which is mapped as Normal Inner-cacheable. This applies to:
 *  - hypercall arguments passed via a pointer to guest memory.
 *  - memory shared via the grant table mechanism (including PV I/O
 *rings etc).
 *  - memory shared with the hypervisor (struct shared_info, struct
 *
 * Any Inner cache allocation strategy (Write-Back, Write-Through etc)
 * is acceptable. There is no restriction on the Outer-cacheability.

This does not include memory shared between guests via methods other than 
the grant table. So the documentation should at least be updated.


But AFAICT, this does not say anything about the shareability of the 
region. It only speaks about Outer and Inner cacheability.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Stefano Stabellini
On Fri, 23 Jun 2017, Jarvis Roach wrote:
> > -Original Message-
> > From: Stefano Stabellini [mailto:sstabell...@kernel.org]
> > Sent: Friday, June 23, 2017 2:21 PM
> > To: Julien Grall 
> > Cc: Stefano Stabellini ; Zhongze Liu
> > ; xen-de...@lists.xenproject.org; Wei Liu
> > ; Ian Jackson ; Jarvis Roach
> > ; edg...@xilinx.com; Edgar E. Iglesias
> > 
> > Subject: Re: [RFC v2]Proposal to allow setting up shared memory areas
> > between VMs from xl config file
> > 
> > On Fri, 23 Jun 2017, Julien Grall wrote:
> > > Hi,
> > >
> > > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > > When we encounter an id IDx during "xl create":
> > > > >
> > > > >   + If it’s not under /local/shared_mem:
> > > > > + If the corresponding entry has a "master" tag, create the
> > > > >   corresponding entries for IDx in xenstore
> > > > > + If there isn't a "master" tag, say error.
> > > > >
> > > > >   + If it’s found under /local/shared_mem:
> > > > > + If the corresponding entry has a "master" tag, say error
> > > > > + If there isn't a "master" tag, map the pages to the newly
> > > > >   created domain, and add the current domain and necessary
> > information
> > > > >   under /local/shared_mem/IDx/slaves.
> > > >
> > > > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > > > pretty good.
> > > >
> > > > I would leave out permissions and cacheability attributes from this
> > > > version of the work. I would just add a note saying that memory will
> > > > be mapped as RW regular cacheable RAM. Other permissions and
> > > > cacheability will be possible, but they are not implemented yet.
> > >
> > > Well, I think we should design the interface correctly from the
> > > beginning to facilitate future extension.
> > 
> > Which interface are you speaking about?
> > 
> > I don't think we should attemp to write how the hypercall interface might
> > look like in the future to support setting permissions and cacheability
> > attributes.
> > 
> > 
> > > Also, you need to clarify what you mean by "regular cacheable RAM".
> > > Are they write-through, write-back...? But, on ARM, this would only be
> > > the caching attribute in stage-2 page table. The final caching, memory
> > > type, shareability would be a combination of stage-2 and stage-1 
> > > attributes.
> > 
> > The very same that is used today for the ram of virtual machines, do we need
> > to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
> > LPAE_SH_INNER. For stage1, we should refer to
> > xen/include/public/arch-arm.h.)
> 
> I have customers who need some buffers LPAE_SH_OUTER and others who need 
> NORMAL non-cacheable or inner-cacheable buffers, so my suggestion is to 
> provide a way to support the full combination of configurations. 
> 
> While the stage 1/stage 2 combination results allow guests (via the stage 1 
> translation regime) to force the two combinations I specifically mentioned,  
> in the first case the customers want LPAE_SH_OUTER for cache coherency with a 
> DMA-capable I/O device. In that case, Xen needs to set the shareability 
> attribute to OUTER in the stage 2 table since that's what is used for the 
> SMMU. In the second case,  NORMAL non-cacheable or inner-cacheable, the 
> customers are in a position where they can't trust the guests to disable 
> their cache or set it for inner-cacheable, so it would be good for a way to 
> Xen or privileged/trusted domain to do so.

Let me premise that I would be happy to see the whole set of
configurations implemented in the long run; we might just not get there
on day 1. We could spec out what the VM config option should look like,
but leave the cacheability and shareability parameters unimplemented
for now (also to address Julien's comment on defining future-proof
interfaces).

I understand the need for cache-coherent buffers for DMA to/from
devices, but I think that problem should be solved with the iomem config
option. This project was meant to set up shared memory regions for
VM-to-VM communication. It doesn't look like that is the kind of
requirement this framework is meant to meet, unless I am missing
something?

Normal non-cacheable buffers are more interesting: do you actually see
guests running on non-cacheable memory? If not, could you give an
example of a use case for two VMs sharing a non-cacheable page?
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Jarvis Roach


> -Original Message-
> From: Stefano Stabellini [mailto:sstabell...@kernel.org]
> Sent: Friday, June 23, 2017 2:21 PM
> To: Julien Grall 
> Cc: Stefano Stabellini ; Zhongze Liu
> ; xen-de...@lists.xenproject.org; Wei Liu
> ; Ian Jackson ; Jarvis Roach
> ; edg...@xilinx.com; Edgar E. Iglesias
> 
> Subject: Re: [RFC v2]Proposal to allow setting up shared memory areas
> between VMs from xl config file
> 
> On Fri, 23 Jun 2017, Julien Grall wrote:
> > Hi,
> >
> > On 22/06/17 22:05, Stefano Stabellini wrote:
> > > > When we encounter an id IDx during "xl create":
> > > >
> > > >   + If it’s not under /local/shared_mem:
> > > > + If the corresponding entry has a "master" tag, create the
> > > >   corresponding entries for IDx in xenstore
> > > > + If there isn't a "master" tag, say error.
> > > >
> > > >   + If it’s found under /local/shared_mem:
> > > > + If the corresponding entry has a "master" tag, say error
> > > > + If there isn't a "master" tag, map the pages to the newly
> > > >   created domain, and add the current domain and necessary
> information
> > > >   under /local/shared_mem/IDx/slaves.
> > >
> > > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > > pretty good.
> > >
> > > I would leave out permissions and cacheability attributes from this
> > > version of the work. I would just add a note saying that memory will
> > > be mapped as RW regular cacheable RAM. Other permissions and
> > > cacheability will be possible, but they are not implemented yet.
> >
> > Well, I think we should design the interface correctly from the
> > beginning to facilitate future extension.
> 
> Which interface are you speaking about?
> 
> I don't think we should attemp to write how the hypercall interface might
> look like in the future to support setting permissions and cacheability
> attributes.
> 
> 
> > Also, you need to clarify what you mean by "regular cacheable RAM".
> > Are they write-through, write-back...? But, on ARM, this would only be
> > the caching attribute in stage-2 page table. The final caching, memory
> > type, shareability would be a combination of stage-2 and stage-1 attributes.
> 
> The very same that is used today for the ram of virtual machines, do we need
> to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
> LPAE_SH_INNER. For stage1, we should refer to
> xen/include/public/arch-arm.h.)

I have customers who need some buffers LPAE_SH_OUTER and others who need
NORMAL non-cacheable or inner-cacheable buffers, so my suggestion is to
provide a way to support the full combination of configurations.

While the stage 1/stage 2 combination allows guests (via the stage 1
translation regime) to force the two combinations I specifically mentioned,
in the first case the customers want LPAE_SH_OUTER for cache coherency with
a DMA-capable I/O device. In that case, Xen needs to set the shareability
attribute to OUTER in the stage 2 table since that's what is used for the
SMMU. In the second case, NORMAL non-cacheable or inner-cacheable, the
customers are in a position where they can't trust the guests to disable
their cache or set it to inner-cacheable, so it would be good to have a way
for Xen or a privileged/trusted domain to do so.




___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Stefano Stabellini
On Fri, 23 Jun 2017, Julien Grall wrote:
> Hi,
> 
> On 22/06/17 22:05, Stefano Stabellini wrote:
> > > When we encounter an id IDx during "xl create":
> > > 
> > >   + If it’s not under /local/shared_mem:
> > > + If the corresponding entry has a "master" tag, create the
> > >   corresponding entries for IDx in xenstore
> > > + If there isn't a "master" tag, say error.
> > > 
> > >   + If it’s found under /local/shared_mem:
> > > + If the corresponding entry has a "master" tag, say error
> > > + If there isn't a "master" tag, map the pages to the newly
> > >   created domain, and add the current domain and necessary information
> > >   under /local/shared_mem/IDx/slaves.
> > 
> > Aside from using "gfn" instead of gmfn everywhere, I think it looks
> > pretty good.
> > 
> > I would leave out permissions and cacheability attributes from this
> > version of the work. I would just add a note saying that memory will be
> > mapped as RW regular cacheable RAM. Other permissions and cacheability
> > will be possible, but they are not implemented yet.
> 
> Well, I think we should design the interface correctly from the beginning to
> facilitate future extension.

Which interface are you speaking about?

I don't think we should attempt to write down what the hypercall interface
might look like in the future to support setting permissions and
cacheability attributes.


> Also, you need to clarify what you mean by "regular cacheable RAM". Are they
> write-through, write-back...? But, on ARM, this would only be the caching
> attribute in stage-2 page table. The final caching, memory type, shareability
> would be a combination of stage-2 and stage-1 attributes.

The very same that is used today for the RAM of virtual machines; do we
need to say any more than that? (For ARM, p2m_ram_rw and MATTR_MEM,
LPAE_SH_INNER. For stage 1, we should refer to
xen/include/public/arch-arm.h.)
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-23 Thread Julien Grall

Hi,

On 22/06/17 22:05, Stefano Stabellini wrote:

When we encounter an id IDx during "xl create":

  + If it’s not under /local/shared_mem:
+ If the corresponding entry has a "master" tag, create the
  corresponding entries for IDx in xenstore
+ If there isn't a "master" tag, say error.

  + If it’s found under /local/shared_mem:
+ If the corresponding entry has a "master" tag, say error
+ If there isn't a "master" tag, map the pages to the newly
  created domain, and add the current domain and necessary information
  under /local/shared_mem/IDx/slaves.
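
Purely as an illustration of the lookup described in the steps above, the
toolstack-side check might look roughly like this (a sketch only, using the
plain xenstore client library; the helper name and its arguments are
hypothetical):

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    /* Hypothetical sketch of the per-ID check done at "xl create" time:
     * returns 0 if domain creation may proceed, -1 on the error cases
     * described above. */
    static int check_shared_mem_id(struct xs_handle *xs, const char *id,
                                   bool entry_has_master_tag)
    {
        char path[64];
        unsigned int len;
        void *val;

        snprintf(path, sizeof(path), "/local/shared_mem/%s", id);
        val = xs_read(xs, XBT_NULL, path, &len);

        if (val == NULL)
            /* IDx not in xenstore yet: only a "master" entry may create it. */
            return entry_has_master_tag ? 0 : -1;

        free(val);
        /* IDx already exists: only slaves may join; a second master is an
         * error.  (The actual mapping and the update of .../IDx/slaves
         * would happen after this check.) */
        return entry_has_master_tag ? -1 : 0;
    }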


Aside from using "gfn" instead of gmfn everywhere, I think it looks
pretty good.

I would leave out permissions and cacheability attributes from this
version of the work. I would just add a note saying that memory will be
mapped as RW regular cacheable RAM. Other permissions and cacheability
will be possible, but they are not implemented yet.


Well, I think we should design the interface correctly from the 
beginning to facilitate future extension.


Also, you need to clarify what you mean by "regular cacheable RAM". Are 
they write-through, write-back...? But, on ARM, this would only be the 
caching attribute in the stage-2 page table. The final caching, memory 
type, and shareability would be a combination of stage-2 and stage-1 
attributes.


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-22 Thread Stefano Stabellini
On Wed, 21 Jun 2017, Zhongze Liu wrote:
> 
> 1. Motivation and Description
> 
> Virtual machines use grant table hypercalls to setup a share page for
> inter-VMs communications. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VMs
> communications directly from the VM config file.
> So that the guest kernel doesn't have to have grant table support (in the
> embedded space, this is not unusual) to be able to communicate with
> other guests.
> 
> 
> 2. Implementation Plan:
> 
> 
> ==
> 2.1 Introduce a new VM config option in xl:
> ==
> The shared areas should be shareable among several (>=2) VMs, so
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a “token” or “identifier” should be used here to uniquely
> identify a backing memory area.
> 
> The backing area would be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> other "slave domain"s. Again, we have to use some kind of tag to tell who
> is the "master domain".
> 
> And the ability to specify the attributes of the pages (say, WO/RO/X)
> to be shared should be also given to the user. For the master domain,
> these attributes often describes the maximum permission allowed for the
> shared pages, and for the slave domains, these attributes are often used
> to describe with what permissions this area will be mapped.
> This information should also be specified in the xl config entry.
> 
> To handle all these, I would suggest using an unsigned integer to serve as the
> identifier, and using a "master" tag in the master domain's xl config entry
> to announce that she will provide the backing memory pages. A separate
> entry would be used to describe the attributes of the shared memory area, of
> the form "prot=RW".
> For example:
> 
> In xl config file of vm1:
> 
> static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>   granularity = 4k, prot = RO, master”,
>  "id = ID2, begin = gmfn3, end = gmfn4,
>  granularity = 4k, prot = RW, master”]
> 
> In xl config file of vm2:
> 
> static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>   granularity = 4k, prot = RO”]
> 
> In xl config file of vm3:
> 
> static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>   granularity = 4k, prot = RW”]
> 
> gmfn's above are all hex of the form "0x2".
> 
> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area are offered with read-only
> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> gmfn5~gmfn6.
> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> 
> The "granularity" is optional in the slaves' config entries. But if it's
> presented in the slaves' config entry, it has to be the same with its 
> master's.
> Besides, the size of the gmfn range must also match. And overlapping backing
> memory areas are well defined.
> 
> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
> should be created prior to both vm2 and vm3, for they both rely on the pages
> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
> that if one tries to share this page with vm1 with, say, "WR" permission,
> she will get an error, too.
> 
> ==
> 2.2 Store the mem-sharing information in xenstore
> ==
> For we don't have some persistent storage for xl to store the information
> of the shared memory areas, we have to find some way to keep it between xl
> launches. And xenstore is a good place to do this. The information for one
> shared area should include the ID, master domid and gmfn ranges and
> memory attributes in master and slave domains of this area.
> A current plan is to place the information under /local/shared_mem/ID.
> Still take the above config files as an example:
> 
> If we instantiate vm1, vm2 and vm3, one after another,
> “xenstore ls -f” should output something like this:
> 
> After VM1 was instantiated, the output of “xenstore ls -f”
> will be something like this:
> 
> 

Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-22 Thread Stefano Stabellini
On Fri, 23 Jun 2017, Zhongze Liu wrote:
> Hi Julien,
> 
> 2017-06-21 1:29 GMT+08:00 Julien Grall :
> > Hi,
> >
> > Thank you for the new proposal.
> >
> > On 06/20/2017 06:18 PM, Zhongze Liu wrote:
> >>
> >> In the example above. A memory area ID1 will be shared between vm1 and
> >> vm2.
> >> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> >> The parameter "prot=RO" means that this memory area are offered with
> >> read-only
> >> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> >> gmfn5~gmfn6.
> >
> >
> > [...]
> >
> >>
> >> ==
> >> 2.3 mapping the memory areas
> >> ==
> >> Handle the newly added config option in tools/{xl, libxl} and utilize
> >> tools/libxc to do the actual memory mapping. Specifically, we will use
> >> a wrapper to XENMEM_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
> >> do the actual mapping. But since there isn't such a wrapper in libxc,
> >> we'll
> >> have to add a new wrapper, xc_domain_add_to_physmap_batch in
> >> libxc/xc_domain.c
> >
> >
> > In the paragrah above, you suggest the user can select the permission on the
> > shared page. However, the hypercall XENMEM_add_to_physmap does not currently
> > take permission. So how do you plan to handle that?
> >
> 
> I think this could be done via XENMEM_access_op?

I discussed this topic with Zhongze. I suggested leaving permissions as
"TODO" for the moment, given that for the use case we have in mind they
aren't needed.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-22 Thread Zhongze Liu
Hi,

After talking to Stefano, I know that there seems to be no such hypercall
to restrict the W/R/X permissions on the shared backing pages
(XENMEM_access_op is for another purpose, sorry for getting its usage
wrong). And it seems that the ability to specify these permissions is not
strictly necessary. Since the goal of this project is to set up VM-to-VM
communication, in most cases users would just expect the shared memory to
be mapped read-write with the cacheability attributes of normal memory. So
the temporary conclusion is to restrict the design to sharing read-write
pages with normal caching attributes, with the rest left on the to-be-done
list.


Cheers,

Zhongze Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-22 Thread Zhongze Liu
Hi Wei,

Thank you for your valuable comments.

2017-06-21 23:09 GMT+08:00 Wei Liu :
> On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:
>> 
>> 1. Motivation and Description
>> 
>> Virtual machines use grant table hypercalls to setup a share page for
>> inter-VMs communications. These hypercalls are used by all PV
>> protocols today. However, very simple guests, such as baremetal
>> applications, might not have the infrastructure to handle the grant table.
>> This project is about setting up several shared memory areas for inter-VMs
>> communications directly from the VM config file.
>> So that the guest kernel doesn't have to have grant table support (in the
>> embedded space, this is not unusual) to be able to communicate with
>> other guests.
>>
>> 
>> 2. Implementation Plan:
>> 
>>
>> ==
>> 2.1 Introduce a new VM config option in xl:
>> ==
>> The shared areas should be shareable among several (>=2) VMs, so
>> every shared physical memory area is assigned to a set of VMs.
>> Therefore, a “token” or “identifier” should be used here to uniquely
>> identify a backing memory area.
>>
>> The backing area would be taken from one domain, which we will regard
>> as the "master domain", and this domain should be created prior to any
>> other "slave domain"s. Again, we have to use some kind of tag to tell who
>> is the "master domain".
>>
>> And the ability to specify the attributes of the pages (say, WO/RO/X)
>> to be shared should be also given to the user. For the master domain,
>> these attributes often describes the maximum permission allowed for the
>> shared pages, and for the slave domains, these attributes are often used
>> to describe with what permissions this area will be mapped.
>> This information should also be specified in the xl config entry.
>>
>
> I don't quite get the attribute settings. If you only insert a backing
> page into guest physical address space with XENMEM hypercall, how do you
> audit the attributes when the guest tries to map the page?
>

I'm still thinking about this, and any suggestions are welcome. The
current plan I have in mind is XENMEM_access_op.

>> To handle all these, I would suggest using an unsigned integer to serve as 
>> the
>> identifier, and using a "master" tag in the master domain's xl config entry
>> to announce that she will provide the backing memory pages. A separate
>> entry would be used to describe the attributes of the shared memory area, of
>> the form "prot=RW".
>
> I think using an integer is too limiting. You would need the user to
> know if a particular number is already used. Maybe using a number is
> good enough for the use case you have in mind, but it is not future
> proof. I don't know how sophisticated we want this to be, though.
>

Sounds reasonable. I chose integers because I think integers are fast and
easy to manipulate. But integers are somewhat hard to memorize and this
isn't a good thing from a user's point of view. So maybe I'll make it a
string with a maximum size of 32 or longer.

>> For example:
>>
>> In xl config file of vm1:
>>
>> static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>>   granularity = 4k, prot = RO, master”,
>>  "id = ID2, begin = gmfn3, end = gmfn4,
>
> I think you mean "gpfn" here and below.
>

Yes, according to https://wiki.xenproject.org/wiki/XenTerminology, section
"Address Spaces", gmfn == gpfn for auto-translated guests. But this usage
seems to be outdated and should be phased out according to include/xen/mm.h.
And just as Julien has pointed out, the term "gfn" should be used here.

>>  granularity = 4k, prot = RW, master”]
>>
>> In xl config file of vm2:
>>
>> static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>>   granularity = 4k, prot = RO”]
>>
>> In xl config file of vm3:
>>
>> static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>>   granularity = 4k, prot = RW”]
>>
>> gmfn's above are all hex of the form "0x2".
>>
>> In the example above. A memory area ID1 will be shared between vm1 and vm2.
>> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
>> The parameter "prot=RO" means that this memory area are offered with 
>> read-only
>> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
>> gmfn5~gmfn6.
>> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
>> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
>> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
>>
>> The "granularity" is optional in the slaves' config entries. But if it's
>> presented in the 

Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-22 Thread Zhongze Liu
Hi Julien,

2017-06-21 1:29 GMT+08:00 Julien Grall :
> Hi,
>
> Thank you for the new proposal.
>
> On 06/20/2017 06:18 PM, Zhongze Liu wrote:
>>
>> In the example above. A memory area ID1 will be shared between vm1 and
>> vm2.
>> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
>> The parameter "prot=RO" means that this memory area are offered with
>> read-only
>> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
>> gmfn5~gmfn6.
>
>
> [...]
>
>>
>> ==
>> 2.3 mapping the memory areas
>> ==
>> Handle the newly added config option in tools/{xl, libxl} and utilize
>> tools/libxc to do the actual memory mapping. Specifically, we will use
>> a wrapper to XENMEM_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
>> do the actual mapping. But since there isn't such a wrapper in libxc,
>> we'll
>> have to add a new wrapper, xc_domain_add_to_physmap_batch in
>> libxc/xc_domain.c
>
>
> In the paragrah above, you suggest the user can select the permission on the
> shared page. However, the hypercall XENMEM_add_to_physmap does not currently
> take permission. So how do you plan to handle that?
>

I think this could be done via XENMEM_access_op?

Cheers,

Zhongze Liu

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-21 Thread Julien Grall



On 21/06/17 16:09, Wei Liu wrote:

On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:

For example:

In xl config file of vm1:

static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
  granularity = 4k, prot = RO, master”,
 "id = ID2, begin = gmfn3, end = gmfn4,


I think you mean "gpfn" here and below.


It would be better to use gfn in that case to follow the convention of 
the hypervisor (see xen/include/memory.h).


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-21 Thread Wei Liu
On Wed, Jun 21, 2017 at 01:18:38AM +0800, Zhongze Liu wrote:
> 
> 1. Motivation and Description
> 
> Virtual machines use grant table hypercalls to setup a share page for
> inter-VMs communications. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VMs
> communications directly from the VM config file.
> So that the guest kernel doesn't have to have grant table support (in the
> embedded space, this is not unusual) to be able to communicate with
> other guests.
> 
> 
> 2. Implementation Plan:
> 
> 
> ==
> 2.1 Introduce a new VM config option in xl:
> ==
> The shared areas should be shareable among several (>=2) VMs, so
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a “token” or “identifier” should be used here to uniquely
> identify a backing memory area.
> 
> The backing area would be taken from one domain, which we will regard
> as the "master domain", and this domain should be created prior to any
> other "slave domain"s. Again, we have to use some kind of tag to tell who
> is the "master domain".
> 
> And the ability to specify the attributes of the pages (say, WO/RO/X)
> to be shared should be also given to the user. For the master domain,
> these attributes often describes the maximum permission allowed for the
> shared pages, and for the slave domains, these attributes are often used
> to describe with what permissions this area will be mapped.
> This information should also be specified in the xl config entry.
> 

I don't quite get the attribute settings. If you only insert a backing
page into the guest physical address space with a XENMEM hypercall, how do
you audit the attributes when the guest tries to map the page?

> To handle all these, I would suggest using an unsigned integer to serve as the
> identifier, and using a "master" tag in the master domain's xl config entry
> to announce that she will provide the backing memory pages. A separate
> entry would be used to describe the attributes of the shared memory area, of
> the form "prot=RW".

I think using an integer is too limiting. You would need the user to
know if a particular number is already used. Maybe using a number is
good enough for the use case you have in mind, but it is not future
proof. I don't know how sophisticated we want this to be, though.

> For example:
> 
> In xl config file of vm1:
> 
> static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
>   granularity = 4k, prot = RO, master”,
>  "id = ID2, begin = gmfn3, end = gmfn4,

I think you mean "gpfn" here and below.

>  granularity = 4k, prot = RW, master”]
> 
> In xl config file of vm2:
> 
> static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
>   granularity = 4k, prot = RO”]
> 
> In xl config file of vm3:
> 
> static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
>   granularity = 4k, prot = RW”]
> 
> gmfn's above are all hex of the form "0x2".
> 
> In the example above. A memory area ID1 will be shared between vm1 and vm2.
> This area will be taken from vm1 and mapped into vm2's stage-2 page table.
> The parameter "prot=RO" means that this memory area are offered with read-only
> permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
> gmfn5~gmfn6.
> Likewise, a memory area ID will be shared between vm1 and vm3 with read and
> write permissions. vm1 is the master and vm2 the slave. vm1 can access the
> area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.
> 
> The "granularity" is optional in the slaves' config entries. But if it's
> presented in the slaves' config entry, it has to be the same with its 
> master's.
> Besides, the size of the gmfn range must also match. And overlapping backing
> memory areas are well defined.
> 

What do you mean by "well defined"?

Why is inserting a sub-range not allowed?

> Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
> should be created prior to both vm2 and vm3, for they both rely on the pages
> backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
> an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
> that if one tries to share this page with vm1 with, say, "WR" permission,
> she will get an error, too.
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-20 Thread Julien Grall

Hi,

Thank you for the new proposal.

On 06/20/2017 06:18 PM, Zhongze Liu wrote:

In the example above. A memory area ID1 will be shared between vm1 and vm2.
This area will be taken from vm1 and mapped into vm2's stage-2 page table.
The parameter "prot=RO" means that this memory area are offered with read-only
permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
gmfn5~gmfn6.


[...]



==
2.3 mapping the memory areas
==
Handle the newly added config option in tools/{xl, libxl} and utilize
tools/libxc to do the actual memory mapping. Specifically, we will use
a wrapper to XENMEM_add_to_physmap_batch with XENMAPSPACE_gmfn_foreign to
do the actual mapping. But since there isn't such a wrapper in libxc, we'll
have to add a new wrapper, xc_domain_add_to_physmap_batch in libxc/xc_domain.c


In the paragraph above, you suggest the user can select the permissions on 
the shared page. However, the hypercall XENMEM_add_to_physmap does not 
currently take permissions. So how do you plan to handle that?
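
For context, the wrapper proposed in 2.3 above might have an interface along
the following lines (a hypothetical sketch only; this wrapper does not exist
in libxc at this point, and, as noted, nothing in it carries mapping
permissions):

    #include <stdint.h>
    #include <xenctrl.h>

    /* Hypothetical prototype for the proposed wrapper around the batch
     * variant of XENMEM_add_to_physmap; the batch variant is needed
     * because XENMAPSPACE_gmfn_foreign takes a foreign domid.  Note the
     * absence of any permission argument. */
    int xc_domain_add_to_physmap_batch(xc_interface *xch,
                                       uint32_t domid,          /* mapping domain */
                                       uint32_t foreign_domid,  /* page owner */
                                       unsigned int space,      /* XENMAPSPACE_gmfn_foreign */
                                       unsigned int size,       /* number of pages */
                                       xen_pfn_t *idxs,         /* gfns in foreign_domid */
                                       xen_pfn_t *gpfns,        /* gfns in domid */
                                       int *errs);              /* per-page errors */

    /* Sketch of a caller mapping the master's pages into a slave domain. */
    static int map_shared_area(xc_interface *xch,
                               uint32_t slave_domid, uint32_t master_domid,
                               xen_pfn_t master_begin, xen_pfn_t slave_begin,
                               unsigned int nr_pages,
                               xen_pfn_t *idxs, xen_pfn_t *gpfns, int *errs)
    {
        unsigned int i;

        for (i = 0; i < nr_pages; i++) {
            idxs[i]  = master_begin + i;   /* gfn in the master domain */
            gpfns[i] = slave_begin + i;    /* gfn in the slave domain  */
        }
        return xc_domain_add_to_physmap_batch(xch, slave_domid, master_domid,
                                              XENMAPSPACE_gmfn_foreign,
                                              nr_pages, idxs, gpfns, errs);
    }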


Cheers,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-06-20 Thread Zhongze Liu

1. Motivation and Description

Virtual machines use grant table hypercalls to set up a shared page for
inter-VM communication. These hypercalls are used by all PV
protocols today. However, very simple guests, such as baremetal
applications, might not have the infrastructure to handle the grant table.
This project is about setting up several shared memory areas for inter-VM
communication directly from the VM config file, so that the guest kernel
doesn't have to have grant table support (in the embedded space, this is
not unusual) to be able to communicate with other guests.


2. Implementation Plan:


==
2.1 Introduce a new VM config option in xl:
==
The shared areas should be shareable among several (>=2) VMs, so
every shared physical memory area is assigned to a set of VMs.
Therefore, a “token” or “identifier” should be used here to uniquely
identify a backing memory area.

The backing area would be taken from one domain, which we will regard
as the "master domain", and this domain should be created prior to any
other "slave domain"s. Again, we have to use some kind of tag to tell who
is the "master domain".

And the ability to specify the attributes of the pages (say, WO/RO/X)
to be shared should also be given to the user. For the master domain,
these attributes often describe the maximum permission allowed for the
shared pages, and for the slave domains, these attributes are often used
to describe the permissions with which this area will be mapped.
This information should also be specified in the xl config entry.

To handle all these, I would suggest using an unsigned integer to serve as the
identifier, and using a "master" tag in the master domain's xl config entry
to announce that she will provide the backing memory pages. A separate
entry would be used to describe the attributes of the shared memory area, of
the form "prot=RW".
For example:

In xl config file of vm1:

static_shared_mem = ["id = ID1, begin = gmfn1, end = gmfn2,
  granularity = 4k, prot = RO, master”,
 "id = ID2, begin = gmfn3, end = gmfn4,
 granularity = 4k, prot = RW, master”]

In xl config file of vm2:

static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
  granularity = 4k, prot = RO”]

In xl config file of vm3:

static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
  granularity = 4k, prot = RW”]

gmfn's above are all hex of the form "0x2".

In the example above, a memory area ID1 will be shared between vm1 and vm2.
This area will be taken from vm1 and mapped into vm2's stage-2 page table.
The parameter "prot=RO" means that this memory area is offered with read-only
permission. vm1 can access this area using gmfn1~gmfn2, and vm2 using
gmfn5~gmfn6.
Likewise, a memory area ID2 will be shared between vm1 and vm3 with read and
write permissions. vm1 is the master and vm3 the slave. vm1 can access the
area using gmfn3~gmfn4 and vm3 using gmfn7~gmfn8.

The "granularity" is optional in the slaves' config entries. But if it's
present in a slave's config entry, it has to be the same as its master's.
Besides, the size of the gmfn range must also match. And overlapping backing
memory areas are well defined.

Note that the "master" tag in vm1 for both ID1 and ID2 indicates that vm1
should be created prior to both vm2 and vm3, for they both rely on the pages
backed by vm1. If one tries to create vm2 or vm3 prior to vm1, she will get
an error. And in vm1's config file, the "prot=RO" parameter of ID1 indicates
that if one tries to share this page with vm1 with, say, "WR" permission,
she will get an error, too.
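
In other words, the master's prot acts as an upper bound on what a slave may
request. As a purely hypothetical illustration (not part of the planned
code), the check boils down to something like:

    #include <stdbool.h>

    /* Hypothetical sketch: permissions encoded as simple R/W/X flags; a
     * slave's request is valid only if it asks for nothing beyond what
     * the master offered. */
    #define SHARED_MEM_PROT_R 0x1
    #define SHARED_MEM_PROT_W 0x2
    #define SHARED_MEM_PROT_X 0x4

    static bool slave_prot_allowed(unsigned int master_prot,
                                   unsigned int slave_prot)
    {
        return (slave_prot & ~master_prot) == 0;
    }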

==
2.2 Store the mem-sharing information in xenstore
==
Since we don't have persistent storage for xl to store the information
about the shared memory areas, we have to find some way to keep it between
xl launches, and xenstore is a good place to do this. The information for
one shared area should include the ID, the master domid, and the gmfn
ranges and memory attributes in the master and slave domains of this area.
A current plan is to place the information under /local/shared_mem/ID.
Still taking the above config files as an example:

If we instantiate vm1, vm2 and vm3, one after another,
“xenstore ls -f” should output something like this:

After VM1 was instantiated, the output of “xenstore ls -f”
will be something like this:

/local/shared_mem/ID1/master = domid_of_vm1
/local/shared_mem/ID1/gmfn_begin = gmfn1
/local/shared_mem/ID1/gmfn_end = gmfn2
/local/shared_mem/ID1/granularity = "4k"
/local/shared_mem/ID1/permissions = "RO"