Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-06-01 Thread Edgar E. Iglesias
On Tue, May 31, 2016 at 05:04:42PM +0300, Oleksandr Dmytryshyn wrote:
> On Fri, May 20, 2016 at 7:05 PM, Edgar E. Iglesias
>  wrote:
> > Hi,
> >
> > We have similar needs (not exactly the same) in some of our setups.
> > We need to map certain OCMs (On Chip Memories) to dom0. Among other things,
> > these are used to communicate with remote accelerators/CPUs that have
> > "hardcoded" addresses to these RAMs.
> >
> > Our approach is more along the lines of Julien's second suggestion. We're
> > trying to use the mmio-sram DTS bindings to bring in these memories into
> > dom0.
> >
> > IIUC the Ducati FW issue correctly, you need to allocate a chunk of DDR.
> >
> > Another possible solution:
> > I think you could reserve the memory area by simply not mentioning it
> > in the main memory node (these nodes support multiple ranges so you can
> > introduce gaps). Then you could for example create an mmio-sram node to
> > get the memory explicitly mapped 1:1 into dom0.
> >
> > Just a moment ago, I posted an RFC for the mmio-sram support to the list.
> Hi, Edgar.
> 
> How do you access the mapped OCMs in dom0?
> Are genalloc-related functions (gen_pool_get/_alloc/_virt_to_phys)
> the only way to work with mmio-sram memory?


Hi Oleksandr,

I'm not familiar enough with the Linux APIs to give a good answer on which
APIs are considered OK and which are not.

There are examples in the tree of other ways to use the SRAMs, though.
Look for example at this (search for smp-sram):

arch/arm/mach-rockchip/platsmp.c
Documentation/devicetree/bindings/sram/rockchip-smp-sram.txt

The allwinner,sun4i-a10-emac is another example.
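
For illustration only, a minimal sketch of the two usual access paths (this is
not code from the files above; it assumes a node compatible with "mmio-sram",
a consumer node with an "sram" phandle property, and a mainline kernel of that
era -- check the APIs against your tree before relying on them):

#include <linux/genalloc.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_address.h>

/* Path 1: map the whole SRAM directly, roughly what the rockchip
 * smp-sram code does with of_iomap(). */
static void __iomem *ocm_map_directly(void)
{
    struct device_node *np;
    void __iomem *base;

    np = of_find_compatible_node(NULL, NULL, "mmio-sram");
    if (!np)
        return NULL;
    base = of_iomap(np, 0);
    of_node_put(np);
    return base;
}

/* Path 2: carve allocations out of the gen_pool registered by the
 * mmio-sram driver (drivers/misc/sram.c), i.e. the genalloc route
 * from the question above. */
static void *ocm_alloc_from_pool(struct device_node *consumer, size_t size,
                                 phys_addr_t *phys)
{
    struct gen_pool *pool = of_gen_pool_get(consumer, "sram", 0);
    unsigned long va;

    if (!pool)
        return NULL;
    va = gen_pool_alloc(pool, size);
    if (!va)
        return NULL;
    *phys = gen_pool_virt_to_phys(pool, va);
    return (void *)va;
}

So genalloc is the convenient route when the pool driver manages the SRAM,
while a plain of_iomap() of the node is common when the layout is fixed by
the remote firmware.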

Best regards,
Edgar


> 
> > Cheers,
> > Edgar
> >
> >
> >>
> >> Regards,
> >>
> >> [1]
> >> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01879.html
> >> [2]
> >> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01894.html
> >>
> >> --
> >> Julien Grall
> >>



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-31 Thread Oleksandr Dmytryshyn
On Fri, May 20, 2016 at 7:05 PM, Edgar E. Iglesias
 wrote:
> Hi,
>
> We have similar needs (not exactly the same) in some of our setups.
> We need to map certain OCMs (On Chip Memories) to dom0. Among other things,
> these are used to communicate with remote accelerators/CPUs that have
> "hardcoded" addresses to these RAMs.
>
> Our approach is more along the lines of Julien's second suggestion. We're
> trying to use the mmio-sram DTS bindings to bring in these memories into
> dom0.
>
> IIUC the Ducati FW issue correctly, you need to allocate a chunk of DDR.
>
> Another possible solution:
> I think you could reserve the memory area by simply not mentioning it
> in the main memory node (these nodes support multiple ranges so you can
> introduce gaps). Then you could for example create an mmio-sram node to
> get the memory explicitly mapped 1:1 into dom0.
>
> Just a moment ago, I posted an RFC for the mmio-sram support to the list.
Hi, Edgar.

How do you access the mapped OCMs in dom0?
Are genalloc-related functions (gen_pool_get/_alloc/_virt_to_phys)
the only way to work with mmio-sram memory?

> Cheers,
> Edgar
>
>
>>
>> Regards,
>>
>> [1]
>> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01879.html
>> [2]
>> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01894.html
>>
>> --
>> Julien Grall
>>



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-30 Thread Stefano Stabellini
On Fri, 20 May 2016, Edgar E. Iglesias wrote:
> On Fri, May 20, 2016 at 04:04:43PM +0100, Julien Grall wrote:
> > Hello Oleksandr,
> > 
> > On 20/05/16 15:19, Oleksandr Dmytryshyn wrote:
> > >On Fri, May 20, 2016 at 12:59 PM, Jan Beulich  wrote:
> > >On 20.05.16 at 10:45,  wrote:
> > >>>On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:
> > >>>On 19.05.16 at 15:58,  wrote:
> > >Case 1: Dom0 is driver domain:
> > >There is a Ducati firmware which runs on dedicated M4 core and decodes
> > >video. This firmware uses hardcoded physical addresses for graphics
> > >buffers. Those addresses should be inside address-space of the driver
> > >domain (Dom0). Ducati firmware is proprietary and we have no ability
> > >to rework it. So Dom0 kernel should be placed to the configured
> > >address (to the DOM0 RAM bank with specific address).
> > >
> > >Case 2: Dom0 is Thin and DomD is driver domain.
> > >All is the same: Ducati firmware requires special (hardcoded) 
> > >addresses.
> > 
> > For both of these cases I would then wonder whether such
> > environments are actually suitable for doing virtualization on.
> > >>>Currently we use Jacinto 6 evaluation board with DRA74X processor.
> > >>>We have both configurations (Thin Dom0 and Thick Dom0).
> > >>
> > >>Which says nothing about their suitability for virtualization.
> > >Our solution is based on Jacinto 6 evaluation board with DRA74X
> > >processor. We need video-playback. Ducati firmware decodes video and
> > >it works only with hardcoded addresses so we need this patch.
> > 
> > This patch is a way to solve the problem and may not be the only one.
> > I would like to explore all the possibilities before taking an approach that
> > requires modifying the memory allocator in Xen.
> > 
> > In my previous mails, I suggested a different solution (see [1] and [2]). If
> > you think it is not suitable, please share more details or explain why you
> > think your patch is the only way to solve it.
> 
> Hi,
> 
> We have similar needs (not exactly the same) in some of our setups.
> We need to map certain OCMs (On Chip Memories) to dom0. Among other things,
> these are used to communicate with remote accelerators/CPUs that have
> "hardcoded" addresses to these RAMs.
> 
> Our approach is more along the lines of Julien's second suggestion. We're
> trying to use the mmio-sram DTS bindings to bring in these memories into
> dom0.
> 
> IIUC the Ducati FW issue correctly, you need to allocate a chunk of DDR.
> 
> Another possible solution:
> I think you could reserve the memory area by simply not mentioning it
> in the main memory node (these nodes support multiple ranges so you can
> introduce gaps). Then you could for example create an mmio-sram node to
> get the memory explicitly mapped 1:1 into dom0.
> 
> Just a moment ago, I posted an RFC for the mmio-sram support to the list.

This sounds much nicer than the solution proposed here.



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-23 Thread Oleksandr Dmytryshyn
Hi, Edgar.

On Fri, May 20, 2016 at 7:05 PM, Edgar E. Iglesias
 wrote:
> Hi,
>
> We have similar needs (not exactly the same) in some of our setups.
> We need to map certain OCMs (On Chip Memories) to dom0. Among other things,
> these are used to communicate with remote accelerators/CPUs that have
> "hardcoded" addresses to these RAMs.
>
> Our approach is more along the lines of Julien's second suggestion. We're
> trying to use the mmio-sram DTS bindings to bring in these memories into
> dom0.
>
> IIUC the Ducati FW issue correctly, you need to allocate a chunk of DDR.
>
> Another possible solution:
> I think you could reserve the memory area by simply not mentioning it
> in the main memory node (these nodes support multiple ranges so you can
> introduce gaps). Then you could for example create an mmio-sram node to
> get the memory explicitely mapped 1:1 into dom0.
>
> Just a moment ago, I posted an RFC for the mmio-sram support to the list.
Thank you for the advice.
I'll try to use your solution.



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Edgar E. Iglesias
On Fri, May 20, 2016 at 04:04:43PM +0100, Julien Grall wrote:
> Hello Oleksandr,
> 
> On 20/05/16 15:19, Oleksandr Dmytryshyn wrote:
> >On Fri, May 20, 2016 at 12:59 PM, Jan Beulich  wrote:
> >On 20.05.16 at 10:45,  wrote:
> >>>On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:
> >>>On 19.05.16 at 15:58,  wrote:
> >Case 1: Dom0 is driver domain:
> >There is a Ducati firmware which runs on dedicated M4 core and decodes
> >video. This firmware uses hardcoded physical addresses for graphics
> >buffers. Those addresses should be inside address-space of the driver
> >domain (Dom0). Ducati firmware is proprietary and we have no ability
> >to rework it. So Dom0 kernel should be placed to the configured
> >address (to the DOM0 RAM bank with specific address).
> >
> >Case 2: Dom0 is Thin and DomD is driver domain.
> >All is the same: Ducati firmware requires special (hardcoded) addresses.
> 
> For both of these cases I would then wonder whether such
> environments are actually suitable for doing virtualization on.
> >>>Currently we use Jacinto 6 evaluation board with DRA74X processor.
> > >>>We have both configurations (Thin Dom0 and Thick Dom0).
> >>
> >>Which says nothing about their suitability for virtualization.
> >Our solution is based on Jacinto 6 evaluation board with DRA74X
> >processor. We need video-playback. Ducati firmware decodes video and
> >it works only with hardcoded addresses so we need this patch.
> 
> This patch is a way to solve the problem and may not be the only one.
> I would like to explore all the possibilities before taking an approach that
> requires modifying the memory allocator in Xen.
> 
> In my previous mails, I suggested a different solution (see [1] and [2]). If
> you think it is not suitable, please share more details or explain why you
> think your patch is the only way to solve it.

Hi,

We have similar needs (not exactly the same) in some of our setups.
We need to map certain OCMs (On Chip Memories) to dom0. Among other things,
these are used to communicate with remote accelerators/CPUs that have
"hardcoded" addresses to these RAMs.

Our approach is more along the lines of Julien's second suggestion. We're
trying to use the mmio-sram DTS bindings to bring in these memories into
dom0.

IIUC the Ducati FW issue correctly, you need to allocate a chunk of DDR.

Another possible solution:
I think you could reserve the memory area by simply not mentioning it
in the main memory node (these nodes support multiple ranges so you can
introduce gaps). Then you could for example create an mmio-sram node to
get the memory explicitly mapped 1:1 into dom0.

Just a moment ago, I posted an RFC for the mmio-sram support to the list.

Cheers,
Edgar


> 
> Regards,
> 
> [1]
> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01879.html
> [2]
> http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01894.html
> 
> -- 
> Julien Grall
> 



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Julien Grall

Hello Oleksandr,

On 20/05/16 15:19, Oleksandr Dmytryshyn wrote:

On Fri, May 20, 2016 at 12:59 PM, Jan Beulich  wrote:

On 20.05.16 at 10:45,  wrote:

On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:

On 19.05.16 at 15:58,  wrote:

Case 1: Dom0 is the driver domain:
There is a Ducati firmware which runs on a dedicated M4 core and decodes
video. This firmware uses hardcoded physical addresses for graphics
buffers. Those addresses should be inside the address space of the driver
domain (Dom0). The Ducati firmware is proprietary and we have no ability
to rework it. So the Dom0 kernel should be placed at the configured
address (in the DOM0 RAM bank with a specific address).

Case 2: Dom0 is Thin and DomD is the driver domain.
All is the same: the Ducati firmware requires special (hardcoded) addresses.


For both of these cases I would then wonder whether such
environments are actually suitable for doing virtualization on.

Currently we use the Jacinto 6 evaluation board with a DRA74X processor.
We have both configurations (Thin Dom0 and Thick Dom0).


Which says nothing about their suitability for virtualization.

Our solution is based on the Jacinto 6 evaluation board with a DRA74X
processor. We need video playback. The Ducati firmware decodes video and
works only with hardcoded addresses, so we need this patch.


This patch is a way to solve the problem and may not be the only one.
I would like to explore all the possibilities before taking an approach
that requires modifying the memory allocator in Xen.


In my previous mails, I suggested a different solution (see [1] and 
[2]). If you think it is not suitable, please share more details or 
explain why you think your patch is the only way to solve it.


Regards,

[1] 
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01879.html
[2] 
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg01894.html


--
Julien Grall



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Oleksandr Dmytryshyn
On Fri, May 20, 2016 at 12:59 PM, Jan Beulich  wrote:
 On 20.05.16 at 10:45,  wrote:
>> On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:
>> On 19.05.16 at 15:58,  wrote:
 Case 1: Dom0 is driver domain:
 There is a Ducati firmware which runs on dedicated M4 core and decodes
 video. This firmware uses hardcoded physical addresses for graphics
 buffers. Those addresses should be inside address-space of the driver
 domain (Dom0). Ducati firmware is proprietary and we have no ability
 to rework it. So Dom0 kernel should be placed to the configured
 address (to the DOM0 RAM bank with specific address).

 Case 2: Dom0 is Thin and DomD is driver domain.
 All is the same: Ducati firmware requires special (hardcoded) addresses.
>>>
>>> For both of these cases I would then wonder whether such
>>> environments are actually suitable for doing virtualization on.
>> Currently we use Jacinto 6 evaluation board with DRA74X processor.
>> We have both configurations (Thin Dom0 and Thick Dom0).
>
> Which says nothing about their suitability for virtualization.
Our solution is based on the Jacinto 6 evaluation board with a DRA74X
processor. We need video playback. The Ducati firmware decodes video and
works only with hardcoded addresses, so we need this patch.

> Jan
>



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Jan Beulich
>>> On 20.05.16 at 10:45,  wrote:
> On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:
> On 19.05.16 at 15:58,  wrote:
>>> Case 1: Dom0 is driver domain:
>>> There is a Ducati firmware which runs on dedicated M4 core and decodes
>>> video. This firmware uses hardcoded physical addresses for graphics
>>> buffers. Those addresses should be inside address-space of the driver
>>> domain (Dom0). Ducati firmware is proprietary and we have no ability
>>> to rework it. So Dom0 kernel should be placed to the configured
>>> address (to the DOM0 RAM bank with specific address).
>>>
>>> Case 2: Dom0 is Thin and DomD is driver domain.
>>> All is the same: Ducati firmware requires special (hardcoded) addresses.
>>
>> For both of these cases I would then wonder whether such
>> environments are actually suitable for doing virtualization on.
> Currently we use Jacinto 6 evaluation board with DRA74X processor.
> We have both configurations (Thin Dom0 and Thick Dom0).

Which says nothing about their suitability for virtualization.

Jan




Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Oleksandr Dmytryshyn
On Thu, May 19, 2016 at 5:36 PM, Jan Beulich  wrote:
 On 19.05.16 at 15:58,  wrote:
>> Case 1: Dom0 is driver domain:
>> There is a Ducati firmware which runs on dedicated M4 core and decodes
>> video. This firmware uses hardcoded physical addresses for graphics
>> buffers. Those addresses should be inside address-space of the driver
>> domain (Dom0). Ducati firmware is proprietary and we have no ability
>> to rework it. So Dom0 kernel should be placed to the configured
>> address (to the DOM0 RAM bank with specific address).
>>
>> Case 2: Dom0 is Thin and DomD is driver domain.
>> All is the same: Ducati firmware requires special (hardcoded) addresses.
>
> For both of these cases I would then wonder whether such
> environments are actually suitable for doing virtualization on.
Currently we use the Jacinto 6 evaluation board with a DRA74X processor.
We have both configurations (Thin Dom0 and Thick Dom0).

> Jan
>



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-20 Thread Oleksandr Dmytryshyn


On Thu, May 19, 2016 at 5:34 PM, Julien Grall  wrote:
> Hello Oleksandr,
>
>
> On 19/05/16 14:58, Oleksandr Dmytryshyn wrote:
>>>
>>> Why would a user want to allocate the DOM0 RAM bank at a specific address?
>>>
>>> If I understand your patch correctly, DOM0 will only be able to allocate one
>>> bank of the given size at the specific address. You also add this
>>> possibility for guest domains (see patch #4) and try to control where the
>>> guest memory will be allocated. This will greatly increase the chance of the
>>> memory allocation failing.
>>>
>>> For instance, the RAM region requested for DOM0 may have been used to
>>> allocate memory for Xen internal use. So you need a way to reserve memory in
>>> order to avoid Xen using it.
>>>
>>> I expect most of the users who want to use direct memory mapped guests to
>>> know the number of guests which will use this feature.
>>>
>>> Such a feature is only useful when passing a device through to the guest on
>>> a platform without an SMMU, so it is insecure by default.
>>>
>>> So I would suggest creating a new device-tree binding (or re-using an
>>> existing one) to reserve memory regions to be used for direct memory mapped
>>> domains.
>>>
>>> Those regions could have an identifier to be used later during the
>>> allocation. This would avoid memory fragmentation, allow multiple RAM banks
>>> for DOM0, ...
>>>
>>> Any opinions?
>>
>>
>> Case 1: Dom0 is driver domain:
>> There is a Ducati firmware which runs on dedicated M4 core and decodes
>> video. This firmware uses hardcoded physical addresses for graphics
>> buffers. Those addresses should be inside address-space of the driver
>> domain (Dom0). Ducati firmware is proprietary and we have no ability
>> to rework it. So Dom0 kernel should be placed to the configured
>> address (to the DOM0 RAM bank with specific address).
>>
>> Case 2: Dom0 is Thin and DomD is driver domain.
>> All is the same: Ducati firmware requires special (hardcoded) addresses.
>
>
> So if I understand correctly, patches #4, #13, #16 are only here to
> work around a firmware which does not do the right thing?
>
> IMHO, modifying the memory allocator in Xen to make a firmware happy is just
> overkill. We need to explore all the possible solutions before going
> forward.
Yes, you are right. This patch was written to make the firmware happy.

> From your description, it looks to me like the device-tree does not
> correctly describe the platform. The graphic buffers should be reserved
> using /memreserve/ or via a specific binding.
>
> This would be used later by Xen to map the buffer into dom0 or allow dom0 to
> map it to a guest.
>
> Regards,
>
> --
> Julien Grall



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Jan Beulich
>>> On 19.05.16 at 15:58,  wrote:
> Case 1: Dom0 is driver domain:
> There is a Ducati firmware which runs on dedicated M4 core and decodes
> video. This firmware uses hardcoded physical addresses for graphics
> buffers. Those addresses should be inside address-space of the driver
> domain (Dom0). Ducati firmware is proprietary and we have no ability
> to rework it. So Dom0 kernel should be placed to the configured
> address (to the DOM0 RAM bank with specific address).
> 
> Case 2: Dom0 is Thin and DomD is driver domain.
> All is the same: Ducati firmware requires special (hardcoded) addresses.

For both of these cases I would then wonder whether such
environments are actually suitable for doing virtualization on.

Jan




Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Julien Grall

Hello Oleksandr,

On 19/05/16 14:58, Oleksandr Dmytryshyn wrote:

Why would a user want to allocate the DOM0 RAM bank at a specific address?

If I understand your patch correctly, DOM0 will only be able to allocate one bank
of the given size at the specific address. You also add this possibility for
guest domains (see patch #4) and try to control where the guest memory will be
allocated. This will greatly increase the chance of the memory allocation failing.

For instance, the RAM region requested for DOM0 may have been used to allocate
memory for Xen internal use. So you need a way to reserve memory in order to
avoid Xen using it.

I expect most of the users who want to use direct memory mapped guests to know
the number of guests which will use this feature.

Such a feature is only useful when passing a device through to the guest on a
platform without an SMMU, so it is insecure by default.

So I would suggest creating a new device-tree binding (or re-using an existing
one) to reserve memory regions to be used for direct memory mapped domains.

Those regions could have an identifier to be used later during the allocation.
This would avoid memory fragmentation, allow multiple RAM banks for DOM0, ...

Any opinions?


Case 1: Dom0 is the driver domain:
There is a Ducati firmware which runs on a dedicated M4 core and decodes
video. This firmware uses hardcoded physical addresses for graphics
buffers. Those addresses should be inside the address space of the driver
domain (Dom0). The Ducati firmware is proprietary and we have no ability
to rework it. So the Dom0 kernel should be placed at the configured
address (in the DOM0 RAM bank with a specific address).

Case 2: Dom0 is Thin and DomD is the driver domain.
All is the same: the Ducati firmware requires special (hardcoded) addresses.


So if I understand correctly, patches #4, #13, #16 are only here to 
work around a firmware which does not do the right thing?


IMHO, modifying the memory allocator in Xen to make a firmware happy is
just overkill. We need to explore all the possible solutions before 
going forward.


From your description, it looks to me like the device-tree does
not correctly describe the platform. The graphic buffers should be
reserved using /memreserve/ or via a specific binding.


This would be used later by Xen to map the buffer into dom0 or allow 
dom0 to map it to a guest.


Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Oleksandr Dmytryshyn
> Why would a user want to allocate the DOM0 RAM bank at a specific address?
>
> If I understand your patch correctly, DOM0 will only be able to allocate one
> bank of the given size at the specific address. You also add this possibility
> for guest domains (see patch #4) and try to control where the guest memory
> will be allocated. This will greatly increase the chance of the memory
> allocation failing.
>
> For instance, the RAM region requested for DOM0 may have been used to
> allocate memory for Xen internal use. So you need a way to reserve memory in
> order to avoid Xen using it.
>
> I expect most of the users who want to use direct memory mapped guests to know
> the number of guests which will use this feature.
>
> Such a feature is only useful when passing a device through to the guest on a
> platform without an SMMU, so it is insecure by default.
>
> So I would suggest creating a new device-tree binding (or re-using an existing
> one) to reserve memory regions to be used for direct memory mapped domains.
>
> Those regions could have an identifier to be used later during the
> allocation. This would avoid memory fragmentation, allow multiple RAM banks
> for DOM0, ...
>
> Any opinions?

Case 1: Dom0 is the driver domain:
There is a Ducati firmware which runs on a dedicated M4 core and decodes
video. This firmware uses hardcoded physical addresses for graphics
buffers. Those addresses should be inside the address space of the driver
domain (Dom0). The Ducati firmware is proprietary and we have no ability
to rework it. So the Dom0 kernel should be placed at the configured
address (in the DOM0 RAM bank with a specific address).

Case 2: Dom0 is Thin and DomD is the driver domain.
All is the same: the Ducati firmware requires special (hardcoded) addresses.



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Julien Grall

Hello,

On 18/05/16 17:32, Andrii Anisov wrote:

From: Oleksandr Dmytryshyn 

This setting is used to adjust the starting memory address allocated
for the Dom0 kernel. To use the 'rambase_pfn' setting, just add, for example,
'dom0_rambase_pfn=0x8' to the hypervisor command line. Note that
'dom0_rambase_pfn' should be aligned to the smallest memory chunk
that the Xen memory allocator uses.


Why would a user want to allocate the DOM0 RAM bank at a specific address?

If I understand your patch correctly, DOM0 will only be able to allocate
one bank of the given size at the specific address. You also add this
possibility for guest domains (see patch #4) and try to control where the
guest memory will be allocated. This will greatly increase the chance of
the memory allocation failing.


For instance, the RAM region requested for DOM0 may have been used to
allocate memory for Xen internal use. So you need a way to reserve memory
in order to avoid Xen using it.


I expect most of the users who want to use direct memory mapped guests to
know the number of guests which will use this feature.


Such a feature is only useful when passing a device through to the guest
on a platform without an SMMU, so it is insecure by default.


So I would suggest creating a new device-tree binding (or re-using an
existing one) to reserve memory regions to be used for direct memory
mapped domains.


Those regions could have an identifier to be used later during the
allocation. This would avoid memory fragmentation, allow multiple RAM
banks for DOM0, ...


Any opinions?



Signed-off-by: Oleksandr Dmytryshyn 
---
  xen/arch/arm/domain_build.c | 24 +---
  xen/common/page_alloc.c | 68 +++--
  xen/include/xen/mm.h|  2 ++
  3 files changed, 75 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2937ff7..b48718d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -27,6 +27,9 @@
  static unsigned int __initdata opt_dom0_max_vcpus;
  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);

+static u64 __initdata opt_dom0_rambase_pfn = 0;
+integer_param("dom0_rambase_pfn", opt_dom0_rambase_pfn);
+
  int dom0_11_mapping = 1;

  #define DOM0_MEM_DEFAULT 0x800 /* 128 MiB */
@@ -248,6 +251,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
  const unsigned int min_order = get_order_from_bytes(MB(4));
  struct page_info *pg;
  unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
+u64 rambase_pfn = opt_dom0_rambase_pfn;
+paddr_t mem_size = kinfo->unassigned_mem;
  int i;

  bool_t lowmem = is_32bit_domain(d);
@@ -267,7 +272,7 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
  {
  for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
  {
-pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
+pg = alloc_domheap_pages_pfn(d, order, MEMF_bits(bits), rambase_pfn);
  if ( pg != NULL )
  goto got_bank0;
  }
@@ -284,16 +289,21 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
  /* Now allocate more memory and fill in additional banks */

  order = get_11_allocation_size(kinfo->unassigned_mem);
+if ( opt_dom0_rambase_pfn )
+rambase_pfn += (mem_size - kinfo->unassigned_mem) >> PAGE_SHIFT;
+
  while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
  {
-pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
+pg = alloc_domheap_pages_pfn(d, order, lowmem ? MEMF_bits(32) : 0,
+ rambase_pfn);


From my understanding, when rambase_pfn is not 0, the memory must be 
allocated contiguously at this specific address. So if the first call of 
alloc_domheap_pages (see a bit above) has failed, then this one will 
always fail because it means that someone has allocated some page in 
this region.
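
A toy sketch of that argument (this is not the Xen allocator, just a
stand-alone model of a fixed-base allocation over a small frame bitmap):

#include <stdbool.h>
#include <stdio.h>

#define NR_FRAMES 64
static bool taken[NR_FRAMES];

/* Succeeds only if every frame in [base_pfn, base_pfn + (1 << order)) is
 * still free; a single foreign page in the range makes it fail forever. */
static bool alloc_at(unsigned int base_pfn, unsigned int order)
{
    unsigned int count = 1u << order, i;

    if (base_pfn + count > NR_FRAMES)
        return false;
    for (i = 0; i < count; i++)
        if (taken[base_pfn + i])
            return false;
    for (i = 0; i < count; i++)
        taken[base_pfn + i] = true;
    return true;
}

int main(void)
{
    taken[3] = true;                  /* someone else already owns frame 3 */
    printf("%d\n", alloc_at(0, 3));   /* 0: frames 0..7 include frame 3    */
    printf("%d\n", alloc_at(0, 3));   /* 0 again: retrying cannot help     */
    return 0;
}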



  if ( !pg )
  {
  order --;

  if ( lowmem && order < min_low_order)
  {
-D11PRINT("Failed at min_low_order, allow high allocations\n");
+if ( !opt_dom0_rambase_pfn )
+D11PRINT("Failed at min_low_order, allow high 
allocations\n");
  order = get_11_allocation_size(kinfo->unassigned_mem);
  lowmem = false;
  continue;
@@ -313,7 +323,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)

  if ( lowmem )
  {
-D11PRINT("Allocation below bank 0, allow high allocations\n");
+if ( !opt_dom0_rambase_pfn )
+D11PRINT("Allocation below bank 0, allow high 
allocations\n");
  order = 

Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Jan Beulich
>>> On 19.05.16 at 14:26,  wrote:
> On 19/05/16 10:41, Jan Beulich wrote:
> On 18.05.16 at 18:32,  wrote:
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -27,6 +27,9 @@
>>>   static unsigned int __initdata opt_dom0_max_vcpus;
>>>   integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>>>
>>> +static u64 __initdata opt_dom0_rambase_pfn = 0;
>>> +integer_param("dom0_rambase_pfn", opt_dom0_rambase_pfn);
>>
>> Any addition of a command line option needs to be accompanied by
>> an entry in the command line doc.
>>
>>> @@ -248,6 +251,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>>>   const unsigned int min_order = get_order_from_bytes(MB(4));
>>>   struct page_info *pg;
>>>   unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
>>> +u64 rambase_pfn = opt_dom0_rambase_pfn;
>>
>> Use of __initdata in a non-__init function.
> 
> All the functions within domain_build.c should have the __init attribute.
> However, it has been forgotten for half of them.
> 
> I am planning to send a patch to enforce it using the Makefile rules.

I have such a work item (low) on my todo list too, actually.

Jan




Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Julien Grall

Hi Jan,

On 19/05/16 10:41, Jan Beulich wrote:

On 18.05.16 at 18:32,  wrote:

--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -27,6 +27,9 @@
  static unsigned int __initdata opt_dom0_max_vcpus;
  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);

+static u64 __initdata opt_dom0_rambase_pfn = 0;
+integer_param("dom0_rambase_pfn", opt_dom0_rambase_pfn);


Any addition of a command line option needs to be accompanied by
an entry in the command line doc.


@@ -248,6 +251,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
  const unsigned int min_order = get_order_from_bytes(MB(4));
  struct page_info *pg;
  unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
+u64 rambase_pfn = opt_dom0_rambase_pfn;


Use of __initdata in a non-__init function.


All the functions within domain_build.c should have the __init attribute.
However, it has been forgotten for half of them.


I am planning to send a patch to enforce it using the Makefile rules.

Regards,

--
Julien Grall



Re: [Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-19 Thread Jan Beulich
>>> On 18.05.16 at 18:32,  wrote:
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -27,6 +27,9 @@
>  static unsigned int __initdata opt_dom0_max_vcpus;
>  integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
>  
> +static u64 __initdata opt_dom0_rambase_pfn = 0;
> +integer_param("dom0_rambase_pfn", opt_dom0_rambase_pfn);

Any addition of a command line option needs to be accompanied by
an entry in the command line doc.

> @@ -248,6 +251,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
>  const unsigned int min_order = get_order_from_bytes(MB(4));
>  struct page_info *pg;
>  unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
> +u64 rambase_pfn = opt_dom0_rambase_pfn;

Use of __initdata in a non-__init function.
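
A stand-alone sketch of that mismatch (the section macros below are
simplified stand-ins for illustration, not Xen's actual definitions):

#include <stdint.h>

/* Both annotations place the symbol in a section that is discarded once
 * boot is complete. */
#define __init     __attribute__((__section__(".init.text")))
#define __initdata __attribute__((__section__(".init.data")))

static uint64_t __initdata opt_dom0_rambase_pfn;

/* Without __init on the function, it could still be reachable after
 * .init.data has been freed; annotating it keeps the function and the
 * data it reads in the same boot-only lifetime. */
static void __init use_boot_only_setting(void)
{
    uint64_t rambase_pfn = opt_dom0_rambase_pfn;

    (void)rambase_pfn; /* placeholder for the real allocation logic */
}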

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -583,16 +583,17 @@ static void check_low_mem_virq(void)
>  }
>  }
>  
> -/* Allocate 2^@order contiguous pages. */
> -static struct page_info *alloc_heap_pages(
> +/* Allocate 2^@order contiguous pages at given pfn. */
> +static struct page_info *alloc_heap_pages_pfn(
>  unsigned int zone_lo, unsigned int zone_hi,
>  unsigned int order, unsigned int memflags,
> -struct domain *d)
> +struct domain *d, xen_pfn_t pfn)

Altering generic allocation interfaces like this, for a boot time only
purpose, doesn't seem warranted. Please reconsider the entire
approach.

Jan




[Xen-devel] [PATCH RFC 13/18] xen: introduce and use 'dom0_rambase_pfn' setting for kernel Dom0

2016-05-18 Thread Andrii Anisov
From: Oleksandr Dmytryshyn 

This setting is used to adjust the starting memory address allocated
for the Dom0 kernel. To use the 'rambase_pfn' setting, just add, for example,
'dom0_rambase_pfn=0x8' to the hypervisor command line. Note that
'dom0_rambase_pfn' should be aligned to the smallest memory chunk
that the Xen memory allocator uses.
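
A quick, hedged check of that alignment note (assuming 4 KiB frames and the
4 MiB minimum 1:1 allocation chunk used by allocate_memory_11() below; the
pfn value here is made up for illustration):

#include <stdio.h>

int main(void)
{
    const unsigned long frame_size = 4096;                    /* 4 KiB frames           */
    const unsigned long min_chunk  = 4UL << 20;               /* MB(4), as in the patch */
    const unsigned long pfn_align  = min_chunk / frame_size;  /* 1024 == 0x400          */
    const unsigned long pfn        = 0x80000;                 /* hypothetical setting   */

    printf("dom0_rambase_pfn=0x%lx -> paddr 0x%lx, aligned: %s\n",
           pfn, pfn * frame_size, (pfn % pfn_align) == 0 ? "yes" : "no");
    return 0;
}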

Signed-off-by: Oleksandr Dmytryshyn 
---
 xen/arch/arm/domain_build.c | 24 +---
 xen/common/page_alloc.c | 68 +++--
 xen/include/xen/mm.h|  2 ++
 3 files changed, 75 insertions(+), 19 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2937ff7..b48718d 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -27,6 +27,9 @@
 static unsigned int __initdata opt_dom0_max_vcpus;
 integer_param("dom0_max_vcpus", opt_dom0_max_vcpus);
 
+static u64 __initdata opt_dom0_rambase_pfn = 0;
+integer_param("dom0_rambase_pfn", opt_dom0_rambase_pfn);
+
 int dom0_11_mapping = 1;
 
 #define DOM0_MEM_DEFAULT 0x800 /* 128 MiB */
@@ -248,6 +251,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
 const unsigned int min_order = get_order_from_bytes(MB(4));
 struct page_info *pg;
 unsigned int order = get_11_allocation_size(kinfo->unassigned_mem);
+u64 rambase_pfn = opt_dom0_rambase_pfn;
+paddr_t mem_size = kinfo->unassigned_mem;
 int i;
 
 bool_t lowmem = is_32bit_domain(d);
@@ -267,7 +272,7 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
 {
 for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
 {
-pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
+pg = alloc_domheap_pages_pfn(d, order, MEMF_bits(bits), rambase_pfn);
 if ( pg != NULL )
 goto got_bank0;
 }
@@ -284,16 +289,21 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
 /* Now allocate more memory and fill in additional banks */
 
 order = get_11_allocation_size(kinfo->unassigned_mem);
+if ( opt_dom0_rambase_pfn )
+rambase_pfn += (mem_size - kinfo->unassigned_mem) >> PAGE_SHIFT;
+
 while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
 {
-pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
+pg = alloc_domheap_pages_pfn(d, order, lowmem ? MEMF_bits(32) : 0,
+ rambase_pfn);
 if ( !pg )
 {
 order --;
 
 if ( lowmem && order < min_low_order)
 {
-D11PRINT("Failed at min_low_order, allow high allocations\n");
+if ( !opt_dom0_rambase_pfn )
+D11PRINT("Failed at min_low_order, allow high 
allocations\n");
 order = get_11_allocation_size(kinfo->unassigned_mem);
 lowmem = false;
 continue;
@@ -313,7 +323,8 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
 
 if ( lowmem )
 {
-D11PRINT("Allocation below bank 0, allow high allocations\n");
+if ( !opt_dom0_rambase_pfn )
+D11PRINT("Allocation below bank 0, allow high 
allocations\n");
 order = get_11_allocation_size(kinfo->unassigned_mem);
 lowmem = false;
 continue;
@@ -330,6 +341,11 @@ static void allocate_memory_11(struct domain *d, struct kernel_info *kinfo)
  * allocation possible.
  */
 order = get_11_allocation_size(kinfo->unassigned_mem);
+if ( opt_dom0_rambase_pfn )
+{
+rambase_pfn += (mem_size - kinfo->unassigned_mem) >> PAGE_SHIFT;
+mem_size = kinfo->unassigned_mem;
+}
 }
 
 if ( kinfo->unassigned_mem )
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 74fc1de..d0c0fbb 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -583,16 +583,17 @@ static void check_low_mem_virq(void)
 }
 }
 
-/* Allocate 2^@order contiguous pages. */
-static struct page_info *alloc_heap_pages(
+/* Allocate 2^@order contiguous pages at given pfn. */
+static struct page_info *alloc_heap_pages_pfn(
 unsigned int zone_lo, unsigned int zone_hi,
 unsigned int order, unsigned int memflags,
-struct domain *d)
+struct domain *d, xen_pfn_t pfn)
 {
 unsigned int i, j, zone = 0, nodemask_retry = 0;
 nodeid_t first_node, node = MEMF_get_node(memflags), req_node = node;
 unsigned long request = 1UL << order;
-struct page_info *pg;
+struct page_info *pg, *tmp_pg;
+struct page_list_head *pg_list;
 nodemask_t nodemask = (d != NULL ) ? d->node_affinity : node_online_map;
 bool_t need_tlbflush = 0;
 uint32_t tlbflush_timestamp = 0;
@@ -657,9 +658,25 @@