Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Arnd Bergmann wrote:
 On Wednesday 11 April 2012, Thierry Reding wrote:
  Daniel Vetter wrote:
   Well, you use the IOMMU API to map/unmap memory into the IOMMU for Tegra,
   whereas usually device drivers just use the DMA API to do that. The usual
   interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
   around. I'm just wondering why you've chosen this.
  
  I don't think this works on ARM. Maybe I'm not seeing the whole picture but
  judging by a quick look through the kernel tree there aren't any users that
  map DMA memory through an IOMMU.
 
 dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
 patches to make that work on ARM, hopefully going into v3.5, so you could
 use those.

I've looked at Marek's patches but I don't think they'll work for Tegra 2 or
Tegra 3. The corresponding iommu_map() functions only set one PTE, regardless
of the number of bytes passed to them. However, the Tegra TRM indicates that
mapping needs to be done on a per-page basis so contiguous regions cannot be
combined. I suppose the IOMMU driver would have to be fixed to program more
than a single page in that case.

Also this doesn't yet solve the vmap() problem that is needed for the kernel
virtual mapping. I did try using dma_alloc_writecombine(), but that only
works for chunks of 2 MB or smaller, unless I use init_consistent_dma_size()
during board setup, which isn't provided for in a DT setup. I couldn't find
a better alternative, but I admit I'm not very familiar with all the VM APIs.
Do you have any suggestions on how to solve this? Otherwise I'll try and dig
in some more.

Thierry


___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Re: [RFC 3/4] drm: fixed: Add dfixed_frac() macro

2012-04-12 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
  This commit is taken from the Chromium tree and was originally written
  by Robert Morell.
 
 Maybe just cherry-pick it from there? That way, the git authorship will
 show up as Robert.

I can do that. Though I'll have to remove the Chromium-specific tags.

Thierry



Re: [RFC 2/4] iommu: tegra/gart: Add device tree support

2012-04-12 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
  This commit adds device tree support for the GART hardware available on
  NVIDIA Tegra 20 SoCs.
  
  Signed-off-by: Thierry Reding thierry.red...@avionic-design.de
  ---
   arch/arm/boot/dts/tegra20.dtsi |6 ++
   arch/arm/mach-tegra/board-dt-tegra20.c |1 +
   drivers/iommu/tegra-gart.c |   10 ++
   3 files changed, 17 insertions(+)
 
 I think I'd prefer at least the tegra20.dtsi change split out into a
 separate patch so I can queue it in a dt topic branch in the Tegra repo.
 
 It might be a good idea to split this into two on an arch/arm vs.
 drivers/iommu boundary. Looking at Olof's branches for 3.4, that split
 is what happened there.
 
 Finally, there should be a binding documentation file in
 Documentation/devicetree/bindings in order to specify the number of reg
 property entries needed, and their meaning, since there's more than one
 (even though you did comment it nicely in the .dtsi file).

Okay, I'll do that.

Thierry



Re: [PATCH 0/2] Exynos: fix SYSMMU driver to work with power domains

2012-04-12 Thread KyongHo Cho
On Wed, Apr 11, 2012 at 11:34 PM, Marek Szyprowski
m.szyprow...@samsung.com wrote:
 Hi!

 These two patches fix the operation of the SYSMMU driver (v12 version [1])
 with the new power domain driver based on generic power domains and
 runtime PM, which has been merged into Linux kernel v3.4-rc1.

Thanks, Marek.

Your way of power gating is right
and I will look into the changed runtime PM scheme in Exynos tree.

 [1] https://lkml.org/lkml/2012/3/15/51

Regards,

KyongHo


RE: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Marek Szyprowski
Hi Thierry,

On Thursday, April 12, 2012 9:18 AM Thierry Reding wrote:

 * Arnd Bergmann wrote:
  On Wednesday 11 April 2012, Thierry Reding wrote:
   Daniel Vetter wrote:
    Well, you use the IOMMU API to map/unmap memory into the IOMMU for Tegra,
    whereas usually device drivers just use the DMA API to do that. The usual
    interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
    around. I'm just wondering why you've chosen this.
  
   I don't think this works on ARM. Maybe I'm not seeing the whole picture but
   judging by a quick look through the kernel tree there aren't any users that
   map DMA memory through an IOMMU.
 
  dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
  patches to make that work on ARM, hopefully going into v3.5, so you could
  use those.
 
 I've looked at Marek's patches but I don't think they'll work for Tegra 2 or
 Tegra 3. The corresponding iommu_map() functions only set one PTE, regardless
 of the number of bytes passed to them. However, the Tegra TRM indicates that
 mapping needs to be done on a per-page basis so contiguous regions cannot be
 combined. I suppose the IOMMU driver would have to be fixed to program more
 than a single page in that case.

I assume you want to map a set of pages into a contiguous chunk in IO address
space. This can be done with a dma_map_sg() call once an IOMMU aware
implementation has been assigned to the given device. The DMA-mapping
implementation is able to merge consecutive chunks of the scatter list in the
dma/io address space if possible (i.e. there are no in-page offsets between
the chunks). With my implementation of IOMMU aware dma-mapping you usually
get a single DMA chunk from the provided scatter-list.

I know that this approach causes a lot of confusion at first look, but that's
how the DMA mapping API has been designed. The scatter list based approach has
some drawbacks - it is a bit oversized for most of the typical use cases for
gfx/multimedia buffers, but that's all we have now.

Scatter lists were initially designed for disk based block IO operations,
hence the presence of the in-page offsets and lengths for each chunk. For
multimedia use cases, providing an array of struct pages and asking dma-mapping
to map them into contiguous memory is probably all we need. I wonder if
introducing such new calls is a good idea. Arnd, what do you think? It will
definitely simplify the drivers and improve code understanding. On the
other hand it requires a significant amount of work in the dma-mapping
framework for all architectures, but that's not a big issue for me.

 Also this doesn't yet solve the vmap() problem that is needed for the kernel
 virtual mapping. I did try using dma_alloc_writecombine(), but that only
 works for chunks of 2 MB or smaller, unless I use init_consistent_dma_size()
 during board setup, which isn't provided for in a DT setup. I couldn't find
 a better alternative, but I admit I'm not very familiar with all the VM APIs.
 Do you have any suggestions on how to solve this? Otherwise I'll try and dig
 in some more.

Yes, I'm aware of this issue; I'm currently working on solving it. I hope to use
the standard vmalloc range for all coherent/writecombine allocations and get rid
of the custom 'consistent_dma' region entirely.

Best regards
-- 
Marek Szyprowski
Samsung Poland R&D Center




Re: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface

2012-04-12 Thread Subash Patel

Hello Marek,

On 04/11/2012 08:06 PM, Marek Szyprowski wrote:

This patch provides setup code which assigns IOMMU controllers
to FIMC and MFC devices and enables IOMMU aware DMA-mapping for them.
It has been tested on the Samsung Exynos4 platform, NURI board.

Most of the work is done in the s5p_sysmmu_late_init() function, which
first assigns a SYSMMU controller to the respective client device and then
creates IO address space mapping structures. In this example 128 MiB of
address space is created at 0x20000000 for most of the devices. IO address
allocation precision is set to 2^4 pages, so all small allocations will be
aligned to 64 KiB. This reduces the size of the io address space bitmap
to 4 KiB.

To solve the clock dependency issues, parent clocks have been added to each
SYSMMU controller bus clock. This models the true hardware behavior,
because client's device bus clock also gates the respective sysmmu bus
clock.

Signed-off-by: Marek Szyprowskim.szyprow...@samsung.com
Acked-by: Kyungmin Parkkyungmin.p...@samsung.com
---
  arch/arm/mach-exynos/Kconfig   |1 +
  arch/arm/mach-exynos/clock-exynos4.c   |   64 +++-
  arch/arm/mach-exynos/dev-sysmmu.c  |   44 +++
  arch/arm/mach-exynos/include/mach/sysmmu.h |3 +
  drivers/iommu/Kconfig  |1 +
  5 files changed, 84 insertions(+), 29 deletions(-)

diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
index 801c738..25b9ba5 100644
--- a/arch/arm/mach-exynos/Kconfig
+++ b/arch/arm/mach-exynos/Kconfig
@@ -288,6 +288,7 @@ config MACH_NURI
 	select S5P_DEV_USB_EHCI
 	select S5P_SETUP_MIPIPHY
 	select EXYNOS4_DEV_DMA
+	select EXYNOS_DEV_SYSMMU
 	select EXYNOS4_SETUP_FIMC
 	select EXYNOS4_SETUP_FIMD0
 	select EXYNOS4_SETUP_I2C1
diff --git a/arch/arm/mach-exynos/clock-exynos4.c b/arch/arm/mach-exynos/clock-exynos4.c
index 29ae4df..fe459a3 100644
--- a/arch/arm/mach-exynos/clock-exynos4.c
+++ b/arch/arm/mach-exynos/clock-exynos4.c
@@ -497,29 +497,6 @@ static struct clk *exynos4_gate_clocks[] = {
 
 static struct clk exynos4_init_clocks_off[] = {
 	{
-		.name		= "timers",
-		.parent		= &exynos4_clk_aclk_100.clk,
-		.enable		= exynos4_clk_ip_peril_ctrl,
-		.ctrlbit	= (1 << 24),
-	}, {
-		.name		= "csis",
-		.devname	= "s5p-mipi-csis.0",
-		.enable		= exynos4_clk_ip_cam_ctrl,
-		.ctrlbit	= (1 << 4),
-		.parent		= &exynos4_clk_gate_cam,
-	}, {
-		.name		= "csis",
-		.devname	= "s5p-mipi-csis.1",
-		.enable		= exynos4_clk_ip_cam_ctrl,
-		.ctrlbit	= (1 << 5),
-		.parent		= &exynos4_clk_gate_cam,
-	}, {
-		.name		= "jpeg",
-		.id		= 0,
-		.enable		= exynos4_clk_ip_cam_ctrl,
-		.ctrlbit	= (1 << 6),
-		.parent		= &exynos4_clk_gate_cam,
-	}, {
 		.name		= "fimc",
 		.devname	= "exynos4-fimc.0",
 		.enable		= exynos4_clk_ip_cam_ctrl,
@@ -544,6 +521,35 @@ static struct clk exynos4_init_clocks_off[] = {
 		.ctrlbit	= (1 << 3),
 		.parent		= &exynos4_clk_gate_cam,
 	}, {
+		.name		= "mfc",
+		.devname	= "s5p-mfc",
+		.enable		= exynos4_clk_ip_mfc_ctrl,
+		.ctrlbit	= (1 << 0),
+		.parent		= &exynos4_clk_gate_mfc,
+	}, {
+		.name		= "timers",
+		.parent		= &exynos4_clk_aclk_100.clk,
+		.enable		= exynos4_clk_ip_peril_ctrl,
+		.ctrlbit	= (1 << 24),
+	}, {
+		.name		= "csis",
+		.devname	= "s5p-mipi-csis.0",
+		.enable		= exynos4_clk_ip_cam_ctrl,
+		.ctrlbit	= (1 << 4),
+		.parent		= &exynos4_clk_gate_cam,
+	}, {
+		.name		= "csis",
+		.devname	= "s5p-mipi-csis.1",
+		.enable		= exynos4_clk_ip_cam_ctrl,
+		.ctrlbit	= (1 << 5),
+		.parent		= &exynos4_clk_gate_cam,
+	}, {
+		.name		= "jpeg",
+		.id		= 0,
+		.enable		= exynos4_clk_ip_cam_ctrl,
+		.ctrlbit	= (1 << 6),
+		.parent		= &exynos4_clk_gate_cam,
+	}, {
 		.name		= "hsmmc",
 		.devname	= "exynos4-sdhci.0",
 		.parent		= &exynos4_clk_aclk_133.clk,
@@ -674,12 +680,6 @@ static struct clk exynos4_init_clocks_off[] = {
 		.ctrlbit	= (1 << 0),

RE: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface

2012-04-12 Thread Marek Szyprowski
Hi Subash,

On Thursday, April 12, 2012 11:06 AM Subash Patel wrote:

 On 04/11/2012 08:06 PM, Marek Szyprowski wrote:
  This patch provides setup code which assigns IOMMU controllers
  to FIMC and MFC devices and enables IOMMU aware DMA-mapping for them.
  It has been tested on the Samsung Exynos4 platform, NURI board.
 
  Most of the work is done in the s5p_sysmmu_late_init() function, which
  first assigns a SYSMMU controller to the respective client device and then
  creates IO address space mapping structures. In this example 128 MiB of
  address space is created at 0x20000000 for most of the devices. IO address
  allocation precision is set to 2^4 pages, so all small allocations will be
  aligned to 64 KiB. This reduces the size of the io address space bitmap
  to 4 KiB.
 
  To solve the clock dependency issues, parent clocks have been added to each
  SYSMMU controller bus clock. This models the true hardware behavior,
  because client's device bus clock also gates the respective sysmmu bus
  clock.
 
  Signed-off-by: Marek Szyprowskim.szyprow...@samsung.com
  Acked-by: Kyungmin Parkkyungmin.p...@samsung.com
  ---
arch/arm/mach-exynos/Kconfig   |1 +
    arch/arm/mach-exynos/clock-exynos4.c   |   64 +++-
arch/arm/mach-exynos/dev-sysmmu.c  |   44 +++
arch/arm/mach-exynos/include/mach/sysmmu.h |3 +
drivers/iommu/Kconfig  |1 +
5 files changed, 84 insertions(+), 29 deletions(-)
 
  diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
  index 801c738..25b9ba5 100644
  --- a/arch/arm/mach-exynos/Kconfig
  +++ b/arch/arm/mach-exynos/Kconfig
  @@ -288,6 +288,7 @@ config MACH_NURI
   	select S5P_DEV_USB_EHCI
   	select S5P_SETUP_MIPIPHY
   	select EXYNOS4_DEV_DMA
  +	select EXYNOS_DEV_SYSMMU
   	select EXYNOS4_SETUP_FIMC
   	select EXYNOS4_SETUP_FIMD0
   	select EXYNOS4_SETUP_I2C1
  diff --git a/arch/arm/mach-exynos/clock-exynos4.c b/arch/arm/mach-exynos/clock-exynos4.c
  index 29ae4df..fe459a3 100644
  --- a/arch/arm/mach-exynos/clock-exynos4.c
  +++ b/arch/arm/mach-exynos/clock-exynos4.c
  @@ -497,29 +497,6 @@ static struct clk *exynos4_gate_clocks[] = {
  
   static struct clk exynos4_init_clocks_off[] = {
   	{
  -		.name		= "timers",
  -		.parent		= &exynos4_clk_aclk_100.clk,
  -		.enable		= exynos4_clk_ip_peril_ctrl,
  -		.ctrlbit	= (1 << 24),
  -	}, {
  -		.name		= "csis",
  -		.devname	= "s5p-mipi-csis.0",
  -		.enable		= exynos4_clk_ip_cam_ctrl,
  -		.ctrlbit	= (1 << 4),
  -		.parent		= &exynos4_clk_gate_cam,
  -	}, {
  -		.name		= "csis",
  -		.devname	= "s5p-mipi-csis.1",
  -		.enable		= exynos4_clk_ip_cam_ctrl,
  -		.ctrlbit	= (1 << 5),
  -		.parent		= &exynos4_clk_gate_cam,
  -	}, {
  -		.name		= "jpeg",
  -		.id		= 0,
  -		.enable		= exynos4_clk_ip_cam_ctrl,
  -		.ctrlbit	= (1 << 6),
  -		.parent		= &exynos4_clk_gate_cam,
  -	}, {
   		.name		= "fimc",
   		.devname	= "exynos4-fimc.0",
   		.enable		= exynos4_clk_ip_cam_ctrl,
  @@ -544,6 +521,35 @@ static struct clk exynos4_init_clocks_off[] = {
   		.ctrlbit	= (1 << 3),
   		.parent		= &exynos4_clk_gate_cam,
   	}, {
  +		.name		= "mfc",
  +		.devname	= "s5p-mfc",
  +		.enable		= exynos4_clk_ip_mfc_ctrl,
  +		.ctrlbit	= (1 << 0),
  +		.parent		= &exynos4_clk_gate_mfc,
  +	}, {
  +		.name		= "timers",
  +		.parent		= &exynos4_clk_aclk_100.clk,
  +		.enable		= exynos4_clk_ip_peril_ctrl,
  +		.ctrlbit	= (1 << 24),
  +	}, {
  +		.name		= "csis",
  +		.devname	= "s5p-mipi-csis.0",
  +		.enable		= exynos4_clk_ip_cam_ctrl,
  +		.ctrlbit	= (1 << 4),
  +		.parent		= &exynos4_clk_gate_cam,
  +	}, {
  +		.name		= "csis",
  +		.devname	= "s5p-mipi-csis.1",
  +		.enable		= exynos4_clk_ip_cam_ctrl,
  +		.ctrlbit	= (1 << 5),
  +		.parent		= &exynos4_clk_gate_cam,
  +	}, {
  +		.name		= "jpeg",
  +		.id		= 0,
  +		.enable		= exynos4_clk_ip_cam_ctrl,
  +		.ctrlbit	= (1 << 6),
  +		.parent		= &exynos4_clk_gate_cam,
  +	}, {
   		.name		= "hsmmc",
   		.devname	= "exynos4-sdhci.0",
   		.parent		= &exynos4_clk_aclk_133.clk,
  @@ -674,12 +680,6 @@ static struct clk exynos4_init_clocks_off[] = {
   		.ctrlbit	= (1 << 0),
   

RE: [PATCHv8 10/10] ARM: dma-mapping: add support for IOMMU mapper

2012-04-12 Thread Marek Szyprowski
Hi Arnd,

On Tuesday, April 10, 2012 1:58 PM Arnd Bergmann wrote:

 On Tuesday 10 April 2012, Marek Szyprowski wrote:
  +/**
  + * arm_iommu_create_mapping
  + * @bus: pointer to the bus holding the client device (for IOMMU calls)
  + * @base: start address of the valid IO address space
  + * @size: size of the valid IO address space
  + * @order: accuracy of the IO addresses allocations
  + *
  + * Creates a mapping structure which holds information about used/unused
  + * IO address ranges, which is required to perform memory allocation and
  + * mapping with IOMMU aware functions.
  + *
  + * The client device needs to be attached to the mapping with the
  + * arm_iommu_attach_device() function.
  + */
  +struct dma_iommu_mapping *
  +arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size,
  +			 int order)
  +{
  +	unsigned int count = size >> (PAGE_SHIFT + order);
  +	unsigned int bitmap_size = BITS_TO_LONGS(count) * sizeof(long);
  +	struct dma_iommu_mapping *mapping;
  +	int err = -ENOMEM;
  +
  +	if (!count)
  +		return ERR_PTR(-EINVAL);
  +
  +	mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
  +	if (!mapping)
  +		goto err;
  +
  +	mapping->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
  +	if (!mapping->bitmap)
  +		goto err2;
  +
  +	mapping->base = base;
  +	mapping->bits = BITS_PER_BYTE * bitmap_size;
  +	mapping->order = order;
  +	spin_lock_init(&mapping->lock);
  +
  +	mapping->domain = iommu_domain_alloc(bus);
  +	if (!mapping->domain)
  +		goto err3;
  +
  +	kref_init(&mapping->kref);
  +	return mapping;
  +err3:
  +	kfree(mapping->bitmap);
  +err2:
  +	kfree(mapping);
  +err:
  +	return ERR_PTR(err);
  +}
  +EXPORT_SYMBOL(arm_iommu_create_mapping);
 
 I don't understand this function, mostly I guess because you have not
 provided any users. A few questions here:
 
 * What is ARM specific about it that it is named arm_iommu_create_mapping?
   Isn't this completely generic, at least on the interface side?

This function is quite generic. It creates 'struct dma_iommu_mapping' object,
which is stored in the client's device arch data. This object mainly stores
information about io/dma address space: base address, allocation bitmap and
respective iommu domain. Please note that more than one device can be assigned
to the given dma_iommu_mapping to match different hardware topologies.

This function is called by the board/(sub-)platform startup code to initialize
iommu based dma-mapping. For the example usage please refer to 
s5p_create_iommu_mapping() function in arch/arm/mach-exynos/dev-sysmmu.c on 
3.4-rc2-arm-dma-v8-samsung branch in 
git://git.linaro.org/people/mszyprowski/linux-dma-mapping.git

GITWeb shortcut:
http://git.linaro.org/gitweb?p=people/mszyprowski/linux-dma-mapping.git;a=blob;f=arch/arm/mach-exynos/dev-sysmmu.c;h=31f2d6caf0e9949def18abd18af3f9d16737ae19;hb=6025093750d41f88406042e6486e331b806dc875#l283

 * Why is this exported to modules? Which device drivers do you expect
   to call it?

I thought it might be useful to use modules for registering devices, but
now I see that no platform uses such an approach. I will drop these exports
unless someone finds a real use case for them.

 * Why do you pass the bus_type in here? That seems like the completely
   wrong thing to do when all devices are on the same bus type (e.g.
   amba or platform) but are connected to different instances that each
   have their own iommu. I guess this is a question for Jörg, because the
   base iommu interface provides iommu_domain_alloc().

That's only a consequence of the IOMMU API. I would also prefer to use a client
device pointer here instead of the bus id, but maybe I don't have enough
knowledge about desktop IOMMUs. I suspect that there is also a need to assign
one IOMMU driver to the whole bus (like the PCI bus) and it originates from such
systems. In the embedded world we usually have only one IOMMU driver which
operates on the platform bus devices. On Samsung Exynos4 we have over a dozen
SYSMMU controllers for various multimedia blocks, but they are all exactly
the same. We use one iommu_ops structure and store a pointer to the real
IOMMU controller instance inside the arch data of the client struct device.
 
Best regards
-- 
Marek Szyprowski
Samsung Poland R&D Center




Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Sascha Hauer wrote:
 You might want to have a look at the sdrm patches I recently posted to
 dri-devel and the arm Linux kernel list. Among other things they allow
 registering crtcs/connectors/encoders separately so that each of them can
 have its own representation in the devicetree. I haven't looked into
 devicetree support for DRM, but with or without devicetree the problem
 that we do not have a single PCI card for registering all DRM components
 is the same.

I'll do that. One interesting use-case that's been on my mind for some time
is whether it would be possible to provide a CRTC via DRM that isn't part of
the SoC or DRM device but which can display a framebuffer prepared by the DRM
framework.

In other words I would like to use the Tegra hardware to render content into
a framebuffer (using potentially the 3D engine or HW accelerated video
decoding blocks) but display that framebuffer with a CRTC registered by a
different driver (perhaps provided by a PCIe or USB device).

I think such a setup would be possible if the CRTC registration can be
decoupled from the DRM driver. Perhaps sdrm even supports that already?

Thierry



Re: [PATCH] iommu: OMAP: device detach on domain destroy

2012-04-12 Thread Joerg Roedel
On Fri, Mar 30, 2012 at 11:03:49AM -0500, Omar Ramirez Luna wrote:
 The 'domain_destroy with devices attached' case isn't yet handled; instead
 the code assumes that the device was already detached.
 
 If the domain is destroyed, the hardware still has access to invalid
 pointers to its page table and internal iommu object. In order to
 detach the users we need to track the devices using the iommu; current
 use cases only have one user of the iommu per instance. When required
 this can evolve to a list of the devices using the iommu_dev.
 
 Reported-by: Joerg Roedel j...@8bytes.org
 Reviewed-by: Ohad Ben-Cohen o...@wizery.com
 Signed-off-by: Omar Ramirez Luna omar.l...@linaro.org

Doesn't apply against 3.4-rc2. Please rebase and send a new version.


Joerg


-- 
AMD Operating System Research Center

Advanced Micro Devices GmbH Einsteinring 24 85609 Dornach
General Managers: Alberto Bozzo
Registration: Dornach, Landkr. Muenchen; Registerger. Muenchen, HRB Nr. 43632



Re: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface

2012-04-12 Thread Arnd Bergmann
On Thursday 12 April 2012, Marek Szyprowski wrote:
 +
  +/*
  + * s5p_sysmmu_late_init
  + * Create DMA-mapping IOMMU context for specified devices. This function must
  + * be called later, once SYSMMU driver gets registered and probed.
  + */
  +static int __init s5p_sysmmu_late_init(void)
  +{
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc0).dev, &s5p_device_fimc0.dev);
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc1).dev, &s5p_device_fimc1.dev);
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc2).dev, &s5p_device_fimc2.dev);
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc3).dev, &s5p_device_fimc3.dev);
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_l).dev, &s5p_device_mfc_l.dev);
  +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_r).dev, &s5p_device_mfc_r.dev);
  +
  +	s5p_create_iommu_mapping(&s5p_device_fimc0.dev, 0x20000000, SZ_128M, 4);
  +	s5p_create_iommu_mapping(&s5p_device_fimc1.dev, 0x20000000, SZ_128M, 4);
  +	s5p_create_iommu_mapping(&s5p_device_fimc2.dev, 0x20000000, SZ_128M, 4);
  +	s5p_create_iommu_mapping(&s5p_device_fimc3.dev, 0x20000000, SZ_128M, 4);
  +	s5p_create_iommu_mapping(&s5p_device_mfc_l.dev, 0x20000000, SZ_128M, 4);
  +	s5p_create_iommu_mapping(&s5p_device_mfc_r.dev, 0x40000000, SZ_128M, 4);
  +
  +	return 0;
  +}
  +device_initcall(s5p_sysmmu_late_init);
  
  Shouldn't these things be specific to a SoC? With this RFC, it happens
  that you will predefine the IOMMU attachment and mapping information for
  devices in a common location (dev-sysmmu.c). This may lead to problems
  because there are some IPs with SYSMMU support in exynos5 that are not
  available in exynos4 (e.g. GSC, FIMC-LITE, FIMC-ISP). Previously we used
  to do the above declaration in individual machine files, which I think was
  more meaningful.
 
 Right, I simplified the code too much. Keeping these definitions inside
 machine files was a better idea. I completely forgot that the Exynos
 sub-platform now covers both Exynos4 and Exynos5 SoC families.

Ideally the information about iommu attachment should come from the
device tree. We have the dma-ranges properties that define how a dma
address space is mapped. I am not entirely sure how that works when you
have multiple IOMMUs and if that requires defining additional properties,
but I think we should make it so that we don't have to hardcode specific
devices in the source.
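For illustration only, a hypothetical device tree fragment for such an attachment might look like the following (the property names and values are invented for this example; no such binding existed at the time):

```dts
/* Hypothetical fragment: illustrative only, not a real binding. */
fimc0: fimc@11800000 {
	compatible = "samsung,exynos4-fimc";
	reg = <0x11800000 0x1000>;
	/* which SYSMMU instance translates this master's DMA... */
	iommu = <&sysmmu_fimc0>;
	/* ...and the IO virtual range the mapping should cover */
	dma-ranges = <0x20000000 0x20000000 0x08000000>;
};
```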

Arnd


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Arnd Bergmann
On Thursday 12 April 2012, Marek Szyprowski wrote:
 Scatter lists were initially designed for disk based block io operations,
 hence the presence of the in-page offsets and lengths for each chunk. For
 multimedia use cases, providing an array of struct pages and asking
 dma-mapping to map them into contiguous memory is probably all we need. I
 wonder if introducing such new calls is a good idea. Arnd, what do you
 think? It will definitely simplify the drivers and improve code
 understanding. On the other hand it requires a significant amount of work
 in the dma-mapping framework for all architectures, but that's not a big
 issue for me.

My feeling is that it's too much like the existing _sg version, so I wouldn't
add yet another variant. While having a simple page array is definitely
simpler and potentially faster, I think the API is already too complex
and we need to be very careful with new additions.

Arnd


RE: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface

2012-04-12 Thread Marek Szyprowski




 -Original Message-
 From: Arnd Bergmann [mailto:a...@arndb.de]
 Sent: Thursday, April 12, 2012 1:09 PM
 To: Marek Szyprowski
 Cc: 'Subash Patel'; linux-arm-ker...@lists.infradead.org; 
 linaro-mm-...@lists.linaro.org;
 linux...@kvack.org; linux-a...@vger.kernel.org; 
 iommu@lists.linux-foundation.org; 'Kyungmin
 Park'; 'Joerg Roedel'; 'Russell King - ARM Linux'; 'Chunsang Jeong'; 'Krishna 
 Reddy'; 'KyongHo
 Cho'; Andrzej Pietrasiewicz; 'Benjamin Herrenschmidt'; 'Konrad Rzeszutek 
 Wilk'; 'Hiroshi Doyu'
 Subject: Re: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping 
 interface
 
 On Thursday 12 April 2012, Marek Szyprowski wrote:
  +
    +/*
    + * s5p_sysmmu_late_init
    + * Create DMA-mapping IOMMU context for specified devices. This function must
    + * be called later, once SYSMMU driver gets registered and probed.
    + */
    +static int __init s5p_sysmmu_late_init(void)
    +{
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc0).dev, &s5p_device_fimc0.dev);
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc1).dev, &s5p_device_fimc1.dev);
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc2).dev, &s5p_device_fimc2.dev);
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc3).dev, &s5p_device_fimc3.dev);
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_l).dev, &s5p_device_mfc_l.dev);
    +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_r).dev, &s5p_device_mfc_r.dev);
    +
    +	s5p_create_iommu_mapping(&s5p_device_fimc0.dev, 0x20000000, SZ_128M, 4);
    +	s5p_create_iommu_mapping(&s5p_device_fimc1.dev, 0x20000000, SZ_128M, 4);
    +	s5p_create_iommu_mapping(&s5p_device_fimc2.dev, 0x20000000, SZ_128M, 4);
    +	s5p_create_iommu_mapping(&s5p_device_fimc3.dev, 0x20000000, SZ_128M, 4);
    +	s5p_create_iommu_mapping(&s5p_device_mfc_l.dev, 0x20000000, SZ_128M, 4);
    +	s5p_create_iommu_mapping(&s5p_device_mfc_r.dev, 0x40000000, SZ_128M, 4);
    +
    +	return 0;
    +}
    +device_initcall(s5p_sysmmu_late_init);
  
    Shouldn't these things be specific to a SoC? With this RFC, it happens
    that you will predefine the IOMMU attachment and mapping information for
    devices in a common location (dev-sysmmu.c). This may lead to problems
    because there are some IPs with SYSMMU support in exynos5 that are not
    available in exynos4 (e.g. GSC, FIMC-LITE, FIMC-ISP). Previously we used
    to do the above declaration in individual machine files, which I think was
    more meaningful.
 
   Right, I simplified the code too much. Keeping these definitions inside
   machine files was a better idea. I completely forgot that the Exynos
   sub-platform now covers both Exynos4 and Exynos5 SoC families.
 
 Ideally the information about iommu attachment should come from the
 device tree. We have the dma-ranges properties that define how a dma
 address space is mapped. I am not entirely sure how that works when you
  have multiple IOMMUs and if that requires defining additional properties,
 but I think we should make it so that we don't have to hardcode specific
 devices in the source.

Right, until that time machine/board files are imho ok.

Best regards
-- 
Marek Szyprowski
Samsung Poland R&D Center




Re: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface

2012-04-12 Thread Hiroshi Doyu
From: Marek Szyprowski m.szyprow...@samsung.com
Subject: RE: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping interface
Date: Thu, 12 Apr 2012 14:13:37 +0200
Message-ID: 028f01cd18a5$b0721770$11564650$%szyprow...@samsung.com





  -Original Message-
  From: Arnd Bergmann [mailto:a...@arndb.de]
  Sent: Thursday, April 12, 2012 1:09 PM
  To: Marek Szyprowski
  Cc: 'Subash Patel'; linux-arm-ker...@lists.infradead.org; 
  linaro-mm-...@lists.linaro.org;
  linux...@kvack.org; linux-a...@vger.kernel.org; 
  iommu@lists.linux-foundation.org; 'Kyungmin
  Park'; 'Joerg Roedel'; 'Russell King - ARM Linux'; 'Chunsang Jeong'; 
  'Krishna Reddy'; 'KyongHo
  Cho'; Andrzej Pietrasiewicz; 'Benjamin Herrenschmidt'; 'Konrad Rzeszutek 
  Wilk'; 'Hiroshi Doyu'
  Subject: Re: [PATCH] ARM: Exynos4: integrate SYSMMU driver with DMA-mapping 
  interface
 
  On Thursday 12 April 2012, Marek Szyprowski wrote:
   +
     +/*
     + * s5p_sysmmu_late_init
     + * Create DMA-mapping IOMMU context for specified devices. This function must
     + * be called later, once SYSMMU driver gets registered and probed.
     + */
     +static int __init s5p_sysmmu_late_init(void)
     +{
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc0).dev, &s5p_device_fimc0.dev);
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc1).dev, &s5p_device_fimc1.dev);
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc2).dev, &s5p_device_fimc2.dev);
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(fimc3).dev, &s5p_device_fimc3.dev);
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_l).dev, &s5p_device_mfc_l.dev);
     +	platform_set_sysmmu(&SYSMMU_PLATDEV(mfc_r).dev, &s5p_device_mfc_r.dev);
     +
     +	s5p_create_iommu_mapping(&s5p_device_fimc0.dev, 0x20000000, SZ_128M, 4);
     +	s5p_create_iommu_mapping(&s5p_device_fimc1.dev, 0x20000000, SZ_128M, 4);
     +	s5p_create_iommu_mapping(&s5p_device_fimc2.dev, 0x20000000, SZ_128M, 4);
     +	s5p_create_iommu_mapping(&s5p_device_fimc3.dev, 0x20000000, SZ_128M, 4);
     +	s5p_create_iommu_mapping(&s5p_device_mfc_l.dev, 0x20000000, SZ_128M, 4);
     +	s5p_create_iommu_mapping(&s5p_device_mfc_r.dev, 0x40000000, SZ_128M, 4);
     +
     +	return 0;
     +}
     +device_initcall(s5p_sysmmu_late_init);
   
Shouldn't these things be specific to a SoC? With this RFC, you end up
predefining the IOMMU attachment and mapping information for devices in a
common location (dev-sysmmu.c). This may lead to problems because there are
some IPs with SYSMMU support in Exynos5 that are not available in Exynos4
(e.g. GSC, FIMC-LITE, FIMC-ISP). Previously we used to do the above
declaration in the individual machine file, which I think was more
meaningful.
  
   Right, I simplified the code too much. Keeping these definitions inside
   machine files was a better idea. I completely forgot that the Exynos
   sub-platform now covers both Exynos4 and Exynos5 SoC families.
 
  Ideally the information about iommu attachment should come from the
  device tree. We have the dma-ranges properties that define how a dma
  address space is mapped. I am not entirely sure how that works when you
  have multiple IOMMUs and if that requires defining additional properties,
  but I think we should make it so that we don't have to hardcode specific
  devices in the source.

 Right, until that time machine/board files are imho ok.

In Tegra30 there are quite a few IOMMU-attachable (platform) devices, and it
would be quite nice for us to configure their attachment, address range and
IOMMU device ID (ASID) in the DT in advance, rather than inserting code to
attach those devices here and there.
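As a purely illustrative sketch of what such a DT description might look like (the `iommu` property encoding below is invented for this example and is not an accepted binding; only the gr2d base address is taken from the Tegra30 memory map):

```dts
	gr2d: gr2d@54140000 {
		compatible = "nvidia,tegra30-gr2d";
		reg = <0x54140000 0x00040000>;
		/* hypothetical encoding: IOMMU phandle, ASID, window base, size */
		iommu = <&smmu 1 0xe0000000 0x02000000>;
	};
```

With something like this, the IOMMU driver (or the core) could attach the device to the given ASID and set up the mapping before the device driver probes, instead of open-coding the attachments per board.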

Experimentally I added some hook in platform_device_add() as below,
but apparently this won't be accepted.

From cb34373ebbf025e42ec6d8fea2e19e74ba41231e Mon Sep 17 00:00:00 2001
From: Hiroshi DOYU hd...@nvidia.com
Date: Thu, 22 Mar 2012 16:06:27 +0200
Subject: [PATCH 1/1] ARM: dma-mapping: All platform_device DMA API'able

Signed-off-by: Hiroshi DOYU hd...@nvidia.com
---
 arch/arm/mach-tegra/board-cardhu.c |7 ++-
 arch/arm/mm/dma-mapping.c  |5 -
 drivers/base/platform.c|   11 +--
 include/linux/device.h |2 ++
 4 files changed, 21 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mach-tegra/board-cardhu.c 
b/arch/arm/mach-tegra/board-cardhu.c
index 339011e..38f5292 100644
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index ca5544e..5e6dbe0 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -758,6 +758,9 @@ size_t arm_iommu_iova_avail(struct device *dev)
 	size_t size = 0;
 	unsigned long start = 0;
 
+	BUG_ON(!dev);
+	BUG_ON(!mapping);
+
 	spin_lock_irqsave(&mapping->lock, flags);
while (1) {
unsigned long end;
@@ -1513,7 +1516,7 @@ arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Marek Szyprowski wrote:
[...]
 We already have dma_map_page() and dma_map_single() which are very similar. 
 Maybe adding dma_map_pages() won't be such a bad idea? 
 
 If not, maybe we should provide some kind of helper function that converts 
 a page array to a scatterlist and then maps it.

drm_prime_pages_to_sg() seems to do exactly that.
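For reference, a minimal kernel-style sketch of such a helper, modeled loosely on what drm_prime_pages_to_sg() does combined with a dma_map_sg() call (this is not the exact upstream code; error handling is trimmed for brevity):

```c
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>

/*
 * Sketch: build an sg_table from an array of pages and map it for DMA.
 * One sg entry per page is created; dma_map_sg() may coalesce adjacent
 * entries when the IOMMU or the physical layout allows it.
 */
static struct sg_table *pages_to_sg_and_map(struct device *dev,
					    struct page **pages,
					    unsigned int nr_pages)
{
	struct scatterlist *sg;
	struct sg_table *sgt;
	unsigned int i;

	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
	if (!sgt || sg_alloc_table(sgt, nr_pages, GFP_KERNEL))
		goto err;

	for_each_sg(sgt->sgl, sg, nr_pages, i)
		sg_set_page(sg, pages[i], PAGE_SIZE, 0);

	/* hand the list to the DMA API; returns 0 on failure */
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		goto err;

	return sgt;

err:
	kfree(sgt);
	return NULL;
}
```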

Thierry


___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Sascha Hauer
On Wed, Apr 11, 2012 at 12:12:14PM -0600, Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
  This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
  currently has rudimentary GEM support and can run a console on the
  framebuffer as well as X using the xf86-video-modesetting driver.
  Only the RGB output is supported. Quite a lot of things still need
  to be worked out and there is a lot of room for cleanup.
 
 I'll let Jon Mayo comment on the actual driver implementation, since
 he's a lot more familiar with Tegra's display hardware. However, I have
 some general comments below.
 
   .../devicetree/bindings/gpu/drm/tegra.txt  |   24 +
   arch/arm/mach-tegra/board-dt-tegra20.c |3 +
   arch/arm/mach-tegra/tegra2_clocks.c|8 +-
   drivers/gpu/drm/Kconfig|2 +
   drivers/gpu/drm/Makefile   |1 +
   drivers/gpu/drm/tegra/Kconfig  |   10 +
   drivers/gpu/drm/tegra/Makefile |5 +
   drivers/gpu/drm/tegra/tegra_drv.c  | 2241 
  
   drivers/gpu/drm/tegra/tegra_drv.h  |  184 ++
   include/drm/tegra_drm.h|   44 +
 
 Splitting this patch into two, between arch/arm and drivers/gpu would be
 a good idea.
 
  diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
  b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
 
   +   drm@54200000 {
   +   compatible = "nvidia,tegra20-drm";
 
 This doesn't seem right; there isn't a DRM hardware module on Tegra,
 since DRM is a Linux/software-specific term.
 
 I'd at least expect to see this compatible flag be renamed to something
 more like nvidia,tegra20-dc (dc==display controller).
 
 Since Tegra has two display controller modules (I believe identical?),
 and numerous other independent(?) blocks, I'd expect to see multiple
 nodes in device tree, one per hardware block, such that each block gets
 its own device and driver. That said, I'm not familiar enough with
 Tegra's display and graphics HW to know if this makes sense. Jon, what's
 your take here? The clock change below, and in particular the original
 code there that we use downstream, lends weight to my argument.
 
   +   reg = <0x54200000 0x00040000   /* display A */
   +  0x54240000 0x00040000   /* display B */
   +  0x58000000 0x02000000>; /* GART aperture */
   +   interrupts = <0 73 0x04   /* display A */
   +  0 74 0x04>; /* display B */
  +
  +   lvds {
   +   type = "rgb";
 
 These sub-nodes probably want a compatible property rather than a
 type property.
 
   +   size = <345 194>;
   +
   +   default-mode {
   +   pixel-clock = <61715000>;
   +   vertical-refresh = <50>;
   +   resolution = <1366 768>;
   +   bits-per-pixel = <16>;
   +   horizontal-timings = <4 136 2 36>;
   +   vertical-timings = <2 4 21 10>;
   +   };
   +   };
 
 I imagine that quite a bit of thought needs to be put into the output
 part of the binding in order to:
 
 * Model the outputs/connectors separately from display controllers.
 * Make sure that the basic infra-structure for representing an output is
 general enough to be extensible to all the kinds of outputs we support,
 not just the LVDS output.
 * We were wondering about putting an EDID into the DT to represent the
 display modes, so that all outputs had EDIDs rather than real monitors
 having EDIDs, and fixed internal displays having some other
 representation of capabilities.

You might want to have a look at the sdrm patches I recently posted to
dri-devel and the ARM Linux kernel list. Among other things they allow
registering crtcs/connectors/encoders separately so that each of them can
have its own representation in the devicetree. I haven't looked into
devicetree support for DRM, but with or without devicetree the problem
that we do not have a single PCI card for registering all DRM components
is the same.

Sascha

-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Alex Deucher
On Thu, Apr 12, 2012 at 5:33 AM, Thierry Reding
thierry.red...@avionic-design.de wrote:
 * Sascha Hauer wrote:
 You might want to have a look at the sdrm patches I recently posted to
 dri-devel and the ARM Linux kernel list. Among other things they allow
 registering crtcs/connectors/encoders separately so that each of them can
 have its own representation in the devicetree. I haven't looked into
 devicetree support for DRM, but with or without devicetree the problem
 that we do not have a single PCI card for registering all DRM components
 is the same.

 I'll do that. One interesting use-case that's been on my mind for some time
 is whether it would be possible to provide a CRTC via DRM that isn't part of
 the SoC or DRM device but which can display a framebuffer prepared by the
 DRM framework.

 In other words I would like to use the Tegra hardware to render content into
 a framebuffer (using potentially the 3D engine or HW accelerated video
 decoding blocks) but display that framebuffer with a CRTC registered by a
 different driver (perhaps provided by a PCIe or USB device).

 I think such a setup would be possible if the CRTC registration can be
 decoupled from the DRM driver. Perhaps sdrm even supports that already?

You should be able to do something like that already with dma_buf and
the drm prime infrastructure.  There's even a drm driver for the udl
USB framebuffer devices.
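On the importing side that would look roughly like the following (a sketch only; the functions are from the dma-buf core as it exists today, with error handling mostly omitted):

```c
#include <linux/dma-buf.h>

/*
 * Sketch: import a buffer exported by another driver via dma_buf and
 * get at its backing storage as a scatterlist, which the importing
 * CRTC driver can then scan out from.
 */
static struct sg_table *import_buffer(struct dma_buf *buf,
				      struct device *importer)
{
	struct dma_buf_attachment *attach;

	attach = dma_buf_attach(buf, importer);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	/* the exporter hands back an sg_table describing the pages */
	return dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
}
```

The drm prime ioctls then only need to translate between GEM handles and dma_buf file descriptors on top of this.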

Alex


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Stephen Warren
On 04/12/2012 12:50 AM, Thierry Reding wrote:
 * Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.
...
 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
 b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
...
 This doesn't seem right, and couples back to my assertion above that the
 two display controller modules probably deserve separate device objects,
 named e.g. tegradc.*.
 
 I think I understand where you're going with this. Does the following look
 more correct?
 
   disp1 : dc@54200000 {
   compatible = "nvidia,tegra20-dc";
   reg = <0x54200000 0x00040000>;
   interrupts = <0 73 0x04>;
   };
 
   disp2 : dc@54240000 {
   compatible = "nvidia,tegra20-dc";
   reg = <0x54240000 0x00040000>;
   interrupts = <0 74 0x04>;
   };

Those look good.

   drm {
   compatible = "nvidia,tegra20-drm";

I don't think having an explicit drm node is the right approach; drm
is after all a SW term and the DT should be describing HW. Having some
kind of top-level node almost certainly makes sense, but naming it
something related to tegra display rather than drm would be appropriate.

   lvds {
   compatible = ...;
   dc = <&disp1>;
   };

Aren't the outputs separate HW blocks too, such that they would have
their own compatible/reg properties and their own drivers, and be
outside the top-level drm/display node?

I believe the mapping between the output this node represents and the
display controller (dc above) that it uses is not static; the
connectivity should be set up at runtime, and possibly dynamically
alterable via xrandr or equivalent.

   hdmi {
   compatible = ...;
   dc = <&disp2>;
   };
   };

 +static int tegra_drm_parse_dt(struct platform_device *pdev)
 +{
 ...
 +   pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
 +   if (!pdata)
 +   return -ENOMEM;
 ...
  +   dev->platform_data = pdata;

 I don't think you should assign to dev-platform_data. If you do, then I
 think the following could happen:

 * During first probe, the assignment above happens
 * Module is removed, hence device removed, hence dev->platform_data
 freed, but not zero'd out
 * Module is re-inserted, finds that dev->platform_data != NULL and
 proceeds to use it.
 
 Actually the code does zero out platform_data in tegra_drm_remove(). In fact
 I did test module unloading and reloading and it works properly. But it
 should probably be zeroed in case drm_platform_init() fails as well.

 Instead, the active platform data should probably be stored in a
 tegra_drm struct that's stored in the dev's private data.
 tegra_drm_probe() might then look more like:

 struct tegra_drm *tdev;

 tdev = devm_kzalloc();
 tdev->pdata = pdev->dev.platform_data;
 if (!tdev->pdata)
 tdev->pdata = tegra_drm_parse_dt();
 if (!tdev->pdata)
 return -EINVAL;

 dev_set_drvdata(dev, tdev);

 This is safe, since probe() will never assume that dev_get_drvdata()
 might contain something valid before probe() sets it.
 
 I prefer my approach over storing the data in an extra field because the
 device platform_data field is where everybody would expect it. Furthermore
 this wouldn't be relevant if we decided not to support non-DT setups.

Drivers are expected to use pre-existing platform data, if provided.
This might happen in order to work around bugs in device tree content.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/12/2012 12:50 AM, Thierry Reding wrote:
  drm {
  compatible = nvidia,tegra20-drm;
 
 I don't think having an explicit drm node is the right approach; drm
 is after all a SW term and the DT should be describing HW. Having some
 kind of top-level node almost certainly makes sense, but naming it
 something related to tegra display rather than drm would be appropriate.

In this case there really isn't a HW device that can be represented. But in
the end it's still the DRM driver that needs to bind to the device. However,
the other graphics devices (MPE, VI/CSI, EPP, GR2D and GR3D) probably need
drivers bound against them as well.

Would it be possible for someone at NVIDIA to provide some more details about
what those other devices are? GR2D and GR3D seem obvious, MPE might be video
decoding, VI/CSI video input and camera interface? As to EPP I have no idea.

Maybe one solution would be to have a top-level DRM device with a register
map from 0x54000000 to 0x547fffff, which the TRM designates as host
registers. Then subnodes could be used for the subdevices.

  lvds {
  compatible = ...;
   dc = <&disp1>;
  };
 
 Aren't the outputs separate HW blocks too, such that they would have
 their own compatible/reg properties and their own drivers, and be
 outside the top-level drm/display node?

The RGB output is programmed via the display controller registers. For HDMI,
TVO and DSI there are indeed separate sets of registers in addition to the
display controller's. So perhaps for those more nodes would be required:

hdmi : hdmi@54280000 {
compatible = "nvidia,tegra20-hdmi";
reg = <0x54280000 0x00040000>;
};

And hook that up with the HDMI output node of the DRM node:

drm {
hdmi {
compatible = ...;
connector = <&hdmi>;
dc = <&disp2>;
};
};

Maybe with this setup we no longer need the compatible property since it
will already be inherent in the connector property. There will have to be
special handling for the RGB output, which could be the default if the
connector property is missing.

 I believe the mapping between the output this node represents and the
 display controller (dc above) that it uses is not static; the
 connectivity should be set up at runtime, and possibly dynamically
 alterable via xrandr or equivalent.

I think the mapping is always static for a given board. There is no way to
switch an HDMI port to LVDS at runtime. But maybe I misunderstand what you're
saying.

  Instead, the active platform data should probably be stored in a
  tegra_drm struct that's stored in the dev's private data.
  tegra_drm_probe() might then look more like:
 
  struct tegra_drm *tdev;
 
  tdev = devm_kzalloc();
  tdev->pdata = pdev->dev.platform_data;
  if (!tdev->pdata)
  tdev->pdata = tegra_drm_parse_dt();
  if (!tdev->pdata)
  return -EINVAL;
 
  dev_set_drvdata(dev, tdev);
 
  This is safe, since probe() will never assume that dev_get_drvdata()
  might contain something valid before probe() sets it.
  
  I prefer my approach over storing the data in an extra field because the
  device platform_data field is where everybody would expect it. Furthermore
  this wouldn't be relevant if we decided not to support non-DT setups.
 
 Drivers are expected to use pre-existing platform data, if provided.
 This might happen in order to work around bugs in device tree content.

Okay I see. I'll have to store it in a separate field in the private
structure then.
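For illustration, the resulting probe could look roughly like this (a sketch based on Stephen's outline; the struct layout and the tegra_drm_parse_dt() signature are assumptions, not final code):

```c
/* sketch of a driver-private struct holding the active platform data */
struct tegra_drm {
	struct tegra_drm_platform_data *pdata;
};

static int tegra_drm_probe(struct platform_device *pdev)
{
	struct tegra_drm *tdev;

	tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
	if (!tdev)
		return -ENOMEM;

	/* prefer pre-existing platform data over device tree content */
	tdev->pdata = pdev->dev.platform_data;
	if (!tdev->pdata)
		tdev->pdata = tegra_drm_parse_dt(pdev);
	if (!tdev->pdata)
		return -EINVAL;

	platform_set_drvdata(pdev, tdev);

	return 0;
}
```

This leaves dev->platform_data untouched, so nothing stale survives a module unload/reload cycle.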

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Lucas Stach
On Wednesday, 2012-04-11 at 15:18 +0000, Arnd Bergmann wrote:
 On Wednesday 11 April 2012, Thierry Reding wrote:
* Daniel Vetter wrote:
   On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
* Daniel Vetter wrote:
 On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
  This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
  currently has rudimentary GEM support and can run a console on the
  framebuffer as well as X using the xf86-video-modesetting driver.
  Only the RGB output is supported. Quite a lot of things still need
  to be worked out and there is a lot of room for cleanup.
 
 Indeed, after a quick look there are tons of functions that are just 
 stubs
 ;-) One thing I wonder though is why you directly use the iommu api 
 and
 not wrap it up into dma_map? Is arm infrastructure just not there yet 
 or
 do you plan to tightly integrate the tegra drm with the iommu (e.g. 
 for
 process space switching or similarly funky stuff)?

I'm not sure I know what you are referring to. Looking for all users of
iommu_map() doesn't turn up anything related to dma_map. Can you point 
me in
the right direction?
   
   Well, you use the iommu api to map/unmap memory into the iommu for tegra,
   whereas usually device drivers just use the dma api to do that. The usual
   interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
    around. I'm just wondering why you've chosen this.
  
  I don't think this works on ARM. Maybe I'm not seeing the whole picture but
  judging by a quick look through the kernel tree there aren't any users that
  map DMA memory through an IOMMU.
 
 
 dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
 patches to make that work on ARM, hopefully going into v3.5, so you could
 use those.

Just jumping in here to make sure everyone understands the limitations
of the Tegra 2 GART IOMMU we are talking about here. It has no isolation
capabilities and a really small remapping window of 32 MB, so it's
impossible to remap every buffer used by the graphics engines. The only
sane way to handle this is to set aside a chunk of stolen system memory
as VRAM and let a memory manager like TTM handle the allocation of
linear regions and GART mappings. This means a tighter integration of
the DRM driver and the IOMMU, where I think that using the IOMMU API
directly and completely controlling the GART from one driver is the
right way to go, for a number of reasons. My biggest concern is that we
can't sanely handle running out of remapping space if we go through
the dma_map API.
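The kind of allocator tie-in meant here could be sketched with the drm_mm range allocator roughly as follows (an illustration only; the aperture base matches the GART address from the binding above, but the helper names and eviction policy are made up):

```c
#include <drm/drm_mm.h>

/* Tegra 2 GART: a single 32 MB remapping window */
#define GART_BASE 0x58000000UL
#define GART_SIZE (32UL << 20)

static struct drm_mm gart_mm;

static int gart_mm_setup(void)
{
	/* manage the aperture with the drm_mm range allocator */
	return drm_mm_init(&gart_mm, GART_BASE, GART_SIZE);
}

static struct drm_mm_node *gart_alloc(unsigned long size)
{
	struct drm_mm_node *node;

	node = drm_mm_search_free(&gart_mm, size, PAGE_SIZE, 0);
	if (!node)
		return NULL; /* out of remapping space: caller must evict */

	return drm_mm_get_block(node, size, PAGE_SIZE);
}
```

The point is that the "out of remapping space" case surfaces to the driver, which can evict an idle mapping and retry; the dma_map API has no channel for that kind of feedback.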

It's too late for me to go into the details now, but I wanted to make it
clear that I think that using the IOMMU only and exclusively from the
DRM driver with a high level of tie in is the way to go. If you want to
know more details I'm available to discuss this matter in the next days.

-- Lucas
 
   Arnd

