Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Stephen Warren
On 04/15/2012 02:39 AM, Thierry Reding wrote:
 * Stephen Warren wrote:
 On 04/13/2012 03:14 AM, Thierry Reding wrote:
 display-controllers = <&disp1 &disp2>;
 outputs = <&lvds &hdmi &tvo &dsi>;

 I don't think you need both the child nodes and those two properties.

 In other words, I think you either want:

 graphics@54000000 {
  ... a bunch of child nodes
  };

 or you want:

  disp1: dc@54200000 {
  ...
  };
  disp2: dc@54240000 {
  ...
  };
  ... all the other graphics nodes

  graphics@54000000 {
          display-controllers = <&disp1 &disp2>;
          outputs = <&lvds &hdmi &tvo &dsi>;
  };

 In the former case, presumably the drivers for the child nodes would
 make some API call into the parent node and just register themselves
 directly as a certain type of driver, so avoiding the
 display-controllers/outputs properties.
 
 I think I like the former better. The way I understand it the children of the
 graphics node will have to be registered explicitly by the DRM driver because
 of_platform_populate() doesn't work recursively. That would ensure that the
 DRM driver can setup the CRTC and output registries before the other devices
 call back into the DRM to register themselves.

Yes, with the first way, the DRM driver will have to call
of_platform_populate() recursively to make this work.
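
For illustration, a minimal sketch of that call as the Tegra DRM driver's
probe might make it (the probe function name is hypothetical;
of_platform_populate() only instantiates the children of the node it is
given, which is why the driver has to call it explicitly):

#include <linux/of_platform.h>
#include <linux/platform_device.h>

static int tegra_graphics_probe(struct platform_device *pdev)
{
        /* Create platform devices for the dc/hdmi/... child nodes so
         * that their drivers can register with the DRM driver later.
         */
        return of_platform_populate(pdev->dev.of_node, NULL, NULL,
                                    &pdev->dev);
}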

The thing here is that the device tree should model hardware, not be
designed purely to match the device registration needs of the DRM
driver. I'm not sure that it's correct to model all those devices as
children of the top-level graphics object; I /think/ all the devices are
flat on a single bus, and hence not children of each other. That all
said, I guess having the nodes as children isn't too far off how the HW
is designed (even if the register accesses aren't on a child bus, the
modules at least logically are grouped together in an umbrella
situation), so I wouldn't push back on the first option above that you
prefer.

 /* initial configuration */
 configuration {
         lvds {
                 display-controller = <&disp1>;
                 output = <&lvds>;
         };

         hdmi {
                 display-controller = <&disp2>;
                 output = <&hdmi>;
         };
 };
 };

 I added an additional node for the initial configuration so that the driver
 knows which mapping to setup at boot.

 Isn't that kind of thing usually set up by the video= KMS-related kernel
 command-line option? See Documentation/fb/modedb.txt. Again here, I
 think the actual display controllers would be allocated to whichever
 outputs get used on a first-come, first-served basis, so no need for the
 display-controller property above either way.
 
 Boards should still be able to boot and display a console on the standard
 output even if the user doesn't provide a video= option. Shouldn't there be a
 way for a board DTS to specify what the default (or even allowed) connections
 are?

Why wouldn't the default be to light up all outputs that have an
attached display, or an algorithm something like:

* If internal LCD is present, use that
* Else, if HDMI display plugged in, use that
...
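
A hedged sketch of such a fallback policy; every name here is hypothetical,
since no such helper exists in the patch:

struct tegra_output;

/* hypothetical detection helpers, not part of the RFC */
extern bool output_board_present(struct tegra_output *out);
extern bool output_display_detected(struct tegra_output *out);

static struct tegra_output *pick_default_output(struct tegra_output *lcd,
                                                struct tegra_output *hdmi)
{
        if (lcd && output_board_present(lcd))
                return lcd;     /* internal LCD wins */

        if (hdmi && output_display_detected(hdmi))
                return hdmi;    /* else a plugged-in HDMI display */

        return NULL;            /* else leave it to video= or similar */
}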

 Evaluation hardware like the Harmony might have LVDS, HDMI and VGA connectors
 to provide for a wide range of use cases. The Plutux for instance has only an
 HDMI connector and the Medcom has only LVDS. For the Medcom it would be quite
 confusing for people to suddenly see an HDMI-1 connector pop up e.g. in
 xrandr. It would be equally useless for the Plutux to show up as supporting
 an LVDS or VGA connector.

So the device tree for those devices would disable (or not include) the
connectors that were not present on the board.
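
A sketch of how a driver could honor that, using the standard
status = "disabled" device tree convention via of_device_is_available()
(the registration step itself is elided):

#include <linux/of.h>

static void register_present_connectors(struct device_node *parent)
{
        struct device_node *child;

        for_each_child_of_node(parent, child) {
                /* skip connectors the board DTS marked disabled */
                if (!of_device_is_available(child))
                        continue;

                /* ... register a connector for this node ... */
        }
}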

...
 I see. Maybe this could be used for board-specific configuration? For
 example, the Plutux could have something like this:
 
   connectors {
           hdmi {
                   output = <&hdmi>;
                   ddc = <&i2c2>;
           };
   };

 The Medcom could have:

   connectors {
           lvds {
                   output = <&lvds>;
                   edid = <&edid>;
           };
   };

 While Harmony could be more generic and provide more outputs:

   connectors {
           lvds {
                   output = <&lvds>;
                   ddc = <&i2c1>;
           };

           vga {
                   /* which output is used for VGA? */
                   output = <...>;
                   ddc = <&i2c2>;
           };

           hdmi {
                   output = <&hdmi>;
                   ddc = <&i2c3>;
           };
   };

That looks like a reasonable start.

 Has there been any discussion as to how EDID data would best be represented
 in DT? Should it just be a binary blob or rather some textual representation?

I think a binary blob makes sense - that's the exact same format it'd
have if read over the DDC I2C bus.

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/15/2012 02:39 AM, Thierry Reding wrote:
  I think I like the former better. The way I understand it the children of the
  graphics node will have to be registered explicitly by the DRM driver because
  of_platform_populate() doesn't work recursively. That would ensure that the
  DRM driver can setup the CRTC and output registries before the other devices
  call back into the DRM to register themselves.
 
 Yes, with the first way, the DRM driver will have to call
 of_platform_populate() recursively to make this work.
 
 The thing here is that the device tree should model hardware, not be
 designed purely to match the device registration needs of the DRM
 driver. I'm not sure that it's correct to model all those devices as
 children of the top-level graphics object; I /think/ all the devices are
  flat on a single bus, and hence not children of each other. That all
 said, I guess having the nodes as children isn't too far off how the HW
 is designed (even if the register accesses aren't on a child bus, the
 modules at least logically are grouped together in an umbrella
 situation), so I wouldn't push back on the first option above that you
 prefer.

After trying to implement this I'm not so sure anymore that this is the best
approach. I think I'll have to play around with this some more to see what
fits best.

  Boards should still be able to boot and display a console on the standard
  output even if the user doesn't provide a video= option. Shouldn't there be a
  way for a board DTS to specify what the default (or even allowed) connections
  are?
 
 Why wouldn't the default be to light up all outputs that have an
 attached display, or an algorithm something like:
 
 * If internal LCD is present, use that
 * Else, if HDMI display plugged in, use that
 ...

That sounds doable.

  Evaluation hardware like the Harmony might have LVDS, HDMI and VGA connectors
  to provide for a wide range of use cases. The Plutux for instance has only an
  HDMI connector and the Medcom has only LVDS. For the Medcom it would be quite
  confusing for people to suddenly see an HDMI-1 connector pop up e.g. in
  xrandr. It would be equally useless for the Plutux to show up as supporting
  an LVDS or VGA connector.
 
 So the device tree for those devices would disable (or not include) the
 connectors that were not present on the board.

Okay, makes sense.

  Has there been any discussion as to how EDID data would best be represented
  in DT? Should it just be a binary blob or rather some textual representation?
 
 I think a binary blob makes sense - that's the exact same format it'd
 have if read over the DDC I2C bus.

DTC has /incbin/ for that. Is arch/arm/boot/dts still the correct place for
EDID blobs? I could add tegra-medcom.edid if that's okay.
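
On the driver side, a blob pulled in with /incbin/ would just appear as a
property; a hedged sketch, assuming the property ends up being named "edid"
(the name is an assumption, not an agreed binding):

#include <linux/of.h>

static const u8 *get_edid_blob(const struct device_node *np, int *len)
{
        const u8 *edid = of_get_property(np, "edid", len);

        /* a base EDID block is 128 bytes */
        if (edid && *len < 128)
                return NULL;

        return edid;
}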

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Stephen Warren
On 04/16/2012 12:48 PM, Thierry Reding wrote:
 * Stephen Warren wrote:
...
 Has there been any discussion as to how EDID data would best be represented
 in DT? Should it just be a binary blob or rather some textual representation?

 I think a binary blob makes sense - that's the exact same format it'd
 have if read over the DDC I2C bus.
 
 DTC has /incbin/ for that. Is arch/arm/boot/dts still the correct place for
 EDID blobs? I could add tegra-medcom.edid if that's okay.

As far as I know, yes.

Perhaps we'll want to start putting stuff in SoC-specific
sub-directories given the number of files we'll end up with here
(irrespective of EDID etc.), but I haven't seen any move towards that yet.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-16 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/16/2012 12:48 PM, Thierry Reding wrote:
  * Stephen Warren wrote:
 ...
  Has there been any discussion as to how EDID data would best be represented
  in DT? Should it just be a binary blob or rather some textual representation?
 
  I think a binary blob makes sense - that's the exact same format it'd
  have if read over the DDC I2C bus.
  
  DTC has /incbin/ for that. Is arch/arm/boot/dts still the correct place for
  EDID blobs? I could add tegra-medcom.edid if that's okay.
 
 As far as I know, yes.
 
 Perhaps we'll want to start putting stuff in SoC-specific
 sub-directories given the number of files we'll end up with here
 (irrespective of EDID etc.), but I haven't seen any move towards that yet.

Yes, especially as more machines are moving to DT, that directory will soon
become quite cluttered. I suppose a tegra subdirectory wouldn't hurt.

I've been looking about for tools to generate EDID data but didn't find
anything useful. Does anyone know of any tool that's more convenient than
manually filling a struct edid and writing that to a file?

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-13 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/12/2012 11:44 AM, Thierry Reding wrote:
[...]
 And given that, I don't think we should name the node after some
 OS-specific software concept. Device tree is intended to model hardware.
[...]
  Maybe one solution would be to have a top-level DRM device with a register
  map from 0x54000000 to 0x547fffff, which the TRM designates as host
  registers. Then subnodes could be used for the subdevices.
 
 Ah yes, just what I was thinking above:-)

I came up with the following:

/* host1x */
host1x: host1x@50000000 {
        reg = <0x50000000 0x00024000>;
        interrupts = <0 64 0x04   /* cop syncpt */
                      0 65 0x04   /* mpcore syncpt */
                      0 66 0x04   /* cop general */
                      0 67 0x04>; /* mpcore general */
};

/* graphics host */
/* graphics host */
graphics@54000000 {
        compatible = "nvidia,tegra20-graphics";

        #address-cells = <1>;
        #size-cells = <1>;
        ranges = <0 0x54000000 0x08000000>;

        host1x = <&host1x>;

        /* video-encoding/decoding */
        mpe@54040000 {
                reg = <0x54040000 0x00040000>;
                interrupts = <0 68 0x04>;
        };

        /* video input */
        vi@54080000 {
                reg = <0x54080000 0x00040000>;
                interrupts = <0 69 0x04>;
        };

        /* EPP */
        epp@540c0000 {
                reg = <0x540c0000 0x00040000>;
                interrupts = <0 70 0x04>;
        };

        /* ISP */
        isp@54100000 {
                reg = <0x54100000 0x00040000>;
                interrupts = <0 71 0x04>;
        };

        /* 2D engine */
        gr2d@54140000 {
                reg = <0x54140000 0x00040000>;
                interrupts = <0 72 0x04>;
        };

        /* 3D engine */
        gr3d@54180000 {
                reg = <0x54180000 0x00040000>;
        };

        /* display controllers */
        disp1: dc@54200000 {
                compatible = "nvidia,tegra20-dc";
                reg = <0x54200000 0x00040000>;
                interrupts = <0 73 0x04>;
        };

        disp2: dc@54240000 {
                compatible = "nvidia,tegra20-dc";
                reg = <0x54240000 0x00040000>;
                interrupts = <0 74 0x04>;
        };

        /* outputs */
        lvds: rgb {
                compatible = "nvidia,tegra20-rgb";
        };

        hdmi: hdmi@54280000 {
                compatible = "nvidia,tegra20-hdmi";
                reg = <0x54280000 0x00040000>;
                interrupts = <0 75 0x04>;
        };

        tvo: tvo@542c0000 {
                compatible = "nvidia,tegra20-tvo";
                reg = <0x542c0000 0x00040000>;
                interrupts = <0 76 0x04>;
        };

        dsi: dsi@54300000 {
                compatible = "nvidia,tegra20-dsi";
                reg = <0x54300000 0x00040000>;
        };

        display-controllers = <&disp1 &disp2>;
        outputs = <&lvds &hdmi &tvo &dsi>;

        /* initial configuration */
        configuration {
                lvds {
                        display-controller = <&disp1>;
                        output = <&lvds>;
                };

                hdmi {
                        display-controller = <&disp2>;
                        output = <&hdmi>;
                };
        };
};

I added an additional node for the initial configuration so that the driver
knows which mapping to setup at boot. What I don't quite see yet is where to
attach EDID data or pass the phandle to the I2C controller for DDC/EDID
probing. The initial configuration is certainly not the right place. Perhaps
the outputs property should be made a node instead:

outputs {
        lvds_out {
                output = <&lvds>;
                edid = <&edid>;
        };

        hdmi_out {
                output = <&hdmi>;
                ddc = <&i2c2>;
        };
};

But then outputs should probably become something like connectors
instead and the initial configuration refers to the _out phandles.
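
Whatever the naming ends up being, the driver would presumably resolve such
links with of_parse_phandle(); a hedged sketch using the property names from
the example above (not a settled binding):

#include <linux/of.h>

static void parse_connector(struct device_node *connector)
{
        struct device_node *output, *ddc;

        output = of_parse_phandle(connector, "output", 0);
        ddc = of_parse_phandle(connector, "ddc", 0);

        /* ... look up the output device and DDC adapter behind the
         * nodes (hypothetical) ... */

        of_node_put(output);    /* of_node_put(NULL) is a no-op */
        of_node_put(ddc);
}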

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Arnd Bergmann wrote:
 On Wednesday 11 April 2012, Thierry Reding wrote:
  Daniel Vetter wrote:
   Well, you use the iommu api to map/unmap memory into the iommu for tegra,
   whereas usually device drivers just use the dma api to do that. The usual
   interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
   around. I'm just wondering why you've chosen this.
  
  I don't think this works on ARM. Maybe I'm not seeing the whole picture but
  judging by a quick look through the kernel tree there aren't any users that
  map DMA memory through an IOMMU.
 
 dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
 patches to make that work on ARM, hopefully going into v3.5, so you could
 use those.

I've looked at Marek's patches but I don't think they'll work for Tegra 2 or
Tegra 3. The corresponding iommu_map() functions only set one PTE, regardless
of the number of bytes passed to them. However, the Tegra TRM indicates that
mapping needs to be done on a per-page basis so contiguous regions cannot be
combined. I suppose the IOMMU driver would have to be fixed to program more
than a single page in that case.
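
A hedged sketch of the per-page programming described above: map a page
array into a contiguous IOVA range one PAGE_SIZE entry at a time, rather
than expecting a single iommu_map() call to cover the whole region (cleanup
of partial mappings omitted):

#include <linux/iommu.h>
#include <linux/mm.h>

static int map_pages(struct iommu_domain *domain, unsigned long iova,
                     struct page **pages, unsigned int count)
{
        unsigned int i;
        int err;

        for (i = 0; i < count; i++) {
                err = iommu_map(domain, iova + i * PAGE_SIZE,
                                page_to_phys(pages[i]), PAGE_SIZE,
                                IOMMU_READ | IOMMU_WRITE);
                if (err < 0)
                        return err;
        }

        return 0;
}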

Also this doesn't yet solve the vmap() problem that is needed for the kernel
virtual mapping. I did try using dma_alloc_writecombine(), but that only
works for chunks of 2 MB or smaller, unless I use init_consistent_dma_size()
during board setup, which isn't provided for in a DT setup. I couldn't find
a better alternative, but I admit I'm not very familiar with all the VM APIs.
Do you have any suggestions on how to solve this? Otherwise I'll try and dig
in some more.

Thierry



RE: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Marek Szyprowski
Hi Thierry,

On Thursday, April 12, 2012 9:18 AM Thierry Reding wrote:

 * Arnd Bergmann wrote:
  On Wednesday 11 April 2012, Thierry Reding wrote:
   Daniel Vetter wrote:
    Well, you use the iommu api to map/unmap memory into the iommu for tegra,
    whereas usually device drivers just use the dma api to do that. The usual
    interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
    around. I'm just wondering why you've chosen this.

   I don't think this works on ARM. Maybe I'm not seeing the whole picture but
   judging by a quick look through the kernel tree there aren't any users that
   map DMA memory through an IOMMU.
 
  dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
  patches to make that work on ARM, hopefully going into v3.5, so you could
  use those.
 
 I've looked at Marek's patches but I don't think they'll work for Tegra 2 or
 Tegra 3. The corresponding iommu_map() functions only set one PTE, regardless
 of the number of bytes passed to them. However, the Tegra TRM indicates that
 mapping needs to be done on a per-page basis so contiguous regions cannot be
 combined. I suppose the IOMMU driver would have to be fixed to program more
 than a single page in that case.

I assume you want to map a set of pages into a contiguous chunk in IO address
space. This can be done with dma_map_sg() once an IOMMU-aware implementation
has been assigned to the given device. The DMA-mapping implementation is able
to merge consecutive chunks of the scatterlist in the dma/io address space if
possible (i.e. there are no in-page offsets between the chunks). With my
implementation of IOMMU-aware dma-mapping you usually get a single DMA chunk
from the provided scatterlist.
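
A hedged sketch of that flow: build a scatterlist from a page array and let
an IOMMU-aware dma_map_sg() merge it into a single DMA chunk
(sg_alloc_table_from_pages() was only being introduced around this time, so
treat this as an illustration rather than settled API):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static dma_addr_t map_contiguous(struct device *dev, struct sg_table *sgt,
                                 struct page **pages, unsigned int count)
{
        int nents;

        if (sg_alloc_table_from_pages(sgt, pages, count, 0,
                                      (unsigned long)count << PAGE_SHIFT,
                                      GFP_KERNEL))
                return 0;

        nents = dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE);
        if (nents <= 0) {
                sg_free_table(sgt);
                return 0;
        }

        /* keep sgt around for dma_unmap_sg() later */
        return sg_dma_address(sgt->sgl);
}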

I know that this approach causes a lot of confusion at first look, but that's
how the dma-mapping API has been designed. The scatterlist-based approach has
some drawbacks - it is a bit oversized for most of the typical use cases for
gfx/multimedia buffers, but that's all we have now.

Scatter lists were initially designed for disk-based block IO operations,
hence the presence of the in-page offsets and lengths for each chunk. For
multimedia use cases providing an array of struct pages and asking
dma-mapping to map them into contiguous memory is probably all we need. I
wonder if introducing such new calls is a good idea. Arnd, what do you think?
It will definitely simplify the drivers and improve code understanding. On
the other hand it requires a significant amount of work in the dma-mapping
framework for all architectures, but that's not a big issue for me.

 Also this doesn't yet solve the vmap() problem that is needed for the kernel
 virtual mapping. I did try using dma_alloc_writecombine(), but that only
 works for chunks of 2 MB or smaller, unless I use init_consistent_dma_size()
 during board setup, which isn't provided for in a DT setup. I couldn't find
 a better alternative, but I admit I'm not very familiar with all the VM APIs.
 Do you have any suggestions on how to solve this? Otherwise I'll try and dig
 in some more.

Yes, I'm aware of this issue and I'm currently working on solving it. I hope
to use the standard vmalloc range for all coherent/writecombine allocations
and get rid of the custom 'consistent_dma' region entirely.

Best regards
-- 
Marek Szyprowski
Samsung Poland R&D Center




Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Sascha Hauer wrote:
 You might want to have a look at the sdrm patches I recently posted to
 dri-devel and arm Linux Kernel. Among other things they allow to
 register crtcs/connectors/encoders separately so that each of them can
 have its own representation in the devicetree. I haven't looked into
 devicetree support for DRM, but with or without devicetree the problem
 that we do not have a single PCI card for registering all DRM components
 is the same.

I'll do that. One interesting use-case that's been on my mind for some time
is if it would be possible to provide a CRTC via DRM that isn't part of the
SoC or DRM device but which can display a framebuffer prepared by the DRM
framework.

In other words I would like to use the Tegra hardware to render content into
a framebuffer (using potentially the 3D engine or HW accelerated video
decoding blocks) but display that framebuffer with a CRTC registered by a
different driver (perhaps provided by a PCIe or USB device).

I think such a setup would be possible if the CRTC registration can be
decoupled from the DRM driver. Perhaps sdrm even supports that already?

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Arnd Bergmann
On Thursday 12 April 2012, Marek Szyprowski wrote:
 Scatter lists were initially designed for disk-based block IO operations,
 hence the presence of the in-page offsets and lengths for each chunk. For
 multimedia use cases providing an array of struct pages and asking
 dma-mapping to map them into contiguous memory is probably all we need. I
 wonder if introducing such new calls is a good idea. Arnd, what do you
 think? It will definitely simplify the drivers and improve code
 understanding. On the other hand it requires a significant amount of work
 in the dma-mapping framework for all architectures, but that's not a big
 issue for me.

My feeling is that it's too much like the existing _sg version, so I wouldn't
add yet another variant. While having a simple page array is definitely
simpler and potentially faster, I think the API is already too complex
and we need to be very careful with new additions.

Arnd


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Marek Szyprowski wrote:
[...]
 We already have dma_map_page() and dma_map_single() which are very similar.
 Maybe adding dma_map_pages() won't be such a bad idea?

 If not, maybe we should provide some kind of helper function which converts
 a page array to a scatterlist and then maps it.

drm_prime_pages_to_sg() seems to do exactly that.
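
A hedged sketch of using that helper for the purpose discussed above; the
signature matches the prime patches in flight at the time:

#include <drm/drmP.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int map_gem_pages(struct device *dev, struct page **pages,
                         int nr_pages)
{
        /* convert the page array into a scatterlist ... */
        struct sg_table *sgt = drm_prime_pages_to_sg(pages, nr_pages);

        if (IS_ERR_OR_NULL(sgt))
                return -ENOMEM;

        /* ... and map it for the device */
        if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_TO_DEVICE))
                return -ENOMEM;

        return 0;
}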

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Sascha Hauer
On Wed, Apr 11, 2012 at 12:12:14PM -0600, Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
  This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
  currently has rudimentary GEM support and can run a console on the
  framebuffer as well as X using the xf86-video-modesetting driver.
  Only the RGB output is supported. Quite a lot of things still need
  to be worked out and there is a lot of room for cleanup.
 
 I'll let Jon Mayo comment on the actual driver implementation, since
 he's a lot more familiar with Tegra's display hardware. However, I have
 some general comments below.
 
   .../devicetree/bindings/gpu/drm/tegra.txt  |   24 +
   arch/arm/mach-tegra/board-dt-tegra20.c     |    3 +
   arch/arm/mach-tegra/tegra2_clocks.c        |    8 +-
   drivers/gpu/drm/Kconfig                    |    2 +
   drivers/gpu/drm/Makefile                   |    1 +
   drivers/gpu/drm/tegra/Kconfig              |   10 +
   drivers/gpu/drm/tegra/Makefile             |    5 +
   drivers/gpu/drm/tegra/tegra_drv.c          | 2241 ++++++++++++++++++++
   drivers/gpu/drm/tegra/tegra_drv.h          |  184 ++
   include/drm/tegra_drm.h                    |   44 +
 
 Splitting this patch into two, between arch/arm and drivers/gpu would be
 a good idea.
 
 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
 
 +   drm@54200000 {
 +           compatible = "nvidia,tegra20-drm";
 
 This doesn't seem right; there isn't a DRM hardware module on Tegra,
 since DRM is a Linux/software-specific term.
 
 I'd at least expect to see this compatible flag be renamed to something
 more like "nvidia,tegra20-dc" (dc == display controller).
 
 Since Tegra has two display controller modules (I believe identical?),
 and numerous other independent(?) blocks, I'd expect to see multiple
 nodes in device tree, one per hardware block, such that each block gets
 its own device and driver. That said, I'm not familiar enough with
 Tegra's display and graphics HW to know if this makes sense. Jon, what's
 your take here? The clock change below, and in particular the original
 code there that we use downstream, lends weight to my argument.
 
 +   reg = <0x54200000 0x00040000   /* display A */
 +          0x54240000 0x00040000   /* display B */
 +          0x58000000 0x02000000>; /* GART aperture */
 +   interrupts = <0 73 0x04   /* display A */
 +                 0 74 0x04>; /* display B */
  +
 +   lvds {
 +           type = "rgb";
 
 These sub-nodes probably want a "compatible" property rather than a
 "type" property.
 
 +           size = <345 194>;
 +
 +           default-mode {
 +                   pixel-clock = <61715000>;
 +                   vertical-refresh = <50>;
 +                   resolution = <1366 768>;
 +                   bits-per-pixel = <16>;
 +                   horizontal-timings = <4 136 2 36>;
 +                   vertical-timings = <2 4 21 10>;
 +           };
 +   };
  +   };
 
 I imagine that quite a bit of thought needs to be put into the output
 part of the binding in order to:
 
 * Model the outputs/connectors separately from display controllers.
 * Make sure that the basic infrastructure for representing an output is
 general enough to be extensible to all the kinds of outputs we support,
 not just the LVDS output.
 * We were wondering about putting an EDID into the DT to represent the
 display modes, so that all outputs had EDIDs rather than real monitors
 having EDIDs, and fixed internal displays having some other
 representation of capabilities.

You might want to have a look at the sdrm patches I recently posted to
dri-devel and arm Linux Kernel. Among other things they allow to
 register crtcs/connectors/encoders separately so that each of them can
have its own representation in the devicetree. I haven't looked into
devicetree support for DRM, but with or without devicetree the problem
that we do not have a single PCI card for registering all DRM components
is the same.

Sascha

-- 
Pengutronix e.K.   | |
Industrial Linux Solutions | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0|
Amtsgericht Hildesheim, HRA 2686   | Fax:   +49-5121-206917- |


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Alex Deucher
On Thu, Apr 12, 2012 at 5:33 AM, Thierry Reding
thierry.red...@avionic-design.de wrote:
 * Sascha Hauer wrote:
 You might want to have a look at the sdrm patches I recently posted to
 dri-devel and arm Linux Kernel. Among other things they allow to
 register crtcs/connectors/encoders separately so that each of them can
 have its own representation in the devicetree. I haven't looked into
 devicetree support for DRM, but with or without devicetree the problem
 that we do not have a single PCI card for registering all DRM components
 is the same.

 I'll do that. One interesting use-case that's been on my mind for some time
 is if it would be possible to provide a CRTC via DRM that isn't part of the
 SoC or DRM device but which can display a framebuffer prepared by the DRM
 framework.

 In other words I would like to use the Tegra hardware to render content into
 a framebuffer (using potentially the 3D engine or HW accelerated video
 decoding blocks) but display that framebuffer with a CRTC registered by a
 different driver (perhaps provided by a PCIe or USB device).

 I think such a setup would be possible if the CRTC registration can be
 decoupled from the DRM driver. Perhaps sdrm even supports that already?

You should be able to do something like that already with dma_buf and
the drm prime infrastructure.  There's even a drm driver for the udl
USB framebuffer devices.
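
A hedged sketch of that flow in terms of the core dma-buf API only (the
prime ioctls wrap roughly this underneath); error handling trimmed:

#include <linux/dma-buf.h>
#include <linux/err.h>

static struct sg_table *import_for_scanout(struct dma_buf *buf,
                                           struct device *scanout)
{
        /* attach the exported buffer to the scanout device ... */
        struct dma_buf_attachment *attach = dma_buf_attach(buf, scanout);

        if (IS_ERR(attach))
                return ERR_CAST(attach);

        /* ... and get a scatterlist that device can display from */
        return dma_buf_map_attachment(attach, DMA_TO_DEVICE);
}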

Alex


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Stephen Warren
On 04/12/2012 12:50 AM, Thierry Reding wrote:
 * Stephen Warren wrote:
 On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.
...
 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
...
 This doesn't seem right, and couples back to my assertion above that the
 two display controller modules probably deserve separate device objects,
 named e.g. tegradc.*.
 
 I think I understand where you're going with this. Does the following look
 more correct?
 
   disp1: dc@54200000 {
           compatible = "nvidia,tegra20-dc";
           reg = <0x54200000 0x00040000>;
           interrupts = <0 73 0x04>;
   };

   disp2: dc@54240000 {
           compatible = "nvidia,tegra20-dc";
           reg = <0x54240000 0x00040000>;
           interrupts = <0 74 0x04>;
   };

Those look good.

   drm {
           compatible = "nvidia,tegra20-drm";

I don't think having an explicit drm node is the right approach; drm
is after all a SW term and the DT should be describing HW. Having some
kind of top-level node almost certainly makes sense, but naming it
something related to Tegra display rather than drm would be appropriate.

   lvds {
           compatible = ...;
           dc = <&disp1>;
   };

Aren't the outputs separate HW blocks too, such that they would have
their own compatible/reg properties and their own drivers, and be
outside the top-level drm/display node?

I believe the mapping between the output this node represents and the
display controller (dc above) that it uses is not static; the
connectivity should be set up at runtime, and possibly dynamically
alterable via xrandr or equivalent.

   hdmi {
           compatible = ...;
           dc = <&disp2>;
   };
   };

 +static int tegra_drm_parse_dt(struct platform_device *pdev)
 +{
 ...
 +   pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
 +   if (!pdata)
 +   return -ENOMEM;
 ...
 +   dev->platform_data = pdata;

 I don't think you should assign to dev-platform_data. If you do, then I
 think the following could happen:

 * During first probe, the assignment above happens
 * Module is removed, hence device removed, hence dev->platform_data
 freed, but not zero'd out
 * Module is re-inserted, finds that dev->platform_data != NULL and
 proceeds to use it.
 
 Actually the code does zero out platform_data in tegra_drm_remove(). In fact
 I did test module unloading and reloading and it works properly. But it
 should probably be zeroed in case drm_platform_init() fails as well.

 Instead, the active platform data should probably be stored in a
 tegra_drm struct that's stored in the dev's private data.
 tegra_drm_probe() might then look more like:

 struct tegra_drm *tdev;

 tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
 tdev->pdata = pdev->dev.platform_data;
 if (!tdev->pdata)
         tdev->pdata = tegra_drm_parse_dt();
 if (!tdev->pdata)
         return -EINVAL;

 dev_set_drvdata(dev, tdev);

 This is safe, since probe() will never assume that dev_get_drvdata()
 might contain something valid before probe() sets it.
 
 I prefer my approach over storing the data in an extra field because the
 device platform_data field is where everybody would expect it. Furthermore
 this wouldn't be relevant if we decided not to support non-DT setups.

Drivers are expected to use pre-existing platform data, if provided.
This might happen in order to work around bugs in device tree content.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Thierry Reding
* Stephen Warren wrote:
 On 04/12/2012 12:50 AM, Thierry Reding wrote:
  drm {
          compatible = "nvidia,tegra20-drm";
 
 I don't think having an explicit drm node is the right approach; drm
 is after all a SW term and the DT should be describing HW. Having some
 kind of top-level node almost certainly makes sense, but naming it
 something related to Tegra display rather than drm would be appropriate.

In this case there really isn't a HW device that can be represented. But in
the end it's still the DRM driver that needs to bind to the device. However
the other graphics devices (MPE, VI/CSI, EPP, GR2D and GR3D) probably need
to be bound against as well.

Would it be possible for someone at NVIDIA to provide some more details about
what those other devices are? GR2D and GR3D seem obvious, MPE might be video
decoding, VI/CSI video input and camera interface? As to EPP I have no idea.

Maybe one solution would be to have a top-level DRM device with a register
map from 0x54000000 to 0x547fffff, which the TRM designates as host
registers. Then subnodes could be used for the subdevices.

  lvds {
          compatible = ...;
          dc = <&disp1>;
  };
 
 Aren't the outputs separate HW blocks too, such that they would have
 their own compatible/reg properties and their own drivers, and be
 outside the top-level drm/display node?

The RGB output is programmed via the display controller registers. For HDMI,
TVO and DSI there are indeed separate sets of registers in addition to the
display controller's. So perhaps for those more nodes would be required:

hdmi: hdmi@54280000 {
        compatible = "nvidia,tegra20-hdmi";
        reg = <0x54280000 0x00040000>;
};

And hook that up with the HDMI output node of the DRM node:

drm {
        hdmi {
                compatible = ...;
                connector = <&hdmi>;
                dc = <&disp2>;
        };
};

Maybe with this setup we no longer need the compatible property since it
will already be inherent in the connector property. There will have to be
special handling for the RGB output, which could be the default if the
connector property is missing.

 I believe the mapping between the output this node represents and the
 display controller (dc above) that it uses is not static; the
 connectivity should be set up at runtime, and possibly dynamically
 alterable via xrandr or equivalent.

I think the mapping is always static for a given board. There is no way to
switch an HDMI port to LVDS at runtime. But maybe I misunderstand what you're
saying.

  Instead, the active platform data should probably be stored in a
  tegra_drm struct that's stored in the dev's private data.
  tegra_drm_probe() might then look more like:
 
  struct tegra_drm *tdev;
 
  tdev = devm_kzalloc(&pdev->dev, sizeof(*tdev), GFP_KERNEL);
  tdev->pdata = pdev->dev.platform_data;
  if (!tdev->pdata)
          tdev->pdata = tegra_drm_parse_dt();
  if (!tdev->pdata)
          return -EINVAL;
 
  dev_set_drvdata(dev, tdev);
 
  This is safe, since probe() will never assume that dev_get_drvdata()
  might contain something valid before probe() sets it.
  
  I prefer my approach over storing the data in an extra field because the
  device platform_data field is where everybody would expect it. Furthermore
  this wouldn't be relevant if we decided not to support non-DT setups.
 
 Drivers are expected to use pre-existing platform data, if provided.
 This might happen in order to work around bugs in device tree content.

Okay I see. I'll have to store it in a separate field in the private
structure then.

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-12 Thread Lucas Stach
Am Mittwoch, den 11.04.2012, 15:18 + schrieb Arnd Bergmann:
 On Wednesday 11 April 2012, Thierry Reding wrote:
* Daniel Vetter wrote:
   On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
* Daniel Vetter wrote:
 On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
  This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
  currently has rudimentary GEM support and can run a console on the
  framebuffer as well as X using the xf86-video-modesetting driver.
  Only the RGB output is supported. Quite a lot of things still need
  to be worked out and there is a lot of room for cleanup.
 
 Indeed, after a quick look there are tons of functions that are just stubs
 ;-) One thing I wonder though is why you directly use the iommu api and
 not wrap it up into dma_map? Is arm infrastructure just not there yet or
 do you plan to tightly integrate the tegra drm with the iommu (e.g. for
 process space switching or similarly funky stuff)?

I'm not sure I know what you are referring to. Looking for all users of
iommu_map() doesn't turn up anything related to dma_map. Can you point me in
the right direction?

   Well, you use the iommu api to map/unmap memory into the iommu for tegra,
   whereas usually device drivers just use the dma api to do that. The usual
   interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
   around. I'm just wondering why you've chosen this.
  
  I don't think this works on ARM. Maybe I'm not seeing the whole picture but
  judging by a quick look through the kernel tree there aren't any users that
  map DMA memory through an IOMMU.
 
 
 dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
 patches to make that work on ARM, hopefully going into v3.5, so you could
 use those.

Just jumping in here to make sure everyone understands the limitations
of the Tegra 2 GART IOMMU we are talking about here. It has no isolation
capabilities and a really small remapping window of 32MB. So it's
impossible to remap every buffer used by the graphics engines. The only
sane way to handle this is to set aside a chunk of stolen system memory
as VRAM and let a memory manager like TTM handle the allocation of
linear regions and GART mappings. This means a tighter integration of
the DRM driver and the IOMMU, where I think that using the IOMMU API
directly and completely controlling the GART from one driver is the
right way to go for a number of reasons, where my biggest concern is
that we can't implement sane handling of running out of remapping space
when we go through
the dma_map API.

It's too late for me to go into the details now, but I wanted to make it
clear that I think that using the IOMMU only and exclusively from the
DRM driver with a high level of tie in is the way to go. If you want to
know more details I'm available to discuss this matter in the next days.

-- Lucas
 


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Alan Cox
 Hm, in that case it looks like your iommu works more like the gtt on intel chips

Don't overgeneralize there - on the GMA500/600 the GTT doesn't allow CPU
side access of the GTT map (ie you can't use it to linearise pages for
CPU view) and the 3600 is even stranger

Alan


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Thierry Reding
* Daniel Vetter wrote:
 On Wed, Apr 11, 2012 at 04:11:08PM +0200, Thierry Reding wrote:
  * Daniel Vetter wrote:
   On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
* Daniel Vetter wrote:
 On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
  This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
  currently has rudimentary GEM support and can run a console on the
  framebuffer as well as X using the xf86-video-modesetting driver.
  Only the RGB output is supported. Quite a lot of things still need
  to be worked out and there is a lot of room for cleanup.
 
 Indeed, after a quick look there are tons of functions that are just stubs
 ;-) One thing I wonder though is why you directly use the iommu api and
 not wrap it up into dma_map? Is arm infrastructure just not there yet or
 do you plan to tightly integrate the tegra drm with the iommu (e.g. for
 process space switching or similarly funky stuff)?

I'm not sure I know what you are referring to. Looking for all users of
iommu_map() doesn't turn up anything related to dma_map. Can you point me in
the right direction?

   Well, you use the iommu api to map/unmap memory into the iommu for tegra,
   whereas usually device drivers just use the dma api to do that. The usual
   interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
   around. I'm just wondering why you've chosen this.
  
  I don't think this works on ARM. Maybe I'm not seeing the whole picture but
  judging by a quick look through the kernel tree there aren't any users that
  map DMA memory through an IOMMU.
  
  Maybe your question is answered by my reply to Alan's comment. The mapping
  is actually done to get a linear view for the display controller which
  doesn't support SG transfers. The kernel and user-space already have virtual
  linear buffers.
 
 Hm, in that case it looks like your iommu works more like the gtt on intel
 chips and less like the iommu on intel platforms (which we access through
 the dma_map api).

Yes, it's very much like the GTT on Intel chips. In fact I've been using the
gma500 driver as a source for inspiration. Wikipedia confirms that GTT and
GART are synonymous.

 I wonder whether that will end up in some layering fun together with
 dma_buf, which conceptually is at the same level as the dma api, which
 usually uses an underlying iommu exposed with the iommu api you're using.

That's odd. The only users of the IOMMU API that I can find in the kernel
tree are in drivers/remoteproc and drivers/media/video/omap3isp. And omap3isp
doesn't do any actual mapping at a quick glance. Can you point me to where
this is hooked up with the Intel IOMMU?

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Thierry Reding
* Alan Cox wrote:
  Maybe your question is answered by my reply to Alan's comment. The mapping
  is actually done to get a linear view for the display controller which
  doesn't support SG transfers. The kernel and user-space already have virtual
  linear buffers.
 
 The framebuffer currently needs a physically contiguous map for the
 console devices. Well you could vmap them but that is pretty hideous on a
 32bit platform with a 32bit 1080p display plugged into it!

Heh, vmap() is exactly what I do. =) Would you mind explaining why exactly it
is hideous?

I'll have to investigate what an appropriate alternative would look like.
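
For reference, a minimal sketch of the vmap() use under discussion, giving
the kernel a linear, write-combined view of the framebuffer pages:

#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *map_framebuffer(struct page **pages, unsigned int count)
{
        /* one contiguous kernel virtual mapping over all fb pages */
        return vmap(pages, count, VM_MAP,
                    pgprot_writecombine(PAGE_KERNEL));
}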

Thierry



Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Alan Cox
 Heh, vmap() is exactly what I do. =) Would you mind explaining why exactly it
 is hideous?

On x86 we don't have a vast amount of address space available for virtual
remappings and the framebuffer then eats over 8MB of it.

The ideal case is that the fb layer can be taught to do page/offset
addressing nicely. At that point we'd be able to attach the text consoles
to arbitrary GEM objects, which means we can do really cool stuff.

Alan


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Arnd Bergmann
On Wednesday 11 April 2012, Thierry Reding wrote:
   * Daniel Vetter wrote:
  On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
   * Daniel Vetter wrote:
On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.

Indeed, after a quick look there are tons of functions that are just stubs
;-) One thing I wonder though is why you directly use the iommu api and
not wrap it up into dma_map? Is arm infrastructure just not there yet or
do you plan to tightly integrate the tegra drm with the iommu (e.g. for
process space switching or similarly funky stuff)?

   I'm not sure I know what you are referring to. Looking for all users of
   iommu_map() doesn't turn up anything related to dma_map. Can you point me in
   the right direction?

  Well, you use the iommu api to map/unmap memory into the iommu for tegra,
  whereas usually device drivers just use the dma api to do that. The usual
  interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
  around. I'm just wondering why you've chosen this.
 
 I don't think this works on ARM. Maybe I'm not seeing the whole picture but
 judging by a quick look through the kernel tree there aren't any users that
 map DMA memory through an IOMMU.


dma_map_sg is certainly the right interface to use, and Marek Szyprowski has
patches to make that work on ARM, hopefully going into v3.5, so you could
use those.

Arnd


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Daniel Vetter
On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.

Indeed, after a quick look there are tons of functions that are just stubs
;-) One thing I wonder though is why you directly use the iommu api and
not wrap it up into dma_map? Is arm infrastructure just not there yet or
do you plan to tightly integrate the tegra drm with the iommu (e.g. for
process space switching or similarly funky stuff)?

Yours, Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Daniel Vetter
On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
 * Daniel Vetter wrote:
  On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
   This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
   currently has rudimentary GEM support and can run a console on the
   framebuffer as well as X using the xf86-video-modesetting driver.
   Only the RGB output is supported. Quite a lot of things still need
   to be worked out and there is a lot of room for cleanup.
  
  Indeed, after a quick look there are tons of functions that are just stubs
  ;-) One thing I wonder though is why you directly use the iommu api and
  not wrap it up into dma_map? Is arm infrastructure just not there yet or
  do you plan to tightly integrate the tegra drm with the iommu (e.g. for
  process space switching or similarly funky stuff)?
 
 I'm not sure I know what you are referring to. Looking for all users of
 iommu_map() doesn't turn up anything related to dma_map. Can you point me in
 the right direction?

Well, you use the iommu api to map/unmap memory into the iommu for tegra,
whereas usually device drivers just use the dma api to do that. The usual
interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
around. I'm just wondering why you've chosen this.
-Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Daniel Vetter
On Wed, Apr 11, 2012 at 04:11:08PM +0200, Thierry Reding wrote:
 * Daniel Vetter wrote:
  On Wed, Apr 11, 2012 at 03:23:26PM +0200, Thierry Reding wrote:
   * Daniel Vetter wrote:
On Wed, Apr 11, 2012 at 02:10:30PM +0200, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.

Indeed, after a quick look there are tons of functions that are just stubs
;-) One thing I wonder though is why you directly use the iommu api and
not wrap it up into dma_map? Is arm infrastructure just not there yet or
do you plan to tightly integrate the tegra drm with the iommu (e.g. for
process space switching or similarly funky stuff)?

   I'm not sure I know what you are referring to. Looking for all users of
   iommu_map() doesn't turn up anything related to dma_map. Can you point me in
   the right direction?

  Well, you use the iommu api to map/unmap memory into the iommu for tegra,
  whereas usually device drivers just use the dma api to do that. The usual
  interface is dma_map_sg/dma_unmap_sg, but there are quite a few variants
  around. I'm just wondering why you've chosen this.
 
 I don't think this works on ARM. Maybe I'm not seeing the whole picture but
 judging by a quick look through the kernel tree there aren't any users that
 map DMA memory through an IOMMU.
 
 Maybe your question is answered by my reply to Alan's comment. The mapping
 is actually done to get a linear view for the display controller which
 doesn't support SG transfers. The kernel and user-space already have virtual
 linear buffers.

Hm, in that case it looks like your iommu works more like the gtt on intel
chips and less like the iommu on intel platforms (which we access through
the dma_map api). I wonder whether that will end up in some layering fun
together with dma_buf, which conceptually is at the same level as the dma
api, which usually uses an underlying iommu exposed with the iommu api
you're using.

 Perhaps I'm being dense?

Doesn't sound like that over here ;-)
-Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Daniel Vetter
On Wed, Apr 11, 2012 at 03:43:09PM +0100, Alan Cox wrote:
  Hm, in that case it looks like your iommu works more like the gtt on intel chips
 
 Don't overgeneralize there - on the GMA500/600 the GTT doesn't allow CPU
 side access of the GTT map (ie you can't use it to linearise pages for
 CPU view) and the 3600 is even stranger

Sorry, I really try to totally ignore everything related to gma500 ;-)
-Daniel
-- 
Daniel Vetter
Mail: dan...@ffwll.ch
Mobile: +41 (0)79 365 57 48


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.

I'll let Jon Mayo comment on the actual driver implementation, since
he's a lot more familiar with Tegra's display hardware. However, I have
some general comments below.

  .../devicetree/bindings/gpu/drm/tegra.txt  |   24 +
  arch/arm/mach-tegra/board-dt-tegra20.c     |    3 +
  arch/arm/mach-tegra/tegra2_clocks.c        |    8 +-
  drivers/gpu/drm/Kconfig                    |    2 +
  drivers/gpu/drm/Makefile                   |    1 +
  drivers/gpu/drm/tegra/Kconfig              |   10 +
  drivers/gpu/drm/tegra/Makefile             |    5 +
  drivers/gpu/drm/tegra/tegra_drv.c          | 2241 ++++++++++++++++++++
  drivers/gpu/drm/tegra/tegra_drv.h          |  184 ++
  include/drm/tegra_drm.h                    |   44 +

Splitting this patch into two, between arch/arm and drivers/gpu would be
a good idea.

 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

 +	drm@54200000 {
 +		compatible = "nvidia,tegra20-drm";

This doesn't seem right; there isn't a DRM hardware module on Tegra,
since DRM is a Linux/software-specific term.

I'd at least expect to see this compatible flag be renamed to something
more like "nvidia,tegra20-dc" (dc == display controller).

Since Tegra has two display controller modules (I believe identical?),
and numerous other independent(?) blocks, I'd expect to see multiple
nodes in device tree, one per hardware block, such that each block gets
its own device and driver. That said, I'm not familiar enough with
Tegra's display and graphics HW to know if this makes sense. Jon, what's
your take here? The clock change below, and in particular the original
code there that we use downstream, lends weight to my argument.

 +	reg = <0x54200000 0x00040000   /* display A */
 +	       0x54240000 0x00040000   /* display B */
 +	       0x58000000 0x02000000>; /* GART aperture */
 +	interrupts = <0 73 0x04   /* display A */
 +	              0 74 0x04>; /* display B */
 +
 +	lvds {
 +		type = "rgb";

These sub-nodes probably want a "compatible" property rather than a
"type" property.

 +		size = <345 194>;
 +
 +		default-mode {
 +			pixel-clock = <61715000>;
 +			vertical-refresh = <50>;
 +			resolution = <1366 768>;
 +			bits-per-pixel = <16>;
 +			horizontal-timings = <4 136 2 36>;
 +			vertical-timings = <2 4 21 10>;
 +		};
 +	};

I imagine that quite a bit of thought needs to be put into the output
part of the binding in order to:

* Model the outputs/connectors separately from display controllers.
* Make sure that the basic infrastructure for representing an output is
general enough to be extensible to all the kinds of outputs we support,
not just the LVDS output.
* We were wondering about putting an EDID into the DT to represent the
display modes, so that all outputs had EDIDs rather than real monitors
having EDIDs, and fixed internal displays having some other
representation of capabilities.

I'm hoping that Jon will drive this.

 diff --git a/arch/arm/mach-tegra/tegra2_clocks.c b/arch/arm/mach-tegra/tegra2_clocks.c

 -	PERIPH_CLK("disp1",	"tegradc.0",	NULL,	27,	0x138,	600000000, mux_pllp_plld_pllc_clkm,	MUX), /* scales with voltage and process_id */
 -	PERIPH_CLK("disp2",	"tegradc.1",	NULL,	26,	0x13c,	600000000, mux_pllp_plld_pllc_clkm,	MUX), /* scales with voltage and process_id */
 +	PERIPH_CLK("disp1",	"tegra-drm",	NULL,	27,	0x138,	600000000, mux_pllp_plld_pllc_clkm,	MUX), /* scales with voltage and process_id */
 +	PERIPH_CLK("disp2",	"tegra-drm",	NULL,	26,	0x13c,	600000000, mux_pllp_plld_pllc_clkm,	MUX), /* scales with voltage and process_id */

This doesn't seem right, and couples back to my assertion above that the
two display controller modules probably deserve separate device objects,
named e.g. tegradc.*.

 diff --git a/drivers/gpu/drm/tegra/Kconfig b/drivers/gpu/drm/tegra/Kconfig
 new file mode 100644
 index 000..f3382c9
 --- /dev/null
 +++ b/drivers/gpu/drm/tegra/Kconfig
 @@ -0,0 +1,10 @@
 +config DRM_TEGRA
 +	tristate "NVIDIA Tegra"
 +	depends on DRM && ARCH_TEGRA

Jon, do you think we'll end up eventually having a unified