RE: [alsa-devel] [PATCH v3] pass ELD to HDMI/DP audio driver

2011-08-05 Thread Stephen Warren
Wu Fengguang wrote at Friday, August 05, 2011 6:50 AM:
 On Fri, Aug 05, 2011 at 02:03:41AM +0800, Keith Packard wrote:
  On Thu, 4 Aug 2011 17:40:24 +0800, Wu Fengguang fengguang...@intel.com 
  wrote:
...
   You may wonder why the mode parameter is needed in intel_write_eld().
   This is because the ELD field aud_synch_delay (ie. A/V sync delay) may
   have different values in progressive/interleaved display modes.
 
  Ok, so you can't write ELD data until the display is active, which
  happens at mode_set time.
 
  Do you need to provide ELD when the display is inactive? Is this only to
  enable audio output when the display is not on? In that case, we will
 
 Good questions!  In general the audio functionalities should not
 depend on the display activeness. There are even audio-only HDMI
 devices. So I'll need to make intel_write_eld() work even without
 information about the current display mode.

Is there really such a thing as an audio-only HDMI signal, though? The
audio data is transferred inside the blanking regions of the HDMI video
signal, which implies that there needs to be some video content (even if
the pixel data is e.g. hard-coded black rather than being scanned out from
a framebuffer) so as to define where those blanking regions are within the
signal.




Re: [RFC 2/4] iommu: tegra/gart: Add device tree support

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit adds device tree support for the GART hardware available on
 NVIDIA Tegra 20 SoCs.
 
 Signed-off-by: Thierry Reding thierry.red...@avionic-design.de
 ---
  arch/arm/boot/dts/tegra20.dtsi |6 ++
  arch/arm/mach-tegra/board-dt-tegra20.c |1 +
  drivers/iommu/tegra-gart.c |   10 ++
  3 files changed, 17 insertions(+)

I think I'd prefer at least the tegra20.dtsi change split out into a
separate patch so I can queue it in a dt topic branch in the Tegra repo.

It might be a good idea to split this into two on an arch/arm vs.
drivers/iommu boundary. Looking at Olof's branches for 3.4, that split
is what happened there.

Finally, there should be a binding documentation file in
Documentation/devicetree/bindings in order to specify the number of reg
property entries needed, and their meaning, since there's more than one
(even though you did comment them nicely in the .dtsi file).
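Something along these lines would do, as a purely illustrative sketch of
such a binding document (the final property list obviously needs to match
what the driver actually parses):

NVIDIA Tegra 20 GART (Graphics Address Relocation Table)

Required properties:
- compatible : "nvidia,tegra20-gart"
- reg : Should contain two register ranges:
  - the GART's own register space
  - the GART aperture, i.e. the bus address range that the GART remaps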


Re: [RFC 3/4] drm: fixed: Add dfixed_frac() macro

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit is taken from the Chromium tree and was originally written
 by Robert Morell.

Maybe just cherry-pick it from there? That way, the git authorship will
show up as Robert.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-11 Thread Stephen Warren
On 04/11/2012 06:10 AM, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver.
 Only the RGB output is supported. Quite a lot of things still need
 to be worked out and there is a lot of room for cleanup.

I'll let Jon Mayo comment on the actual driver implementation, since
he's a lot more familiar with Tegra's display hardware. However, I have
some general comments below.

  .../devicetree/bindings/gpu/drm/tegra.txt  |   24 +
  arch/arm/mach-tegra/board-dt-tegra20.c |3 +
  arch/arm/mach-tegra/tegra2_clocks.c|8 +-
  drivers/gpu/drm/Kconfig|2 +
  drivers/gpu/drm/Makefile   |1 +
  drivers/gpu/drm/tegra/Kconfig  |   10 +
  drivers/gpu/drm/tegra/Makefile |5 +
  drivers/gpu/drm/tegra/tegra_drv.c  | 2241 
 
  drivers/gpu/drm/tegra/tegra_drv.h  |  184 ++
  include/drm/tegra_drm.h|   44 +

Splitting this patch into two, between arch/arm and drivers/gpu would be
a good idea.

 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
 b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

 + drm@5420 {
 + compatible = nvidia,tegra20-drm;

This doesn't seem right; there isn't a DRM hardware module on Tegra,
since DRM is a Linux/software-specific term.

I'd at least expect to see this compatible flag be renamed to something
more like nvidia,tegra20-dc (dc==display controller).

Since Tegra has two display controller modules (I believe identical?),
and numerous other independent(?) blocks, I'd expect to see multiple
nodes in device tree, one per hardware block, such that each block gets
its own device and driver. That said, I'm not familiar enough with
Tegra's display and graphics HW to know if this makes sense. Jon, what's
your take here? The clock change below, and in particular the original
code there that we use downstream, lends weight to my argument.

 + reg =  0x5420 0x0004/* display A */
 + 0x5424 0x0004/* display B */
 + 0x5800 0x0200 ; /* GART aperture */
 + interrupts =  0 73 0x04/* display A */
 +0 74 0x04 ; /* display B */
 +
 + lvds {
 + type = rgb;

These sub-nodes probably want a compatible property rather than a
type property.

 + size = 345 194;
 +
 + default-mode {
 + pixel-clock = 61715000;
 + vertical-refresh = 50;
 + resolution = 1366 768;
 + bits-per-pixel = 16;
 + horizontal-timings = 4 136 2 36;
 + vertical-timings = 2 4 21 10;
 + };
 + };

I imagine that quite a bit of thought needs to be put into the output
part of the binding in order to:

* Model the outputs/connectors separately from display controllers.
* Make sure that the basic infrastructure for representing an output is
general enough to be extensible to all the kinds of outputs we support,
not just the LVDS output.
* We were also wondering about putting an EDID into the DT to represent
the display modes, so that all outputs would have EDIDs, rather than
only real monitors having EDIDs while fixed internal displays use some
other representation of their capabilities.

I'm hoping that Jon will drive this.

 diff --git a/arch/arm/mach-tegra/tegra2_clocks.c 
 b/arch/arm/mach-tegra/tegra2_clocks.c

 - PERIPH_CLK(disp1, tegradc.0,NULL,   27, 0x138,  
 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and 
 process_id */
 - PERIPH_CLK(disp2, tegradc.1,NULL,   26, 0x13c,  
 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and 
 process_id */
 + PERIPH_CLK(disp1, tegra-drm,NULL,   27, 0x138,  
 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and 
 process_id */
 + PERIPH_CLK(disp2, tegra-drm,NULL,   26, 0x13c,  
 6, mux_pllp_plld_pllc_clkm, MUX), /* scales with voltage and 
 process_id */

This doesn't seem right, and couples back to my assertion above that the
two display controller modules probably deserve separate device objects,
named e.g. tegradc.*.

 diff --git a/drivers/gpu/drm/tegra/Kconfig b/drivers/gpu/drm/tegra/Kconfig
 new file mode 100644
 index 000..f3382c9
 --- /dev/null
 +++ b/drivers/gpu/drm/tegra/Kconfig
 @@ -0,0 +1,10 @@
 +config DRM_TEGRA
 + tristate NVIDIA Tegra
 + depends on DRM  ARCH_TEGRA

Jon, do you think we'll end up eventually having a unified

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-14 Thread Stephen Warren
On 04/13/2012 03:14 AM, Thierry Reding wrote:
 * Stephen Warren wrote:
 On 04/12/2012 11:44 AM, Thierry Reding wrote:
 [...]
 And given that, I don't think we should name the node after some
 OS-specific software concept. Device tree is intended to model hardware.
 [...]
 Maybe one solution would be to have a top-level DRM device with a register
 map from 0x5400 to 0x547f, which the TRM designates as host
 registers. Then subnodes could be used for the subdevices.

 Ah yes, just what I was thinking above:-)
 
 I came up with the following:
 
   /* host1x */
   host1x : host1x@5000 {
   reg = 0x5000 0x00024000;
   interrupts = 0 64 0x04   /* cop syncpt */
 0 65 0x04   /* mpcore syncpt */
 0 66 0x04   /* cop general */
 0 67 0x04; /* mpcore general */
   };
 
   /* graphics host */
   graphics@5400 {
   compatible = nvidia,tegra20-graphics;
 
   #address-cells = 1;
   #size-cells = 1;
   ranges = 0 0x5400 0x0800;
 
   host1x = host1x;
 
   /* video-encoding/decoding */
   mpe@5404 {
   reg = 0x5404 0x0004;
   interrupts = 0 68 0x04;
   };
... [a bunch of nodes for graphics-related HW modules]

That all looks reasonable.

   display-controllers = disp1 disp2;
   outputs = lvds hdmi tvo dsi;

I don't think you need both the child nodes and those two properties.

In other words, I think you either want:

graphics@5400 {
... a bunch of child nodes
};

or you want:

disp1 : dc@5420 {
...
};
disp2 : dc@5424 {
...
};
... all the other graphics nodes

graphics@5400 {
display-controllers = <&disp1 &disp2>;
outputs = <&lvds &hdmi &tvo &dsi>;
};

In the former case, presumably the drivers for the child nodes would
make some API call into the parent node and just register themselves
directly as a certain type of driver, so avoiding the
display-controllers/outputs properties.

   /* initial configuration */
   configuration {
   lvds {
   display-controller = disp1;
   output = lvds;
   };
 
   hdmi {
   display-controller = disp2;
   output = hdmi;
   };
   };
   };
 
 I added an additional node for the initial configuration so that the driver
 knows which mapping to setup at boot.

Isn't that kind of thing usually set up by the video= KMS-related kernel
command-line option? See Documentation/fb/modedb.txt. Again here, I
think the actual display controllers would be allocated to whichever
outputs get used on a first-come, first-served basis, so there's no need
for the display-controller property above either way.

 What I don't quite see yet is where to
 attach EDID data or pass the phandle to the I2C controller for DDC/EDID
 probing.

I think there's a third node type.

1) Nodes for the display controllers in Tegra (also known as CRTCs, I
believe)

2) Nodes for the video outputs from the Tegra chips (what you have as
outputs above)

3) Connectors, which aren't represented in your DT above yet.

I think you need to add additional nodes for (3). These are where you'd
represent all the EDID/I2C/... stuff. Then, the nodes for the outputs
will point at the nodes for the connectors using phandles, or vice-versa.
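For illustration, such connector nodes might look something like the
following, re-using the property names floated earlier in the thread;
this is just a sketch, not an agreed binding, and "panel.edid" is a
made-up file name:

connectors {
        #address-cells = <1>;
        #size-cells = <0>;

        lvds_connector: connector@0 {
                reg = <0>;
                edid = /incbin/("panel.edid");  /* fixed internal panel */
                output = <&lvds>;               /* phandle to the LVDS output */
        };

        hdmi_connector: connector@1 {
                reg = <1>;
                ddc = <&i2c2>;                  /* phandle to the DDC I2C adapter */
                output = <&hdmi>;               /* phandle to the HDMI output */
        };
};

The output phandles could equally well point the other way (outputs
referring to connectors); the point is just that (2) and (3) are linked
explicitly.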

(IIRC, although I didn't look at them in detail, the sdrm patches
recently mentioned something about splitting up the various object types
in DRM and allowing them to be represented separately, and I vaguely
recall something along the lines of the types I mention above. I might
expect to have a 1:1 mapping between the DRM object types and DT nodes.)

 The initial configuration is certainly not the right place. Perhaps
 the outputs property should be made a node instead:
 
   outputs {
   lvds_out {
   output = lvds;
   edid = edid;
   };
 
   hdmi_out {
   output = hdmi;
   ddc = i2c2;
   };
   };
 
 But then outputs should probably become something like connectors
 instead and the initial configuration refers to the _out phandles.


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-17 Thread Stephen Warren
On 04/15/2012 02:39 AM, Thierry Reding wrote:
 * Stephen Warren wrote:
 On 04/13/2012 03:14 AM, Thierry Reding wrote:
 display-controllers = disp1 disp2;
 outputs = lvds hdmi tvo dsi;

 I don't think you need both the child nodes and those two properties.

 In other words, I think you either want:

  graphics@5400 {
  ... a bunch of child nodes
  };

 or you want:

  disp1 : dc@5420 {
  ...
  };
  disp2 : dc@5424 {
  ...
  };
  ... all the other graphics nodes

  graphics@5400 {
  display-controllers = disp1 disp2;
  outputs = lvds hdmi tvo dsi;
  };

 In the former case, presumably the drivers for the child nodes would
 make some API call into the parent node and just register themselves
 directly as a certain type of driver, so avoiding the
 display-controllers/outputs properties.
 
 I think I like the former better. The way I understand it the children of the
 graphics node will have to be registered explicitly by the DRM driver because
 of_platform_populate() doesn't work recursively. That would ensure that the
 DRM driver can setup the CRTC and output registries before the other devices
 call back into the DRM to register themselves.

Yes, with the first way, the DRM driver will have to call
of_platform_populate() recursively to make this work.

The thing here is that the device tree should model hardware, not be
designed purely to match the device registration needs of the DRM
driver. I'm not sure that it's correct to model all those devices as
children of the top-level graphics object; I /think/ all the devices are
flat on a single bus, and hence not children of each other. That said,
I guess having the nodes as children isn't too far off from how the HW
is designed (even if the register accesses aren't on a child bus, the
modules are at least logically grouped together under one umbrella), so
I wouldn't push back on the first option above, which you prefer.

 /* initial configuration */
 configuration {
 lvds {
 display-controller = disp1;
 output = lvds;
 };

 hdmi {
 display-controller = disp2;
 output = hdmi;
 };
 };
 };

 I added an additional node for the initial configuration so that the driver
 knows which mapping to setup at boot.

 Isn't that kind of thing usually set up by the video= KMS-related kernel
 command-line option? See Documentation/fb/modedb.txt. Again here, I
 think the actual display controllers would be allocated to whichever
 outputs get used on a first-come first-serve based, so no need for the
 display-controller property above either way.
 
 Boards should still be able to boot and display a console on the standard
 output even if the user doesn't provide a video= option. Shouldn't there be a
 way for a board DTS to specify what the default (or even allowed) connections
 are?

Why wouldn't the default be to light up all outputs that have an
attached display, or to follow an algorithm something like:

* If internal LCD is present, use that
* Else, if HDMI display plugged in, use that
...

 Evaluation hardware like the Harmony might have LVDS, HDMI and VGA connectors
 to provide for a wide range of use cases. The Plutux for instance has only an
 HDMI connector and the Medcom has only LVDS. For the Medcom it would be quite
 confusing for people to suddenly see an HDMI-1 connector pop up f.e. in
 xrandr. It would be equally useless for the Plutux to show up as supporting
 an LVDS or VGA connector.

So the device tree for those devices would disable (or not include) the
connectors that were not present on the board.
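For example, a board like the Plutux, which only has HDMI, would only
enable what it actually has; roughly like this in its board .dts (the
labels are hypothetical and assumed to be defined in the SoC .dtsi):

&lvds {
        status = "disabled";
};

&hdmi {
        status = "okay";
};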

...
 I see. Maybe this could be used for board-specific configuration? For
 example, the Plutux could have something like this:
 
   connectors {
   hdmi {
   output = hdmi;
   ddc = i2c2;
   };
   };
 
 The Medcom could have:
 
   connectors {
   lvds {
   output = lvds;
   edid = edid;
   };
   };
 
 While Harmony could be more generic and provide more outputs:
 
   connectors {
   lvds {
   output = lvds;
   ddc = i2c1;
   };
 
   vga {
   /* which output is used for VGA? */
   output = ...;
   ddc = i2c2;
 
   hdmi {
   output = hdmi;
   ddc = i2c3;
   };
   };

That looks like a reasonable start.

 Has there been any discussion as to how EDID data would best be represented
 in DT? Should it just be a binary blob or rather some textual representation?

I think

Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-17 Thread Stephen Warren
On 04/16/2012 12:48 PM, Thierry Reding wrote:
 * Stephen Warren wrote:
...
 Has there been any discussion as to how EDID data would best be represented
 in DT? Should it just be a binary blob or rather some textual 
 representation?

 I think a binary blob makes sense - that's the exact same format it'd
 have if read over the DDC I2C bus.
 
 DTC has /incbin/ for that. Is arch/arm/boot/dts still the correct place for
 EDID blobs? I could add tegra-medcom.edid if that's okay.

As far as I know, yes.

Perhaps we'll want to start putting stuff in SoC-specific
sub-directories given the number of files we'll end up with here
(irrespective of EDID etc.), but I haven't seen any move towards that yet.
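For example, a board .dts could pull the blob in with dtc's /incbin/
directive; a hedged sketch, assuming a hypothetical edid property on a
connector node labelled lvds_connector elsewhere:

/* arch/arm/boot/dts/tegra-medcom.dts */
&lvds_connector {
        edid = /incbin/("tegra-medcom.edid");
};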


Re: [RFC 4/4] drm: Add NVIDIA Tegra support

2012-04-17 Thread Stephen Warren
On 04/16/2012 01:03 PM, Thierry Reding wrote:
...
 I've been looking about for tools to generate EDID data but didn't find
 anything useful. Does anyone know of any tool that's more convenient than
 manually filling a struct edid and writing that to a file?

Sorry, no.


Re: [RFC v2 3/5] i2c: Add of_i2c_get_adapter() function

2012-04-26 Thread Stephen Warren
On 04/25/2012 03:45 AM, Thierry Reding wrote:
 This function resolves an OF device node to an I2C adapter registered
 with the I2C core.

I think this is doing the same thing as a patch I posted recently:
http://www.spinics.net/lists/linux-i2c/msg07808.html

What's the advantage of one way over the other?


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-03 Thread Stephen Warren
On 04/25/2012 03:45 AM, Thierry Reding wrote:
 This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
 currently has rudimentary GEM support and can run a console on the
 framebuffer as well as X using the xf86-video-modesetting driver. Only
 the RGB output is supported.
 
 HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
 quite work. EDID data can be retrieved but the output doesn't properly
 activate the connected TV.
 
 The DSI and TVO outputs and the HOST1X driver are just stubs that setup
 the corresponding resources but don't do anything useful yet.

I'm mainly going to comment on the device tree bindings here; hopefully
Jon and Terje can chime in on the code itself.

 diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
 b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

 +Example:
 +
 +/memreserve/ 0x0e00 0x0200;
 +
 +...
 +
 +/ {
 + ...
 +
 + /* host1x */
 + host1x: host1x@5000 {
 + compatible = nvidia,tegra20-host1x;
 + reg = 0x5000 0x00024000;
 + interrupts = 0 64 0x04   /* cop syncpt */
 +   0 65 0x04   /* mpcore syncpt */
 +   0 66 0x04   /* cop general */
 +   0 67 0x04; /* mpcore general */
 + };

The host1x module is apparently a register bus, behind which all the
other modules sit. According to the address map in the TRM, those other
modules appear to include MPE, VI, CSI, EPP, ISP, GR2D, GR3D, DISPLAY
A/B, HDMI, TVO, and DSI, plus VG, VS, VCI, and DSIB on Tegra30.

That said, I believe Terje wasn't convinced that all those modules are
really host1x children, just some. Can you check please, Terje?

Also, I wonder whether host1x is really the parent of these modules,
register-bus-access-wise, or whether the bus covers the graphics host
registers at 0x50000000-0x50023fff?

As such, I think the DT nodes for all those modules should be within the
host1x node (or perhaps graphics host node, pending above discussion):

host1x: host1x@5000 {
mpe@5404 { ... };
vi@5408 { ... };
...
};

host1x can easily instantiate all the child nodes simply by calling
of_platform_populate() at the end of its probe; see
sound/soc/tegra/tegra30_ahub.c for an example.

 + /* video-encoding/decoding */
 + mpe@5404 {
 + reg = 0x5404 0x0004;
 + interrupts = 0 68 0x04;
 + };

We'll probably end up needing a phandle from each of these nodes to
host1x, and a channel ID, so the drivers for these nodes can register
themselves with host1x. However, it's probably OK to defer the DT
binding for this until we actually start working on command-channels.

 + /* graphics host */
 + graphics@5400 {
 + compatible = nvidia,tegra20-graphics;
 +
 + #address-cells = 1;
 + #size-cells = 1;
 + ranges;

I don't think those 3 properties are needed, unless there are child
nodes with registers.

 + display-controllers = disp1 disp2;
 + carveout = 0x0e00 0x0200;
 + host1x = host1x;
 + gart = gart;
 +
 + connectors {

I'm not sure that it makes sense to put the connector nodes underneath
the graphics host node; the connectors aren't objects with registers
that are accessed through a bus managed by the graphics node. Equally,
the connectors could be useful outside of the graphics driver - e.g. the
audio driver might need to refer to an HDMI connector.

Instead, I'd probably put the connector nodes at the top level of the
device tree, and have graphics contain a property that lists the
phandles of all available connectors.
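Roughly like this (again just a sketch; the connectors property name is
made up):

/ {
        lvds_connector: lvds-connector {
                /* EDID, DDC, HPD GPIO, ... */
        };

        hdmi_connector: hdmi-connector {
                /* ... */
        };

        host1x {
                graphics {
                        connectors = <&lvds_connector &hdmi_connector>;
                };
        };
};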

 + #address-cells = 1;
 + #size-cells = 0;
 +
 + connector@0 {
 + reg = 0;
 + edid = /incbin/(machine.edid);
 + output = lvds;
 + };

I think each of these needs some kind of compatible value to indicate
its type. We also need the ability to represent both the HDMI video and
audio streams, so that a sound-card binding could use the HDMI connector
too. I see that drivers/extcon/ has appeared recently, and I wonder
whether the connector nodes in DT shouldn't be handled by that subsystem?

 + connector@1 {
 + reg = 1;
 + output = hdmi;
 + ddc = i2c2;
 +
 + hpd-gpio = gpio 111 0; /* PN7 */
 + };
 + };
 + };
 +};

 diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
 b/arch/arm/mach-tegra/board-dt-tegra20.c

 + { host1x, pll_c,14400,  true },
 + { disp1,  pll_p,6,  true },
 + { disp2,  pll_p,6,  true },

Can we use these tables /just/ to 

Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-08 Thread Stephen Warren
On 05/07/2012 02:50 AM, Terje Bergström wrote:
 On 25.04.2012 12:45, Thierry Reding wrote:
 
 +/ {
 +   ...
 +
 +   /* host1x */
 +   host1x: host1x@5000 {
 +   compatible = nvidia,tegra20-host1x;
 +   reg = 0x5000 0x00024000;
 +   interrupts = 0 64 0x04   /* cop syncpt */
 + 0 65 0x04   /* mpcore syncpt */
 + 0 66 0x04   /* cop general */
 + 0 67 0x04; /* mpcore general */
 +   };
 +
 +   /* video-encoding/decoding */
 +   mpe@5404 {
 +   reg = 0x5404 0x0004;
 +   interrupts = 0 68 0x04;
 +   };
 +
 
 (...)
 
 Hi Thierry,
 
 I have still lots of questions regarding how device trees work. I'm now
 just trying to match the device tree structure with hardware - let me
 know if that goes wrong.
 
 There's a hierarchy in the hardware, which should be represented in the
 device trees. All of the hardware are client modules for host1x - with
 the exception of host1x obviously. CPU has two methods for accessing the
 hardware: clients' register aperture and host1x channels. Both of these
 operate via host1x hardware.
 
 We should define host1x bus in the device tree, and move all nodes
 except host1x under that bus.

I think the host1x node /is/ that bus.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 08:02 AM, Terje Bergström wrote:
 On 26.06.2012 16:41, Thierry Reding wrote:
 
 On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström wrote:
 We also assign certain host1x common resources per device by convention,
 f.ex. sync points, channels etc. We currently encode that information in
 the device node (3D uses sync point number X, 2D uses numbers Y and Z).
 The information is not actually describing hardware, as it just
 describes the convention, so I'm not sure if device tree is the proper
 place for it.
 Are they configurable? If so I think we should provide for them being
 specified in the device tree. They are still hardware resources being
 assigned to devices.
 
 Yes, they're configurable, and there's nothing hardware specific in the
 assignment of a sync point to a particular use. It's all just a software
 agreement. That's why I'm a bit hesitant on putting it in device trees,
 which are supposed to only describe hardware.

So I think that the DT can describe the existence of sync-points
(presumably include a node for the sync-point HW device if it's
separate). However, since the usage of each sync-point is entirely
arbitrary, that seems like something which should be either assigned
dynamically at run-time, or at least managed/assigned in SW at runtime
somehow, rather than hard-coded into DT; it's more policy than HW.

 Either way is fine for me. The full addresses are more familiar to me as
 we tend to use them internally.

 Using the OF mechanism for translating the host1x bus addresses,
 relative to the host1x base address, to CPU addresses seems purer, but
 either way should work fine.
 
 I'll let you decide, as I don't have a strong opinion either way. I
 guess whatever is the more common way wins.

I'd certainly prefer all the nodes to use the full/absolute address.
That way, the DT will exactly match the addresses in the documentation.
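In other words, something like the following, where each child's
unit-address and reg use the absolute CPU address from the TRM (lengths
taken from the examples quoted earlier in the thread; a sketch only):

host1x@50000000 {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;         /* 1:1, so children use full CPU addresses */

        reg = <0x50000000 0x00024000>;

        dc@54200000 {
                reg = <0x54200000 0x00040000>;
        };
};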


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 07:41 AM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström wrote:
 On 26.06.2012 13:55, Thierry Reding wrote:
...
 status = disabled;
 
 gart = gart;
 
 /* video-encoding/decoding */ mpe { reg = 0x5404
 0x0004; interrupts = 0 68 0x04; status = disabled; };
 
 
 The client device interrupts are not very interesting, so they
 could be left out, too. Display controller related are probably
 an exception to this.
 
 If the interrupts aren't used at all we should drop them.

I disagree here; "used" is most likely something specific to a
particular OS's drivers. The HW always has the interrupts, and hence
they should be described in DT.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 07:01 AM, Terje Bergström wrote:
 On 26.06.2012 13:55, Thierry Reding wrote:
...
 An alternative would be to call of_platform_populate() from the host1x

[alternative to making the host1x node contain compatible=simple-bus.]

 driver. This has the advantage that it could integrate better with the
 host1x bus implementation that Terje is working on, but it also needs
 additional code to tear down the devices when the host1x driver is
 unloaded because a module reload would try to create duplicate devices
 otherwise.
 
 Yes, we already have a bus_type for nvhost, and we have nvhost_device
 and nvhost_driver that device from device and device_driver
 respectively. They all accommodate some host1x client device common
 behavior and data that we need to store. We use the bus_type also to
 match each device and driver together, but the matching is version
 sensitive. For example, Tegra2 3D needs different driver than Tegra3 3D.

I'd certainly like to see some upstream discussion re: why exactly we
have a custom bus type here. What does it do that a regular platform bus
doesn't do? Are those features something that would be generally useful
to add to the existing platform bus? Can we instead add hooks into
platform bus rather than creating a new bus? I recall you saying that
the nvhost_bus duplicated a lot of code from the existing platform bus.


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 04:55 AM, Thierry Reding wrote:
 Hi,
 
 while I haven't got much time to work on the actual code right now, I
 think it might still be useful if we could get the device tree binding
 to a point where everybody is happy with it. That'll also save me some
 time once I get to writing the code because I won't have to redo it over
 again. =)
 
 So here's the current proposal:
 
   host1x {
   compatible = nvidia,tegra20-host1x, simple-bus;
   reg = 0x5000 0x00024000;
   interrupts = 0 64 0x04   /* cop syncpt */
 0 65 0x04   /* mpcore syncpt */
 0 66 0x04   /* cop general */
 0 67 0x04; /* mpcore general */
 
   #address-cells = 1;
   #size-cells = 1;
 
   ranges = 0x5400 0x5400 0x0400;
 
   status = disabled;

The idea behind status = "disabled" is that some HW only makes sense to
use on particular boards. This concept really only applies to HW modules
that drive external interfaces on the SoC, which in turn the board can
choose whether to connect anything to (or whether to even route them to
any external pins using the pinmux at all).

As such, I don't think it makes sense to set status = "disabled" on
host1x, nor on many other internal-only engines such as, say, mpe, epp,
i2sp, gr2d, gr3d, dc1, dc2.

However it does make sense for the output resources rgb, hdmi, tvo, dsi.

   /* outputs */
   rgb {
   compatible = nvidia,tegra20-rgb;
   status = disabled;
   };
...
 The rgb node is something that I don't quite know how to handle yet.
 Since it is really part of the display controller and uses its register
 space, it isn't quite correct to represent it as a separate device. But
 we will need a separate node to make it available as a connector, which
 will become more obvious below.

Are you referring to the DC_COM_PIN_OUTPUT* registers? Sorry, I'm not at
all familiar with our display HW yet.

Some possible solutions spring to mind:

a) The output nodes don't have to be direct children of host1x. Instead,
each DC could have an rgb child node that represents its own individual
output capability.

b) If the RGB-related registers in DC are completely independent of any
other DC registers and grouped together well enough, we can just carve a
chunk out of the DC register space and give that to the RGB node instead:

i.e. not:

   dc1: dc@5420 {
           compatible = "nvidia,tegra20-dc";
           reg = <0x5420 0x0004>;
           interrupts = <0 73 0x04>;
           status = "disabled";
   };


but something more like (the completely made up example):

dc1: dc@5420 {
        compatible = "nvidia,tegra20-dc";
        reg = <0x5420 0x0002 0x54203000 0x1>;
        interrupts = <0 73 0x04>;
        status = "disabled";
};

rgb {
        compatible = "nvidia,tegra20-rgb";
        reg = <0x5422 0x0001>;
        status = "disabled";
};

c) The rgb node could simply reference the dc nodes using a phandle, and
call into the dc driver to obtain RGB configuration services from it:

rgb {
        compatible = "nvidia,tegra20-rgb";
        status = "disabled";
        nvidia,dcs = <&dc1 &dc2>;
};

By the way, if the RGB registers are in the DC, aren't there two
separate RGB outputs? Certainly the TRM implies that both DCs can be
driving LCDs, by reducing the width of the LCD signals that each DC uses
(lower bits-per-pixel, or perhaps DDR encoding on the data lines).

 Board DTS files could then extend this with board-specific requirements
 and connectors. The following describes the Medcom Wide:

   connectors {
   #address-cells = 1;
   #size-cells = 0;
 
   };

The connector seems to be a property of the individual output resources.
I'd expect to see the connector configuration be a child of the outputs
that a particular board had enabled; something more like:

host1x {
        rgb {
                status = "okay";

                connector@0 {
                        nvidia,edid = /incbin/("tegra-medcom.edid");
                };
        };
        hdmi {
                status = "okay";

                connector@0 {
                        nvidia,ddc-i2c-bus = <&tegra_i2c1>;
                };
        };
};

Perhaps even completely omit the connector node, and put the properties
directly within the rgb/hdmi node itself. After all, the HDMI output
really is the connector as far as Tegra is concerned.
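I.e. roughly (same made-up property names as above):

host1x {
        rgb {
                status = "okay";
                nvidia,edid = /incbin/("tegra-medcom.edid");
        };
        hdmi {
                status = "okay";
                nvidia,ddc-i2c-bus = <&tegra_i2c1>;
        };
};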


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 01:27 PM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 11:41:43AM -0600, Stephen Warren wrote:
 On 06/26/2012 08:02 AM, Terje Bergström wrote:
 On 26.06.2012 16:41, Thierry Reding wrote:
 
 On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström
 wrote:
 We also assign certain host1x common resources per device
 by convention, f.ex. sync points, channels etc. We
 currently encode that information in the device node (3D
 uses sync point number X, 2D uses numbers Y and Z). The
 information is not actually describing hardware, as it
 just describes the convention, so I'm not sure if device
 tree is the proper place for it.
 Are they configurable? If so I think we should provide for
 them being specified in the device tree. They are still
 hardware resources being assigned to devices.
 
 Yes, they're configurable, and there's nothing hardware
 specific in the assignment of a sync point to a particular use.
 It's all just a software agreement. That's why I'm a bit
 hesitant on putting it in device trees, which are supposed to
 only describe hardware.
 
 So I think that the DT can describe the existence of sync-points 
 (presumably include a node for the sync-point HW device if it's 
 separate). However, since the usage of each sync-point is
 entirely arbitrary, that seems like something which should be
 either assigned dynamically at run-time, or at least
 managed/assigned in SW at runtime somehow, rather than hard-coded
 into DT; it's more policy than HW.
 
 The sync-points are part of the host1x device as I understand it.
 If their usage is truly generic, then we can probably ignore them
 safely. Maybe it'd make sense to carry a property that defines the
 number of sync points available for the host1x hardware represented
 by the DT?

I would assume this can safely be inferred from the compatible value;
nvidia,tegra20-host1x vs. nvidia,tegra30-host1x, and so there's no
need to represent this in DT. I would assume (and it's certainly just
an assumption) that there are numerous other small details that are
different between the two SoCs, and so the driver will need to have
some table mapping from compatible value to various information
anyway. Terje, can you confirm/deny this (and hopefully use your
knowledge of any future chips to guide the decision without giving
anything away)?


Re: Tegra DRM device tree bindings

2012-06-26 Thread Stephen Warren
On 06/26/2012 01:31 PM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 11:43:38AM -0600, Stephen Warren wrote:
 On 06/26/2012 07:41 AM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 04:01:05PM +0300, Terje Bergström
 wrote:
 On 26.06.2012 13:55, Thierry Reding wrote:
 ...
 status = disabled;
 
 gart = gart;
 
 /* video-encoding/decoding */ mpe { reg = 0x5404 
 0x0004; interrupts = 0 68 0x04; status = disabled;
 };
 
 
 The client device interrupts are not very interesting, so
 they could be left out, too. Display controller related are
 probably an exception to this.
 
 If the interrupts aren't used at all we should drop them.
 
 I disagree here; used is most likely something specific to a 
 particular OS's drivers. The HW always has the interrupts, and
 hence they should be described in DT.
 
 Okay, I see. Does the same apply to the COP interrupts of the
 host1x node in your opinion? I don't know if it makes sense to
 describe something that's not reachable from the CPU. Yet it is
 defined in the GIC.

This probably applies to the interrupts too. The TRM does indicate
that the GIC has 4 interrupt IDs allocated to host1x. I recall Terje
saying that two of them weren't usable by the CPU though. Those two
points seem inconsistent. Terje, can you please explain further?


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 01:51 PM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 12:10:42PM -0600, Stephen Warren wrote:
 On 06/26/2012 04:55 AM, Thierry Reding wrote:
 Hi,
 
 while I haven't got much time to work on the actual code right
 now, I think it might still be useful if we could get the
 device tree binding to a point where everybody is happy with
 it. That'll also save me some time once I get to writing the
 code because I won't have to redo it over again. =)
 
 So here's the current proposal:
 
 host1x { compatible = nvidia,tegra20-host1x, simple-bus; 
 reg = 0x5000 0x00024000; interrupts = 0 64 0x04   /* cop
 syncpt */ 0 65 0x04   /* mpcore syncpt */ 0 66 0x04   /* cop
 general */ 0 67 0x04; /* mpcore general */
 
 #address-cells = 1; #size-cells = 1;
 
 ranges = 0x5400 0x5400 0x0400;
 
 status = disabled;
 
 The idea behind status=disabled is that some HW only makes
 sense to use on particular boards. This concept really only
 applies to HW modules that drive external interfaces on the SoC,
 which in turn the board can choose whether to connect anything to
 (or whether to even connect to any external pins using the
 pinmux, or not).
 
 As such, I don't think it makes sense to set status=disabled
 on host1x, nor many other internal-only engines such as say mpe,
 epp, i2sp, gr2d, gr3d, dc1, dc2.
 
 What about power management and resource usage? If a board for
 instance doesn't need gr3d at all it could just leave it at status
 = disabled to not have the corresponding driver loaded and not
 waste the power and resources.

The driver should be turning off all the clocks and power if the
devices aren't actually used (or not turning them on in the first
place). I guess there will be some slight overhead for the
device/driver instantiation.

However, in all likelihood, all/most boards will enable this feature
once it's in place. For very resource-constrained scenarios without
display, presumably one would be building a custom kernel without the
display driver enabled, so the DT content for display wouldn't ever
come into play.

 Board DTS files could then extend this with board-specific
 requirements and connectors. The following describes the Medcom
 Wide:
 
 connectors { #address-cells = 1; #size-cells = 0;
 
 };
 
 The connector seems to be a property of the individual output
 resources. I'd expect to see the connector configuration be a
 child of the outputs that a particular board had enabled;
 something more like:
 
 host1x { rgb { status = okay;
 
 connector@0 { nvidia,edid = /incbin/(tegra-medcom.edid); }; }; 
 hdmi { status = okay;
 
 connector@0 { nvidia,ddc-i2c-bus = tegra_i2c1; }; }; };
 
 Perhaps even completely omit the connector node, and put the
 properties directly within the rgb/hdmi node itself. After all
 the HDMI output really is the connector as far as Tegra goes.
 
 Heh. I seem to remember you objecting to this in a previous
 series[0] which is actually the reason that I moved them to the
 top-level in the first place. =)
 
 Thierry
 
 [0]: http://www.spinics.net/lists/linux-tegra/msg05298.html

I don't think I was objecting to exactly what I wrote above; in that
email, there were already separate connector nodes, but they were
placed directly inside the host1x node (at that time called graphics),
so still rather disconnected from the HW they represent.

The argument for sharing e.g. an HDMI port between both video and
audio still exists, though. That said, I think I'd still be inclined to
put all the connector information for video into the hdmi/rgb/tvo
nodes. If DT does grow to represent the user-visible connectors at the
top level, perhaps in conjunction with drivers/extcon, the hdmi node
could point at the extcon node with a phandle, or vice versa, to set up
the link between the components of the user-visible port.

Still, the decision here is possibly a little arbitrary; many schemes
would work. I think at this point I don't care /too/ strongly about
which is used, so the separate-connectors-at-top-level concept in your
email is probably OK. I do wonder whether the hdmi node needs a phandle
pointing at the connector node, though, so they can both find each
other?
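Something like this, with invented property names on either side (only
one direction of reference is really needed):

hdmi: hdmi {
        /* ... */
        nvidia,connector = <&hdmi_connector>;
};

hdmi_connector: hdmi-connector {
        /* ... */
        nvidia,output = <&hdmi>;
};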


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 07:46 PM, Mark Zhang wrote:
 On Tue, 26 Jun 2012 12:55:13 +0200
 Thierry Reding thierry.red...@avionic-design.de wrote:
...
 I'm not sure I understand how information about the carveout would be
 obtained from the IOMMU API, though.
 
 I think that can be similar with current gart implementation. Define carveout 
 as: 
 
 carveout {
 compatible = nvidia,tegra20-carveout;
 size = 0x1000;
 };
 
 Then create a file such like tegra-carveout.c to get these definitions and 
 register itself as platform device's iommu instance.

The carveout isn't a HW object, so it doesn't seem appropriate to define
a DT node to represent it.


Re: Tegra DRM device tree bindings

2012-06-27 Thread Stephen Warren
On 06/26/2012 08:32 PM, Mark Zhang wrote:
 On 06/26/2012 07:46 PM, Mark Zhang wrote:
 On Tue, 26 Jun 2012 12:55:13 +0200
 Thierry Reding thierry.red...@avionic-design.de wrote:
 ...
 I'm not sure I understand how information about the carveout would be
 obtained from the IOMMU API, though.

 I think that can be similar with current gart implementation. Define 
 carveout as:

 carveout {
 compatible = nvidia,tegra20-carveout;
 size = 0x1000;
 };

 Then create a file such like tegra-carveout.c to get these definitions and
 register itself as platform device's iommu instance.

 The carveout isn't a HW object, so it doesn't seem appropriate to define a DT
 node to represent it.
 
 Yes. But I think it's better to export the size of carveout as a configurable 
 item.
 So we need to define this somewhere. How about define carveout as a property 
 of gart?

There already exists a way of preventing Linux from using certain chunks
of memory; the /memreserve/ syntax. From a brief look at the dtc source,
it looks like /memreserve/ entries can have labels, which implies that a
property in the GART node could refer to the /memreserve/ entry by
phandle in order to know what memory regions to use.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/26/2012 11:07 PM, Thierry Reding wrote:
 On Tue, Jun 26, 2012 at 04:48:14PM -0600, Stephen Warren wrote:
...
 I actually like what you proposed above a lot, so if you don't
 mind either way I'll go with that proposal. Keeping the connector
 nodes as children of the outputs has the advantage of being able to
 reference them if we need it at some point. But it is also
 redundant in that a single output doesn't usually (never?) driver
 more than one connector.

Yes, I believe that each output is 1:1 with (the video portion of) a
connector. The display controllers obviously aren't 1:1.

 The same issue will have to be addressed for the CSI and VI nodes,
 but as I currently use neither of those I don't feel qualified to
 propose a binding for them. Also for the VI part we're completely
 missing documentation. Maybe somebody could push this to be
 released as well?

I did file a bug noting the request for VI documentation. At this
point in time, it's too early to say what, if anything, will come of that.

 If I understand correctly, most of the host1x children can also be 
 chained in a processing pipeline to do postprocessing an video
 input for example. I suppose that's generic and doesn't need to be
 represented in DT either, right?

Yes, I believe that's something internal to the driver.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/27/2012 08:29 AM, Hiroshi Doyu wrote:
 Could you explain a bit more why you want carveout size on per-board basis?

Different boards have different amounts of memory, and are sometimes
targeted at different use-cases (e.g. server with simple display buffer,
vs. consumer-oriented device intended to play games with OpenGL
allocating lots of textures).


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/27/2012 06:44 AM, Hiroshi Doyu wrote:
...
 I think that there are 2 cases:
 
   (1) discontiguous memory with IOMMU
   (2) contiguous memory without IOMMU(called carveout in general?)
...
 For (2), although memory is mostly anonymous one, we may need to know
 how much to allocate, where we only need size. This size is not from
 h/w feature, but it depends on the system load/usage. So I think that
 this size can be passed from kernel command line? For exmaple, we can
 specify how much contiguous memory is necessary with putting
 coherent_pool=??M in the kernel command line as below:
 
   coherent_pool=nn[KMG]   [ARM,KNL]
   Sets the size of memory pool for coherent, atomic dma
   allocations.

I guess if that's the standard way of initializing CMA, then that's fine.

It'd be nice if there were a way to specify that from the DT too; that
way the user/distro/bootloader constructing the kernel command-line
wouldn't have to remember to add random (potentially
Tegra-/board-specific) extra arguments onto the command-line; the Tegra
command-line in the upstream kernel is quite clean right now, especially
compared to the enormous number of options we require downstream. :-(


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 12:18 AM, Hiroshi Doyu wrote:
 On Wed, 27 Jun 2012 16:44:14 +0200
 Thierry Reding thierry.red...@avionic-design.de wrote:
 
 I think that coherent_pool can be used only when the amount of
 contiguous memory is short in your system. Otherwise even unnecessary.

 Could you explain a bit more why you want carveout size on per-board basis?

 In the ideal case I would want to not have a carveout size at all.
 However there may be situations where you need to make sure some driver
 can allocate a given amount of memory. Having to specify this using a
 kernel command-line parameter is cumbersome because it may require
 changes to the bootloader or whatever. So if you know that a particular
 board always needs 128 MiB of carveout, then it makes sense to specify
 it on a per-board basis.
 
 Hm...I could understand somewhat;) but DT can also specify bootargs
 in dts file, which can support per-board-wide spec too, like the above
 sum of carveout needed from all drivers. I just want to avoid
 introducing a new parameter additionaly if we can make use of the
 existing mechanism.

The bootargs in the DT file is usually provided by (over-written by) the
bootloader. If we start requiring lots of random kernel command-line
arguments, that makes it more effort for the user of the bootloader
(e.g. distribution bootloader scripts, etc.) to create the kernel
command-line. I'd prefer to avoid that as much as possible. That said,
using a standardized command-line option that is (or will be) used by
all (ARM?) SoCs for the same purpose is reasonable, because there's
commonality there.


Re: Tegra DRM device tree bindings

2012-06-28 Thread Stephen Warren
On 06/28/2012 05:12 AM, Thierry Reding wrote:
 On Wed, Jun 27, 2012 at 05:59:55PM +0200, Lucas Stach wrote:
 Am Mittwoch, den 27.06.2012, 16:44 +0200 schrieb Thierry Reding:
...
 In the ideal case I would want to not have a carveout size at
 all. However there may be situations where you need to make
 sure some driver can allocate a given amount of memory. Having
 to specify this using a kernel command-line parameter is
 cumbersome because it may require changes to the bootloader or
 whatever. So if you know that a particular board always needs
 128 MiB of carveout, then it makes sense to specify it on a
 per-board basis.
 
 If we go with CMA, this is a non-issue, as CMA allows to use the
 contig area for normal allocations and only purges them if it
 really needs the space for contig allocs.
 
 CMA certainly sounds like the most simple approach. While it may
 not be suited for 3D graphics or multimedia processing later on, I
 think we could use it at a starting point to get basic framebuffer
 and X support up and running. We can always move to something more
 advanced like TTM later.

I thought the whole purpose of CMA was to act as the infrastructure
to provide buffers to 3D, camera, etc., in particular allowing sharing
of buffers between them. In other words, isn't CMA the memory manager?
If there's some deficiency with CMA for 3D graphics, it seems like
that should be raised with those designing CMA. Or, am I way off base
with my expectations of CMA?


Re: Tegra DRM device tree bindings

2012-06-29 Thread Stephen Warren
On 06/28/2012 11:19 AM, Lucas Stach wrote:
...
 CMA is just a way of providing large contiguous address space blocks in
 a dynamic fashion. ...
 
 TTM though solves more advanced matters, like buffer synchronisation
 between 3D and 2D block of hardware ...
 
 IMHO the best solution would be to use CMA as a flexible replacement of
 the static carveout area and put TTM on top of this ...

Ah right, thanks for the explanation. That makes sense to me now.


Re: Tegra DRM device tree bindings

2012-07-06 Thread Stephen Warren
On 07/05/2012 06:15 AM, Thierry Reding wrote:
 Here's a new proposal that should address all issues collected
 during the discussion of the previous one:
 
 From tegra20.dtsi:
...

At a quick glance, that all seems pretty reasonable now.

 One problem I've come across when trying to get some rudimentary
 code working with this is that there's no longer a device which the
 DRM driver can bind to, because the top-level device (host1x) now
 has a separate driver.

Can't you just have the host1x driver trigger the instantiation of the
DRM driver? In other words, the host1x node is both the plain host1x
driver and the DRM driver. So, after initializing anything for host1x
itself, just end host1x's probe() with a call to drm_platform_init(),
passing in host1x's own pdev.

After all, it seems like the whole point of representing host1x in the
DT is to expose the graphics HW, so you'd need the DRM device in that
case.


Re: [PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/04/2012 01:56 AM, Sascha Hauer wrote:
 This patch adds a helper function for parsing videomodes from the devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.

 diff --git a/Documentation/devicetree/bindings/video/displaymode 
 b/Documentation/devicetree/bindings/video/displaymode

 +Required properties:
 + - xres, yres: Display resolution
 + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
 +   in pixels
 +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters 
 in
 +   lines

Perhaps bike-shedding, but...

For the margin property names, wouldn't it be better to use terminology
more commonly used for video timings rather than Linux FB naming? In
other words, naming like:

hactive, hsync-len, hfront-porch, hback-porch?

 + - clock: displayclock in Hz
 +
 +Optional properties:
 + - width-mm, height-mm: Display dimensions in mm
 + - hsync-active-high (bool): Hsync pulse is active high
 + - vsync-active-high (bool): Vsync pulse is active high
 + - interlaced (bool): This is an interlaced mode
 + - doublescan (bool): This is a doublescan mode
 +
 +There are different ways of describing a display mode. The devicetree 
 representation
 +corresponds to the one used by the Linux Framebuffer framework described 
 here in
 +Documentation/fb/framebuffer.txt. This representation has been chosen 
 because it's
 +the only format which does not allow for inconsistent parameters.Unlike the 
 Framebuffer
 +framework the devicetree has the clock in Hz instead of ps.

As Rob mentioned, I think there shouldn't be any mention of Linux FB here.

 +
 +Example:
 +
 + display@0 {

This node appears to describe a video mode, not a display, hence the
node name seems wrong.

Many displays can support multiple different video modes. As mentioned
elsewhere, properties like display width/height are properties of the
display, not the mode.

So, I might expect something more like the following (various overhead
properties like reg/#address-cells etc. elided for simplicity):

disp: display {
        width-mm = <...>;
        height-mm = <...>;
        modes {
                mode@0 {
                        /* 1920x1080p24 */
                        clock = <5200>;
                        xres = <1920>;
                        yres = <1080>;
                        left-margin = <25>;
                        right-margin = <25>;
                        hsync-len = <25>;
                        lower-margin = <2>;
                        upper-margin = <2>;
                        vsync-len = <2>;
                        hsync-active-high;
                };
                mode@1 {
                };
        };
};

display-connector {
        display = <&disp>;
        // interface-specific properties such as pixel format here
};

Finally, have you considered just using an EDID instead of creating
something custom? I know that creating an EDID is harder than writing a
few simple properties into a DT, but EDIDs have the following advantages:

a) They're already standardized and very common.

b) They allow other information such as a display's HDMI audio
capabilities to be represented, which can then feed into an ALSA driver.

c) The few LCD panel datasheets I've seen actually quote their
specification as an EDID already, so deriving the EDID may actually be easy.

d) People familiar with displays are almost certainly familiar with
EDID's mode representation. There are many ways of representing display
modes (sync position vs. porch widths, htotal specified rather than
specifying all the components and hence htotal being calculated etc.).
Not everyone will be familiar with all representations. Conversion
errors are less likely if the target is EDID's familiar format.

e) You'll end up with exactly the same data as if you have a DDC-based
external monitor rather than an internal panel, so you end up getting to
a common path in display handling code much more quickly.



Re: [PATCH v2] of: Add videomode helper

2012-08-02 Thread Stephen Warren
On 07/05/2012 08:51 AM, Rob Herring wrote:
 On 07/04/2012 02:56 AM, Sascha Hauer wrote:
 This patch adds a helper function for parsing videomodes from the devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.

 diff --git a/Documentation/devicetree/bindings/video/displaymode 
 b/Documentation/devicetree/bindings/video/displaymode

 +Example:
 +
 + display@0 {
 + /* 1920x1080p24 */
 + clock = 5200;
 
 Should this use the clock binding? You probably need both constraints
 and clock binding though.

I don't think the clock binding is appropriate here. This binding
specifies a desired video mode, and should specify just the expected
clock frequency for that mode. Perhaps any tolerance imposed by the
specification used to calculate the mode timing could be specified too,
but the allowed tolerance (for a mode to be recognized by the display)
is more driven by the receiving device than the transmitting device.

The clock bindings should come into play in the display controller that
sends a signal in that display mode. Something like:

mode: hd1080p {
        clock = <5200>;
        xres = <1920>;
        yres = <1080>;
};

display-controller {
        pixel-clock-source = <&clk ...>; // common clock bindings here
        default-mode = <&mode>;
};

 Often you don't know the frequency up front and/or have limited control
 of the frequency (i.e. integer dividers). Then you have to adjust the
 margins to get the desired refresh rate. To do that, you need to know
 the ranges of values a panel can support. Perhaps you just assume you
 can increase the right-margin and lower-margins as I think you will hit
 pixel clock frequency max before any limit on margins.

I think this is more usually dealt with in HW design and matching
components.

The PLL/... feeding the display controller is going to have some known
specifications that imply which pixel clocks it can generate. HW
designers will pick a panel that accepts a clock the display controller
can generate. The driver for the display controller will just generate
as near of a pixel clock as it can to the rate specified in the mode
definition. I believe it'd be unusual for a display driver to start
fiddling with front-back porch (or margin) widths to munge the timing;
that kind of thing probably worked OK with analog displays, but with
digital displays where each pixel is clocked even in the margins, I
think that would cause more problems than it solved.

Similarly for external displays, the display controller will just pick
the nearest clock it can. If it can't generate a close enough clock, it
will just refuse to set the mode. In fact, a display controller driver
would typically validate this when the set of legal modes was enumerated
when initially detecting the display, so user-space typically wouldn't
even request invalid modes.
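
To make that concrete, the kind of check I have in mind looks roughly like
this (just a sketch: the function name and the ~0.5% tolerance are made up,
and the exact policy is up to the driver):

#include <linux/clk.h>
#include <linux/errno.h>

/* Reject a mode whose pixel clock the controller cannot generate closely
 * enough; this would run while enumerating/validating modes. */
static int example_pixel_clock_ok(struct clk *pixel_clk, unsigned long requested)
{
	long rounded = clk_round_rate(pixel_clk, requested);
	unsigned long delta;

	if (rounded <= 0)
		return -EINVAL;

	delta = (unsigned long)rounded > requested ?
		(unsigned long)rounded - requested :
		requested - (unsigned long)rounded;

	/* illustrative tolerance only: roughly 0.5% deviation */
	if (delta > requested / 200)
		return -EINVAL;

	return 0;
}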


Re: [PATCH v2] of: Add videomode helper

2012-08-05 Thread Stephen Warren
On 08/03/2012 01:38 AM, Sascha Hauer wrote:
 Hi Stephen,
 
 On Thu, Aug 02, 2012 at 01:35:40PM -0600, Stephen Warren wrote:
 On 07/04/2012 01:56 AM, Sascha Hauer wrote:
 This patch adds a helper function for parsing videomodes from the 
 devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.

 diff --git a/Documentation/devicetree/bindings/video/displaymode 
 b/Documentation/devicetree/bindings/video/displaymode

 +Required properties:
 + - xres, yres: Display resolution
 + - left-margin, right-margin, hsync-len: Horizontal Display timing 
 parameters
 +   in pixels
 +   upper-margin, lower-margin, vsync-len: Vertical display timing 
 parameters in
 +   lines

 Perhaps bike-shedding, but...

 For the margin property names, wouldn't it be better to use terminology
 more commonly used for video timings rather than Linux FB naming. In
 other words naming like:

 hactive, hsync-len, hfront-porch, hback-porch?
 
 Can do. Just to make sure:
 
 hactive == xres
 hsync-len == hsync-len
 hfront-porch == right-margin
 hback-porch == left-margin

I believe so yes.

 a) They're already standardized and very common.
 
 Indeed, that's a big plus for EDID. I have no intention of removing EDID
 data from the devicetree. There are situations where EDID is handy, for
 example when you get EDID data provided by your vendor.
 

 b) They allow other information such as a display's HDMI audio
 capabilities to be represented, which can then feed into an ALSA driver.

 c) The few LCD panel datasheets I've seen actually quote their
 specification as an EDID already, so deriving the EDID may actually be easy.

 d) People familiar with displays are almost certainly familiar with
 EDID's mode representation. There are many ways of representing display
 modes (sync position vs. porch widths, htotal specified rather than
 specifying all the components and hence htotal being calculated etc.).
 Not everyone will be familiar with all representations. Conversion
 errors are less likely if the target is EDID's familiar format.
 
 You seem to think about a different class of displays for which EDID
 indeed is a better way to handle. What I have to deal with here mostly
 are dumb displays which:
 
 - can only handle their native resolution
 - Have no audio capabilities at all
 - come with a datasheet which specify a min/typ/max triplet for
   xres,hsync,..., often enough they are scanned to pdf from some previously
   printed paper.
 
 These displays are very common on embedded devices, probably that's the
 reason I did not even think about the possibility that a single display
 might have different modes.

That's true, but as I mentioned, there are at least some dumb panels
(both I've seen recently) whose specification provides the EDID. I don't
know how common that is though, I must admit.

 e) You'll end up with exactly the same data as if you have a DDC-based
 external monitor rather than an internal panel, so you end up getting to
 a common path in display handling code much more quickly.
 
 All we have in our display driver currently is:
 
 	edidp = of_get_property(np, "edid", &imxpd->edid_len);
 	if (edidp) {
 		imxpd->edid = kmemdup(edidp, imxpd->edid_len, GFP_KERNEL);
 	} else {
 		ret = of_get_video_mode(np, &imxpd->mode, NULL);
 		if (!ret)
 			imxpd->mode_valid = 1;
 	}

Presumably there's more to it though; something later checks
imxpd->mode_valid and if false, parses the EDID and sets up imxpd->mode,
etc.
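
Something along these lines is what I'd expect to find later on (purely a
sketch built around the field names in the snippet above; the stand-in
struct and the function are hypothetical, not the actual driver code):

#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>

/* stand-in for the driver's private data, with just the quoted fields */
struct imx_parallel_display {
	struct edid *edid;
	int edid_len;
	struct drm_display_mode mode;
	int mode_valid;
};

static int example_get_modes(struct drm_connector *connector,
			     struct imx_parallel_display *imxpd)
{
	/* EDID path: ends up in the same code as a DDC-probed monitor */
	if (imxpd->edid)
		return drm_add_edid_modes(connector, imxpd->edid);

	/* fixed-mode path: hand the mode parsed from DT to DRM */
	if (imxpd->mode_valid) {
		struct drm_display_mode *mode =
			drm_mode_duplicate(connector->dev, &imxpd->mode);

		if (!mode)
			return 0;

		mode->type |= DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
		drm_mode_probed_add(connector, mode);
		return 1;
	}

	return 0;
}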


Re: [alsa-devel] [RFC] ASoC: snd_soc_jack for HDMI audio: does it make sense?

2012-08-24 Thread Stephen Warren
On 08/23/2012 07:44 PM, Ricardo Neri wrote:
 On 08/22/2012 02:55 AM, Takashi Iwai wrote:
 At Tue, 21 Aug 2012 19:58:02 -0500,
 Ricardo Neri wrote:
...
 Maybe the lack of audio support in drm is because the audio users should
 not talk to drm directly but to a lower level component (omapdrm,
 omapdss?). However, today there exists video technology supports audio
 as well, such as DisplayPort or HDMI. Could it make more sense now to
 provide audio support?

 The reason is that the audio and video handling are already separated
 in the hardware level, at least, for desktop graphics.
 
 Separated in what sense? Do they have separate register banks in

For NVIDIA desktop GPUs, this is certainly true, and I think so for any
Intel Azalia/HDA controller. The separate register banks are in
different PCI functions on the chip. The Intel Azalia/HDA spec also
architects specific ways that the audio and video parts interact (i.e.
ELD representation of EDID data, unsolicited response messages when the
video state changes, etc.) Oh, I see Takashi mentioned this below.

 independent power domains?

Most likely yes.

 Can audio an video work with complete
 independence. What happens if someone decides to power off video. Is the
 audio able to continue if required?

I believe audio DMA isn't affected by the video state, but I'm not 100%
sure of that.

 The audio infoframe is passed via ELD to the audio controller part
 upon plug/unplugging via HD-audio unsolicited event, and the audio
 driver sets up the stuff according to the given ELD.  Thus no extra
 interface between drm and ALSA was required in the kernel API level,
 so far.
 
 I see that the unsolicited event is used only to parse the EDID,
 correct? It also notifies about the jack status. Hence, if there the
 cable is disconnected the application will know and act accordingly. Is
 this understanding correct?

The kernel will know, and then passes the information on to user-space.


Kconfig DRM_USB/DRM_UDL, and select vs. depends, and causing Tegra USB to be disabled

2012-09-05 Thread Stephen Warren
With respect to the following commits:

df0b344 drm/usb: select USB_SUPPORT in Kconfig
8f057d7 gpu/mfd/usb: Fix USB randconfig problems

... which end up with the following in next-20120904:

config DRM_USB
depends on DRM
depends on USB_ARCH_HAS_HCD
select USB
select USB_SUPPORT

config DRM_UDL
depends on DRM && EXPERIMENTAL
depends on USB_ARCH_HAS_HCD
select DRM_USB

Surely this is backwards; these should be dependencies, not selects? In
other words:

config DRM_USB
depends on DRM && USB

config DRM_UDL
depends on DRM && EXPERIMENTAL && USB
select DRM_USB

or perhaps:

config DRM_USB
depends on DRM && USB

config DRM_UDL
depends on DRM && EXPERIMENTAL && DRM_USB

The problem here is that currently, the dependency logic for USB:

config USB
depends on USB_ARCH_HAS_HCD

... is duplicated into each of DRM_USB and DRM_UDL, thus requiring both
of those to be edited should the dependencies for USB ever change.

The current state of the code also causes some strange problem with
ARM's tegra_defconfig, whereby running make tegra_defconfig will
result in USB support fully enabled in .config as expected, yet
subsequently running make oldconfig will cause all USB support to be
removed from .config. For some reason, the above DRM logic is causing
CONFIG_USB_ARCH_HAS_HCD not to be selected (perhaps it isn't evaluated
because USB is selected, so there's no need to evaluate USB's
dependencies?). Arguably, this is a deficiency in Tegra's Kconfig, in
that it probably should say:

select USB_ARCH_HAS_EHCI

not:

select USB_ARCH_HAS_EHCI if USB_SUPPORT

... but it has contained the latter for quite some time, and it's always
worked before somehow.


Re: Kconfig DRM_USB/DRM_UDL, and select vs. depends, and causing Tegra USB to be disabled

2012-09-05 Thread Stephen Warren
On 09/04/2012 02:00 PM, Guenter Roeck wrote:
 On Tue, Sep 04, 2012 at 01:19:12PM -0600, Stephen Warren wrote:
 With respect to the following commits:

 df0b344 drm/usb: select USB_SUPPORT in Kconfig
 8f057d7 gpu/mfd/usb: Fix USB randconfig problems

 ... which end up with the following in next-20120904:

 config DRM_USB
 depends on DRM
 depends on USB_ARCH_HAS_HCD
 select USB
 select USB_SUPPORT

 config DRM_UDL
 depends on DRM && EXPERIMENTAL
 depends on USB_ARCH_HAS_HCD
 select DRM_USB

 Surely this is backwards; these should be dependencies, not selects? In
 other words:

 config DRM_USB
 depends on DRM && USB

 config DRM_UDL
 depends on DRM && EXPERIMENTAL && USB
 select DRM_USB

 or perhaps:

 config DRM_USB
 depends on DRM && USB

 config DRM_UDL
 depends on DRM && EXPERIMENTAL && DRM_USB

 The problem here is that currently, the dependency logic for USB:

 config USB
  depends on USB_ARCH_HAS_HCD

 ... is duplicated into each of DRM_USB and DRM_UDL, thus requiring both
 of those to be edited should the dependencies for USB ever change.

 This should be fixed with in https://patchwork.kernel.org/patch/1373371/ (drm:
 udl: usb: Fix recursive Kconfig dependency), which should make it into the 
 next
 iteration of linux-next.

Yes, this does appear to solve all the problems for me. Thanks.

I still tend to believe that drivers should probably depend on things
rather than select them, but given the common precedent for select USB
that exists here, others clearly don't agree!

Sorry; accidentally sent the email too early last time:-(


Re: [PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-09-21 Thread Stephen Warren
On 09/21/2012 05:22 AM, Leela Krishna Amudala wrote:
 This patch adds device tree based discovery support for exynos DRM-FIMD
 driver which includes driver modification to handle platform data in
 both the cases with DT and non-DT, Also adds the documentation for bindings.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/fimd.txt 
 b/Documentation/devicetree/bindings/drm/exynos/fimd.txt
...
 + - samsung,fimd-display: This property should specify the phandle of the
 +   display device node which holds the video interface timing with the
 +   below mentioned properties.
 +
 +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
 + horizontal timing includes four parameters in the following order.
 +
 + - horizontal back porch (in number of lcd clocks)
 + - horizontal front porch (in number of lcd clocks)
 + - hsync pulse width (in number of lcd clocks)
 + - Display panels X resolution.
 +
 +   - lcd-vtiming: Specifies the vertical timing for the overlay. The
 + vertical timing includes four parameters in the following order.
 +
 + - vertical back porch (in number of lcd lines)
 + - vertical front porch (in number of lcd lines)
 + - vsync pulse width (in number of lcd clocks)
 + - Display panels Y resolution.

Should this not use the new videomode timings that are under discussion at:

http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html


Re: [PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-09-21 Thread Stephen Warren
On 09/21/2012 01:22 AM, Inki Dae wrote:
 2012/9/21 Stephen Warren swar...@wwwdotorg.org:
 On 09/21/2012 05:22 AM, Leela Krishna Amudala wrote:
 This patch adds device tree based discovery support for exynos DRM-FIMD
 driver which includes driver modification to handle platform data in
 both the cases with DT and non-DT, Also adds the documentation for bindings.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/fimd.txt 
 b/Documentation/devicetree/bindings/drm/exynos/fimd.txt
 ...
 + - samsung,fimd-display: This property should specify the phandle of the
 +   display device node which holds the video interface timing with the
 +   below mentioned properties.
 +
 +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
 + horizontal timing includes four parameters in the following order.
 +
 + - horizontal back porch (in number of lcd clocks)
 + - horizontal front porch (in number of lcd clocks)
 + - hsync pulse width (in number of lcd clocks)
 + - Display panels X resolution.
 +
 +   - lcd-vtiming: Specifies the vertical timing for the overlay. The
 + vertical timing includes four parameters in the following order.
 +
 + - vertical back porch (in number of lcd lines)
 + - vertical front porch (in number of lcd lines)
 + - vsync pulse width (in number of lcd clocks)
 + - Display panels Y resolution.

 Should this not use the new videomode timings that are under discussion at:

 http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html

 
 ok, I agree with you. then the videomode helper is going to be merged
 to mainline(3.6)? if so, this patch should be reworked based on the
 videomode helper.

I think the videomode helpers would be merged for 3.7 at the very
earliest; 3.6 is cooked already. Given there are still some comments on
the binding, perhaps it won't happen until 3.8, but it'd be best to ask
on that thread so that people more directly involved with the status can
answer.


Re: [PATCH v4] of: Add videomode helper

2012-09-24 Thread Stephen Warren
On 09/24/2012 07:42 AM, Rob Herring wrote:
 On 09/19/2012 03:20 AM, Steffen Trumtrar wrote:
 This patch adds a helper function for parsing videomodes from the devicetree.
 The videomode can be either converted to a struct drm_display_mode or a
 struct fb_videomode.

 +++ b/Documentation/devicetree/bindings/video/displaymode
 @@ -0,0 +1,74 @@
 +videomode bindings
 +==
 +
 +Required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
 parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock: displayclock in Hz
 
 A major piece missing is the LCD controller to display interface width
 and component ordering.

I thought this binding was solely defining the timing of the video
signal (hence video mode). Any definition of the physical interface to
the LCD/display-connector is something entirely orthogonal, so it seems
entirely reasonable to represent that separately.

 +Example:
 +
 +display@0 {
 
 It would be useful to have a compatible string here. We may not always
 know the panel type or have a fixed panel though. We could define
 generic-lcd or something for cases where the panel type is unknown.
 
 +width-mm = <800>;
 +height-mm = <480>;

I would hope that everything in the example above this point is just
that - an example, and this binding only covers the display mode
definition - i.e. that part of the example below.

If that's not the intent, as Rob says, there's a /ton/ of stuff missing.

 +modes {
 +mode0: mode@0 {
 +/* 1920x1080p24 */
 +clock = <5200>;
 +hactive = <1920>;
 +vactive = <1080>;
 +hfront-porch = <25>;
 +hback-porch = <25>;
 +hsync-len = <25>;
 +vback-porch = <2>;
 +vfront-porch = <2>;
 +vsync-len = <2>;
 +hsync-active-high;
 +};
 +};
 +};



Re: [PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-10-01 Thread Stephen Warren
On 09/30/2012 11:29 PM, Leela Krishna Amudala wrote:
 Hello Stephen Warren,
 
 The binding names that I use in my dts file should match with the
 names given in 
 http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html
 right?

I don't think so; the binding in that link is for example:

 + - xres, yres: Display resolution
 + - left-margin, right-margin, hsync-len: Horizontal Display timing parameters
 +   in pixels
 +   upper-margin, lower-margin, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock: displayclock in Hz

i.e. a bunch of separate properties, one for each value needed to
describe the display timing. However, your patch contains:

 + - samsung,fimd-display: This property should specify the phandle of the
 +   display device node which holds the video interface timing with the
 +   below mentioned properties.
 +
 +   - lcd-htiming: Specifies the horizontal timing for the overlay. The
 + horizontal timing includes four parameters in the following order.
 +
 + - horizontal back porch (in number of lcd clocks)
 + - horizontal front porch (in number of lcd clocks)
 + - hsync pulse width (in number of lcd clocks)
 + - Display panels X resolution.

A single lcd-htiming property, which contains 4 values. (and a similar
construct for the vertical timing).

That seems entirely different to me...


Re: [PATCH 1/2] of: add helper to parse display specs

2012-10-01 Thread Stephen Warren
On 09/24/2012 09:35 AM, Steffen Trumtrar wrote:
 Parse a display-node with timings and hardware-specs from devictree.

 diff --git a/Documentation/devicetree/bindings/video/display 
 b/Documentation/devicetree/bindings/video/display
 new file mode 100644
 index 000..722766a
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display

This should be display.txt.

 @@ -0,0 +1,208 @@
 +display bindings
 +==
 +
 +display-node
 +

I'm not personally convinced about the direction this is going. While I
think it's reasonable to define DT bindings for displays, and DT
bindings for display modes, I'm not sure that it's reasonable to couple
them together into a single binding.

I think creating a well-defined timing binding first will be much
simpler than doing so within the context of a display binding; the
scope/content of a general display binding seems much less well-defined
to me at least, for reasons I mentioned before.

 +required properties:
 + - none
 +
 +optional properties:
 + - default-timing: the default timing value
 + - width-mm, height-mm: Display dimensions in mm

 + - hsync-active-high (bool): Hsync pulse is active high
 + - vsync-active-high (bool): Vsync pulse is active high

At least those two properties should exist in the display timing instead
(or perhaps as well). There are certainly cases where different similar
display modes are differentiated by hsync/vsync polarity more than
anything else. This is probably more likely with analog display
connectors than digital, but I see no reason why a DT binding for
display timing shouldn't cover both.

 + - de-active-high (bool): Data-Enable pulse is active high
 + - pixelclk-inverted (bool): pixelclock is inverted

 + - pixel-per-clk

pixel-per-clk is probably something that should either be part of the
timing definition, or something computed internally to the display
driver based on rules for the signal type, rather than something
represented in DT.

The above comment assumes this property is intended to represent DVI's
requirement for pixel clock doubling for low-pixel-clock-rate modes. If
it's something to do with e.g. a single-data-rate vs. double-data-rate
property of the underlying physical connection, that's most likely
something that should be defined in a binding specific to e.g. LVDS,
rather than something generic.

 + - link-width: number of channels (e.g. LVDS)
 + - bpp: bits-per-pixel
 +
 +timings-subnode
 +---
 +
 +required properties:
 +subnodes that specify
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock: displayclock in Hz
 +
 +There are different ways of describing a display and its capabilities. The 
 devicetree
 +representation corresponds to the one commonly found in datasheets for 
 displays.
 +The description of the display and its timing is split in two parts: first 
 the display
 +properties like size in mm and (optionally) multiple subnodes with the 
 supported timings.
 +If a display supports multiple signal timings, the default-timing can be 
 specified.
 +
 +Example:
 +
 + display@0 {
 + width-mm = <800>;
 + height-mm = <480>;
 + default-timing = <&timing0>;
 + timings {
 + timing0: timing@0 {

If you're going to use a unit address (@0) to ensure that node names
are unique (which is not mandatory), then each node also needs a reg
property with matching value, and #address-cells/#size-cells in the
parent. Instead, you could name the nodes something unique based on the
mode name to avoid this, e.g. 1080p24 { ... }.



Re: [PATCH V6 2/2] video: drm: exynos: Add device tree support

2012-10-03 Thread Stephen Warren
On 10/02/2012 10:06 PM, Leela Krishna Amudala wrote:
 On Mon, Oct 1, 2012 at 9:50 PM, Stephen Warren swar...@wwwdotorg.org wrote:
 On 09/30/2012 11:29 PM, Leela Krishna Amudala wrote:
 Hello Stephen Warren,

 The binding names that I use in my dts file should match with the
 names given in 
 http://lists.freedesktop.org/archives/dri-devel/2012-July/024875.html
 right?

 I don't think so; the binding in that link is for example:

 + - xres, yres: Display resolution
 + - left-margin, right-margin, hsync-len: Horizontal Display timing 
 parameters
 +   in pixels
 +   upper-margin, lower-margin, vsync-len: Vertical display timing 
 parameters in
 +   lines
 + - clock: displayclock in Hz

 i.e. a bunch of separate properties, one for each value needed to
 describe the display timing. However, your patch contains:
 
 I mean to say that even I have to use separate properties for each one
 instead of grouping them.
 Also the names should match with the ones given in the example..?

Yes. The patch I pointed to isn't supposed to be just an example, but
/the/ standard way of representing display timings.



Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-04 Thread Stephen Warren
On 10/04/2012 11:59 AM, Steffen Trumtrar wrote:

A patch description would be useful for something like this.

 diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
 b/Documentation/devicetree/bindings/video/display-timings.txt
 new file mode 100644
...
 +Usage in backend
 +

Everything before that point in the file looks fine to me.

Everything after this point in the file seems to be Linux-specific
implementation details. Does it really belong in the DT binding
documentation, rather than some Linux-specific documentation file?

 +struct drm_display_mode
 +===
 +
 +  [ASCII timing diagram: blanking regions drawn as boxes along the top
 +   and left-hand side of the active area]

I suspect the entire horizontal box above (and the entire vertical box
all the way down the left-hand side) should be on the bottom/right
instead of top/left. The reason I think this is because all of
vsync_start, vsync_end, vdisplay have to be referenced to some known
point, which is usually zero or the start of the timing definition, /or/
there would be some value indicating the size of the top margin/porch
in order to say where those other values are referenced to.

 +  [diagram continues: arrows mark hdisplay, hsync_start, hsync_end and
 +   htotal horizontally, and vdisplay, vsync_start, vsync_end and vtotal
 +   vertically, with the blanking drawn at the top/left of the frame]

 diff --git a/drivers/of/of_display_timings.c b/drivers/of/of_display_timings.c

 +static int parse_property(struct device_node *np, char *name,
 + struct timing_entry *result)

 + if (cells == 1)
 + ret = of_property_read_u32_array(np, name, &result->typ, cells);

Should that branch not just set result->min/max to typ as well?
Presumably it'd prevent any code that interprets struct timing_entry
from having to check if those values were 0 or not?

 + else if (cells == 3)
 + ret = of_property_read_u32_array(np, name, &result->min, cells);
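
For reference, what I have in mind for that branch is something like this
(sketch only; the struct layout and the 3-cell read are assumed from the
quoted code, and the helper name is made up):

#include <linux/errno.h>
#include <linux/of.h>

struct timing_entry {
	u32 min;
	u32 typ;
	u32 max;
};

static int parse_timing_property_sketch(const struct device_node *np,
					const char *name, int cells,
					struct timing_entry *result)
{
	int ret = -EINVAL;

	if (cells == 1) {
		ret = of_property_read_u32_array(np, name, &result->typ, 1);
		/* mirror typ into min/max so consumers never see zeroes */
		if (!ret)
			result->min = result->max = result->typ;
	} else if (cells == 3) {
		/* reads min/typ/max in one go, as in the patch */
		ret = of_property_read_u32_array(np, name, &result->min, 3);
	}

	return ret;
}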

 +struct display_timings 

Re: [PATCH 2/2 v6] of: add generic videomode description

2012-10-04 Thread Stephen Warren
On 10/04/2012 11:59 AM, Steffen Trumtrar wrote:
 Get videomode from devicetree in a format appropriate for the
 backend. drm_display_mode and fb_videomode are supported atm.
 Uses the display signal timings from of_display_timings

 +++ b/drivers/of/of_videomode.c

 +int videomode_from_timing(struct display_timings *disp, struct videomode *vm,

 + st = display_timings_get(disp, index);
 +
 + if (!st) {

It's a little odd to leave a blank line between those two lines.

Only half of the code in this file seems OF-related; the routines to
convert a timing to a videomode or drm display mode seem like they'd be
useful outside device tree, so I wonder if putting them into
of_videomode.c is the correct thing to do. Still, it's probably not a
big deal.
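
For example, the conversion itself is plain arithmetic with no OF
involvement at all (sketch; the videomode field names are the ones used in
this series, and drm_display_mode stores the clock in kHz):

#include <drm/drm_crtc.h>
#include <linux/types.h>

/* struct videomode as proposed in this series (field names assumed) */
struct videomode {
	u32 pixelclock;				/* Hz */
	u32 hactive, hfront_porch, hback_porch, hsync_len;
	u32 vactive, vfront_porch, vback_porch, vsync_len;
};

static void example_videomode_to_drm(const struct videomode *vm,
				     struct drm_display_mode *dmode)
{
	dmode->hdisplay = vm->hactive;
	dmode->hsync_start = vm->hactive + vm->hfront_porch;
	dmode->hsync_end = dmode->hsync_start + vm->hsync_len;
	dmode->htotal = dmode->hsync_end + vm->hback_porch;

	dmode->vdisplay = vm->vactive;
	dmode->vsync_start = vm->vactive + vm->vfront_porch;
	dmode->vsync_end = dmode->vsync_start + vm->vsync_len;
	dmode->vtotal = dmode->vsync_end + vm->vback_porch;

	dmode->clock = vm->pixelclock / 1000;	/* kHz */
}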


Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-05 Thread Stephen Warren
On 10/04/2012 03:35 PM, Guennadi Liakhovetski wrote:
 Hi Steffen
 
 Sorry for chiming in so late in the game, but I've long been wanting to 
 have a look at this and compare with what we do for V4L2, so, this seems a 
 great opportunity to me:-)
 
 On Thu, 4 Oct 2012, Steffen Trumtrar wrote:

 diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
 b/Documentation/devicetree/bindings/video/display-timings.txt

 +timings-subnode
 +---
 +
 +required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
 parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing parameters 
 in
 +   lines
 + - clock: displayclock in Hz
 
 You're going to hate me for this, but eventually we want to actually 
 reference clock objects in our DT bindings. For now, even if you don't 
 want to actually add clock phandles and stuff here, I think, using the 
 standard clock-frequency property would be much better!

In a definition of a display timing, we will never need to use the clock
binding; the clock binding would be used by the HW module that is
generating a timing, not by the timing definition itself.
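
In other words, the controller node carries the clock consumer binding and
a timing only supplies the frequency to program. Roughly (sketch; the
"pixel" con_id is made up):

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

static int example_set_pixel_clock(struct device *dc_dev,
				   unsigned long timing_clock_hz)
{
	/* the clock handle comes from the controller's own node/bindings */
	struct clk *pclk = devm_clk_get(dc_dev, "pixel");

	if (IS_ERR(pclk))
		return PTR_ERR(pclk);

	/* the timing node only tells us what rate to ask for */
	return clk_set_rate(pclk, timing_clock_hz);
}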

That said, your comment about renaming the property to avoid any kind of
conceptual conflict is still quite valid. This is bike-shedding, but
pixel-clock might be more in line with typical video mode terminology,
although there's certainly preference in DT for using the generic term
clock-frequency that you proposed. Either is fine by me.

 +optional properties:
 + - hsync-active-high (bool): Hsync pulse is active high
 + - vsync-active-high (bool): Vsync pulse is active high
 
 For the above two we also considered using bool properties but eventually 
 settled down with integer ones:
 
 - hsync-active = 1
 
 for active-high and 0 for active low. This has the added advantage of 
 being able to omit this property in the .dts, which then doesn't mean, 
 that the polarity is active low, but rather, that the hsync line is not 
 used on this hardware. So, maybe it would be good to use the same binding 
 here too?

I agree. This also covers the case where analog display connectors often
use polarity to differentiate similar modes, yet digital connectors
almost always use a fixed polarity since the receiving device can
measure the signal in more complete ways.

If the board HW inverts these lines, the same property names can exist
in the display controller itself, and the two values XORd together to
yield the final output polarity.
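
A sketch of what I mean by XORing the two (the controller-level
"hsync-invert" property name is made up; "hsync-active" follows the 0/1
convention discussed above, and the defaults are illustrative only):

#include <linux/of.h>
#include <linux/types.h>

static bool example_hsync_output_active_high(struct device_node *timing_np,
					     struct device_node *dc_np)
{
	u32 panel_pol = 1;	/* polarity the panel/timing asks for */
	u32 board_invert = 0;	/* whether the board wiring inverts the line */

	of_property_read_u32(timing_np, "hsync-active", &panel_pol);
	of_property_read_u32(dc_np, "hsync-invert", &board_invert);

	return (panel_pol ^ board_invert) != 0;
}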

 + - de-active-high (bool): Data-Enable pulse is active high
 + - pixelclk-inverted (bool): pixelclock is inverted
 
 We don't (yet) have a de-active property in V4L, don't know whether we'll 
 ever have to distingsuish between what some datasheets call HREF and 
 HSYNC in DT, but maybe similarly to the above an integer would be 
 preferred. As for pixclk, we call the property pclk-sample and it's also 
 an integer.

Thinking about this more: de-active-high is likely to be a
board-specific property and hence something in the display controller,
not in the mode definition?

 + - interlaced (bool)
 
 Is interlaced a property of the hardware, i.e. of the board? Can the 
 same display controller on one board require interlaced data and on 
 another board - progressive?

Interlace is a property of a display mode. It's quite possible for a
particular display controller to switch between interlace and
progressive output at run-time. For example, reconfiguring the output
between 480i, 720p, 1080i, 1080p modes. Admittedly, if you're talking to
a built-in LCD display, you're probably always going to be driving the
single mode required by the panel, and that mode will likely always be
progressive. However, since this binding attempts to describe any
display timing, I think we still need this property per mode.

 BTW, I'm not very familiar with display 
 interfaces, but for interlaced you probably sometimes use a field signal, 
 whose polarity you also want to specify here? We use a field-even-active 
 integer property for it.

I think that's a property of the display controller itself, rather than
an individual mode, although I'm not 100% certain. My assertion is that
the physical interface that the display controller is driving will
determine whether embedded or separate sync is used, and in the separate
sync case, how the field signal is defined, and that all interlace modes
driven over that interface will use the same field signal definition.


Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Stephen Warren
On 10/08/2012 02:25 AM, Guennadi Liakhovetski wrote:
 On Fri, 5 Oct 2012, Stephen Warren wrote:
 
 On 10/04/2012 03:35 PM, Guennadi Liakhovetski wrote:
 Hi Steffen

 Sorry for chiming in so late in the game, but I've long been wanting to 
 have a look at this and compare with what we do for V4L2, so, this seems a 
 great opportunity to me:-)

 On Thu, 4 Oct 2012, Steffen Trumtrar wrote:

 diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
 b/Documentation/devicetree/bindings/video/display-timings.txt

 +timings-subnode
 +---
 +
 +required properties:
 + - hactive, vactive: Display resolution
 + - hfront-porch, hback-porch, hsync-len: Horizontal Display timing 
 parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: Vertical display timing 
 parameters in
 +   lines
 + - clock: displayclock in Hz

 You're going to hate me for this, but eventually we want to actually 
 reference clock objects in our DT bindings. For now, even if you don't 
 want to actually add clock phandles and stuff here, I think, using the 
 standard clock-frequency property would be much better!

 In a definition of a display timing, we will never need to use the clock
 binding; the clock binding would be used by the HW module that is
 generating a timing, not by the timing definition itself.
 
 You mean clock consumer bindings will be in the display device DT node? 
 And the display-timings node will be its child?

Yes

...
 + - interlaced (bool)

 Is interlaced a property of the hardware, i.e. of the board? Can the 
 same display controller on one board require interlaced data and on 
 another board - progressive?

 Interlace is a property of a display mode. It's quite possible for a
 particular display controller to switch between interlace and
 progressive output at run-time. For example, reconfiguring the output
 between 480i, 720p, 1080i, 1080p modes. Admittedly, if you're talking to
 a built-in LCD display, you're probably always going to be driving the
 single mode required by the panel, and that mode will likely always be
 progressive. However, since this binding attempts to describe any
 display timing, I think we still need this property per mode.
 
 But why do you need this in the DT then at all?

Because the driver for the display controller has no idea what display
or panel will be connected to it.

 If it's fixed, as required 
 per display controller, then its driver will know it. If it's runtime 
 configurable, then it's a purely software parameter and doesn't depend on 
 the board?

interlace-vs-progressive isn't fixed, as required per display
controller, but is a property of the mode being sent by the display
controller, and the requirements for that mode are driven by the
panel/display connected to the display controller, not the display
controller, in general.

...
 BTW, I'm not very familiar with display 
 interfaces, but for interlaced you probably sometimes use a field signal, 
 whose polarity you also want to specify here? We use a field-even-active 
 integer property for it.

 I think that's a property of the display controller itself, rather than
 an individual mode, although I'm not 100% certain. My assertion is that
 the physical interface that the display controller is driving will
 determine whether embedded or separate sync is used, and in the separate
 sync case, how the field signal is defined, and that all interlace modes
 driven over that interface will use the same field signal definition.
 
 In general, I might be misunderstanding something, but don't we have to 
 distinguish between 2 types of information about display timings: (1) is 
 defined by the display controller requirements, is known to the display 
 driver and doesn't need to be present in timings DT. We did have some of 
 these parameters in board data previously, because we didn't have proper 
 display controller drivers...

Yes, there probably is data of that kind, but the display mode timings
binding is only addressing standardized display timing information, not
controller-specific information, and hence doesn't cover this case.

 (2) is board specific configuration, and is
 such it has to be present in DT.

Certainly, yes.

 In that way, doesn't interlaced belong to type (1) and thus doesn't need 
 to be present in DT?

I don't believe so.


Re: [PATCH 1/2 v6] of: add helper to parse display timings

2012-10-08 Thread Stephen Warren
On 10/08/2012 06:20 AM, Tomi Valkeinen wrote:
 On Mon, 2012-10-08 at 14:04 +0200, Laurent Pinchart wrote:
 On Monday 08 October 2012 12:01:18 Tomi Valkeinen wrote:
 On Mon, 2012-10-08 at 10:25 +0200, Guennadi Liakhovetski
 wrote:
...
 Of course, if this is about describing the hardware, the
 default-mode property doesn't really fit in...
 
 Maybe we should rename it to native-mode then ?
 
 Hmm, right, if it means native mode, then it is describing the
 hardware. But would it make sense to require that the native mode
 is the first mode in the list, then? This would make the separate 
 default-mode/native-mode property not needed.

I'm not sure if device-tree guarantees that the nodes enumerate in a
specific order. If it does, then that may be a reasonable solution.
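
For comparison, the phandle-based alternative doesn't depend on node order
at all; something like (sketch, using the property name suggested above):

#include <linux/of.h>

static struct device_node *example_get_native_mode(struct device_node *timings)
{
	struct device_node *native;

	/* an explicit native-mode phandle wins... */
	native = of_parse_phandle(timings, "native-mode", 0);
	if (native)
		return native;

	/* ...otherwise fall back to the first child, if the order is trusted */
	return of_get_next_child(timings, NULL);
}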



Re: [PATCH v7 2/8] of: add helper to parse display timings

2012-11-01 Thread Stephen Warren
On 10/31/2012 03:28 AM, Steffen Trumtrar wrote:

Patch description? The patch defines the DT binding as well, which isn't
mentioned in the patch subject.

 new file mode 100644
 index 000..04c94a3
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display-timings.txt

 +Usage in backend
 +

Everything before this point in the binding docs looks reasonable to me.
Everything after this point is Linux-specific/internal implementation
detail, and hence shouldn't be in the binding document.

I only read the DT binding.



Re: [PATCH 1/2] drm: Add NVIDIA Tegra20 support

2012-11-09 Thread Stephen Warren
On 11/09/2012 06:59 AM, Thierry Reding wrote:
 This commit adds a KMS driver for the Tegra20 SoC. This includes basic
 support for host1x and the two display controllers found on the Tegra20
 SoC. Each display controller can drive a separate RGB/LVDS output.

 diff --git 
 a/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt 
 b/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt
 new file mode 100644
 index 000..b4fa934
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/gpu/drm/nvidia,tegra20-host1x.txt

drm is a Linux-specific term, so shouldn't really be used as the
directory name for a binding. bindings/gpu/nvidia,tegra20-host1x.txt
would probably be just fine.

Aside from that, the bindings,
Acked-by: Stephen Warren swar...@nvidia.com

I don't really know anything about DRM or our display HW, so I haven't
reviewed the code at all. I certainly ack the concept of adding the
driver though! I have asked various other people at NVIDIA to give a
quick review of the code.


Re: [PATCH 1/2] drm: Add NVIDIA Tegra20 support

2012-11-09 Thread Stephen Warren
On 11/09/2012 06:59 AM, Thierry Reding wrote:
 This commit adds a KMS driver for the Tegra20 SoC. This includes basic
 support for host1x and the two display controllers found on the Tegra20
 SoC. Each display controller can drive a separate RGB/LVDS output.

I applied these two patches and the two arch/arm/mach-tegra patches you
posted, directly on top of next-20121109, and I see the following build
failure:

 drivers/gpu/drm/tegra/output.c: In function 'tegra_output_init':
 drivers/gpu/drm/tegra/output.c:166:9: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c:166:3: error: implicit declaration of function 'of_get_display'
 drivers/gpu/drm/tegra/output.c:167:20: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c:168:25: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c:179:13: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c:180:3: error: implicit declaration of function 'display_put'
 drivers/gpu/drm/tegra/output.c:180:21: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c:257:20: error: 'struct tegra_output' has no member named 'display'
 drivers/gpu/drm/tegra/output.c: In function 'tegra_output_exit':
 drivers/gpu/drm/tegra/output.c:272:20: error: 'struct tegra_output' has no member named 'display'

Does this depend on something not in linux-next?


Re: [PATCH v8 2/6] video: add of helper for videomode

2012-11-12 Thread Stephen Warren
On 11/12/2012 08:37 AM, Steffen Trumtrar wrote:
 This adds support for reading display timings from DT or/and convert one of 
 those
 timings to a videomode.
 The of_display_timing implementation supports multiple children where each
 property can have up to 3 values. All children are read into an array, that
 can be queried.
 of_get_videomode converts exactly one of that timings to a struct videomode.

 diff --git a/Documentation/devicetree/bindings/video/display-timings.txt 
 b/Documentation/devicetree/bindings/video/display-timings.txt

The device tree bindings look reasonable to me, so,

Acked-by: Stephen Warren swar...@nvidia.com


Re: [PATCH v2 0/2] NVIDIA Tegra DRM driver

2012-11-12 Thread Stephen Warren
On 11/12/2012 02:55 PM, Thierry Reding wrote:
 This second version of this patch series addresses all the comments
 received so far. Most notably it takes advantage of the debugfs helpers
 provided by the DRM core. Oddly enough this actually increases the line
 count, but that's because the helpers don't fit with the subdevices
 approach as implemented by this driver. However some quick discussions
 with Rob Clark showed that Tegra DRM is not special in this respect but
 other drivers may need the same functionality. Eventually the debugfs
 code could be reworked on top of helpers that are better suited at the
 design of embedded, multi-device DRM drivers.
 
 Other than that there is some removal of code that was actually supposed
 to go into a later patch because it has dependencies that haven't been
 merged yet and some moving around of #defines and the device tree
 bindings documentation. Finally the driver now uses the DRM core's
 drm_compat_ioctl() instead of a custom and unimplemented (!) version.

The series,

Tested-by: Stephen Warren swar...@nvidia.com

(on the Harmony board's HDMI output; I'll test other boards/outputs later).


Re: [PATCH v2 1/2] drm: Add NVIDIA Tegra20 support

2012-11-13 Thread Stephen Warren
On 11/13/2012 12:15 AM, Mark Zhang wrote:
 On 11/13/2012 05:55 AM, Thierry Reding wrote:
 This commit adds a KMS driver for the Tegra20 SoC. This includes basic
 support for host1x and the two display controllers found on the Tegra20
 SoC. Each display controller can drive a separate RGB/LVDS output.

Mark, when you're replying to a long patch, it's useful to quote only
the code you're directly responding to; slowly scrolling through your
email to find your comments is rather error-prone and time-consuming.

Thanks for the feedback though.


Re: [PATCH v2 1/2] drm: Add NVIDIA Tegra20 support

2012-11-13 Thread Stephen Warren
On 11/13/2012 02:37 AM, Thierry Reding wrote:
 On Tue, Nov 13, 2012 at 04:49:24PM +0800, Mark Zhang wrote:
 On 11/13/2012 03:48 PM, Thierry Reding wrote:
 * PGP Signed by an unknown key
 
 On Tue, Nov 13, 2012 at 03:15:47PM +0800, Mark Zhang wrote:
 On 11/13/2012 05:55 AM, Thierry Reding wrote:
 This commit adds a KMS driver for the Tegra20 SoC. This
 includes basic support for host1x and the two display
 controllers found on the Tegra20 SoC. Each display
 controller can drive a separate RGB/LVDS output.

 diff --git a/drivers/gpu/drm/tegra/Kconfig b/drivers/gpu/drm/tegra/Kconfig
 new file mode 100644
 index 000..be1daf7
 --- /dev/null
 +++ b/drivers/gpu/drm/tegra/Kconfig
 @@ -0,0 +1,23 @@
 +config DRM_TEGRA
 +	tristate "NVIDIA Tegra DRM"
 +	depends on DRM && OF && ARCH_TEGRA
 +	select DRM_KMS_HELPER
 +	select DRM_GEM_CMA_HELPER
 +	select DRM_KMS_CMA_HELPER
 
 Just for curious, according to my testing, why the
 CONFIG_CMA is not enabled while DRM_GEM_CMA_HELPER 
 DRM_KMS_CMA_HELPER are enabled here?
 
 The reason is that CMA doesn't actually provide any API for
 drivers to use and in fact unless you use very large buffers
 you could indeed run this code on top of a non-CMA kernel and
 it will likely even work.
 
 
 Okay. But I think it's better to turn on CMA defaultly. During
 my testing, it's hard to allocate more 2MB without CMA...
 
 CMA is enabled by default in one of the Tegra default
 configuration patches in my tegra/next branch. I will submit that
 patch to Stephen when the 3.8 cycle starts, so that it'll be
 automatically enabled along with the DRM driver.
 
 But I don't think it makes sense to couple it to the DRM_TEGRA
 symbol as it isn't strictly required.

OK, I guess that approach makes sense; most people will just use the
defconfig and hence get a useful kernel, while flexibility will not be
lost if someone really wants something different.

Note that I have less than 1 week left to apply patches for 3.8. I
hope that if tegradrm makes it into the drm tree for 3.8, so will the
defconfig and other enablement patches to activate it.


Re: [PATCH v2 0/2] NVIDIA Tegra DRM driver

2012-11-13 Thread Stephen Warren
On 11/12/2012 11:47 PM, Thierry Reding wrote:
 On Mon, Nov 12, 2012 at 05:17:18PM -0700, Stephen Warren wrote:
 On 11/12/2012 02:55 PM, Thierry Reding wrote:
 This second version of this patch series addresses all the
 comments received so far. Most notably it takes advantage of
 the debugfs helpers provided by the DRM core. Oddly enough this
 actually increases the line count, but that's because the
 helpers don't fit with the subdevices approach as implemented
 by this driver. However some quick discussions with Rob Clark
 showed that Tegra DRM is not special in this respect but other
 drivers may need the same functionality. Eventually the
 debugfs code could be reworked on top of helpers that are
 better suited at the design of embedded, multi-device DRM
 drivers.
 
 Other than that there is some removal of code that was actually
 supposed to go into a later patch because it has dependencies
 that haven't been merged yet and some moving around of #defines
 and the device tree bindings documentation. Finally the driver
 now uses the DRM core's drm_compat_ioctl() instead of a custom
 and unimplemented (!) version.
 
 The series,
 
 Tested-by: Stephen Warren swar...@nvidia.com
 
 (on the Harmony board's HDMI output; I'll test other
 boards/outputs later).
 
 You also gave an Acked-by for the DT binding documentation in the
 first version of this patchset, does it apply to the rest of the
 patch as well? That is, can I add it to patch 1?

I didn't actually read the rest of the patch since there are many
people much more familiar with the host1x/... code that will provide
useful feedback.

However, yes, I think it's fine to include my ack in the patch - it's
common to ack only parts of patches I believe.


Re: [PATCH v8 2/6] video: add of helper for videomode

2012-11-13 Thread Stephen Warren
On 11/13/2012 04:08 AM, Thierry Reding wrote:
 On Mon, Nov 12, 2012 at 04:37:02PM +0100, Steffen Trumtrar wrote:
 This adds support for reading display timings from DT or/and
 convert one of those timings to a videomode. The
 of_display_timing implementation supports multiple children where
 each property can have up to 3 values. All children are read into
 an array, that can be queried. of_get_videomode converts exactly
 one of that timings to a struct videomode.

 diff --git
 a/Documentation/devicetree/bindings/video/display-timings.txt
 b/Documentation/devicetree/bindings/video/display-timings.txt

 + - clock-frequency: displayclock in Hz
 
 display clock?

I /think/ I had suggested naming this clock-frequency before so that
the property name would be more standardized; other bindings use that
same name. But I'm not too attached to the name I guess.


Re: [PATCH v3 0/2] NVIDIA Tegra DRM driver

2012-11-15 Thread Stephen Warren
On 11/15/2012 02:28 PM, Thierry Reding wrote:
 This third version of the patch series cleans up some minor issues that
 were found in the previous versions, such as deferred probe not working
 properly and the display remaining enabled after the driver is removed.
 
 I've also put the two patches in this series into the tegra/drm/for-3.8
 branch of my repository on gitorious[0].

The series,

Tested-by: Stephen Warren swar...@nvidia.com

(on Harmony, using the HDMI output)



Re: [PATCH] drm: Add NVIDIA Tegra30 support

2012-11-16 Thread Stephen Warren
On 11/15/2012 09:58 PM, Mark Zhang wrote:
 This patch is based on Thierry's drm patch for Tegra20:
 - [PATCH v2 0/6] Device tree updates for host1x support
 - [PATCH v3 0/2] NVIDIA Tegra DRM driver

 diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
 index 216cd0f..6e9f1b4 100644
 --- a/drivers/gpu/drm/tegra/dc.c
 +++ b/drivers/gpu/drm/tegra/dc.c
 @@ -819,6 +819,7 @@ static int tegra_dc_remove(struct platform_device *pdev)
  
  static struct of_device_id tegra_dc_of_match[] = {
 { .compatible = "nvidia,tegra20-dc", },
 + { .compatible = "nvidia,tegra30-dc", },

The Tegra30 entries should come first in the table due to the order the
kernel (currently) matches compatible properties against these tables.
The same comment applies for all 3 tables.
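
Concretely, I'd expect the tables to end up looking like this (sketch; the
point is only that the Tegra30 entry comes first, assuming Tegra30 device
trees also list the Tegra20 compatible as a fallback):

#include <linux/of.h>

static struct of_device_id tegra_dc_of_match[] = {
	/* more specific entry first so it matches before the fallback */
	{ .compatible = "nvidia,tegra30-dc", },
	{ .compatible = "nvidia,tegra20-dc", },
	{ },
};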

With the patch manually applied (due to the whitespace issues I
mentioned earlier), and this ordering issue fixed,

Tested-by: Stephen Warren swar...@nvidia.com

(On Cardhu, with no HDMI plugged in; see my next email for details).


Re: [PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-26 Thread Stephen Warren
On 11/22/2012 12:37 PM, Thierry Reding wrote:
 Instead of using the stride derived from the display mode, use the pitch
 associated with the currently active framebuffer. This fixes a bug where
 the LCD display content would be skewed when enabling HDMI with a video
 mode different from that of the LCD.

This patch certainly doesn't cause any additional issues for me, so:

Tested-by: Stephen Warren swar...@nvidia.com

However, it still doesn't allow both Cardhu's LCD panel and external
HDMI port (1080p) to be active at once. If I boot with both enabled, or
boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
are active, then some kind of display corruption starts; it looks like a
clocking issue or perhaps memory underflow.

Mark, can you please investigate this. I haven't tested to see if the
issue is Tegra30-specific, or also happens on Tegra20.


Re: [RFC v2 5/8] ARM: tegra: Add auxiliary data for nvhost

2012-11-26 Thread Stephen Warren
On 11/26/2012 06:19 AM, Terje Bergstrom wrote:
 Add SoC specific auxiliary data to host1x and gr2d. nvhost uses
 this data.
 
 Signed-off-by: Terje Bergstrom tbergst...@nvidia.com
 Signed-off-by: Arto Merilainen amerilai...@nvidia.com

Arto's S-o-b really should be first and yours last since you're (Terje)
the one who touched the patches last.

 diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
 b/arch/arm/mach-tegra/board-dt-tegra20.c

I think none of the changes the board-dt-tegra*.c should be made.

AUXDATA is a temporary measure to keep things working during the
transition to device tree. We want to remove entries from the AUXDATA
tables rather than add them. The only thing that's stopping us from
doing so right now is the lack of DT-based clock lookups, which hence
require devices to have a specific name.

 +static struct nvhost_device_data tegra_host1x_info = {
 + .clocks = { {"host1x", UINT_MAX} },

 +static struct nvhost_device_data tegra_gr2d_info = {
 + .clocks = { {"gr2d", UINT_MAX, true}, {"epp", UINT_MAX, true} },

Clock names shouldn't be passed in platform data; instead, clk_get()
should be passed the device object and device-relative (i.e. not global)
clock name. I expect if the driver is fixed to make this change, the
changes to tegra*_clocks_data.c won't be needed either.



Re: [RFC v2 5/8] ARM: tegra: Add auxiliary data for nvhost

2012-11-27 Thread Stephen Warren
On 11/26/2012 11:33 PM, Terje Bergström wrote:
 On 27.11.2012 01:39, Stephen Warren wrote:
 Clock names shouldn't be passed in platform data; instead, clk_get()
 should be passed the device object and device-relative (i.e. not global)
 clock name. I expect if the driver is fixed to make this change, the
 changes to tegra*_clocks_data.c won't be needed either.
 
 Isn't this code doing exactly that - getting a device relative clock,
 nvhost_module_init() in nvhost.acm.c:
 
 (...)
   /* initialize clocks to known state */
   while (pdata->clocks[i].name && i < NVHOST_MODULE_MAX_CLOCKS) {
   	long rate = pdata->clocks[i].default_rate;
   	struct clk *c;

   	c = devm_clk_get(&dev->dev, pdata->clocks[i].name);

The line above is getting the (device-relative) clock name from platform
data, rather than using some fixed name as it should be.
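
What I'd expect instead is something like this (sketch; "gr2d" and "epp"
are just the names mentioned in this thread, requested as fixed,
device-relative con_ids rather than coming from platform data):

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_gr2d_get_clocks(struct platform_device *pdev,
				   struct clk **gr2d, struct clk **epp)
{
	*gr2d = devm_clk_get(&pdev->dev, "gr2d");
	if (IS_ERR(*gr2d))
		return PTR_ERR(*gr2d);

	*epp = devm_clk_get(&pdev->dev, "epp");
	if (IS_ERR(*epp))
		return PTR_ERR(*epp);

	return 0;
}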


Re: [PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-27 Thread Stephen Warren
On 11/26/2012 08:16 PM, Mark Zhang wrote:
 On 11/27/2012 06:37 AM, Stephen Warren wrote:
 On 11/22/2012 12:37 PM, Thierry Reding wrote:
 Instead of using the stride derived from the display mode, use the pitch
 associated with the currently active framebuffer. This fixes a bug where
 the LCD display content would be skewed when enabling HDMI with a video
 mode different from that of the LCD.

 This patch certainly doesn't cause any additional issues for me, so:

 Tested-by: Stephen Warren swar...@nvidia.com

 Howwever, it still doesn't allow both Cardhu's LCD panel and external
 HDMI port (1080p) to be active at once. If I boot with both enabled, or
 boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
 are active, then some kind of display corruption starts; it looks like a
 clocking issue or perhaps memory underflow.
 
 I haven't observed this issue. What kind of display corruption you mean?
 Did it recover after some seconds or the display in LVDS panel was
 always corrupted?
 
 During my testing, I connected HDMI while booting cardhu and I can see
 the LVDS and HDMI working with no corruptions.

For your viewing pleasure (and playing with my new phone) :-)
http://www.youtube.com/watch?v=ZJxJnONz7DA

The external monitor is 1920x1200 I believe.


Re: [PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-11-27 Thread Stephen Warren
On 11/27/2012 11:17 AM, Stephen Warren wrote:
 On 11/26/2012 08:16 PM, Mark Zhang wrote:
 On 11/27/2012 06:37 AM, Stephen Warren wrote:
 On 11/22/2012 12:37 PM, Thierry Reding wrote:
 Instead of using the stride derived from the display mode, use the pitch
 associated with the currently active framebuffer. This fixes a bug where
 the LCD display content would be skewed when enabling HDMI with a video
 mode different from that of the LCD.

 This patch certainly doesn't cause any additional issues for me, so:

 Tested-by: Stephen Warren swar...@nvidia.com

 Howwever, it still doesn't allow both Cardhu's LCD panel and external
 HDMI port (1080p) to be active at once. If I boot with both enabled, or
 boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
 are active, then some kind of display corruption starts; it looks like a
 clocking issue or perhaps memory underflow.

 I haven't observed this issue. What kind of display corruption you mean?
 Did it recover after some seconds or the display in LVDS panel was
 always corrupted?

 During my testing, I connected HDMI while booting cardhu and I can see
 the LVDS and HDMI working with no corruptions.
 
 For your viewing pleasure (and playing with my new phone) :-)
 http://www.youtube.com/watch?v=ZJxJnONz7DA
 
 The external monitor is 1920x1200 I believe.

Jon Mayo says the corruption in the video is display (memory fetch)
underflow. Perhaps this is because (IIRC) the BCT I'm using on Cardhu
programs the memory controller at a slow rate, and the bootloader and/or
kernel is supposed to bump up the rate to the max, but that's not
implemented anywhere yet upstream. If you're testing with fastboot
instead of U-Boot, that might be re-programming the memory frequencies,
and hence avoiding this.

I guess we have a fun time ahead of us with mode validation and memory
controller programming.


Re: [RFC v2 8/8] drm: tegra: Add gr2d device

2012-11-28 Thread Stephen Warren
On 11/28/2012 07:45 AM, Terje Bergström wrote:
 On 28.11.2012 16:06, Lucas Stach wrote:
 Why do even need/use dma-buf for this use case? This is all one DRM
 device, even if we separate host1x and gr2d as implementation modules.
 
 I didn't want to implement dependency to drm gem objects in nvhost, but
 we have thought about doing that. dma-buf brings quite a lot of
 overhead, so implementing support for gem buffers would make the
 sequence a bit leaner.
 
 nvhost already has infra to support multiple memory managers.
 
 So standard way of doing this is:
 1. create gem object for pushbuffer
 2. create fake mmap offset for gem obj
 3. map pushbuf using the fake offset on the drm device
 4. at submit time zap the mapping

 You need this logic anyway, as normally we don't rely on userspace to
 sync gpu and cpu, but use the kernel to handle the concurrency issues.
 
 Taking a step back - 2D streams are actually very short, in the order of
 100 bytes. Just copying them to kernel space would actually be faster
 than doing MMU operations.

I'm not sure it's a good idea to have one buffer submission mechanism
for the 2D class and another for the 3D/... class, nor to bet that 2D
streams will always be short.


Re: [PATCH v2] drm: tegra: Add maintainers entry

2012-11-28 Thread Stephen Warren
On 11/28/2012 12:18 PM, Thierry Reding wrote:
 Add myself as the maintainer of the NVIDIA Tegra DRM driver.

Aside from one comment below,

Acked-by: Stephen Warren swar...@nvidia.com

 +L:   dri-devel@lists.freedesktop.org

Should linux-te...@vger.kernel.org also be CC'd so that everything
Tegra-related goes to one place?


Re: [RFC v2 1/8] video: tegra: Add nvhost driver

2012-11-29 Thread Stephen Warren
On 11/29/2012 03:21 AM, Terje Bergström wrote:
 On 28.11.2012 23:23, Thierry Reding wrote:
...
 + regs = platform_get_resource(dev, IORESOURCE_MEM, 0);
 + intr0 = platform_get_resource(dev, IORESOURCE_IRQ, 0);
 + intr1 = platform_get_resource(dev, IORESOURCE_IRQ, 1);
 +
 + if (!regs || !intr0 || !intr1) {

 I prefer to have these checked for explicitly, one by one for
 readability and potentially more useful diagnostics.
 
 Can do.
 
 Also you should be using platform_get_irq() for interrupts. Furthermore
 the host1x DT node (and the TRM) name the interrupts syncpt and
 general, so maybe those would be more useful variable names than
 intr0 and intr1.

 But since you don't use them anyway they shouldn't be part of this
 patch.
 
 True. I might also as well delete the general interrupt altogether, as
 we don't use it for any real purpose.

Do make sure the interrupts still are part of the DT binding though, so
that the binding fully describes the HW, and the interrupt is available
to retrieve if we ever do use it in the future.
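
(For reference, a rough sketch of how a named interrupt from such a binding
could be picked up later if it is ever needed; this assumes an
"interrupt-names" property listing "syncpt" and "general", which is an
assumption for the example, not a statement about the actual binding.)

    #include <linux/platform_device.h>

    static int example_get_syncpt_irq(struct platform_device *pdev)
    {
            int irq;

            /* "general" stays described in the DT even while it is unused. */
            irq = platform_get_irq_byname(pdev, "syncpt");
            if (irq < 0)
                    return irq;

            return irq;
    }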

 + for (i = 0; i < pdata->num_clks; i++)
 + clk_prepare_enable(pdata->clk[i]);
 + nvhost_syncpt_reset(&host->syncpt);
 + for (i = 0; i < pdata->num_clks; i++)
 + clk_disable_unprepare(pdata->clk[i]);

 Stephen already hinted at this when discussing the AUXDATA. You should
 explicitly request the clocks.
 
 I'm not too happy about that idea. The clock management code is generic
 for all modules, and that's why it's driven by a data structure. Now
 Stephen and you ask me to unroll the loops and make copies of the code
 to facilitate different modules and different SoCs.

You can still create tables of clocks inside the driver and loop over
them. So, loop unrolling isn't related to my comments at least. It's
just that clk_get() shouldn't take its parameters from platform data.

But if these are clocks for (arbitrary) child modules (that may or may
not exist dynamically), why aren't the drivers for the child modules
managing them?
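
As a rough sketch of the "tables of clocks inside the driver" idea, assuming
hypothetical clock names (this is not the actual nvhost table):

    #include <linux/clk.h>
    #include <linux/err.h>
    #include <linux/kernel.h>

    static const char * const example_clk_names[] = { "gr2d", "epp" };
    static struct clk *example_clks[ARRAY_SIZE(example_clk_names)];

    static int example_get_clocks(struct device *dev)
    {
            int i;

            /* Names are fixed in the driver; only the device is passed in. */
            for (i = 0; i < ARRAY_SIZE(example_clk_names); i++) {
                    example_clks[i] = devm_clk_get(dev, example_clk_names[i]);
                    if (IS_ERR(example_clks[i]))
                            return PTR_ERR(example_clks[i]);
            }

            return 0;
    }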


Re: [RFC v2 1/8] video: tegra: Add nvhost driver

2012-11-29 Thread Stephen Warren
On 11/29/2012 04:47 AM, Thierry Reding wrote:
 On Thu, Nov 29, 2012 at 12:21:04PM +0200, Terje Bergström wrote:
 On 28.11.2012 23:23, Thierry Reding wrote:
 This could be problematic. Since drivers/video and
 drivers/gpu/drm are separate trees, this would entail a
 continuous burden on keeping both trees synchronized. While I
 realize that eventually it might be better to put the host1x
 driver in a separate place to accomodate for its use by other
 subsystems, I'm not sure moving it here right away is the best 
 approach.
 
 I understand your point, but I hope also that we'd end up with
 something that can be used as basis for the downstream kernel to
 migrate to upstream stack.
 
 The key point here is to make the API between nvhost and tegradrm
 as small and robust to changes as possible.
 
 I agree. But I also fear that there will be changes eventually and 
 having both go in via different tree requires those trees to be
 merged in a specific order to avoid breakage should the API change.
 This will be particularly ugly in linux-next.
 
 That's why I explicitly proposed to take this into
 drivers/gpu/drm/tegra for the time being, until we can be
 reasonably sure that the API is fixed. Then I'm fine with moving it
 wherever seems the best fit. Even then there might be the
 occasional dependency, but they should get fewer and fewer as the
 code matures.

It is acceptable for one maintainer to ack patches, and another
maintainer to merge a series that touches both their own code and
code owned by another tree. This should of course only be needed when
inter-module APIs change; changes to code within a module shouldn't
require this.



Re: [RFC v2 2/8] video: tegra: Add syncpoint wait and interrupts

2012-11-29 Thread Stephen Warren
On 11/29/2012 01:44 AM, Thierry Reding wrote:
 On Mon, Nov 26, 2012 at 03:19:08PM +0200, Terje Bergstrom wrote:

 diff --git a/drivers/video/tegra/host/host1x/host1x_intr.c
 b/drivers/video/tegra/host/host1x/host1x_intr.c
 [...]
 +/* Spacing between sync registers */
 +#define REGISTER_STRIDE 4
 
 Erm... no. The usual way you should be doing this is either make
 the register definitions account for the stride or use accessors
 that apply the stride. You should be doing the latter anyway to
 make accesses. For example:
 
 static inline void host1x_syncpt_writel(struct host1x *host1x,
 unsigned long value, unsigned long offset) { writel(value,
 host1x->regs + SYNCPT_BASE + offset); }
 
 static inline unsigned long host1x_syncpt_readl(struct host1x
 *host1x, unsigned long offset) { return readl(host1x->regs +
 SYNCPT_BASE + offset); }
 
 Alternatively, if you want to pass the register index instead of
 the offset, you can just multiply the offset in that function:
 
 writel(value, host1x->regs + SYNCPT_BASE + (offset << 2));
 
 The same can also be done with the non-syncpt registers.

It seems like reasonable documentation to replace << 2 with * REGISTER_STRIDE here.
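
A small sketch of what such an accessor could look like with the stride
spelled out; the struct and function names are assumptions for illustration,
not the actual host1x code:

    #include <linux/io.h>
    #include <linux/types.h>

    /* Spacing between sync registers */
    #define REGISTER_STRIDE 4

    struct example_host1x {
            void __iomem *sync_regs;
    };

    /* Register-number based accessors that document the 4-byte spacing. */
    static inline void example_sync_writel(struct example_host1x *host,
                                           u32 value, unsigned int regnum)
    {
            writel(value, host->sync_regs + regnum * REGISTER_STRIDE);
    }

    static inline u32 example_sync_readl(struct example_host1x *host,
                                         unsigned int regnum)
    {
            return readl(host->sync_regs + regnum * REGISTER_STRIDE);
    }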


Re: [RFC v2 1/8] video: tegra: Add nvhost driver

2012-12-03 Thread Stephen Warren
On 11/30/2012 03:38 AM, Thierry Reding wrote:
 On Fri, Nov 30, 2012 at 10:56:39AM +0200, Terje Bergström wrote:
 On 29.11.2012 13:47, Thierry Reding wrote:
 On Thu, Nov 29, 2012 at 12:21:04PM +0200, Terje Bergström
 wrote:
 Tegra20 and Tegra30 are compatible, but future chips are not.
 I was hoping we would be ready in upstream kernel for future
 chips.
 
 I think we should ignore that problem for now. Generally
 planning for any possible combination of incompatibilities
 leads to overgeneralized designs that require precisely these
 kinds of indirections.
 
 Once some documentation for Tegra 40 materializes we can start
 thinking about how to encapsulate the incompatible code.
 
 I think here our perspectives differ a lot. That is natural
 considering the company I work for and company you work for, so
 let's try to sync the perspective.
 
 In my reality, whatever is in market is old news and I barely
 work on them anymore. Upstreaming activity is the exception. 90%
 of my time is spent dealing with future chips which I know cannot
 be handled without this split to logical and physical driver
 parts.
 
 For you, Tegra2 and Tegra3 are the reality.
 
 To be fair, Tegra2 and Tegra3 are the reality for *everyone*
 *outside* NVIDIA.
 
 It's great that you spend most of your time working with future
 chips, but unless you submit the code for inclusion or review
 nobody upstream needs to be concerned about the implications. Most
 people don't have time to waste so we naturally try to keep the
 maintenance burden to a minimum.
 
 The above implies that when you submit code it shouldn't contain
 pieces that prepare for possible future extensions which may or may
 not be submitted (the exception being if such changes are part of a
 series where subsequent patches actually use them). The outcome is
 that the amount of cruft in the mainline kernel is kept to a
 minimum. And that's a very good thing.

I think there's room for letting Terje's complete knowledge of future
chips guide the design of the current code that's sent upstream.
Certainly we shouldn't add a ton of unnecessary abstraction layers
right now that aren't needed for Tegra20/30, but if there's some
decision that doesn't affect the bloat, opaqueness, ... of the current
code but one choice is better for future development without serious
negatives for the current code, it's pretty reasonable to make that
decision rather than the other.

(That all said, I haven't really followed the details of this
particular point, so I can't say how my comment applies to any
decisions being made right now - just that we shouldn't blanket reject
future knowledge when making decisions)

After all, making the right decision now will reduce the number/size
of patches later, and hence reduce code churn and reviewer load.


Re: [RFC v2 1/8] video: tegra: Add nvhost driver

2012-12-03 Thread Stephen Warren
On 12/01/2012 07:58 AM, Thierry Reding wrote:
 On Sat, Dec 01, 2012 at 01:31:02PM +0200, Terje Bergström wrote:
...
 I was thinking of definitions like this:
 
 static inline u32 host1x_sync_cfpeek_ctrl_cfpeek_addr_f(u32 v) {
 return (v & 0x1ff) << 0; }
 
 versus
 
 #define host1x_sync_cfpeek_ctrl_cfpeek_addr_f(v) (((v) >> 16) & 0x3ff)
 
 Both of these produce the same machine code and have same usage,
 but the latter has type checking and code coverage analysis and
 the former is (in my eyes) clearer. In both of these cases the
 usage is like this:
 
 writel(host1x_sync_cfpeek_ctrl_cfpeek_ena_f(1) |
 host1x_sync_cfpeek_ctrl_cfpeek_channr_f(chid) |
 host1x_sync_cfpeek_ctrl_cfpeek_addr_f(rd_ptr), m->sync_aperture +
 host1x_sync_cfpeek_ctrl_r());
 
 Again there's no precedent for doing this with static inline
 functions. You can do the same with macros. Type checking isn't an
 issue in these cases since we're talking about bitfields for which
 no proper type exists.

I suspect the inline functions could encode signed-vs-unsigned fields,
perhaps catch u8 variables when they should have been u32, etc.?
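
A contrived sketch of the kind of difference being discussed; the field name
and layout are made up and do not correspond to a real host1x register:

    #include <linux/types.h>

    /* Macro version: any integer type is accepted without complaint. */
    #define EXAMPLE_FIELD_F(v)      (((v) & 0x3ff) << 16)

    /*
     * Inline version: the u32 parameter type gives the compiler (and tools
     * like sparse) a chance to warn about questionable conversions, e.g.
     * passing a signed value or an unexpectedly narrow type, and the
     * function body shows up in code coverage reports.
     */
    static inline u32 example_field_f(u32 v)
    {
            return (v & 0x3ff) << 16;
    }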


Re: [PATCH] drm: tegra: Use framebuffer pitch as line stride

2012-12-03 Thread Stephen Warren
On 12/03/2012 08:00 PM, Mark Zhang wrote:
 On 11/28/2012 02:37 PM, Mark Zhang wrote:
 On 11/28/2012 05:39 AM, Stephen Warren wrote:
 On 11/27/2012 11:17 AM, Stephen Warren wrote:
 On 11/26/2012 08:16 PM, Mark Zhang wrote:
 On 11/27/2012 06:37 AM, Stephen Warren wrote:
 On 11/22/2012 12:37 PM, Thierry Reding wrote:
 Instead of using the stride derived from the display mode, use the pitch
 associated with the currently active framebuffer. This fixes a bug where
 the LCD display content would be skewed when enabling HDMI with a video
 mode different from that of the LCD.

 This patch certainly doesn't cause any additional issues for me, so:

 Tested-by: Stephen Warren swar...@nvidia.com

 However, it still doesn't allow both Cardhu's LCD panel and external
 HDMI port (1080p) to be active at once. If I boot with both enabled, or
 boot with just the LCD enabled and hot-plug HDMI, as soon as both heads
 are active, then some kind of display corruption starts; it looks like a
 clocking issue or perhaps memory underflow.

 I haven't observed this issue. What kind of display corruption you mean?
 Did it recover after some seconds or the display in LVDS panel was
 always corrupted?

 During my testing, I connected HDMI while booting cardhu and I can see
 the LVDS and HDMI working with no corruptions.

 For your viewing pleasure (and playing with my new phone) :-)
 http://www.youtube.com/watch?v=ZJxJnONz7DA

 The external monitor is 1920x1200 I believe.

 Jon Mayo says the corruption in the video is display (memory fetch)
 underflow. Perhaps this is because (IIRC) the BCT I'm using on Cardhu
 programs the memory controller at a slow rate, and the bootloader and/or
 kernel is supposed to bump up the rate to the max, but that's not
 implemented anywhere yet upstream. If you're testing with fastboot
 instead of U-Boot, that might be re-programming the memory frequencies,
 and hence avoiding this.


 All right, I just test the framebuffer console and xinit, I didn't
 install the whole ubuntu.

 I'll install the ubuntu in my cardhu and see whether I have this kind of
 issues.
 
 Hi swarren, I installed ubuntu 12.04 in l4t and didn't observe the issue
 you described. The display worked with no corruptions. I can show you
 the video if you want.
 
 What I used for testing is a cardhu board with our downstream U-Boot.
 
 But the HDMI didn't work. The HDMI monitor showed this: CANNOT DISPLAY
 THIS VIDEO MODE, CHANGE COMPUTER DISPLAY INPUT TO 1920x1080@60HZ. So
 sounds like the clock setting has some problems... I'll have a look at this.

Oh, I thought I'd followed up on this - Jon Mayo said it was display
underflow due to lack of memory bandwidth. IIRC, this may be due to the
BCT programming the memory controller for conservative settings that
don't require non-default voltages from the PMIC, with the expectation
that the bootloader or kernel will reprogram everything for correct
performance.


Re: First version of host1x intro

2012-12-06 Thread Stephen Warren
On 12/06/2012 01:13 AM, Mark Zhang wrote:
 On 12/06/2012 04:00 PM, Lucas Stach wrote:
 Am Donnerstag, den 06.12.2012, 15:49 +0800 schrieb Mark Zhang:
 [...]

 OK. So these relocation addresses are used to let userspace tells kernel
 which buffers mentioned in the command should be relocated to addresses
 which host1x clients able to reach.

 Yep, preferably all buffers referenced by a command stream should
 already be set up in such a position (CMA with Tegra2) or the relocation
 should be nothing more than setting up IOMMU page tables (Tegra3).

 I'm also wondering that, if our driver understands the stuffs in the
 commands, maybe we can find out all addresses in the command, in that
 way, we will not need userspace tells us which are the addresses need to
 be relocated, right?

 No. How will the kernel ever know which buffer gets referenced in a
 command stream? All the kernel sees is is a command stream with
 something like blit data to address 0xADDR in it. The only info that
 you can gather from that is that there must be some buffer to blit into.
 Neither do you know which buffer the stuff should be going to, nor can
 you know if you blit to offset zero in this buffer. It's perfectly valid
 to only use a subregion of a buffer.

 Or maybe I'm misunderstanding you and you mean it the other way around.
 You don't let userspace dictate the addresses, the relocation
 information just tells the kernel to find the addresses of the
 referenced buffers for you and insert them, instead of the dummy
 information, into the command stream.
 
 Yes, I think this is what I mean. No dummy information in the command
 stream, userspace just fills the address which it uses(actually this is
 cpu address of the buffer) in the command stream, and our driver must
 have a HashTable or something which contains the buffer address pair --
 (cpu address, dma address), so our driver can find the dma addresses for
 every buffer then modify the addresses in the command stream.

Typically there would be no CPU address; there's no need in most cases
to ever map most buffers to the CPU.

Automatically parsing the buffer sounds like an interesting idea, but
then the kernel relocation code would have to know the meaning of every
single register or command-stream method in order to know which of
them take a buffer address as an argument. I am not familiar with this
HW specifically, so perhaps it's much more regular than I think and it's
actually easy to do that, but I imagine it'd be a PITA to implement that
(although perhaps we have to for the command-stream validation stuff
anyway?). Also, it'd be a lot quicker at least for larger command
buffers to go straight to the few locations in the command stream where
a buffer is referenced (i.e. use the side-band metadata for relocation)
rather than having the CPU re-read the entire command stream in the
kernel to parse it.
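
For illustration, a sketch of what such side-band relocation metadata might
look like from userspace's point of view; this structure is invented for the
example and is not the actual tegra-drm/nvhost ABI:

    #include <stdint.h>

    /*
     * Hypothetical relocation entry: instead of the kernel parsing the whole
     * command stream, userspace says "the word at cmdbuf_offset in buffer
     * cmdbuf_handle must be patched with the device address of buffer
     * target_handle plus target_offset".
     */
    struct example_reloc {
            uint32_t cmdbuf_handle;
            uint32_t cmdbuf_offset;
            uint32_t target_handle;
            uint32_t target_offset;
    };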


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-13 Thread Stephen Warren
On 12/13/2012 01:57 AM, Thierry Reding wrote:
 On Thu, Dec 13, 2012 at 10:48:55AM +0200, Terje Bergström wrote:
 On 12.12.2012 18:08, Thierry Reding wrote:
 I've briefly discussed this with Stephen on IRC because I
 thought I had remembered him objecting to the idea of adding a
 dummy device just for this purpose. It turns out, however, that
 what he didn't like was to add a dummy node to the DT just to
 make this happen, but he has no (strong) objections to a dummy
 platform device.
 
 While I'm not very happy about that solution, I've been going
 over it for a week now and haven't come up with any better
 alternative that doesn't have its own disadvantages. So perhaps
 we should go ahead and implement that. For the host1x driver
 this really just means creating a platform device and adding it
 to the system, with some of the fields tweaked to make things
 work.
 
 Even the virtual device is not too beautiful. The problem is that
 the virtual device is not physical parent for DC, HDMI, etc, so 
 dev_get_drvdata(pdev->dev.parent) returns the data from host1x
 device, not the virtual device.
 
 We'll post with something that goes around this, but it's not
 going to be too pretty. Let's try to find the solution once we
 get the code out.
 
 After some more discussion with Stephen on IRC we came to the
 conclusion that the easiest might be to have tegra-drm call into
 host1x with something like:
 
 void host1x_set_drm_device(struct host1x *host1x, struct device
 *dev);

If host1x is registering the dummy device that causes tegradrm to be
instantiated, then presumably there's no need for the API above, since
host1x will already have the struct device * for tegradrm, since it
created it?

 Once the dummy device has been properly set up and have each client
 use
 
 struct device *host1x_get_drm_device(struct host1x *host1x);
 
 to obtain a pointer to the dummy device, such that it can receive
 the driver-private data using dev_get_drvdata() on the returned
 device. As long as tegra-drm hasn't registered with host1x yet, the
 above function could return ERR_PTR(-EPROBE_DEFER), so that
 dependencies are automatically handled. This is required because,
 tegra-drm not being the parent of the child devices, it can be
 guaranteed that it is probed before any of the children.
 
 That way we should be able to get around any circular
 dependencies, since we only call into host1x from tegra-drm but not
 the other way around.




Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-14 Thread Stephen Warren
On 12/13/2012 11:09 PM, Terje Bergström wrote:
 On 13.12.2012 19:58, Stephen Warren wrote:
 On 12/13/2012 01:57 AM, Thierry Reding wrote:
 After some more discussion with Stephen on IRC we came to the
 conclusion that the easiest might be to have tegra-drm call into
 host1x with something like:

 void host1x_set_drm_device(struct host1x *host1x, struct device
 *dev);

 If host1x is registering the dummy device that causes tegradrm to be
 instantiated, then presumably there's no need for the API above, since
 host1x will already have the struct device * for tegradrm, since it
 created it?
 
 I didn't add the dummy device in my latest patch set. I first set out to
 add it, and moved the drm global data to be drvdata of that device. Then
 I noticed that it doesn't actually help at all.
 
 The critical accesses to the global data are from probes of DC, HDMI,
 etc.

OK

 They want to get the global data by getting drvdata of their parent.

There's no reason that /has/ to be so; they can get any information from
wherever it is, rather than trying to require it to be somewhere it isn't.

 The dummy device is not their parent - host1x is. There's no
 connection between the dummy and the real client devices.

It's quite possible for the client devices to ask their actual parent
(host1x) for the tegradrm struct device *, by calling a host1x API, and
have host1x implement that API by returning the dummy/virtual device
that it created. That should be just 1-2 lines of code to implement in
host1x, plus maybe a couple more to correctly return -EPROBE_DEFER
when appropriate.
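
A minimal sketch of such an accessor, assuming host1x keeps a pointer to the
dummy device once it has been created; the names are illustrative only:

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/errno.h>

    struct example_host1x {
            struct device *drm_dev; /* dummy tegradrm device, NULL until created */
    };

    /* Called by the DC/HDMI/... drivers through their parent (host1x). */
    static struct device *example_host1x_get_drm_device(struct example_host1x *host1x)
    {
            if (!host1x->drm_dev)
                    return ERR_PTR(-EPROBE_DEFER);

            return host1x->drm_dev;
    }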


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-20 Thread Stephen Warren
On 12/20/2012 02:17 AM, Terje Bergström wrote:
 On 16.12.2012 14:16, Thierry Reding wrote:
 Okay, so we're back on the topic of using globals. I need to assert
 again that this is not an option. If we were to use globals, then we
 could just as well leave out the dummy device and just do all of that in
 the tegra-drm driver's initialization function.
 
 I found a way of dropping the global in a straightforward way, and
 introduce dummy device for drm_platform_init().
 
 I added dummy device and driver, and moved the tegradrm global
 (previously called struct host1x *host1x) allocation to happen in the
 probe. In addition, probe calls device tree node traversal to do the
 tegra_drm_add_client() calls. The dummy device is owner for this global.
 
 I changed the device tree node traversal so that it goes actually
 through each host1x child, checks if it's supported by tegradrm, and if
 so, sets its drvdata to point to the tegradrm data.

I'm not sure that sounds right. drvdata is something that a driver
should manage itself.

What's wrong with just having each device ask the host1x (its parent)
for a pointer to the (dummy) tegradrm device. That seems extremely
simple, and doesn't require abusing existing stuff like drvdata for
non-standard purposes.


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-20 Thread Stephen Warren
On 12/20/2012 02:34 PM, Terje Bergström wrote:
 On 20.12.2012 22:30, Thierry Reding wrote:
 The problem with your proposed solution is that, even any of Stephen's
 valid objections aside, it won't work. Once the tegra-drm module is
 unloaded, the driver's data will be left in the current state and the
 link to the dummy device will be lost.
 
 The dummy device is created by tegradrm's module init, because it's used

No, the tegradrm driver object might be registered by tegradrm's module
init, but the dummy tegradrm platform device would need to be
created/registered by host1x's probe. Otherwise, the device would get
created even if there was no host1x/... in the system (disabled for some
reason, multi-SoC kernel, ...)


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-20 Thread Stephen Warren
On 12/20/2012 02:50 PM, Thierry Reding wrote:
 On Thu, Dec 20, 2012 at 11:34:26PM +0200, Terje Bergström wrote:
 On 20.12.2012 22:30, Thierry Reding wrote:
 The problem with your proposed solution is that, even any of
 Stephen's valid objections aside, it won't work. Once the
 tegra-drm module is unloaded, the driver's data will be left in
 the current state and the link to the dummy device will be
 lost.
 
 The dummy device is created by tegradrm's module init, because
 it's used only by tegradrm. When tegradrm is unloaded, all the
 tegradrm devices and drivers are unloaded and freed, including
 the dummy one.
 
 No, the children platform devices of host1x are not freed, so
 their driver data will still contain the data set by the previous
 call to their driver's .probe() function.
 
 But reading what you said again, you were proposing to set the
 children driver data from the tegra-drm dummy device's .probe(). In
 that case it would probably work.

Many things would work, but since tegradrm isn't (or at least should
not be) the driver for the platform devices which are host1x's
children, it shouldn't be randomly screwing around with those devices'
drvdata fields.



Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2012-12-21 Thread Stephen Warren
On 12/21/2012 01:57 AM, Arto Merilainen wrote:
 On 12/20/2012 07:14 PM, Stephen Warren wrote:

 What's wrong with just having each device ask the host1x (its parent)
 for a pointer to the (dummy) tegradrm device. That seems extremely

 
 We are talking about creating a dummy device because:
 - we need to give something for drm_platform_init(),
 - drm related data should be stored somewhere,

Yes to those too, I believe.

 - some data must be passed to all driver probe() functions. This is not
 hw-related data, but just some lists that are used to ensure that all
 drivers have been initialised before calling drm_platform_init().

I haven't really thought through /when/ host1x would create the dummy
device. I guess if tegradrm really must initialize after all the devices
that it uses (rather than using something like deferred probe) then the
above may be true.

 All these issues are purely tegra-drm related and solving them elsewhere
 (in host1x) would be just wrong! host1x would not even use the dummy
 device for anything so creating it there seems bizarre.

I see the situation more like:

* There's host1x hardware.

* There's a low-level driver just for host1x itself; the host1x driver.

* There's a high-level driver for the entire host1x complex of devices.
That is tegradrm. There may be more high-level drivers in the future
(e.g. v4l2 camera driver if it needs to aggregate a bunch of host1x
sub-devices liek tegradrm does).

* The presence of the host1x DT node logically implies that both the two
drivers in the previous two points should be instantiated.

* Linux instantiates a single device per DT node.

* So, it's host1x's responsibility to instantiate the other device(s). I
think it's reasonable for the host1x driver to know exactly what
features the host1x HW complex supports; raw host1x access being one,
and the higher level concept of a tegradrm being another. So, it's
reasonable for host1x to trigger the instantiation of tegradrm.

* If DRM core didn't stomp on the device object's drvdata (or whichever
field it is), I would recommend not creating a dummy device at all, but
rather having the host1x driver directly implement multiple driver
interfaces. There are many examples like this already in the kernel,
e.g. combined GPIO+IRQ driver, combined GPIO+IRQ+pinctrl driver, etc.
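
As a rough sketch, and only under the assumption that the dummy-device route
is taken, host1x's probe could register the tegradrm device along these
lines; names and error handling are simplified for illustration:

    #include <linux/err.h>
    #include <linux/platform_device.h>

    static int example_host1x_probe(struct platform_device *pdev)
    {
            struct platform_device *drm;

            /* ... set up host1x itself first ... */

            /*
             * Instantiate the high-level "tegradrm" device as a child of
             * host1x, so it only exists when a host1x is actually present.
             */
            drm = platform_device_register_data(&pdev->dev, "tegradrm", -1,
                                                NULL, 0);
            if (IS_ERR(drm))
                    return PTR_ERR(drm);

            return 0;
    }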


Re: [PATCHv4 5/8] drm: tegra: Remove redundant host1x

2012-12-24 Thread Stephen Warren
On 12/21/2012 11:50 PM, Terje Bergström wrote:
 On 21.12.2012 16:36, Thierry Reding wrote:
 On Fri, Dec 21, 2012 at 01:39:21PM +0200, Terje Bergstrom wrote:
 +static struct platform_driver tegra_drm_platform_driver = {
 +   .driver = {
 +   .name = "tegradrm",

 This should be tegra-drm to match the module name.
 
 We've actually created two problems.
 
 First is that the device name should match driver name which should
 match module name. But host1x doesn't know the module name of tegradrm.

There's no hard requirement for the device/driver name to match the
module name. It's a good thing to do, but nothing will blow up if it doesn't
(modules can use MODULE_ALIAS() to declare which drivers they expose).
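
(For example, and purely as an illustration, a module named differently from
its driver can still be auto-loaded for a "tegradrm" platform device via an
alias:)

    #include <linux/module.h>

    /* Lets modprobe find this module when a "tegradrm" platform device
     * appears, regardless of the module's file name. */
    MODULE_ALIAS("platform:tegradrm");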

But, what's the problem with host1x knowing the driver name; the host1x
driver and tegradrm driver are both part of the same code-base, so this
seems trivial to achieve.

 Second problem is that host1x driver creates tegradrm device even if
 tegradrm isn't loaded to system.

That's fine. If there's no driver, the device simply won't be probe()d.
That's just like a device node existing in device tree, but the driver
for it not being enabled in the kernel, or the relevant module not being
inserted.

 These mean that the device has to be created in tegra-drm module to have

I definitely disagree here.


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2013-01-04 Thread Stephen Warren
On 01/04/2013 03:09 AM, Terje Bergström wrote:
...
 I think we have now two ways to go forward with cons and pros:
  1) Keep host1x and tegra-drm as separate driver
+ Code almost done
- we need dummy device and dummy driver
- extra code and API when host1x creates dummy device and its passed
 to tegra-drm

Just to play devil's advocate:

I suspect that's only a few lines of code.

- tegra-drm device would need to be a child of host1x device. Having
 virtual and real devices as host1x children sounds weird.

And I doubt that would cause problems.

  2) Merge host1x and tegra-drm into one module. drm is a subcomponent,
 and whatever other personalities we wish would also be subcomponents of
 host1x. host1x calls tegra-drm directly to handle preparation for drm
 initialization. As they're in the same module, circular dependency is ok.
+ Simpler conceptually (no dummy device/driver)
+ Less code
- Proposal doesn't yet exist

But that said, I agree this approach would be very reasonable; it seems
to me that host1x really is the main HW behind a DRM driver or a V4L2
driver or ... As such, it seems quite reasonable for a single struct
device to exist that represents host1x, and for the driver for that
device to register both a DRM and a V4L2 driver etc. The code could
physically be organized into separate modules, and under different
Kconfig options for configurability etc.

But either way, I'll let you (Thierry and Terje) work out which way to go.


Re: [RFC v2 6/8] gpu: drm: tegra: Remove redundant host1x

2013-01-07 Thread Stephen Warren
On 01/07/2013 01:20 AM, Terje Bergström wrote:
 On 04.01.2013 22:25, Stephen Warren wrote:
 On 01/04/2013 03:09 AM, Terje Bergström wrote:
 ...
 I think we have now two ways to go forward with cons and pros:
  1) Keep host1x and tegra-drm as separate driver
+ Code almost done
- we need dummy device and dummy driver
- extra code and API when host1x creates dummy device and its passed
 to tegra-drm
...
  2) Merge host1x and tegra-drm into one module. drm is a subcomponent,
 and whatever other personalities we wish would also be subcomponents of
 host1x. host1x calls tegra-drm directly to handle preparation for drm
 initialization. As they're in the same module, circular dependency is ok.
+ Simpler conceptually (no dummy device/driver)
+ Less code
- Proposal doesn't yet exist

 But that said, I agree this approach would be very reasonable; it seems
 to me that host1x really is the main HW behind a DRM driver or a V4L2
 driver or ... As such, it seems quite reasonable for a single struct
 device to exist that represents host1x, and for the driver for that
 device to register both a DRM and a V4L2 driver etc. The code could
 physically be organized into separate modules, and under different
 Kconfig options for configurability etc.

 But either way, I'll let you (Thierry and Terje) work out which way to go.
 
 If we want separate modules, we'd need the dummy device  dummy driver
 binding between them.

I haven't really thought it through, but I don't think so; I was
thinking separate modules more just to allow linking smaller chunks of
code at once rather than allowing optional functionality via loading (or
not) various modules. Hence, simple function calls between the
files/modules. Still, there may well be no need at all to split it into
separate modules.


Re: [PATCH] drm/exynos: Get HDMI version from device tree

2013-01-07 Thread Stephen Warren
On 01/07/2013 01:43 PM, Sean Paul wrote:
 Add a property to the hdmi node so we can specify the HDMI version in
 the device tree instead of just defaulting to v1.4 with the existence of
 the dt node.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/hdmi.txt 
 b/Documentation/devicetree/bindings/drm/exynos/hdmi.txt

 @@ -11,6 +11,8 @@ Required properties:
   c) pin function mode.
   d) optional flags and pull up/down.
   e) drive strength.
 +- samsung,supports-hdmi-1.4: Define if device supports HDMI v1.4
 +- samsung,supports-hdmi-1.3: Define if device supports HDMI v1.3

Which device; the HDMI controller in the SoC, or the HDMI sink?

The HDMI sync reports what it supports in the EDID, so there shouldn't
be a need to duplicate this in the device tree (besides, how can you
know what the user plugged in without parsing the EDID?)


Re: [PATCH] drm/exynos: Get HDMI version from device tree

2013-01-08 Thread Stephen Warren
On 01/07/2013 04:12 PM, Sean Paul wrote:
 
 On Jan 7, 2013 5:32 PM, Stephen Warren swar...@wwwdotorg.org wrote:

 On 01/07/2013 01:43 PM, Sean Paul wrote:
  Add a property to the hdmi node so we can specify the HDMI version in
  the device tree instead of just defaulting to v1.4 with the existence of
  the dt node.

  diff --git a/Documentation/devicetree/bindings/drm/exynos/hdmi.txt
 b/Documentation/devicetree/bindings/drm/exynos/hdmi.txt

  @@ -11,6 +11,8 @@ Required properties:
c) pin function mode.
d) optional flags and pull up/down.
e) drive strength.
  +- samsung,supports-hdmi-1.4: Define if device supports HDMI v1.4
  +- samsung,supports-hdmi-1.3: Define if device supports HDMI v1.3

 Which device; the HDMI controller in the SoC, or the HDMI sink?

 
 It's the controller.

Ah OK. Is it different versions of the controller HW IP block that
support the different HDMI versions, or is it some kind of
fusing/configuration with the same HW block? If different versions of
HW, wouldn't this difference usually be represented by different
compatible values, that indicate the exact HW version? I guess if the
only difference really is just the HDMI support and there are no other
bugs/quirks/enhancements, a separate property might make sense.


Re: [PATCH v2] drm/exynos: Get HDMI version from device tree

2013-01-08 Thread Stephen Warren
On 01/08/2013 01:16 PM, Sean Paul wrote:
 Add a property to the hdmi node so we can specify the HDMI version in
 the device tree instead of just defaulting to v1.4 with the existence of
 the dt node.

I guess this seems OK to me if required, although I'd certainly like to
see someone familiar with the Exynos HW confirm whether this should be
driven purely by DT compatible value for the HDMI IP block instead though.


Re: [PATCHv5 7/8] ARM: tegra: Add board data and 2D clocks

2013-01-15 Thread Stephen Warren
On 01/15/2013 04:26 AM, Terje Bergstrom wrote:
 Add a driver alias gr2d for Tegra 2D device, and assign a duplicate
 of 2D clock to that driver alias.

FYI on this one patch - it won't be applied to the Tegra tree until
after Prashant's common clock framework changes are applied. As such, it
will need some rework once those patches are applied, or perhaps won't
even be relevant any more; see below.

 diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
 b/arch/arm/mach-tegra/board-dt-tegra20.c

 + OF_DEV_AUXDATA(nvidia,tegra20-gr2d, 0x5414, gr2d, NULL),

I assume the only reason to add AUXDATA is to give the device a specific
name, which will then match the driver name in the clock driver:

 - CLK_DUPLICATE("2d", "tegra_grhost", "gr2d"),
 + CLK_DUPLICATE("2d", "gr2d", "gr2d"),

If so, this shouldn't be needed once the common clock framework patches
are applied, since all device clocks will be retrieved from device tree,
and hence the device name will be irrelevant; the phandle in device tree
is all that will matter.


[PATCH] drm: avoid mono_time_offset may be used uninitialized

2013-01-29 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

Silence the following:
drivers/gpu/drm/drm_irq.c: In function 'drm_calc_vbltimestamp_from_scanoutpos':
drivers/gpu/drm/drm_irq.c:583:24: warning: 'mono_time_offset.tv64' may be used 
uninitialized in this function

... by always initializing mono_time_offset to zero. In practice, this
warning is false, since mono_time_offset is both set and used under the
condition if (!drm_timestamp_monotonic). However, at least my compiler
can't be coerced into realizing this; almost any code between the if
blocks that set and use the variable causes this warning.

Signed-off-by: Stephen Warren swar...@nvidia.com
---
I'm not sure what the current thinking is re: silencing warnings like this;
IIRC some people may dislike this kind of change. Perhaps if this change
alone is problematic, one could additionally remove the if condition on the
use of mono_time_offset, thus always using the value?
---
 drivers/gpu/drm/drm_irq.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index 19c01ca..b199818 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -580,7 +580,7 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device 
*dev, int crtc,
  unsigned flags,
  struct drm_crtc *refcrtc)
 {
-   ktime_t stime, etime, mono_time_offset;
+   ktime_t stime, etime, mono_time_offset = {0};
struct timeval tv_etime;
struct drm_display_mode *mode;
int vbl_status, vtotal, vdisplay;
-- 
1.7.10.4



Re: [PATCH v2] drm/exynos: Get HDMI version from device tree

2013-01-30 Thread Stephen Warren
On 01/30/2013 06:16 PM, Inki Dae wrote:
 2013/1/30 Sylwester Nawrocki sylvester.nawro...@gmail.com:
 Hi,


 On 01/08/2013 11:56 PM, Stephen Warren wrote:

 On 01/08/2013 01:16 PM, Sean Paul wrote:

 Add a property to the hdmi node so we can specify the HDMI version in
 the device tree instead of just defaulting to v1.4 with the existence of
 the dt node.


 I guess this seems OK to me if required, although I'd certainly like to
 see someone familiar with the Exynos HW confirm whether this should be
 driven purely by DT compatible value for the HDMI IP block instead though.


 I think the supported HDMI standard is something that could well be derived
 from the compatible property. The IP supporting v1.3 and v1.4 will be
 significantly different, so this would anyway already need to be reflected
 in the compatible property. The only issue I see here is that people tend
 to make the compatible string overly generic, so it is hardly usable for
 anything but matching an IP with its driver. For instance for exynos5 we
 have now (Documentation/devicetree/bindings/drm/exynos/hdmi.txt):

 compatible = samsung,exynos5-hdmi;

 For Exynos4 series there were already some patches proposed [1], but I
 believe
 this isn't a clean solution. Instead of things like:

 compatible = samsung,exynos4-hdmi13;
 compatible = samsung,exynos4-hdmi14;

 I would much more like to see the SoC version embedded in the compatible
 string, e.g.

 
 Hi Sylwester. long time no see.
 
 I think that if we use the SoC version embedded in the compatible
 string then each driver shoud aware of the ip version to the SoC to

The driver only needs to be aware of one SoC version for each IP version.

So with Sylwester's proposal:

 compatible = samsung,exynos4210-hdmi; /* among others it carries an
   information this IP supports
   HDMI v1.3 */

 compatible = samsung,exynos4212-hdmi; /* HDMI v1.4, IIRC */

The driver woulud only ever have to know about those two compatible
values (unless further incompatible HW revisions exist); any other SoC
would be listed as being compatible with one of those two strings (but
in addition to the specific value for the specific SoC, e.g. compatible
= samsung,exynos5xxx-hdmi, samsung,exynos4212-hdmi).

 use version specific feature so I think it's better to use it without
 the SoC version embedded in the compatible string like this,
 compatible = samsung,exynos4-hdmi
 version = 0x104 or 0x103

That would be quite non-typical.

 With this, all each driver has to do is check the version property and set
 the version specific feature properly. And we have some dtsi files that can
 be used commonly.
 
 For example,
 exynos4.dtsi : have all Exynos4 series SoCs common properties and also
 use common compatible string.
 exynos4412.dtsi, exynos4212.dtsi and so on: have Exynos42xx specific
 properties. So the hdmi version string could be used here as version
 = 0x104 or 0x103
 exynos4412-board.dts: have board specific properties.
 
 compatible = samsung,exynos5-hdmi is reasonable to me.
 any opinions?



Re: [PATCH 2/2] drm/exynos: Add device tree based discovery support for G2D

2013-01-31 Thread Stephen Warren
On 01/31/2013 06:27 PM, Inki Dae wrote:
 Hi Kukjin,
 
 -Original Message-
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Kukjin Kim
 Sent: Friday, February 01, 2013 9:15 AM
 To: 'Sylwester Nawrocki'; 'Inki Dae'
 Cc: 'Sachin Kamat'; linux-me...@vger.kernel.org; dri-
 de...@lists.freedesktop.org; devicetree-disc...@lists.ozlabs.org;
 patc...@linaro.org; s.nawro...@samsung.com
 Subject: RE: [PATCH 2/2] drm/exynos: Add device tree based discovery
 support for G2D

 Sylwester Nawrocki wrote:

 Hi Inki,

 Hi Sylwester and Inki,

 On 01/31/2013 02:30 AM, Inki Dae wrote:
 -Original Message-
 From: Sylwester Nawrocki [mailto:sylvester.nawro...@gmail.com]
 Sent: Thursday, January 31, 2013 5:51 AM
 To: Inki Dae
 Cc: Sachin Kamat; linux-me...@vger.kernel.org; dri-
 de...@lists.freedesktop.org; devicetree-disc...@lists.ozlabs.org;
 patc...@linaro.org; s.nawro...@samsung.com
 Subject: Re: [PATCH 2/2] drm/exynos: Add device tree based discovery
 support for G2D

 On 01/30/2013 09:50 AM, Inki Dae wrote:
 +static const struct of_device_id exynos_g2d_match[] = {
 +   { .compatible = "samsung,g2d-v41" },

 not only Exynos5 but also Exynos4 has the g2d gpu, and the drm-based g2d
 driver should support all Exynos SoCs. How about using
 samsung,exynos5-g2d instead and adding a new property 'version' to
 identify ip version more surely? With this, we could know which SoC
 and its g2d ip version. The version property could have '0x14' or
 others. And please add descriptions to dt document.

 Err no. Are you suggesting using samsung,exynos5-g2d compatible
 string
 for Exynos4 specific IPs ? This would not be correct, and you still
 can

 I assumed the version 'v41' is the ip for Exynos5 SoC. So if this
 version
 means Exynos4 SoC then it should be samsung,exynos4-g2d.

 Yes, v3.0 is implemented in the S5PC110 (Exynos3110) SoCs and
 Exynos4210,
 V4.1 can be found in Exynos4212 and Exynos4412, if I'm not mistaken.

 So we could have:

 compatible = samsung,exynos-g2d-3.0 /* for Exynos3110, Exynos4210 */
 compatible = samsung,exynos-g2d-4.1 /* for Exynos4212, Exynos4412 */

 In my opinion, this is better than later. Because as I said, when we can
 use
 IP version to identify, it is more clear and can be used

 One more, how about following?

 compatible = samsung,g2d-3.0
 compatible = samsung,g2d-4.1

 
 I think compatible string should be considered case by case.
 
 For example,
 If compatible = samsung,g2d-3.0 is added to exynos4210.dtsi, it'd be
 reasonable. But what if that compatible string is added to exynos4.dtsi?.
 This case isn't considered for exynos4412 SoC with v4.1. 

You can always add the most common value for the compatible property
into exynos4.dtsi, and then override it in exynos4210.dtsi, or other files.

Still, the idea of including the SoC version in the compatible value is
a good idea.


Re: [PATCH v2] drm/exynos: Get HDMI version from device tree

2013-01-31 Thread Stephen Warren
On 01/31/2013 10:36 AM, Stephen Warren wrote:
 On 01/31/2013 08:04 AM, Sean Paul wrote:
 ...
 I think if we take a step back, we're really not discussing HDMI
 version 1.3 vs. 1.4, we're really talking about the HDMI IP block
 version.
 
 Absolutely.
 
 The blocks just happen to implement different versions of the
 HDMI spec.
 
 Yes.
 
 The initial naming in the driver is unfortunate.

 That said, I think the above solution is fine, but it's a little
 misleading.  I'd much rather encode the version of the IP block
 instead of the SoC that contains it. Something like:

 compatible = samsung,exynos-hdmiXXX

 In this case, XXX is just some integer in the bindings that maps to an
 SoC. For example,

 +----------------------+-------------+
 | HDMI IP version      | Exynos SoC  |
 +----------------------+-------------+
 | samsung,exynos-hdmi1 | 4210        |
 | samsung,exynos-hdmi2 | 4212, 5250  |
 +----------------------+-------------+
 
 That seems reasonable to me. (But, does the documentation for these IP
 blocks specify the version in the format 1 and 2? It'd be best if
 the compatible value encoded the same version scheme as the IP block
 documentation).

The thing I forgot here is:

Even if the IP block is identical between two different SoCs, it's quite
possible it was tweaked just a tiny bit between the two SoCs, or
something about the environment into which the IP block was placed
differs. Either of those conditions could mean that the same IP block
instantiated into two different SoCs could end up requiring different
workarounds/bugfixes/... that the driver needs to know about. As such,
encoding the exact SoC into the compatible value, and then deriving the
IP block version from the SoC-specific compatible value, makes the most
sense.

This is really just what Olof was saying rephrased. I'm just following
up to change my mind on my assertion that the table above appears
reasonable.

And of course as Olof cares, the compatible value can contain multiple
values; the most specific first, and then progressively more generic
values, and if the driver doesn't care (yet?) about the differences
between some specific values, it can always /just/ match on the more
generic values until some specific WAR/bugfix is needed.


Re: [PATCHv5,RESEND 7/8] ARM: tegra: Add board data and 2D clocks

2013-02-04 Thread Stephen Warren
On 02/04/2013 04:26 AM, Thierry Reding wrote:
 On Tue, Jan 15, 2013 at 01:44:03PM +0200, Terje Bergstrom wrote:
 Add a driver alias gr2d for Tegra 2D device, and assign a
 duplicate of 2D clock to that driver alias.
 
 Signed-off-by: Terje Bergstrom tbergst...@nvidia.com --- 
 arch/arm/mach-tegra/board-dt-tegra20.c|1 + 
 arch/arm/mach-tegra/board-dt-tegra30.c|1 + 
 arch/arm/mach-tegra/tegra20_clocks_data.c |2 +- 
 arch/arm/mach-tegra/tegra30_clocks_data.c |1 + 4 files
 changed, 4 insertions(+), 1 deletion(-)
 
 With Prashant's clock rework patches now merged this patch can be 
 dropped.

Assuming this series is applied for 3.10 and not earlier, yes. I'd
certainly recommend applying for 3.10 not 3.9; the dependencies to
apply this for 3.9 given the AUXDATA/... requirements would be too
painful.


Re: [PATCH v3 1/3] drm/exynos: Get HDMI version from device tree

2013-02-05 Thread Stephen Warren
On 02/05/2013 04:42 PM, Sean Paul wrote:
 Use the compatible string in the device tree to determine which
 registers/functions to use in the HDMI driver. Also changes the
 references from v13 to 4210 and v14 to 4212 to reflect the IP
 block version instead of the HDMI version.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/hdmi.txt 
 b/Documentation/devicetree/bindings/drm/exynos/hdmi.txt

Binding looks sane to me.

 diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c 
 b/drivers/gpu/drm/exynos/exynos_hdmi.c

  #ifdef CONFIG_OF
  static struct of_device_id hdmi_match_types[] = {
   {
 - .compatible = "samsung,exynos5-hdmi",
 - .data   = (void *)HDMI_TYPE14,
 + .compatible = "samsung,exynos4-hdmi",
   }, {
   /* end node */
   }

Why not fill in all the base compatible values there (I think you need
this anyway so that DTs don't all have to be compatible with
samsung,exynos4-hdmi), with .data containing the HDMI_VER_EXYNOS*
values, then ...

 @@ -2218,17 +2217,18 @@ static int hdmi_probe(struct platform_device *pdev)

 +
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4210-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4210;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4212-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4212;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos5250-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS5250;

Instead of that, do roughly:

match = of_match_device(hdmi_match_types, &pdev->dev);
if (match)
hdata->version |= (int)match->data;

That way, it's all table-based. Any future additions to
hdmi_match_types[] won't require another if statement to be added to
probe().


Re: [PATCH v3 1/3] drm/exynos: Get HDMI version from device tree

2013-02-05 Thread Stephen Warren
On 02/05/2013 05:37 PM, Sean Paul wrote:
 On Tue, Feb 5, 2013 at 4:22 PM, Stephen Warren swar...@wwwdotorg.org wrote:
 On 02/05/2013 04:42 PM, Sean Paul wrote:
 Use the compatible string in the device tree to determine which
 registers/functions to use in the HDMI driver. Also changes the
 references from v13 to 4210 and v14 to 4212 to reflect the IP
 block version instead of the HDMI version.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/hdmi.txt 
 b/Documentation/devicetree/bindings/drm/exynos/hdmi.txt

 Binding looks sane to me.

 diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c 
 b/drivers/gpu/drm/exynos/exynos_hdmi.c

  #ifdef CONFIG_OF
  static struct of_device_id hdmi_match_types[] = {
   {
 - .compatible = "samsung,exynos5-hdmi",
 - .data   = (void *)HDMI_TYPE14,
 + .compatible = "samsung,exynos4-hdmi",
   }, {
   /* end node */
   }

 Why not fill in all the base compatible values there (I think you need
 this anyway so that DTs don't all have to be compatible with
 samsung,exynos4-hdmi), with .data containing the HDMI_VER_EXYNOS*
 values, then ...

 
 At the moment, all DTs have to be compatible with exynos4-hdmi since
 it provides the base for the current driver. The driver uses 4210 and
 4212 to differentiate between different register addresses and
 features, but most things are just exynos4-hdmi compatible.

The DT nodes should include only the compatible values that the HW is
actually compatible with. If the HW isn't compatible with exynos4-hdmi,
that value shouldn't be in the compatible property, but instead whatever
the base value that the HW really is compatible with. The driver can
support multiple base compatible values from this table.

 @@ -2218,17 +2217,18 @@ static int hdmi_probe(struct platform_device *pdev)

 +
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4210-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4210;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4212-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4212;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos5250-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS5250;

 Instead of that, do roughly:

 match = of_match_device(hdmi_match_types, &pdev->dev);
 if (match)
 hdata->version |= (int)match->data;

 That way, it's all table-based. Any future additions to
 hdmi_match_types[] won't require another if statement to be added to
 probe().
 
 I don't think it's that easy. of_match_device returns the first match
 from the device table, so I'd still need to iterate through the
 matches. I could still break this out into a table, but I don't think
 of_match_device is the right way to probe it.

You shouldn't have to iterate over multiple matches. of_match_device()
is supposed to return the match for the first entry in the compatible
property, then if there was no match, move on to looking at the next
entry in the compatible property, etc. In practice, I think it's still
not implemented quite correctly for this, but you can make it work by
putting the newest compatible value first in the match table.


Re: [PATCH v3 1/3] drm/exynos: Get HDMI version from device tree

2013-02-05 Thread Stephen Warren
On 02/05/2013 05:56 PM, Sean Paul wrote:
 On Tue, Feb 5, 2013 at 4:42 PM, Stephen Warren swar...@wwwdotorg.org wrote:
 On 02/05/2013 05:37 PM, Sean Paul wrote:
 On Tue, Feb 5, 2013 at 4:22 PM, Stephen Warren swar...@wwwdotorg.org 
 wrote:
 On 02/05/2013 04:42 PM, Sean Paul wrote:
 Use the compatible string in the device tree to determine which
 registers/functions to use in the HDMI driver. Also changes the
 references from v13 to 4210 and v14 to 4212 to reflect the IP
 block version instead of the HDMI version.

 diff --git a/Documentation/devicetree/bindings/drm/exynos/hdmi.txt 
 b/Documentation/devicetree/bindings/drm/exynos/hdmi.txt

 Binding looks sane to me.

 diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c 
 b/drivers/gpu/drm/exynos/exynos_hdmi.c

  #ifdef CONFIG_OF
  static struct of_device_id hdmi_match_types[] = {
   {
 - .compatible = "samsung,exynos5-hdmi",
 - .data   = (void *)HDMI_TYPE14,
 + .compatible = "samsung,exynos4-hdmi",
   }, {
   /* end node */
   }

 Why not fill in all the base compatible values there (I think you need
 this anyway so that DTs don't all have to be compatible with
 samsung,exynos4-hdmi), with .data containing the HDMI_VER_EXYNOS*
 values, then ...


 At the moment, all DTs have to be compatible with exynos4-hdmi since
 it provides the base for the current driver. The driver uses 4210 and
 4212 to differentiate between different register addresses and
 features, but most things are just exynos4-hdmi compatible.

 The DT nodes should include only the compatible values that the HW is
 actually compatible with. If the HW isn't compatible with exynos4-hdmi,
 that value shouldn't be in the compatible property, but instead whatever
 the base value that the HW really is compatible with. The driver can
 support multiple base compatible values from this table.
 
 All devices that use this driver are compatible, at some level, with
 exynos4-hdmi, so I think its usage is correct here.

But can a driver that only knows about the original exynos4-hdmi operate
any of the HW correctly without any additional knowledge? If not, the
new HW isn't compatible with the old.
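
For illustration only (a hypothetical node, not taken from any real DT): a
board whose HDMI block is, say, a 4212-class device that an exynos4-hdmi-only
driver can still operate would list both values, most specific first:

	hdmi {
		compatible = "samsung,exynos4212-hdmi", "samsung,exynos4-hdmi";
		/* reg, interrupts, clocks, ... as the binding requires */
	};

Hardware that such a driver cannot operate at all should not claim
samsung,exynos4-hdmi in the first place.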

 @@ -2218,17 +2217,18 @@ static int hdmi_probe(struct platform_device 
 *pdev)

 +
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4210-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4210;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos4212-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS4212;
 + if (of_device_is_compatible(dev->of_node, "samsung,exynos5250-hdmi"))
 + hdata->version |= HDMI_VER_EXYNOS5250;

 Instead of that, do roughly:

 match = of_match_device(hdmi_match_types, &pdev->dev);
 if (match)
 	hdata->version |= (int)match->data;

 That way, it's all table-based. Any future additions to
 hdmi_match_types[] won't require another if statement to be added to
 probe().

 I don't think it's that easy. of_match_device returns the first match
 from the device table, so I'd still need to iterate through the
 matches. I could still break this out into a table, but I don't think
 of_match_device is the right way to probe it.

 You shouldn't have to iterate over multiple matches. of_match_device()
 is supposed to return the match for the first entry in the compatible
 property, then if there was no match, move on to looking at the next
 entry in the compatible property, etc. In practice, I think it's still
 not implemented quite correctly for this, but you can make it work by
 putting the newest compatible value first in the match table.
 
 I think the only way that works is if you hardcode the compatible
 versions in the driver, like this:
 
 static struct of_device_id hdmi_match_types[] = {
 {
 .compatible = "samsung,exynos5250-hdmi",
 .data = (void *)(HDMI_VER_EXYNOS5250 | HDMI_VER_EXYNOS4212),
 }, {
 .compatible = "samsung,exynos4212-hdmi",
 .data = (void *)HDMI_VER_EXYNOS4212,
 }, {
 .compatible = "samsung,exynos4210-hdmi",
 .data = (void *)HDMI_VER_EXYNOS4210,
 }, {
 /* end node */
 }
 };
 
 In that case, it eliminates the benefit of using device tree to
 determine the compatible bits. I hope I'm just being thick and missing
 something.

The table above looks /almost/ exactly correct to me, although I'm
unsure why samsung,exynos5250-hdmi has *two* version values; surely
there's a 1:1 mapping between the compatible values and the HW
compatibility they represent? That's certainly the intent.

Perhaps the two values thing is because you're representing quirks or
features rather than HW versions? Compatible is supposed to represent HW
versions. Each HW version has a set of features/quirks, and multiple HW
versions can have intersecting sets of features/quirks. However,
features/quirks aren't HW versions.
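
As a rough sketch of the table-driven direction discussed above (illustrative
only, not the patch as merged; it assumes the driver's HDMI_VER_* flags, a
struct hdmi_context with a version field, and the usual <linux/of_device.h>
and <linux/platform_device.h> headers):

static const struct of_device_id hdmi_match_types[] = {
	{
		.compatible = "samsung,exynos5250-hdmi",
		.data = (void *)HDMI_VER_EXYNOS5250,
	}, {
		.compatible = "samsung,exynos4212-hdmi",
		.data = (void *)HDMI_VER_EXYNOS4212,
	}, {
		.compatible = "samsung,exynos4210-hdmi",
		.data = (void *)HDMI_VER_EXYNOS4210,
	}, {
		/* end node */
	}
};

static void hdmi_set_version(struct hdmi_context *hdata,
			     struct platform_device *pdev)
{
	const struct of_device_id *match;

	/*
	 * Each compatible value maps to exactly one version flag, and the
	 * newest entries come first in the table so the most specific match
	 * wins, given the of_match_device() ordering caveat noted above.
	 */
	match = of_match_device(hdmi_match_types, &pdev->dev);
	if (match)
		hdata->version |= (unsigned long)match->data;
}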

[PATCH] drm: tegra: don't depend on OF

2013-02-15 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

ARCH_TEGRA always enables OF, so there's no need for any driver to
depend on it.

Signed-off-by: Stephen Warren swar...@nvidia.com
---
 drivers/gpu/drm/tegra/Kconfig |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/tegra/Kconfig b/drivers/gpu/drm/tegra/Kconfig
index be1daf7..15974ae 100644
--- a/drivers/gpu/drm/tegra/Kconfig
+++ b/drivers/gpu/drm/tegra/Kconfig
@@ -1,6 +1,6 @@
 config DRM_TEGRA
	tristate "NVIDIA Tegra DRM"
-	depends on DRM && OF && ARCH_TEGRA
+	depends on DRM && ARCH_TEGRA
select DRM_KMS_HELPER
select DRM_GEM_CMA_HELPER
select DRM_KMS_CMA_HELPER
-- 
1.7.10.4



Broken build due to 6aed8ec drm: review locking for drm_fb_helper_restore_fbdev_mode

2013-02-19 Thread Stephen Warren
Daniel,

Commit 6aed8ec "drm: review locking for
drm_fb_helper_restore_fbdev_mode" (now in next-20130218 and later)
causes build failures for tegra_defconfig. The issue is this part of the
patch:

 diff --git a/drivers/gpu/drm/drm_fb_cma_helper.c 
 b/drivers/gpu/drm/drm_fb_cma_helper.c
 index 3742bc9..1b6ba2d 100644
 --- a/drivers/gpu/drm/drm_fb_cma_helper.c
 +++ b/drivers/gpu/drm/drm_fb_cma_helper.c
 @@ -389,8 +389,10 @@ EXPORT_SYMBOL_GPL(drm_fbdev_cma_fini);
   */
  void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma)
  {
 +   drm_modeset_lock_all(dev);
 if (fbdev_cma)
 drm_fb_helper_restore_fbdev_mode(&fbdev_cma->fb_helper);
 +   drm_modeset_unlock_all(dev);
  }
  EXPORT_SYMBOL_GPL(drm_fbdev_cma_restore_mode);

In that function there is no dev variable, so the compile fails.

If I revert this one patch, the build succeeds, although I didn't check
whether DRM still works after that (e.g. due to any dependencies from
the rest of the series).
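
For reference, one minimal way to make that hunk build (a sketch only, not
necessarily the fix that went upstream) is to take the device from the
embedded fb_helper and keep the locking inside the NULL check:

void drm_fbdev_cma_restore_mode(struct drm_fbdev_cma *fbdev_cma)
{
	if (fbdev_cma) {
		/* drm_fbdev_cma embeds a drm_fb_helper, which carries the
		 * drm_device pointer that the modeset locks hang off.
		 */
		struct drm_device *dev = fbdev_cma->fb_helper.dev;

		drm_modeset_lock_all(dev);
		drm_fb_helper_restore_fbdev_mode(&fbdev_cma->fb_helper);
		drm_modeset_unlock_all(dev);
	}
}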


Re: nouveau lockdep splat

2013-03-06 Thread Stephen Warren
On 03/06/2013 12:14 PM, Marcin Slusarz wrote:
 On Wed, Mar 06, 2013 at 01:04:29AM +0100, Borislav Petkov wrote:
 On Tue, Mar 05, 2013 at 05:30:52PM +0100, Lucas Stach wrote:
 Dropping Tegra ML, it's not the place where Nouveau mails should go.

 $ ./scripts/get_maintainer.pl -f drivers/gpu/drm/nouveau/nv50_display.c
 ...
 linux-te...@vger.kernel.org (open list:TEGRA SUPPORT)

 Maybe get_maintainer.pl patterns need correction...
 
 That's a new feature (introduced in commit eb90d0855b75f8 get_maintainer: allow
 keywords to match filenames) of get_maintainer.pl, which can now look at file
 contents...

get_maintainer.pl could always look at file contents IIRC. The change
was that I added keyword tegra to the Tegra section that now matches
this file's contents.

./scripts/get_maintainer.pl -f drivers/gpu/drm/nouveau

... might be a better invocation, although perhaps I should add an
explicit exclusion for nouveau to the Tegra section in MAINTAINERS.


[PATCH 1/2] get_maintainer: create filename-only regex match type

2013-03-06 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

Create a new N: entry type in MAINTAINERS which performs a regex match
against filenames; either those extracted from patch +++ or --- lines,
or those specified on the command-line using the -f option.

This provides the same benefits as using a K: regex option to match a
set of filenames (see commit eb90d08 get_maintainer: allow keywords to
match filenames), but without the disadvantage that random file
content, such as comments, might also match the regex.

Suggested-by: Joe Perches j...@perches.com
Signed-off-by: Stephen Warren swar...@nvidia.com
---
 MAINTAINERS   |3 +++
 scripts/get_maintainer.pl |2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9561658..1671842 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -90,6 +90,9 @@ Descriptions of section entries:
   F:   drivers/net/*   all files in drivers/net, but not below
   F:   */net/* all files in any top level directory/net
   One pattern per line.  Multiple F: lines acceptable.
+   N: Files and directories with regex patterns.
+  K:   [^a-z]tegra all files whose path contains the word tegra
+  One pattern per line.  Multiple N: lines acceptable.
X: Files and directories that are NOT maintained, same rules as F:
   Files exclusions are tested before file matches.
   Can be useful for excluding a specific subdirectory, for instance:
diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl
index ce4cc83..27dc5cb 100755
--- a/scripts/get_maintainer.pl
+++ b/scripts/get_maintainer.pl
@@ -611,7 +611,7 @@ sub get_maintainers {
$hash{$tvi} = $value_pd;
}
}
-   } elsif ($type eq 'K') {
+   } elsif ($type eq 'K' || $type eq 'N') {
if ($file =~ m/$value/x) {
$hash{$tvi} = 0;
}
-- 
1.7.10.4



[PATCH 2/2] MAINTAINERS: tegra: match related files using N: not K:

2013-03-06 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

This causes the regex to be applied to filenames only, and not patch or
file content (such as comments). This should prevent e.g.
drivers/gpu/drm/nouveau/nv50_display.c from matching this entry.

Reported-by: Marcin Slusarz marcin.slus...@gmail.com
Signed-off-by: Stephen Warren swar...@nvidia.com
---
 MAINTAINERS |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 1671842..8b986e2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7851,7 +7851,7 @@ L:linux-te...@vger.kernel.org
 Q: http://patchwork.ozlabs.org/project/linux-tegra/list/
 T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra.git
 S: Supported
-K: (?i)[^a-z]tegra
+N: [^a-z]tegra
 
 TEHUTI ETHERNET DRIVER
 M: Andy Gospodarek a...@greyhouse.net
-- 
1.7.10.4



[PATCH V2 1/3] get_maintainer: create filename-only regex match type

2013-03-06 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

Create a new N: entry type in MAINTAINERS which performs a regex match
against filenames; either those extracted from patch +++ or --- lines,
or those specified on the command-line using the -f option.

This provides the same benefits as using a K: regex option to match a
set of filenames (see commit eb90d08 get_maintainer: allow keywords to
match filenames), but without the disadvantage that random file
content, such as comments, might also match the regex.

Suggested-by: Joe Perches j...@perches.com
Signed-off-by: Stephen Warren swar...@nvidia.com
---
v2: Corrected typo in MAINTAINERS documentation
---
 MAINTAINERS   |3 +++
 scripts/get_maintainer.pl |2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9561658..c9b1e37 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -90,6 +90,9 @@ Descriptions of section entries:
   F:   drivers/net/*   all files in drivers/net, but not below
   F:   */net/* all files in any top level directory/net
   One pattern per line.  Multiple F: lines acceptable.
+   N: Files and directories with regex patterns.
+  N:   [^a-z]tegra all files whose path contains the word tegra
+  One pattern per line.  Multiple N: lines acceptable.
X: Files and directories that are NOT maintained, same rules as F:
   Files exclusions are tested before file matches.
   Can be useful for excluding a specific subdirectory, for instance:
diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl
index ce4cc83..27dc5cb 100755
--- a/scripts/get_maintainer.pl
+++ b/scripts/get_maintainer.pl
@@ -611,7 +611,7 @@ sub get_maintainers {
$hash{$tvi} = $value_pd;
}
}
-   } elsif ($type eq 'K') {
+   } elsif ($type eq 'K' || $type eq 'N') {
if ($file =~ m/$value/x) {
$hash{$tvi} = 0;
}
-- 
1.7.10.4



[PATCH V2 2/3] MAINTAINERS: tegra: match related files using N: not K:

2013-03-06 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

This causes the regex to be applied to filenames only, and not patch or
file content (such as comments). This should prevent e.g.
drivers/gpu/drm/nouveau/nv50_display.c from matching this entry.

Reported-by: Marcin Slusarz marcin.slus...@gmail.com
Signed-off-by: Stephen Warren swar...@nvidia.com
---
 MAINTAINERS |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index c9b1e37..2d02ab5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7851,7 +7851,7 @@ L:linux-te...@vger.kernel.org
 Q: http://patchwork.ozlabs.org/project/linux-tegra/list/
 T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra.git
 S: Supported
-K: (?i)[^a-z]tegra
+N: [^a-z]tegra
 
 TEHUTI ETHERNET DRIVER
 M: Andy Gospodarek a...@greyhouse.net
-- 
1.7.10.4



[PATCH V2 3/3] get_maintainer: prevent keywords from matching filenames

2013-03-06 Thread Stephen Warren
From: Stephen Warren swar...@nvidia.com

This reverts most of eb90d08 get_maintainer: allow keywords to match
filenames; all except the parts that are required to implement the new
N: entry type.

The rationale is that it's better to have K: match just patch or file
content as it previously did, and N: match just filenames, rather than
have K: match all three cases. This gives more explicit control, and
removes the temptation to use K: for filenames, and then have those
keywords accidentally match unexpected patch or file content.

Signed-off-by: Stephen Warren swar...@nvidia.com
---
v2: New patch.
---
 MAINTAINERS   |9 -
 scripts/get_maintainer.pl |2 +-
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2d02ab5..e68a07a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -100,13 +100,12 @@ Descriptions of section entries:
   X:   net/ipv6/
   matches all files in and below net excluding net/ipv6/
K: Keyword perl extended regex pattern to match content in a
-  patch or file, or an affected filename.  For instance:
+  patch or file.  For instance:
   K: of_get_profile
- matches patch or file content, or filenames, that contain
- of_get_profile
+ matches patches or files that contain of_get_profile
   K: \b(printk|pr_(info|err))\b
- matches patch or file content, or filenames, that contain one or
- more of the words printk, pr_info or pr_err
+ matches patches or files that contain one or more of the words
+ printk, pr_info or pr_err
   One regex pattern per line.  Multiple K: lines acceptable.
 
 Note: For the hard of thinking, this list is meant to remain in alphabetical
diff --git a/scripts/get_maintainer.pl b/scripts/get_maintainer.pl
index 27dc5cb..5e4fb14 100755
--- a/scripts/get_maintainer.pl
+++ b/scripts/get_maintainer.pl
@@ -611,7 +611,7 @@ sub get_maintainers {
$hash{$tvi} = $value_pd;
}
}
-   } elsif ($type eq 'K' || $type eq 'N') {
+   } elsif ($type eq 'N') {
if ($file =~ m/$value/x) {
$hash{$tvi} = 0;
}
-- 
1.7.10.4



Re: [PATCH V2 3/3] get_maintainer: prevent keywords from matching filenames

2013-03-06 Thread Stephen Warren
On 03/06/2013 05:30 PM, Joe Perches wrote:
 On Wed, 2013-03-06 at 17:29 -0700, Stephen Warren wrote:
 From: Stephen Warren swar...@nvidia.com

 This reverts most of eb90d08 get_maintainer: allow keywords to match
 filenames; all except the parts that are required to implement the new
 N: entry type.
 
 Just combine patches 1 and 3 into a single patch.

That would break bisectability; using MAINTAINERS after applying a
squashed 1+3 but without patch 2 applied would not operate as desired.



Re: [PATCH V2 3/3] get_maintainer: prevent keywords from matching filenames

2013-03-06 Thread Stephen Warren
On 03/06/2013 05:40 PM, Joe Perches wrote:
 On Wed, 2013-03-06 at 17:34 -0700, Stephen Warren wrote:
 On 03/06/2013 05:30 PM, Joe Perches wrote:
 On Wed, 2013-03-06 at 17:29 -0700, Stephen Warren wrote:
 From: Stephen Warren swar...@nvidia.com

 This reverts most of eb90d08 get_maintainer: allow keywords to match
 filenames; all except the parts that are required to implement the new
 N: entry type.

 Just combine patches 1 and 3 into a single patch.

 That would break bisectability; using MAINTAINERS after applying a
 squashed 1+3 but without patch 2 applied would not operate as desired.
 
 smile
 
 Which is why I showed the whole thing in a single patch.
 No worries if it's simply to increase the patch count...

I'm not sure what "showed" refers to?

I guess if I squashed /everything/ into a single commit (i.e. the change
to the Tegra section in MAINTAINERS too) then there wouldn't be any
bisect issues. I really don't care about patch count. The reason for >1
patch is that there's often push-back if doing logically separate things
(adding N: feature, modifying a MAINTAINERS entry to use it, removing
part of K: feature) in a single patch...

