Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-22 Thread Terje Bergström
On 21.05.2012 16:58, Thierry Reding wrote:

> I agree that reflecting the hardware hierarchy within the device tree makes
> sense. I'll need to add some code to tear down child devices in the cleanup
> path, but that shouldn't be too difficult.
> 
> However, adding a whole bus_type implementation would pretty much duplicate
> the platform bus code. I guess host1x could just provide a struct host1x_client
> and a set of corresponding APIs to create them, request channels, etc.


Hi,

There is a reason for the existence of bus_type in the Linux kernel. It
exposes the various busses to developers and gives drivers a framework
to work in. It just makes the drivers easier to develop, and makes the
big picture easier to understand.

The problem is that bus_type is cumbersome to implement, and most
implementations seem to duplicate a significant amount of code from the
platform bus. This is the problem that we should tackle.

If I manage to get the boilerplate code in nvhost for bus_type small
enough, that's the structure we should use. If bus_type is just
inherently fat and broken, I'll need to migrate nvhost away from it.
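
For reference, even a minimal bus_type carries a fair amount of
boilerplate. A rough sketch of what it involves (all names here are
illustrative, not the actual nvhost code):

/* Illustrative sketch only - not the real nvhost implementation. */
#include <linux/device.h>
#include <linux/string.h>

struct nvhost_device {
        struct device dev;
        /* channel id, register aperture, syncpoints, ... */
};

struct nvhost_driver {
        struct device_driver driver;
        int (*probe)(struct nvhost_device *ndev);
        int (*remove)(struct nvhost_device *ndev);
};

static int nvhost_bus_match(struct device *dev, struct device_driver *drv)
{
        /* match by name, just like the platform bus does */
        return strcmp(dev_name(dev), drv->name) == 0;
}

static struct bus_type nvhost_bus_type = {
        .name  = "nvhost",
        .match = nvhost_bus_match,
};

int nvhost_driver_register(struct nvhost_driver *drv)
{
        drv->driver.bus = &nvhost_bus_type;
        return driver_register(&drv->driver);
}

On top of that come device registration, probe/remove glue and PM
callbacks, which is exactly the part that ends up duplicating
drivers/base/platform.c.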

Terje


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-22 Thread Jon Mayo
On Wed, Apr 25, 2012 at 2:45 AM, Thierry Reding wrote:
> This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> currently has rudimentary GEM support and can run a console on the
> framebuffer as well as X using the xf86-video-modesetting driver. Only
> the RGB output is supported.
>
> HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
> quite work. EDID data can be retrieved but the output doesn't properly
> activate the connected TV.
>
> The DSI and TVO outputs and the HOST1X driver are just stubs that setup
> the corresponding resources but don't do anything useful yet.
>

Nice work Thierry, I'm happy with the display programming bits. I
think the pll_p enable/disable problem with RGB is better fixed once we
have some fancier clock infrastructure. I have some ideas for how it
could be fixed now, but given that I want to see pll_p or pll_d /
pll_d2 selected automatically based on the final clock, I'd rather hold
off on fixing things that aren't really broken and are going to be
replaced anyway. You can save a little power by leaving the D PLLs off,
and the logic to do that isn't hard and is fairly safe to hard-code into
the driver instead of trying to represent it in DT. (FYI - pll_d2 is a
Tegra3 feature; this way you can do things like run DSI and HDMI with
low jitter, instead of trying to take over pll_c for HDMI, which
deprives other modules of clock freedom.)
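
As a rough illustration of the automatic parent selection I mean (just
a sketch against the generic clk API; the clock names and the
pll_p-first policy are assumptions):

/* Sketch only: prefer pll_p when it can hit the pixel clock exactly,
 * otherwise fall back to pll_d, so the D PLLs can stay off most of the
 * time. Error handling and clk_put() omitted. */
#include <linux/clk.h>

static int tegra_dc_select_parent(struct device *dev,
                                  struct clk *disp_clk,
                                  unsigned long pclk)
{
        struct clk *pll_p = clk_get(dev, "pll_p");
        struct clk *pll_d = clk_get(dev, "pll_d");
        struct clk *parent;

        if (clk_round_rate(pll_p, pclk) == pclk)
                parent = pll_p;
        else
                parent = pll_d;

        return clk_set_parent(disp_clk, parent);
}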

I have more DSI devices on Tegra3 than on Tegra2, so I'm putting
Tegra3 support near the top of my TODO list, because in my mind forward
progress on DSI support depends on it. HDMI programming is not too
tough; it's mostly loading calibrated magic values into some SOR
registers. The values are different for the T20 and T30 series (same
fields, they just need different calibration).
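
Concretely, the driver would carry small per-SoC tables indexed by
pixel clock, roughly like this (the structure and all values below are
placeholders, not real calibration data):

/* Placeholder values only - real entries come from characterization. */
#include <linux/types.h>

struct tmds_config {
        unsigned int pclk;      /* upper bound for this entry, in Hz */
        u32 pll0;
        u32 pll1;
        u32 pe_current;
        u32 drive_current;
};

static const struct tmds_config tegra20_tmds[] = {
        {  74250000, 0x0, 0x0, 0x0, 0x0 },
        { 148500000, 0x0, 0x0, 0x0, 0x0 },
};

/* same fields, different calibration for Tegra3 */
static const struct tmds_config tegra30_tmds[] = {
        {  74250000, 0x0, 0x0, 0x0, 0x0 },
        { 148500000, 0x0, 0x0, 0x0, 0x0 },
};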

--
"If you give someone a program, you will frustrate them for a day; if
you teach them how to program, you will frustrate them for a
lifetime." — David Leinweber


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-21 Thread Thierry Reding
* Terje Bergström wrote:
> On 21.05.2012 14:05, Thierry Reding wrote:
> 
> > I agree. It's really just a "simple-bus" kind of bus. Except that it'll need
> > another compatible value to make the host1x driver bind to it. But we should
> > be able to make it work without creating a completely new bus. I guess the
> > "ranges" property could specify 0x5400 as the base address and child
> > nodes could use offsets relative to that. Maybe even that is overkill,
> > though.
> 
> 
> Hi,
> 
> In the phase where we are just getting display up, yes, the bus is a
> really simple bus. For that, all you need to know about host1x is that
> it needs to be turned on whenever you talk to the display. Having a bus
> seems overkill.
> 
> At a later stage, we're going to need support for host1x channels and
> related infrastructure. In that phase, we need to find good homes for
> channel allocation, job tracking, push buffer management, and other code
> common to all host1x drivers. In the current nvhost driver, it's the job
> of the bus_type and related utilities, and that seems like a good enough
> fit.
> 
> I have a bit of a hard time understanding the significance of a bus_type
> in the Linux driver model.
> 
> On one hand, bus_type allows grouping a set of devices and drivers
> according to the APIs they implement. In nvhost, we have struct
> nvhost_device and struct nvhost_driver, and each host1x client driver
> needs to use them. It's natural to put them in the same bus_type. As a
> bonus, we can assign further common tasks to the bus_type.
> 
> On the other hand, implementing a bus_type introduces code that almost
> duplicates the platform bus. This is the part that I have a hard time with.
> 
> The device tree should reflect the target state, as it's going to be an
> API between kernel and boot loader. I believe it'd be best to introduce
> host1x as a bus and put all host1x clients as nodes inside host1x.

I agree that reflecting the hardware hierarchy within the device tree makes
sense. I'll need to add some code to tear down child devices in the cleanup
path, but that shouldn't be too difficult.

However, adding a whole bus_type implementation would pretty much duplicate
the platform bus code. I guess host1x could just provide a struct host1x_client
and a set of corresponding APIs to create them, request channels, etc.
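
Something along those lines is what I have in mind (very much a sketch;
none of the names or signatures below are final):

/* Sketch only - names and signatures to be decided. */
#include <linux/device.h>
#include <linux/list.h>

struct host1x;
struct host1x_channel;

struct host1x_client {
        struct device *dev;
        struct list_head list;
        /* per-client callbacks, syncpoint bookkeeping, ... */
};

int host1x_register_client(struct host1x *host1x,
                           struct host1x_client *client);
int host1x_unregister_client(struct host1x *host1x,
                             struct host1x_client *client);

/* later, once we start working on command channels */
struct host1x_channel *host1x_channel_request(struct host1x_client *client);
void host1x_channel_free(struct host1x_channel *channel);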

Thierry


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-21 Thread Terje Bergström
On 21.05.2012 14:05, Thierry Reding wrote:

> I agree. It's really just a "simple-bus" kind of bus. Except that it'll need
> another compatible value to make the host1x driver bind to it. But we should
> be able to make it work without creating a completely new bus. I guess the
> "ranges" property could specify 0x5400 as the base address and child
> nodes could use offsets relative to that. Maybe even that is overkill,
> though.


Hi,

In the phase where we are just getting display up, yes, the bus is a
really simple bus. For that, all you need to know about host1x is that
it needs to be turned on whenever you talk to the display. Having a bus
seems overkill.

At a later stage, we're going to need support for host1x channels and
related infrastructure. In that phase, we need to find good homes for
channel allocation, job tracking, push buffer management, and other code
common to all host1x drivers. In the current nvhost driver, it's the job
of the bus_type and related utilities, and that seems like a good enough
fit.

I have a bit of a hard time understanding the significance of a bus_type
in the Linux driver model.

On one hand, bus_type allows grouping a set of devices and drivers
according to the APIs they implement. In nvhost, we have struct
nvhost_device and struct nvhost_driver, and each host1x client driver
needs to use them. It's natural to put them in the same bus_type. As a
bonus, we can assign further common tasks to the bus_type.

On the other hand, implementing a bus_type introduces code that almost
duplicates the platform bus. This is the part that I have a hard time with.

The device tree should reflect the target state, as it's going to be an
API between kernel and boot loader. I believe it'd be best to introduce
host1x as a bus and put all host1x clients as nodes inside host1x.

Terje


[RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-21 Thread Thierry Reding
* Stephen Warren wrote:
> On 05/07/2012 02:50 AM, Terje Bergström wrote:
> > Hi Thierry,
> > 
> > I have still lots of questions regarding how device trees work. I'm now
> > just trying to match the device tree structure with hardware - let me
> > know if that goes wrong.
> > 
> > There's a hierarchy in the hardware, which should be represented in the
> > device trees. All of the hardware are client modules for host1x - with
> > the exception of host1x obviously. CPU has two methods for accessing the
> > hardware: clients' register aperture and host1x channels. Both of these
> > operate via host1x hardware.
> > 
> > We should define host1x bus in the device tree, and move all nodes
> > except host1x under that bus.
> 
> I think the host1x node /is/ that bus.

I agree. It's really just a "simple-bus" kind of bus. Except that it'll need
another compatible value to make the host1x driver bind to it. But we should
be able to make it work without creating a completely new bus. I guess the
"ranges" property could specify 0x5400 as the base address and child
nodes could use offsets relative to that. Maybe even that is overkill,
though.

Thierry


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-21 Thread Thierry Reding
* Stephen Warren wrote:
> On 04/25/2012 03:45 AM, Thierry Reding wrote:
> > This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> > currently has rudimentary GEM support and can run a console on the
> > framebuffer as well as X using the xf86-video-modesetting driver. Only
> > the RGB output is supported.
> > 
> > HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
> > quite work. EDID data can be retrieved but the output doesn't properly
> > activate the connected TV.
> > 
> > The DSI and TVO outputs and the HOST1X driver are just stubs that setup
> > the corresponding resources but don't do anything useful yet.
> 
> I'm mainly going to comment on the device tree bindings here; hopefully
> Jon and Terje can chime in on the code itself.
> 
> > diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
> > b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
> 
> > +Example:
> > +
> > +/memreserve/ 0x0e00 0x0200;
> > +
> > +...
> > +
> > +/ {
> > +   ...
> > +
> > +   /* host1x */
> > +   host1x: host1x@5000 {
> > +   compatible = "nvidia,tegra20-host1x";
> > +   reg = <0x5000 0x00024000>;
> > +   interrupts = <0 64 0x04   /* cop syncpt */
> > + 0 65 0x04   /* mpcore syncpt */
> > + 0 66 0x04   /* cop general */
> > + 0 67 0x04>; /* mpcore general */
> > +   };
> 
> The host1x module is apparently a register bus, behind which all the
> other modules sit. According to the address map in the TRM, "all the
> other modules" appears to include all of MPE, VI, CSI, EPP, ISP, GR2D,
> GR3D, DISPLAY A/B, HDMI, TVO, DSI, plus VG, VS, VCI, DSIB on Tegra30.
> 
> That said, I believe Terje wasn't convinced that all those modules are
> really host1x children, just some. Can you check please, Terje?
> 
> Also, I wonder if host1x is really the parent of these modules,
> register-bus-access-wise, or whether the bus covers the "graphic host
> registers" at 0x5000-0x50023fff?
> 
> As such, I think the DT nodes for all those modules should be within the
> host1x node (or perhaps graphics host node, pending above discussion):
> 
> host1x: host1x@5000 {
> mpe@5404 { ... };
> vi@5408 { ... };
> ...
> };
> 
> host1x can easily instantiate all the child nodes simply by calling
> of_platform_populate() at the end of its probe; see
> sound/soc/tegra/tegra30_ahub.c for an example.

I actually had an implementation at some point that did exactly that. The
problem was that there is no equivalent to of_platform_populate() to call at
module removal, so that when the tegra-drm module was unloaded and reloaded,
it would try to create duplicate devices.

Something ugly can probably be done with device_for_each_child(), but maybe a
better option would be to add a helper to the DT code to explicitly undo the
effects of of_platform_populate() (of_platform_desolate()?).
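
The ugly variant would look roughly like this (sketch only):

/* Sketch: undo of_platform_populate() by unregistering all child
 * platform devices of the host1x/graphics device again. */
#include <linux/device.h>
#include <linux/platform_device.h>

static int tegra_drm_unregister_child(struct device *dev, void *data)
{
        platform_device_unregister(to_platform_device(dev));
        return 0;
}

static void tegra_drm_remove_children(struct device *parent)
{
        device_for_each_child(parent, NULL, tegra_drm_unregister_child);
}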

> > +   /* video-encoding/decoding */
> > +   mpe@5404 {
> > +   reg = <0x5404 0x0004>;
> > +   interrupts = <0 68 0x04>;
> > +   };
> 
> We'll probably end up needing a phandle from each of these nodes to
> host1x, and a channel ID, so the drivers for these nodes can register
> themselves with host1x. However, it's probably OK to defer the DT
> binding for this until we actually start working on command-channels.

I suppose these should now also drop the unit addresses to follow your
cleanup patches of the DTS files.

> > +   /* graphics host */
> > +   graphics@5400 {
> > +   compatible = "nvidia,tegra20-graphics";
> > +
> > +   #address-cells = <1>;
> > +   #size-cells = <1>;
> > +   ranges;
> 
> I don't think those 3 properties are needed, unless there are child
> nodes with registers.

Right, those are relics from the earlier version I mentioned above, where the
other nodes were children of the graphics node.

> > +   display-controllers = <&disp1 &disp2>;
> > +   carveout = <0x0e00 0x0200>;
> > +   host1x = <&host1x>;
> > +   gart = <&gart>;
> > +
> > +   connectors {
> 
> I'm not sure that it makes sense to put the connectors nodes underneath
> the graphics host node; the connectors aren't objects with registers
> that are accessed through a bus managed by the graphics node. Equally,
> the connectors could be useful outside of the graphics driver - e.g. the
> audio driver might need to refer to an HDMI connector.
> 
> Instead, I'd probably put the connector nodes at the top level of the
> device tree, and have graphics contain a property that lists the
> phandles of all available connectors.

That makes a lot of sense.
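
On the driver side that would just mean walking a phandle list, roughly
like this (the "connectors" property name is only a suggestion):

/* Sketch: resolve each phandle in a "connectors" property. */
#include <linux/of.h>

static void tegra_drm_parse_connectors(struct device_node *np)
{
        struct device_node *connector;
        int i = 0;

        while ((connector = of_parse_phandle(np, "connectors", i++))) {
                /* set up a DRM connector from this node ... */
                of_node_put(connector);
        }
}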

> > +   #address-cells = <1>;
> > +   #size-cells = <0>;
> > +
> > +   connector@0 {
> > +   reg = <0>;
> > +   edid = /incbin/("machine.edid");
> > + 

Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-07 Thread Stephen Warren
On 05/07/2012 02:50 AM, Terje Bergström wrote:
> On 25.04.2012 12:45, Thierry Reding wrote:
> 
>> +/ {
>> +   ...
>> +
>> +   /* host1x */
>> +   host1x: host1x@5000 {
>> +   compatible = "nvidia,tegra20-host1x";
>> +   reg = <0x5000 0x00024000>;
>> +   interrupts = <0 64 0x04   /* cop syncpt */
>> + 0 65 0x04   /* mpcore syncpt */
>> + 0 66 0x04   /* cop general */
>> + 0 67 0x04>; /* mpcore general */
>> +   };
>> +
>> +   /* video-encoding/decoding */
>> +   mpe@5404 {
>> +   reg = <0x5404 0x0004>;
>> +   interrupts = <0 68 0x04>;
>> +   };
>> +
> 
> (...)
> 
> Hi Thierry,
> 
> I have still lots of questions regarding how device trees work. I'm now
> just trying to match the device tree structure with hardware - let me
> know if that goes wrong.
> 
> There's a hierarchy in the hardware, which should be represented in the
> device trees. All of the hardware are client modules for host1x - with
> the exception of host1x obviously. CPU has two methods for accessing the
> hardware: clients' register aperture and host1x channels. Both of these
> operate via host1x hardware.
> 
> We should define host1x bus in the device tree, and move all nodes
> except host1x under that bus.

I think the host1x node /is/ that bus.


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-07 Thread Terje Bergström
On 25.04.2012 12:45, Thierry Reding wrote:

> +/ {
> +   ...
> +
> +   /* host1x */
> +   host1x: host1x@5000 {
> +   compatible = "nvidia,tegra20-host1x";
> +   reg = <0x5000 0x00024000>;
> +   interrupts = <0 64 0x04   /* cop syncpt */
> + 0 65 0x04   /* mpcore syncpt */
> + 0 66 0x04   /* cop general */
> + 0 67 0x04>; /* mpcore general */
> +   };
> +
> +   /* video-encoding/decoding */
> +   mpe@5404 {
> +   reg = <0x5404 0x0004>;
> +   interrupts = <0 68 0x04>;
> +   };
> +

(...)

Hi Thierry,

I still have lots of questions regarding how device trees work. I'm now
just trying to match the device tree structure with the hardware - let
me know if that goes wrong.

There's a hierarchy in the hardware, which should be represented in the
device trees. All of the hardware blocks are client modules of host1x -
with the exception of host1x itself, obviously. The CPU has two methods
for accessing the hardware: the clients' register apertures and host1x
channels. Both of these operate via the host1x hardware.

We should define a host1x bus in the device tree and move all nodes
except host1x under that bus. This will help us in the long run, as we
will have multiple drivers (drm, v4l2) each accessing hardware under
host1x. We will need to model the bus, and the bus_type will need to
take over responsibility for managing the common resources.

With respect to clocking the hardware: whenever we want to access the
display's register aperture, host1x needs to be clocked.
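
In driver terms that means something like the following around any
display register access (the struct and field names below are made up
for illustration):

/* Sketch: host1x must be clocked before any client's registers,
 * e.g. the display controller's, can be accessed. */
#include <linux/clk.h>
#include <linux/io.h>

struct tegra_dc {
        struct clk *host1x_clk;
        void __iomem *regs;
};

static u32 tegra_dc_readl(struct tegra_dc *dc, unsigned long offset)
{
        u32 value;

        clk_enable(dc->host1x_clk);
        value = readl(dc->regs + offset);
        clk_disable(dc->host1x_clk);

        return value;
}

In practice the clock would of course stay enabled across a whole
operation; the point is only the ordering.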

> +   /* graphics host */

> +   graphics@5400 {
> +   compatible = "nvidia,tegra20-graphics";
> +
> +   #address-cells = <1>;
> +   #size-cells = <1>;
> +   ranges;
> +
> +   display-controllers = <&disp1 &disp2>;
> +   carveout = <0x0e00 0x0200>;
> +   host1x = <&host1x>;
> +   gart = <&gart>;
> +
> +   connectors {
> +   #address-cells = <1>;
> +   #size-cells = <0>;
> +
> +   connector@0 {
> +   reg = <0>;
> +   edid = /incbin/("machine.edid");
> +   output = <&lvds>;
> +   };
> +
> +   connector@1 {
> +   reg = <1>;
> +   output = <&hdmi>;
> +   ddc = <&i2c2>;
> +
> +   hpd-gpio = <&gpio 111 0>; /* PN7 */
> +   };
> +   };
> +   };
> +};


I'm not sure what this node means. The register range from 5400
onwards is actually the one that you just described in the nodes of the
individual client modules. Why is it represented here again?

Terje


Re: [RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-05-03 Thread Stephen Warren
On 04/25/2012 03:45 AM, Thierry Reding wrote:
> This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
> currently has rudimentary GEM support and can run a console on the
> framebuffer as well as X using the xf86-video-modesetting driver. Only
> the RGB output is supported.
> 
> HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
> quite work. EDID data can be retrieved but the output doesn't properly
> activate the connected TV.
> 
> The DSI and TVO outputs and the HOST1X driver are just stubs that setup
> the corresponding resources but don't do anything useful yet.

I'm mainly going to comment on the device tree bindings here; hopefully
Jon and Terje can chime in on the code itself.

> diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
> b/Documentation/devicetree/bindings/gpu/drm/tegra.txt

> +Example:
> +
> +/memreserve/ 0x0e00 0x0200;
> +
> +...
> +
> +/ {
> + ...
> +
> + /* host1x */
> + host1x: host1x@5000 {
> + compatible = "nvidia,tegra20-host1x";
> + reg = <0x5000 0x00024000>;
> + interrupts = <0 64 0x04   /* cop syncpt */
> +   0 65 0x04   /* mpcore syncpt */
> +   0 66 0x04   /* cop general */
> +   0 67 0x04>; /* mpcore general */
> + };

The host1x module is apparently a register bus, behind which all the
other modules sit. According to the address map in the TRM, "all the
other modules" appears to include all of MPE, VI, CSI, EPP, ISP, GR2D,
GR3D, DISPLAY A/B, HDMI, TVO, DSI, plus VG, VS, VCI, DSIB on Tegra30.

That said, I believe Terje wasn't convinced that all those modules are
really host1x children, just some. Can you check please, Terje?

Also, I wonder if host1x is really the parent of these modules,
register-bus-access-wise, or whether the bus covers the "graphic host
registers" at 0x5000-0x50023fff?

As such, I think the DT nodes for all those modules should be within the
host1x node (or perhaps graphics host node, pending above discussion):

host1x: host1x@5000 {
mpe@5404 { ... };
vi@5408 { ... };
...
};

host1x can easily instantiate all the child nodes simply by calling
of_platform_populate() at the end of its probe; see
sound/soc/tegra/tegra30_ahub.c for an example.
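
i.e. something like this at the end of host1x's probe (a sketch, error
handling omitted):

/* Sketch: create platform devices for all child nodes of host1x. */
#include <linux/of_platform.h>
#include <linux/platform_device.h>

static int tegra_host1x_probe(struct platform_device *pdev)
{
        /* ... map registers, request IRQs, etc. ... */

        return of_platform_populate(pdev->dev.of_node, NULL, NULL,
                                    &pdev->dev);
}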

> + /* video-encoding/decoding */
> + mpe@5404 {
> + reg = <0x5404 0x0004>;
> + interrupts = <0 68 0x04>;
> + };

We'll probably end up needing a phandle from each of these nodes to
host1x, and a channel ID, so the drivers for these nodes can register
themselves with host1x. However, it's probably OK to defer the DT
binding for this until we actually start working on command-channels.

> + /* graphics host */
> + graphics@5400 {
> + compatible = "nvidia,tegra20-graphics";
> +
> + #address-cells = <1>;
> + #size-cells = <1>;
> + ranges;

I don't think those 3 properties are needed, unless there are child
nodes with registers.

> + display-controllers = <&disp1 &disp2>;
> + carveout = <0x0e00 0x0200>;
> + host1x = <&host1x>;
> + gart = <&gart>;
> +
> + connectors {

I'm not sure that it makes sense to put the connectors nodes underneath
the graphics host node; the connectors aren't objects with registers
that are accessed through a bus managed by the graphics node. Equally,
the connectors could be useful outside of the graphics driver - e.g. the
audio driver might need to refer to an HDMI connector.

Instead, I'd probably put the connector nodes at the top level of the
device tree, and have graphics contain a property that lists the
phandles of all available connectors.

> + #address-cells = <1>;
> + #size-cells = <0>;
> +
> + connector@0 {
> + reg = <0>;
> + edid = /incbin/("machine.edid");
> + output = <&lvds>;
> + };

I think each of these needs some kind of compatible value to indicate
its type. We also need the ability to represent both HDMI video and
audio streams, so that a sound card binding could use the HDMI connector
too. I see that drivers/extcon/ has appeared recently, and I wonder if
the connector nodes in DT shouldn't be handled by that subsystem?

> + connector@1 {
> + reg = <1>;
> + output = <&hdmi>;
> + ddc = <&i2c2>;
> +
> + hpd-gpio = <&gpio 111 0>; /* PN7 */
> + };
> + };
> + };
> +};

> diff --git a/arch/arm/mach-tegra/board-dt-tegra20.c 
> b/arch/arm/mach-tegra/board-dt-tegra20.c

> + { "host1x", "pll_c",14400,  true },
> + { "disp1",  "pl


[RFC v2 5/5] drm: Add NVIDIA Tegra support

2012-04-25 Thread Thierry Reding
This commit adds a very basic DRM driver for NVIDIA Tegra SoCs. It
currently has rudimentary GEM support and can run a console on the
framebuffer as well as X using the xf86-video-modesetting driver. Only
the RGB output is supported.

HDMI support was taken from NVIDIA's Linux kernel tree but it doesn't
quite work. EDID data can be retrieved but the output doesn't properly
activate the connected TV.

The DSI and TVO outputs and the HOST1X driver are just stubs that setup
the corresponding resources but don't do anything useful yet.

Signed-off-by: Thierry Reding 
---
 .../devicetree/bindings/gpu/drm/tegra.txt  |  125 +++
 arch/arm/mach-tegra/board-dt-tegra20.c |   14 +
 arch/arm/mach-tegra/include/mach/iomap.h   |6 +
 arch/arm/mach-tegra/tegra2_clocks.c|   19 +-
 drivers/gpu/drm/Kconfig|2 +
 drivers/gpu/drm/Makefile   |1 +
 drivers/gpu/drm/tegra/Kconfig  |   19 +
 drivers/gpu/drm/tegra/Makefile |7 +
 drivers/gpu/drm/tegra/tegra-dc.c   |  630 +
 drivers/gpu/drm/tegra/tegra-dc.h   |  245 +
 drivers/gpu/drm/tegra/tegra-drv.c  |  600 
 drivers/gpu/drm/tegra/tegra-drv.h  |  201 
 drivers/gpu/drm/tegra/tegra-dsi.c  |  145 +++
 drivers/gpu/drm/tegra/tegra-encon.c|  406 
 drivers/gpu/drm/tegra/tegra-fb.c   |  355 +++
 drivers/gpu/drm/tegra/tegra-gem.c  |  425 +
 drivers/gpu/drm/tegra/tegra-hdmi.c |  994 
 drivers/gpu/drm/tegra/tegra-hdmi.h |  462 +
 drivers/gpu/drm/tegra/tegra-host1x.c   |   90 ++
 drivers/gpu/drm/tegra/tegra-rgb.c  |  148 +++
 drivers/gpu/drm/tegra/tegra-tvo.c  |  152 +++
 include/drm/tegra_drm.h|   40 +
 22 files changed, 5075 insertions(+), 11 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/gpu/drm/tegra.txt
 create mode 100644 drivers/gpu/drm/tegra/Kconfig
 create mode 100644 drivers/gpu/drm/tegra/Makefile
 create mode 100644 drivers/gpu/drm/tegra/tegra-dc.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-dc.h
 create mode 100644 drivers/gpu/drm/tegra/tegra-drv.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-drv.h
 create mode 100644 drivers/gpu/drm/tegra/tegra-dsi.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-encon.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-fb.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-gem.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-hdmi.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-hdmi.h
 create mode 100644 drivers/gpu/drm/tegra/tegra-host1x.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-rgb.c
 create mode 100644 drivers/gpu/drm/tegra/tegra-tvo.c
 create mode 100644 include/drm/tegra_drm.h

diff --git a/Documentation/devicetree/bindings/gpu/drm/tegra.txt 
b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
new file mode 100644
index 000..6c23155
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpu/drm/tegra.txt
@@ -0,0 +1,125 @@
+Example:
+
+/memreserve/ 0x0e00 0x0200;
+
+...
+
+/ {
+   ...
+
+   /* host1x */
+   host1x: host1x@5000 {
+   compatible = "nvidia,tegra20-host1x";
+   reg = <0x5000 0x00024000>;
+   interrupts = <0 64 0x04   /* cop syncpt */
+ 0 65 0x04   /* mpcore syncpt */
+ 0 66 0x04   /* cop general */
+ 0 67 0x04>; /* mpcore general */
+   };
+
+   /* video-encoding/decoding */
+   mpe@5404 {
+   reg = <0x5404 0x0004>;
+   interrupts = <0 68 0x04>;
+   };
+
+   /* video input */
+   vi@5408 {
+   reg = <0x5408 0x0004>;
+   interrupts = <0 69 0x04>;
+   };
+
+   /* EPP */
+   epp@540c {
+   reg = <0x540c 0x0004>;
+   interrupts = <0 70 0x04>;
+   };
+
+   /* ISP */
+   isp@5410 {
+   reg = <0x5410 0x0004>;
+   interrupts = <0 71 0x04>;
+   };
+
+   /* 2D engine */
+   gr2d@5414 {
+   reg = <0x5414 0x0004>;
+   interrupts = <0 72 0x04>;
+   };
+
+   /* 3D engine */
+   gr3d@5418 {
+   reg = <0x5418 0x0004>;
+   };
+
+   /* display controllers */
+   disp1: dc@5420 {
+   compatible = "nvidia,tegra20-dc";
+   reg = <0x5420 0x0004>;
+   interrupts = <0 73 0x04>;
+   };
+
+   disp2: dc@5424 {
+   compatible = "nvidia,tegra20-dc";
+   reg = <0x5424 0x0004>;
+   interrupts = <0 74 0x04>;
+