On Wed, 2011-02-23 at 02:41 -0600, Taneja, Archit wrote:
> Currently, the core DSS platform device requests an irq line for OMAP2 and
> OMAP3. Make the DISPC and DSI platform devices request a shared IRQ line.
> 
> On OMAP3, the logical OR of the DSI and DISPC interrupt lines goes to the
> MPU. There is a register, DSS_IRQSTATUS, which tells whether the interrupt
> came from DISPC or DSI.
> 
> On OMAP2 there is no DSI, so only DISPC interrupts go to the MPU, and there
> is no DSS_IRQSTATUS register.
> 
> Hence, it makes more sense to have separate irq handlers for the DSS
> sub-modules instead of a common handler.
> 
> Since on OMAP3 the logical OR of the lines goes to the MPU, the irq line is
> shared among the IRQ handlers.
> 
> The hwmod irq info has been moved from DSS to DISPC and DSI in the OMAP2
> and OMAP3 hwmod databases. The probes of DISPC and DSI now request the irq
> handlers.

<snip>

> +       r = request_irq(dispc.irq, omap_dispc_irq_handler, IRQF_SHARED,
> +               "OMAP DISPC", dispc.pdev);
> +       if (r < 0) {
> +               DSSERR("request_irq failed\n");
> +               goto fail1;
>         }
> 
>         enable_clocks(1);
> @@ -3361,10 +3388,15 @@ static int omap_dispchw_probe(struct platform_device *pdev)
>         enable_clocks(0);
> 
>         return 0;
> +fail1:
> +       iounmap(dispc.base);
> +fail0:
> +       return r;
>  }
> 
>  static int omap_dispchw_remove(struct platform_device *pdev)
>  {
> +       free_irq(dispc.irq, NULL);

This fails when unloading the DSS module. free_irq() needs the same dev_id
cookie that was passed to request_irq(), dispc.pdev in this case. The same
applies to dsi.

I fixed this, and a minor conflict in dsi's fail path. The commit is in
my master branch. Please check the commit to see that I didn't mess
anything up.

Otherwise the patch is good.

 Tomi


--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
