RE: RE: RE: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-28 Thread Mohammed, Afzal
Hi Mike,

On Sat, Jan 26, 2013 at 04:14:53, Mike Turquette wrote:

 I think Paul W. or someone on the TI side should weigh in on your clkdev
 entries.  My main point is that the actual tree should be modeled and
 clocks shouldn't be globbed together unnecessarily.  As mentioned in the
 other mail thread you might be better off making a divider for your LCDC
 IP block and modeling each node individually.

It seems the driver's complexity would increase by creating a new inherited
divider clock and having 3-4 clock nodes in total. The advantage of going
that way would be a finer configurable resolution for the pixel clock, but
current use cases work without that extra resolution.

Also, the DRM driver posted for the same IP does not model these clocks with CCF.

So for now I will not model the clock nodes inside the LCDC IP; if use cases
later demand it, this can be done (and by then DaVinci will hopefully have
been converted to CCF, making a clean implementation easier).

Thanks for sharing your ideas.

Regards
Afzal

RE: RE: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-25 Thread Mohammed, Afzal
Hi Mike,

On Thu, Jan 24, 2013 at 22:30:44, Mike Turquette wrote:
 Quoting Mohammed, Afzal (2013-01-24 03:36:02)

  So there are 3 - LIDD is actually not needed for the present use case, and CORE
  could be clubbed with the divider to form a composite clock. CORE is
  in the functional clock path, so logically it is perfectly alright to have
  the composite clock.

 Some of the clock names are a bit generic, so a question that I'm going
 to repeat throughout my response: is this clock only inside of your
 video IP ?

Yes, these three clocks are inside the LCDC IP.

 Regarding the CORE clock, is this only inside of your IP or are you
 referring to the SoC CORE clock which is driven by a DPLL and clocks
 DDR and many other peripherals (often MMC, UART, etc)?

Sorry for the confusion; here CORE refers to a clock inside the LCDC IP and
should not be confused with the CORE PLL. I used the name CORE so that
it corresponds to the nomenclature in the LCDC section of the TRM.

 Note that this is from my past experience with OMAP, and I'm making an
 assumption that the clock scheme between OMAP and Da Vinci/AM335x parts
 isn't very different.

Additional detail: DaVinci doesn't have these 3 clock controls available,
so these three are required only on AM335x (which has IP version 2).

 Is there a public TRM I can look at?  It would help me understand this
 without having to ask you so many annoying questions ;)

No problem, http://www.ti.com/product/am3359


  And now we are left with DMA, which is actually in the interface clock
  path, of which the driver is unaware. An option would be to make the DMA clock
  a child of the CORE-plus-divider composite clock, even though logically
  DMA can't be considered part of the same path.

 Why is the driver unaware of the interface clk?  For instance OMAP3 had
 separate fclk and iclk for IPs and drivers would call clk_enable on
 both.  Or am I misunderstanding something?

HWMOD handles enabling those upon pm_runtime calls. HWMOD creates an alias
for the main clock with con-id "fck", but not for "ick", so currently the ick
is unavailable to the driver; continued below.
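
For illustration, a minimal sketch of the situation just described, assuming the
usual clk_get()/pm_runtime calls; the function name is hypothetical:

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Sketch: hwmod creates only the "fck" alias, so "ick" is not reachable. */
static int da8xx_fb_get_clocks(struct platform_device *pdev)
{
	struct clk *fck, *ick;

	pm_runtime_get_sync(&pdev->dev);	/* hwmod enables the module clocks */

	fck = clk_get(&pdev->dev, "fck");	/* works: alias created by hwmod */
	if (IS_ERR(fck))
		return PTR_ERR(fck);

	ick = clk_get(&pdev->dev, "ick");	/* fails today: no clkdev entry */
	if (IS_ERR(ick))
		dev_warn(&pdev->dev, "interface clock not available\n");

	return 0;
}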

 In general I don't think the clock subtree should be modeled in a way
 that is convenient for software, but instead model the actual hardware.
 Trust me, if you don't model the actual hardware then you will be very
 confused when you come back and revisit this code in 6 months and can't
 remember why things are so weird looking.

Ok, then it seems an OMAP clkdev entry for con-id "ick" should be created
as follows (dpll_core_m4_ck supplies the interface clock):

CLK("4830e000.lcdc", "ick", &dpll_core_m4_ck, CK_AM33XX)

And then in the driver, the DMA gate clock should be made a child of this clock
(obtained with con-id "ick").

Let me know your opinion on this.
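
For illustration, a minimal sketch of that driver-side registration, assuming the
clkdev entry above; the function name, the dma_clk field and the gate bit index
are hypothetical, while LCD_CLK_ENABLE_REG and da8xx_fb_reg_base come from the
existing driver:

#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lcdc_clk_lock);

/*
 * Sketch: register the LCDC DMA gate clock as a child of the interface
 * clock, i.e. the clock behind the "ick" con-id (dpll_core_m4_ck per the
 * clkdev entry above). Bit 2 of LCD_CLK_ENABLE_REG is assumed to be the
 * DMA clock enable bit.
 */
static int da8xx_fb_register_dma_clk(struct da8xx_fb_par *par)
{
	struct clk *dma_clk;

	dma_clk = clk_register_gate(par->dev, "lcdc_dma_clk",
				    "dpll_core_m4_ck", 0,
				    da8xx_fb_reg_base + LCD_CLK_ENABLE_REG,
				    2, 0, &lcdc_clk_lock);
	if (IS_ERR(dma_clk))
		return PTR_ERR(dma_clk);

	par->dma_clk = dma_clk;		/* hypothetical new field in da8xx_fb_par */
	return clk_prepare_enable(par->dma_clk);
}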

Regards
Afzal





Re: RE: RE: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-25 Thread Mike Turquette
Quoting Mohammed, Afzal (2013-01-25 04:05:44)
 Hi Mike,
 
 On Thu, Jan 24, 2013 at 22:30:44, Mike Turquette wrote:
  Quoting Mohammed, Afzal (2013-01-24 03:36:02)
 
   So there are 3 - LIDD is actually not needed for the present use case, and CORE
   could be clubbed with the divider to form a composite clock. CORE is
   in the functional clock path, so logically it is perfectly alright to have
   the composite clock.
 
  Some of the clock names are a bit generic, so a question that I'm going
  to repeat throughout my response: is this clock only inside of your
  video IP ?
 
 Yes, these three clocks are inside the LCDC IP.
 
  Regarding the CORE clock, is this only inside of your IP or are you
  referring to the SoC CORE clock which is driven by a DPLL and clocks
  DDR and many other peripherals (often MMC, UART, etc)?
 
 Sorry for the confusion; here CORE refers to a clock inside the LCDC IP and
 should not be confused with the CORE PLL. I used the name CORE so that
 it corresponds to the nomenclature in the LCDC section of the TRM.
 
  Note that this is from my past experience with OMAP, and I'm making an
  assumption that the clock scheme between OMAP and Da Vinci/AM335x parts
  isn't very different.
 
 Additional detail: DaVinci doesn't have these 3 clock controls available,
 so these three are required only on AM335x (which has IP version 2).
 
  Is there a public TRM I can look at?  It would help me understand this
  without having to ask you so many annoying questions ;)
 
 No problem, http://www.ti.com/product/am3359
 
 
   And now we are left with DMA, which is actually in the interface clock
   path, of which the driver is unaware. An option would be to make the DMA clock
   a child of the CORE-plus-divider composite clock, even though logically
   DMA can't be considered part of the same path.
 
  Why is the driver unaware of the interface clk?  For instance OMAP3 had
  separate fclk and iclk for IPs and drivers would call clk_enable on
  both.  Or am I misunderstanding something?
 
 HWMOD handles enabling those upon pm_runtime calls. HWMOD creates an alias
 for the main clock with con-id "fck", but not for "ick", so currently the ick
 is unavailable to the driver; continued below.
 
  In general I don't think the clock subtree should be modeled in a way
  that is convenient for software, but instead model the actual hardware.
  Trust me, if you don't model the actual hardware then you will be very
  confused when you come back and revisit this code in 6 months and can't
  remember why things are so weird looking.
 
 Ok, then it seems an OMAP clkdev entry for con-id "ick" should be created
 as follows (dpll_core_m4_ck supplies the interface clock):
 
 CLK("4830e000.lcdc", "ick", &dpll_core_m4_ck, CK_AM33XX)
 
 And then in the driver, the DMA gate clock should be made a child of this clock
 (obtained with con-id "ick").
 
 Let me know your opinion on this.
 

I think Paul W. or someone on the TI side should weigh in on your clkdev
entries.  My main point is that the actual tree should be modeled and
clocks shouldn't be globbed together unnecessarily.  As mentioned in the
other mail thread you might be better off making a divider for your LCDC
IP block and modeling each node individually.

Regards,
Mike

 Regards
 Afzal


RE: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-24 Thread Mohammed, Afzal
Hi Mike,

On Thu, Jan 24, 2013 at 01:52:04, Mike Turquette wrote:
 Quoting Afzal Mohammed (2013-01-23 03:48:56)

  +static inline void da8xx_fb_clkc_enable(void)
  +{
  	if (lcd_revision == LCD_VERSION_2)
  		lcdc_write(LCD_V2_DMA_CLK_EN | LCD_V2_LIDD_CLK_EN |
  			LCD_V2_CORE_CLK_EN, LCD_CLK_ENABLE_REG);

  +static inline int da8xx_fb_calc_config_clk_divider(struct da8xx_fb_par *par,
  +					struct fb_videomode *mode)
  +{

  +	ret = clk_set_rate(par->child_clk, PICOS2KHZ(mode->pixclock) * 1000);
  +	if (IS_ERR_VALUE(ret)) {
  +		dev_err(par->dev, "unable to setup pixel clock of %u ps",
  +			mode->pixclock);
  +		return ret;
  +	}
  +	da8xx_fb_clkc_enable();

 It looks like you are using the legacy method to enable/disable the
 clock and using the CCF basic divider to set the rate.  This feels a bit
 hacky to me.  If you want to model your clock in CCF then you should
 probably model the whole clock, not just the rate-change aspects of it.

Initially I thought about it, but seeing the requirement of 3 gate clocks
(due to 3 bits meant for different purposes - DMA, LIDD and CORE
functionality), I felt that having 4 clocks (3 gate + 1 divider) in the
driver would be overdesign [leaving a branch instead of a leaf of
the tree in the driver ;)].

 Have you looked at the composite clock patches from Prashant?  Those
 might give you the divider+gate properties that you are looking for:
 http://article.gmane.org/gmane.linux.kernel/1416697

Thanks for the pointer.

Now, with the composite clock in mind, I tried to relate it to what
is required for the present scenario.

So there are 3 - LIDD is actually not needed for the present use case, and CORE
could be clubbed with the divider to form a composite clock. CORE is
in the functional clock path, so logically it is perfectly alright to have
the composite clock.
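
To make that concrete, a rough sketch of such a CORE-gate-plus-divider composite
clock, written against the clk_register_composite() interface from the referenced
patches as it later settled in mainline. LCD_CTRL_REG and LCD_CLK_ENABLE_REG are
the driver's existing register defines; the function name, divider field
position/width, gate bit index and clock names are assumptions for illustration:

#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lcdc_clk_lock);

/* Sketch: one composite clock = CORE gate (LCD_CLK_ENABLE_REG) + pixel
 * clock divider (CLKDIV field in LCD_CTRL_REG, assumed at shift 8, width 8). */
static struct clk *da8xx_fb_register_pixel_clk(struct da8xx_fb_par *par,
					       const char *parent_name)
{
	struct clk_divider *div;
	struct clk_gate *gate;

	div = kzalloc(sizeof(*div), GFP_KERNEL);
	gate = kzalloc(sizeof(*gate), GFP_KERNEL);
	if (!div || !gate) {
		kfree(div);
		kfree(gate);
		return ERR_PTR(-ENOMEM);
	}

	div->reg = da8xx_fb_reg_base + LCD_CTRL_REG;
	div->shift = 8;				/* assumed CLKDIV position */
	div->width = 8;
	div->lock = &lcdc_clk_lock;

	gate->reg = da8xx_fb_reg_base + LCD_CLK_ENABLE_REG;
	gate->bit_idx = 0;			/* assumed CORE enable bit */
	gate->lock = &lcdc_clk_lock;

	return clk_register_composite(par->dev, "lcdc_pixel_clk",
				      &parent_name, 1,
				      NULL, NULL,		/* no mux */
				      &div->hw, &clk_divider_ops,
				      &gate->hw, &clk_gate_ops, 0);
}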

And now we are left with DMA, which is actually in the interface clock
path, of which the driver is unaware. An option would be to make the DMA clock
a child of the CORE-plus-divider composite clock, even though logically
DMA can't be considered part of the same path.

I also tried not enabling the DMA clock, and the driver was still able to drive
the display without any issues, so I was wondering whether to avoid
instantiating the DMA clock at all and hence have a single, simple
composite clock. I am trying to get information internally on whether
not setting the DMA clock bits actually makes a difference.

If you have any opinion on how to deal here, let me know.

Regards
Afzal




Re: RE: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-24 Thread Mike Turquette
Quoting Mohammed, Afzal (2013-01-24 03:36:02)
 Hi Mike,
 
 On Thu, Jan 24, 2013 at 01:52:04, Mike Turquette wrote:
  Quoting Afzal Mohammed (2013-01-23 03:48:56)
 
   +static inline void da8xx_fb_clkc_enable(void)
   +{
   	if (lcd_revision == LCD_VERSION_2)
   		lcdc_write(LCD_V2_DMA_CLK_EN | LCD_V2_LIDD_CLK_EN |
   			LCD_V2_CORE_CLK_EN, LCD_CLK_ENABLE_REG);
 
   +static inline int da8xx_fb_calc_config_clk_divider(struct da8xx_fb_par *par,
   +					struct fb_videomode *mode)
   +{
 
   +	ret = clk_set_rate(par->child_clk, PICOS2KHZ(mode->pixclock) * 1000);
   +	if (IS_ERR_VALUE(ret)) {
   +		dev_err(par->dev, "unable to setup pixel clock of %u ps",
   +			mode->pixclock);
   +		return ret;
   +	}
   +	da8xx_fb_clkc_enable();
 
  It looks like you are using the legacy method to enable/disable the
  clock and using the CCF basic divider to set the rate.  This feels a bit
  hacky to me.  If you want to model your clock in CCF then you should
  probably model the whole clock, not just the rate-change aspects of it.
 
 Initially I thought about it, but seeing the requirement of 3 gate clocks
 (due to 3 bits meant for different purposes - DMA, LIDD and CORE
 functionality), I felt that having 4 clocks (3 gate + 1 divider) in the
 driver would be overdesign [leaving a branch instead of a leaf of
 the tree in the driver ;)].
 
  Have you looked at the composite clock patches from Prashant?  Those
  might give you the divider+gate properties that you are looking for:
  http://article.gmane.org/gmane.linux.kernel/1416697
 
 Thanks for the pointer.
 
 Now, with the composite clock in mind, I tried to relate it to what
 is required for the present scenario.
 
 So there are 3 - LIDD is actually not needed for the present use case, and CORE
 could be clubbed with the divider to form a composite clock. CORE is
 in the functional clock path, so logically it is perfectly alright to have
 the composite clock.
 

Some of the clock names are a bit generic, so a question that I'm going
to repeat throughout my response: is this clock only inside of your
video IP ?

Regarding the CORE clock, is this only inside of your IP or are you
referring to the SoC CORE clock which is driven by a DPLL and clocks
DDR and many other peripherals (often MMC, UART, etc)?

Note that this is from my past experience with OMAP, and I'm making an
assumption that the clock scheme between OMAP and Da Vinci/AM335x parts
isn't very different.

Is there a public TRM I can look at?  It would help me understand this
without having to ask you so many annoying questions ;)

 And now we are left with DMA, which is actually in the interface clock
 path, of which the driver is unaware. An option would be to make the DMA clock
 a child of the CORE-plus-divider composite clock, even though logically
 DMA can't be considered part of the same path.
 

Why is the driver unaware of the interface clk?  For instance OMAP3 had
separate fclk and iclk for IPs and drivers would call clk_enable on
both.  Or am I misunderstanding something?

In general I don't think the clock subtree should be modeled in a way
that is convenient for software, but instead model the actual hardware.
Trust me, if you don't model the actual hardware then you will be very
confused when you come back and revisit this code in 6 months and can't
remember why things are so weird looking.

Thanks,
Mike

 I also tried not enabling the DMA clock, and the driver was still able to drive
 the display without any issues, so I was wondering whether to avoid
 instantiating the DMA clock at all and hence have a single, simple
 composite clock. I am trying to get information internally on whether
 not setting the DMA clock bits actually makes a difference.
 
 If you have any opinion on how to deal here, let me know.
 
 Regards
 Afzal


Re: [PATCH v4 12/12] video: da8xx-fb: CCF clock divider handling

2013-01-23 Thread Mike Turquette
Quoting Afzal Mohammed (2013-01-23 03:48:56)
snip
 +static inline void da8xx_fb_clkc_enable(void)
 +{
 	if (lcd_revision == LCD_VERSION_2)
 		lcdc_write(LCD_V2_DMA_CLK_EN | LCD_V2_LIDD_CLK_EN |
 			LCD_V2_CORE_CLK_EN, LCD_CLK_ENABLE_REG);
  }
 
 -static inline void da8xx_fb_calc_config_clk_divider(struct da8xx_fb_par *par,
 +#ifdef CONFIG_COMMON_CLK
 +static inline int da8xx_fb_calc_config_clk_divider(struct da8xx_fb_par *par,
 +					struct fb_videomode *mode)
 +{
 +	int ret;
 +
 +	ret = clk_set_rate(par->child_clk, PICOS2KHZ(mode->pixclock) * 1000);
 +	if (IS_ERR_VALUE(ret)) {
 +		dev_err(par->dev, "unable to setup pixel clock of %u ps",
 +			mode->pixclock);
 +		return ret;
 +	}
 +	da8xx_fb_clkc_enable();

It looks like you are using the legacy method to enable/disable the
clock and using the CCF basic divider to set the rate.  This feels a bit
hacky to me.  If you want to model your clock in CCF then you should
probably model the whole clock, not just the rate-change aspects of it.
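
For illustration, a sketch of the fully CCF-modeled alternative being suggested
here, i.e. setting the rate and enabling through the clock API instead of writing
LCD_CLK_ENABLE_REG directly; treating par->child_clk as a prepared CCF
gate-plus-divider clock is an assumption about how the rework could look, not the
posted code:

#include <linux/clk.h>
#include <linux/fb.h>

static int da8xx_fb_config_clk_divider(struct da8xx_fb_par *par,
					struct fb_videomode *mode)
{
	int ret;

	/* divider modeled in CCF: just ask for the pixel clock rate */
	ret = clk_set_rate(par->child_clk, PICOS2KHZ(mode->pixclock) * 1000);
	if (ret) {
		dev_err(par->dev, "unable to set pixel clock of %u ps\n",
			mode->pixclock);
		return ret;
	}

	/* gate modeled in CCF too: no da8xx_fb_clkc_enable() register write */
	return clk_prepare_enable(par->child_clk);
}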

Have you looked at the composite clock patches from Prashant?  Those
might give you the divider+gate properties that you are looking for:
http://article.gmane.org/gmane.linux.kernel/1416697

Regards,
Mike