Hi Ben,

On Tue, Mar 23, 2010 at 22:48:44, Ben Gardiner wrote:
> On Tue, Mar 23, 2010 at 1:04 PM, Nori, Sekhar <[email protected]> wrote:
> > On Mon, Mar 22, 2010 at 20:36:13, Vladimir Pantelic wrote:
> >> Ben Gardiner wrote:
> >> > On Mon, Mar 22, 2010 at 10:20 AM, Vladimir Pantelic
> >> > <[email protected]> wrote:
> >> >> Ben Gardiner wrote:
[...]
> >> >> So, the SPI data ends up in Linux user space anyway. So are you really
> >> >> sure you cannot achieve your latencies with Linux only?
> >> >
> >> > That's a good question. It is a bit of a hypothetical situation, so I
> >> > can't say for sure in this case. I guess what I'm trying to get across
> >> > is that we would prefer not to write Linux driver code, since having
> >> > a lot of custom Linux driver code has burned us in the past. We would
> >> > prefer to use existing code bases and drivers, and work to make those
> >> > existing drivers stable with patches (which we post back upstream).
> >>
> >> err, but aren't Linux SPI drivers "existing" and/or "stable"?
> >>
> >> > This 'proxy driver' would be one that grabs the platform resources and
> >> > does nothing else then? Are there existing examples of proxy drivers
> >> > like this?
> >>
> >> no idea.
> >>
> >
> > AFAIK, there are no such proxy drivers. IMHO, it is much easier
> > to statically partition resources between the two cores than to
> > create proxy drivers for resource sharing. So, if you want to
> > control a particular SPI instance from the DSP, just don't register
> > the platform device for that SPI instance in your Linux board file.
>
> Thank you for offering your advice, Sekhar. One of our concerns with
> this approach -- or so we think -- is that kernel power management
> would be free to suppress the clock of that device. Could you comment
> on whether this is a concern, and whether it can be addressed in a
> static partition of resources?

Right, power management will be an area of concern. If you are using
OMAP-L137, Linux will only disable the unused clocks at boot-up, and it
can be configured not to do even that. Also, the clocks can be turned
back on by the DSP when it starts to use them. The only other time
clocks can be turned off is at driver unload time, and even that should
happen only for peripherals for which a platform device is registered.
On OMAP-L138, more power management features are implemented, which
leads to more complication.

cpuidle will put the DDR into power-down mode even though the DSP is
very much active. Thankfully this is not a hard error, since DDR still
services requests when in power-down mode; the accesses will have higher
latency, though.

With cpufreq, the DSP will start running slower if the ARM is under less
load. There would be a need for some messaging from DSP to ARM via
DSPLink to set the cpufreq policy such that a lower frequency is not
attempted. cpufreq also affects drivers: any device used by the DSP that
derives its input clock from PLL0 and provides an external IO clock will
have to be informed of the clock change, so it can wait for IO to
complete and change its internal divider. Here again, some sort of
DSPLink-based messaging would have to come into play. (PLL1 remains
constant across cpufreq transitions.)

suspend-to-RAM will pose the biggest challenge, as all drivers running
on the DSP side will need to be informed of the impending clock
shutdown. Similar to the Linux PM framework on the ARM, there is a
DSP-side power management framework (PWRM), and the DSP drivers support
PM functionality as defined by PWRM. But, like Linux, it operates in
isolation. Unfortunately, there are no existing examples of using PM
with both cores cooperating with each other (AFAIK).

Thanks,
Sekhar
