Re: [PATCH v3 0/4] Introduce hardware spinlock framework
Hi Andrew,

On Sat, Dec 18, 2010 at 2:53 AM, Tony Lindgren t...@atomide.com wrote:
> * Ohad Ben-Cohen o...@wizery.com [101216 13:34]:
>> Tony, Andrew, can you please have a look ? Any comment or suggestion is appreciated.
>
> Looks sane to me from omap point of view and it seems the locks are now all timeout based:
>
> Acked-by: Tony Lindgren t...@atomide.com

Can you please have a look at this patch set (see link no. [1] below)?

Short summary: this hwspinlock framework adds support for hardware-based locking devices, which are needed to accomplish synchronization and mutual exclusion between remote processors that have no alternative mechanism to do so. This patch set adds the framework and an OMAP implementation. It has circulated several times on the relevant mailing lists, and all comments were addressed. The third version of this patch set [1] was submitted about a month ago, and so far there are no outstanding comments.

Users for this framework that we are aware of:

1. OMAP4 and Davinci Netra (DM8168): both of these SoCs have the same hwspinlock module and will share the same host implementation.

2. Other platforms (such as omap3530 and omapl1xx) that have no such hardware support, but still need to achieve synchronization (to communicate with their DSP). The only way to achieve mutual exclusion on those platforms is by using a shared-memory synchronization algorithm called Peterson's Algorithm [2]. We would still need the same hwspinlock framework for that: the busy looping, the timeout, the various locking schemes and the lock resource allocation are all still valid. The only difference is the actual lock implementation, so we will add another host implementation for these platforms.

3. The C6474, a multi-core DSP device [3], has Linux running on one of its cores, and hardware support for synchronization (btw, the C6474 has a richer hardware module that needs more than the hwspinlock framework offers today: it also supports queueing, owner semantics and interrupt notification to let a processor know when it acquires a lock, so it wouldn't have to spin). Disclaimer: it will probably take some time until c6x support is merged upstream, but this is something that is being actively worked on [4].

Any comment or suggestion is appreciated.

Thanks,
Ohad.

[1] http://www.mail-archive.com/linux-omap@vger.kernel.org/msg39833.html
[2] http://en.wikipedia.org/wiki/Peterson's_algorithm
[3] http://focus.ti.com/docs/prod/folders/print/tms320c6474.html
[4] http://www.linux-c6x.org/wiki/index.php/Main_Page

--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
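[Editor's note] The Peterson's Algorithm fallback mentioned in item 2 above can be sketched as follows. This is an illustrative userspace demo, not code from the patch set: the function names, the two-pthread exercise and the `__sync_synchronize()` barrier placement are assumptions; a real backend would operate on memory shared between heterogeneous processors and would also have to deal with non-coherent caches.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Peterson's algorithm for two contenders. flag[] and turn must live in
 * memory visible to both sides (shared memory between processors in the
 * hwspinlock case; ordinary globals in this userspace demo). */
static volatile int flag[2];
static volatile int turn;

static void peterson_lock(int self)
{
	int other = 1 - self;

	flag[self] = 1;       /* announce interest */
	turn = other;         /* yield priority to the other side */
	__sync_synchronize(); /* full barrier: publish stores before polling */
	while (flag[other] && turn == other)
		;             /* busy-wait, just as the hwspinlock core does */
	__sync_synchronize(); /* acquire: keep critical section after the wait */
}

static void peterson_unlock(int self)
{
	__sync_synchronize(); /* release: flush critical-section stores first */
	flag[self] = 0;
}

/* Demo: two threads bump an otherwise unprotected counter under the lock. */
enum { ITERS = 100000 };
static long counter;

static void *worker(void *arg)
{
	int self = (int)(intptr_t)arg;

	for (int i = 0; i < ITERS; i++) {
		peterson_lock(self);
		counter++;
		peterson_unlock(self);
	}
	return NULL;
}
```

Note the busy looping: exactly the behavior the framework wraps with timeouts and irq-handling variants, which is why a Peterson-based host implementation can slot in under the same API.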
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, 4 Jan 2011 14:23:20 +0200 Ohad Ben-Cohen o...@wizery.com wrote:
> Hi Andrew,
>
> On Sat, Dec 18, 2010 at 2:53 AM, Tony Lindgren t...@atomide.com wrote:
>> * Ohad Ben-Cohen o...@wizery.com [101216 13:34]:
>>> Tony, Andrew, can you please have a look ? Any comment or suggestion is appreciated.
>>
>> Looks sane to me from omap point of view and it seems the locks are now all timeout based:
>>
>> Acked-by: Tony Lindgren t...@atomide.com
>
> Can you please have a look at this patch set (see link no. [1] below)?

I looked - it looks reasonable. This is exactly the wrong time to be looking at large new patchsets - please refresh, retest and resend after -rc1 is released?
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, Jan 4, 2011 at 10:19 PM, Andrew Morton a...@linux-foundation.org wrote:
>>> Acked-by: Tony Lindgren t...@atomide.com
>>
>> Can you please have a look at this patch set (see link no. [1] below)?
>
> I looked - it looks reasonable. This is exactly the wrong time to be looking at large new patchsets - please refresh, retest and resend after -rc1 is released?

Sure, thanks!
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
* Ohad Ben-Cohen o...@wizery.com [101216 13:34]:
> On Thu, Dec 16, 2010 at 11:08 PM, Greg KH g...@kroah.com wrote:
>> I've seen a lot of discussion about this, are all of the review comments now addressed?
>>
>>> Yes, all comments were addressed in this v3 iteration, and this thread has been idle for about 10 days.
>>
>> That's because we are all busy with other things :(
>
> Oh sure, my intention was only positive (to demonstrate that there are no outstanding comments), sorry if it sounded differently than intended.
>
>> Sorry, I really don't have the time at the moment to review this code, nor does it seem to affect areas that I maintain, so I don't understand what you are looking for from me.
>
> Sorry for the noise - I contacted you because you maintain the driver core, in the hope you could suggest where this can go through.
>
> Tony, Andrew, can you please have a look ? Any comment or suggestion is appreciated.

Looks sane to me from omap point of view and it seems the locks are now all timeout based:

Acked-by: Tony Lindgren t...@atomide.com

Regards,

Tony
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, Dec 14, 2010 at 8:40 PM, Ohad Ben-Cohen o...@wizery.com wrote:
> On Tue, Dec 14, 2010 at 7:06 PM, Greg KH g...@kroah.com wrote:
>> Like the most important one, why is this generic code if it's only for one specific platform?
>
> We started out with an omap-specific driver, but Tony preferred that we make this a platform-agnostic framework, in order to keep the IPC drivers that will use it generic, and assuming that more users will show up for such a framework.

I was just told about additional users for this (thanks Mugdha):

1) There are several platforms (such as omap3530 and omapl1xx) that have no such hardware support, but would still need to use the same IPC drivers (to communicate with their DSP). The only way to achieve mutual exclusion on those platforms is by using a shared-memory synchronization algorithm called Peterson's Algorithm [1]. We would still need the same hwspinlock framework for that: the busy looping, the timeout, the various locking schemes and the resource allocation are all still valid. The only difference is the actual lock implementation.

2) The C6474, which is a multi-core DSP device [2], has Linux running on one of its cores, and would be using the same IPC drivers, too. The C6474 has hardware support for synchronization, which would also benefit from such a hwspinlock module (btw, the C6474 has a richer hardware module that needs more than the hwspinlock framework offers today: it also supports queueing, owner semantics and interrupt notification to let a processor know when it acquires a lock, so it wouldn't have to spin).

Thanks,
Ohad.

[1] http://en.wikipedia.org/wiki/Peterson's_algorithm
[2] http://focus.ti.com/docs/prod/folders/print/tms320c6474.html

> To me it sounds reasonable, but both ways (framework / omap-specific driver) will work for us just fine, and I can switch back to a misc driver if this is preferred.
>
> The complete discussion of v1 is at:
> http://comments.gmane.org/gmane.linux.kernel/1049802
>
> We also discussed this at v2 of the patches with David, see the complete discussion at:
> http://comments.gmane.org/gmane.linux.kernel/1067016
>
> Thanks,
> Ohad.
>
>> thanks,
>> greg k-h
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, Dec 14, 2010 at 08:40:28PM +0200, Ohad Ben-Cohen wrote:
> On Tue, Dec 14, 2010 at 7:06 PM, Greg KH g...@kroah.com wrote:
>>> Can you please have a look and say if this looks OK ?
>>
>> Look at what, I don't see a patch here.
>
> Here's the complete patchset:
> http://www.mail-archive.com/linux-omap@vger.kernel.org/msg39833.html
>
> If you prefer, I can resubmit.
>
>> I've seen a lot of discussion about this, are all of the review comments now addressed?
>
> Yes, all comments were addressed in this v3 iteration, and this thread has been idle for about 10 days.

That's because we are all busy with other things :(

Sorry, I really don't have the time at the moment to review this code, nor does it seem to affect areas that I maintain, so I don't understand what you are looking for from me.

confused,

greg k-h
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Thu, Dec 16, 2010 at 11:08 PM, Greg KH g...@kroah.com wrote:
> I've seen a lot of discussion about this, are all of the review comments now addressed?
>
>> Yes, all comments were addressed in this v3 iteration, and this thread has been idle for about 10 days.
>
> That's because we are all busy with other things :(

Oh sure, my intention was only positive (to demonstrate that there are no outstanding comments), sorry if it sounded differently than intended.

> Sorry, I really don't have the time at the moment to review this code, nor does it seem to affect areas that I maintain, so I don't understand what you are looking for from me.

Sorry for the noise - I contacted you because you maintain the driver core, in the hope you could suggest where this can go through.

Tony, Andrew, can you please have a look ? Any comment or suggestion is appreciated.

Thanks,
Ohad.
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
Hi Greg, Tony,

On Sat, Dec 4, 2010 at 1:50 AM, Ohad Ben-Cohen o...@wizery.com wrote:
> OMAP4 introduces a Hardware Spinlock device, which provides hardware assistance for synchronization and mutual exclusion between heterogeneous processors and those not operating under a single, shared operating system (e.g. OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP).
>
> The intention of this hardware device is to allow remote processors, that have no alternative mechanism to accomplish synchronization and mutual exclusion operations, to share resources (such as memory and/or any other hardware resource).
>
> This patchset adds a hwspinlock framework that makes it possible for drivers to use those hwspinlock devices and stay platform-independent.
>
> ...
>
>  Documentation/hwspinlock.txt               | 299 +++
>  arch/arm/mach-omap2/Makefile               |   1 +
>  arch/arm/mach-omap2/hwspinlock.c           |  63
>  arch/arm/mach-omap2/omap_hwmod_44xx_data.c |  64
>  drivers/Kconfig                            |   2 +
>  drivers/Makefile                           |   1 +
>  drivers/hwspinlock/Kconfig                 |  22 ++
>  drivers/hwspinlock/Makefile                |   6 +
>  drivers/hwspinlock/hwspinlock.h            |  61 +++
>  drivers/hwspinlock/hwspinlock_core.c       | 557
>  drivers/hwspinlock/omap_hwspinlock.c       | 231
>  include/linux/hwspinlock.h                 | 298 +++
>  12 files changed, 1605 insertions(+), 0 deletions(-)

Can you please have a look and say if this looks OK ? If it does, where would you prefer this to go through ?

Thanks,
Ohad.
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, Dec 14, 2010 at 04:31:36PM +0200, Ohad Ben-Cohen wrote:
> Hi Greg, Tony,
>
> On Sat, Dec 4, 2010 at 1:50 AM, Ohad Ben-Cohen o...@wizery.com wrote:
>> OMAP4 introduces a Hardware Spinlock device, which provides hardware assistance for synchronization and mutual exclusion between heterogeneous processors and those not operating under a single, shared operating system (e.g. OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP).
>>
>> The intention of this hardware device is to allow remote processors, that have no alternative mechanism to accomplish synchronization and mutual exclusion operations, to share resources (such as memory and/or any other hardware resource).
>>
>> This patchset adds a hwspinlock framework that makes it possible for drivers to use those hwspinlock devices and stay platform-independent.
>>
>> ...
>>
>>  Documentation/hwspinlock.txt               | 299 +++
>>  arch/arm/mach-omap2/Makefile               |   1 +
>>  arch/arm/mach-omap2/hwspinlock.c           |  63
>>  arch/arm/mach-omap2/omap_hwmod_44xx_data.c |  64
>>  drivers/Kconfig                            |   2 +
>>  drivers/Makefile                           |   1 +
>>  drivers/hwspinlock/Kconfig                 |  22 ++
>>  drivers/hwspinlock/Makefile                |   6 +
>>  drivers/hwspinlock/hwspinlock.h            |  61 +++
>>  drivers/hwspinlock/hwspinlock_core.c       | 557
>>  drivers/hwspinlock/omap_hwspinlock.c       | 231
>>  include/linux/hwspinlock.h                 | 298 +++
>>  12 files changed, 1605 insertions(+), 0 deletions(-)
>
> Can you please have a look and say if this looks OK ?

Look at what, I don't see a patch here.

I've seen a lot of discussion about this, are all of the review comments now addressed? Like the most important one, why is this generic code if it's only for one specific platform?

thanks,

greg k-h
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Tue, Dec 14, 2010 at 7:06 PM, Greg KH g...@kroah.com wrote:
>> Can you please have a look and say if this looks OK ?
>
> Look at what, I don't see a patch here.

Here's the complete patchset:
http://www.mail-archive.com/linux-omap@vger.kernel.org/msg39833.html

If you prefer, I can resubmit.

> I've seen a lot of discussion about this, are all of the review comments now addressed?

Yes, all comments were addressed in this v3 iteration, and this thread has been idle for about 10 days.

> Like the most important one, why is this generic code if it's only for one specific platform?

We started out with an omap-specific driver, but Tony preferred that we make this a platform-agnostic framework, in order to keep the IPC drivers that will use it generic, and assuming that more users will show up for such a framework.

To me it sounds reasonable, but both ways (framework / omap-specific driver) will work for us just fine, and I can switch back to a misc driver if this is preferred.

The complete discussion of v1 is at:
http://comments.gmane.org/gmane.linux.kernel/1049802

We also discussed this at v2 of the patches with David, see the complete discussion at:
http://comments.gmane.org/gmane.linux.kernel/1067016

Thanks,
Ohad.

> thanks,
> greg k-h
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On 12/03/2010 03:50 PM, Ohad Ben-Cohen wrote:
> OMAP4 introduces a Hardware Spinlock device, which provides hardware assistance for synchronization and mutual exclusion between heterogeneous processors and those not operating under a single, shared operating system (e.g. OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP).
>
> The intention of this hardware device is to allow remote processors, that have no alternative mechanism to accomplish synchronization and mutual exclusion operations, to share resources (such as memory and/or any other hardware resource).

Does anything other than OMAP4 have one of these things? And are there any devices that are commonly encountered across more than one platform that might have these hardware spinlocks, thus making it advantageous to have a common framework? If not, why a general-purpose framework for a very chip-specific feature?

There are chips with hardware atomic memory, but the only time it makes sense to use it is in conjunction with specialized on-chip devices. So the implementation details are kept in the drivers rather than creating a general-purpose hwatomic (with its hwatomic_add() et al.) type.

David Daney

> This patchset adds a hwspinlock framework that makes it possible for drivers to use those hwspinlock devices and stay platform-independent.
>
> Currently there are two users for this hwspinlock interface:
>
> 1. Inter-processor communications: on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the remote M3 and/or C64x+ slave processors. To achieve fast message-based communications, minimal kernel support is needed to deliver messages arriving from a remote processor to the appropriate user process. This communication is based on a simple data structure that is shared between the remote processors, and access to it is synchronized using the hwspinlock module (the remote processor directly places new messages in this shared data structure).
>
> 2. On some OMAP4 boards, the I2C bus is shared between the A9 and the M3, and the hwspinlock is used to synchronize access to it.
>
> While (2) can get away with an omap-specific hwspinlock implementation, (1) is by no means omap-specific, and a common hwspinlock interface is needed to keep it generic, in hopes that it will be useful for other platforms as well.
>
> Changes v2->v3:
> - Remove the timeout-less _lock API variant (Tony)
> - s/state->io_base/io_base/ (Ionut)
> - Remove the "generic" wording (David)
> - s/hwspinlock_/hwspin_lock_/ (Mugdha)
> - Use MAX_SCHEDULE_TIMEOUT to indicate no timeout (Mugdha)
> - Be more verbose on egregious API misuse (Olof)
> - Locking API misuse is BUG_ON material (Russell)
>
> Note: Russell also suggested compiling out NULL checks on ARM. I've posted an initial proposal for this (see https://lkml.org/lkml/2010/11/29/96), which I'm going to resubmit separately. If accepted, I'll adapt hwspinlocks to use it.
>
> Patches are tested against linux-omap-2.6 master, which is 2.6.37-rc4 plus omap material. All but the third (the hwmod omap patch) apply on top of mainline 2.6.37-rc4 as well.
>
> Changes v1->v2:
> - Convert to a generic interface (Tony)
> - API should silently succeed if framework isn't built (Greg)
> - Don't use ERR_PTR pattern (Grant)
> - Use tristate, fix and extend commentary (Kevin)
> - Provide API flexibility regarding irq handling (Arnd, Grant)
>
> Note: after reviewing OMAP's L4 access times, and comparing them with external memory latencies, I can say that there is no notable difference. Because of that, we can safely treat the hwspinlock like we do regular spinlocks: preemption should be disabled, but whether to disable interrupts or not is up to the caller. So despite the TRM's recommendation to always disable local interrupts when taking an OMAP Hardware Spinlock, I have decided to allow callers not to do that (by providing the full extent of the hwspin_lock(), hwspin_lock_irq() and hwspin_lock_irqsave() API). Just like regular spinlocks, it's up to the callers to decide whether interrupts should be disabled or not. Sleeping, btw, is still prohibited of course.
>
> Contributions: previous versions of an omap-specific hwspinlock driver circulated in linux-omap several times, and received substantial attention and contribution from many developers (see [1][2][3][4][5][6]):
> - Simon Que did the initial implementation and pushed several iterations
> - Benoit Cousson provided extensive review, help, improvements and hwmod support
> - Hari Kanigeri helped out when Simon was away
> - Sanjeev Premi, Santosh Shilimkar and Nishanth Menon did lots of review
>
> I'd like to thank Benoit Cousson, Steve Krueger, Hari Kanigeri, Nourredine Hamoudi and Richard Woodruff for useful discussions about the OMAP Spinlock requirements and use-cases.
>
> Relevant linux-omap threads:
> [1] http://thread.gmane.org/gmane.linux.ports.arm.omap/38755
> [2] http://thread.gmane.org/gmane.linux.ports.arm.omap/38917
> [3] http://thread.gmane.org/gmane.linux.ports.arm.omap/39187
> [4]
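[Editor's note] As a rough illustration of the timeout-based, caller-decides-about-interrupts API discussed in the quoted cover letter, here is a hedged driver-side sketch. It follows the hwspinlock API as it eventually landed in mainline (Documentation/hwspinlock.txt); the exact function names and the millisecond timeout unit are taken from there and may differ slightly from this v3 posting. It is a kernel-code fragment, not standalone-buildable.

```c
#include <linux/hwspinlock.h>

static int demo_touch_shared_resource(void)
{
	struct hwspinlock *hwlock;
	unsigned long flags;
	int ret;

	/* Dynamically assign a free hwspinlock; a pre-agreed lock id
	 * shared with the remote processor would instead use
	 * hwspin_lock_request_specific(id). */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		return -EBUSY;

	/* Spin for up to 10 ms. The _irqsave variant disables local
	 * interrupts and saves their state; plain hwspin_lock_timeout()
	 * leaves them enabled - the caller decides, just like with
	 * regular spinlocks. Preemption is disabled either way, so the
	 * critical section must not sleep. */
	ret = hwspin_lock_timeout_irqsave(hwlock, 10, &flags);
	if (ret)
		goto out; /* -ETIMEDOUT: the remote side held the lock too long */

	/* ... manipulate the data structure shared with the remote
	 * processor ... */

	hwspin_unlock_irqrestore(hwlock, &flags);
out:
	hwspin_lock_free(hwlock);
	return ret;
}
```

The timeout-only API (no timeout-less _lock variant) is exactly what the v2->v3 changelog above calls out, so a caller can never spin forever against a wedged remote processor.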
Re: [PATCH v3 0/4] Introduce hardware spinlock framework
On Sat, Dec 4, 2010 at 2:29 AM, David Daney dda...@caviumnetworks.com wrote:
> Does anything other than OMAP4 have one of these things?

I'm not aware of any, but David Brownell had some ideas about it in our previous v2 discussion (see http://www.mail-archive.com/linux-omap@vger.kernel.org/msg39413.html). Btw, you might want to read through that entire discussion with David, since it touches the same questions you raise here.

> And are there any devices that are commonly encountered across more than one platform that might have these hardware spinlocks, thus making it advantageous to have a common framework?

Such devices are just multiple processors that have no alternative mechanism to accomplish synchronization. OMAP4 has a few non-coherent heterogeneous processors that do not support fancy synchronization primitives, so it needs this hwspinlock peripheral. Otherwise, I don't know how common that is (or might become), and I'd hate to speculate, but I suspect that OMAP is not going to be the only platform with multiple coprocessors that have no other means of achieving synchronization, and with which inter-processor communication is desired.

> If not, why a general-purpose framework for a very chip-specific feature?

We started out with an omap-specific driver (see the first iteration at http://lwn.net/Articles/410375/), but were asked to make it a platform-agnostic framework, in order to keep the IPC drivers that will use it generic (and assuming that more users will show up for such a framework). To me it sounds reasonable, but again, I prefer not to speculate about how many users will show up for this, if any. Both ways (framework / omap-specific driver) will work for us just fine.

> There are chips with hardware atomic memory, but the only time it makes sense to use it is in conjunction with specialized on-chip devices. So the implementation details are kept in the drivers rather than creating a general-purpose hwatomic (with its hwatomic_add() et al.) type.

This case is a bit different IMHO: some of the potential users of hwspinlocks are just generic IPC drivers that are by no means platform-specific. It's not a special on-chip device: it's just one processor trying to achieve mutual exclusion with another processor before manipulating some shared data structure.