Re: [PATCH v1 05/24] clk: wrap I/O access for improved portability

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 09:04:02AM +0200, Gerhard Sittig wrote:
 The common clock API assumes (it's part of the contract) that
 there are potentially expensive operations like get, put, prepare
 and unprepare, as well as swift and non-blocking operations like
 enable and disable.

Let's get something straight here, because what you've said above is
wrong.

1. clk_get() and clk_put() are NOT part of the common clock API.
   They're separate - they're part of the clk API, and the infrastructure
   behind that is clkdev, which is a separately owned thing (by me.)

2. The contract of the clk API is defined by the clk API, not by some
   random implementation like the common clock API.  The clk API is
   maintained by myself, and is described in include/linux/clk.h

3. clk_prepare() and clk_unprepare() are functions which MUST only be called
   from contexts where sleeping is permitted.  These functions MAY sleep
   for whatever reason they require to, and as long as they require to.
   (This is the whole reason these two functions were created in the
   first place.)

4. clk_enable() and clk_disable() MAY be called from any context, but
   MUST never sleep.  If you need to talk over a non-atomic bus for these,
   then these functions should be no-ops, and the code which does that
   must be executed from the clk_prepare()/clk_unprepare() operations.

That is the clk API contract.  The CCF has no bearing on this; if it
disagrees, then the CCF is buggy and is non-conformant.
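
As an illustrative sketch of that contract (the mydev_* names and struct are
invented here, not taken from any driver), the split looks roughly like this:

#include <linux/clk.h>
#include <linux/interrupt.h>

struct mydev {
        struct clk *clk;
};

/* sleepable context: clk_prepare() may block, e.g. for an I2C-attached clock */
static int mydev_power_up(struct mydev *dev)
{
        return clk_prepare(dev->clk);
}

/* atomic context: clk_enable()/clk_disable() must never sleep */
static irqreturn_t mydev_irq(int irq, void *data)
{
        struct mydev *dev = data;

        clk_enable(dev->clk);
        /* ... touch the hardware ... */
        clk_disable(dev->clk);
        return IRQ_HANDLED;
}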


Re: [PATCH 1/3] dmaengine: add dma_get_slave_sg_limits()

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 11:46:39AM -0500, Joel Fernandes wrote:
 The API is optionally implemented by dmaengine drivers and when
 unimplemented will return a NULL pointer. A client driver using
 this API provides the required dma channel, address width, and
 burst size of the transfer. dma_get_slave_sg_limits() returns an
 SG limits structure with the maximum number and size of SG segments
 that the given channel can handle.

Please look at what's already in struct device:

struct device {
...
struct device_dma_parameters *dma_parms;
...
};

This provides:

struct device_dma_parameters {
/*
 * a low level driver may set these to teach IOMMU code about
 * sg limitations.
 */
unsigned int max_segment_size;
unsigned long segment_boundary_mask;
};

Now, these are helpfully accessed via:

dma_get_max_seg_size(dev)
dma_set_max_seg_size(dev)
dma_get_seg_boundary(dev)
dma_set_seg_boundary(dev, mask)

Drivers already use these to work out how to construct the scatterlist
before passing it to the DMA API, which means that they should also be
used when creating a scatterlist for the DMA engine (think about it -
you have to use the DMA API to map the buffers for the DMA engine too.)

So, we already have two properties defined on a per-device basis: the
maximum size of a scatterlist segment, and the boundary over which any
segment must not cross.

The former ties up with your max_seg_len() property, though arguably it
may depend on the DMA engine access size.  The problem with implementing
this new API though is that the subsystems (such as SCSI) which already
use dma_get_max_seg_size() will be at odds with what is possible via the
DMA engine.

I strongly suggest using the infrastructure at device level and not
implementing some private DMA engine API to convey this information.
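
A minimal sketch of that device-level route (the my_* names are invented and
65535 is just an example hardware limit):

#include <linux/device.h>
#include <linux/dma-mapping.h>

static struct device_dma_parameters my_dma_parms;

/* DMA engine driver: advertise its segment limit on its own struct device */
static void my_dmaengine_set_limits(struct device *dmadev)
{
        dmadev->dma_parms = &my_dma_parms;
        dma_set_max_seg_size(dmadev, 65535);
}

/* client driver: query the limit before building its scatterlist */
static unsigned int my_client_max_seg(struct device *dmadev)
{
        return dma_get_max_seg_size(dmadev);
}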

As for the maximum number of scatterlist entries, really that's a bug in
the DMA engine implementations if they can't accept arbitrary lengths.
I've created DMA engine drivers for implementations where you have to
program each segment individually, ones which can have the current and
next segments, as well as those which can walk a list.  Provided you get
informed of a transfer being completed, there really is no reason for a
DMA engine driver to limit the number of scatterlist entries that it
will accept.
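
As a rough sketch of the "program each segment individually" case (all names
below are hypothetical scaffolding, not a real dmaengine driver): keep a
cursor into the scatterlist in the descriptor and advance it from the
completion interrupt, so the length of the list never needs to be limited.

#include <linux/interrupt.h>
#include <linux/scatterlist.h>

struct my_desc {
        struct scatterlist *sg;         /* next segment to program */
        unsigned int segs_left;
};

struct my_chan {
        struct my_desc *cur;
};

/* made-up helpers for this sketch */
void hw_program_segment(struct my_chan *c, dma_addr_t addr, unsigned int len);
void my_complete(struct my_chan *c, struct my_desc *d);

static void my_issue_next(struct my_chan *c, struct my_desc *d)
{
        hw_program_segment(c, sg_dma_address(d->sg), sg_dma_len(d->sg));
        d->sg = sg_next(d->sg);
        d->segs_left--;
}

static irqreturn_t my_chan_irq(int irq, void *data)
{
        struct my_chan *c = data;
        struct my_desc *d = c->cur;

        if (d->segs_left)
                my_issue_next(c, d);    /* keep walking the scatterlist */
        else
                my_complete(c, d);      /* whole list transferred */
        return IRQ_HANDLED;
}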


Re: [PATCH v2 18/24] i2c: mpc: OF clock lookup for MPC512x

2013-07-18 Thread Russell King - ARM Linux
On Thu, Jul 18, 2013 at 10:20:52PM +0200, Gerhard Sittig wrote:
 + /* enable clock for the I2C peripheral (non fatal) */
 + clk = of_clk_get_by_name(node, "per");
 + if (!IS_ERR(clk)) {
 + clk_prepare_enable(clk);
 + clk_put(clk);
 + }
 +

This kind of hacked up approach to the clk API is exactly the thing I
really don't like seeing.  I don't know what it is... is the clk API
somehow difficult to use or what's the problem with doing stuff correctly?

1. Get the clock in your probe function.
2. Prepare it at the appropriate time.
3. Enable it appropriately.  (or if you want to combine 2 and 3, use
   clk_prepare_enable().)
4. Ensure that enables/disables and prepares/unprepares are appropriately
   balanced.
5. 'put' the clock in your remove function.

Certainly do not get-enable-put a clock.  You're supposed to hold on to
the clock all the time that you're actually using it.
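
Put together, the pattern above comes out roughly as follows (mydrv_* is an
invented driver name, "per" is borrowed from the quoted patch, and error
handling is trimmed):

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int mydrv_probe(struct platform_device *pdev)
{
        struct clk *clk = clk_get(&pdev->dev, "per");   /* 1. get in probe */

        if (IS_ERR(clk))
                return PTR_ERR(clk);

        clk_prepare_enable(clk);                        /* 2 + 3 combined */
        platform_set_drvdata(pdev, clk);
        return 0;
}

static int mydrv_remove(struct platform_device *pdev)
{
        struct clk *clk = platform_get_drvdata(pdev);

        clk_disable_unprepare(clk);                     /* 4. keep it balanced */
        clk_put(clk);                                   /* 5. put in remove */
        return 0;
}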

Final point - if you want to make it non-fatal, don't play games like:

clk = clk_get(whatever);
if (IS_ERR(clk))
clk = NULL;

...
if (clk)
clk_prepare(clk);

Do this instead:

clk = clk_get(whatever);
...

if (!IS_ERR(clk))
clk_prepare(clk);
etc.

(And on this subject, I'm considering whether to make a change to the
clk API where clk_prepare() and clk_enable() return zero when passed
an error pointer - this means drivers with optional clocks don't have
to burden themselves with these kinds of checks.)
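
A rough sketch of that possible change - this is not how clk_prepare()
behaves today:

#include <linux/err.h>

struct clk;

int clk_prepare(struct clk *clk)
{
        /* treat an absent/optional clock as a successful no-op */
        if (IS_ERR_OR_NULL(clk))
                return 0;

        /* ... normal prepare path ... */
        return 0;
}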


Re: [RFC PATCH v5 0/1] drivers: mfd: Versatile Express SPC support

2013-07-17 Thread Russell King - ARM Linux
On Wed, Jul 17, 2013 at 11:57:55AM -0400, Nicolas Pitre wrote:
 The sanest location at this point might simply be 
 drivers/platform/vexpress_spc.c or drivers/platform/vexpress/spc.c 
 depending on whether or not more such driver glue is expected in the 
 vexpress future.  No point putting arm in the path, especially if this 
 is later reused on arm64.

You wouldn't be making that argument if it were arch/arm64 and arch/arm32 -
you'd probably be arguing that arm made perfect sense.

Don't get too hung up on names please, it's really not worth the time
and effort being soo pedantic, and being soo pedantic leads to pointless
churn when someone comes along and wants to pedantically change the
names because it no longer matches the users.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 12:56:50PM +0200, Sylwester Nawrocki wrote:
 On 07/13/2013 10:35 AM, Jean-Francois Moine wrote:
 On Fri, 12 Jul 2013 13:00:23 -0600 Daniel Drake <d...@laptop.org> wrote:
 On Fri, Jul 12, 2013 at 12:39 PM, Jean-Francois Moine <moin...@free.fr> wrote:

 - the phandles to the clocks does not tell how the clock may be set by
the driver (it is an array index in the 510).

 In the other threads on clock selection, we decided that that exact
 information on how to select a clock (i.e. register bits/values) must
 be in the driver, not in the DT. What was suggested there is a
 well-documented scheme for clock naming, so the driver knows which
 clock is which. That is defined in the proposed DT binding though the
 valid clock names part. For an example driver implementation of this
 you can see my patch armada_drm clock selection - try 2.

 OK.

 Sorry to go back to this thread.

 I use my Cubox for daily jobs as a desktop computer. My kernel is a DT
 driven 3.10.0. The dove-drm, tda998x and si5351 (clock) are kernel
 modules. I set 3 clocks in the DT for the LCD0: lcdclk, axi and extclk0

 Hmm, a bit off topic question, does it work when the si5351 module gets
 removed ? I can't see anything preventing clock provider module from being
 removed why any of its clocks is used and clk_unregister() function is
 currently unimplemented.

When I designed the clk API, I arranged for things like clk_get() to
take a reference on the module if the clock was supplied by a module.
You'll see some evidence of that by looking back in the git history
for arch/arm/mach-integrator/include/mach/clkdev.h.

If the common clk API has been designed without any thought to clocks
supplied by modules and making sure that in-use clocks don't go away,
then it's going to be a real pain to sort that out.  I don't think
refcounting clocks makes sense (what do you do with an enabled clock
that you then remove the module for but it's still being used?  Do you
just shut it down anyway?  Do you leave it running?  What about
subsequent calls?).

I think this is one case where taking a reference on the module supplying
it makes total sense.

 (si5351). Normally, the external clock is used, but, sometimes, the
 si5351 module is not yet initialized when the drm driver starts. So,
 for 1920x1080p, it uses the lcdclk which sets the LCD clock to 13
 (40/3) instead of 148500. As a result, display and sound still work
 correctly on my TV set thru HDMI.

 So, it would be nice to have 2 usable clocks per LCD, until we find a
 good way to initialize the modules in the right order at startup time.

 Doesn't deferred probing help here ?

Indeed it does.  Just because one TV works with such a wrong clock does not
mean that all TVs and HDMI compatible monitors will do.  It's 11% out.

The reason that audio works is because of the way the HDMI transmitter works
- it can calculate the CTS value to send (by measuring it) and it sends that
value over the link so the TV can regenerate the correct audio clock from
the HDMI clock.

Whether that's going to be universally true, I don't know - it depends on
how much audio data gets sent via each frame.  As the HDMI clock is slower,
there would need to be more audio data sent.
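
For reference (taken from the HDMI spec, not from the original mail), the
audio clock regeneration the transmitter relies on is:

        128 * fs = f_TMDS * N / CTS   =>   CTS = (f_TMDS * N) / (128 * fs)

so as long as the sink receives N and the measured CTS value, it can rebuild
the audio clock from whatever TMDS/pixel clock it is actually being given.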

  lcd0: lcd-controller@82 {
  compatible = "marvell,dove-lcd0";
 [...]
  };

  lcd1: lcd-controller@81 {
  compatible = "marvell,dove-lcd1";
 [...]
  };

 Using different compatible strings to indicate multiple instances of same
 hardware doesn't seem right. Unless LCD0, LCD1 are really different pieces

They aren't.  They're 100% identical in the Armada 510.

 of hardware incompatible with each other I think it would be more correct
 to use same compatible property and DT node aliases, similarly as is now
 done with e.g. I2C busses:

   aliases {
   lcd0 = &lcd_0;
   lcd1 = &lcd_1;
   };

   lcd_0: lcd-controller@82 {
   compatible = "marvell,dove-lcd";

I'd much prefer "marvell,armada-510-lcd" rather than using the codenames for
the devices.  Otherwise we're going to run into totally different things
being used for different devices (eg, armada-xp...)


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 08:25:15AM -0600, Daniel Drake wrote:
 I guess the IRE falls into the same category as the DCON - we won't
 implement it for now, but knowing where it might fit in is useful.

I don't see much need at the moment for IRE.  IRE isn't going to be
useful for general graphics rotation as it only supports up to 16bpp
RGB.

It looks to me like this module was designed to rotate YUV/camera
images.  If we wanted to rotate a video image, then it would probably
be more efficient to use this, but it will cause a conversion of the
image format in doing so.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 07:44:58PM +0200, Sebastian Hesselbarth wrote:
 On 07/13/2013 01:12 PM, Russell King - ARM Linux wrote:
 When I designed the clk API, I arranged for things like clk_get() to
 take a reference on the module if the clock was supplied by a module.
 You'll see some evidence of that by looking back in the git history
 for arch/arm/mach-integrator/include/mach/clkdev.h.

 Last time I checked, clkdev API neither implements referencing nor
 unregister.

*Sigh*.

a613163dff04cbfcb7d66b06ef4a5f65498ee59b:
arch/arm/mach-integrator/include/mach/clkdev.h:
-struct clk {
-   unsigned long   rate;
-   const struct clk_ops    *ops;
-   struct module   *owner;
-   const struct icst_params *params;
-   void __iomem    *vcoreg;
-   void            *data;
-};
-
-static inline int __clk_get(struct clk *clk)
-{
-   return try_module_get(clk->owner);
-}
-
-static inline void __clk_put(struct clk *clk)
-{
-   module_put(clk->owner);
-}

70ee65771424829fd092a1df9afcc7e24c94004b:
arch/arm/mach-integrator/impd1.c:
 static int impd1_probe(struct lm_device *dev)
...
-   for (i = 0; i < ARRAY_SIZE(impd1->vcos); i++) {
-   impd1->vcos[i].ops = &impd1_clk_ops,
-   impd1->vcos[i].owner = THIS_MODULE,
-   impd1->vcos[i].params = &impd1_vco_params,
-   impd1->vcos[i].data = impd1;
-   }
-   impd1->vcos[0].vcoreg = impd1->base + IMPD1_OSC1;
-   impd1->vcos[1].vcoreg = impd1->base + IMPD1_OSC2;
-
-   impd1->clks[0] = clkdev_alloc(&impd1->vcos[0], NULL, "lm%x:01000",
-   dev->id);
-   impd1->clks[1] = clkdev_alloc(&fixed_14745600, NULL, "lm%x:00100",
-   dev->id);
-   impd1->clks[2] = clkdev_alloc(&fixed_14745600, NULL, "lm%x:00200",
-   dev->id);
-   for (i = 0; i < ARRAY_SIZE(impd1->clks); i++)
-   clkdev_add(impd1->clks[i]);
...
 static void impd1_remove(struct lm_device *dev)
...
-   for (i = 0; i < ARRAY_SIZE(impd1->clks); i++)
-   clkdev_drop(impd1->clks[i]);

drivers/clk/clkdev.c:
/*
 * clkdev_drop - remove a clock dynamically allocated
 */
void clkdev_drop(struct clk_lookup *cl)
{
        mutex_lock(&clocks_mutex);
        list_del(&cl->node);
        mutex_unlock(&clocks_mutex);
kfree(cl);
}
EXPORT_SYMBOL(clkdev_drop);

No, of course, I'm imagining all the above!

Now, today, the IM-PD/1 support got broken in respect of ensuring that
things are properly refcounted in the rush to convert things to this
wonderful new common clk API - because it's oh soo much better.  Yes,
right, I'd forgotten the number one rule of kernel programming - always
sacrifice correctness when we get a new fantastic hip idea!  Silly me.

 This is on Mike's list and IIRC there already has been
 a proposal for unregister. Si5351 was the first clkdev driver ever
 that could possibly be unloaded, so there may be still some features
 missing.

Utter rubbish - it's not the first as you can see from the above.
Integrator had this support since the clk and clkdev APIs came along,
because the IM-PD/1 module was implemented as a module, and it has to
create and register clocks for its peripherals.

What you've found is a backwards step with the common clk API, nothing
more.


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sat, Jul 13, 2013 at 10:43:29PM +0200, Sylwester Nawrocki wrote:
 I wasn't aware of it, thanks. I've seen a patch from Jiada Wang, it seems
 they're working on v4 with clock object reference counting. Presumably we
 need both clk_get() to be taking reference on the module and reference
 counted clk free, e.g. in cases where clock provider is a hot-pluggable
 device. It might be too paranoid though, I haven't seen hardware
 configurations where a clock source could be unplugged safely when whole
 system is running.

I'm not going to accept refcounting being thrown into clk_get().  The
clkdev API already has refcounting, as much as it needs to.  It just
needs people to use the hooks that I provided back in 2008 when I
created the clkdev API for doing _precisely_ this job.

Have a read through these commits, which backup my statement above:

0318e693d3a56836632bf1a2cfdafb7f34bcc703 - initial commit of the clkdev API
d72fbdf01fc77628c0b837d0dd2fd564fa26ede6 - converting Integrator to clkdev API

and it will show you how to do refcounting.  The common clk API just
needs to stop defining __clk_get() and __clk_put() to be an empty
function and implement them appropriately for its clk implementation,
like they were always meant to be.

__clk_get() and __clk_put() are the clk-implementation specific parts
of the clkdev API, because the clkdev API is utterly divorced from the
internals of what a 'struct clk' actually is.  clkdev just treats a
'struct clk' as a completely opaque type and never bothers poking
about inside it.
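
A sketch of what those hooks could look like for the common clk code,
mirroring the Integrator version quoted elsewhere in this thread (the
'owner' field is an assumption about the common clk struct clk internals):

#include <linux/module.h>

static inline int __clk_get(struct clk *clk)
{
        return try_module_get(clk->owner);
}

static inline void __clk_put(struct clk *clk)
{
        module_put(clk->owner);
}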


Re: DT binding review for Armada display subsystem

2013-07-13 Thread Russell King - ARM Linux
On Sun, Jul 14, 2013 at 12:16:58AM +0200, Sylwester Nawrocki wrote:
 On 07/13/2013 11:02 PM, Russell King - ARM Linux wrote:
 On Sat, Jul 13, 2013 at 10:43:29PM +0200, Sylwester Nawrocki wrote:
 I wasn't aware of it, thanks. I've seen a patch from Jiada Wang, it seems
 they're working on v4 with clock object reference counting. Presumably we
 need both clk_get() to be taking reference on the module and reference
 counted clk free, e.g. in cases where clock provider is a hot-pluggable
 device. It might be too paranoid though, I haven't seen hardware
 configurations where a clock source could be unplugged safely when whole
 system is running.

 I'm not going to accept refcounting being thrown into clk_get().  The
 clkdev API already has refcounting, as much as it needs to.  It just
 needs people to use the hooks that I provided back in 2008 when I
 created the clkdev API for doing _precisely_ this job.

 Have a read through these commits, which backup my statement above:

 0318e693d3a56836632bf1a2cfdafb7f34bcc703 - initial commit of the clkdev API
 d72fbdf01fc77628c0b837d0dd2fd564fa26ede6 - converting Integrator to clkdev 
 API

 and it will show you how to do refcounting.  The common clk API just
 needs to stop defining __clk_get() and __clk_put() to be an empty
 function and implement them appropriately for its clk implementation,
 like they were always meant to be.

 Sure, I've already seen the above commits. This is how I understood it
 as well - __clk_get(), __clk_put() need to be defined by the common clk
 API, since it is going to replace all the arch specific implementations.

 __clk_get() and __clk_put() are the clk-implementation specific parts
 of the clkdev API, because the clkdev API is utterly divorced from the
 internals of what a 'struct clk' actually is.  clkdev just treats a
 'struct clk' as a completely opaque type and never bothers poking
 about inside it.

 OK, but at the clock's implementation side we may need, e.g. to use kref
 to keep track of the clock's state, and free it only when it is unprepared,
 all its children are unregistered, etc. ? Is it not what you stated in
 your comment to patch [1] ?

If you want to do refcounting on a clock (which you run into problems
as I highlighted earlier in this thread) then yes, you need to use a
kref, and take a reference in __clk_get() and drop it in __clk_put()
to make things safe.
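
A minimal sketch of that kref arrangement (the struct and function names are
invented; this is not the CCF's actual layout):

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct my_clk {
        struct kref ref;
        /* rate, ops, parent, ... */
};

static void my_clk_release(struct kref *ref)
{
        kfree(container_of(ref, struct my_clk, ref));
}

static int my_clk_get(struct my_clk *clk)
{
        kref_get(&clk->ref);
        return 1;
}

static void my_clk_put(struct my_clk *clk)
{
        kref_put(&clk->ref, my_clk_release);
}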

Whether you also take a reference on the module supplying the clock or
not depends whether you need to keep the code around to manipulate that
clock while there are users of it.

As I've already said - consider the possibilities with this scenario.
Here's one:

  A clock consumer is using a clock, but the clock supplier has been
  removed.  The clock consumer wants to change the state of the clock
  in some way - eg, system is suspending.  clk_disable() doesn't fail,
  but on resume, clk_enable() does... (and how many people assume that
  clk_enable() never fails?)  And who knows what rate the clock is now
  producing or whether it's even producing anything...

I'm not saying don't do the refcounting thing I mentioned back in June.
I'm merely pointing out the issues that there are.  There isn't a one
right answer here because each has their own advantages and disadvantages
(and problems.)


Re: Sharing *.dtsi between Linux architectures?

2013-07-12 Thread Russell King - ARM Linux
On Fri, Jul 12, 2013 at 01:58:35PM -0600, Stephen Warren wrote:
 Is there a (possibly just proposed) mechanism in place to allow *.dts
 from multiple Linux architectures to share common *.dtsi files?

Don't forget that the long term goal is to move these files out of the
kernel source, which means that they'll be a separately managed project,
possibly with a different file structure.


Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-08 Thread Russell King - ARM Linux
On Fri, Jul 05, 2013 at 12:33:21PM -0700, Laura Abbott wrote:
 On 7/3/2013 7:15 AM, Ming Lei wrote:
 On Sat, Apr 27, 2013 at 5:32 AM, Rob Herring robherri...@gmail.com wrote:
 On 04/26/2013 03:31 PM, Laura Abbott wrote:
 Currently, of_platform_device_create_pdata always sets the
 coherent DMA mask to 32 bits. On ARM systems without CONFIG_ZONE_DMA,
  arm_dma_limit gets set to ~0 or 0xFFFFFFFFFFFFFFFF on LPAE based
 systems. Since arm_dma_limit represents the smallest dma_mask
 on the system, the default of 32 bits prevents any dma_coherent
 allocation from succeeding unless clients manually set the
 dma mask first. Rather than make every client on an LPAE system set
 the mask manually, account for the size of dma_addr_t when setting
 the coherent mask.

 Signed-off-by: Laura Abbott lau...@codeaurora.org
 ---
   drivers/of/platform.c |2 +-
   1 files changed, 1 insertions(+), 1 deletions(-)

 diff --git a/drivers/of/platform.c b/drivers/of/platform.c
 index 0970505..5f0ba94 100644
 --- a/drivers/of/platform.c
 +++ b/drivers/of/platform.c
 @@ -214,7 +214,7 @@ struct platform_device 
 *of_platform_device_create_pdata(
   #if defined(CONFIG_MICROBLAZE)
 dev->archdata.dma_mask = 0xffffffffUL;
   #endif
  - dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
  + dev->dev.coherent_dma_mask = DMA_BIT_MASK(sizeof(dma_addr_t) * 8);

 This is going to change the mask from 32 to 64 bits on 64-bit powerpc
 and others possibly. Maybe it doesn't matter. I think it doesn't, but
 I'm not sure enough to apply for 3.10. So I'll queue it for 3.11.

 Without the patch, LPAE enabled board may not boot at all, but looks
 it still isn't in -next tree.

 But I am wondering if it is a correct approach, because enabling LPAE
 doesn't mean the I/O devices can support DMA to/from 64bit address, and
 it is very probably devices can't do it at all.


 The problem is the way the arm_dma_limit is set up, all dma allocations  
 are currently broken regardless of if the actual device supports 64-bit  
 addresses or not.

Please explain this statement.

 I previously asked about the arm_dma_limit and was told that the current  
 behavior is the correct approach (see  
 https://lists.ozlabs.org/pipermail/devicetree-discuss/2013-April/032729.html 
 and  
 https://lists.ozlabs.org/pipermail/devicetree-discuss/2013-April/032690.html) 

 The point here is to set the mask to something reasonable such that  
 allocations can succeed by default. If devices can't use part of the  
 memory space they can adjust the limit accordingly.

The point of arm_dma_limit is that it sets the limit of the memory you
get from the page allocators when you ask for a GFP_DMA allocation.

In the case of no GFP_DMA zone, all memory is available for allocations.
Hence, arm_dma_limit is set to the absolute maximum physical memory
address which an allocator could return.

In the case of a 32-bit only system, this is 32-bits.  In the case of a
LPAE system, it is 64-bit.  (This follows because arm_dma_limit is
phys_addr_t, and the size of this follows the bit size of the system.)

The only questionable case is where we have a LPAE system with a GFP_DMA
zone and arm_dma_limit is set to 0xffffffff - this limit is probably
too low.

Now the reason for the arm_dma_limit is very simple: if the streaming
APIs are passed a buffer outside of the DMA mask, we must reallocate
that memory.  We must have a way to get memory which is within the DMA
mask.  That's what passing GFP_DMA does.  If you have no GFP_DMA zone,
then any memory could be returned.  In the case of a LPAE system with
memory both above and below the 32-bit address range, and you try to
allocate a buffer which happens to be above the 32-bit boundary, what
do you do?  Do you just retry the allocation and hope for the best?
What if that allocation also came out above the 32-bit boundary?  Do
you continue gobbling up system memory until you get a page which you
can DMA to and then free all those buffers you unsuccessfully obtained?

We have a massive problem with DMA masks in the Linux kernel, because
of the x86-ism of assuming that physical memory always starts at
address zero.  This assumption simply is not true on ARM, even more so
with LPAE, which is why people are starting to see a problem.

Consider this case: you have an existing 32-bit DMA controller.  This
controller sets a 32-bit DMA mask, because that's all it is capable of
addressing.  It gets used on 32-bit systems, and it works fine.

It gets put onto a LPAE system where memory starts at the 4GB boundary.
The DMA controller can address the first 4GB of memory because that's
how it's wired up.  The DMA controller still has a 32-bit DMA mask
because as far as the DMA controller is concerned, nothing has changed.
But now things fail because the kernel assumes that a DMA mask is
something to do with a physical address.

The solution to this is to fix the DMA masks and such like.  The problem
is getting the traction throughout the architectures, drivers, 

Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-08 Thread Russell King - ARM Linux
On Mon, Jul 08, 2013 at 06:03:57PM +0800, Ming Lei wrote:
 On Sat, Jul 6, 2013 at 5:15 PM, Russell King - ARM Linux
 li...@arm.linux.org.uk wrote:
  On Sat, Jul 06, 2013 at 10:21:05AM +0800, Ming Lei wrote:
  But please see below words in Documentation/DMA-API.txt:
 
 Further, the physical address of the memory must be within the
 dma_mask of the device (the dma_mask represents a bit mask of 
  the
 addressable region for the device.  I.e., if the physical 
  address of
 the memory anded with the dma_mask is still equal to the 
  physical
 address, then the device can perform DMA to the memory).
 
  You may argue that the description is about dev-dma_mask, but the
  usage should be same.
 
  In fact, most of devices in ARM SoC(even with LPAE enabled) doesn't
  support 64bit DMA, so I don't see the point that the default mask is set
  as 64bit.
 
  There's another couple of issues there:
 
  (a) Linux assumes that physical memory starts at location zero.  There
  are various places in the kernel that assume that this holds true:
 
   max_dma_pfn = (dma_mask >> PAGE_SHIFT) + 1
 
  One notable place is the block layer.  I've suggested to Santosh a
  way to fix this but I need to help him a little more with it.
 
  (b) Device DMA addresses are a *separate* address space to the physical
  address space.  Device DMA addresses are the addresses which need to
  be programmed into the device to perform DMA to the identified
  location.
 
  What (b) means is that if you are ending up with a 64-bit address to be
  programmed into a 32-bit only register, there is something very very
  wrong.  What this also means is that a 32-bit DMA mask should suffice,
  because if the DMA address results in a 32-bit address _for the DMA
  device_ then we are within its capabilities.
 
  In any case, the work around is not to set the DMA mask to be 64-bit.
 
 So how about working around the problem by setting arm_dma_limit as
 (phys_addr_t)0xffffffff if CONFIG_ZONE_DMA is unset? Or other better
 solutions?
 
 Otherwise enabling LPAE may cause system boot failure because
 mmc card may not work when arm_dma_limit becomes (u64)-1.

Well, working around it by bodging it means that the bodges will stay
and the problem will become even worse later, and we won't have the
weight of people saying it doesn't work to use as leverage to persuade
people that DMA masks need to change.  Sorry.


Re: [PATCH V3] of: Set the DMA mask to 64 bits when dma_addr_t is 64-bits

2013-07-05 Thread Russell King - ARM Linux
On Wed, Jul 03, 2013 at 10:15:50PM +0800, Ming Lei wrote:
 
 Without the patch, LPAE enabled board may not boot at all, but looks
 it still isn't in -next tree.
 
 But I am wondering if it is a correct approach, because enabling LPAE
 doesn't mean the I/O devices can support DMA to/from 64bit address, and
 it is very probably devices can't do it at all.
 
  Another way is to always set arm_dma_limit as 0xffffffff when
 CONFIG_ZONE_DMA is unset, and let driver override device's dma
 mask if the device supports 64bit DMA.

Okay, here's a teach-in on DMA masks, this is how it's supposed to work:

1. The bus code which creates the devices sets a default mask appropriate
   for the bus type.  For PCI, this is 32-bit, for ISA, it's 24-bit.

2. Before performing DMA, drivers should query the capabilities of the
   platform using dma_set_mask() for streaming transfers, and
   dma_set_coherent_mask() for coherent transfers.  Note: the DMA API
   specifies that a mask accepted by dma_set_mask() will also be
   accepted by dma_set_coherent_mask() - in other words, it is
   permissible _not_ to check the return value of dma_set_coherent_mask()
   iff that same mask is currently in use via dma_set_mask().

   (Side note: I have a patch in my tree which introduces
   dma_set_mask_and_coherent() which sets both.)

3. Drivers should attempt to set a mask appropriate for the device.
   If the device can drive 32-bit addresses, it requests a 32-bit
   mask.  If only 24-bit, a 24-bit mask.  If it wants to do more
   than 32-bit, it should set a larger mask.

   (See the DMA addressing limitations section of
Documentation/DMA-API-HOWTO.txt).

4. PCI drivers use this as part of a negotiation with the rest of the
   system; you can plug a 64-bit capable PCI card into a 32-bit only
   capable host, and it should work - the driver should try to
   request the 64-bit mask first, if that succeeds then it can use
   DMA to 64-bit addresses.  If it fails, then it should then try to
   set a 32-bit mask.

So, that's more or less what's currently known of DMA masks.  What a lot
of ARM drivers do is totally ignore (3) and just assume that the platform
code set the mask up appropriately.  This is an incorrect assumption, and
it's something I'm slowly fixing where I have the knowledge.

What we shouldn't be relying upon is the default DMA mask being correct.

Last point is - the device actually doing DMA should be the one which the
DMA mask counts for above; if the device is just a user of DMA, then its
DMA mask should not matter (if it does, it's likely because of x86/PCI
assumptions made in the subsystem code) and actually should _not_ be set.
In other words, the DMA engine probe function should set the DMA mask on
the DMA controller, and you should do DMA mappings against the DMA
controller's struct device, not against the IO peripheral's struct
device.
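
A sketch of points 3 and 4 applied to the DMA controller's own struct device
(mydma_* is an invented name):

#include <linux/dma-mapping.h>
#include <linux/errno.h>

static int mydma_set_masks(struct device *dmadev)
{
        if (dma_set_mask(dmadev, DMA_BIT_MASK(64)) == 0) {
                dma_set_coherent_mask(dmadev, DMA_BIT_MASK(64));
                return 0;               /* 64-bit addressing is usable */
        }
        if (dma_set_mask(dmadev, DMA_BIT_MASK(32)) == 0) {
                dma_set_coherent_mask(dmadev, DMA_BIT_MASK(32));
                return 0;               /* fall back to 32-bit */
        }
        return -ENODEV;                 /* no usable DMA */
}

The dma_set_mask_and_coherent() helper mentioned above would collapse each
pair of calls here into one.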


Re: [PATCH v2 1/5] phy: Add driver for Exynos MIPI CSIS/DSIM DPHYs

2013-06-27 Thread Russell King - ARM Linux
On Thu, Jun 27, 2013 at 09:17:13AM +0300, Felipe Balbi wrote:
 On Wed, Jun 26, 2013 at 05:00:34PM +0200, Sylwester Nawrocki wrote:
  Hi,
  
  On 06/25/2013 05:06 PM, Felipe Balbi wrote:
   +static struct platform_driver exynos_video_phy_driver = {
+  .probe  = exynos_video_phy_probe,
  
   you *must* provide a remove method. drivers with NULL remove are
   non-removable :-)
  
  Actually the remove() callback can be NULL, it's just missing module_exit
  function that makes a module not unloadable.
 
 look at the implementation of platform_drv_remove():
 
  499 static int platform_drv_remove(struct device *_dev)
  500 {
  501 struct platform_driver *drv = to_platform_driver(_dev->driver);
  502 struct platform_device *dev = to_platform_device(_dev);
  503 int ret;
  504 
  505 ret = drv->remove(dev);
  506 if (ACPI_HANDLE(_dev))
  507 acpi_dev_pm_detach(_dev, true);
  508 
  509 return ret;
  510 }
 
 that's not a conditional call right :-)

Wrong.

if (drv->remove)
        drv->driver.remove = platform_drv_remove;

The function you quote will only be used if drv-remove is non-NULL.
You do not need to provide a remove method.


Re: [PATCH v4 02/10] pinctrl: mvebu: dove pinctrl driver

2013-06-18 Thread Russell King - ARM Linux
On Thu, Sep 13, 2012 at 05:41:44PM +0200, Sebastian Hesselbarth wrote:
 +#define DOVE_GLOBAL_CONFIG_1 (DOVE_SB_REGS_VIRT_BASE | 0xe802C)
 +#define  DOVE_TWSI_ENABLE_OPTION1    BIT(7)
 +#define DOVE_GLOBAL_CONFIG_2 (DOVE_SB_REGS_VIRT_BASE | 0xe8030)
 +#define  DOVE_TWSI_ENABLE_OPTION2    BIT(20)
 +#define  DOVE_TWSI_ENABLE_OPTION3    BIT(21)
 +#define  DOVE_TWSI_OPTION3_GPIO  BIT(22)
...
 +static int dove_twsi_ctrl_set(struct mvebu_mpp_ctrl *ctrl,
 + unsigned long config)
 +{
 + unsigned long gcfg1 = readl(DOVE_GLOBAL_CONFIG_1);
 + unsigned long gcfg2 = readl(DOVE_GLOBAL_CONFIG_2);
 +
 + gcfg1 &= ~DOVE_TWSI_ENABLE_OPTION1;
 + gcfg2 &= ~(DOVE_TWSI_ENABLE_OPTION2 | DOVE_TWSI_ENABLE_OPTION2);
 +
 + switch (config) {
 + case 1:
 + gcfg1 |= DOVE_TWSI_ENABLE_OPTION1;
 + break;
 + case 2:
 + gcfg2 |= DOVE_TWSI_ENABLE_OPTION2;
 + break;
 + case 3:
 + gcfg2 |= DOVE_TWSI_ENABLE_OPTION3;
 + break;
 + }
 +
 + writel(gcfg1, DOVE_GLOBAL_CONFIG_1);
 + writel(gcfg2, DOVE_GLOBAL_CONFIG_2);
 +
 + return 0;
 +}

So, I've just been thinking about the LCD clocking on the Armada 510,
and found that there's dividers for the internal LCD clocks in the
global config 1/2 registers.  So I grepped the kernel source for
references to these, expecting to find something in drivers/clk, but
found the above.

Now, the reason that I'm replying to this is that we're again falling
into the same trap that we did with SA1100 devices.  Back in those
days, there wasn't so much of a problem because the kernel was basically
single threaded when it came to code like the above on such platforms.
There was no kernel preemption.

However, todays kernel is sometimes SMP, commonly with kernel preemption
enabled, maybe even RT.  This makes things like the above sequence a
problem where a multifunction register is read, modified and then
written back.

Consider two threads doing this, and a preemption event happening in the
middle of this sequence to another thread also doing a read-modify-write
of the same register.  Which one wins depends on the preemption sequence,
but ultimately one loses out.

Any access to such registers needs careful thought, and protection in some
manner.

Maybe what we need is something like this:

static DEFINE_SPINLOCK(io_lock);
static void modifyl(u32 new, u32 mask, void __iomem *reg)
{
        unsigned long flags;
        u32 val;

        spin_lock_irqsave(&io_lock, flags);
        val = readl(reg) & ~mask;
        val |= new & mask;
        writel(val, reg);
        spin_unlock_irqrestore(&io_lock, flags);
}

in order to provide arbitrated access to these kinds of multifunction
registers in a safe, platform agnostic way.

Comments?


Re: [PATCH v4 02/10] pinctrl: mvebu: dove pinctrl driver

2013-06-18 Thread Russell King - ARM Linux
On Tue, Jun 18, 2013 at 05:02:49PM +0200, Linus Walleij wrote:
 Nowadays I would do the above with regmap_update_bits().
 
 Mutual exclusion for read-modify-write of individual bits in a
 register is one of those cases where doing a regmap over
 a memory-mapped register range makes a lot of sense.
 (drivers/mfd/syscon.c being a nice example)

So, for that solution we need to have some kind of global regmap per
register or somesuch.  Then you run into regmap needing a struct
device - well, with a shared register, which struct device do you
use, or do you have to invent one?

That sounds more heavy-weight than is really necessary.
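
For comparison, the syscon/regmap variant being referred to would look
roughly like this (the compatible string, register offset and bit are
invented for the sketch):

#include <linux/bitops.h>
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>

#define MY_GLOBAL_CONFIG_1      0x00    /* hypothetical offset within the syscon */

static int dove_twsi_option1_set(void)
{
        struct regmap *map;

        map = syscon_regmap_lookup_by_compatible("marvell,dove-global-config");
        if (IS_ERR(map))
                return PTR_ERR(map);

        /* locked read-modify-write of one bit */
        return regmap_update_bits(map, MY_GLOBAL_CONFIG_1, BIT(7), BIT(7));
}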


Re: [PATCH v2 06/11] ARM:stixxxx: Add STiH415 SOC support

2013-06-13 Thread Russell King - ARM Linux
On Tue, Jun 11, 2013 at 07:50:31AM +0100, Srinivas KANDAGATLA wrote:
 You are right, It does not make sense to use BIT() macro for field which
 has more than 1 bit. I think using mix of both BIT() and the old style
 will make code look bit confusing to reader, Also no other mach code in
 the kernel use BIT while configuring L2 controller. So am going to drop
 the idea of using BIT here and leave the code as it is.

I'd suggest putting a comment in the code to that effect.  With the way
cleanups get done, I wouldn't be surprised if this attracts a lot of
people wanting to do trivial 1 << bit -> BIT(bit) conversions.

One of the problems of open source is that you can say no to a patch
like that until you're blue in the face, but it will eventually make
its way in via some path.

Just one of the reasons I consider BIT() to be evil and an inappropriate
macro.


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 04:47:22PM +0900, Alexandre Courbot wrote:
 One could remove the naked attribute and put there registers into the
 clobber list, but then the function will be inlined and we will have
 to ensure the parameters end up in the right register (and having a
 function that cannot be inlined is convenient in that respect). So as
 far as I can tell, having the function naked and saving the registers
 ourselves seems to be the most convenient way around this.

If you use such a large clobber list, you risk the compiler barfing on
you that it's run out of registers.


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 05:11:15PM +0900, Alexandre Courbot wrote:
 On Sat, Jun 8, 2013 at 1:33 AM, Stephen Warren swar...@wwwdotorg.org wrote:
  I think we need to separate the concept of support for *a* secure
  monitor, from support for a *particular* secure monitor.
 
  Agreed. In this case, can we assume that support for a specific secure
  monitor is not arch-specific, and that this patch should be moved
  outside of arch-tegra and down to arch/arm? In other words, the ABI of
  a particular secure monitor should be the same no matter the chip,
  shouldn't it?
 
  I would like to believe that the Trusted Foundations monitor had the
  same ABI irrespective of which Soc it was running on. However, I have
  absolutely no idea at all if that's true. Even if there's some common
  subset of the ABI that is identical across all SoCs, I wouldn't be too
  surprised if there were custom extensions for each different SoC, or
  just perhaps even each product.
 
  Can you research this and find out the answer?
 
 Will do. Information about TF is scarce unfortunately.

The answer is... there isn't a common ABI.  That is something I pressed
ARM Ltd for when this stuff first appeared and they were adamant that
they were not going to specify any kind of ABI for this interface.

The net result is that everyone has invented their own interfaces around
it.  Some pass all arguments in registers, some pass arguments in memory
and pass pointers to those memory locations, and those memory locations
have to be flushed in some way.


Re: [PATCH v2 01/11] serial:st-asc: Add ST ASC driver.

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 10:21:00AM +0100, Srinivas KANDAGATLA wrote:
 This patch adds support to ASC (asynchronous serial controller)
 driver, which is basically a standard serial driver. This IP is common
 across all the ST parts for settop box platforms.
 
 ASC is embedded in ST COMMS IP block. It supports Rx & Tx functionality.
 It support all industry standard baud rates.

Your driver is not POSIX compliant.

 + for (; count != 0; count--) {
 + c = asc_in(port, ASC_RXBUF);
 + flag = TTY_NORMAL;
 + port->icount.rx++;
 +
 + if (unlikely(c & ASC_RXBUF_FE)) {
 + if (c == ASC_RXBUF_FE) {
 + port->icount.brk++;
 + if (uart_handle_break(port))
 + continue;
 + flag = TTY_BREAK;
 + } else {
 + port->icount.frame++;
 + flag = TTY_FRAME;
 + }
 + } else if (ascport->check_parity &&
 +unlikely(c & ASC_RXBUF_PE)) {
 + port->icount.parity++;
 + flag = TTY_PARITY;
 + }
 +
 + if (uart_handle_sysrq_char(port, c))
 + continue;
 + tty_insert_flip_char(tport, c & 0xff, flag);
 + }
 + if (overrun) {
 + port->icount.overrun++;
 + tty_insert_flip_char(tport, 0, TTY_OVERRUN);
 + }

No support for ignoring error conditions.  No support for ignoring all
input... and:

 +static void asc_set_termios(struct uart_port *port, struct ktermios *termios,
 + struct ktermios *old)
 +{
 + struct asc_port *ascport = to_asc_port(port);
 + unsigned int baud;
 + u32 ctrl_val;
 + tcflag_t cflag;
 + unsigned long flags;
 +
 + port->uartclk = clk_get_rate(ascport->clk);
 +
 + baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16);
 + cflag = termios->c_cflag;
 +
 + spin_lock_irqsave(&port->lock, flags);
 +
 + /* read control register */
 + ctrl_val = asc_in(port, ASC_CTL);
 +
 + /* stop serial port and reset value */
 + asc_out(port, ASC_CTL, (ctrl_val & ~ASC_CTL_RUN));
 + ctrl_val = ASC_CTL_RXENABLE | ASC_CTL_FIFOENABLE;
 +
 + /* reset fifo rx & tx */
 + asc_out(port, ASC_TXRESET, 1);
 + asc_out(port, ASC_RXRESET, 1);
 +
 + /* set character length */
 + if ((cflag & CSIZE) == CS7) {
 + ctrl_val |= ASC_CTL_MODE_7BIT_PAR;
 + } else {
 + ctrl_val |= (cflag & PARENB) ?  ASC_CTL_MODE_8BIT_PAR :
 + ASC_CTL_MODE_8BIT;
 + }
 +
 + ascport->check_parity = (cflag & PARENB) ? 1 : 0;
 +
 + /* set stop bit */
 + ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT;
 +
 + /* odd parity */
 + if (cflag & PARODD)
 + ctrl_val |= ASC_CTL_PARITYODD;
 +
 + /* hardware flow control */
 + if ((cflag & CRTSCTS) && ascport->hw_flow_control)
 + ctrl_val |= ASC_CTL_CTSENABLE;

This doesn't reflect those facts back into the termios structure to
indicate that they aren't supported.

Consider using uart_port's ignore and read status masks to implement
the break, framing, parity and overrun checking in your interrupt
handler using the same methodology as drivers like 8250, amba-pl011
etc.  That will help you get these code sequences correct.
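
An outline of that mask-based scheme, using the ASC_RXBUF_FE/ASC_RXBUF_PE
bits from the quoted driver (ASC_RXBUF_OE and the bit positions are
placeholders invented for this sketch):

#include <linux/bitops.h>
#include <linux/serial_core.h>
#include <linux/tty.h>

#define ASC_RXBUF_FE    BIT(8)          /* placeholder bit positions */
#define ASC_RXBUF_PE    BIT(9)
#define ASC_RXBUF_OE    BIT(10)

static void asc_build_status_masks(struct uart_port *port,
                                   struct ktermios *termios)
{
        port->read_status_mask = ASC_RXBUF_OE;
        if (termios->c_iflag & INPCK)
                port->read_status_mask |= ASC_RXBUF_FE | ASC_RXBUF_PE;

        port->ignore_status_mask = 0;
        if (termios->c_iflag & IGNPAR)
                port->ignore_status_mask |= ASC_RXBUF_FE | ASC_RXBUF_PE;
        if ((termios->c_cflag & CREAD) == 0)
                /* ignore all input: mask every status bit we report */
                port->ignore_status_mask |= ASC_RXBUF_FE | ASC_RXBUF_PE |
                                            ASC_RXBUF_OE;
}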


Re: [PATCH v2 06/11] ARM:stixxxx: Add STiH415 SOC support

2013-06-10 Thread Russell King - ARM Linux
On Mon, Jun 10, 2013 at 12:46:59PM +0100, Srinivas KANDAGATLA wrote:
  +   aux_ctrl = (0x1 << L2X0_AUX_CTRL_SHARE_OVERRIDE_SHIFT) |
  +   (0x1 << L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT) |
  +   (0x1 << L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT) |
  +   (way_size << L2X0_AUX_CTRL_WAY_SIZE_SHIFT);
  
  
  
  #include <linux/bitops.h>
  Linus Walleij would write "use BIT() here"
 
 I will use BIT() macro.

Without checking those fields... BIT() is only appropriate if you're
really talking about single bits.  If you have a field of more than a
single bit which you happen to be setting to '1' then it's not
appropriate to use BIT().
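
In other words, using the L2X0 constants from the quoted patch (the helper
name is invented):

#include <linux/bitops.h>
#include <asm/hardware/cache-l2x0.h>

static u32 build_aux_ctrl(u32 way_size)
{
        u32 aux = 0;

        /* single bits: BIT() reads fine */
        aux |= BIT(L2X0_AUX_CTRL_DATA_PREFETCH_SHIFT);
        aux |= BIT(L2X0_AUX_CTRL_INSTR_PREFETCH_SHIFT);

        /* multi-bit field: a plain shift of the value, not BIT() */
        aux |= way_size << L2X0_AUX_CTRL_WAY_SIZE_SHIFT;

        return aux;
}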


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-09 Thread Russell King - ARM Linux
On Sun, Jun 09, 2013 at 11:09:59PM +0100, luke.leighton wrote:
 this is all a rather round-about way to say that for those people who
 heard and are thinking of heeding russell's call to be silent and to
 ignore me

Okay, so you've just misrepresented me in the above comment.  I never
said anything of the sort.  The closest that I've come to a comment like
that is asking you to stop wasting people's time.

So, given your displayed inability to properly convey what people have
said to you in a discussion in your own replies, is there *any* reason
that anyone should trust you to communicate their position to any other
third party?

In some ways, I'm *glad* that you've passed everything on verbatim,
because it means that it's (hopefully) free from any misrepresentations
such as the above.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 09:02:43AM +0100, luke.leighton wrote:
  ok.  so.  we come back to the question again: what shall i propose to
 them that they consider doing, and what benefit would it be to them to
 do so?
 
  i cannot go to them and say you have to do this [insert proposal
 here] - it has to be more subtle than that.

It seems that you don't have to do anything at all.  From what Arnd and
others have said, they are _already_ talking to, and working with people
in the kernel community to move their code base forward, move to DT, and
they are already starting to send their own patches for the mainline
kernel themselves.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 10:40:37AM +0200, Vladimir Pantelic wrote:
 luke.leighton wrote:   so.
 
coming back to what you said earlier: i'm formulating what to say to
  allwinner [and need to pre-send something by monday so that they can
  consider it before the meeting].  so far, it consists of:
 
  * device-tree is what the linux kernel community has come up with, it
  is equivalent to FEX.
 
  * the linux kernel community would like to apologise for not
  consulting with you (allwinner) on the decision to only accept device
  tree

 apologize? WTF?

(I don't seem to have the original mail).

Definitely not.

The way this generally works is that discussions happen in public on
mailing lists, and people who have an interest get involved in the
discussion.  If someone decides to avoid the mailing lists because they
want to be secret about XYZ then they won't be part of the decision
making process - but that's not _our_ problem.  That is _their_ problem.

In the case of DT, there was quite a lengthy discussion about it and
its adoption.

So, either the discussion took place _before_ there was any interest in
the ARM kernel developments from Allwinner, or they themselves _chose_
not to be represented in the discussion by not being on the mailing list
or participating in the discussion.

There is nothing for us to apologise for here.

(Occasionally, the kernel community *is* a dictatorship - for example,
when Linus threatens to delete all the ARM defconfigs, or when Linus
decides that we're running out of control when adding more than 10k
lines of new code per kernel release, or - in the same way - when I see
something I really don't like, but thankfully those are fairly rare.)


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 09:48:22AM +0200, Vladimir Pantelic wrote:
 luke.leighton wrote:
   3 days remaining on the clock.

 what catastrophic thing will happen when the time runs out?

Maybe the world will explode into tiny small bits?  Probably not.  I
suspect nothing of any relevance to us.

As has already been pointed out, Allwinner is already working with
people in the kernel community to move things forward, so the right
things are already happening.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 02:49:28PM +, joem wrote:
   SoC vendors are free to join the discussion, and many SoC vendors are part
   of the kernel community, so calling this unilateral is plain wrong.
  
   you're free to believe that, vladimir.  i've explained why that
  hasn't happened, in prior messages.  can we move forward, please?
 
 I prefer it if the vladimir troll (and fellow trolls)
 make requests for one of us to make or do something instead of
 constantly whining and wasting time.
 
 After all we are attached to funds and resources ready to
 deploy if something sounds a good idea and gets a vote.
 
 To vladimir - please put your thoughts in a blog where it belongs.
 If its important, I'm sure others would offer comment
 and take you up on your thoughts.

I think your position is confused.  Everything that Vladimir has said (in
his three posts) in this thread so far has been correct.  Vladimir is not
the one doing any trolling in this thread.

Vladimir has not requested anything.  He hasn't whined.  He hasn't
wasted time.  He has stated the following in _three_ short succinct
emails:
(a) no one gets to stipulate timescales unless they're willing
to financially sponsor the work.
(b) the adopted position of the Linux kernel developers.

Luke Leighton on the other hand is demanding that we (Linux kernel
developers) come up with proposals within three days so that Luke can
act as a middle man between us and Allwinner, and is blaming the Linux
kernel community for the situation with Allwinner.

As you seem to have your facts wrong, I can only conclude that you
are also trolling... I hope I'm wrong about that and you've just made
an innocent mistake.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:02:03PM +0100, luke.leighton wrote:
  well, tough.  get me up to speed, *fast*.

No, not unless you're willing to *pay* someone to spend time teaching you,
because you are asking to be *taught* about the current situation, so
you're asking someone to do some _work_ _for_ _you_.  If you're not
willing to do that, then all the information is out there in the public
domain for you to learn from yourself.

 please stop wasting time like this:

Pot. Black.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:04:26PM +0100, luke.leighton wrote:
 On Fri, Jun 7, 2013 at 7:54 PM, Olof Johansson o...@lixom.net wrote:
  By demanding
 
  a-a-ah, no demands made.

 well, tough.  get me up to speed, *fast*.  please stop wasting time
like this: get me up to speed.

That is a demand.  Stop trolling.


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-07 Thread Russell King - ARM Linux
On Fri, Jun 07, 2013 at 08:18:14PM +0100, Luke Kenneth Casson Leighton wrote:
 On Fri, Jun 7, 2013 at 7:26 PM, Russell King - ARM Linux
 li...@arm.linux.org.uk wrote:
  Luke Leighton on the other hand is demanding that we
 
  no demands have been made, russell: i've informed you of an immovable
 deadline which will pass beyond which the opportunity being presented
 is lost.

 well, tough.  get me up to speed, *fast*.  please stop wasting time
like this: get me up to speed.

That is a demand.  On the other hand this would not be:

"Can someone please get me up to speed on this?"  That is a request.

Sorry, you've just lost some more credibility.

Please let those who are already working with Allwinner continue to
work with them rather than spending time needlessly with someone who
clearly doesn't have any idea about what they say even from one moment
to the next.


Re: [PATCH] ARM: tegra: add basic SecureOS support

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 04:28:07PM +0900, Alexandre Courbot wrote:
 +static int __attribute__((used)) __tegra_smc_stack[10];
 +
 +/*
 + * With EABI, subtype and arg already end up in r0, r1 and r2 as they are
 + * function arguments, but we prefer to play safe here and explicitly move
 + * these values into the expected registers anyway. mov instructions without
 + * any side-effect are turned into nops by the assembler, which limits
 + * overhead.

No they aren't.  They will be preserved as:
mov r0, r0
mov r1, r1
mov r2, r2

Moreover, things will go wrong if the compiler decides for whatever reason
to move 'arg' into r0 before calling your code sequence.  So really this
is quite fragile.

There's also no point in mentioning EABI in the above paragraph; all ARM
ABIs under Linux have always placed the arguments in r0..r3 and then on
the stack.  You can assert that this is always true by using the __asmeq()
macro.

Also...

 + */
 +static void tegra_generic_smc(u32 type, u32 subtype, u32 arg)
 +{
 + asm volatile(
 + ".arch_extension sec\n\t"
 + "ldr r3, =__tegra_smc_stack\n\t"
 + "stmia r3, {r4-r12, lr}\n\t"

using a statically allocated area to save the registers in this way is
probably a bad move; although the hotplug code guarantees that there
will be no concurrency between CPU hotplug operations, this sets a
precident for it being used in places where no such protection exists.

What is wrong with stacking r4-r12, lr onto the SVC stack?  You don't
save the SVC stack pointer, so one can only assume that your SMC
implmentation preserves this (if it doesn't, that's yet another bug
with this assembly code.)

Combining these two issues, you're probably far better off using an
__attribute__((naked)) function for this - which means you get to
write the entire function in assembly code without any compiler
interference:

static void __attribute__((naked)) tegra_generic_smc(u32 type, u32 subtype, u32 arg)
{
        asm volatile(
                ".arch_extension sec\n\t"
                "stmfd  sp!, {r4 - r12, lr}\n\t"
                __asmeq("%0", "r0")
                __asmeq("%1", "r1")
                __asmeq("%2", "r2")
                "mov    r3, #0\n\t"
                "mov    r4, #0\n\t"
                "dsb\n\t"
                "smc    #0\n\t"
                "ldmfd  sp!, {r4 - r12, pc}"
                :
                : "r" (type), "r" (subtype), "r" (arg)
                : "memory");
}


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 01:24:57PM +0100, luke.leighton wrote:
 On Thu, Jun 6, 2013 at 1:01 AM, Tomasz Figa tomasz.f...@gmail.com wrote:
 
  I don't see any other solution here than moving all the Allwinner code to
  DT (as it has been suggested in this thread several times already), as
  this is the only hardware description method supported by ARM Linux.
 
  i repeat again: please state, explicitly and unequivocably that you -
 linux kernel developers - are happy that the reach of linux and
 gnu/linux OSes is dramatically reduced due to this intransigent
 position.

If companies are going to go off and invent the square wheel, and that
makes *them* suffer the loss of being able to merge back into the
mainline kernel, thereby making *their* job of moving forward with
their kernel versions much harder, then yes, we *are* happy.

Companies will always do idiotic things; it's *them* and their users
who have to live with the results of that bad decision making process.

Eventually, the pain of being outside the mainline kernel will become
too great, especially if their users demand things like kernel upgrades
from them.  Eventually, they will change.

And no, this isn't an intransigent position.  This is reality given
that ARM already has way too much code in the Linux kernel and we're
trying to reduce that through the use of technologies like DT.  The
last thing we need right now is for another DT like implementation
to come along which is only used on a minority of platforms - even if
the platform it is used on is successful.

The way this works is this:
- you either go with the policies which are set, or
- you change the policy as a whole because you have a technically
  superior solution

What that means in this case is either you adopt DT, or you convince
everyone that DT isn't the solution, but your solution is, and we adopt
your solution for *everything* instead.

If neither of those are possible, then that's really not our problem,
and it's not _us_ who are being intransigent.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [Arm-netbook] getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-06 Thread Russell King - ARM Linux
On Thu, Jun 06, 2013 at 01:22:04PM +0100, luke.leighton wrote:
 On Thu, Jun 6, 2013 at 1:19 AM, Henrik Nordström
 hen...@henriknordstrom.net wrote:
  Thu 2013-06-06 at 00:54 +0100, luke.leighton wrote:
 
   Not really the case. Actually the opposite. DT have this as well, and
   integrated in device probing. Allwinner need to hack every driver used
   to add their gpio requests to have pinmuxing triggered.
 
   augh.  ok.  solutions.  what are the solutions here?
 
  What I said before.
 
  idea: hook into devicetree gpio functions to allow script-fex gpio
 functions to gain access in a separate module?  that sort of thing.
 
  Go with DT for the kernel. There is no need for two configuration
  mechanisms doing the same thing. Disguise it in fex form (and
  translator) if too hard for people with a DOS editor to configure.
 
  what methods for doing that.  i need proposals.  4 days on the clock.

No.

If you want to set time scales, please put up money to fund the work to
come up with those proposals.

Virtually no one here is a charity; the charity days of open source are
long gone.  Everyone needs money to put food in their mouths, and the
way this works is that those who pay the most get the time.  That's the
nature of an open and free market.

What's more is that you have been given some good proposals already:
converting the fex description to DT for the kernel.  Wow, that means you
can use the work which has already been done in the mainline kernel for
free!  How cool is that!
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-05 Thread Russell King - ARM Linux
On Wed, Jun 05, 2013 at 03:00:13PM -0600, Stephen Warren wrote:
 2) Having U-Boot itself read a DT and configure itself, just like the
 kernel does. This is relatively new, and only supported by a few boards
 (all Tegra to some extent, and a couple each Samsung Exynos and Xilinx
 boards). I suspect/guess this is the kind of thing that Luke was
 referring to re: U-Boot fex integration.

Reading what I have of this thread, this is just another case of
$company running off and doing something their own unique way,
which is in a totally different direction from that path chosen by
Linux kernel developers, and Linux kernel developers are _expected_
to roll over and accept $company's unique way of doing it.

$company could have assisted with the DT effort, helping to sort out
the common arch issues (which haven't been all that much), converting
drivers to DT and such like.  But no, instead they've gone off and
created their own thing.

I wonder how many more of these cases there needs to be before people
get the message that Linux kernel developers *do* *not* like this
behaviour, and if you do this, then chances are you're going to be
stuck with having code which isn't able to be merged into mainline.

And I don't buy the argument that we were still sorting out DT.  DT has
been well defined for many many years before we started using it on ARM.
It has been used for years on both PowerPC and Sparc architectures to
describe their hardware, and all of the DT infrastructure was already
present in the kernel.  What was left for us were:

* converting our platform-data based drivers to use data from DT.
* come up with ways of dealing with SoC issues such as clock
  representation, pin muxing and such like in DT.

But no... all that had to be created in this custom fex stuff which now
presents a barrier with getting support for something merged.

And somehow people make out that this is _our_ problem...
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: getting allwinner SoC support upstream (was Re: Uploading linux (3.9.4-1))

2013-06-05 Thread Russell King - ARM Linux
On Wed, Jun 05, 2013 at 05:38:45PM -0400, Lennart Sorensen wrote:
 I haven't personally dealt with any nvidia arm devices, so I have no
 idea how those are turning out, nor have I looked much at the marvell
 ones yet (even though I have a cubox sitting on my desk I intend to play
 around with).

Cubox is Dove, which dates from way before DT on ARM, and there's
relatively few people working on Dove support, so progress towards
DT support is rather slow there.  I believe it to be at the point
where it's possible to boot the Cubox using DT, but not all features
are supported.

But then, not all features of the Dove SoC are supported in mainline
either; the port essentially stopped after a little more than basic
support was merged.  Eg, no hardware monitoring driver, no video
drivers, etc...
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/3] ARM: msm: Re-organize platsmp to make it extensible

2013-06-04 Thread Russell King - ARM Linux
On Mon, Jun 03, 2013 at 05:19:44PM -0700, Rohit Vaswani wrote:
 + sc1_base_ptr = of_iomap(dn, 0);
 + if (sc1_base_ptr) {
 + writel_relaxed(0, sc1_base_ptr + VDD_SC1_ARRAY_CLAMP_GFS_CTL);
 + writel_relaxed(0, sc1_base_ptr + SCSS_CPU1CORE_RESET);
 + writel_relaxed(3, sc1_base_ptr + SCSS_DBG_STATUS_CORE_PWRDUP);
 + mb();
 + iounmap(sc1_base_ptr);

If you need to fiddle with power rails and resets for your secondary
core, you don't need any of the pen_release stuff, and you really
should get rid of it.  The pen_release stuff is only there for
platforms where there's no proper way of controlling the secondary
CPUs except by using a software method.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/2] net: mv643xx_eth: proper initialization for Kirkwood SoCs

2013-05-24 Thread Russell King - ARM Linux
On Fri, May 24, 2013 at 01:01:25PM -0400, Jason Cooper wrote:
 On Fri, May 24, 2013 at 01:03:25PM +0200, Linus Walleij wrote:
  IMO: if you want to go down that road, what you really want is not
  ever more expressible device trees, but real open firmware,
  or ACPI or UEFI that can interpret and run bytecode as some
  bios for you. With DT coming from OF maybe this is a natural
  progression of things, but one has to realize when we reach the
  point where what we really want is a bios. Then your time is
  likely better spent with Tianocore or something than with the
  kernel.
 
 shudder.  I like embedded systems because the *don't* have a bios.

Then you're into scenarios like I have with my laptop, where - those
of you who check the nightly build results will have noticed - one
of my serial ports doesn't always exist.  That's because the ACPI data
in the BIOS is *wrong*.  It reports that it has been enabled when it
hasn't, and the disassembled byte code is at fault here.

The fix?  God knows.  As far as I'm concerned as a user, or even as an
OS developer, it's unfixable without getting the ACPI data structures
changed, and that's not something I can do.

Do you really want that on ARM?  Given the fiasco with the location of
the registers, are you sure you want to place more trust in that
direction?  Does it give you a warm fuzzy feeling to know that you
might have to work out some way to patch vendor supplied bytecode?
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH V2 1/6] drivers: bus: add a new driver for WEIM

2013-05-23 Thread Russell King - ARM Linux
On Thu, May 23, 2013 at 04:16:13PM +0800, Huang Shijie wrote:
 + /* get the clock */
 + weim->clk = devm_clk_get(&pdev->dev, NULL);
 + if (IS_ERR(weim->clk))
 + goto weim_err;
 +
 + clk_prepare_enable(weim->clk);

I notice people are getting lazy about this.  clk_prepare_enable() can
return an error...
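
A minimal sketch of the kind of checking meant here (the error label comes
from the quoted hunk, the rest is only illustrative):

        weim->clk = devm_clk_get(&pdev->dev, NULL);
        if (IS_ERR(weim->clk))
                goto weim_err;

        ret = clk_prepare_enable(weim->clk);
        if (ret)
                goto weim_err;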
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH V2] ARM: bcm281xx: Add L2 support for Rev A2 chips

2013-05-10 Thread Russell King - ARM Linux
On Thu, May 09, 2013 at 01:44:34PM -0700, Olof Johansson wrote:
 On Thu, May 02, 2013 at 05:57:33PM -0700, Christian Daudt wrote:
  Rev A2 SoCs have an unorthodox memory re-mapping and this needs
  to be reflected in the cache operations.
  This patch adds new outer cache functions for the l2x0 driver
  to support this SoC revision. It also adds a new compatible
  value for the cache to enable this functionality.
  
  Updates from V1:
  - remove section 1 altogether and note that in comments
  - simplify section selection caused by section 1 removal
  - BUG_ON just in case section 1 shows up
  
  Signed-off-by: Christian Daudt c...@broadcom.com
 
 This patch mostly covers code that is on Russells plate, so please feed
 this to his tracker to be picked up by him. Feel free to add:

Yes, but there's a problem.  I don't have bcm11351.dtsi to apply this
patch to.  This is one of the problems of splitting the SoC stuff
from core stuff - either I start doing cross-merges (which cause Linus
and everyone else pain as we've seen this time around) or we have to
stuff everything through a single tree.

Not happy.

Not applying until after -rc1.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC 1/8] serial:st-asc: Add ST ASC driver.

2013-05-10 Thread Russell King - ARM Linux
On Wed, May 08, 2013 at 09:36:13AM -0700, Greg KH wrote:
 On Wed, May 08, 2013 at 06:31:48PM +0200, Arnd Bergmann wrote:
  On Wednesday 08 May 2013, Greg KH wrote:
just mention there is no hardware reason not to use the generic ttySx
in place of ttyAS, as we have only one IP that handles serial on this
family of SoCs

personally I'll switch to ttySx
   
   Great, then you can use the same major/minor range as well, so there's
   no more objection from me about this :)
  
  Does that work these days when you have kernel with multiple built-in
  uart drivers?
 
 It should, as the major/minor registration should only happen when the
 hardware is found, but I haven't tested it out, so I can't say for sure.

serial stuff has never operated like that.  More specifically, it's a
limitation with the tty stuff that the way stuff works is that a
tty driver can only drive a single bunch of contiguous minor numbers.
No interleaving is allowed.

That limitation has existed for years, and I don't see it going away.
As long as that limitation exists, you can only ever have one serial
driver driving a set of contiguous minor numbers.

There has been an attempt to work around this by making the 8250
driver special which was a complete hack to get it to work.  That
was while I maintained this stuff and I outright refused to make one
serial driver magically special.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v2 1/5] irqchip: add support for Marvell Orion SoCs

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 01:48:35AM +0200, Sebastian Hesselbarth wrote:
 This patch adds an irqchip driver for the main interrupt controller found
 on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
 Corresponding device tree documentation is also added.
 
 Signed-off-by: Sebastian Hesselbarth sebastian.hesselba...@gmail.com
 ---
 Note: This patch triggers a checkpatch warning for
   WARNING: Avoid CamelCase: handle_IRQ
 
 Changelog:
 v1-v2:
 - rename compatible string to marvell,orion-intc (Suggested by Jason 
 Gunthorpe)
 - request mem regions for irq base registers (Suggested by Jason Gunthorpe)
 - make orion_handle_irq static (Suggested by Jason Gunthorpe)
 - make use of IRQCHIP_DECLARE (Suggested by Jason Gunthorpe)

It would still be a good idea to convert this to use the generic chip
stuff which Thomas created, though exactly how is hard to see at the
moment.

 +static void orion_irq_mask(struct irq_data *irqd)
 +{
 + unsigned int irq = irqd_to_hwirq(irqd);
 + unsigned int irq_off = irq % 32;
 + int reg = irq / 32;
 + u32 val;
 +
 + val = readl(orion_irq_base[reg] + ORION_IRQ_MASK);
 + writel(val & ~(1 << irq_off), orion_irq_base[reg] + ORION_IRQ_MASK);
 +}

This could be replaced with irq_gc_mask_clr_bit().

 +
 +static void orion_irq_unmask(struct irq_data *irqd)
 +{
 + unsigned int irq = irqd_to_hwirq(irqd);
 + unsigned int irq_off = irq % 32;
 + int reg = irq / 32;
 + u32 val;
 +
 + val = readl(orion_irq_base[reg] + ORION_IRQ_MASK);
 + writel(val | (1 << irq_off), orion_irq_base[reg] + ORION_IRQ_MASK);
 +}

This with irq_gc_mask_set_bit().

 +
 +static struct irq_chip orion_irq_chip = {
 + .name   = "orion_irq",
 + .irq_mask   = orion_irq_mask,
 + .irq_unmask = orion_irq_unmask,
 +};
 +
 +static int orion_irq_map(struct irq_domain *d, unsigned int virq,
 +  irq_hw_number_t hw)
 +{
 + irq_set_chip_and_handler(virq, orion_irq_chip,
 +  handle_level_irq);
 + set_irq_flags(virq, IRQF_VALID | IRQF_PROBE);

This is where it starts to get tricky, because I can't see how you'd
merge the irq_alloc_generic_chip() and irq_setup_generic_chip() with
this.  Maybe you don't need to do anything here and just do all that
in orion_of_init() instead?  But then you seem to need to know the
virq range before hand, and I can't see how that's known.  Maybe Thomas
can provide some enlightenment about how the gc irqchip stuff works
with the irq domain stuff...

However, you wouldn't need the statically defined orion_irq_chip nor
orion_irq_base[] either, because those would be held within the gc
irqchip stuff and stored in the upper layer.
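
For illustration only, a rough sketch of the generic chip route, assuming a
fixed, pre-allocated virq range (which is exactly the open question with the
irq domain above); the irq_base and bank index names are made up here:

        struct irq_chip_generic *gc;

        gc = irq_alloc_generic_chip("orion_irq", 1, irq_base,
                                    orion_irq_base[n], handle_level_irq);
        if (!gc)
                return -ENOMEM;

        gc->chip_types[0].regs.mask = ORION_IRQ_MASK;
        gc->chip_types[0].chip.irq_mask = irq_gc_mask_clr_bit;
        gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit;

        irq_setup_generic_chip(gc, IRQ_MSK(32), IRQ_GC_INIT_MASK_CACHE,
                               IRQ_NOREQUEST, 0);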
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC patch 7/8] genirq: generic chip: Add linear irq domain support

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 09:50:53PM -, Thomas Gleixner wrote:
 + /* Init mask cache ? */
 + if (dgc->gc_flags & IRQ_GC_INIT_MASK_CACHE) {
 + raw_spin_lock_irqsave(&gc->lock, flags);
 + gc->mask_cache = irq_reg_readl(gc->reg_base + ct->regs.mask);
 + raw_spin_unlock_irqrestore(&gc->lock, flags);
 + }

This looks a little weird to me - it seems that it'll re-read this
each time any irq is mapped in the domain, which is probably not
wanted.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC patch 4/8] genirq: generic chip: Cache per irq bit mask

2013-05-03 Thread Russell King - ARM Linux
On Fri, May 03, 2013 at 09:50:50PM -, Thomas Gleixner wrote:
 - u32 mask = ~(1 << (d->irq - gc->irq_base));
 + u32 mask = ~(d->mask);

I suspect the above changes are all a result of a search-and-replace
which results in the cosmetic parens remaining.  Would be nice to get
rid of them too as they're completely unnecessary.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH] irqchip: add support for Marvell Orion SoCs

2013-05-02 Thread Russell King - ARM Linux
On Thu, May 02, 2013 at 08:33:48PM +0200, Sebastian Hesselbarth wrote:
 On 05/02/2013 08:25 PM, Sebastian Hesselbarth wrote:
 This patch adds an irqchip driver for the main interrupt controller found
 on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
 Corresponding device tree documentation is also added.

 Signed-off-by: Sebastian Hesselbarth <sebastian.hesselba...@gmail.com>
 ---
 Note: This patch triggers a checkpatch warning for
   WARNING: Avoid CamelCase: handle_IRQ

 [...]
 diff --git a/include/linux/irqchip/orion.h b/include/linux/irqchip/orion.h
 new file mode 100644
 index 000..04f7bab
 --- /dev/null
 +++ b/include/linux/irqchip/orion.h
 @@ -0,0 +1,18 @@
 +/*
 + * Marvell Orion SoCs IRQ chip driver header.
 + *
 + * Sebastian Hesselbarth <sebastian.hesselba...@gmail.com>
 + *
 + * This file is licensed under the terms of the GNU General Public
 + * License version 2.  This program is licensed as is without any
 + * warranty of any kind, whether express or implied.
 + */
 +
 +#ifndef __LINUX_IRQCHIP_ORION_H
 +#define __LINUX_IRQCHIP_ORION_H
 +
 +#include <asm/exception.h>

 First review by myself. The above include is a left-over and
 will be removed in a v2.

You still need your first level IRQ handlers marked with __exception_irq_entry
which is defined in the above file.
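
i.e. something along these lines in the driver itself (a sketch only; the
cause register name and the domain variable are assumptions, not taken from
the patch):

        #include <asm/exception.h>

        static void __exception_irq_entry orion_handle_irq(struct pt_regs *regs)
        {
                u32 stat = readl_relaxed(orion_irq_base[0] + ORION_IRQ_CAUSE);

                if (stat)
                        handle_IRQ(irq_find_mapping(orion_irq_domain,
                                                    ffs(stat) - 1), regs);
        }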
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH] irqchip: add support for Marvell Orion SoCs

2013-05-02 Thread Russell King - ARM Linux
On Thu, May 02, 2013 at 08:54:20PM +0200, Sebastian Hesselbarth wrote:
 On 05/02/13 20:45, Russell King - ARM Linux wrote:
 On Thu, May 02, 2013 at 08:33:48PM +0200, Sebastian Hesselbarth wrote:
 On 05/02/2013 08:25 PM, Sebastian Hesselbarth wrote:
 This patch adds an irqchip driver for the main interrupt controller found
 on Marvell Orion SoCs (Kirkwood, Dove, Orion5x, Discovery Innovation).
 Corresponding device tree documentation is also added.

 Signed-off-by: Sebastian Hesselbarth <sebastian.hesselba...@gmail.com>
 ---
 Note: This patch triggers a checkpatch warning for
 WARNING: Avoid CamelCase: handle_IRQ

 [...]
 diff --git a/include/linux/irqchip/orion.h b/include/linux/irqchip/orion.h
 new file mode 100644
 index 000..04f7bab
 --- /dev/null
 +++ b/include/linux/irqchip/orion.h
 @@ -0,0 +1,18 @@
 +/*
 + * Marvell Orion SoCs IRQ chip driver header.
 + *
 + * Sebastian Hesselbarth <sebastian.hesselba...@gmail.com>
 + *
 + * This file is licensed under the terms of the GNU General Public
 + * License version 2.  This program is licensed as is without any
 + * warranty of any kind, whether express or implied.
 + */
 +
 +#ifndef __LINUX_IRQCHIP_ORION_H
 +#define __LINUX_IRQCHIP_ORION_H
 +
 +#include <asm/exception.h>

 First review by myself. The above include is a left-over and
 will be removed in a v2.

 You still need your first level IRQ handlers marked with 
 __exception_irq_entry
 which is defined in the above file.


 Russell,

 I know and it is marked with __exception_irq_entry. The above is in
 include/linux/irqchip/orion.h and only used for .init_irq in machine
 descriptor later.

 The irq handler is never exposed to the board file itself, but set
 within orion_init_irq. This approach has been taked by
 irqchip/irq-gic.c and irqchip/irq-vic.c rather than adding
 .handle_irq to the machine descriptor.

But I don't find an asm/exception.h include in
drivers/irqchip/<whatever your .c file is called>
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 1/2] GPIO: clps711x: Rewrite driver for using generic GPIO code

2013-04-25 Thread Russell King - ARM Linux
On Thu, Apr 25, 2013 at 03:31:07PM +0400, Alexander Shiyan wrote:
  On Sat, Apr 13, 2013 at 08:21:39AM +0400, Alexander Shiyan wrote:
   +static void __init clps711x_add_gpio(void)
   +{
   + const struct resource clps711x_gpio0_res[] = {
   + DEFINE_RES_MEM(PADR, SZ_1),
   + DEFINE_RES_MEM(PADDR, SZ_1),
   + };
 ...
  Don't do this - this is incredibly wasteful.
  
  C lesson 1: do not put unmodified initialised structures onto the stack.
  
  What the C compiler will do with the above is exactly the same as this
  for each structure:
  
  static const struct resource private_clps711x_gpio4_res[] = {
  DEFINE_RES_MEM(PEDR, SZ_1),
  DEFINE_RES_MEM(PEDDR, SZ_1),
  };
  
  static void __init clps711x_add_gpio(void)
  {
  const struct resource clps711x_gpio4_res[] = private_clps711x_gpio4_res;
  ...
  
  which will in itself be a call out to memcpy, or an inlined memcpy.  Now
  do you see why it's wasteful?  You might as well write the thing in that
  way to start with and save the additional code to copy the structures.
  
  The other way to do this, which will probably be far more space efficient:
  
  static phys_addr_t gpio_addrs[][2] = {
  { PADR, PADDR },
  { PBDR, PBDDR },
  ...
  };
  
  static void __init clps711x_add_gpio(void)
  {
  struct resource gpio_res[2];
  unsigned i;
  
  gpio_res[0].flags = IORESOURCE_MEM;
  gpio_res[1].flags = IORESOURCE_MEM;
  
  for (i = 0; i < ARRAY_SIZE(gpio_addrs); i++) {
  gpio_res[0].start = gpio_addrs[i][0];
  gpio_res[0].end = gpio_res[0].start;
  gpio_res[1].start = gpio_addrs[i][1];
  gpio_res[1].end = gpio_res[1].start;
  
  platform_device_register_simple("clps711x-gpio", i,
  gpio_res, ARRAY_SIZE(gpio_res));
  }
  }
  
  which results in smaller storage and most probably also smaller code size
  too.
 
 Very strange results with this change.
 So, I add a debug output before platform_device_register_simple for
 print resource and in __request_resource for print parent.
 This is output.
 gpio 0 [mem 0x8000] [mem 0x8040]
 resource: root [??? 0xe3a0f000-0x flags 0xb0]
 clps711x-gpio.0: failed to claim resource 0
 
 The first line is seems correct. But I do not understand why
 parent is wrong here. Normally it should be as:
 resource: root [mem 0x-0x].
 And it shows normal with my first version.
 Have anyone any ideas?

memset(gpio_res, 0, sizeof(gpio_res));
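
That is, in the loop sketched earlier, clear the on-stack resources before
filling them in, so stale stack contents don't end up in fields such as
.parent (sketch):

        struct resource gpio_res[2];

        memset(gpio_res, 0, sizeof(gpio_res));
        gpio_res[0].flags = IORESOURCE_MEM;
        gpio_res[1].flags = IORESOURCE_MEM;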
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH] of: Set the DMA mask to 64 bits on ARM LPAE systems

2013-04-25 Thread Russell King - ARM Linux
On Thu, Apr 25, 2013 at 10:09:57AM -0700, Laura Abbott wrote:
 I thought about this as well but in arch/arm/mm/mm.h:

 #ifdef CONFIG_ZONE_DMA
 extern phys_addr_t arm_dma_limit;
 #else
 #define arm_dma_limit ((phys_addr_t)~0)
 #endif

 arm_dma_limit is explicitly cast to phys_addr_t, which means that  
 arm_dma_limit will be always be sizeof(phys_addr_t) regardless of  
 sizeof(dma_addr_t).

This is the size of the DMA zone, which controls the _minimum_ DMA mask
that can be used in the system.  DMA masks must always be greater than
this limit.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC PATCH] drivers: bus: add ARM CCI support

2013-04-22 Thread Russell King - ARM Linux
On Thu, Apr 18, 2013 at 01:54:04PM -0400, Nicolas Pitre wrote:
 On Thu, 18 Apr 2013, Stephen Boyd wrote:
 
  On 04/11/13 07:47, Lorenzo Pieralisi wrote:
   diff --git a/drivers/bus/arm-cci.c b/drivers/bus/arm-cci.c
   new file mode 100644
   index 000..81953de
   --- /dev/null
   +++ b/drivers/bus/arm-cci.c
  [...]
   +static void notrace cci_port_control(unsigned int port, bool enable)
   +{
   + void __iomem *base = ports[port].base;
   +
   + if (!base)
   + return;
   +
   + writel_relaxed(enable, base + CCI_PORT_CTRL);
 + while (readl_relaxed(cci_ctrl_base + CCI_CTRL_STATUS) & 0x1)
   + ;
  
  cpu_relax()?
 
 In some cases there is no more cache coherence when this is called and 
 the hardware might not be in a good state to cope with whatever action 
 people might be tempted to insert into cpu_relax().  After the CCI is 
 disabled it is important to keep a low profile and not touch anything 
 global. With some early hardware revision, even a DMB here was harmful.

It would be useful if there was a comment in the code explaining this.
As it stands, you _will_ get a stream of patches from kernel janitors
itching to add cpu_relax() there.

If you're going to do something different from the norm, it _needs_ to
definitely be commented and explained.
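
Something as simple as this would do (a sketch of the suggested comment,
placed in cci_port_control()):

        writel_relaxed(enable, base + CCI_PORT_CTRL);

        /*
         * This busy loop deliberately avoids cpu_relax(): by the time it
         * runs, cache coherency may already be disabled and early CCI
         * revisions cannot tolerate barriers or anything else global,
         * so keep the polling as dumb as possible.
         */
        while (readl_relaxed(cci_ctrl_base + CCI_CTRL_STATUS) & 0x1)
                ;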
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC v2] video: ARM CLCD: Add DT CDF support

2013-04-22 Thread Russell King - ARM Linux
On Thu, Apr 18, 2013 at 06:33:21PM +0100, Pawel Moll wrote:
 This patch adds basic DT bindings for the PL11x CLCD cells
 and make their fbdev driver use them, together with the
 Common Display Framework.
 
 The DT provides information about the hardware configuration
 and limitations (eg. the largest supported resolution)
 but the video modes come exclusively from the Common
 Display Framework drivers, referenced to by the standard CDF
 binding.
 
 Signed-off-by: Pawel Moll pawel.m...@arm.com

Much better.

I will point out though that there be all sorts of worms here when you
come to the previous ARM evaluation boards (which is why the capabilities
stuff got written in the first place) where there's a horrid mixture of
BGR/RGB ordering at various levels of the system - some of which must be
set correctly because the CLCD output isn't strictly used as R bits
G bits and B bits (to support different formats from the CLCD's native
formats.)
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 1/2] GPIO: clps711x: Rewrite driver for using generic GPIO code

2013-04-19 Thread Russell King - ARM Linux
On Sat, Apr 13, 2013 at 08:21:39AM +0400, Alexander Shiyan wrote:
 +static void __init clps711x_add_gpio(void)
 +{
 + const struct resource clps711x_gpio0_res[] = {
 + DEFINE_RES_MEM(PADR, SZ_1),
 + DEFINE_RES_MEM(PADDR, SZ_1),
 + };
 + const struct resource clps711x_gpio1_res[] = {
 + DEFINE_RES_MEM(PBDR, SZ_1),
 + DEFINE_RES_MEM(PBDDR, SZ_1),
 + };
 + const struct resource clps711x_gpio2_res[] = {
 + DEFINE_RES_MEM(PCDR, SZ_1),
 + DEFINE_RES_MEM(PCDDR, SZ_1),
 + };
 + const struct resource clps711x_gpio3_res[] = {
 + DEFINE_RES_MEM(PDDR, SZ_1),
 + DEFINE_RES_MEM(PDDDR, SZ_1),
 + };
 + const struct resource clps711x_gpio4_res[] = {
 + DEFINE_RES_MEM(PEDR, SZ_1),
 + DEFINE_RES_MEM(PEDDR, SZ_1),
 + };

Don't do this - this is incredibly wasteful.

C lesson 1: do not put unmodified initialised structures onto the stack.

What the C compiler will do with the above is exactly the same as this
for each structure:

static const struct resource private_clps711x_gpio4_res[] = {
DEFINE_RES_MEM(PEDR, SZ_1),
DEFINE_RES_MEM(PEDDR, SZ_1),
};

static void __init clps711x_add_gpio(void)
{
const struct resource clps711x_gpio4_res[] = private_clps711x_gpio4_res;
...

which will in itself be a call out to memcpy, or an inlined memcpy.  Now
do you see why it's wasteful?  You might as well write the thing in that
way to start with and save the additional code to copy the structures.

The other way to do this, which will probably be far more space efficient:

static phys_addr_t gpio_addrs[][2] = {
{ PADR, PADDR },
{ PBDR, PBDDR },
...
};

static void __init clps711x_add_gpio(void)
{
struct resource gpio_res[2];
unsigned i;

gpio_res[0].flags = IORESOURCE_MEM;
gpio_res[1].flags = IORESOURCE_MEM;

for (i = 0; i < ARRAY_SIZE(gpio_addrs); i++) {
gpio_res[0].start = gpio_addrs[i][0];
gpio_res[0].end = gpio_res[0].start;
gpio_res[1].start = gpio_addrs[i][1];
gpio_res[1].end = gpio_res[1].start;

platform_device_register_simple("clps711x-gpio", i,
gpio_res, ARRAY_SIZE(gpio_res));
}
}

which results in smaller storage and most probably also smaller code size
too.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCHv8 07/19] arm: pci: add a align_resource hook

2013-04-19 Thread Russell King - ARM Linux
On Mon, Apr 15, 2013 at 12:36:09PM -0400, Jason Cooper wrote:
 Russell,
 
 Thanks for applying this patch (7683/1) to your tree.  I see it's in
 your for-next, which I understand *isn't* stable.

That is correct.

   029baf1 ARM: 7683/1: pci: add a align_resource hook
 
 Is there a stable branch I could depend on for the rest of this series
 (particularly, patch 10)?  It looks like you applied it to your misc
 branch, but that's not currently available.

I will have to look at that; as I'm still catching up with mail three
days after having returned and there's still quite an amount outstanding,
it's going to be a while before I can look at this.  Given that we're at
-rc7 and the amount of outstanding mail, it's likely that may happen
after the merge window has opened.

___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC 06/10] video: ARM CLCD: Add DT CDF support

2013-04-18 Thread Russell King - ARM Linux
On Wed, Apr 17, 2013 at 04:17:18PM +0100, Pawel Moll wrote:
 +#if defined(CONFIG_OF)
 +static int clcdfb_of_get_tft_parallel_panel(struct clcd_panel *panel,
 + struct display_entity_interface_params *params)
 +{
 + int r = params->p.tft_parallel.r_bits;
 + int g = params->p.tft_parallel.g_bits;
 + int b = params->p.tft_parallel.b_bits;
 +
 + /* Bypass pixel clock divider, data output on the falling edge */
 + panel->tim2 = TIM2_BCD | TIM2_IPC;
 +
 + /* TFT display, vert. comp. interrupt at the start of the back porch */
 + panel->cntl |= CNTL_LCDTFT | CNTL_LCDVCOMP(1);
 +
 + if (params->p.tft_parallel.r_b_swapped)
 + panel->cntl |= CNTL_BGR;

NAK.  Do not set this explicitly.  Note the driver talks about this being
the old way - this should not be used with the panel capabilities - and
in fact it will be overwritten.

Instead, you need to encode this into the capabilities by masking the
below with CLCD_CAP_RGB or CLCD_CAP_BGR depending on the ordering.
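
i.e. roughly (a sketch; how the rest of the capability mask is derived from
the r/g/b widths is left out here):

        if (params->p.tft_parallel.r_b_swapped)
                panel->caps &= CLCD_CAP_BGR;
        else
                panel->caps &= CLCD_CAP_RGB;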
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 1/6] of/pci: Provide support for parsing PCI DT ranges property

2013-03-23 Thread Russell King - ARM Linux
On Sat, Mar 23, 2013 at 01:04:53PM +0900, Jingoo Han wrote:
 - switch ((pci_space >> 24) & 0x3) {
 - case 1: /* PCI IO space */
 + if (iter.flags & IORESOURCE_IO) {

Please look at how IORESOURCE_* stuff is defined:
#define IORESOURCE_TYPE_BITS0x1f00  /* Resource type */
#define IORESOURCE_IO   0x0100  /* PCI/ISA I/O ports */
#define IORESOURCE_MEM  0x0200
#define IORESOURCE_REG  0x0300  /* Register offsets */
#define IORESOURCE_IRQ  0x0400
#define IORESOURCE_DMA  0x0800
#define IORESOURCE_BUS  0x1000

Notice that it's not an array of bits.

So this should be:
if ((iter.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO) {

 + iter.cpu_addr = iter.pci_addr;
 + } else if (iter.flags & IORESOURCE_MEM) {

And this should be:
} else if ((iter.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM) {

 + if (iter.flags & IORESOURCE_IO) {

Same here.

 + } else if (iter.flags & IORESOURCE_MEM) {

And again here.

 - switch ((pci_space >> 24) & 0x3) {
 - case 1: /* PCI IO space */
 + if (iter.flags & IORESOURCE_IO) {

ditto.

 + iter.cpu_addr = iter.pci_addr;
 + } else if (flags & IORESOURCE_MEM) {

ditto.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/5] arm: mvebu: Select DMA_BOUNCE when LPAE is selected in Kconfig

2013-03-22 Thread Russell King - ARM Linux
On Thu, Mar 21, 2013 at 05:26:15PM +0100, Gregory CLEMENT wrote:
 From: Lior Amsalem al...@marvell.com
 
 For mvebu IOs are 32 bits and we have 40 bits memory due to LPAE so
 make sure we give 32 bits addresses to the IOs.
 
 Signed-off-by: Lior Amsalem al...@marvell.com
 Tested-by: Franklin f...@marvell.com
 Signed-off-by: Gregory CLEMENT gregory.clem...@free-electrons.com

Oh god no.  Please move away from the addition on DMABOUNCE - that code
creaks, doesn't have highmem support, and is known to give problems on
various platforms.

Instead, please rely on using the DMA mask and such like, just like on
x86.
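
The x86-like approach being referred to is simply to have the device declare
what it can address and leave the rest to the DMA API, e.g. (sketch):

        /* the IO masters can only address the low 32 bits */
        ret = dma_set_mask(dev, DMA_BIT_MASK(32));
        if (ret)
                return ret;

        ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
        if (ret)
                return ret;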
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 03/11] ARM: timer-sp: convert to use CLKSRC_OF init

2013-03-22 Thread Russell King - ARM Linux
On Thu, Mar 21, 2013 at 09:31:01PM -0500, Rob Herring wrote:
 Perhaps re-writing it like this would be more clear:
 
 if (irq_num == 2){
   __sp804_clockevents_init(base + TIMER_2_BASE, irq, clk1, name);
   __sp804_clocksource_and_sched_clock_init(base, name, clk0, 1);
 } else {
   __sp804_clockevents_init(base, irq, clk0, name);
   __sp804_clocksource_and_sched_clock_init(base + TIMER_2_BASE,
   name, clk1, 1);
 }

Yes, that is definitely much clearer than the original patch.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 03/11] ARM: timer-sp: convert to use CLKSRC_OF init

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:03PM -0500, Rob Herring wrote:
 + clk0 = of_clk_get(np, 0);
 + if (IS_ERR(clk0))
 + clk0 = NULL;
 +
 + /* Get the 2nd clock if the timer has 2 timer clocks */
 + if (of_count_phandle_with_args(np, "clocks", "#clock-cells") == 3) {
 + clk1 = of_clk_get(np, 1);
 + if (IS_ERR(clk1)) {
 + pr_err("sp804: %s clock not found: %d\n", np->name,
 + (int)PTR_ERR(clk1));
 + return;
 + }
 + } else
 + clk1 = clk0;
 +
 + irq = irq_of_parse_and_map(np, 0);
 + if (irq <= 0)
 + return;
 +
 + of_property_read_u32(np, "arm,sp804-has-irq", &irq_num);
 + if (irq_num == 2)
 + tmr2_evt = true;
 +
 + __sp804_clockevents_init(base + (tmr2_evt ? TIMER_2_BASE : 0),
 +  irq, tmr2_evt ? clk1 : clk0, name);
 + __sp804_clocksource_and_sched_clock_init(base + (tmr2_evt ? 0 : TIMER_2_BASE),
 +  name, tmr2_evt ? clk0 : clk1, 1);

This just looks totally screwed to me.

A SP804 cell has two timers, and has one clock input and two clock
enable inputs.  The clock input is common to both timers.  The timers
only count on the rising edge of the clock input when the enable
input is high.  (the APB PCLK also matters too...)

Now, the clock enable inputs are controlled by the SP810 system
controller to achieve different clock rates for each.  So, we *can*
view an SP804 as having two clock inputs.

However, the two clock inputs do not depend on whether one or the
other has an IRQ or not.  Timer 1 is always clocked by TIMCLK &
TIMCLKEN1.  Timer 2 is always clocked by TIMCLK & TIMCLKEN2.

Using the logic above, the clocks depend on how the IRQs are wired
up... really?  Can you see from my description above why that is
screwed?  What bearing does the IRQ have on the wiring of the
clock inputs?

And, by doing this aren't you embedding into the DT description
something specific to the Linux implementation for this device -
namely the decision by you that the IRQs determine how the clocks
get used?

So... NAK.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 02/11] ARM: remove extra timer-sp control register clearing

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:02PM -0500, Rob Herring wrote:
 From: Rob Herring rob.herr...@calxeda.com
 
 The timer-sp initialization code clears the control register before
 initializing the timers, so every platform doing this is redundant.
 
 For unused timers, we should not care what state they are in.

NAK.  We do care what state they're in.  What if they have their interrupt
enable bit set, and IRQ is shared with the clock event timer?

No, this patch is just wrong.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 11/11] devtree: add binding documentation for sp804

2013-03-21 Thread Russell King - ARM Linux
On Wed, Mar 20, 2013 at 05:54:11PM -0500, Rob Herring wrote:
 +Optional properties:
 +- arm,sp804-has-irq = <#>: In the case of only 1 timer irq line connected, this
 + specifies if the irq connection is for timer 1 or timer 2. A value of 1
 + or 2 should be used.

This fails to mention that it has any bearing on the clocks.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 3/4 v2] net: mvmdio: enhance driver to support SMI error/done interrupts

2013-03-15 Thread Russell King - ARM Linux
On Thu, Mar 14, 2013 at 07:08:34PM +0100, Florian Fainelli wrote:
 + if (dev->err_interrupt == NO_IRQ) {
...
 + init_waitqueue_head(&dev->smi_busy_wait);
 +
 + dev->err_interrupt = platform_get_irq(pdev, 0);
 + if (dev->err_interrupt != -ENXIO) {
...
 + } else
 + dev->err_interrupt = NO_IRQ;


FYI, NO_IRQ is not supposed to be used anymore (we're supposed to be
removing it).  platform_get_irq() returns negative numbers for failure,
so why not test for < 0 in both the above tests, or maybe <= 0 (as
IRQ0 is also not supposed to be valid)?
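
i.e. roughly (a sketch; using 0 as the "no interrupt, poll instead" marker is
only illustrative):

        dev->err_interrupt = platform_get_irq(pdev, 0);
        if (dev->err_interrupt <= 0)
                dev->err_interrupt = 0; /* no usable IRQ: poll for completion */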
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 1/1] ARM: OMAP: gpmc: request CS address space for ethernet chips

2013-03-12 Thread Russell King - ARM Linux
On Sun, Mar 10, 2013 at 06:18:22PM +0100, Javier Martinez Canillas wrote:
 +static int gpmc_probe_ethernet_child(struct platform_device *pdev,
 +  struct device_node *child)
 +{
 + int ret, cs;
 + unsigned long base;
 + struct resource res;
 + struct platform_device *of_dev;
 +
 + if (of_property_read_u32(child, "reg", &cs) < 0) {
 + dev_err(&pdev->dev, "%s has no 'reg' property\n",
 + child->full_name);
 + return -ENODEV;
 + }
 +
 + if (of_address_to_resource(child, 0, &res)) {
 + dev_err(&pdev->dev, "%s has malformed 'reg' property\n",
 + child->full_name);
 + return -ENODEV;
 + }
 +
 + ret = gpmc_cs_request(cs, resource_size(&res), &base);
 + if (IS_ERR_VALUE(ret)) {

NAK.  ret < 0 is the correct way to test here.  Don't use IS_ERR_VALUE
unless you have a _very_ good reason to.  That's a good bit of advice
for everything in linux/err.h.  Don't use *anything* from there without
a damned good reason.
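
i.e. simply (sketch; the error message is illustrative):

        ret = gpmc_cs_request(cs, resource_size(&res), &base);
        if (ret < 0) {
                dev_err(&pdev->dev, "cannot request GPMC CS %d\n", cs);
                return ret;
        }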
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: omap: IORESOURCE_IRQ flags not set when defining a GPIO-IRQ from DT

2013-03-01 Thread Russell King - ARM Linux
On Fri, Mar 01, 2013 at 05:17:57PM +0100, Javier Martinez Canillas wrote:
  unsigned long irq_flags = SMC_IRQ_FLAGS;
  ...
 if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK)
  irq_flags = ires->flags & IRQF_TRIGGER_MASK;
 
  while smsc911x driver's probe function uses the flags from the
  resource unconditionally:
 
 irq_flags = irq_res->flags & IRQF_TRIGGER_MASK;
 
  So, at the end both will set irq_flags to whatever is on the
  IORESOURCE_IRQ struct resource flags member.
 
  Actually, that's not the case for smc91x. By default SMC_IRQ_FLAGS != -1
 (for omap) and so it will not set irq_flags to ires->flags &
 IRQF_TRIGGER_MASK. However, if I force irq_flags to be -1, then I see
  that irq_flags are to 0.

smc91x is complicated by the fact that it started off life before there
was any possibility to pass IRQ flags through resources.  So we ended
up with smc91x.h containing _lots_ of platform specific data, and the
driver could only be built for one platform.

I fixed that by sorting out this IRQ passing method, and changing smc91x
to support both the fixed configuration, and the dynamic-through-IRQflags
method.

There is no reason for any other driver to support the fixed method; that
would be a completely backwards step.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [RFC PATCH 7/9] thermal: tegra30: add tegra30 thermal driver

2013-02-19 Thread Russell King - ARM Linux
On Mon, Feb 18, 2013 at 07:30:29PM +0800, Wei Ni wrote:
 +static struct tegra_thermal_data * __devinit thermal_tegra_dt_parse_pdata(

__dev* no longer exists.

 + tdata = devm_kzalloc(&pdev->dev, sizeof(*tdata), GFP_KERNEL);
 + if (!tdata) {
 + dev_err(&pdev->dev, "Can't allocate platform data\n");
 + return NULL;
 + }
 + memset(tdata, 0, sizeof(*tdata));

Useless memset.  k*z*alloc already zeros the memory before returning.

 +static int tegra30_thermal_probe(struct platform_device *pdev)
 +{
 + struct tegra_thermal_data *pdata = pdev->dev.platform_data;

You read pdata here

 + struct thermal_zone *tz;
 + struct thermal_sensor *ts;
 + static struct thermal_cooling_device *cdev;
 + int ret;
 +
 + pdata = thermal_tegra_dt_parse_pdata(pdev);

and immediately overwrite it here.

 + if (!pdata) {
 + dev_err(&pdev->dev, "Get platform data failed.\n");
 + return -EINVAL;
 + }
 +
 + /* Create a thermal zone */
 + tz = create_thermal_zone("tz_tegra", NULL);
 + if (!tz) {
 + dev_err(&pdev->dev, "Create thermal_zone failed.\n");
 + return -EINVAL;
 + }
 +
 + pdata->tz = tz;

This isn't how you deal with driver data.  Set driver data against a
platform device using platform_set_drvdata(pdev, tz).

 +static int tegra30_thermal_remove(struct platform_device *pdev)
 +{
 + struct tegra_thermal_data *pdata = pdev->dev.platform_data;

and use platform_get_drvdata() here - and don't use pdata->tz.
struct thermal_zone *tz = platform_get_drvdata(pdev);
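
In other words (sketch):

        /* in probe, once the zone has been created */
        platform_set_drvdata(pdev, tz);

        /* in remove */
        struct thermal_zone *tz = platform_get_drvdata(pdev);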
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/4] dmaengine: dw_dmac: move to generic DMA binding

2013-02-16 Thread Russell King - ARM Linux
On Sat, Feb 16, 2013 at 10:07:39AM +, Arnd Bergmann wrote:
 On Saturday 16 February 2013, Viresh Kumar wrote:
  On 15 February 2013 23:51, Arnd Bergmann a...@arndb.de wrote:
   +static bool dw_dma_generic_filter(struct dma_chan *chan, void *param)
{
  
   +   dws-cfg_hi = 0x;
   +   dws-cfg_lo = 0x;
  
  s/0x/-1 ?
 
 It's an 'unsigned int'. While -1 would work here, I always find it a little
 odd to rely on that feature of the C language.

However, relying on an 'int' being 32-bits is also rather odd, and
probably much more dubious too.  If you want to set all bits in an
int, the portable way to do that is ~0.
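
i.e. (sketch):

        dws->cfg_hi = ~0;       /* all bits set, whatever the field width */
        dws->cfg_lo = ~0;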
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v2 8/8] reset: Add driver for gpio-controlled reset pins

2013-02-14 Thread Russell King - ARM Linux
On Wed, Feb 13, 2013 at 06:34:32PM +0100, Philipp Zabel wrote:
 Signed-off-by: Philipp Zabel p.za...@pengutronix.de

Just be aware that PXA has a gpio reset facility which is used for SoC
reset - see arch/arm/mach-pxa/reset.c

The use of gpio-reset would be ambiguous... or maybe PXA's usage could be
combined somehow with this?
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH RFC 1/7] platform: add a device node

2013-02-10 Thread Russell King - ARM Linux
On Sun, Feb 10, 2013 at 02:49:21AM +0100, Javier Martinez Canillas wrote:
 I knew this would be controversial and that's why I didn't mean it to be a 
 patch
 but a RFC :)
 
 The problem basically is that you have to associate the platform device with 
 its
 corresponding DT device node because it can be used in the driver probe 
 function.

When DT is being used, doesn't DT create the platform devices for you,
with the device node already set correctly?

Manually creating platform devices and adding DT nodes to it sounds like
the wrong thing to be doing.
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/2] dmaengine: OMAP: Register SDMA controller with Device Tree DMA driver

2013-02-09 Thread Russell King - ARM Linux
On Wed, Feb 06, 2013 at 03:03:16PM -0600, Jon Hunter wrote:
 @@ -673,7 +702,7 @@ static int omap_dma_init(void)
  {
  int rc = platform_driver_register(&omap_dma_driver);
  
 - if (rc == 0) {
 + if ((rc == 0) && (!of_have_populated_dt())) {

Looks good, the worst I can find is this over-use of parens...
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-09 Thread Russell King - ARM Linux
On Sat, Feb 09, 2013 at 09:35:53PM +0530, Sekhar Nori wrote:
 On 2/1/2013 11:52 PM, Matt Porter wrote:
 +   ret = of_address_to_resource(node, 1, &res);
 
 of_address_to_resource() needs linux/of_address.h
 
  +   if (IS_ERR_VALUE(ret))
 
 This needs linux/err.h

More importantly, is this the correct way to check for errors from
of_address_to_resource() ?  Grepping the source shows that either less
than 0 or non-zero are the commonly used conditions here.
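
For reference, of_address_to_resource() returns 0 on success and a negative
errno on failure, so the usual check is simply (sketch):

        ret = of_address_to_resource(node, 1, &res);
        if (ret)
                return ret;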
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 2/5] spi: pl022: use generic DMA slave configuration if possible

2013-02-08 Thread Russell King - ARM Linux
On Thu, Feb 07, 2013 at 10:15:48PM +0100, Arnd Bergmann wrote:
 On Thursday 07 February 2013 21:19:04 Linus Walleij wrote:
  On Thu, Feb 7, 2013 at 8:42 PM, Arnd Bergmann a...@arndb.de wrote:
   On Thursday 07 February 2013, Linus Walleij wrote:
  
   Actually I once read about a feature where the kernel provides
   a static page full of zeroes or something like this, that would be
   ideal to use in cases like this, then all of this dummy page
   allocation and freeing can be deleted.
  
   You mean empty_zero_page? That only works if this page is
   read-only from the perspective of the DMA controller, but
   then it would be a good fit, yes.
  
  That's actually how it's used.
  
  SPI is symmetric, and in the DMA case we're not poking
  data into the buffers from the CPU so the controller need
  something - anything - to stream to the block.
  
  If we can use that page we'll even save a few remaps.
 
 I'm slightly worried about the caching effects though. The
 idea of the empty-zero page is that all user processes get
 it when they read a page before they write to it, so the
 data in it can essentially always be cache-hot.
 
 If we do DMA from that page to a device what would be the
 overhead of flushing the (clean) cache lines?

If it's DMA _to_ a device, then we will only ever clean the lines prior to
a transfer, never invalidate them.  So that's not really a concern.  (There
better not be any dirty cache lines associated with the empty zero page
either.)
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH 06/14] ARM: pci: Keep pci_common_init() around after init

2013-02-06 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 09:41:48PM +0100, Thierry Reding wrote:
 On Wed, Jan 09, 2013 at 09:43:06PM +0100, Thierry Reding wrote:
  When using deferred driver probing, PCI host controller drivers may
  actually require this function after the init stage.
  
  Signed-off-by: Thierry Reding thierry.red...@avionic-design.de
  ---
   arch/arm/kernel/bios32.c | 6 +++---
   1 file changed, 3 insertions(+), 3 deletions(-)
 
 Russell,
 
 Can this patch and patch 7 (ARM: pci: Allow passing per-controller
 private data) of this series be applied for 3.9? Thomas uses them in his
 Marvell PCIe series as well and it would allow to reduce the complexity
 of the dependencies.

It'll need to go into the patch system in that case...
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 09:47:38PM +, Arnd Bergmann wrote:
 On Monday 04 February 2013, Linus Walleij wrote:
  So I think the above concerns are moot. The callback we can
  set on cookies is entirely optional, and it's even implemented by
  each DMA engine, and some may not even support it but require
  polling, and then it won't even be implemented by the driver.
 
 Just to ensure that everybody is talking about the same thing here:
 Is it just the callback that is optional, or also the interrupt
 coming from the hardware?

If everyone implements stuff correctly, both.  The callback most certainly
is optional as things stand.  The interrupt - that depends on the DMA
engine.

Some DMA engines you can't avoid it because you need to reprogram the
hardware with the next+1 transfer upon completion of an existing transfer.
Others may allow you to chain transfers in hardware.  That's all up to
how the DMA engine driver is implemented and how the hardware behaves.

Now, there's another problem here: that is, people abuse the API.  People
don't pass DMA_CTRL_ACK | DMA_PREP_INTERRUPT into their operations by
default.  People like typing '0'.

The intention of the DMA_PREP_INTERRUPT is significant here: it means
ask the hardware to send an interrupt upon completion of this transfer.

Because so many people like to type '0' instead in their DMA engine
clients, it means that this flag is utterly useless today - you have to
ignore it.  So there's _no_ way for client drivers to actually tell the
a DMA engine driver which _doesn't_ need to signal interrupts at the end
of every transfer not to do so.

So yes, the DMA engine API supports it.  Whether the _implementations_
themselves do is very much hit and miss (and in reality is much more
miss than hit.)
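
For reference, the flag in question is passed by the client when a descriptor
is prepared; the channel and scatterlist set-up is omitted in this sketch:

        desc = dmaengine_prep_slave_sg(chan, sgl, nents, DMA_DEV_TO_MEM,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);

rather than the bare 0 that so many clients pass today.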
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 04:54:45PM -0500, Cyril Chemparathy wrote:
 You're assuming that cookies complete in order.  That is not necessarily  
 true.

Under what circumstances is that not true?
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 04:47:05PM +, Mark Brown wrote:
 On Tue, Feb 05, 2013 at 05:21:48PM +0100, Linus Walleij wrote:
 
  For IRQ mode, use the completion callback to push each cookie
  to NAPI, and thus let the IRQ drive the traffic.
 
 The whole purpose of NAPI is to avoid taking interrupts for completion
 of transfers.  Anything that generates interrupts when NAPI is in
 polling mode is defeating the point.

Yes, but you're missing one very important point.  If your DMA engine
is decoupled from the network device, and the DMA engine relies upon
interrupts to queue further transfers to move data into RAM, then you
have two options:

1. Receive these interrupts so you can update the DMA engine for
   further data transfer.
2. Don't receive these interrupts, and cause the network device to
   error out with a FIFO overrun because its DMA requests have not
   been serviced. (which may raise an interrupt.)
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-05 Thread Russell King - ARM Linux
On Tue, Feb 05, 2013 at 04:30:45PM +0100, Linus Walleij wrote:
 On Mon, Feb 4, 2013 at 10:54 PM, Cyril Chemparathy cy...@ti.com wrote:
  On 02/04/2013 04:11 PM, Linus Walleij wrote:
 
  Cyril, just stack up the cookies and take a sweep over them to see
  which ones are baked when the NAPI poll comes in - problem
  solved.
 
  You're assuming that cookies complete in order.  That is not necessarily
  true.
 
 So put them on a wait list? Surely you will have a list of pending
 cookies and pick from the front of the queue if there isn't a hole on
 queue position 0.

Not quite.  The key is the cookie system DMA engine employs to indicate
when a cookie is complete.

Cookies between the issued sequence and completed sequence are defined
to be in progress, everything else is defined to be completed.

This means that if completed sequence is 1, and issued sequence is 5,
then cookies with values 2, 3, 4, 5 are in progress.  You can't mark
sequence 4 as being complete until 2 and 3 have completed.

If we need out-of-order completion, then that's a problem for the DMA
engine API, because you'd need to radically change the way completion
is marked.
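
This is exactly the window that the dma_async_is_complete() helper encodes
(a sketch of what the core does with the channel's two sequence numbers):

        enum dma_status status;

        /*
         * DMA_IN_PROGRESS for anything between the completed and issued
         * sequence numbers, DMA_SUCCESS for everything outside that window.
         */
        status = dma_async_is_complete(cookie, chan->completed_cookie,
                                       chan->cookie);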
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-04 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 05:41:53PM +0200, Felipe Balbi wrote:
 Hi,
 
 On Fri, Feb 01, 2013 at 09:30:03PM +, Russell King - ARM Linux wrote:
 I guess to make the MUSB side simpler we would need musb-dma-engine 
 glue
 to map dmaengine to the private MUSB API. Then we would have some
 starting point to also move inventra (and anybody else) to dmaengine
 API.

   Why? Inventra is a dedicated device's private DMA controller, why 
make
universal DMA driver for it?
   
   because it doesn't make sense to support multiple DMA APIs. We can check
   from MUSB's registers if it was configured with Inventra DMA support and
   based on that we can register MUSB's own DMA Engine to dmaengine API.
  
  Hang on.  This is one of the DMA implementations which is closely
  coupled with the USB and only the USB?  If it is...
  
  I thought this had been discussed _extensively_ before.  I thought the
  resolution on it was:
  1. It would not use the DMA engine API.
  2. It would not live in arch/arm.
  3. It would be placed nearby the USB driver it's associated with.
  
  (1) because we don't use APIs just for the hell of it - think.  Do we
  use the DMA engine API for PCI bus mastering ethernet controllers?  No.
  Do we use it for PCI bus mastering SCSI controllers?  No.  Because the
  DMA is integral to the rest of the device.
 
 that's not really a fair comparison, however. MUSB is used with several
 DMA engines.

I only mentioned it because it _was_ brought up as an argument against
using the DMA engine API in the previous discussions.  I'm just reminding
people what was discussed.

 Considering all of the above, it's far better to use DMA engine and get
 rid of all the mess.

Which is what both you and I have been saying for the last 3 or so years
on this subject...
___
devicetree-discuss mailing list
devicetree-discuss@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/devicetree-discuss


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-04 Thread Russell King - ARM Linux
On Mon, Feb 04, 2013 at 06:47:12PM +0200, Felipe Balbi wrote:
 Hi,
 
 On Mon, Feb 04, 2013 at 08:36:38PM +0300, Sergei Shtylyov wrote:
 In my eyes, getting rid of the mess doesn't justify breaking the rules 
  that
  Russell formulated above.
 
 MUSB is no PCI, there is no single, standard interface to the DMA
 engine (unlike Synopsys' dwc-usb3 and dwc-usb2, where we don't use DMA
 engine), every DMA engine comes with its own set of registers, its own
 programming model and so forth.

Err, before you get too wrapped up in that argument... if you think there's
anything like a common hardware interface for DMA on PCI, you'd be wrong.
There isn't.

What there is is a bus protocol for it.  How the device decides to perform
DMA is up to the device; there's no standard register definitions across
all devices.  Yes, some classes have a standard register set (like USB,
and ATA) but others are totally device specific (eg, network chips).

However, on PCI, the DMA support is always integrated with the rest of
the device itself.  That is not the case here.

So, bringing PCI up into this discussion is totally irrelevant.  As I've
already explained, this was brought up in the _previous_ discussion as
a reason why _not_ to use the DMA engine API for this.

So, can we please leave PCI totally out of this?  It didn't belong here
2-3 years ago, and it sure enough doesn't belong here now.  It's nothing
but pure distraction.  And the more this goes on, the more I can see,
yet again, the CPPI 4.1 going nowhere at all.

As I can see it, there's nothing more to talk about.  The issue has been
extensively talked to death.  What it needs now is not _more_ talk, it
needs someone to actually go and _do_ something useful with it.

I realise that may be difficult given the mess that TI must still be in
internally since the upheaval of OMAP, but it sounds like there's a
different group needing this stuff to work...


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 06:09:24AM +0400, Sergei Shtylyov wrote:
 Hello.

 On 02-02-2013 4:44, Russell King - ARM Linux wrote:

 On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
 good point, do you wanna send some patches ?

  I have already sent them countless times and even stuck CPPI 4.1 
 support (in
 arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
 remove the
 patch. :-(

 sticking into arch/arm/common/ wasn't a nice move. But then again, so
 wasn't asking for the patch to be removed :-s

 Err, patches don't get removed, they get moved to 'discarded'.

 Any chance to bring it back to life? :-)
 Although... drivers/usb/musb/cppi41.c would need to be somewhat
 reworked for at least AM35x and I don't have time. But that may change,
 of course.

 Right, I've just looked back at the various meeting minutes from December
 2010 when the CPPI stuff was discussed.  Yes, I archive these things and
 all email discussions for referencing in cases like this.

Thanks.

 Unfortunately, they do not contain any useful information other than the
 topic having been brought up.  At that point, the CPPI stuff was in
 mach-davinci, and I had suggested moving it into drivers/dma.

I don't remember that, probably was out of the loop again.

 The result of that was to say that it doesn't fit the DMA engine APIs.

I remember this as a discussion happening post me sending the patch to 
 the patch system and it being discarded...

 So someone came up with the idea of putting it in arch/arm/common - which

Probably was me. There was also an idea of putting it into 
 drivers/usb/musb/ -- which TI indeed followed in its Arago project. I 
 firmly denied that suggestion.

 I frankly ignored by email (how long have we been saying no drivers in
 arch/arm ?)

But there *are* drivers there! And look at edma.c which is about to be 
 moved there... Anyway, I haven't seen such warnings, probably was too 
 late in the game.

I've already objected about the header moving to some random place in
arch/arm/include.  Really, edma.c needs to find another home too - but
there's a difference here.  edma.c is already present under arch/arm.
CPPI is _not_.  CPPI is new code appearing under arch/arm (you can see
that for yourself by looking at the diffstat of 6305/1... it doesn't
move files, it adds new code.)

 Now, it would've been discussed in that meeting, but unfortunately no
 record exists of that.  What does follow that meeting is a discussion
 trail.  From what I can see there, it looks to me like the decision
 was taken to move it to the DMA engine API, and work on sorting out MUSB
 was going to commence.

 The last email in that says "I'll get to that soon"... and that is also
 the final email I have on this topic.  I guess if nothing has happened...
 Shrug, that's someone else's problem.

Well, as usual... :-(

 Anyway, the answer for putting it in arch/arm/common hasn't changed,
 and really, where we are now, post Linus having a moan about the size
 of arch/arm, that answer is even more concrete in the negative.  It's
 54K of code which should not be under arch/arm at all.

 Anyway, if you need to look at the patch, it's 6305/1.  Typing into the
 summary search box 'cppi' found it in one go.

Thanks, I remember this variant was under arch/arm/common/.
Now however, I see what happened to that variant in a somewhat different 
 light. Looks like it was entirely your decision to discard the patch, 
 without TI's request...

Firstly, it is *my* prerogative to say no to anything in arch/arm, and I
really don't have to give reasons for it if I choose to.

Secondly, it *was* discussed with TI, and the following thread of
discussion (threaded to the minutes email) shows that *something* was
going to happen _as a result of that meeting_ to address the problem of
it being under arch/arm.  And *therefore* it was discarded from the patch
system - because there was expectation that it was going to get fixed.

For christ sake, someone even agreed to do it.  Even a target was mentioned,
of 2.6.39.  That was mentioned on 7th December 2010.  And 6305/1 was
discarded on 8th December 2010.  Cause and effect.

And yes, *you* were not part of that discussion.  You work for Montavista
which contracts with TI to provide this support.  It is up to TI to pass
stuff like this on to their contractors.

There are two people on this thread CC list who were also involved or
CC'd on the mails from the thread in 2010...  Tony and Felipe.
Unfortunately, the person who agreed to do the work is no longer in the
land of the living.  Yes I know it's inconvenient for people to die
when they've still got lots of important work to do but that's what can
happen...


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 10:18:51AM +, Russell King - ARM Linux wrote:
 On Sat, Feb 02, 2013 at 06:09:24AM +0400, Sergei Shtylyov wrote:
  Hello.
 
  On 02-02-2013 4:44, Russell King - ARM Linux wrote:
 
  On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
  good point, do you wanna send some patches ?
 
   I have already sent them countless times and even stuck CPPI 4.1 
  support (in
  arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
  remove the
  patch. :-(
 
  sticking into arch/arm/common/ wasn't a nice move. But then again, so
  wasn't asking for the patch to be removed :-s
 
  Err, patches don't get removed, they get moved to 'discarded'.
 
  Any chance to bring it back to life? :-)
  Although... drivers/usb/musb/cppi41.c would need to be somewhat
  reworked for at least AM35x and I don't have time. But that may change,
  of course.
 
  Right, I've just looked back at the various meeting minutes from December
  2010 when the CPPI stuff was discussed.  Yes, I archive these things and
  all email discussions for referencing in cases like this.
 
 Thanks.
 
  Unfortunately, they do not contain any useful information other than the
  topic having been brought up.  At that point, the CPPI stuff was in
  mach-davinci, and I had suggested moving it into drivers/dma.
 
 I don't remember that, probably was out of the loop again.

Here you go - this goes back even _further_ - November 2009 - on the
mailing list.  The entire thread:

http://lists.arm.linux.org.uk/lurker/thread/20091102.105759.a54cf3f5.en.html

And selected emails from it:

http://lists.arm.linux.org.uk/lurker/message/20091102.103706.38c029b5.en.html
On Mon, Nov 02, 2009 at 10:37:06AM +, I wrote:
| On Mon, Nov 02, 2009 at 04:27:59PM +0530, Gupta, Ajay Kumar wrote:
|  Another option is to create arch/arm/ti-common to place all TI platform's
|  common software, such as CPPI4.1 used both in DA8xx and AM3517.
| 
| No thanks.  I really don't see why we should allow TI to have yet more
| directories scattered throughout the tree that are out of place with
| existing conventions.
| 
| And what is this CPPI thing anyway?
| 
| http://acronyms.thefreedictionary.com/CPPI
| 
| Communications Port Programming Interface seems to be about the best
| applicable one from that list!
| 
| If it's a USB DMA device (from the patches I can find, that seems to be
| the case) then why can't it live in drivers/usb or drivers/dma ?

And again:

http://lists.arm.linux.org.uk/lurker/message/20091102.115458.61cde450.en.html
On Mon, Nov 02, 2009 at 11:54:58AM +, I wrote:
| On Mon, Nov 02, 2009 at 04:27:59PM +0530, Gupta, Ajay Kumar wrote:
|  CPPI4.1 DMA engine can be used either by USB or by Ethernet interface though
|  currently only USB is using it but in future even Ethernet devices may use 
it.
| 
| drivers/dma does seem to be the right place for this.

http://lists.arm.linux.org.uk/lurker/message/20091102.110217.adec3ca7.en.html
Even Felipe Balbi said so:
| you might want to provide support for it via drivers/dma and for the
| musb stuff, you just add the wrappers musb uses. See how tusb6010_omap.c
| uses OMAP's system dma which is also used by any other driver which
| requests a dma channel.

And it seems that _YOU_ did get the message - see your quoted text in:
http://lists.arm.linux.org.uk/lurker/message/20091230.132240.ecd56b3d.en.html
 We're currently having it there but the matter is it should be shared
 between different platforms, so arch/arm/common/ seems like the right
 place (which Russell didn't like, suggesting ill suited for that
 drivers/dma/ instead).

See - you acknowledge here that I don't like it.  So you _KNOW_ my views
on it in December 2009, contary to what you're saying in this thread.

Yet, you persisted with putting it in arch/arm/common:

http://lists.arm.linux.org.uk/lurker/message/20100515.181453.472c7c10.en.html
| Changes since the previous version:
| - moved everything from arch/arm/mach-davinci/ to arch/arm/common/;
| - s/CONFIG_CPPI41/CONFIG_TI_CPPI41/, made that option invisible;
| - added #include linux/slab.h for kzalloc();
| - switched alloc_queue() and cppi41_queue_free() to using bit operations;
| - replaced 'static' linking_ram[] by local variable in 
cppi41_queue_mgr_init();
| - fixed pr_debug() in cppi41_dma_ctrlr_init() to print the real queue manager 
#.

So, see, I had already objected to it being in arch/arm well before
you stuck your patch into the patch system.  And somehow you think
that ignoring my previous comments and doing it anyway will result in
progress?

So, let's recap.  The timeline behind this is:

+ 2 Nov 2009: Question asked about putting it in arch/arm/ti-common
  + I responded saying a clear no to that, suggesting other locations
all of which were outside arch/arm.
  + I responded again saying an hour or two later saying the same thing.
  + Felipe Balbi agreed with drivers/dma.
+ 15 May 2010: v5 posted with it in arch/arm/common
+ 06 Aug

Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 01:59:59PM -0500, Matt Porter wrote:
 On Fri, Feb 01, 2013 at 07:52:46PM +, Sergei Shtylyov wrote:
  Hello.
  
  On 02/01/2013 09:49 PM, Matt Porter wrote:
  
   Move mach-davinci/dma.c to common/edma.c so it can be used
   by OMAP (specifically AM33xx) as well.
  
   I think this should rather go to drivers/dma/?
  
   No, this is the private EDMA API. It's the analogous thing to
   the private OMAP dma API that is in plat-omap/dma.c. The actual
   dmaengine driver is in drivers/dma/edma.c as a wrapper around
   this...same way OMAP DMA engine conversion is being done.
  
Keeps me wondering why we couldn't have the same with CPPI 4.1 when I 
  proposed
  that, instead of waiting indefinitely for TI to convert it to drivers/dma/
  directly. We could have working MUSB DMA on OMAP-L1x/Sitara all this 
  time... Sigh.
 
 That is a shame. Yeah, I've pointed out that I was doing this exactly
 the same way as was acceptable for the OMAP DMA conversion since it was
 in RFC. The reasons are sound since in both cases, we have many drivers
 to convert that need to continue using the private DMA APIs.

It's very simple.

The OMAP DMA stuff in plat-omap/dma.c has existed for a long time, well
before things like the DMA engine API came into being.  Same with a lot
of the DMA code in arch/arm.  It is therefore in the privileged position
of having grandfather rights when it comes to existing in mainline.

However, today what we're finding is that these private per-platform APIs
are sub-optimal; they prevent the peripheral drivers in the drivers/
subtree from being re-used on other SoCs.  So, even through they pre-exist
the DMA engine support, they're being gradually converted to the API.

Moreover, we have _far_ too much code in arch/arm.  It's something that
has attracted complaints from Linus.  It's something that's got us
close to having updates to arch/arm refused during merge windows.  It's
got us close to having parts of arch/arm deleted from the mainline kernel.

The long term plan is _still_ to remove arch/arm/*omap*/dma.c and move
the whole thing to drivers/dma.  That can only happen when everything is
converted (or the drivers which aren't converted have been retired and
deleted) - and provided that TI decide to continue funding that work.
That's still uncertain since the whole TI OMAP thing imploded two months
ago.

Now, CPPI is brand new code to arch/arm - always has been.  It post-dates
the DMA engine API.  And it's been said many times about moving it to
drivers/dma.  The problem is Sergei doesn't want to do it - he's anti the
DMA engine API for whatever reasons he can dream up.  He professes no
knowledge of my dislike for having it in arch/arm, yet there's emails
from years back showing that he knew about it.  TI knows about it; Ajay
knows about it.  Yet... well... history speaks volumes about this.

Now, there may be a new problem with CPPI.  That being we're now requiring
DT support.  _Had_ this been done prior to the push for DT, then it would
not have been a concern, because then the problem becomes one for DT.
But now that OMAP is being converted to DT, and DT is The Way To Go now,
that's an additional problem that needs to be grappled with - or it may
make things easier.

However, as I've said now many times: CPPI is _not_ going in arch/arm.
Period.  That makes it not my problem.  Period.

I really have nothing further to say on CPPI; everything that can be said
has already been said.  If there's still pressure to have it in arch/arm,
I will _NOT_ respond to it, because there's nothing to be said about it
which hasn't been said before.  The only reason it's still being pressed
for is a total reluctance to do anything about it being in arch/arm.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 10:41:08AM -0800, Tony Lindgren wrote:
 * Matt Porter mpor...@ti.com [130201 10:25]:
  Move mach-davinci/dma.c to common/edma.c so it can be used
  by OMAP (specifically AM33xx) as well.
 
 I think this should rather go to drivers/dma/?

Yes, it should, but just like OMAP, there's a conversion effort that needs
to be gone through.  It has one point - and only one point - which allows
its continued existence under arch/arm, and that is it already exists
there.

If it was new code, the answer would be a definite NACK, but it isn't.
It's pre-existing code which is already in mainline.  It's merely being
moved.

Another plus point for it is that there does seem to be a DMA engine
driver for it, so hopefully we'll see it killed off in arch/arm soon.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-02 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 08:27:42PM +0400, Sergei Shtylyov wrote:
 There are two people on this thread CC list who were also involved or
 CC'd on the mails from the thread in 2010...  Tony and Felipe.
 Unfortunately, the person who agreed to do the work is no longer in the
 land of the living.  Yes I know it's inconvenient for people to die
 when they've still got lots of important work to do but that's what can
 happen...

Hm... wasn't it David Brownell? He's the only person who I know has 
 died recently who has dealt with DaVinci, MUSB and the releated stuff.

Actually, it wasn't David who was going to do it - that's where the email
thread gets messy because the mailer David was using makes no distinction
in text format between what bits of text make up the original email being
replied to, and which bits of text are written by David.

It might have been Felipe; there appears to be an email from Felipe saying
that as the immediate parent to David's email.  But that's not really the
point here.  The point is that _someone_ agreed to put the work in, and
_that_ agreement is what caused the patch to be discarded.

And, as I've already explained, you brought up the subject of it being
discarded shortly after, and it got discussed there _again_, and the
same things were said _again_ by at least two people about it being in
drivers/dma.

But anyway, that's all past history.  What was said back then about it
being elsewhere in the tree is just as relevant today as it was back
then.  The only difference is that because that message wasn't received,
it's now two years later with no progress on that.  And that's got
*nothing* what so ever to do with me.

I know people like to blame me just because I'm apparently the focus of
the blame culture, but really this is getting beyond a joke.

So, I want an apology from you for your insistence that I'm to blame
for this.  Moreover, _I_ want to know what is going to happen in the
future with this so that I don't end up being blamed anymore for the
lack of progress on this issue.


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-01 Thread Russell King - ARM Linux
On Fri, Feb 01, 2013 at 10:56:00PM +0200, Felipe Balbi wrote:
 hi,
 
 On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
   good point, do you wanna send some patches ?
  
 I have already sent them countless times and even stuck CPPI 4.1 support 
  (in
  arch/arm/common/cppi41.c) in Russell's patch system. TI requested to remove 
  the
  patch. :-(
 
 sticking into arch/arm/common/ wasn't a nice move. But then again, so
 wasn't asking for the patch to be removed :-s

Err, patches don't get removed, they get moved to 'discarded'.

   I guess to make the MUSB side simpler we would need musb-dma-engine glue
   to map dmaengine to the private MUSB API. Then we would have some
   starting point to also move inventra (and anybody else) to dmaengine
   API.
  
 Why? Inventra is a dedicated device's private DMA controller, why make
  a universal DMA driver for it?
 
 because it doesn't make sense to support multiple DMA APIs. We can check
 from MUSB's registers if it was configured with Inventra DMA support and
 based on that we can register MUSB's own DMA Engine to dmaengine API.

Hang on.  This is one of the DMA implementations which is closely
coupled with the USB and only the USB?  If it is...

I thought this had been discussed _extensively_ before.  I thought the
resolution on it was:
1. It would not use the DMA engine API.
2. It would not live in arch/arm.
3. It would be placed nearby the USB driver it's associated with.

(1) because we don't use APIs just for the hell of it - think.  Do we
use the DMA engine API for PCI bus mastering ethernet controllers?  No.
Do we use it for PCI bus mastering SCSI controllers?  No.  Because the
DMA is integral to the rest of the device.

The DMA engine API only makes sense if the DMA engine is a shared
system resource.
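
(For context, a shared-resource DMA engine is driven through the dmaengine
slave API rather than by device-private register pokes.  A minimal,
illustrative client sketch, using the then-new dma_request_slave_channel();
the "tx" channel name and the function are invented, and error handling is
simplified:)

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

static int example_start_tx(struct device *dev, dma_addr_t buf, size_t len)
{
    struct dma_chan *chan;
    struct dma_async_tx_descriptor *desc;

    /* "tx" is whatever name the platform data or DT wired up */
    chan = dma_request_slave_channel(dev, "tx");
    if (!chan)
        return -ENODEV;

    desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                       DMA_PREP_INTERRUPT);
    if (!desc) {
        dma_release_channel(chan);
        return -ENOMEM;
    }

    dmaengine_submit(desc);
    dma_async_issue_pending(chan);
    return 0;
}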


Re: [PATCH v7 01/10] ARM: davinci: move private EDMA API to arm/common

2013-02-01 Thread Russell King - ARM Linux
On Sat, Feb 02, 2013 at 04:07:59AM +0400, Sergei Shtylyov wrote:
 Hello.

 On 02-02-2013 1:30, Russell King - ARM Linux wrote:

 On Fri, Feb 01, 2013 at 11:49:11PM +0300, Sergei Shtylyov wrote:
 good point, do you wanna send some patches ?

 I have already sent them countless times and even stuck CPPI 4.1 
 support (in
 arch/arm/common/cppi41.c) in Russell's patch system. TI requested to 
 remove the
 patch. :-(

 sticking into arch/arm/common/ wasn't a nice move. But then again, so
 wasn't asking for the patch to be removed :-s

 Err, patches don't get removed, they get moved to 'discarded'.

Any chance to bring it back to life? :-)
Although... drivers/usb/musb/cppi41.c would need to be somewhat 
 reworked for at least AM35x and I don't have time. But that may change, 
 of course.

Right, I've just looked back at the various meeting minutes from December
2010 when the CPPI stuff was discussed.  Yes, I archive these things and
all email discussions for referencing in cases like this.

Unfortunately, they do not contain any useful information other than the
topic having been brought up.  At that point, the CPPI stuff was in
mach-davinci, and I had suggested moving it into drivers/dma.

The result of that was to say that it doesn't fit the DMA engine APIs.
So someone came up with the idea of putting it in arch/arm/common - which
I frankly ignored by email (how long have we been saying no drivers in
arch/arm ?)

Now, it would've been discussed in that meeting, but unfortunately no
record exists of that.  What does follow that meeting is a discussion
trail.  From what I can see there, it looks to me like the decision
was taken to move it to the DMA engine API, and work on sorting out MUSB
was going to commence.

The last email in that says "I'll get to that soon"... and that is also
the final email I have on this topic.  I guess if nothing has happened...
Shrug, that's someone else's problem.

Anyway, the answer for putting it in arch/arm/common hasn't changed,
and really, where we are now, post Linus having a moan about the size
of arch/arm, that answer is even more concrete in the negative.  It's
54K of code which should not be under arch/arm at all.

Anyway, if you need to look at the patch, it's 6305/1.  Typing into the
summary search box 'cppi' found it in one go.


Re: [PATCH 1/5] dmaengine: dw_dmac: move to generic DMA binding

2013-01-29 Thread Russell King - ARM Linux
On Tue, Jan 29, 2013 at 10:50:23AM +, Arnd Bergmann wrote:
 (putting back the Cc list, I assumed you dropped them accidentally)

That'll be why I don't have a copy of Andy's email to reply to.

 On Tuesday 29 January 2013, Andy Shevchenko wrote:
  On Mon, 2013-01-28 at 21:58 +, Arnd Bergmann wrote: 
   - if ((last_dw == dw) && (last_bus_id == param))
   + /* both the driver and the device must match */
   + if (chan->device->dev->driver != dw_driver.driver)
  
  Could we somehow pass the .driver to the generic filter function (via
  *_dma_controller_register() ? ) and do this to each DMA driver?

How, and what driver gets passed?

 My hope is still that we can avoid using filter functions entirely
 when we use xlate() logic, and instead just go through the channels
 of the dma engine we know we are looking at.

Has anyone yet determined whether any of these new DMA engine slave APIs
are usable for implementations which have a separate MUX between the
DMA engine and the DMA peripheral?
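
(For readers following along, the sort of filter-function matching being
patched above looks roughly like the sketch below; the argument structure
and all names are invented for illustration only.)

#include <linux/device.h>
#include <linux/dmaengine.h>

struct example_filter_args {
    struct device_driver *drv;      /* the DMA controller driver we expect */
    unsigned int request_line;      /* slave/request id taken from DT args */
};

/* dma_filter_fn: return true to claim the candidate channel */
static bool example_dma_filter(struct dma_chan *chan, void *param)
{
    struct example_filter_args *fargs = param;

    /* both the driver and the device must match, as in the quoted patch */
    if (chan->device->dev->driver != fargs->drv)
        return false;

    /* a real filter would also stash fargs->request_line in the channel */
    return true;
}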


Re: [PATCH 1/5] dmaengine: dw_dmac: move to generic DMA binding

2013-01-29 Thread Russell King - ARM Linux
On Tue, Jan 29, 2013 at 01:44:10PM +, Arnd Bergmann wrote:
 Can you give an example for this? We were careful to make sure it
 works with platforms that connect a slave to multiple dma engines,
 out of which any could be used for a given transfer. In the device
 tree binding, you specify all possible controllers and give the
 alternatives the same name, for example:
 
   	serial@1000 {
   		compatible = "arm,pl011", "arm,primecell";
   		dmas = <&dwdma0 7 0>, <&dwdma0 8 1>, <&pl330 17>, <&pl330 15>;
   		dma-names = "rx", "tx", "rx", "tx";
   		...
   	};

No, that's not what I mean.  I mean the situation we find on Versatile
platforms:

8 3   3
PL080 DMA --/--+--/-- FPGA Mux --/-- {bunch of off-CPU peripherals}
   |  5
   `--/-- {On-CPU peripherals}

This is something that I've been raising _every time_ I've been involved
with DMA engine discussions when it comes to the matching stuff, so this
is nothing new, and it's not unknown about.

What is different this time around is that I've been purposely omitted
from the discussions (like what seems to be happening more and more - I
notice that I'm no longer copied on PL011 patches despite being the
driver author, or GIC patches, or VIC patches) so this stuff doesn't
get properly considered.

But it doesn't matter anymore; I'm soo out of the loop on stuff like DT
and the like that my input would be more of a hinderence now.


Re: [PATCH 1/5] dmaengine: dw_dmac: move to generic DMA binding

2013-01-29 Thread Russell King - ARM Linux
On Tue, Jan 29, 2013 at 03:45:40PM +0200, Andy Shevchenko wrote:
 On Tue, 2013-01-29 at 13:31 +, Arnd Bergmann wrote: 
  On Tuesday 29 January 2013, Viresh Kumar wrote:
   You can still keep fargs as is and just fill them as:
   
   	fargs.cfg_lo = 0;
    
    	if (DMA_TO_DEV)
   		// dest is periph
   		fargs.cfg_hi = be32_to_cpup(dma_spec->args + 0) << 11;
    	else if (DEV_TO_DMA)
   		// src is periph
   		fargs.cfg_hi = be32_to_cpup(dma_spec->args + 0) << 7;
   
   The field size is 4 bits.
  
  Ah, good. So I guess the dma-requests property should actually
  be 16 then.
  
  Does this mean that an implicit zero request line means memory?
 
 No, it doesn't.
 When dma is doing mem2mem transfers the request line field is ignored by
 the hw.

Memory to memory transfers are dealt with using a totally different API
to the slave API.  Look at the rest of the DMA engine API to see how it's
used - any channel is selected with a DMA_MEMCPY capability.

(IMHO, the MEM2MEM transfer type against the slave API should never have
been permitted.)
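
(For comparison, a pure memcpy channel is obtained by capability rather
than through the slave/request-line machinery; a minimal sketch, with an
invented function name:)

#include <linux/dmaengine.h>

static struct dma_chan *example_get_memcpy_chan(void)
{
    dma_cap_mask_t mask;

    dma_cap_zero(mask);
    dma_cap_set(DMA_MEMCPY, mask);

    /* any channel advertising DMA_MEMCPY will do; no filter needed */
    return dma_request_channel(mask, NULL, NULL);
}

Transfers on such a channel are then prepared with dmaengine_prep_dma_memcpy()
rather than with the slave prep calls.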


Re: [PATCH 1/5] dmaengine: dw_dmac: move to generic DMA binding

2013-01-29 Thread Russell King - ARM Linux
On Tue, Jan 29, 2013 at 02:55:49PM +, Arnd Bergmann wrote:
 On Tuesday 29 January 2013, Russell King - ARM Linux wrote:
  No, that's not what I mean.  I mean the situation we find on Versatile
  platforms:
  
  8 3   3
  PL080 DMA --/--+--/-- FPGA Mux --/-- {bunch of off-CPU peripherals}
 |  5
 `--/-- {On-CPU peripherals}
  
  This is something that I've been raising _every time_ I've been involved
  with DMA engine discussions when it comes to the matching stuff, so this
  is nothing new, and it's not unknown about.
 
 Ok, I see. I have not actually been involved with the DMA engine API
 much, so for me it's the first time /I/ see this being explained.
 
 From the DT binding perspective, doing this should be no problem. I guess
 you would model the MUX as a trivial dma engine device that forwards to
 another one, just like we do for nested interrupt controllers:
 
  	pl080: dma-engine@1000 {
  		compatible = "arm,pl080", "arm,primecell";
  		reg = <0x1000 0x1000>;
  		dma-requests = <8>;
  		dma-channels = <4>;
  		#dma-cells = <2>;
  	};
 
  	fpga {
  		mux: dma-mux@f0008000 {
  			reg = <0xf0008000 100>;
  			compatible = "arm,versatile-fpga-dmamux";
  			dma-requests = <7>;
  			dma-channels = <3>;
  			#dma-cells = <1>;
  			dmas = <&pl080 5 0>, <&pl080 6 0>, <&pl080 7 0>;
  			dma-names = "mux5", "mux6", "mux7";
  		};
  		...
 
  		some-device@f000a000 {
  			compatible = "foo,using-muxed-dma";
  			dmas = <&mux 3>, <&mux 4>;
  			dma-names = "rx", "tx";
  		};
  	};
 
 The driver for foo,using-muxed-dma just requests a slave channel for its
 lines, which ends up calling the xlate function of the driver for the mux.
 That driver still needs to get written, of course, but it should be able
 to recursively call dma_request_slave_channel on the pl080 device.
 The arm,versatile-fpga-dmamux driver would not be registered to the dmaengine
 layer, but it would register as an of_dma_controller.

That's a good way to represent it but it fails in a very big way:
You're stuffing N peripherals down to 3 request lines to the DMA
engine, and you may want more than 3 of those peripherals to be
making use of the DMA engine at any one time.

Before anyone says you shouldn't be doing this consider this:
your typical DMA slave engine already has this structure:

N DMA channels --- M DMA requests --- M peripherals

where N < M.  In other words, there is _already_ a MUX between the
peripherals and the DMA engine channels themselves (what do you think
the request index which you have to program into DMA channel control
registers is doing...)

We support this external mux today in the PL080 driver - and we do that
by having the PL080 driver do the scheduling of virtual DMA channels on
the actual hardware itself.  The PL080 driver has knowledge that there
may be some sort of additional muxing layer between it and the
peripheral.

As the APIs stand today, you just can't do this without having the
DMA engine driver itself intimately involved because a layer above
doesn't really have much clue as to what's going on, and the DMA
engine stuff itself doesn't layer particularly well.


Re: [PATCH 1/5] dmaengine: dw_dmac: move to generic DMA binding

2013-01-29 Thread Russell King - ARM Linux
On Tue, Jan 29, 2013 at 04:36:38PM +, Arnd Bergmann wrote:
 If the pl080 driver already has code for the mux in it, then it should
 handle both of_dma_controller instances in my example. It would
 not change anything regarding the binding, which just describes the
 way that the hardware is connected. I have not looked at the implementation
 of the pl080 driver, but I'd assume we could get away with just having
 two separate xlate() functions.

Well, how it all works in the PL08x driver at present is:

- the platform code passes into the PL08x driver a description of the
  virtual channels, eg:

	[1] = {
		/* Muxed on channel 0-3 */
		.bus_id = "aacirx",
		.min_signal = 0,
		.max_signal = 2,
		.muxval = 0x01,
		.periph_buses = PL08X_AHB1 | PL08X_AHB2,
	},

- the virtual channels include:
  - the minimum and maximum index of the DMA request signals into the
PL08x that can be used with this peripheral.
  - the register value for the external mux, if any.
  - other PL08x specific data to do with this DMA peripheral

- when a virtual channel is requested by a DMA client, it claims just
  the virtual channel.  No actual hardware resources are assigned,
  and no mappings are setup.

- when a transfer is prepared on a virtual channel, part of the
  preparation is to locate the request signal to be used - and
  platform code is requested to supply that from the description
  associated with the channel (such as the above fragment.)

- the versatile PL08x code looks at min_signal/max_signal, and walks
  this space looking for an unassigned request signal.  If there is
  an external mux associated with this request signal, the mux is
  appropriately programmed.  If a request signal is currently assigned
  the next request signal will be tried until there are no further
  possibilities, when failure occurs.  In that case, the prepare
  function also fails.

- the PL08x driver then knows which request signal is associated with
  the peripheral, and sets up the descriptors to be dependent on this
  request signal.  This mapping must not change until the transfer
  has completed.

- when the descriptor is completed - and freed, the muxing is released
  and the DMA request signal becomes available for other users to make
  use of.

Practically, what this means is that:

(a) we've ensured that all drivers using PL08x do _not_ expect that
descriptors can always be prepared; they must continue to work correctly
when the prepare function returns NULL.

(b) several devices end up sharing the first three request signals
and they're used on an opportunistic basis.  Theoretically, if you have
one of these boards where AACI DMA works, you can be using one of these
DMA requests for audio playback, another for record, and the remaining
one can be used on an opportunistic basis between the second MMCI
interface (should that also work - which I've proven is an impossibility!)
and the 3rd UART... or the USB if there was a driver for that device.
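
(The min_signal/max_signal walk described above boils down to something
like the following sketch; all names here are invented and the real PL08x
code differs in detail.)

#include <linux/bitops.h>
#include <linux/errno.h>

struct example_signal_pool {
    unsigned long busy;             /* bitmap of request signals in use */
};

static void example_program_fpga_mux(unsigned int sig, unsigned int muxval)
{
    /* invented helper: write muxval into the FPGA mux register for sig */
}

static int example_get_signal(struct example_signal_pool *pool,
                              unsigned int min_signal, unsigned int max_signal,
                              unsigned int muxval)
{
    unsigned int sig;

    for (sig = min_signal; sig <= max_signal; sig++) {
        if (test_and_set_bit(sig, &pool->busy))
            continue;               /* already assigned, try the next line */

        if (muxval)                 /* external mux sits on this line */
            example_program_fpga_mux(sig, muxval);

        return sig;                 /* held until the descriptor completes */
    }

    return -EBUSY;                  /* prepare must fail; clients must cope */
}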


Re: arm: Kernel failures when several memory banks are used with starting address above 128MB

2013-01-23 Thread Russell King - ARM Linux
On Tue, Jan 22, 2013 at 08:18:13PM +0100, Michal Simek wrote:
 I have a question regarding to the case where DTS specify one memory bank
 for example 0x0 0x4000 with CONFIG_ARM_PATCH_PHYS_VIRT=y
 where the kernel can be loaded at a 16MB boundary.

That's where you're going wrong.  We assume that the kernel is loaded
within the 16MB of memory _always_.  Can't get around this on a
multiplatform kernel.


Re: [PATCH 3/6] arm: kconfig: don't select TWD with local timer for Armada 370/XP

2013-01-22 Thread Russell King - ARM Linux
On Mon, Jan 21, 2013 at 08:44:41PM +, Arnd Bergmann wrote:
 Documentation is generally considered a good thing, but few people
 can be bothered to write it, and few of the other people that should
 read it actually do.

Actually _that_ is just one of the problems with documentation.  The
whole thing is full of problems:

1. People don't write documentation.

2. People don't bother to update correct documentation when the code
   changes in a way that the documentation should be updated for.

3. People don't bother to read documentation which is provided.

(2) is about as bad as it gets - because the small percentage of people
who do read the documentation rather than the code now are led down
paths which are no longer true.

And (2) educates people to behave like (3).  And (3) educates people to
behave like (1) - why should they bother if they're just going to have
to waste time in an email explaining it time and time again (even when
documentation does exist.)


Re: [PATCH 3/6] arm: kconfig: don't select TWD with local timer for Armada 370/XP

2013-01-22 Thread Russell King - ARM Linux
On Tue, Jan 22, 2013 at 03:57:02PM +, Arnd Bergmann wrote:
 On Monday 21 January 2013, Gregory CLEMENT wrote:
  I don't see a strong reason to not enable it if we don't use it. My concern
   was that I don't need it so I didn't want to include it and generate extra
  code for nothing. Then just after having sent this patch set, I received 
  your
  patch set about build regression in 3.8 and especially the part about
  CONFIG_MULTIPLATFORM made me realized that it could be a problem.
 
 Ok.
 
   Maybe it can be written as
   
    config LOCAL_TIMERS
  	bool "Use local timer interrupts"
  	depends on SMP
  	default y
    
    config HAVE_ARM_TWD
  	depends on LOCAL_TIMERS
  	default ARCH_MULTIPLATFORM || (!ARCH_MSM_SCORPIONMP && !EXYNOS4_MCT)
   
   So in this case why not write something like this:
   	default ARCH_MULTIPLATFORM || (!ARCH_MSM_SCORPIONMP && !EXYNOS4_MCT &&
   		!ARMADA_370_XP_TIMER)
 
 That does not change anything, because ARMADA_370_XP_TIMER is only ever 
 enabled
 when ARCH_MULTIPLATFORM is enabled as well.
 
 default y
  I am not a kconfig expert, but won't this line set HAVE_ARM_TWD to 'y' 
  whatever
  the result of the previous line?
 
 Yes, that was a mistake on my side.

Sigh.  No.  Wrong.

config HAVE_ARM_TWD
	depends on LOCAL_TIMERS
	default ARCH_MULTIPLATFORM || (!ARCH_MSM_SCORPIONMP && !EXYNOS4_MCT && !ARMADA_370_XP_TIMER)
	default y

This takes the value of the first enabled default.  The first enabled
default is the first default (it's unconditional).  So, the default y
will never be used.

Given the above, it's far from clear what the actual behaviour being
asked for is - it looks totally and utterly screwed to me - and the
wrong thing to be doing.

If the desire is to have it enabled if ARCH_MULTIPLATFORM is set, then
it's easy, and requires just a _single_ line addition:

 config LOCAL_TIMERS
	bool "Use local timer interrupts"
	depends on SMP
	default y
	select HAVE_ARM_TWD if (!ARCH_MSM_SCORPIONMP && !EXYNOS4_MCT)
+	select HAVE_ARM_TWD if ARCH_MULTIPLATFORM


Re: [PATCH 2/6] clocksource: time-armada-370-xp: add local timer support

2013-01-21 Thread Russell King - ARM Linux
On Mon, Jan 21, 2013 at 06:53:58PM +0100, Gregory CLEMENT wrote:
 +struct clock_event_device __percpu **percpu_armada_370_xp_evt;
 +

Have you run this through checkpatch?  Should the above be static?


Re: [PATCH RFC 2/2] Improve bios32 support for DT PCI host bridge controllers

2013-01-18 Thread Russell King - ARM Linux
On Fri, Jan 18, 2013 at 11:40:19AM +, Andrew Murray wrote:
  static void __init pcibios_init_hw(struct hw_pci *hw, struct list_head *head)
  {
   struct pci_sys_data *sys = NULL;
 + static int busnr;
   int ret;
 - int nr, busnr;
 -
 - for (nr = busnr = 0; nr < hw->nr_controllers; nr++) {
 + int nr;
 + for (nr = 0; nr < hw->nr_controllers; nr++) {

Please try to retain the original formatting.  There should be a blank
line between the "int nr;" declaration and the "for" loop.
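
(That is, with the blank line kept, the top of the function would look
roughly like this; fragment only, mirroring the quoted hunk:)

	struct pci_sys_data *sys = NULL;
	static int busnr;
	int ret;
	int nr;

	for (nr = 0; nr < hw->nr_controllers; nr++) {
		...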


Re: [PATCH 05/14] lib: Add I/O map cache implementation

2013-01-16 Thread Russell King - ARM Linux
On Wed, Jan 16, 2013 at 11:18:22AM +0100, Thierry Reding wrote:
   err = ioremap_page_range(virt, virt + SZ_64K - 1, phys,

Why -1 here?
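
(For context: the end address taken by ioremap_page_range() appears to be
exclusive, i.e. it maps [addr, end), so a 64K window would normally be
mapped without the -1.  A sketch, with the function name invented and prot
assumed to be supplied by the caller:)

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/sizes.h>

static int example_map_64k(unsigned long virt, phys_addr_t phys, pgprot_t prot)
{
    /* end is exclusive: this maps exactly [virt, virt + SZ_64K) */
    return ioremap_page_range(virt, virt + SZ_64K, phys, prot);
}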


Re: [PATCH 2/2] ARM: tegra: Use DT /cpu node to detect number of CPU core

2013-01-14 Thread Russell King - ARM Linux
On Mon, Jan 14, 2013 at 09:52:50AM +0200, Hiroshi Doyu wrote:
 + if (!arm_dt_cpu_map_valid())
 + set_cpu_possible(0, true);

You don't need to do any of this (and, therefore, I don't think you even
need the first patch.)

The generic boot code will set CPU0 as possible, present and online.  Think
about it: you're booting on that very CPU so it better be possible, present
and online.  If it isn't, what CPU is executing your code?

arm_dt_init_cpu_maps() will only ever set _additional_ CPUs in the possible
map.

Ergo, by the time the above code is run, CPU0 will already be marked as
possible.  Therefore, the above code and arm_dt_cpu_map_valid() is not
required.
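
(So a platform's smp_init_cpus() typically only has to add the secondary
cores; a rough sketch, where the core-count helper is invented purely for
illustration:)

#include <linux/init.h>
#include <linux/smp.h>

static unsigned int example_get_core_count(void)
{
    return 2;   /* invented: a real platform reads the SCU or DT here */
}

static void __init example_smp_init_cpus(void)
{
    unsigned int i, ncores = example_get_core_count();

    /* CPU0 is already possible/present/online: we are running on it */
    for (i = 1; i < ncores; i++)
        set_cpu_possible(i, true);
}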


Re: [PATCH] cpsw: Add support to read cpu MAC address

2013-01-11 Thread Russell King - ARM Linux
On Fri, Jan 11, 2013 at 04:15:02PM +0100, Michal Bachraty wrote:
 + if (!request_mem_region(priv-conf_res-start,
 + resource_size(priv-conf_res), ndev-name)) {
 + dev_err(priv-dev, failed request i/o region\n);
 + ret = -ENXIO;
 + goto clean_clk_ret;
 + }
 +
 + regs = ioremap(priv-conf_res-start,
 + resource_size(priv-conf_res));
 + if (!regs) {
 + dev_err(priv-dev, unable to map i/o region\n);
 + goto clean_configuration_iores_ret;
 + }

In this day and age where error paths don't get any testing, and are
frequently buggy, where we have alternative APIs which make those paths
more reliable, I think we should do everything to make use of that.

And, to prove the point, your error paths are buggy.  Yes, you release
the mem region correctly (well done for picking the right interface for
that!) but the ioremap() is never cleaned up.

So, any chance of converting the above to devm_request_and_ioremap() ?

Thanks.
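
(A sketch of that conversion, reusing the names from the quoted hunk and
assuming the usual platform_device pointer is in scope; an illustration of
the devm_ approach only, not a tested change:)

	/* devm_*: the region and the mapping are both released on detach */
	regs = devm_request_and_ioremap(&pdev->dev, priv->conf_res);
	if (!regs) {
		dev_err(priv->dev, "failed to request/map i/o region\n");
		ret = -EADDRNOTAVAIL;
		goto clean_clk_ret;
	}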


Re: [PATCH] ARM: kernel: DT cpu map validity check helper function

2013-01-11 Thread Russell King - ARM Linux
On Fri, Jan 11, 2013 at 04:17:38PM +, Lorenzo Pieralisi wrote:
 This patch implements a helper function that platforms can call to check
 whether DT based cpu map initialization and cores count were completed
 successfully.

Umm, are you sure this works?  Two problems here:
- the kernel boot marks the booting CPU (in our case, CPU0) as present,
  possible and online before arch code gets called.  smp_init_cpus()
  will be called with the maps already initialized per that.

- this really needs to be paired with a patch showing how you intend it
  to be used; merely adding the helper without any sign of it being used
  means it's just bloat which may not end up being used.


Re: [PATCH 1/4] gpiolib: introduce descriptor-based GPIO interface

2013-01-09 Thread Russell King - ARM Linux
On Wed, Jan 09, 2013 at 10:06:16AM +0900, Alexandre Courbot wrote:
 On Tue, Jan 8, 2013 at 9:59 PM, Arnd Bergmann a...@arndb.de wrote:
  Please avoid the use of IS_ERR_OR_NULL(), especially on interfaces you
  introduce yourself. AFAICT, gpiod_get cannot return NULL, so you
  should not check for that.
 
 Sure - you sound like IS_ERR_OR_NULL() is generally considered evil,
 may I ask why this is the case?

I think I've explained that in the past; many people just do not think.
They just use whatever macro they feel like is the right one.  We keep
seeing this, and this is a persistent problem. It's getting to be more
of a problem because people are starting to argue back when you point
out that they're wrong.

People are even starting to believe that documentation which explicitly
specifies "values where IS_ERR() is true are considered errors,
everything else is valid" means that the use of IS_ERR_OR_NULL() in such
cases is permissible.  (I've had such an argument with two people
recently.)

So, interfaces which have well defined return values and even interfaces
which specify _how_ errors should be checked end up being checked with
the wrong macros.  People constantly translate IS_ERR() to IS_ERR_OR_NULL()
even when it's inappropriate.

People don't think and people don't read documentation.  People don't
remember this level of detail.  Whatever the excuse, the problem remains.
IS_ERR_OR_NULL() always gets used inappropriately and without any regard
to whether it's correct or not.

So yes, IS_ERR_OR_NULL() _is_ pure evil.  IMHO this macro is doing more
harm than good.


Re: [PATCH 1/4] gpiolib: introduce descriptor-based GPIO interface

2013-01-09 Thread Russell King - ARM Linux
On Wed, Jan 09, 2013 at 10:35:22AM +, Arnd Bergmann wrote:
 On Wednesday 09 January 2013, Alexandre Courbot wrote:
  On Tue, Jan 8, 2013 at 9:59 PM, Arnd Bergmann a...@arndb.de wrote:
   Please avoid the use of IS_ERR_OR_NULL(), especially on interfaces you
   introduce yourself. AFAICT, gpiod_get cannot return NULL, so you
   should not check for that.
  
  Sure - you sound like IS_ERR_OR_NULL() is generally considered evil,
 
 Correct.
 
  may I ask why this is the case?
 
 It's very hard to get right: either you are interested in the error code,
 and then you don't have one in some cases, or you don't care but have
 to check for it anyway. When you define a function, just make it clear
 what the expected return values are, either NULL for error or a negative
 ERR_PTR value, but not both.

Indeed, and any code which does this:

if (IS_ERR_OR_NULL(ptr))
return PTR_ERR(ptr);

is buggy because on NULL it returns 0, which is generally accepted as being
success.
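
(The unambiguous patterns depend on what the callee is documented to
return; roughly, with foo()/bar() as placeholders:)

	/* callee documented to return ERR_PTR() on failure, never NULL */
	ptr = foo();
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);

	/* callee documented to return NULL on failure */
	ptr = bar();
	if (!ptr)
		return -ENOMEM;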


Re: [PATCH 1/4] gpiolib: introduce descriptor-based GPIO interface

2013-01-09 Thread Russell King - ARM Linux
On Wed, Jan 09, 2013 at 10:44:14AM +, Russell King - ARM Linux wrote:
 On Wed, Jan 09, 2013 at 10:35:22AM +, Arnd Bergmann wrote:
  On Wednesday 09 January 2013, Alexandre Courbot wrote:
   On Tue, Jan 8, 2013 at 9:59 PM, Arnd Bergmann a...@arndb.de wrote:
Please avoid the use of IS_ERR_OR_NULL(), especially on interfaces you
introduce yourself. AFAICT, gpiod_get cannot return NULL, so you
should not check for that.
   
   Sure - you sound like IS_ERR_OR_NULL() is generally considered evil,
  
  Correct.
  
   may I ask why this is the case?
  
  It's very hard to get right: either you are interested in the error code,
  and then you don't have one in some cases, or you don't care but have
  to check for it anyway. When you define a function, just make it clear
  what the expected return values are, either NULL for error or a negative
  ERR_PTR value, but not both.
 
 Indeed, and any code which does this:
 
   if (IS_ERR_OR_NULL(ptr))
   return PTR_ERR(ptr);
 
 is buggy because on NULL it returns 0, which is generally accepted as being
 success.

oh = omap_hwmod_lookup(oh_name);
if (!oh) {

oh = omap_hwmod_lookup(oh_name);
if (IS_ERR_OR_NULL(oh)) {

Does this function return NULL on errors or an errno-encoded-pointer on
errors?

	d = debugfs_create_dir("pm_debug", NULL);
	if (IS_ERR_OR_NULL(d))
		return PTR_ERR(d);

Well, covered above.  NULL is success here.

	err = gpio_request(en_vdd_1v05, "EN_VDD_1V05");
	if (err) {
		pr_err("%s: gpio_request failed: %d\n", __func__, err);
		return err;
	}

	gpio_direction_output(en_vdd_1v05, 1);

	regulator = regulator_get(NULL, "vdd_ldo0,vddio_pex_clk");
	if (IS_ERR_OR_NULL(regulator)) {
		pr_err("%s: regulator_get failed: %d\n", __func__,
		       (int)PTR_ERR(regulator));
		goto err_reg;
	}
	...
err_reg:
	gpio_free(en_vdd_1v05);

	return err;
}

So 'err' here is never set.  when IS_ERR_OR_NULL evaluates true.
Setting 'err' to PTR_ERR(regulator) is not correct because a NULL return
sets 'err' to zero.  Same here:

	/* create property id */
	ret = ipp_create_id(&ctx->prop_idr, &ctx->prop_lock, c_node,
		&property->prop_id);
	if (ret) {
		DRM_ERROR("failed to create id.\n");
		goto err_clear;
	}
	...
	c_node->start_work = ipp_create_cmd_work();
	if (IS_ERR_OR_NULL(c_node->start_work)) {
		DRM_ERROR("failed to create start work.\n");
		goto err_clear;
	}
	c_node->stop_work = ipp_create_cmd_work();
	if (IS_ERR_OR_NULL(c_node->stop_work)) {
		DRM_ERROR("failed to create stop work.\n");
		goto err_free_start;
	}
	c_node->event_work = ipp_create_event_work();
	if (IS_ERR_OR_NULL(c_node->event_work)) {
		DRM_ERROR("failed to create event work.\n");
		goto err_free_stop;
	}

We also have it cropping up as a way to 'verify' function arguments
are correct:

int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
			  struct gpd_timing_data *td)
{
	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
		return -EINVAL;

And then we have this beauty in USB code:

	if (!IS_ERR_OR_NULL(udc->transceiver))
		(void) otg_set_peripheral(udc->transceiver->otg, NULL);
	else
		pullup_disable(udc);
	...
	seq_printf(s,
		"UDC rev %d.%d, fifo mode %d, gadget %s\n"
		"hmc %d, transceiver %s\n",
		tmp >> 4, tmp & 0xf,
		fifo_mode,
		udc->driver ? udc->driver->driver.name : "(none)",
		HMC,
		udc->transceiver
			? udc->transceiver->label
			: (cpu_is_omap1710()
				? "external" : "(none)"));

If udc->transceiver were to be an error code, the above will segfault or
access memory at the top of memory space.

These are just a few of the issues I've picked out at random from grepping
the kernel source for IS_ERR_OR_NULL().  Yes, there's some valid use cases
but the above are all horrid, buggy or down right wrong, and I wouldn't be
at all surprised if that was all too common.


[PATCH] Proposed removal of IS_ERR_OR_NULL() (was: Re: [PATCH 1/4] gpiolib: introduce descriptor-based GPIO interface)

2013-01-09 Thread Russell King - ARM Linux
So, it seems there's some concensus building here, and it seems that
I've become the chosen victi^wvolunteer for this.  So, here's a patch.
It's missing a Guns-supplied-by: tag though.

From: Russell King rmk+ker...@arm.linux.org.uk
Subject: Mark IS_ERR_OR_NULL() deprecated

IS_ERR_OR_NULL() attracts a lot of abuse: people use it without much
thought about its effects.  Common errors include:
 1. checking the returned pointer for functions defined as only
returning errno-pointer values, rather than using IS_ERR().
This leads to: ptr = foo(); if (IS_ERR_OR_NULL(ptr)) return PTR_ERR(ptr);
 2. using it to check functions which only ever return NULL on error,
thereby leading to another zero-error value return.
In the case of debugfs functions, these return errno-pointer values when
debugfs is configured out, which means code which blindly checks using 
IS_ERR_OR_NULL() ends up returning errors, which is rather perverse for
something that's not implemented.

Therefore, let's schedule it for removal in a few releases.

Nicolas Pitre comments:
 I do agree with Russell here.  Despite the original intentions behind
 IS_ERR_OR_NULL() which were certainly legitimate, the end result in
 practice is less reliable code with increased maintenance costs.
 Unlike other convenience macros in the kernel, this one is giving a
 false sense of correctness with too many people falling in the trap
 of using it just because it is available.
 
 I strongly think this macro should simply be removed from the source
 tree entirely and the code reverted to explicit tests against NULL
 when appropriate.

Suggested-by: David Howells dhowe...@redhat.com
Tape-measuring-service-offered-by: Will Deacon will.dea...@arm.com
Victim-for-firing-squad: Russell King rmk+ker...@arm.linux.org.uk
Signed-off-by: Russell King rmk+ker...@arm.linux.org.uk
---
Ok, so I'm in the firing line for suggesting this, but it appears
several people wish this to happen.

I'm not intending to push this patch forwards _just_ yet: we need to
sort out the existing users _first_ to prevent the kernel turning into
one hell of a mess of warnings.

 include/linux/err.h |   17 -
 1 files changed, 16 insertions(+), 1 deletions(-)

diff --git a/include/linux/err.h b/include/linux/err.h
index f2edce2..d5a85df 100644
--- a/include/linux/err.h
+++ b/include/linux/err.h
@@ -34,7 +34,22 @@ static inline long __must_check IS_ERR(const void *ptr)
return IS_ERR_VALUE((unsigned long)ptr);
 }
 
-static inline long __must_check IS_ERR_OR_NULL(const void *ptr)
+/*
+ * IS_ERR_OR_NULL() attracts a lot of abuse: people use it without much
+ * thought about its effects.  Common errors include:
+ *  1. checking the returned pointer for functions defined as only returning
+ * errno-pointer values, rather than using IS_ERR().
+ * This leads to: ptr = foo(); if (IS_ERR_OR_NULL(ptr)) return PTR_ERR(ptr);
+ *  2. using it to check functions which only ever return NULL on error,
+ * thereby leading to another zero-error value return.
+ * In the case of debugfs functions, these return errno-pointer values when
+ * debugfs is configured out, which means code which blindly checks using
+ * IS_ERR_OR_NULL() ends up returning errors, which is rather perverse for
+ * something that's not implemented.
+ *
+ * Therefore, let's schedule it for removal in a few releases.
+ */
+static inline long __must_check __deprecated IS_ERR_OR_NULL(const void *ptr)
 {
return !ptr || IS_ERR_VALUE((unsigned long)ptr);
 }



Re: [PATCH] Proposed removal of IS_ERR_OR_NULL() (was: Re: [PATCH 1/4] gpiolib: introduce descriptor-based GPIO interface)

2013-01-09 Thread Russell King - ARM Linux
On Wed, Jan 09, 2013 at 10:27:53AM -0500, Nicolas Pitre wrote:
 Anyone with good coccinelle skills around to deal with the users?

I'm not sure that's a solution.

For example:

	err = gpio_request(en_vdd_1v05, "EN_VDD_1V05");
	if (err) {
		pr_err("%s: gpio_request failed: %d\n", __func__, err);
		return err;
	}
	...
	regulator = regulator_get(NULL, "vdd_ldo0,vddio_pex_clk");
	if (IS_ERR_OR_NULL(regulator)) {
		pr_err("%s: regulator_get failed: %d\n", __func__,
		       (int)PTR_ERR(regulator));
		goto err_reg;
	}

	regulator_enable(regulator);

	err = tegra_pcie_init(true, true);
	...
err_reg:
	gpio_free(en_vdd_1v05);

	return err;
}

Now, regulator_get() returns error-pointers for real errors when it's
configured in.  When regulator support is not configured, it returns
NULL.

So, one solution here would be:

	if (IS_ERR(regulator)) {
		err = PTR_ERR(regulator);
		pr_err("%s: regulator_get failed: %d\n", __func__, err);
		goto err_reg;
	}

but leaves us with the question: is it safe to call tegra_pcie_init()
without regulator support?

The problem there is that fixing this causes a behavioural change in
the code, and we don't know what effect that change may have.
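
(If the old carry-on-without-a-regulator behaviour has to be preserved, one
way to write it, a sketch only, assuming as the mail above says that a NULL
return can only come from the CONFIG_REGULATOR=n stubs, whose
regulator_enable() ignores its argument, would be:)

	regulator = regulator_get(NULL, "vdd_ldo0,vddio_pex_clk");
	if (IS_ERR(regulator)) {
		err = PTR_ERR(regulator);
		pr_err("%s: regulator_get failed: %d\n", __func__, err);
		goto err_reg;
	}

	/*
	 * NULL here can only mean the !CONFIG_REGULATOR stubs are in use;
	 * their regulator_enable() is a no-op, so carry on as the original
	 * code did rather than treating it as an error.
	 */
	regulator_enable(regulator);

	err = tegra_pcie_init(true, true);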

