Re: [PATCH v3 0/3] Support NVIDIA Tegra-based Ouya game console

2020-10-07 Thread Stephen Warren
On 10/7/20 7:53 AM, Dmitry Osipenko wrote:
> 07.10.2020 16:36, Bob Ham пишет:
>> Hi all,
>>
>> The Bluetooth controller driver sent to linux-input by Lukas Rusak
>> (CC'd) is a bit of a concern.  Here is the original driver:
>>
>> https://github.com/ouya/ouya_1_1-kernel/blob/master/drivers/hid/hid-ouya.c
>>
>> and you can see that there is no SPDX header, no license information and
>> no MODULE_LICENSE declaration.  I'd previously noticed this as a
>> possible cause for concern in upstreaming the driver.
>>
>> Meanwhile, Lukas's driver is clearly derived from the Ouya Inc. driver
>> and even retains the Ouya Inc. copyright notice line.  However, Lukas's
>> driver now has a MODULE_LICENSE("GPL") declaration added.
>>
>> Lukas, did you get clear permission to license the driver as GPL?
>>
>> Alternatively, kernel developers with greater legal or Ouya knowledge,
>> is this actually a concern or is it nothing to worry about?
> 
> Hello Bob,
> 
> That can't be a problem because:
> 
> 1. Ouya Inc. doesn't exist anymore.
> 
> 2. If the code was officially published, then this implies that it can be
> derived from.
> 
> 3. Ouya's driver uses kernel symbols that are explicitly marked as
> GPL-only, see hid_open_report for example. Hence Ouya's driver inherits
> the GPL license.
> 
> 4. Lukas's patch doesn't resemble the original code at all.
> 
> 5. In practice nobody cares much about the legal side if money isn't involved.
> Even if it turned out that the driver was 100% violating some copyrights, it
> would either not be accepted upstream or be removed at the request of the
> copyright holder.

This definitely isn't the correct attitude to copyright.

The facts[1] that Ouya published the code and that it used GPL-only
symbols certainly do imply that they *should* have published it under the
GPL or a compatible license, but they don't mean that they definitely did.
The only way to know for sure is for there to be evidence in the file
content or git history, such as license headers or Signed-off-by lines.
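
For reference, the kind of in-file evidence I mean looks like the following;
this is a made-up skeleton, not the Ouya driver, and the names are purely
illustrative:

// SPDX-License-Identifier: GPL-2.0
/*
 * hid-example.c - hypothetical HID driver skeleton, shown only to
 * illustrate the license markings discussed above.
 *
 * Copyright (C) 20xx Some Copyright Holder
 */
#include <linux/hid.h>
#include <linux/module.h>

/* ... driver body elided ... */

MODULE_AUTHOR("Some Author <author@example.com>");
MODULE_DESCRIPTION("Example HID driver showing explicit license declarations");
/*
 * Without a GPL-compatible MODULE_LICENSE, the module loader refuses to
 * resolve EXPORT_SYMBOL_GPL symbols and marks the kernel as tainted.
 */
MODULE_LICENSE("GPL");

Markings like these in the original Ouya file, or added under a Signed-off-by
from someone with the right to relicense, would go a long way toward answering
Bob's question.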

When someone writes Signed-off-by in their code submission, which is
required to contribute to upstream Linux, they are stating/warranting
certain things about the code they're submitting. One aspect is that
they definitely have the rights to submit the code under the given
license. Without evidence to this effect, or having written the code
themselves, nobody can state/warrant this. Take a look at the following
link to see what you're stating/warranting when writing Signed-off-by in
a code submission:

https://developercertificate.org/

[1] I haven't checked the facts.


Re: [PATCH 2/4] dt-bindings: sram: add documentation for reserved-only flag

2020-05-19 Thread Stephen Warren
On 5/13/20 4:41 AM, Mian Yousaf Kaukab wrote:
> On Tue, May 12, 2020 at 01:45:28PM -0600, Stephen Warren wrote:
>> On 5/12/20 8:48 AM, Mian Yousaf Kaukab wrote:
>>> Add documentation for the new optional flag added for SRAM driver.
>>
>>> diff --git a/Documentation/devicetree/bindings/sram/sram.yaml 
>>> b/Documentation/devicetree/bindings/sram/sram.yaml
>>
>>> +  reserved-only:
>>> +description:
>>> +  The flag indicating, that only SRAM reserved regions have to be 
>>> remapped.
>>> +  remapping type is selected depending upon no-memory-wc as usual.
>>> +type: boolean
>>
>> This feels a bit like a SW flag rather than a HW description, so I'm not
>> sure it's appropriate to put it into DT.
> 
> Reserved regions themselves are software descriptions, no? Then we have 'pool'
> flag which is again a software flag and so on. This flag falls into same
> category and nothing out of ordinary.

I suppose that's true to some extent. This is indeed a description of
the system environment presented to the SW that consumes the DT, which
is a bit more than pure HW description but still a description of
something imposed externally rather than describing something that's up
to the discretion of the consuming SW. So, go ahead.

>> Are there any cases where the SW should map all of the SRAM, i.e. where
>> we wouldn't expect to set reserved-only? [...]
> 
> Yes, here are a few examples:
> arch/arm/boot/dts/aspeed-g*.dtsi
> arch/arm/boot/dts/at91*.dtsi
> arch/arm/boot/dts/bcm7445.dtsi
> Then arch/arm/boot/dts/dra7.dtsi is an example where we should map everything
> except the reserved region.
> 
>> [...] I'd expect reserved-only to be
>> the default, and perhaps only, mode of operation for the SRAM driver.
> 
> It will break compatibility with existing dtbs.
> 
>> If we can't do that because some SW currently expects to be able to map
>> arbitrary portions of the SRAM, shouldn't that SW be fixed to tell the
>> SRAM driver which parts it's using, hence still allowing the driver to
>> only map in-use portions?
> 
> The user doesn’t need the sram driver in that case. It can use the genalloc API directly.

This sounds a bit odd. Without a driver for the reserved region, nothing
should be touching it, since otherwise there's no code that owns and
manages the region. If any code needs to consume the region, it should
obtain info about the region from some form of provider code that can
handle both the allocation and mapping. Anything else sounds like some
consumer code directly making use of DT nodes it doesn't own. But since
I'm not familiar enough with the SRAM driver and genalloc code that you
mention to fully understand the allocation paths I guess I won't object
for now, although it does still sound fishy.
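
For reference, this is roughly the provider/consumer split I have in mind,
using the genalloc API you mention. It's only a minimal sketch; the
"example-sram" property name and the surrounding driver are hypothetical:

#include <linux/errno.h>
#include <linux/genalloc.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/*
 * Hypothetical consumer: rather than mapping SRAM itself, it asks the SRAM
 * provider (drivers/misc/sram.c) for the gen_pool backing a reserved
 * region, then allocates from that pool.
 */
static int example_get_sram_buffer(struct platform_device *pdev,
				   void __iomem **virt, phys_addr_t *phys,
				   size_t size)
{
	struct gen_pool *pool;
	unsigned long vaddr;

	/* Pool exported by the SRAM driver for this consumer's phandle. */
	pool = of_gen_pool_get(pdev->dev.of_node, "example-sram", 0);
	if (!pool)
		return -EPROBE_DEFER;

	vaddr = gen_pool_alloc(pool, size);
	if (!vaddr)
		return -ENOMEM;

	*virt = (void __iomem *)vaddr;
	*phys = gen_pool_virt_to_phys(pool, vaddr);
	return 0;
}

That keeps the mapping decision, and hence the question of which parts are
safe to touch, entirely inside the provider.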


Re: [PATCH 2/4] dt-bindings: sram: add documentation for reserved-only flag

2020-05-12 Thread Stephen Warren
On 5/12/20 8:48 AM, Mian Yousaf Kaukab wrote:
> Add documentation for the new optional flag added for SRAM driver.

> diff --git a/Documentation/devicetree/bindings/sram/sram.yaml 
> b/Documentation/devicetree/bindings/sram/sram.yaml

> +  reserved-only:
> +description:
> +  The flag indicating, that only SRAM reserved regions have to be 
> remapped.
> +  remapping type is selected depending upon no-memory-wc as usual.
> +type: boolean

This feels a bit like a SW flag rather than a HW description, so I'm not
sure it's appropriate to put it into DT.

Are there any cases where the SW should map all of the SRAM, i.e. where
we wouldn't expect to set reserved-only? I'd expect reserved-only to be
the default, and perhaps only, mode of operation for the SRAM driver. If
we can't do that because some SW currently expects to be able to map
arbitrary portions of the SRAM, shouldn't that SW be fixed to tell the
SRAM driver which parts it's using, hence still allowing the driver to
only map in-use portions?


Re: arm64: tegra186: bpmp: kernel crash while decompressing initrd

2020-05-11 Thread Stephen Warren
On 5/11/20 9:23 AM, Mian Yousaf Kaukab wrote:
> On Mon, May 11, 2020 at 12:25:00PM +0100, Robin Murphy wrote:
>> On 2020-05-08 9:40 am, Mian Yousaf Kaukab wrote:
>>> I am seeing following kernel crash on Jetson TX2. Board is flashed with
>>> firmware bits from L4T R32.4.2 with upstream u-boot. Crash always
>>> happens while decompressing initrd. Initrd is approximately 80 MiB in
>>> size and compressed with xz (xz --check=crc32 --lzma2=dict=32MiB).
>>> Crash is not observed if the same initrd is compressed with gzip.
>>> [1] was a previous attempt to workaround the same issue.
>>>
...
>>>
>>> With some debugging aid ported from Nvidia downstream kernel [2] the
>>> actual cause was found:
>>>
>>> [0.761525] Trying to unpack rootfs image as initramfs...
>>> [2.955499] CPU0: SError: mpidr=0x8100, esr=0xbf40c000
>>> [2.955502] CPU1: SError: mpidr=0x8000, esr=0xbe00
>>> [2.955505] CPU2: SError: mpidr=0x8001, esr=0xbe00
>>> [2.955506] CPU3: SError: mpidr=0x8101, esr=0xbf40c000
>>> [2.955507] ROC:CCE Machine Check Error:
>>> [2.955508] ROC:CCE Registers:
>>> [2.955509]  STAT: 0xb4400415
>>> [2.955510]  ADDR: 0x400c00e7a00c
>>> [2.955511]  MSC1: 0x80ffc
>>> [2.955512]  MSC2: 0x39800
>>> [2.955513] --
>>> [2.955514] Decoded ROC:CCE Machine Check:
>>> [2.955515]  Uncorrected (this is fatal)
>>> [2.955516]  Error reporting enabled when error arrived
>>> [2.955517]  Error Code = 0x415
>>> [2.955518]  Poison Error
>>> [2.955518]  Command = NCRd (0xc)
>>> [2.955519]  Address Type = Non-Secure DRAM
>>> [2.955521]  Address = 0x30039e80 -- 3000.sysram + 0x39e80
>>> [2.955521]  TLimit = 0x3ff
>>> [2.955522]  Poison Error Mask = 0x80
>>> [2.955523]  More Info = 0x800
>>> [2.955524]  Timeout Info = 0x0
>>> [2.955525]  Poison Info = 0x800
>>> [2.955526]  Read Request failed GSC checks
>>> [2.955527]  Source = L2_1 (A57) (0x1)
>>> [2.955528]  TID = 0xe
>>>
>>> IIUC, there was read request for 0x30039e80 from EL1/2 which failed.
>>> This address falls in the sysram security aperture and hence a read
>>> from normal mode failed.
>>>
>>> sysram is mapped at 0x3000_ to 0x3004_ and is managed by the
>>> sram driver (drivers/misc/sram.c). There are two reserved pools for
>>> BPMP driver communication at 0x3004_e000 and 0x3004_f000 of 0x1000
>>> bytes each.
>>>
>>> sram driver maps complete 0x3000_ to 0x3004_ range as normal
>>> memory.
>
>> That's your problem. It's not really worth attempting to reason about, the
>> architecture says that anything mapped as Normal memory may be speculatively
>> accessed at any time, so no amount of second-guessing is going to save you
>> in general. Don't make stuff accessible to the kernel that it doesn't need
>> to access, and especially don't make stuff accessible to the kernel if
>> accessing it will kill the system.
>>
> I agree, and [1] was an attempt in that direction. What I wonder about here is
> that the processor is speculating on an address range which the kernel has
> never accessed. Is it correct behavior for the CPU to speculate in EL1/EL2 on
> an address accessed in EL3?

That is indeed the way the ARM architecture is defined (at least the
version that this CPU implements; maybe other versions too), and this
certainly does happen in practice. I've seen this same kind of issue
arise in other cases too (see below). The only solution is not to map
memory as Normal when it isn't actually normal RAM, so either (a) don't
map it at all, or (b) map it as some other memory type which can't be
accessed speculatively.
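
To make (b) concrete, on the kernel side the difference is just which mapping
primitive gets used. A rough sketch (not the actual sram.c code; the helper
name is made up):

#include <linux/io.h>
#include <linux/of.h>

/*
 * Sketch: on arm64, ioremap() gives a Device (nGnRE) mapping, which the CPU
 * will not read speculatively, while ioremap_wc() gives Normal non-cacheable
 * memory, which it may prefetch at any time. The "no-memory-wc" check mirrors
 * the property already documented in the SRAM binding.
 */
static void __iomem *map_sysram(struct device_node *np,
				phys_addr_t base, size_t size)
{
	if (of_property_read_bool(np, "no-memory-wc"))
		return ioremap(base, size);	/* Device memory: no speculation */

	return ioremap_wc(base, size);		/* Normal-NC: may be prefetched */
}

Option (a), not mapping the problem region at all, is still the safer default
where the layout is known up front.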

Just as a related example, consider the following patch I had to make to
U-Boot to fix a similar issue that causes SError during boot:

> commit d40d69ee350b62af90c2b522e05cbb3eb5f27112
> Author: Stephen Warren 
> Date:   Mon Oct 10 09:50:55 2016 -0600
> 
> ARM: tegra: reduce DRAM size mapped into MMU on ARM64
> 
> ARM CPUs can architecturally (speculatively) prefetch completely arbitrary
> normal memory locations, as defined by the current translation tables. The
> current MMU configuration for 64-bit Tegras maps an extremely large range
> of addresses as DRAM, well beyond the actual physical maximum DRAM window,
> even though U-Boot only needs access to the first 2GB of DRAM; the Tegra
> port of U-Boot deliberately limits itself to 2GB of RAM since some HW
> modules on at least some 64-bit Tegra SoCs can only access a 32-bit
> physical address space. This change reduces the amount of RAM mapped via
> the MMU to disallow the CPU from ever speculatively accessing RAM that
> U-Boot will definitely not access. This avoids the possibility of the HW
> raising SError due to accesses to always-invalid physical addresses.
> 
> Signed-off-by: Stephen Warren 
> Signed-off-by: Tom Warren 


Re: [PATCH] arm64: tegra: only map accessible sysram

2019-10-02 Thread Stephen Warren

On 9/30/19 4:02 AM, Mian Yousaf Kaukab wrote:

On Sun, Sep 29, 2019 at 11:28:43PM -0600, Stephen Warren wrote:

On 9/29/19 2:08 PM, Mian Yousaf Kaukab wrote:

Most of the SysRAM is secure and only accessible by TF-A.
Don't map this inaccessible memory in kernel. Only map pages
used by bpmp driver.


I don't believe this change is correct. The actual patch doesn't
implement mapping a subset of the RAM (a software issue), but rather it
changes the DT representation of the SYSRAM hardware. The SYSRAM
hardware always does start at 0x3000, even if a subset of the
address range is dedicated to a specific purpose. If the kernel must map
only part of the RAM, then some additional property should indicate
this.[...]

I agree the hardware description becomes inaccurate with this change.

In the current setup the complete 0x3000_ to 0x3005_ range is being mapped
as normal memory (MT_NORMAL_NC), though only 0x3004_E000 to 0x3005_ is
accessible by the kernel.


Nit: I expect that a much larger region than that is *accessible*, 
although it's quite plausible that only that region is actually 
*accessed*/used right now.



I am seeing an issue where a read access (which I
believe is speculative) to inaccessible range causes an SError. Another
solution for this problem could be to add "no-memory-wc" to SysRAM node so that
it is mapped as device memory (MT_DEVICE_nGnRE). Would that be acceptable?


Why does the driver blindly map the entire memory at all? Surely it 
should only map the portions of RAM that other drivers request/use? And 
surely the BPMP driver or DT node is already providing that information?


But yes, changing the mapping type to avoid speculation might be an 
acceptable solution for now, although I think we'd want to work things 
out better later. I don't know if there would be any impact to the BPMP 
driver related to the slower SRAM access due to this change. Best 
consult a BPMP expert or Tegra maintainer about that.



[...] Also, I believe it's incorrect to hard-code into the kernel's DT
the range of addresses used by the secure monitor/OS, since this can
vary depending on what the user actually chooses to install as the
secure monitor/OS. Any indication of such regions should be filled in at
runtime by some boot firmware or the secure monitor/OS itself, or
retrieved using some runtime API rather than DT.

Secure-OS addresses are not of interest here. SysRAM is partitioned
between secure-OS and BPMP and kernel is only interested in the BPMP
part. The firmware can update these addresses in the device-tree if it
wants to. Would you prefer something similar implemented in u-boot so
that it updates SysRAM node to only expose kernel accessible part of it
to the kernel?

Can u-boot dynamically figure out the Secure-OS vs BPMP partition?

BR,
Yousaf





Re: [PATCH] arm64: tegra: only map accessible sysram

2019-09-29 Thread Stephen Warren
On 9/29/19 2:08 PM, Mian Yousaf Kaukab wrote:
> Most of the SysRAM is secure and only accessible by TF-A.
> Don't map this inaccessible memory in kernel. Only map pages
> used by bpmp driver.

I don't believe this change is correct. The actual patch doesn't
implement mapping a subset of the RAM (a software issue), but rather it
changes the DT representation of the SYSRAM hardware. The SYSRAM
hardware always does start at 0x3000, even if a subset of the
address range is dedicated to a specific purpose. If the kernel must map
only part of the RAM, then some additional property should indicate
this. Also, I believe it's incorrect to hard-code into the kernel's DT
the range of addresses used by the secure monitor/OS, since this can
vary depending on what the user actually chooses to install as the
secure monitor/OS. Any indication of such regions should be filled in at
runtime by some boot firmware or the secure monitor/OS itself, or
retrieved using some runtime API rather than DT.


Re: [PATCH V3 02/17] pinctrl: tegra: add suspend and resume support

2019-06-18 Thread Stephen Warren

On 6/18/19 3:30 AM, Dmitry Osipenko wrote:

18.06.2019 12:22, Dmitry Osipenko пишет:

18.06.2019 10:46, Sowjanya Komatineni пишет:

This patch adds suspend and resume support for Tegra pinctrl driver
and registers them to syscore so the pinmux settings are restored
before the devices resume.

Signed-off-by: Sowjanya Komatineni 
---
  drivers/pinctrl/tegra/pinctrl-tegra.c| 62 
  drivers/pinctrl/tegra/pinctrl-tegra.h|  5 +++
  drivers/pinctrl/tegra/pinctrl-tegra114.c |  1 +
  drivers/pinctrl/tegra/pinctrl-tegra124.c |  1 +
  drivers/pinctrl/tegra/pinctrl-tegra20.c  |  1 +
  drivers/pinctrl/tegra/pinctrl-tegra210.c | 13 +++
  drivers/pinctrl/tegra/pinctrl-tegra30.c  |  1 +
  7 files changed, 84 insertions(+)

diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c 
b/drivers/pinctrl/tegra/pinctrl-tegra.c
index 34596b246578..ceced30d8bd1 100644
--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
+++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
@@ -20,11 +20,16 @@
  #include 
  #include 
  #include 
+#include 
  
  #include "../core.h"

  #include "../pinctrl-utils.h"
  #include "pinctrl-tegra.h"
  
+#define EMMC2_PAD_CFGPADCTRL_0			0x1c8

+#define EMMC4_PAD_CFGPADCTRL_0 0x1e0
+#define EMMC_DPD_PARKING   (0x1fff << 14)
+
  static inline u32 pmx_readl(struct tegra_pmx *pmx, u32 bank, u32 reg)
  {
return readl(pmx->regs[bank] + reg);
@@ -619,6 +624,48 @@ static void tegra_pinctrl_clear_parked_bits(struct 
tegra_pmx *pmx)
pmx_writel(pmx, val, g->mux_bank, g->mux_reg);
}
}
+
+   if (pmx->soc->has_park_padcfg) {
+   val = pmx_readl(pmx, 0, EMMC2_PAD_CFGPADCTRL_0);
+   val &= ~EMMC_DPD_PARKING;
+   pmx_writel(pmx, val, 0, EMMC2_PAD_CFGPADCTRL_0);
+
+   val = pmx_readl(pmx, 0, EMMC4_PAD_CFGPADCTRL_0);
+   val &= ~EMMC_DPD_PARKING;
+   pmx_writel(pmx, val, 0, EMMC4_PAD_CFGPADCTRL_0);
+   }
+}


Is there any reason why parked_bit can't be changed to parked_bitmask like I was
asking in a comment to v2?

I suppose that it's preferable to keep pinctrl-tegra.c platform-agnostic for
consistency when possible, hence adding platform specifics here should be
discouraged. And then parked_bitmask will also result in a proper hardware
description in the code.



I'm now also vaguely recalling that Stephen Warren had some kind of a "code
generator" for the pinctrl drivers. So I guess all those tables were
auto-generated initially.

Stephen, maybe you could adjust the generator to take into account the bitmask
(of course, if that's a part of the generated code) and then re-gen it all for
Sowjanya?


https://github.com/NVIDIA/tegra-pinmux-scripts holds the scripts that
generate pinctrl-tegraNNN.c; see soc-to-kernel-pinctrl-driver.py.
IIRC, pinctrl-tegra.c (the core file) isn't auto-generated. Sowjanya is
welcome to send a patch to that repo if the code needs to be updated.
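
For reference, a rough sketch of the parked_bitmask idea discussed above. This
is not the current upstream code; the per-group field and the soc->ngroups /
soc->groups layout are assumptions based on the existing driver structure:

/*
 * Sketch: clear the parking state using a per-group bitmask instead of a
 * single bit index, so SoC-specific registers such as the EMMC pad
 * CFGPADCTRL ones can live in the per-SoC tables rather than being
 * hard-coded in pinctrl-tegra.c.
 */
static void tegra_pinctrl_clear_parked_bits(struct tegra_pmx *pmx)
{
	unsigned int i;

	for (i = 0; i < pmx->soc->ngroups; i++) {
		const struct tegra_pingroup *g = &pmx->soc->groups[i];
		u32 val;

		if (!g->parked_bitmask)
			continue;

		val = pmx_readl(pmx, g->mux_bank, g->mux_reg);
		val &= ~g->parked_bitmask;
		pmx_writel(pmx, val, g->mux_bank, g->mux_reg);
	}
}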


Re: [PATCH 0/8] Virtio-over-PCIe on non-MIC

2019-01-18 Thread Stephen Warren

On 1/16/19 9:32 AM, Vincent Whitchurch wrote:

The Virtio-over-PCIe framework living under drivers/misc/mic/vop implements a
generic framework to use virtio between two Linux systems, given shared memory
and a couple of interrupts.  It does not actually require the Intel MIC
hardware, x86-64, or even PCIe for that matter.  This patch series makes it
buildable on more systems and adds a loopback driver to test it without special
hardware.

Note that I don't have access to Intel MIC hardware so some testing of the
patchset (especially the patch "vop: Use consistent DMA") on that platform
would be appreciated, to ensure that the series does not break anything there.


So a while ago I took a look at running virtio over PCIe. I found virtio 
basically had two parts:


1) The protocol used to enumerate which virtio devices exist, and 
perhaps configure them.


2) The ring buffer protocol that actually transfers the data.

I recall that data transfer was purely based on simple shared memory and 
interrupts, and hence could run over PCIe (e.g. via the PCIe endpoint 
subsystem in the kernel) without issue.


However, the enumeration/configuration protocol requires the host to be 
able to do all kinds of strange things that can't possibly be emulated 
over PCIe; IIRC the configuration data contains "registers" that when 
written select the data other "registers" access. When the virtio device 
is exposed by a hypervisor, and all the accesses are emulated 
synchronously through a trap, this is easy enough to implement. However, 
if the two ends of this configuration parsing are on different ends of a 
PCIe bus, there's no way this can work.
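
To make that concrete, here's a hypothetical sketch of the kind of select/data
register pair I mean, modelled loosely on virtio's queue-select mechanism; the
struct and names are made up, not an actual virtio transport definition. In
the hypervisor case every access traps synchronously into a handler like this,
which is exactly what a PCIe bus can't give you:

#include <stdint.h>

/* Hypothetical device-side emulation state for a select/data register pair. */
struct cfg_emul {
	uint32_t queue_sel;	/* last value written to the "select" register */
	uint32_t queue_size[8];	/* per-queue data read back via the "data" register */
};

/* Invoked synchronously when the guest writes the select register. */
static void cfg_write_queue_sel(struct cfg_emul *e, uint32_t val)
{
	e->queue_sel = val;
}

/*
 * Invoked synchronously when the guest reads the data register; the result
 * depends on the immediately preceding select write. Across PCIe, the other
 * end never observes the individual accesses as they happen, so it cannot
 * implement this behaviour.
 */
static uint32_t cfg_read_queue_size(const struct cfg_emul *e)
{
	if (e->queue_sel < 8)
		return e->queue_size[e->queue_sel];
	return 0;
}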


Are you thinking of doing something different for 
enumeration/configuration, and just using the virtio ring buffer 
protocol over PCIe?


I did post asking about this quite a while back, but IIRC I didn't 
receive much of a response. Yes, here it is:



https://lists.linuxfoundation.org/pipermail/virtualization/2018-March/037276.html

"virtio over SW-defined/CPU-driven PCIe endpoint"


Re: [RFC PATCH v2 09/17] ARM: dts: tegra20: harmony: Setup voltage regulators for DVFS

2018-10-22 Thread Stephen Warren

On 10/21/18 2:54 PM, Dmitry Osipenko wrote:

Set min/max regulators voltage and add CPU node that hooks up CPU with
voltage regulators.



diff --git a/arch/arm/boot/dts/tegra20-harmony.dts 
b/arch/arm/boot/dts/tegra20-harmony.dts



-   sm0 {
+   core_vdd_reg: sm0 {
regulator-name = "vdd_sm0,vdd_core";
-   regulator-min-microvolt = <120>;
-   regulator-max-microvolt = <120>;
+   regulator-min-microvolt = <100>;
+   regulator-max-microvolt = <130>;
+   regulator-coupled-with = <_vdd_reg>;
+   regulator-coupled-max-spread = <15>;
regulator-always-on;
};


How do you know for sure that these increased ranges are safe (high end) 
and stable (low end) for this particular board? IIRC the safe/legal 
range depends on the chip SKU, and to be honest I have no idea which SKU 
is present on Harmony... For public boards like Colibri I imagine 
there's enough information out there to tell what will work, but maybe 
not for our internal boards like Harmony, unless you checked our ancient 
downstream kernels?


Re: [PATCH v2 3/6] ARM: trusted_foundations: do not use naked function

2018-04-16 Thread Stephen Warren

On 04/16/2018 09:56 AM, Stefan Agner wrote:

On 27.03.2018 14:16, Dmitry Osipenko wrote:

On 27.03.2018 14:54, Robin Murphy wrote:

On 26/03/18 22:20, Dmitry Osipenko wrote:

On 25.03.2018 21:09, Stefan Agner wrote:

As documented in GCC naked functions should only use Basic asm
syntax. The Extended asm or mixture of Basic asm and "C" code is
not guaranteed. Currently this works because it was hard coded
to follow and check GCC behavior for arguments and register
placement.

Furthermore with clang using parameters in Extended asm in a
naked function is not supported:
    arch/arm/firmware/trusted_foundations.c:47:10: error: parameter
    references not allowed in naked functions
  : "r" (type), "r" (arg1), "r" (arg2)
     ^

Use a regular function to be more portable. This aligns also with
the other smc call implementations e.g. in qcom_scm-32.c and
bcm_kona_smc.c.

Cc: Dmitry Osipenko <dig...@gmail.com>
Cc: Stephen Warren <swar...@nvidia.com>
Cc: Thierry Reding <tred...@nvidia.com>
Signed-off-by: Stefan Agner <ste...@agner.ch>
---
Changes in v2:
- Keep stmfd/ldmfd to avoid potential ABI issues

   arch/arm/firmware/trusted_foundations.c | 14 +-
   1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/arm/firmware/trusted_foundations.c
b/arch/arm/firmware/trusted_foundations.c
index 3fb1b5a1dce9..689e6565abfc 100644
--- a/arch/arm/firmware/trusted_foundations.c
+++ b/arch/arm/firmware/trusted_foundations.c
@@ -31,21 +31,25 @@
     static unsigned long cpu_boot_addr;
   -static void __naked tf_generic_smc(u32 type, u32 arg1, u32 arg2)
+static void tf_generic_smc(u32 type, u32 arg1, u32 arg2)
   {
+    register u32 r0 asm("r0") = type;
+    register u32 r1 asm("r1") = arg1;
+    register u32 r2 asm("r2") = arg2;
+
   asm volatile(
   ".arch_extension    sec\n\t"
-    "stmfd    sp!, {r4 - r11, lr}\n\t"
+    "stmfd    sp!, {r4 - r11}\n\t"
   __asmeq("%0", "r0")
   __asmeq("%1", "r1")
   __asmeq("%2", "r2")
   "mov    r3, #0\n\t"
   "mov    r4, #0\n\t"
   "smc    #0\n\t"
-    "ldmfd    sp!, {r4 - r11, pc}"
+    "ldmfd    sp!, {r4 - r11}\n\t"
   :
-    : "r" (type), "r" (arg1), "r" (arg2)
-    : "memory");
+    : "r" (r0), "r" (r1), "r" (r2)
+    : "memory", "r3", "r12", "lr");


Although it seems "lr" won't be affected by the SMC invocation, because it
should be banked, and hence it could be omitted from the code entirely. Maybe
somebody could confirm this.

Strictly per the letter of the architecture, the SMC could be trapped to Hyp
mode, and a hypervisor might clobber LR_usr in the process of forwarding the
call to the firmware secure monitor (since Hyp doesn't have a banked LR of its
own). Admittedly there are probably no real systems with the appropriate
hardware/software combination to hit that, but on the other hand if this gets
inlined where the compiler has already created a stack frame then an LR clobber
is essentially free, so I reckon we're better off keeping it for reassurance.
This isn't exactly a critical fast path anyway.


Okay, thank you for the clarification.


So it seems this change is fine?

Stephen, you picked up changes for this driver before, is this patch
going through your tree?


You had best ask Thierry; he's taken over Tegra maintenance upstream. 
But that said, don't files in arch/arm go through Russell?


Re: [PATCH 3/5] ARM: trusted_foundations: do not use naked function

2018-03-21 Thread Stephen Warren

On 03/21/2018 09:26 AM, Dmitry Osipenko wrote:

On 21.03.2018 17:09, Stefan Agner wrote:

On 21.03.2018 13:13, Robin Murphy wrote:

On 20/03/18 23:02, Stefan Agner wrote:

As documented in GCC naked functions should only use Basic asm
syntax. The Extended asm or mixture of Basic asm and "C" code is
not guaranteed. Currently this works because it was hard coded
to follow and check GCC behavior for arguments and register
placement.

Furthermore with clang using parameters in Extended asm in a
naked function is not supported:
arch/arm/firmware/trusted_foundations.c:47:10: error: parameter
references not allowed in naked functions
  : "r" (type), "r" (arg1), "r" (arg2)
 ^

Use a regular function to be more portable. This aligns also with
the other smc call implementations e.g. in qcom_scm-32.c and
bcm_kona_smc.c.

Additionally also make sure all callee-saved registers get saved
as it has been done before.

Signed-off-by: Stefan Agner 
---
   arch/arm/firmware/trusted_foundations.c | 12 +++-
   1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/arm/firmware/trusted_foundations.c 
b/arch/arm/firmware/trusted_foundations.c
index 3fb1b5a1dce9..426d732e6591 100644
--- a/arch/arm/firmware/trusted_foundations.c
+++ b/arch/arm/firmware/trusted_foundations.c
@@ -31,21 +31,23 @@
 static unsigned long cpu_boot_addr;
   -static void __naked tf_generic_smc(u32 type, u32 arg1, u32 arg2)
+static void tf_generic_smc(u32 type, u32 arg1, u32 arg2)
   {
+   register u32 r0 asm("r0") = type;
+   register u32 r1 asm("r1") = arg1;
+   register u32 r2 asm("r2") = arg2;
+
asm volatile(
".arch_extension   sec\n\t"
-   "stmfd sp!, {r4 - r11, lr}\n\t"
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
"mov   r3, #0\n\t"
"mov   r4, #0\n\t"
"smc   #0\n\t"
-   "ldmfd sp!, {r4 - r11, pc}"
:
-   : "r" (type), "r" (arg1), "r" (arg2)
-   : "memory");
+   : "r" (r0), "r" (r1), "r" (r2)
+   : "memory", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10");


I may be missing a subtlety, but it looks like we no longer have a
guarantee that r11 will be caller-saved as it was previously. I don't
know the Trusted Foundations ABI to say whether that matters or not,
but if it is the case that it never needed preserving anyway, that
might be worth calling out in the commit message.


Adding r11 (fp) to the clobber list causes an error when using gcc and
CONFIG_FRAME_POINTER=y:
arch/arm/firmware/trusted_foundations.c: In function ‘tf_generic_smc’:
arch/arm/firmware/trusted_foundations.c:51:1: error: fp cannot be used
in asm here

Not sure what ABI Trusted Foundations follow.

[adding Stephen, Thierry and Dmitry]
Maybe someone more familiar with NVIDIA Tegra SoCs can help?

When CONFIG_FRAME_POINTER=y fp gets saved anyway. So we could add r11 to
clobber list ifndef CONFIG_FRAME_POINTER...


I have no idea about TF ABI either. Looking at the downstream kernel code, r4 -
r12 should be saved. I've CC'd Alexandre as he is the author of the original
patch and may still remember the details.

I'm also wondering why the original code doesn't have r3 in the clobber list and
why r3 is set to '0'; downstream sets it to the address of SP, and on return from
SMC r3 contains the address of SP, which should be restored. I'm now wondering
how SMC calling worked for me at all on T30; maybe it didn't...


I don't know what the ABI for ATF is. I assume it's documented in the 
ATF, PSCI, or similar specification, or ATF source code. Hence, I don't 
know whether ATF restores fp/r11.


My guess is that r3/r4 are set to 0 because they're defined as inputs by 
the SMC/ATF ABI, yet nothing the kernel does needed that many 
parameters, so they're hard-coded to 0 (to ensure they're set to 
something predictable) rather than also being parameters to 
tf_generic_smc().


The original code used to save/restore a lot of registers, including 
r11/fp. Can't we side-step the issue of including/not-including r11/fp 
in the clobber list by not removing those stmfd/ldmfd assembly instructions?
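
For illustration, a rough, untested sketch of what I mean; whether the Trusted
Foundations monitor actually needs r11 preserved is still the open question
above:

/*
 * Keep the explicit stmfd/ldmfd so r4-r11 (and lr) are saved and restored
 * around the SMC even though the function is no longer naked. Registers
 * that the asm saves and restores itself don't need to appear in the
 * clobber list, which sidesteps the "fp cannot be used in asm here" error
 * with CONFIG_FRAME_POINTER=y. __asmeq comes from the headers this file
 * already uses.
 */
static void tf_generic_smc(u32 type, u32 arg1, u32 arg2)
{
	register u32 r0 asm("r0") = type;
	register u32 r1 asm("r1") = arg1;
	register u32 r2 asm("r2") = arg2;

	asm volatile(
		".arch_extension	sec\n\t"
		"stmfd	sp!, {r4 - r11, lr}\n\t"
		__asmeq("%0", "r0")
		__asmeq("%1", "r1")
		__asmeq("%2", "r2")
		"mov	r3, #0\n\t"
		"mov	r4, #0\n\t"
		"smc	#0\n\t"
		"ldmfd	sp!, {r4 - r11, lr}"
		:
		: "r" (r0), "r" (r1), "r" (r2)
		: "memory", "r3", "r12");
}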


Re: [PATCH] clk: tegra: fix pllu rate configuration

2018-02-22 Thread Stephen Warren

On 02/22/2018 04:04 PM, Marcel Ziswiler wrote:

Turns out latest upstream U-Boot does not configure/enable pllu which
leaves it at some default rate of 500 kHz:


I assume this is only because U-Boot just happened not to access any USB 
devices. In other words, if you break into the U-Boot boot flow and 
explicitly initialize/access USB, then USB works both in U-Boot and in 
the kernel even without this patch? If that's not the case, it seems 
there's a bug in U-Boot.


Either way, I have no issue with this patch. I just want to make sure 
there weren't other bugs in U-Boot that needed fixing.


Re: [PATCH] ARM: dtc: tegra: enable front panel leds in TrimSlice

2018-02-21 Thread Stephen Warren

On 02/21/2018 09:20 AM, Tomasz Maciej Nowak wrote:

W dniu 20.02.2018 o 20:55, Stephen Warren pisze:

On 02/19/2018 01:16 PM, Tomasz Maciej Nowak wrote:

Adds device nodes for two front panel LEDs.


Why do you need to change the pinmux settings? Configuring a pin as a 
GPIO should override any pinmux special function selection and hence 
make it irrelevant, so I don't think you should need to change the 
pinmux.


At first I did exactly that without changing the pinmux, but the LEDs 
didn't light up. After that I compared with CompuLab source tree [1]. 
The pinmux was specified as in my patch. With this change the LEDs are 
fully functional.


1. 
https://gitorious.org/trimslice-kernel/trimslice-kernel?p=trimslice-kernel:trimslice-kernel.git;a=blob;f=arch/arm/mach-tegra/board-trimslice-pinmux.c;h=cc6d5225d97eb9327c820bf1d5b2bc16ab8c6dda;hb=d25bf45d6314089489b30d218ed8a0d6d94417f9#l45 


Oh I see. Your patch isn't changing the pinmux function selection but 
the other configuration bits, which are relevant even when the pin is in 
GPIO mode. In particular, it clears the tristate bit for the dte 
pingroup. That makes perfect sense.


Re: [PATCH] ARM: dtc: tegra: enable front panel leds in TrimSlice

2018-02-20 Thread Stephen Warren

On 02/19/2018 01:16 PM, Tomasz Maciej Nowak wrote:

Adds device nodes for two front panel LEDs.


Why do you need to change the pinmux settings? Configuring a pin as a 
GPIO should override any pinmux special function selection and hence 
make it irrelevant, so I don't think you should need to change the pinmux.


Re: [PATCH v1 3/5] dt-bindings: Add DT bindings for NVIDIA Tegra AHB DMA controller

2017-10-03 Thread Stephen Warren

On 10/03/2017 04:32 AM, Jon Hunter wrote:



On 03/10/17 00:02, Dmitry Osipenko wrote:

On 02.10.2017 20:05, Stephen Warren wrote:

On 09/29/2017 09:11 PM, Dmitry Osipenko wrote:

On 29.09.2017 22:30, Stephen Warren wrote:

On 09/27/2017 02:34 AM, Jon Hunter wrote:


On 27/09/17 02:57, Dmitry Osipenko wrote:

On 26.09.2017 17:50, Jon Hunter wrote:


On 26/09/17 00:22, Dmitry Osipenko wrote:

Document DT bindings for the NVIDIA Tegra AHB DMA controller that is present
on Tegra20/30 SoCs.

Signed-off-by: Dmitry Osipenko <dig...@gmail.com>
---
    .../bindings/dma/nvidia,tegra20-ahbdma.txt | 23
++
    1 file changed, 23 insertions(+)
    create mode 100644
Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt

diff --git
a/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
new file mode 100644
index ..2af9aa76ae11
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
@@ -0,0 +1,23 @@
+* NVIDIA Tegra AHB DMA controller
+
+Required properties:
+- compatible:    Must be "nvidia,tegra20-ahbdma"
+- reg:    Should contain registers base address and length.
+- interrupts:    Should contain one entry, DMA controller interrupt.
+- clocks:    Should contain one entry, DMA controller clock.
+- resets :    Should contain one entry, DMA controller reset.
+- #dma-cells:    Should be <1>. The cell represents DMA request select
value
+    for the peripheral. For more details consult the Tegra TRM's
+    documentation, in particular AHB DMA channel control register
+    REQ_SEL field.


What about the TRIG_SEL field? Do we need to handle this here as well?



Actually, the DMA transfer trigger isn't related to the hardware description.
It's up to software to decide which trigger to select. So it shouldn't be in
the binding.


I think it could be, if say a board wanted a GPIO to trigger a transfer.


And I think the same applies to requester... any objections?


Well, the REQ_SEL should definitely be in the binding.

Laxman, Stephen, what are your thoughts on the TRIG_SEL field? Looks
like we never bothered with it for the APB DMA, so maybe no one uses
this.


I don't think TRIG_SEL should be in the binding, at least at present. While
TRIG_SEL certainly is something used to configure the transfer, I believe the
semantics of the current DMA binding only cover DMA transfers that are initiated
when SW desires, rather than being a combination of after SW programs the
transfer plus some other HW event. So, we always use a default/hard-coded
TRIG_SEL value. As such, there's no need for a TRIG_SEL value in DT. There's
certainly no known use-case that requires a non-default TRIG_SEL value at
present. We could add an extra #dma-cells value later if we find a use for it,
and the semantics of that use-case make sense to add it to the DMA specifier,
rather than some other separate higher-level property/driver/...


Thank you for the comment. If we wanted to extend the binding further with the
trigger, how would we differentiate the trigger from the requester in the case
of a single #dma-cells value?

Of course, realistically the chance that such a further extension will be
needed is very low, so we could defer solving that question and for now just
make the driver aware of the potential #dma-cells extension.


The request selector cell isn't optional, so is always present. If we later add
an optional trig_sel cell, we'll either have:

#dma-cells=<1>: req_sel

or:

#dma-cells=<2>: req_sel, trig_sel


Why couldn't the request sel. be optional? Could you please elaborate a bit more?


The documentation currently says it's mandatory, and DT bindings must be 
evolved in a backwards-compatible fashion.



I think possible options are:

#dma-cells=<1>: req_sel
#dma-cells=<1>: trig_sel


With the above, how would you know that it is the req_sel or trig_sel
that is specified?


Also, if req_sel were optional, then it'd be impossible to distinguish 
between those cases, so we can't design a binding like that. In general, 
when adding extra optional cells to an #xxx-cells style binding, then 
whenever cell N is present, all cells before cell N must be present even 
if optional.
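
If it helps, here's a rough sketch of how the driver's xlate callback could be
written now so that the optional second cell can be added later without
breaking old DTs; the tegra_ahbdma_* names are illustrative, not the actual
driver:

#include <linux/dmaengine.h>
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/printk.h>

static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,
					      struct of_dma *ofdma)
{
	u32 req_sel, trig_sel;

	/* Cell 0 (req_sel) is mandatory per the binding. */
	if (dma_spec->args_count < 1)
		return NULL;

	req_sel = dma_spec->args[0];

	/*
	 * Cell 1 (trig_sel) is the possible future extension; default to
	 * "no trigger, SW-initiated" while #dma-cells is still <1>.
	 */
	trig_sel = dma_spec->args_count > 1 ? dma_spec->args[1] : 0;

	pr_debug("req_sel=%u trig_sel=%u\n", req_sel, trig_sel);

	/* Channel lookup and REQ_SEL/TRIG_SEL programming elided. */
	return NULL;
}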


Re: [PATCH v1 3/5] dt-bindings: Add DT bindings for NVIDIA Tegra AHB DMA controller

2017-10-02 Thread Stephen Warren

On 09/29/2017 09:11 PM, Dmitry Osipenko wrote:

On 29.09.2017 22:30, Stephen Warren wrote:

On 09/27/2017 02:34 AM, Jon Hunter wrote:


On 27/09/17 02:57, Dmitry Osipenko wrote:

On 26.09.2017 17:50, Jon Hunter wrote:


On 26/09/17 00:22, Dmitry Osipenko wrote:

Document DT bindings for the NVIDIA Tegra AHB DMA controller that is present
on Tegra20/30 SoCs.

Signed-off-by: Dmitry Osipenko <dig...@gmail.com>
---
   .../bindings/dma/nvidia,tegra20-ahbdma.txt | 23
++
   1 file changed, 23 insertions(+)
   create mode 100644
Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt

diff --git
a/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
new file mode 100644
index ..2af9aa76ae11
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
@@ -0,0 +1,23 @@
+* NVIDIA Tegra AHB DMA controller
+
+Required properties:
+- compatible:    Must be "nvidia,tegra20-ahbdma"
+- reg:    Should contain registers base address and length.
+- interrupts:    Should contain one entry, DMA controller interrupt.
+- clocks:    Should contain one entry, DMA controller clock.
+- resets :    Should contain one entry, DMA controller reset.
+- #dma-cells:    Should be <1>. The cell represents DMA request select value
+    for the peripheral. For more details consult the Tegra TRM's
+    documentation, in particular AHB DMA channel control register
+    REQ_SEL field.


What about the TRIG_SEL field? Do we need to handle this here as well?



Actually, the DMA transfer trigger isn't related to the hardware description. It's
up to software to decide which trigger to select. So it shouldn't be in the binding.


I think it could be, if say a board wanted a GPIO to trigger a transfer.


And I think the same applies to requester... any objections?


Well, the REQ_SEL should definitely be in the binding.

Laxman, Stephen, what are your thoughts on the TRIG_SEL field? Looks
like we never bothered with it for the APB DMA, so maybe no one uses
this.


I don't think TRIG_SEL should be in the binding, at least at present. While
TRIG_SEL certainly is something used to configure the transfer, I believe the
semantics of the current DMA binding only cover DMA transfers that are initiated
when SW desires, rather than being a combination of after SW programs the
transfer plus some other HW event. So, we always use a default/hard-coded
TRIG_SEL value. As such, there's no need for a TRIG_SEL value in DT. There's
certainly no known use-case that requires a non-default TRIG_SEL value at
present. We could add an extra #dma-cells value later if we find a use for it,
and the semantics of that use-case make sense to add it to the DMA specifier,
rather than some other separate higher-level property/driver/...


Thank you for the comment. If we wanted to extend the binding further with the
trigger, how would we differentiate the trigger from the requester in the case
of a single #dma-cells value?

Of course, realistically the chance that such a further extension will be
needed is very low, so we could defer solving that question and for now just
make the driver aware of the potential #dma-cells extension.


The request selector cell isn't optional, so is always present. If we 
later add an optional trig_sel cell, we'll either have:


#dma-cells=<1>: req_sel

or:

#dma-cells=<2>: req_sel, trig_sel


Re: [PATCH v1 3/5] dt-bindings: Add DT bindings for NVIDIA Tegra AHB DMA controller

2017-10-02 Thread Stephen Warren

On 09/29/2017 09:11 PM, Dmitry Osipenko wrote:

On 29.09.2017 22:30, Stephen Warren wrote:

On 09/27/2017 02:34 AM, Jon Hunter wrote:


On 27/09/17 02:57, Dmitry Osipenko wrote:

On 26.09.2017 17:50, Jon Hunter wrote:


On 26/09/17 00:22, Dmitry Osipenko wrote:

Document DT bindings for NVIDIA Tegra AHB DMA controller that presents
on Tegra20/30 SoC's.

Signed-off-by: Dmitry Osipenko 
---
   .../bindings/dma/nvidia,tegra20-ahbdma.txt | 23
++
   1 file changed, 23 insertions(+)
   create mode 100644
Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt

diff --git
a/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
new file mode 100644
index ..2af9aa76ae11
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
@@ -0,0 +1,23 @@
+* NVIDIA Tegra AHB DMA controller
+
+Required properties:
+- compatible:    Must be "nvidia,tegra20-ahbdma"
+- reg:    Should contain registers base address and length.
+- interrupts:    Should contain one entry, DMA controller interrupt.
+- clocks:    Should contain one entry, DMA controller clock.
+- resets :    Should contain one entry, DMA controller reset.
+- #dma-cells:    Should be <1>. The cell represents DMA request select value
+    for the peripheral. For more details consult the Tegra TRM's
+    documentation, in particular AHB DMA channel control register
+    REQ_SEL field.


What about the TRIG_SEL field? Do we need to handle this here as well?



Actually, DMA transfer trigger isn't related a hardware description. It's up to
software to decide what trigger to select. So it shouldn't be in the binding.


I think it could be, if say a board wanted a GPIO to trigger a transfer.


And I think the same applies to requester... any objections?


Well, the REQ_SEL should definitely be in the binding.

Laxman, Stephen, what are your thoughts on the TRIG_SEL field? Looks
like we never bothered with it for the APB DMA and so maybe no ones uses
this.


I don't think TRIG_SEL should be in the binding, at least at present. While
TRIG_SEL certainly is something used to configure the transfer, I believe the
semantics of the current DMA binding only cover DMA transfers that are initiated
when SW desires, rather than being a combination of after SW programs the
transfer plus some other HW event. So, we always use a default/hard-coded
TRIG_SEL value. As such, there's no need for a TRIG_SEL value in DT. There's
certainly no known use-case that requires a non-default TRIG_SEL value at
present. We could add an extra #dma-cells value later if we find a use for it,
and the semantics of that use-case make sense to add it to the DMA specifier,
rather than some other separate higher-level property/driver/...


Thank you for the comment. If we wanted to extend the binding further with the
trigger, how would we differentiate the trigger from the requester in the case of
a single cell (#dma-cells = <1>)?

Of course, realistically the chance that this further extension will be needed is
very low, so we may defer solving that question and for now just make the driver
aware of the potential #dma-cells extension.


The request selector cell isn't optional, so is always present. If we 
later add an optional trig_sel cell, we'll either have:


#dma-cells=<1>: req_sel

or:

#dma-cells=<2>: req_sel, trig_sel
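
As a concrete illustration of that last point, here is a hedged sketch of an of_dma
translation callback that would tolerate such an extension. Only
of_dma_controller_register(), struct of_phandle_args and struct of_dma are real
kernel APIs; tegra_ahbdma_of_xlate(), tegra_ahbdma_request_chan(), struct
tegra_ahbdma and TEGRA_AHBDMA_TRIG_DEFAULT are made-up names for the example.

/*
 * Hedged sketch, not code from the posted series: an of_dma xlate callback
 * that takes req_sel from the first cell and, if a second cell is ever added
 * to the binding, treats it as trig_sel. Needs <linux/of_dma.h>.
 */
static struct dma_chan *tegra_ahbdma_of_xlate(struct of_phandle_args *dma_spec,
					      struct of_dma *ofdma)
{
	struct tegra_ahbdma *tdma = ofdma->of_dma_data;	/* assumed driver data */
	u32 req_sel = dma_spec->args[0];
	u32 trig_sel = TEGRA_AHBDMA_TRIG_DEFAULT;	/* assumed default trigger */

	if (dma_spec->args_count > 1)			/* optional future trig_sel cell */
		trig_sel = dma_spec->args[1];

	return tegra_ahbdma_request_chan(tdma, req_sel, trig_sel); /* assumed helper */
}

/* Registered from probe(), roughly:
 *	of_dma_controller_register(pdev->dev.of_node, tegra_ahbdma_of_xlate, tdma);
 */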


Re: [PATCH v1 3/5] dt-bindings: Add DT bindings for NVIDIA Tegra AHB DMA controller

2017-09-29 Thread Stephen Warren

On 09/27/2017 02:34 AM, Jon Hunter wrote:


On 27/09/17 02:57, Dmitry Osipenko wrote:

On 26.09.2017 17:50, Jon Hunter wrote:


On 26/09/17 00:22, Dmitry Osipenko wrote:

Document DT bindings for the NVIDIA Tegra AHB DMA controller that is present
on Tegra20/30 SoCs.

Signed-off-by: Dmitry Osipenko 
---
  .../bindings/dma/nvidia,tegra20-ahbdma.txt | 23 ++
  1 file changed, 23 insertions(+)
  create mode 100644 
Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt

diff --git a/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt 
b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
new file mode 100644
index ..2af9aa76ae11
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra20-ahbdma.txt
@@ -0,0 +1,23 @@
+* NVIDIA Tegra AHB DMA controller
+
+Required properties:
+- compatible:  Must be "nvidia,tegra20-ahbdma"
+- reg: Should contain registers base address and length.
+- interrupts:  Should contain one entry, DMA controller interrupt.
+- clocks:  Should contain one entry, DMA controller clock.
+- resets : Should contain one entry, DMA controller reset.
+- #dma-cells:  Should be <1>. The cell represents DMA request select value
+   for the peripheral. For more details consult the Tegra TRM's
+   documentation, in particular AHB DMA channel control register
+   REQ_SEL field.


What about the TRIG_SEL field? Do we need to handle this here as well?



Actually, the DMA transfer trigger isn't related to the hardware description. It's
up to software to decide which trigger to select, so it shouldn't be in the binding.


I think it could be, if say a board wanted a GPIO to trigger a transfer.


And I think the same applies to requester... any objections?


Well, the REQ_SEL should definitely be in the binding.

Laxman, Stephen, what are your thoughts on the TRIG_SEL field? Looks
like we never bothered with it for the APB DMA, and so maybe no one uses
this.


I don't think TRIG_SEL should be in the binding, at least at present. 
While TRIG_SEL certainly is something used to configure the transfer, I 
believe the semantics of the current DMA binding only cover DMA 
transfers that are initiated when SW desires, rather than being a 
combination of after SW programs the transfer plus some other HW event. 
So, we always use a default/hard-coded TRIG_SEL value. As such, there's 
no need for a TRIG_SEL value in DT. There's certainly no known use-case 
that requires a non-default TRIG_SEL value at present. We could add an 
extra #dma-cells value later if we find a use for it, and the semantics 
of that use-case make sense to add it to the DMA specifier, rather than 
some other separate higher-level property/driver/...


Re: [PATCH v1 1/2] staging: Introduce NVIDIA Tegra20 video decoder driver

2017-09-25 Thread Stephen Warren
On 09/25/2017 05:45 PM, Dmitry Osipenko wrote:
> On 26.09.2017 02:01, Stephen Warren wrote:
>> On 09/25/2017 04:15 PM, Dmitry Osipenko wrote:
>>> Video decoder, found on NVIDIA Tegra20 SoC, supports a standard set of
>>> video formats like H.264 / MPEG-4 / WMV / VC1. Currently the driver supports
>>> decoding of CAVLC H.264 only.
>>
>> Note: I don't know anything much about video decoding on Tegra (just NV 
>> desktop
>> GPUs, and that was a while ago), but I had a couple small comments on the DT
>> binding:
>>
>>> diff --git
>>> a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-vde.txt
>>> b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-vde.txt
>>
>>> +NVIDIA Tegra Video Decoder Engine
>>> +
>>> +Required properties:
>>> +- compatible : "nvidia,tegra20-vde"
>>> +- reg : Must contain 2 register ranges: registers and IRAM area.
>>> +- reg-names : Must include the following entries:
>>> +  - regs
>>> +  - iram
>>
>> I think the IRAM region needs more explanation: What is the region used for 
>> and
>> by what? Can it be moved, and if so does the move need to be co-ordinated 
>> with
>> any other piece of SW?
> 
> The IRAM region is used by the Video Decoder HW for internal purposes, and some
> decoding parameters, such as the frame order list, are supplied via IRAM. AFAIK
> the IRAM addresses are hardwired in HW and aren't movable; I'm not 100% certain,
> but pretty sure. Should this be explained in the binding?

I think this should be briefly mentioned, yes. Otherwise at least people
who don't know the VDE HW well (like me) will wonder why on earth VDE
interacts with IRAM at all. I would have assumed all parameters were
supplied via registers or via descriptors in DRAM.

Thanks.


Re: [PATCH v1 1/2] staging: Introduce NVIDIA Tegra20 video decoder driver

2017-09-25 Thread Stephen Warren

On 09/25/2017 04:15 PM, Dmitry Osipenko wrote:

Video decoder, found on NVIDIA Tegra20 SoC, supports a standard set of
video formats like H.264 / MPEG-4 / WMV / VC1. Currently the driver supports
decoding of CAVLC H.264 only.


Note: I don't know anything much about video decoding on Tegra (just NV 
desktop GPUs, and that was a while ago), but I had a couple small 
comments on the DT binding:



diff --git a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-vde.txt 
b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-vde.txt



+NVIDIA Tegra Video Decoder Engine
+
+Required properties:
+- compatible : "nvidia,tegra20-vde"
+- reg : Must contain 2 register ranges: registers and IRAM area.
+- reg-names : Must include the following entries:
+  - regs
+  - iram


I think the IRAM region needs more explanation: What is the region used 
for and by what? Can it be moved, and if so does the move need to be 
co-ordinated with any other piece of SW?



+- clocks : Must contain one entry, for the module clock.
+  See ../clocks/clock-bindings.txt for details.
+- resets : Must contain an entry for each entry in reset-names.
+  See ../reset/reset.txt for details.
+- reset-names : Must include the following entries:
+  - vde


Let's require a clock-names property too.
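
As a rough illustration of what the two named reg ranges and a named clock buy a
driver, here is a hedged probe-time sketch; tegra_vde_probe_resources() is an
invented name and the "vde" clock-names entry is only a guess, but
platform_get_resource_byname(), devm_ioremap_resource() and devm_clk_get() are
the standard kernel helpers involved.

/*
 * Hedged sketch, not the posted driver: how the named "regs"/"iram" ranges
 * and a clock-names entry would typically be consumed. The "vde" clock name
 * is a guess. Needs <linux/platform_device.h>, <linux/clk.h>, <linux/io.h>.
 */
static int tegra_vde_probe_resources(struct platform_device *pdev)
{
	struct resource *regs, *iram;
	void __iomem *base;
	struct clk *clk;

	regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
	iram = platform_get_resource_byname(pdev, IORESOURCE_MEM, "iram");
	if (!regs || !iram)
		return -ENODEV;

	base = devm_ioremap_resource(&pdev->dev, regs);	/* register window */
	if (IS_ERR(base))
		return PTR_ERR(base);

	clk = devm_clk_get(&pdev->dev, "vde");		/* what clock-names buys us */
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	return 0;
}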


[PATCH] usb: gadget: serial: fix oops when data rx'd after close

2017-08-16 Thread Stephen Warren
From: Stephen Warren <swar...@nvidia.com>

When the gadget serial device has no associated TTY, do not pass any
received data into the TTY layer for processing; simply drop it instead.
This prevents the TTY layer from calling back into the gadget serial
driver, which will then crash in e.g. gs_write_room() due to lack of
gadget serial device to TTY association (i.e. a NULL pointer dereference).

Signed-off-by: Stephen Warren <swar...@nvidia.com>
---
 drivers/usb/gadget/function/u_serial.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/usb/gadget/function/u_serial.c 
b/drivers/usb/gadget/function/u_serial.c
index 9b0805f55ad7..16bb24a047d9 100644
--- a/drivers/usb/gadget/function/u_serial.c
+++ b/drivers/usb/gadget/function/u_serial.c
@@ -537,7 +537,7 @@ static void gs_rx_push(unsigned long _port)
}
 
/* push data to (open) tty */
-   if (req->actual) {
+   if (req->actual && tty) {
        char        *packet = req->buf;
        unsigned    size = req->actual;
        unsigned    n;
-- 
2.14.1



Re: [PATCH 2/2] i2c: mux: pinctrl: drop the idle_state member

2017-08-03 Thread Stephen Warren

On 08/02/2017 11:19 PM, Peter Rosin wrote:

On 2017-08-03 00:50, Stephen Warren wrote:

On 08/02/2017 03:25 PM, Peter Rosin wrote:

(and muxc->max_adapters == num_names)


Well, unless muxc->deselect is true...


No, deselect affects neither max_adapters nor num_names. They
are always equal.


Ah yes, I was confusing max_adapters with num_adapters.


Re: [PATCH 2/2] i2c: mux: pinctrl: drop the idle_state member

2017-08-02 Thread Stephen Warren

On 08/02/2017 03:25 PM, Peter Rosin wrote:

On 2017-08-02 21:06, Stephen Warren wrote:

On 08/02/2017 01:27 AM, Peter Rosin wrote:

The information is available elsewhere.



diff --git a/drivers/i2c/muxes/i2c-mux-pinctrl.c 
b/drivers/i2c/muxes/i2c-mux-pinctrl.c



   static int i2c_mux_pinctrl_deselect(struct i2c_mux_core *muxc, u32 chan)
   {
+   return i2c_mux_pinctrl_select(muxc, muxc->num_adapters);
   }



@@ -166,7 +162,7 @@ static int i2c_mux_pinctrl_probe(struct platform_device 
*pdev)



/* Do not add any adapter for the idle state (if it's there at all). */
-   for (i = 0; i < num_names - !!mux->state_idle; i++) {
+   for (i = 0; i < num_names - !!muxc->deselect; i++) {


I think that "num_names - !!muxc->deselect" could just be
muxc->num_adapters?


Not really, it's the i2c_mux_add_adapter call in the loop that bumps
muxc->num_adapters, so the loop would not be entered. Not desirable :-)


Ok, that makes sense.


(and muxc->max_adapters == num_names)


Well, unless muxc->deselect is true...
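
A compressed, hedged sketch of the loop in question (not the driver verbatim)
shows why the bound has to come from num_names rather than from
muxc->num_adapters:

/*
 * Hedged sketch, not the driver verbatim: muxc->num_adapters starts at 0 and
 * is bumped inside i2c_mux_add_adapter(), so it cannot serve as the bound of
 * the very loop that populates it.
 */
for (i = 0; i < num_names - !!muxc->deselect; i++) {
	ret = i2c_mux_add_adapter(muxc, 0, i, 0);	/* increments muxc->num_adapters */
	if (ret)
		break;
}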


Re: [PATCH 1/2] i2c: mux: pinctrl: remove platform_data

2017-08-02 Thread Stephen Warren

On 08/02/2017 01:27 AM, Peter Rosin wrote:

No platform (at least no upstreamed platform) has ever used this
platform_data. Just drop it and simplify the code.



diff --git a/drivers/i2c/muxes/i2c-mux-pinctrl.c 
b/drivers/i2c/muxes/i2c-mux-pinctrl.c



  static int i2c_mux_pinctrl_probe(struct platform_device *pdev)


(eliding some - lines for brevity in the following):


+   for (i = 0; i < num_names; i++) {
+   ret = of_property_read_string_index(np, "pinctrl-names", i,
+   &name);
+   if (ret < 0) {
+   dev_err(dev, "Cannot parse pinctrl-names: %d\n", ret);
+   goto err_put_parent;
+   }
+
+   mux->states[i] = pinctrl_lookup_state(mux->pinctrl, name);
if (IS_ERR(mux->states[i])) {
ret = PTR_ERR(mux->states[i]);
+   dev_err(dev, "Cannot look up pinctrl state %s: %d\n",
+   name, ret);
+   goto err_put_parent;


This error path doesn't undo pinctrl_lookup_state. Is that OK? I think 
so, but wanted to check.



+   muxc = i2c_mux_alloc(parent, dev, num_names,
+sizeof(*mux) + num_names * sizeof(*mux->states),
+0, i2c_mux_pinctrl_select, NULL);

...

+   /* Do not add any adapter for the idle state (if it's there at all). */
+   for (i = 0; i < num_names - !!mux->state_idle; i++) {
+   ret = i2c_mux_add_adapter(muxc, 0, i, 0);


Is it OK to potentially add one fewer adapter here than the child bus 
count passed to i2c_mux_alloc() earlier? The old code specifically 
excluded the idle state (if present) from the child bus count passed to 
i2c_mux_alloc(), which was aided by the fact that it parsed the DT 
before calling i2c_mux_alloc().


If those two things are OK, then:
Reviewed-by: Stephen Warren <swar...@nvidia.com>


Re: [PATCH 2/2] i2c: mux: pinctrl: drop the idle_state member

2017-08-02 Thread Stephen Warren

On 08/02/2017 01:27 AM, Peter Rosin wrote:

The information is available elsewhere.



diff --git a/drivers/i2c/muxes/i2c-mux-pinctrl.c 
b/drivers/i2c/muxes/i2c-mux-pinctrl.c



  static int i2c_mux_pinctrl_deselect(struct i2c_mux_core *muxc, u32 chan)
  {
+   return i2c_mux_pinctrl_select(muxc, muxc->num_adapters);
  }



@@ -166,7 +162,7 @@ static int i2c_mux_pinctrl_probe(struct platform_device 
*pdev)



/* Do not add any adapter for the idle state (if it's there at all). */
-   for (i = 0; i < num_names - !!mux->state_idle; i++) {
+   for (i = 0; i < num_names - !!muxc->deselect; i++) {


I think that "num_names - !!muxc->deselect" could just be 
muxc->num_adapters? Otherwise,


Reviewed-by: Stephen Warren <swar...@nvidia.com>


Re: CLK_OF_DECLARE advice required

2017-05-31 Thread Stephen Warren

On 05/31/2017 10:28 AM, Phil Elwell wrote:

On 31/05/2017 16:58, Stefan Wahren wrote:

Am 31.05.2017 um 17:27 schrieb Stephen Warren:

On 05/30/2017 06:23 AM, Phil Elwell wrote:

Hi,

I've run into a problem using the fixed-factor clock on Raspberry Pi
and I'd
like some advice before I submit a patch.

Some context: the aim is to use a standard UART and some external
circuitry
as a MIDI interface. This would be straightforward except that Linux
doesn't
recognise the required 31.25 kHz as a valid UART baud rate. The
workaround is
to declare the UART clock such that the reported rate differs from
the actual
rate. If one sets the reported rate to be (actual*38400)/31250 then
requesting a 38400 baud rate will result in an actual 31250 baud signal.


This sounds like the wrong approach. Forcing the port to use a
different clock rate than the user requests would prevent anyone from
using that port for its standard purpose; it'd turn what should be a
runtime decision into a compile-time decision.

Are you sure there's no way to simply select the correct baud rate on
the port? I see plenty of references to configuring custom baud rates
under Linux when I search Google, e.g.:


https://stackoverflow.com/questions/12646324/how-to-set-a-custom-baud-rate-on-linux


"How to set a custom baud rate on Linux?"


https://sourceware.org/ml/libc-help/2009-06/msg00016.html

"Re: Terminal interface and non-standard baudrates"



I remember this gist from Peter Hurley:

https://gist.github.com/peterhurley/fbace59b55d87306a5b8


Thank you, Stephen and Stefan. Stephen - the clock scaling is applied by a DT
overlay, so it is effectively a runtime setting, but I take your point about the
elegance of the solution.
Stefan - anybaud looks promising, although I would have preferred for users to 
continue to
use the existing user-space tools - kernel changes can be deployed more easily.

For my edification, can you pretend for a moment that the application was a 
valid one and
answer any of my original questions?:

1. Should all system clock drivers use OF_CLK_DECLARE? Doing so would probably
avoid this problem, but further initialisation order dependencies may
require more drivers to be initialised early.

2. Why does the clock initialisation hook registered by OF_CLK_DECLARE not
return any indication of success? If it did, and if the OF_POPULATED flag
was only set after successful initialisation then the normal retrying of
a deferred probe would also avoid this problem.

3. Would adding the OF_CLK_DECLARE hook prevent the use of the devm_ managed
functions like devm_kzalloc? If not, why doesn't fixed-factor-clock use it?


Sorry, I don't know the answers to these questions; I expect the clock 
subsystem maintainers will have to chime in. My only general comment is 
that probe deferral is the typical mechanism to handle 
driver/device/object dependencies, but I have no idea how that would 
interact with static initialization hooks like OF_CLK_DECLARE.
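
For context on question 2: the hook registered via CLK_OF_DECLARE (declared in
include/linux/clk-provider.h) takes a struct device_node pointer and returns
void, so there is indeed nothing for of_clk_init() to act on. A minimal, hedged
sketch with made-up "acme" names:

/*
 * Minimal, hedged sketch; "acme" names are invented. The init hook's void
 * return is the point: registration failures cannot be reported back.
 */
#include <linux/clk-provider.h>
#include <linux/init.h>
#include <linux/of.h>

static void __init acme_fixed_clk_init(struct device_node *np)
{
	struct clk *clk;

	clk = clk_register_fixed_rate(NULL, np->name, NULL, 0, 100000000);
	if (IS_ERR(clk))
		return;		/* nothing to tell of_clk_init() */

	of_clk_add_provider(np, of_clk_src_simple_get, clk);
}
CLK_OF_DECLARE(acme_fixed_clk, "acme,fixed-clk", acme_fixed_clk_init);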


Re: CLK_OF_DECLARE advice required

2017-05-31 Thread Stephen Warren

On 05/30/2017 06:23 AM, Phil Elwell wrote:

Hi,

I've run into a problem using the fixed-factor clock on Raspberry Pi and I'd
like some advice before I submit a patch.

Some context: the aim is to use a standard UART and some external circuitry
as a MIDI interface. This would be straightforward except that Linux doesn't
recognise the required 31.25 kHz as a valid UART baud rate. The workaround is
to declare the UART clock such that the reported rate differs from the actual
rate. If one sets the reported rate to be (actual*38400)/31250 then
requesting a 38400 baud rate will result in an actual 31250 baud signal.


This sounds like the wrong approach. Forcing the port to use a different 
clock rate than the user requests would prevent anyone from using that 
port for its standard purpose; it'd turn what should be a runtime 
decision into a compile-time decision.


Are you sure there's no way to simply select the correct baud rate on 
the port? I see plenty of references to configuring custom baud rates 
under Linux when I search Google, e.g.:



https://stackoverflow.com/questions/12646324/how-to-set-a-custom-baud-rate-on-linux

"How to set a custom baud rate on Linux?"


https://sourceware.org/ml/libc-help/2009-06/msg00016.html

"Re: Terminal interface and non-standard baudrates"


Re: struct i2c_mux_pinctrl_platform_data for the i2c-mux-pinctrl driver

2017-05-22 Thread Stephen Warren

On 05/22/2017 02:24 AM, Peter Rosin wrote:

Hi Stephen,

I was looking at the i2c-mux-pinctrl driver and noticed that it has
code to feed it platform data from C code. There have never been any
in-kernel users of this interface and I would like to remove it. I
wonder if you added it way back when just because the i2c-mux-gpio
driver have such an interface or if, to your knowledge, any external
platform exists that depends on this?

I.e. I'm talking about removing include/linux/i2c-mux-pinctrl.h and
the code supporting it in the driver, thus only allowing devicetree
as source for the platform data.


I'm not aware of any in- or out-of-tree users of that structure/feature.

I added it because at the time I wrote that driver, it was common place 
to support both DT-based and platform-data-based 
instantiation/configuration of devices, so I did the same. If you're 
removing pdata-based configuration, it would make sense to do a blanket 
removal across the entire I2C subsystem (and other subsystems too) I 
suspect, but that's LinusW's call in the end.


Re: [PATCH 3/3] sound: ASoC: tegra: Select tegra30 i2s and ahub for tegra124 SoC

2017-05-08 Thread Stephen Warren

On 05/07/2017 12:12 PM, Paul Kocialkowski wrote:

Hi,

Le mardi 25 avril 2017 à 10:57 -0600, Stephen Warren a écrit :

On 04/24/2017 12:41 PM, Paul Kocialkowski wrote:

Le lundi 24 avril 2017 à 09:35 -0600, Stephen Warren a écrit :

On 04/24/2017 09:07 AM, Paul Kocialkowski wrote:

Le mercredi 19 avril 2017 à 16:00 -0600, Stephen Warren a écrit :

On 04/18/2017 10:38 AM, Paul Kocialkowski wrote:

Le mardi 18 avril 2017 à 10:15 -0600, Stephen Warren a écrit :

On 04/18/2017 09:11 AM, Paul Kocialkowski wrote:

This selects the tegra30 i2s and ahub controllers for the
tegra124
SoC.
These are needed when building without ARCH_TEGRA_3x_SOC set.
diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index efbe8d4c019e..bcd18d2cf7a7 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -46,7 +46,7 @@ config SND_SOC_TEGRA20_SPDIF

 config SND_SOC_TEGRA30_AHUB
tristate
-   depends on SND_SOC_TEGRA && ARCH_TEGRA_3x_SOC
+   depends on SND_SOC_TEGRA && (ARCH_TEGRA_3x_SOC ||
ARCH_TEGRA_124_SOC)


Is this really a compile-time dependency?


From a quick look at the code, I doubt this is really a build
dependency.


If so, don't we need to add T210 and T186 entries into that ||
condition
too,
since we could be building a kernel with just T210/T186 support
and no
T124
support?


In the spirit of this patch, adding entries for other tegra
platforms
would
make
sense. Would you prefer that we leave out the dependency from
SND_SOC_TEGRA30_*
and only select the right I2S driver to use in each codec driver?

If not, we'd have to list all relevant platforms both in the
I2S/AHUB
drivers
and in each codec's rules (which is not necessarily an issue, but
there's
no
need to have artificial platform dependencies).

What do you think?


I think we should just remove most of these "depends on" since they're
mostly set up to reflect runtime requirements rather than build time
requirements. The only points I'd make are:


I definitely agree we should do that for all the codec Kconfig options.


1)

Everything should "depends on SND_SOC_TEGRA" simply so the options
don't
show up and clutter menuconfig menus unless SND_SOC_TEGRA is enabled.


Agreed.


2)

SND_SOC_TEGRA30_I2S does need the Tegra30 AHUB driver in order to
compile/link, since it directly calls functions in that driver. This
is
already handled by SND_SOC_TEGRA30_I2S doing "select
SND_SOC_TEGRA30_AHUB".


Agreed.


3)

The machine drivers all do e.g. "select SND_SOC_TEGRA30_I2S if
ARCH_TEGRA_3x_SOC". This was an attempt to make the machine drivers
only
pull in the relevant drivers for the SoC(s) being compiled for. I'm
not
sure this still makes sense; this won't work on kernels that only
support T124/T210/T186 since ARCH_TEGRA_3x_SOC isn't enabled then.
Should we just remove all those and make sure the defconfigs are
updated
to make sure the relevant I2S/AHUB/SPDIF/AC97 drivers are explicitly
enabled? Perhaps we should default all the I2S/AHUB/SPDIF/AC97 to y
(which will only apply if SND_SOC_TEGRA is enabled)?


I think it would be easier for everyone to just auto-select the machine
drivers
automatically based on the architecture (so we could have the list of
ARCH_TEGRA_*_SOC here) when SND_SOC_TEGRA is selected.


I don't think selecting the machine drivers is the correct approach,
since then they can't be disabled.

Making certain machine drivers "default y if ARCH_TEGRA_nn_SOC" would
address that,


That's right, my mistake. Let's take that as the solution I'm backing then.


 but still isn't very scalable since we need to go back and
edit the Kconfig every time we define a new SoC, in order to add that
SoC into the default statement.


Well, that's what platform bringup is all about, isn't it? I think it makes
a
lot more sense to have to add a new platform once (and it's not like one
will
forget to look at the sound part when adding a new platform) rather than
requiring users to hand-pick the option.


Personally I'd expect all the Tegra machine drivers to be enabled in the
upstream defconfig files all the time, such that we don't need to make
the Kconfig options default y, nor have users manually turn the relevant
options on, at all.


I think I was previously mistaken about what the machine drivers are, so I got a
good occasion to find out about them and learn something new. Thanks!

Agreed regarding the machine drivers, they should indeed all be enabled in the
defconfig by default.


Not only does this preserve existing configs (including external ones
that
aren't part of the kernel tree), it also clearly maps which machine
driver
to use for which SoC instead of having users do it by hand.


The machine drivers aren't terribly tied to SoCs by design; most of them
would work on pretty much any SoC. They're only tied to SoCs as a
side-effect of a machine driver being tied to a certain CODEC, and
certain CODECS just by chance are only used (so far) on specific boards,
which have specific SoCs.

Re: [PATCH 3/3] sound: ASoC: tegra: Select tegra30 i2s and ahub for tegra124 SoC

2017-04-25 Thread Stephen Warren

On 04/24/2017 12:41 PM, Paul Kocialkowski wrote:

Le lundi 24 avril 2017 à 09:35 -0600, Stephen Warren a écrit :

On 04/24/2017 09:07 AM, Paul Kocialkowski wrote:

Le mercredi 19 avril 2017 à 16:00 -0600, Stephen Warren a écrit :

On 04/18/2017 10:38 AM, Paul Kocialkowski wrote:

Le mardi 18 avril 2017 à 10:15 -0600, Stephen Warren a écrit :

On 04/18/2017 09:11 AM, Paul Kocialkowski wrote:

This selects the tegra30 i2s and ahub controllers for the tegra124
SoC.
These are needed when building without ARCH_TEGRA_3x_SOC set.
diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index efbe8d4c019e..bcd18d2cf7a7 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -46,7 +46,7 @@ config SND_SOC_TEGRA20_SPDIF

 config SND_SOC_TEGRA30_AHUB
tristate
-   depends on SND_SOC_TEGRA && ARCH_TEGRA_3x_SOC
+   depends on SND_SOC_TEGRA && (ARCH_TEGRA_3x_SOC ||
ARCH_TEGRA_124_SOC)


Is this really a compile-time dependency?


From a quick look at the code, I doubt this is really a build
dependency.


If so, don't we need to add T210 and T186 entries into that ||
condition
too,
since we could be building a kernel with just T210/T186 support and no
T124
support?


In the spirit of this patch, adding entries for other tegra platforms
would
make
sense. Would you prefer that we leave out the dependency from
SND_SOC_TEGRA30_*
and only select the right I2S driver to use in each codec driver?

If not, we'd have to list all relevant platforms both in the I2S/AHUB
drivers
and in each codec's rules (which is not necessarily an issue, but
there's
no
need to have artificial platform dependencies).

What do you think?


I think we should just remove most of these "depends on" since they're
mostly set up to reflect runtime requirements rather than build time
requirements. The only points I'd make are:


I definitely agree we should do that for all the codec Kconfig options.


1)

Everything should "depends on SND_SOC_TEGRA" simply so the options don't
show up and clutter menuconfig menus unless SND_SOC_TEGRA is enabled.


Agreed.


2)

SND_SOC_TEGRA30_I2S does need the Tegra30 AHUB driver in order to
compile/link, since it directly calls functions in that driver. This is
already handled by SND_SOC_TEGRA30_I2S doing "select
SND_SOC_TEGRA30_AHUB".


Agreed.


3)

The machine drivers all do e.g. "select SND_SOC_TEGRA30_I2S if
ARCH_TEGRA_3x_SOC". This was an attempt to make the machine drivers only
pull in the relevant drivers for the SoC(s) being compiled for. I'm not
sure this still makes sense; this won't work on kernels that only
support T124/T210/T186 since ARCH_TEGRA_3x_SOC isn't enabled then.
Should we just remove all those and make sure the defconfigs are updated
to make sure the relevant I2S/AHUB/SPDIF/AC97 drivers are explicitly
enabled? Perhaps we should default all the I2S/AHUB/SPDIF/AC97 to y
(which will only apply if SND_SOC_TEGRA is enabled)?


I think it would be easier for everyone to just auto-select the machine
drivers
automatically based on the architecture (so we could have the list of
ARCH_TEGRA_*_SOC here) when SND_SOC_TEGRA is selected.


I don't think selecting the machine drivers is the correct approach,
since then they can't be disabled.

Making certain machine drivers "default y if ARCH_TEGRA_nn_SOC" would
address that,


That's right, my mistake. Let's take that as the solution I'm backing then.


 but still isn't very scalable since we need to go back and
edit the Kconfig every time we define a new SoC, in order to add that
SoC into the default statement.


Well, that's what platform bringup is all about, isn't it? I think it makes a
lot more sense to have to add a new platform once (and it's not like one will
forget to look at the sound part when adding a new platform) rather than
requiring users to hand-pick the option.


Personally I'd expect all the Tegra machine drivers to be enabled in the 
upstream defconfig files all the time, such that we don't need to make 
the Kconfig options default y, nor have users manually turn the relevant 
options on, at all.



Not only does this preserve existing configs (including external ones that
aren't part of the kernel tree), it also clearly maps which machine driver
to use for which SoC instead of having users do it by hand.


The machine drivers aren't terribly tied to SoCs by design; most of them
would work on pretty much any SoC. They're only tied to SoCs as a
side-effect of a machine driver being tied to a certain CODEC, and
certain CODECS just by chance are only used (so far) on specific boards,
which have specific SoCs.


I'm a bit confused: aren't the machine drivers (i2s/ahub/spdif/ac97) tied to
specific hardware blocks that are found in specific SoCs and not in others? I
can see these blocks haven't evolved much across generations, but they're still
either part of a specific SoC or not, aren't they?

The compatible strings in the common SoC dts

Re: [PATCH 3/3] sound: ASoC: tegra: Select tegra30 i2s and ahub for tegra124 SoC

2017-04-24 Thread Stephen Warren

On 04/24/2017 09:07 AM, Paul Kocialkowski wrote:

Le mercredi 19 avril 2017 à 16:00 -0600, Stephen Warren a écrit :

On 04/18/2017 10:38 AM, Paul Kocialkowski wrote:

Le mardi 18 avril 2017 à 10:15 -0600, Stephen Warren a écrit :

On 04/18/2017 09:11 AM, Paul Kocialkowski wrote:

This selects the tegra30 i2s and ahub controllers for the tegra124 SoC.
These are needed when building without ARCH_TEGRA_3x_SOC set.
diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index efbe8d4c019e..bcd18d2cf7a7 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -46,7 +46,7 @@ config SND_SOC_TEGRA20_SPDIF

 config SND_SOC_TEGRA30_AHUB
tristate
-   depends on SND_SOC_TEGRA && ARCH_TEGRA_3x_SOC
+   depends on SND_SOC_TEGRA && (ARCH_TEGRA_3x_SOC ||
ARCH_TEGRA_124_SOC)


Is this really a compile-time dependency?


From a quick look at the code, I doubt this is really a build dependency.


If so, don't we need to add T210 and T186 entries into that || condition
too,
since we could be building a kernel with just T210/T186 support and no
T124
support?


In the spirit of this patch, adding entries for other tegra platforms would
make
sense. Would you prefer that we leave out the dependency from
SND_SOC_TEGRA30_*
and only select the right I2S driver to use in each codec driver?

If not, we'd have to list all relevant platforms both in the I2S/AHUB
drivers
and in each codec's rules (which is not necessarily an issue, but there's
no
need to have artificial platform dependencies).

What do you think?


I think we should just remove most of these "depends on" since they're
mostly set up to reflect runtime requirements rather than build time
requirements. The only points I'd make are:


I definitely agree we should do that for all the codec Kconfig options.


1)

Everything should "depends on SND_SOC_TEGRA" simply so the options don't
show up and clutter menuconfig menus unless SND_SOC_TEGRA is enabled.


Agreed.


2)

SND_SOC_TEGRA30_I2S does need the Tegra30 AHUB driver in order to
compile/link, since it directly calls functions in that driver. This is
already handled by SND_SOC_TEGRA30_I2S doing "select SND_SOC_TEGRA30_AHUB".


Agreed.


3)

The machine drivers all do e.g. "select SND_SOC_TEGRA30_I2S if
ARCH_TEGRA_3x_SOC". This was an attempt to make the machine drivers only
pull in the relevant drivers for the SoC(s) being compiled for. I'm not
sure this still makes sense; this won't work on kernels that only
support T124/T210/T186 since ARCH_TEGRA_3x_SOC isn't enabled then.
Should we just remove all those and make sure the defconfigs are updated
to make sure the relevant I2S/AHUB/SPDIF/AC97 drivers are explicitly
enabled? Perhaps we should default all the I2S/AHUB/SPDIF/AC97 to y
(which will only apply if SND_SOC_TEGRA is enabled)?


I think it would be easier for everyone to just auto-select the machine drivers
automatically based on the architecture (so we could have the list of
ARCH_TEGRA_*_SOC here) when SND_SOC_TEGRA is selected.


I don't think selecting the machine drivers is the correct approach, 
since then they can't be disabled.


Making certain machine drivers "default y if ARCH_TEGRA_nn_SOC" would 
address that, but still isn't very scalable since we need to go back and 
edit the Kconfig every time we define a new SoC, in order to add that 
SoC into the default statement.
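
For illustration, the "default y" variant would be something along these lines
(sketch only; the machine driver and SoC list here are picked arbitrarily just
to show the shape, not taken from an actual proposal):

config SND_SOC_TEGRA_RT5640
	tristate "SoC Audio support for Tegra boards using an RT5640 codec"
	depends on SND_SOC_TEGRA
	default y if ARCH_TEGRA_114_SOC || ARCH_TEGRA_124_SOC
	select SND_SOC_TEGRA30_I2S
	select SND_SOC_RT5640

and that "default y if ..." line is exactly what would need editing again for
every new ARCH_TEGRA_nn_SOC.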



Not only does this preserve existing configs (including external ones that
aren't part of the kernel tree), it also clearly maps which machine driver to
use for which SoC instead of having users do it by hand.


The machine drivers aren't terribly tied to SoCs by design; most of them 
would work on pretty much any SoC. They're only tied to SoCs as a 
side-effect of a machine driver being tied to a certain CODEC, and 
certain CODECS just by chance are only used (so far) on specific boards, 
which have specific SoCs.



I'm also opposed to auto-selecting them all, because I don't really like the
idea of auto-including things that might not be needed.

Would that be agreeable?




Re: [PATCH 3/3] sound: ASoC: tegra: Select tegra30 i2s and ahub for tegra124 SoC

2017-04-19 Thread Stephen Warren

On 04/18/2017 10:38 AM, Paul Kocialkowski wrote:

On Tuesday, 18 April 2017 at 10:15 -0600, Stephen Warren wrote:

On 04/18/2017 09:11 AM, Paul Kocialkowski wrote:

This selects the tegra30 i2s and ahub controllers for the tegra124 SoC.
These are needed when building without ARCH_TEGRA_3x_SOC set.
diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index efbe8d4c019e..bcd18d2cf7a7 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -46,7 +46,7 @@ config SND_SOC_TEGRA20_SPDIF

 config SND_SOC_TEGRA30_AHUB
tristate
-   depends on SND_SOC_TEGRA && ARCH_TEGRA_3x_SOC
+   depends on SND_SOC_TEGRA && (ARCH_TEGRA_3x_SOC ||
ARCH_TEGRA_124_SOC)


Is this really a compile-time dependency?


From a quick look at the code, I doubt this is really a build dependency.


If so, don't we need to add T210 and T186 entries into that || condition too,
since we could be building a kernel with just T210/T186 support and no T124
support?


In the spirit of this patch, adding entries for other tegra platforms would make
sense. Would you prefer that we leave out the dependency from SND_SOC_TEGRA30_*
and only select the right I2S driver to use in each codec driver?

If not, we'd have to list all relevant platforms both in the I2S/AHUB drivers
and in each codec's rules (which is not necessarily an issue, but there's no
need to have artificial platform dependencies).

What do you think?


I think we should just remove most of these "depends on" since they're 
mostly set up to reflect runtime requirements rather than build time 
requirements. The only points I'd make are:


1)

Everything should "depends on SND_SOC_TEGRA" simply so the options don't 
show up and clutter menuconfig menus unless SND_SOC_TEGRA is enabled.


2)

SND_SOC_TEGRA30_I2S does need the Tegra30 AHUB driver in order to 
compile/link, since it directly calls functions in that driver. This is 
already handled by SND_SOC_TEGRA30_I2S doing "select SND_SOC_TEGRA30_AHUB".


3)

The machine drivers all do e.g. "select SND_SOC_TEGRA30_I2S if 
ARCH_TEGRA_3x_SOC". This was an attempt to make the machine drivers only 
pull in the relevant drivers for the SoC(s) being compiled for. I'm not 
sure this still makes sense; this won't work on kernels that only 
support T124/T210/T186 since ARCH_TEGRA_3x_SOC isn't enabled then. 
Should we just remove all those and make sure the defconfigs are updated 
to make sure the relevant I2S/AHUB/SPDIF/AC97 drivers are explicitly 
enabled? Perhaps we should default all the I2S/AHUB/SPDIF/AC97 to y 
(which will only apply if SND_SOC_TEGRA is enabled)?


Re: [PATCH 3/3] sound: ASoC: tegra: Select tegra30 i2s and ahub for tegra124 SoC

2017-04-18 Thread Stephen Warren

On 04/18/2017 09:11 AM, Paul Kocialkowski wrote:

This selects the tegra30 i2s and ahub controllers for the tegra124 SoC.
These are needed when building without ARCH_TEGRA_3x_SOC set.



diff --git a/sound/soc/tegra/Kconfig b/sound/soc/tegra/Kconfig
index efbe8d4c019e..bcd18d2cf7a7 100644
--- a/sound/soc/tegra/Kconfig
+++ b/sound/soc/tegra/Kconfig
@@ -46,7 +46,7 @@ config SND_SOC_TEGRA20_SPDIF

 config SND_SOC_TEGRA30_AHUB
tristate
-   depends on SND_SOC_TEGRA && ARCH_TEGRA_3x_SOC
+   depends on SND_SOC_TEGRA && (ARCH_TEGRA_3x_SOC || ARCH_TEGRA_124_SOC)


Is this really a compile-time dependency? If so, don't we need to add 
T210 and T186 entries into that || condition too, since we could be 
building a kernel with just T210/T186 support and no T124 support?


Re: outreachy

2017-03-09 Thread Stephen Warren

On 03/09/2017 01:51 PM, Scott Branden wrote:

Hi Julia,

On 17-03-09 12:36 PM, Julia Lawall wrote:

Hello,

I discussed the issue of outreachy patches for bcm with Greg, and we are
not convinced that not having the patches CCd to you is such a good idea.
While we don't want to spam you with noise, some of the applicants are
starting to make more significant changes that it could be useful for you
to be aware of.

Could we try a compromise where you are not CCd on whitespace patches,
but
you are CCd on patches that actually modify the code?

>

All I'm asking is you work through your outreachy patches internally first
to get rid of the most basic mistakes and the email traffic it is generating.
 Once that learning process is through then they can be sent out like
any other patches to the kernel mailing lists and maintainers.


+1 from me too; I find these patches rather high volume and had to add a 
filter to keep them out of my primary inbox. I don't know what process 
is in place, but I would suggest:


1) Senders send everything to the outreachy list, where they are 
reviewed for basic issues, like learning to use git send-email, learning 
checkpatch, etc. In this case, only send the patch to the outreachy 
mailing list and nowhere else.


2) Once a patch has passed review there, then send the patch to the 
regular kernel mailing list just like any other patch; follow the output 
of get_maintainers.pl.


We have something like (1) inside NVIDIA for new contributors and 
pre-upstreaming IP review. It helps all the newcomers, but without 
requiring anyone involved in (2) to change behaviour.


The process I suggest is very much in line with the typically suggested 
"asking questions" process: (1) read the docs yourself, (2) ask local 
contacts for help, (3) start asking wider audiences for help.


Re: [PATCH 5/7] net: stmmac: Program RX queue size and flow control

2017-03-09 Thread Stephen Warren

On 03/09/2017 12:42 PM, Thierry Reding wrote:

On Mon, Feb 27, 2017 at 12:09:02PM +0200, Mikko Perttunen wrote:

On 23.02.2017 19:24, Thierry Reding wrote:

From: Thierry Reding 

Program the receive queue size based on the RX FIFO size and enable
hardware flow control for large FIFOs.



diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c 
b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c



@@ -252,6 +253,44 @@ static void dwmac4_dma_chan_op_mode(void __iomem *ioaddr, 
int txmode,
mtl_rx_op |= MTL_OP_MODE_RTC_128;
}

+   mtl_rx_op &= ~MTL_OP_MODE_RQS_MASK;
+   mtl_rx_op |= rqs << MTL_OP_MODE_RQS_SHIFT;
+
+   /* enable flow control only if each channel gets 4 KiB or more FIFO */
+   if (rxfifosz >= 4096) {
+   unsigned int rfd, rfa;
+
+   mtl_rx_op |= MTL_OP_MODE_EHFC;
+
+   switch (rxfifosz) {
+   case 4096:
+   rfd = 0x03;
+   rfa = 0x01;
+   break;
+
+   case 8192:
+   rfd = 0x06;
+   rfa = 0x0a;
+   break;
+
+   case 16384:
+   rfd = 0x06;
+   rfa = 0x12;
+   break;
+
+   default:
+   rfd = 0x06;
+   rfa = 0x1e;
+   break;
+   }


Are these values correct? In the 4096 case, rfd > rfa, in all other cases
the other way around. In any case it would be useful to have a comment
clarifying the thresholds in bytes.


I'll investigate. To be honest I simply took this from Stephen's U-Boot
driver since that's already tested. I trust Stephen, so I didn't bother
double-checking.


I don't recall for sure, but I think these values came directly from 
either the upstream kernel (the non-stmmac driver) or NV downstream 
kernel EQoS driver, and I re-used them without investigating. I'm not 
even sure if the outer if() expression is true; these numbers might not 
even end up being used?


Re: [PATCH] MAINTAINERS: Update for the current location of the bcm2835 tree.

2017-01-31 Thread Stephen Warren

On 01/31/2017 12:48 PM, Eric Anholt wrote:

I've been maintaining the bcm2835 branches here for a year or so.


Acked-by: Stephen Warren <swar...@wwwdotorg.org>


Re: [PATCH v3] dt-bindings: power: supply: bq24735: reverse the polarity of ac-detect

2016-12-15 Thread Stephen Warren

On 12/15/2016 05:21 AM, Peter Rosin wrote:

The ACOK pin on the bq24735 is active-high, of course meaning that when
AC is OK the pin is high. However, all Tegra dts files have incorrectly
specified active-high even though the signal is inverted on the Tegra
boards. This has worked since the Linux driver has also inverted the
meaning of the GPIO. Fix this situation by simply specifying in the
bindings what everybody else agrees on; that the ti,ac-detect-gpios is
active on AC adapter absence.

Signed-off-by: Peter Rosin 
---

Hi!

This patch is the result of this discussion:
http://marc.info/?t=14815253182

I don't like how it changes the one thing that seems correct, but
what to do?


I haven't followed this thread so hopefully what I say is relevant. My 
take is:


If the DT binding is correct or reasonable, keep it.

If the Tegra DTs contain incorrect content, and never worked correctly 
in this aspect, then fix them. We do need to maintain DT 
ABI/compatibility, but I believe only with stuff that actually worked 
correctly. If the DT has a bug, just fix it.


That said, if ti,ac-detect-gpios is describing a host GPIO, then it's 
entirely arbitrary which polarity it should have, i.e. the polarity is 
not something specified by the bq24735 HW. In that case, feel free to 
change either the binding to match the DT or the DT to match the 
binding. Changing the DT to match the binding might still be better 
since there could be other users you're not aware of, and they might 
have written their DT correctly, and you don't want to break them.


Re: [PATCH] gpio: tegra186: Add support for T186 GPIO

2016-11-08 Thread Stephen Warren

On 11/02/2016 04:48 AM, Suresh Mangipudi wrote:

Add GPIO driver for T186 based platforms.
Adds support for MAIN and AON GPIO's from T186.


I'm not sure how you/Thierry will approach merging this with the other 
GPIO driver he has, but here's a very quick review of this one in case 
it's useful.



diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c



+#define TEGRA_MAIN_GPIO_PORT_INFO(port, cid, cind, npins)  \


A comment indicating what "cid" and "cind" mean (and perhaps the other 
parameters too) would be useful.


A brief description of the overall register layout and structure and 
differences between the MAIN/AON controllers would be useful.



+[TEGRA_MAIN_GPIO_PORT_##port] = {  \
+   .port_name = #port, \
+   .cont_id = cid, \
+   .port_index = cind, \


Why not make the parameter names match the field names they're assigned to?


+   .valid_pins = npins,\
+   .scr_offset = cid * 0x1000 + cind * 0x40,   \
+   .reg_offset = cid * 0x1000 + cind * 0x200,  \


While C does define operator precedence rules that make that expression 
OK, I personally prefer using () to make it explicit:


+   .reg_offset = (cid * 0x1000) + (cind * 0x200),  \

That way, the reader doesn't have to think/remember so much.
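
Combined with the comment requested above, the macro might end up looking
roughly like this (a sketch only; the meanings given for cid/cind are my
reading of the offset arithmetic, so please double-check them):

/*
 * port:  port name, used to form the TEGRA_MAIN_GPIO_PORT_<port> index
 * cid:   ID of the controller that the port belongs to
 * cind:  index of the port within that controller
 * npins: number of valid pins in the port
 */
#define TEGRA_MAIN_GPIO_PORT_INFO(port, cid, cind, npins)		\
	[TEGRA_MAIN_GPIO_PORT_##port] = {				\
		.port_name = #port,					\
		.cont_id = cid,						\
		.port_index = cind,					\
		.valid_pins = npins,					\
		.scr_offset = ((cid) * 0x1000) + ((cind) * 0x40),	\
		.reg_offset = ((cid) * 0x1000) + ((cind) * 0x200),	\
	}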

Also, if these values can be calculated based on .cont_id and 
.port_index, I wonder why we need to pre-calculate them here and/or what 
else could be pre-calculated from .cont_id/.port_index? I tend to 
either (a) just store .cont_id and .port_index and calculate everything 
from them always, or (b) store just the derived data and not bother storing 
.cont_id/.port_index.
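
As a sketch of option (a), using the struct and field names from your patch
(types guessed, untested):

static inline u32 tegra_gpio_port_scr_offset(
		const struct tegra_gpio_port_soc_info *port)
{
	/* SCR space: 0x1000 per controller, 0x40 per port */
	return (port->cont_id * 0x1000) + (port->port_index * 0x40);
}

static inline u32 tegra_gpio_port_reg_offset(
		const struct tegra_gpio_port_soc_info *port)
{
	/* register space: 0x1000 per controller, 0x200 per port */
	return (port->cont_id * 0x1000) + (port->port_index * 0x200);
}

i.e. .scr_offset/.reg_offset would be dropped from the struct, and these
helpers called wherever the pre-calculated values are used today.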



+static struct tegra_gpio_port_soc_info tegra_main_gpio_cinfo[] = {
+   TEGRA_MAIN_GPIO_PORT_INFO(A, 2, 0, 7),
+   TEGRA_MAIN_GPIO_PORT_INFO(B, 3, 0, 7),


I assume the entries in this file must be in the same order as the DT 
binding port IDs? A comment to that effect would be useful.



+struct tegra_gpio_info;
+
+struct tegra_gpio_soc_info {
+   const char *name;
+   const struct tegra_gpio_port_soc_info *port;
+   int nports;
+};


This isn't information about an SoC; it's information about a 
controller, and there are 2 controllers within Tegra186. Rename to 
tegra_gpio_ctlr_info?



+struct tegra_gpio_controller {
+   int controller;
+   int irq;
+   struct tegra_gpio_info *tgi;
+};
+
+struct tegra_gpio_info {


Is this structure per-bank/-port? Also, "info" seems to be used above 
for static configuration info/data. I think this should be called 
"tegra_gpio_port"?



+   struct device *dev;
+   int nbanks;
+   void __iomem *gpio_regs;
+   void __iomem *scr_regs;
+   struct irq_domain *irq_domain;
+   const struct tegra_gpio_soc_info *soc;
+   struct tegra_gpio_controller tg_contrlr[MAX_GPIO_CONTROLLERS];
+   struct gpio_chip gc;
+   struct irq_chip ic;
+};



+#define GPIO_CNTRL_REG(tgi, gpio, roffset) \
+   ((tgi)->gpio_regs + (tgi)->soc->port[GPIO_PORT(gpio)].reg_offset + \
+   (GPIO_REG_DIFF * GPIO_PIN(gpio)) + (roffset))


Writing a static inline function would make formatting and type safety 
easier.
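
i.e. something along these lines (sketch only, reusing the patch's existing
types and the GPIO_PORT()/GPIO_PIN()/GPIO_REG_DIFF definitions; untested):

static inline void __iomem *tegra_gpio_cntrl_reg(struct tegra_gpio_info *tgi,
						 u32 gpio, u32 roffset)
{
	const struct tegra_gpio_port_soc_info *port =
		&tgi->soc->port[GPIO_PORT(gpio)];

	/* same arithmetic as the GPIO_CNTRL_REG() macro above */
	return tgi->gpio_regs + port->reg_offset +
	       (GPIO_REG_DIFF * GPIO_PIN(gpio)) + roffset;
}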



+static void tegra_gpio_update(struct tegra_gpio_info *tgi, u32 gpio,
+u32 reg_offset,u32 mask, u32 val)
+{
+   u32 rval;
+
+   rval = __raw_readl(GPIO_CNTRL_REG(tgi, gpio, reg_offset));
+   rval = (rval & ~mask) | (val & mask);
+   __raw_writel(rval, GPIO_CNTRL_REG(tgi, gpio, reg_offset));
+}


If you use a regmap object rather than a raw MMIO pointer, I believe 
there's already a function that does this read-modify-write.
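
For example, assuming the registers were wrapped in a regmap (created with
devm_regmap_init_mmio() at probe time and stored in a new tgi->regmap field,
neither of which the patch has today; needs linux/regmap.h), the helper
collapses to a sketch like:

static int tegra_gpio_update(struct tegra_gpio_info *tgi, u32 gpio,
			     u32 reg_offset, u32 mask, u32 val)
{
	unsigned int reg = tgi->soc->port[GPIO_PORT(gpio)].reg_offset +
			   (GPIO_REG_DIFF * GPIO_PIN(gpio)) + reg_offset;

	/* regmap_update_bits() does the read-modify-write internally */
	return regmap_update_bits(tgi->regmap, reg, mask, val);
}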



+/* This function will return if the GPIO is accessible by CPU */
+static bool gpio_is_accessible(struct tegra_gpio_info *tgi, u32 offset)


I'd prefer all functions use the same name prefix. "tegra_gpio" seems to 
be used so far. Actually, given there's already an existing Tegra GPIO 
driver, and this driver covers the new controller(s) in Tegra186, I'd 
prefer everything be named tegra186_gpio_xxx.



+   if (cont_id  < 0)
+   return false;


That should trigger a checkpatch error due to the presence of two spaces 
in the expression. Try running checkpatch and fixing any issues.



+   val = __raw_readl(tgi->scr_regs + scr_offset +
+   (pin * GPIO_SCR_DIFF) + GPIO_SCR_REG);
+
+   if ((val & GPIO_FULL_ACCESS) == GPIO_FULL_ACCESS)
+   return true;


I'm not entirely convinced about this. It's possible to have only read 
or only write access. I believe the CPU can be assigned to an arbitrary 
bus master group, whereas the value of GPIO_FULL_ACCESS assumes it's on 
group 1. Do we actually need this function at all except for 

Re: [PATCH] gpio: tegra186: Add support for T186 GPIO

2016-11-08 Thread Stephen Warren

On 11/08/2016 08:55 AM, Thierry Reding wrote:

On Mon, Nov 07, 2016 at 05:42:33PM -0800, Olof Johansson wrote:

On Mon, Nov 7, 2016 at 5:21 AM, Thierry Reding  wrote:

On Mon, Nov 07, 2016 at 08:53:37AM +0100, Linus Walleij wrote:

On Wed, Nov 2, 2016 at 11:48 AM, Suresh Mangipudi  wrote:


Add GPIO driver for T186 based platforms.
Adds support for MAIN and AON GPIO's from T186.

Signed-off-by: Suresh Mangipudi 


Stephen/Thierry/Alexandre:
Can I get your review on this Tegra thing?


Can we hold off on this for a bit? I've got my own implementation of
this in my Tegra186 tree because I thought nobody else was working on
it. From a brief look they seem mostly similar but there are a couple
of differences that I need to take a closer look at (and do some more
intensive testing on).


Be careful about discouraging other developers internally from
participating upstream, Thierry.


That was certainly not my intention. I do welcome any contributions,
internal or external.


Please don't become a bottleneck for your company's contributions.
Rewriting stuff on your own isn't a scalable model.


Like I said, I didn't rewrite this, I merely wrote the GPIO driver
because there wasn't one at the time and I wasn't aware of anyone else
working on one. Waiting for a GPIO driver to emerge would've prevented
me from making progress on other fronts.


One issue here is that there are lots of patches destined for upstream 
in your own personal tree, but they aren't actually upstream yet. I 
think pushing more of your work into linux-next (and further upstream) 
faster would help people out. linux-next and other "standard" repos 
should be easily discoverable by anyone following any upstreaming 
HOWTOs, but those HOWTOs aren't going to mention your personal trees, so 
patches there effectively don't exist. Making the already extant work 
more discoverable will help prevent people duplicating work.


Related to this, I've been waiting rather a while for the Tegra186 DT 
binding patches I sent to be applied. I'd love to see them go in ASAP 
even if there's no kernel driver behind them. The bindings have been 
reviewed, ack'd, and they're in use in U-Boot already.


A thought on Tegra186: For older supported SoCs, we obviously don't 
want to push changes upstream that aren't too well baked. However, for a 
new SoC that doesn't work yet, I tend to prioritize getting as much 
early work upstream as fast as possible (to try and unblock people 
working in other areas) over getting those patches perfect first. 
Release early, release often will help unblock people and parallelize 
work. Pipeline your own work rather than batching it all up to release 
at once.


P.S. don't take this email too personally or anything; I'm not trying to 
be complaining/critical or anything like that. It's just a few mild 
thoughts.


Re: [PATCH v3] ARM: bcm2835: Add names for the Raspberry Pi GPIO lines

2016-11-05 Thread Stephen Warren

On 10/27/2016 10:52 AM, Eric Anholt wrote:

From: Linus Walleij 

The idea is to give useful names to GPIO lines that an implementer
will be using from userspace, e.g. for maker type projects.  These are
user-visible using tools/gpio/lsgpio.c



 arch/arm/boot/dts/bcm2835-rpi-a-plus.dts | 65 +++
 arch/arm/boot/dts/bcm2835-rpi-a.dts  | 67 
 arch/arm/boot/dts/bcm2835-rpi-b-plus.dts | 66 +++
 arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts | 66 +++
 arch/arm/boot/dts/bcm2835-rpi-b.dts  | 67 


Aren't the A and B rev 2 pinouts the same? If so, why duplicate the 
content between the files instead of creating an include file? Same for 
A+, B+, Pi 2, and Pi 3. Shouldn't this patch update the Pi 2 and Pi 3 
DTs too?



diff --git a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts 
b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts



&gpio {
+   /*
+* This is based on the unreleased schematic for the Model A+.
+*
+* Legend:
+* "NC" = not connected (no rail from the SoC)
+* "FOO" = GPIO line named "FOO" on the schematic
+* "FOO_N" = GPIO line named "FOO" on schematic, active low
+*/
+   gpio-line-names = "SDA0",
+ "SCL0",



diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts 
b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts



&gpio {
+   /*
+* Taken from Raspberry-Pi-B-Plus-V1.2-Schematics.pdf
+* RPI-BPLUS sheet 1
+*
+* Legend:
+* "NC" = not connected (no rail from the SoC)
+* "FOO" = GPIO line named "FOO" on the schematic
+* "FOO_N" = GPIO line named "FOO" on schematic, active low
+*/
+   gpio-line-names = "ID_SD",
+ "ID_SC",


I think the whole point of naming GPIOs is to give users the same 
experience across the different boards where the same semantics exist in 
HW. Both the A+ and B+ use GPIO0/1 (a/k/a ID_SD/ID_SC a/k/a SDA0/SCL0) 
for the same semantic purpose and are exposed in the same externally 
visible way (same pins on the expansion header); the board ID EEPROM. 
Therefore I assert the names of these GPIOs should be identical on all 
boards that use them for that purpose, to allow SW (or human knowledge) 
portability between the boards.



+ "GPIO17",


This pin is known as GPIO_GEN0 on the expansion header. Given the 
expansion header is all end-users likely care about, and other pins 
(e.g. SPI_CE1_N) are named after the expansion header, shouldn't this 
patch use the GPIO expansion header naming for all pins that are routed 
to that header?


Re: [PATCH 1/3] dt-bindings: Add a binding for the RPi firmware GPIO driver.

2016-09-28 Thread Stephen Warren

On 09/26/2016 12:42 PM, Stefan Wahren wrote:



Stephen Warren <swar...@wwwdotorg.org> wrote on 26 September 2016 at 18:38:


On 09/23/2016 12:39 PM, Stefan Wahren wrote:

Hi Eric,


Eric Anholt <e...@anholt.net> wrote on 19 September 2016 at 18:13:


The RPi firmware exposes all of the board's GPIO lines through
property calls.  Linux chooses to control most lines directly through
the pinctrl driver, but for the FXL6408 GPIO expander on the Pi3, we
need to access them through the firmware.

Signed-off-by: Eric Anholt <e...@anholt.net>
---
 .../bindings/gpio/gpio-raspberrypi-firmware.txt| 22
++
 1 file changed, 22 insertions(+)
 create mode 100644
Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt

diff --git
a/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
b/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
new file mode 100644
index ..2b635c23a6f8
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
@@ -0,0 +1,22 @@
+Raspberry Pi power domain driver
+
+Required properties:
+
+- compatible:  Should be "raspberrypi,firmware-gpio"


I think the compatible should be more specific like

raspberrypi,rpi3-firmware-gpio

and all information which aren't requestable from the firmware should be
stored
in a info structure. This makes the driver easier to extend in the future by
adding new compatibles and their info structures.


Is this actually specific to the Pi3 at all?


AFAIK only the Raspberry Pi 3 has a GPIO expander which is accessible via the
common FW. My suggestion tries to follow the basic guideline "A precise
compatible string is better than a vague one" from Device Tree for Dummies [1].
So in case the next Raspberry Pi would have a different GPIO expander with
different parameters we could add a new compatible.

But you are right that the word order in "rpi3-firmware-gpio" suggests that there are
different FW versions, which is wrong. In the end it's only a compatible string, so no
strong opinion about the naming.

[1] -
https://events.linuxfoundation.org/sites/events/files/slides/petazzoni-device-tree-dummies.pdf


To deal with that requirement, what you'd want is the following:

RPi 1:
"raspberrypi,firmware-gpio"

RPi 2:
"raspberrypi,rpi2-firmware-gpio", "raspberrypi,firmware-gpio"

RPi 3:
"raspberrypi,rpi3-firmware-gpio", "raspberrypi,firmware-gpio"

Compatible values should always list both the most specific value and 
the baseline value that it's 100% backwards compatible with.


However, I don't think this argument applies in this case. It isn't the 
case that there's a separate Pi1 FW, Pi2 FW, Pi3 FW. Rather, there is a 
single FW image that runs on all (currently existing) Pis. As such, I 
believe that using solely "raspberrypi,firmware-gpio" is appropriate 
everywhere, since the thing being described is always the exact same 
thing with no variance.


This contrasts with HW modules in the SoC, since the different Pis 
really do have a different SoC, and hence potentially different HW 
modules, so e.g. compatible = "brcm,bcm2837-i2c", "brcm,bcm2835-i2c" 
could actually be useful.



Isn't the FW the same
across all Pis; the part that's specific to the Pi3 is whether it's
useful to use that API?

As such, I'd suggest just raspberrypi,firmware-gpio as the compatible value.




Re: [PATCH 2/3] gpio: Add a driver for the Raspberry Pi's firmware GPIO calls.

2016-09-26 Thread Stephen Warren

On 09/23/2016 07:15 AM, Eric Anholt wrote:

Linus Walleij  writes:


On Mon, Sep 19, 2016 at 6:13 PM, Eric Anholt  wrote:


This driver will be used for accessing the FXL6408 GPIO expander on the
Pi3.  We can't drive it directly from Linux because the firmware is
continuously polling one of the expander's lines to do its
undervoltage detection.

Signed-off-by: Eric Anholt 

(...)


+config GPIO_RASPBERRYPI
+   tristate "Raspberry Pi firmware-based GPIO access"
+   depends on OF_GPIO && RASPBERRYPI_FIRMWARE && (ARCH_BCM2835 || 
COMPILE_TEST)
+   help
+ Turns on support for using the Raspberry Pi firmware to
+ control GPIO pins.  Used for access to the FXL6408 GPIO
+ expander on the Pi 3.


Maybe it should be named GPIO_RPI_FXL6408 ?

(No strong opinion.)


See DT binding comment -- I think since this driver has no dependency on
being tied to the 6408 on the Pi3, we shouldn't needlessly bind it to the
FXL6408.  (the help comment was just context for why you would want the
driver today).


I'd suggest including "FW" or "FIRMWARE" in the Kconfig option too; the 
Raspberry Pi has multiple GPIO drivers; one accessing the BCM283x HW 
directly and the other going through the FW. It'd be good if each 
Kconfig name was pretty explicit re: which one it represented.


Re: [PATCH 1/3] dt-bindings: Add a binding for the RPi firmware GPIO driver.

2016-09-26 Thread Stephen Warren

On 09/23/2016 07:08 AM, Eric Anholt wrote:

Linus Walleij  writes:


On Mon, Sep 19, 2016 at 6:13 PM, Eric Anholt  wrote:


The RPi firmware exposes all of the board's GPIO lines through
property calls.  Linux chooses to control most lines directly through
the pinctrl driver, but for the FXL6408 GPIO expander on the Pi3, we
need to access them through the firmware.

Signed-off-by: Eric Anholt 


Aha


+++ b/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
@@ -0,0 +1,22 @@
+Raspberry Pi power domain driver


Really? :)


Thanks.


+Required properties:
+
+- compatible:  Should be "raspberrypi,firmware-gpio"


Usually this is vendor,compat, is the vendors name "raspberrypi"?


Yes, this driver is for part of the Raspberry Pi Foundation's firmware
code (you can find the same pattern in the firmware and firmware power
domain drivers).


+- gpio-controller: Marks the device node as a gpio controller
+- #gpio-cells: Should be <2> for GPIO number and flags
+- ngpios:  Number of GPIO lines to control.  See gpio.txt


Is this ever anything else than 8? Else omit it and hardcode
8 in the driver instead.


(see below)


+- firmware:Reference to the RPi firmware device node


Reference the DT binding for this.


+- raspberrypi,firmware-gpio-offset:
+   Number the firmware uses for the first GPIO line
+ controlled by this driver


Does this differ between different instances of this hardware or
can it just be open coded in the driver instead?


This is which range (128-135) of the firmware's GPIOs we're controlling.
If another GPIO expander appears later (quite believable, I think
they're down to 1 spare line on this expander), then we would just make
another node with a new offset and ngpios for that expander.


Why would we make another node for that? Wouldn't we always have a 
single node to represent the FW's control over GPIOs, and have that node 
expose all GPIOs that the FW supports? Which GPIO IDs clients actually 
use will simply be determined by the HW schematic, and kernel-side SW 
would just act as a conduit to pass those IDs between clients and the FW.



Sort of related: I also worry that we have races with the firmware for
the platform GPIO bits, since both ARM and firmware are doing RMWs (or,
even worse, maybe just Ws?) of the registers controlled by the pinctrl
driver.  Hopefully I can get the firmware to pass control of devices
like this over to Linux, with firmware making requests to us, but I
don't know if that will happen and we may need to access other GPIOs
using this interface :(


Aren't there write-to-set/write-to-clear registers? If not, then either 
FW owns everything in a particular register or Linux does; the HW won't 
allow sharing.


Re: [PATCH 1/3] dt-bindings: Add a binding for the RPi firmware GPIO driver.

2016-09-26 Thread Stephen Warren

On 09/23/2016 12:39 PM, Stefan Wahren wrote:

Hi Eric,


Eric Anholt  hat am 19. September 2016 um 18:13 geschrieben:


The RPi firmware exposes all of the board's GPIO lines through
property calls.  Linux chooses to control most lines directly through
the pinctrl driver, but for the FXL6408 GPIO expander on the Pi3, we
need to access them through the firmware.

Signed-off-by: Eric Anholt 
---
 .../bindings/gpio/gpio-raspberrypi-firmware.txt| 22
++
 1 file changed, 22 insertions(+)
 create mode 100644
Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt

diff --git
a/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
b/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
new file mode 100644
index ..2b635c23a6f8
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/gpio-raspberrypi-firmware.txt
@@ -0,0 +1,22 @@
+Raspberry Pi power domain driver
+
+Required properties:
+
+- compatible:  Should be "raspberrypi,firmware-gpio"


i think the compatible should be more specific like

raspberrypi,rpi3-firmware-gpio

and all information which isn't requestable from the firmware should be stored
in an info structure. This makes the driver easier to extend in the future by
adding new compatibles and their info structures.


Is this actually specific to the Pi3 at all? Isn't the FW the same 
across all Pis; the part that's specific to the Pi3 is whether it's 
useful to use that API?


As such, I'd suggest just raspberrypi,firmware-gpio as the compatible value.



Re: [PATCH 1/3] arm64: dts: berlin4ct: add missing unit name to /soc node

2016-09-06 Thread Stephen Warren

On 09/06/2016 04:57 AM, Mark Rutland wrote:

On Tue, Sep 06, 2016 at 06:20:48PM +0800, Jisheng Zhang wrote:

Hi Mark,

On Tue, 6 Sep 2016 11:22:08 +0100 Mark Rutland wrote:


On Tue, Sep 06, 2016 at 04:55:55PM +0800, Jisheng Zhang wrote:

This patch fixes the following DTC warning with W=1:

"Node /soc has a reg or ranges property, but no unit name"

Signed-off-by: Jisheng Zhang 


The node is only compatible with simple-bus, and so shouldn't have a
reg.


IIUC, the warning is caused by "ranges = <0 0 0xf700 0x100>;"


Hmm.. I'm rather confused by that warning. Per ePAPR and the
devicetree.org spec, the unit-address is meant to match the reg
property, and no mention is made of the ranges property. So I do not
think that it is necessary to require this.

That warning seems to have gone into DTC in commit c9d9121683b35281
("Warn on node name unit-address presence/absence mismatch").

Rob, Stephen, was there some discussion that prompted ranges requiring
a matching unit-address?


It looks like there was some in response to V2 of the patch which 
introduced this warning in dtc:


https://lkml.org/lkml/2013/9/19/301

I assume that's why Rob added that part to the patch when he reposted it.
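
To illustrate what the warning asks for, a /soc node whose unit-address 
matches its ranges translation would look roughly like this (the addresses 
here are made-up placeholders, not the berlin4ct values):

    soc@f7000000 {
        compatible = "simple-bus";
        #address-cells = <1>;
        #size-cells = <1>;
        /* placeholder translation; the unit-address matches the bus base */
        ranges = <0x0 0xf7000000 0x1000000>;
    };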


Re: [PATCH V2 9/9] i2c: tegra: Add pinctrl support

2016-08-26 Thread Stephen Warren

On 08/26/2016 10:38 AM, Jon Hunter wrote:


On 26/08/16 16:55, Stephen Warren wrote:

On 08/26/2016 07:09 AM, Jon Hunter wrote:

On Tegra124/132 the pins for I2C6 are shared with the Display Port AUX
(DPAUX) channel and on Tegra210 the pins for I2C4 and I2C6 are shared
with DPAUX1 and DPAUX0, respectively. The multiplexing of the pins is
handled by a register in the DPAUX and so the Tegra DPAUX driver has
been updated to register a pinctrl device for managing these pins.

The pins for these particular I2C devices are bound to the I2C device
prior to probing. However, these I2C devices are in a different power
partition to the DPAUX devices that own the pins. Hence, it is desirable
to place the pins in the 'idle' state and allow the DPAUX power
partition to switch off, when these I2C devices are not in use.
Therefore, add calls to place the I2C pins in the 'default' and 'idle'
states when the I2C device is runtime resumed and suspended,
respectively.

Please note that the pinctrl functions that set the state of the pins
check to see if the device has pins associated and will return zero
if they do not. Therefore, it is safe to call these pinctrl functions
even for I2C devices that do not have any pins associated.


I think this should be handled by drivers/i2c/muxes/i2c-mux-pinctrl.c
instead?


I remember having a look at i2c-mux some time back, but we did not have
requirement to share the pins dynamically at runtime between the DPAUX
and I2C devices.

The pins are just configured at probe time for either the DPAUX or I2C
device and then with this change when we are not active we can power
down the pins. However, the pins are always bound to either the DPAUX or
I2C.


Oh, so this isn't about 1 controller accessing 2 different physical 
buses at runtime by re-routing the SoC pinmux (which is the use-case 
i2c-mux-pinctrl handles), but simply about power-management.


If the I2C controller didn't need to set a different pinctrl state 
during idle, I would say that all the pin muxing should be set up at 
system boot time, just like every other part of the pinmux is. In that 
case, the fact that the pins could be routed to 1 of 2 I2C controllers 
(DPAUX0 vs. I2C4) is irrelevant to the patch; the routing is a static 
thing anyway.


Thinking some more, i2c-mux-pinctrl actually could handle this case, 
since it does define an idle pinmux state IIRC, or at least could be 
trivially extended to do so. That said, doing this directly in the I2C 
driver does seem better in this case, since the use-case is about 
the power advantages for a single bus, rather than anything to do with 
multiple buses.


Hence I'm OK with the concept of this patch. I didn't review the code 
though.
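
For context, the driver change keys off per-device pinctrl states named 
"default" and "idle"; a DT fragment along these lines (the state labels are 
hypothetical, exported by the DPAUX pinctrl device) is what the affected 
I2C controllers would carry:

    &i2c6 {
        pinctrl-names = "default", "idle";
        /* hypothetical pin states provided by the DPAUX pinctrl device */
        pinctrl-0 = <&dpaux_i2c_default>;
        pinctrl-1 = <&dpaux_i2c_idle>;
    };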


Re: [PATCH V2 9/9] i2c: tegra: Add pinctrl support

2016-08-26 Thread Stephen Warren

On 08/26/2016 07:09 AM, Jon Hunter wrote:

On Tegra124/132 the pins for I2C6 are shared with the Display Port AUX
(DPAUX) channel and on Tegra210 the pins for I2C4 and I2C6 are shared
with DPAUX1 and DPAUX0, respectively. The multiplexing of the pins is
handled by a register in the DPAUX and so the Tegra DPAUX driver has
been updated to register a pinctrl device for managing these pins.

The pins for these particular I2C devices are bound to the I2C device
prior to probing. However, these I2C devices are in a different power
partition to the DPAUX devices that own the pins. Hence, it is desirable
to place the pins in the 'idle' state and allow the DPAUX power
partition to switch off, when these I2C devices are not in use.
Therefore, add calls to place the I2C pins in the 'default' and 'idle'
states when the I2C device is runtime resumed and suspended,
respectively.

Please note that the pinctrl functions that set the state of the pins
check to see if the device has pins associated and will return zero
if they do not. Therefore, it is safe to call these pinctrl functions
even for I2C devices that do not have any pins associated.


I think this should be handled by drivers/i2c/muxes/i2c-mux-pinctrl.c 
instead?


Re: [ANNOUNCE] git-series: track changes to a patch series over time

2016-08-01 Thread Stephen Warren

On 07/29/2016 12:40 AM, Josh Triplett wrote:

I'd like to announce a project I've been working on for a while:

git-series provides a tool for managing patch series with git, tracking
the "history of history". git series tracks changes to the patch series
over time, including rebases and other non-fast-forwarding changes. git
series also tracks a cover letter for the patch series, formats the
series for email, and prepares pull requests.


Just as an FYI, I wouldn't be surprised if there's some overlap, or 
potential for merging of tools, between this tool and the "patman" tool 
that's part of the U-Boot source tree:


http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README;h=e36857dedea1d0dbafa41732aaf9bf0988d63f38;hb=HEAD



Re: [PATCH V3 3/10] Documentation: dt-bindings: firmware: tegra: add bindings of the BPMP

2016-07-19 Thread Stephen Warren

On 07/19/2016 03:17 AM, Joseph Lo wrote:

The BPMP is a specific processor in Tegra chip, which is designed for
booting process handling and offloading the power management, clock
management, and reset control tasks from the CPU. The binding document
defines the resources that would be used by the BPMP firmware driver,
which can create the interprocessor communication (IPC) between the CPU
and BPMP.


Acked-by: Stephen Warren <swar...@nvidia.com>


Re: [PATCH V3 01/10] Documentation: dt-bindings: mailbox: tegra: Add binding for HSP mailbox

2016-07-19 Thread Stephen Warren

On 07/19/2016 03:17 AM, Joseph Lo wrote:

Add DT binding for the Hardware Synchronization Primitives (HSP). The
HSP is designed for the processors to share resources and communicate
together. It provides a set of hardware synchronization primitives for
interprocessor communication. So the interprocessor communication (IPC)
protocols can use hardware synchronization primitive, when operating
between two processors not in an SMP relationship.


Acked-by: Stephen Warren <swar...@nvidia.com>

A couple of nits below that you might want to fix up when posting the 
final version of the series; your call.



diff --git a/include/dt-bindings/mailbox/tegra186-hsp.h 
b/include/dt-bindings/mailbox/tegra186-hsp.h



+ */
+
+
+#ifndef _DT_BINDINGS_MAILBOX_TEGRA186_HSP_H


Nit: There are 2 blank lines there.


+#endif /* _DT_BINDINGS_MAILBOX_TEGRA186_HSP_H */


Nit: Personally I hate "filename" comments on the #endif, since it's 
just one more place to update if the file moves or is copied.


Re: [PATCH V2 01/10] Documentation: dt-bindings: mailbox: tegra: Add binding for HSP mailbox

2016-07-18 Thread Stephen Warren

On 07/11/2016 10:08 AM, Stephen Warren wrote:

On 07/11/2016 08:14 AM, Rob Herring wrote:

On Thu, Jul 07, 2016 at 12:35:02PM -0600, Stephen Warren wrote:

On 07/07/2016 12:13 PM, Sivaram Nair wrote:

On Tue, Jul 05, 2016 at 05:04:22PM +0800, Joseph Lo wrote:

Add DT binding for the Hardware Synchronization Primitives (HSP). The
HSP is designed for the processors to share resources and communicate
together. It provides a set of hardware synchronization primitives for
interprocessor communication. So the interprocessor communication
(IPC)
protocols can use hardware synchronization primitive, when operating
between two processors not in an SMP relationship.



diff --git a/include/dt-bindings/mailbox/tegra186-hsp.h
b/include/dt-bindings/mailbox/tegra186-hsp.h



+#define HSP_MBOX_TYPE_DB 0x0
+#define HSP_MBOX_TYPE_SM 0x1
+#define HSP_MBOX_TYPE_SS 0x2
+#define HSP_MBOX_TYPE_AS 0x3
+
+#define HSP_DB_MASTER_CCPLEX 17
+#define HSP_DB_MASTER_BPMP 19
+
+#define HSP_MBOX_ID(type, ID) \
+(HSP_MBOX_TYPE_##type << 16 | ID)


It will be nicer if you avoid the macro glue magic '##' for 'type'. I
would also suggest to use braces around 'type' and 'ID'.


This technique has been used without issue in quite a few other places,
and has the benefit of simplifying the text wherever the macro is
used. What issue do you foresee?


I'm not a fan of using the macros to begin with and less so anything
more complex than a single constant value. I'd rather see 2 cells here
with the first being the id and the 2nd being the type.

An issue with token pasting is that grepping for DB, SM, etc. in the kernel tree
is probably noisy. Not such a big deal here, but a major PIA when you
have more complex sets of includes.


Is that a NAK or simply a suggestion? Having a single cell makes DT
parsing a bit simpler, since pretty much every SW stack provides a
default "one-cell" of_xlate implementation, whereas >1 cell means custom
code for of_xlate.


I didn't see a response to this. Joseph, let's just use two cells 
instead. I'm rather desperately waiting for this binding to be complete 
so I can finalize the U-Boot code that uses it, and it sounds like 
changing to two cells will get an ack faster. Can you post an updated 
version of this series today/ASAP to get things moving? Thanks.
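
To make the two options concrete, a client's mboxes property would look 
like one of the following (the &hsp phandle is illustrative; the macro and 
constants are the ones from the binding header quoted above):

    /* single cell, built with the HSP_MBOX_ID() macro */
    mboxes = <&hsp HSP_MBOX_ID(DB, HSP_DB_MASTER_BPMP)>;

    /* two cells, id first and type second, as Rob suggests */
    mboxes = <&hsp HSP_DB_MASTER_BPMP HSP_MBOX_TYPE_DB>;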




Re: [PATCH V2 03/10] Documentation: dt-bindings: firmware: tegra: add bindings of the BPMP

2016-07-18 Thread Stephen Warren

On 07/18/2016 01:44 AM, Joseph Lo wrote:

Hi Rob,

Thanks for your reviewing.

On 07/12/2016 12:05 AM, Stephen Warren wrote:

On 07/11/2016 08:22 AM, Rob Herring wrote:

On Tue, Jul 05, 2016 at 05:04:24PM +0800, Joseph Lo wrote:

The BPMP is a specific processor in Tegra chip, which is designed for
booting process handling and offloading the power management, clock
management, and reset control tasks from the CPU. The binding document
defines the resources that would be used by the BPMP firmware driver,
which can create the interprocessor communication (IPC) between the CPU
and BPMP.



diff --git
a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.txt
b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.txt



+NVIDIA Tegra Boot and Power Management Processor (BPMP)
+
+The BPMP is a specific processor in Tegra chip, which is designed for
+booting process handling and offloading the power management, clock
+management, and reset control tasks from the CPU. The binding document
+defines the resources that would be used by the BPMP firmware driver,
+which can create the interprocessor communication (IPC) between the
CPU
+and BPMP.
+
+Required properties:
+- name : Should be bpmp
+- compatible
+Array of strings
+One of:
+- "nvidia,tegra186-bpmp"
+- mboxes : The phandle of mailbox controller and the mailbox
specifier.
+- shmem : List of the phandle of the TX and RX shared memory area that
+  the IPC between CPU and BPMP is based on.


I think you can use memory-region here.


Isn't memory-region intended for references into the /reserved-memory
node. If so, that isn't appropriate in this case since this property
typically points at on-chip SRAM that isn't included in the OS's view of
"system RAM".

Agree with that.



Or, should /reserved-memory be used even for (e.g. non-DRAM) memory
regions that aren't represented by the /memory/reg property?



For shmem, I follow the same concept of the binding for arm,scpi
(.../arm/arm,scpi.txt) that is currently using in mainline. Do you think
that is more appropriate here?


Personally I think the shmem property name used by the current patch is 
fine. Still, if Rob feels strongly about changing it, that's fine too.
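
For reference, a node per the proposed binding would be instantiated 
roughly like the sketch below (the mailbox specifier and the SRAM section 
labels are illustrative; shmem points at on-chip SRAM sections, not 
/reserved-memory):

    bpmp {
        compatible = "nvidia,tegra186-bpmp";
        /* doorbell mailbox towards the BPMP, per the HSP binding */
        mboxes = <&hsp HSP_MBOX_ID(DB, HSP_DB_MASTER_BPMP)>;
        /* TX then RX shared-memory areas used for the IPC channels */
        shmem = <&cpu_bpmp_tx &cpu_bpmp_rx>;
    };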


Re: [PATCH] ARM: tegra: fix erroneous address in dts

2016-07-15 Thread Stephen Warren

On 07/15/2016 03:37 AM, Ralf Ramsauer wrote:

On 07/15/2016 12:02 AM, Thierry Reding wrote:

On Thu, Jul 14, 2016 at 06:48:57PM +0200, Ralf Ramsauer wrote:

c90bb7b enabled the high speed UARTs of the Jetson TK1. The address
specification inside the dts is wrong. Fix it and use the correct
address.

Fixes: c90bb7b9b9 ("ARM: tegra: Add high speed UARTs to Jetson TK1 device tree")
Signed-off-by: Ralf Ramsauer 
---
  arch/arm/boot/dts/tegra124-jetson-tk1.dts | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)


These addresses are correct. The 0, prefix was dropped from the unit
address in commit b5896f67ab3c ("ARM: tegra: Remove commas from unit
addresses on Tegra124").

What's the problem that you're seeing? What's not working for you?

>

I cannot find b5896f67ab3c in either swarren's tree or linux
upstream. But there's d0bc5aaf890 in swarren's linux-tegra tree that
matches your described changes and was committed on 1st of July. But
this patch is not upstream yet, while the other patch is.


The fix is in linux-next, and will be in mainline soon I expect.

My github linux-tegra.git isn't relevant since it's just my own personal 
dev branch, but for reference, the commit is there since it's based on 
linux-next.



Have a look at mainline tegra124-jetson-tk1.dts, there the addresses are
erroneous as they still use the 0, annotation. And I just realised, that
somehow, upstream patch c90bb7b slightly differs from my initial patch
[1] on the mailing list.


Your patch should probably be CC: stable so that existing kernel 
versions get fixed. That said, I'd argue that since the original patch 
never actually had any effect since it applied to the wrong node, the 
best fix for stable kernels is simply to revert it rather than fixing 
it, to avoid the potential for behaviour changes and regressions. 
Starting with kernel 4.8 (I think), that patch will begin to have actual 
effect.


Re: [PATCH V2 03/10] Documentation: dt-bindings: firmware: tegra: add bindings of the BPMP

2016-07-13 Thread Stephen Warren

On 07/05/2016 03:04 AM, Joseph Lo wrote:

The BPMP is a specific processor in Tegra chip, which is designed for
booting process handling and offloading the power management, clock
management, and reset control tasks from the CPU. The binding document
defines the resources that would be used by the BPMP firmware driver,
which can create the interprocessor communication (IPC) between the CPU
and BPMP.



diff --git 
a/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.txt 
b/Documentation/devicetree/bindings/firmware/nvidia,tegra186-bpmp.txt



+- Documentation/devicetree/bindings/mailbox/mailbox.txt
+- Documentation/devicetree/bindings/mailbox/nvidia,tegra186-hsp.txt



+- Documentation/devicetree/bindings/clock/clock-bindings.txt
+- include/dt-bindings/clock/tegra186-clock.h
+- Documentation/devicetree/bindings/reset/reset.txt
+- include/dt-bindings/reset/tegra186-reset.h


If you end up needing to repost this, it would be nice to make all those 
file references more generic. In particular, some SW projects store 
binding docs somewhere other than Documentation/devicetree/bindings/ 
(e.g. U-Boot uses doc/device-tree-bindings/), and it's possible that the 
header files aren't stored in include/ but somewhere else. To make these 
file references valid everywhere, I'd suggest using relative paths for 
the binding docs, and #include style paths for the headers, e.g.:


../clock/clock-bindings.txt
<dt-bindings/clock/tegra186-clock.h>





Re: [PATCH V2 01/10] Documentation: dt-bindings: mailbox: tegra: Add binding for HSP mailbox

2016-07-11 Thread Stephen Warren

On 07/11/2016 08:14 AM, Rob Herring wrote:

On Thu, Jul 07, 2016 at 12:35:02PM -0600, Stephen Warren wrote:

On 07/07/2016 12:13 PM, Sivaram Nair wrote:

On Tue, Jul 05, 2016 at 05:04:22PM +0800, Joseph Lo wrote:

Add DT binding for the Hardware Synchronization Primitives (HSP). The
HSP is designed for the processors to share resources and communicate
together. It provides a set of hardware synchronization primitives for
interprocessor communication. So the interprocessor communication (IPC)
protocols can use hardware synchronization primitive, when operating
between two processors not in an SMP relationship.



diff --git a/include/dt-bindings/mailbox/tegra186-hsp.h 
b/include/dt-bindings/mailbox/tegra186-hsp.h



+#define HSP_MBOX_TYPE_DB 0x0
+#define HSP_MBOX_TYPE_SM 0x1
+#define HSP_MBOX_TYPE_SS 0x2
+#define HSP_MBOX_TYPE_AS 0x3
+
+#define HSP_DB_MASTER_CCPLEX 17
+#define HSP_DB_MASTER_BPMP 19
+
+#define HSP_MBOX_ID(type, ID) \
+   (HSP_MBOX_TYPE_##type << 16 | ID)


It will be nicer if you avoid the macro glue magic '##' for 'type'. I
would also suggest to use braces around 'type' and 'ID'.


This technique has been used without issue in quite a few other places,
and has the benefit of simplifying the text wherever the macro is
used. What issue do you foresee?


I'm not a fan of using the macros to begin with and less so anything
more complex than a single constant value. I'd rather see 2 cells here
with the first being the id and the 2nd being the type.

An issue with token pasting is that grepping for DB, SM, etc. in the kernel tree
is probably noisy. Not such a big deal here, but a major PIA when you
have more complex sets of includes.


Is that a NAK or simply a suggestion? Having a single cell makes DT 
parsing a bit simpler, since pretty much every SW stack provides a 
default "one-cell" of_xlate implementation, whereas >1 cell means custom 
code for of_xlate.

