Re: [PATCH v3 0/9] Xilinx AI engine kernel driver

2020-12-17 Thread Jiaying Liang



On 12/15/20 7:23 AM, Alex Deucher wrote:

On Mon, Dec 14, 2020 at 7:24 PM Jiaying Liang  wrote:

On 12/11/20 11:39 AM, Daniel Vetter wrote:

Hi all

On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher   wrote:

On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang   wrote:

AI engine is the acceleration engine provided by Xilinx. These engines
provide high compute density for vector-based algorithms, and flexible
custom compute and data movement. It has core tiles for compute and
shim tiles to interface the FPGA fabric.

You can check the AI engine architecture document for more hardware details:
https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf

This patch series adds a Linux kernel driver to manage the Xilinx AI
engine array device and AI engine partitions (groups of AI engine tiles
dedicated to an application).

Hi Wendy,

I think it would be good to provide an overview of how your stack
works in general.  That would give reviewers a better handle on how
all of this fits together.  I'd suggest including an overview in the
cover letter and also in the commit message and/or as a comment in the
code in one of the patches.  I'm not really an expert when it comes to
FPGAs, but this basically looks like a pretty low level interface to
set up the data fabric for a kernel that will run on the soft logic or
maybe the microcontroller on the board.  It doesn't have to be super
detailed, just a nice flow for how you might use this.  E.g.,

Userspace uses ioctls X, Y, Z to configure the data fabric for the
FPGA kernel.  The kernels can run on... .  DMA access to system memory
for data sets can be allocated using ioctl A.  DMA access is limited
by... . The user can then load the FPGA kernel on to one of the
engines using ioctl B and finally they can kick off the whole thing
using ioctl C.  FPGA kernels are compiled using YYY toolchain and use
the following runtime (link to runtime) to configure the data
fabric using ioctls X, Y, Z.

At least for drm drivers we ideally have that as a .rst file in
Documentation/. With that you can even do full svg graphs, or just dot
graphs, of the overall stack if you really want to go overboard :-)


It would also be good to go over the security implications of the
design.  E.g., can the FPGA kernel(s) access the DMA engine directly,
or is it limited to just the DMA regions set up by the ioctls?  Also,
does the hardware and software design allow for multiple users?  If
so, how does that work?

I've also seen indications that there's some on-chip or on-card
memory. How that's planned to be used and whether we want to manage
this (maybe even with something like ttm) would be good to understand.

All excellent questions from Alex, just figured I add some more.

Cheers, Daniel

Hi Alex, Daniel,

Below is an overview of the driver.

The AI engine kernel driver manages the Xilinx AI engine device. An AI engine
device contains core tiles and SHIM tiles. Core tiles are the computation
tiles; SHIM tiles are the tiles interfacing with external components.

 +--------+--------+--------+--------+
 | Core   | Core   | Core   | Core   | ...
 |        |        |        |        |
 +--------+--------+--------+--------+
 | Core   | Core   | Core   | Core   | ...
 |        |        |        |        |
 +--------+--------+--------+--------+
   ...
 +--------+--------+--------+--------+
 | SHIM   | SHIM   | SHIM   | SHIM   |
 | PL     | PL     | PL     | PL/NOC |
 +---+----+---+----+---+----+--+---+-+
     |        |        |       |   |
AXI Streams, Events/Signals    |   | AXI MM
     |        |        |       |   |
 +---+--------+--------+-------+ +-+------+
 |            FPGA             | |  NOC   |
 +-----------------------------+ +-+------+
                                   |
                               +---+----+
                               |  DDR   |
                               +--------+

Each core tile contains a computing module, local memory and a DMA module.
The local memory DMA module moves data between the AXI streams and the local
memory. The computing module can also get/put data directly from/to the AXI
streams. The AIE SHIM enables AIE tiles to get/put data from/to AXI streams
from the FPGA, and enables an external master to access the AI engine address
space through AXI MM. The SHIM NoC module has a DMA engine, which can access
external memory through AXI MM and push it to the internal AXI streams.

Re: [PATCH v3 0/9] Xilinx AI engine kernel driver

2020-12-14 Thread Jiaying Liang



On 12/11/20 11:39 AM, Daniel Vetter wrote:

Hi all

On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher  wrote:

On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang  wrote:

AI engine is the acceleration engine provided by Xilinx. These engines
provide high compute density for vector-based algorithms, and flexible
custom compute and data movement. It has core tiles for compute and
shim tiles to interface the FPGA fabric.

You can check the AI engine architecture document for more hardware details:
https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf

This patch series adds a Linux kernel driver to manage the Xilinx AI
engine array device and AI engine partitions (groups of AI engine tiles
dedicated to an application).

Hi Wendy,

I think it would be good to provide an overview of how your stack
works in general.  That would give reviewers a better handle on how
all of this fits together.  I'd suggest including an overview in the
cover letter and also in the commit message and/or as a comment in the
code in one of the patches.  I'm not really an expert when it comes to
FPGAs, but this basically looks like a pretty low level interface to
set up the data fabric for a kernel that will run on the soft logic or
maybe the microcontroller on the board.  It doesn't have to be super
detailed, just a nice flow for how you might use this.  E.g.,

Userspace uses ioctls X, Y, Z to configure the data fabric for the
FPGA kernel.  The kernels can run on... .  DMA access to system memory
for data sets can be allocated using ioctl A.  DMA access is limited
by... . The user can then load the FPGA kernel on to one of the
engines using ioctl B and finally they can kick off the whole thing
using ioctl C.  FPGA kernels are compiled using YYY toolchain and use
the following runtime (link to runtime) to configure the data
fabric using ioctls X, Y, Z.

At least for drm drivers we ideally have that as a .rst file in
Documentation/. With that you can even do full svg graphs, or just dot
graphs, of the overall stack if you really want to go overboard :-)


It would also be good to go over the security implications of the
design.  E.g., can the FPGA kernel(s) access the DMA engine directly,
or is it limited to just the DMA regions set up by the ioctls?  Also,
does the hardware and software design allow for multiple users?  If
so, how does that work?

I've also seen indications that there's some on-chip or on-card
memory. How that's planned to be used and whether we want to manage
this (maybe even with something like ttm) would be good to understand.

All excellent questions from Alex, just figured I add some more.

Cheers, Daniel


Hi Alex, Daniel,

Below is an overview of the driver.

The AI engine kernel driver manages the Xilinx AI engine device. An AI engine
device contains core tiles and SHIM tiles. Core tiles are the computation
tiles; SHIM tiles are the tiles interfacing with external components.

 +--------+--------+--------+--------+
 | Core   | Core   | Core   | Core   | ...
 |        |        |        |        |
 +--------+--------+--------+--------+
 | Core   | Core   | Core   | Core   | ...
 |        |        |        |        |
 +--------+--------+--------+--------+
   ...
 +--------+--------+--------+--------+
 | SHIM   | SHIM   | SHIM   | SHIM   |
 | PL     | PL     | PL     | PL/NOC |
 +---+----+---+----+---+----+--+---+-+
     |        |        |       |   |
AXI Streams, Events/Signals    |   | AXI MM
     |        |        |       |   |
 +---+--------+--------+-------+ +-+------+
 |            FPGA             | |  NOC   |
 +-----------------------------+ +-+------+
                                   |
                               +---+----+
                               |  DDR   |
                               +--------+

Each core tile contains a computing module, local memory and a DMA module.
The local memory DMA module moves data between the AXI streams and the local
memory. The computing module can also get/put data directly from/to the AXI
streams. The AIE SHIM enables AIE tiles to get/put data from/to AXI streams
from the FPGA, and enables an external master to access the AI engine address
space through AXI MM. The SHIM NoC module has a DMA engine, which can access
external memory through AXI MM and push it to the internal AXI streams.

At runtime, the AI engine tiles interconnection needs to be configured 

Re: [PATCH 2/9] misc: Add Xilinx AI engine device driver

2020-12-11 Thread Jiaying Liang



On 12/9/20 4:47 AM, Daniel Vetter wrote:

On Tue, Dec 08, 2020 at 11:54:57AM -0800, Jiaying Liang wrote:

On 12/8/20 9:12 AM, Nicolas Dufresne wrote:

Le mercredi 18 novembre 2020 à 00:06 -0800, Wendy Liang a écrit :

Create AI engine device/partition hierarchical structure.

Each AI engine device can have multiple logical partitions (groups of AI
engine tiles). Each partition is column based and has its own node ID
in the system. The AI engine device driver manages its partitions.

Applications can access an AI engine partition through the AI engine
partition driver instance. AI engine register writes are moved to the kernel,
as some registers in the AI engine array need privileged permission.

Hi there, it's nice to see an effort to upstream an AI driver. I'm a little
worried this driver is not obvious to use from its source code itself. Do you
have a reference to some open-source code that demonstrates its usage?

We have an AI engine library which provides cross-platform APIs for other
libraries/applications to use the hardware. Here is the source code:

https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2/src

The cross-platform AI engine library runs in Linux userspace and defines how
to configure the device; the kernel driver controls what can be accessed and
manages errors from the device.

So I kinda ignored this driver submission because in the past all these AI
drivers had at best incomplete open source (usually the compiler is
closed, often also large parts of the runtime). I think yours would be the
first that breaks this trend, is that really the case? I.e. I could make
full use of this hw without any closed source bits to run DNN workloads
and things like that?

AI engine can be used for signal processing or high-performance computing.


The kernel driver works with the AI engine software library which I mentioned
above, which will be used by the Xilinx runtime:
https://xilinx.github.io/XRT/2020.2/html/index.html

The Xilinx runtime is a layer for acceleration libraries or applications to
use Xilinx accelerators, e.g. it has an OpenCL implementation.

If that's the case then I think there's nothing stopping us from doing the
right thing and merging this driver into the right subsystem: the subsystem
for accelerators with their own memory and which want dma-buf integration is
drivers/gpu, not drivers/misc.


The AI engine kernel driver is used for device runtime configuration updates
and runtime monitoring, such as async error detection. Buffer management is
outside the AI engine driver; it is covered by the Xilinx runtime:

https://github.com/Xilinx/XRT/tree/master/src/runtime_src/core/edge/drm/zocl

The AI engine driver imports the DMA buf.

The AI engine device is quite different from GPU devices, and the AI engine
operations still need driver-specific ioctls.

We have more than 100 core tiles; each tile's function can be defined at
compilation time. At runtime, we load the configuration (application-defined
I/O commands) to configure each tile's registers to set up routing, set up
DMAs, configure local memories, and enable the tiles.

As the AI engine device hardware is different from GPUs, we are not able to
make use of functions abstracted for GPUs, and we don't manage the buffers
in this driver. I am not sure if it is OK to add the driver to drivers/gpu
while not using the GPU abstractions.


There is another reply to the patch series asking for clarification on the
overview of the driver, and I had some discussions with other team members.
I will reply to that email to provide more details on how this driver is used
overall.

Any suggestions on how to fit the driver into drivers/gpu or another driver
framework will be much appreciated.


Thanks,

Wendy


Apologies that I'm jumping in with the really big arch review when v3 is
already on the list. But last few times merging AI drivers to drivers/misc
was really just a way to avoid the merge criteria for drivers/gpu
acceleration drivers. I'd love to land the first real open AI driver in
upstream, properly.

Cheers, Daniel




Best Regards,

Wendy



Signed-off-by: Wendy Liang
Signed-off-by: Hyun Kwon
---
   MAINTAINERS    |   8 +
   drivers/misc/Kconfig   |  12 +
   drivers/misc/Makefile  |   1 +
   drivers/misc/xilinx-ai-engine/Makefile |  11 +
   drivers/misc/xilinx-ai-engine/ai-engine-aie.c  | 115 +
   drivers/misc/xilinx-ai-engine/ai-engine-dev.c  | 448 ++
   drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 226 ++
   drivers/misc/xilinx-ai-engine/ai-engine-part.c | 498 +
   drivers/misc/xilinx-ai-engine/ai-engine-res.c  | 114 +
   include/uapi/linux/xlnx-ai-engine.h    | 107 +
   10 files changed, 1540 insertions(+)
   create mode 100644 drivers/misc/xilinx-ai-engine/Makefile

Re: [PATCH v3 1/9] dt-binding: soc: xilinx: ai-engine: Add AI engine binding

2020-12-09 Thread Jiaying Liang



On 12/8/20 3:41 PM, Rob Herring wrote:

On Sun, Nov 29, 2020 at 11:48:17PM -0800, Wendy Liang wrote:

Xilinx AI engine array can be partitioned statically for different
applications. In the device tree, there will be device node for the AI
engine device, and device nodes for the statically configured AI engine
partitions. Each of the statically configured partitions has a partition
ID in the system.

Signed-off-by: Wendy Liang 
---
  .../bindings/soc/xilinx/xlnx,ai-engine.yaml| 126 +
  1 file changed, 126 insertions(+)
  create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml

diff --git a/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml 
b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
new file mode 100644
index 000..1de5623
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
@@ -0,0 +1,126 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/soc/xilinx/xlnx,ai-engine.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xilinx AI Engine
+
+maintainers:
+  - Wendy Liang 
+
+description: |+

You don't need '|' unless there's formatting to preserve.

Will change



+  The Xilinx AI Engine is a tile processor with many cores (up to 400) that
+  can run in parallel. The data routing between cores is configured through
+  internal switches, and shim tiles interface with external interconnect, such
+  as memory or PL.
+
+properties:
+  compatible:
+const: xlnx,ai-engine-v1.0

This is soft logic? If not, don't use version numbers.


It is not soft logic. If there is a future version of the device, can we use
a version number in compatible to describe the device version?




+
+  reg:
+description: |
+  Physical base address and length of the device registers.

That's every 'reg' property. Drop.

[Wendy] will drop it.



+  The AI engine address space assigned to Linux is defined by Xilinx
+  platform design tool.
+
+  '#address-cells':
+enum: [2]

const: 2

Will change



+description: |
+  size of cell to describe AI engine range of tiles address.
+  It is the location of the starting tile of the range.
+  As the AI engine tiles are 2D array, the location of a tile
+  is presented as (column, row), the address cell is 2.
+
+  '#size-cells':
+enum: [2]
+description: |
+  size of cell to describe AI engine range of tiles size.
+  As the AI engine tiles are 2D array, the size cell is 2.
+
+  power-domains:
+maxItems: 1
+description: phandle to the associated power domain
+
+  interrupts:
+maxItems: 3
+
+  interrupt-names:
+description: |
+  Should be "interrupt1", "interrupt2" or "interrupt3".

Really, not useful names. If you do have names, they should be a schema,
not freeform text.


+
+required:
+  - compatible
+  - reg
+  - '#address-cells'
+  - '#size-cells'
+  - power-domains
+  - interrupt-parent

Generally, never required because it could be in the parent node.

will remove



+  - interrupts
+  - interrupt-names
+
+patternProperties:
+  "^aie_partition@[0-9]+$":

aie-partition@

The unit-address is just the 1st cell of reg (the row)? Or needs to be
row and column, in which case you'd want something like '@0,0'. Also,
unit-address values are typically hex, not decimal.
It will be col,row. Will change to the address format with the starting
column and row.



+type: object
+description: |
+  AI engine partition which is a group of column based tiles of the AI
+  engine device. Each AI engine partition is isolated from the other
+  AI engine partitions. An AI engine partition is defined by Xilinx
+  platform design tools. Each partition has a SHIM row and core tiles rows.
+  A SHIM row contains SHIM tiles which are the interface to external
+  components. AXI master can access AI engine registers, push data to and
+  fetch data from AI engine through the SHIM tiles. Core tiles are the
+  compute tiles.
+
+properties:
+  reg:
+description: |
+  It describes the group of tiles of the AI engine partition. It needs
+  to include the SHIM row. The format is defined by the parent AI engine
+  device node's '#address-cells' and '#size-cells' properties. e.g. a v1
+  AI engine device has 2D tiles array, the first row is SHIM row. A
+  partition which has 50 columns and 8 rows of core tiles and 1 row of
+  SHIM tiles will be presented as <0 0 50 9>.

You should be able to write some constraints like max row and column
values?


The max rows and columns depend on the underlying hardware platform; the
driver can get the max allowed rows and columns from the size field of the
"reg" property in the parent AI engine device node.




+
+  label:
+maxItems: 1

'label' is not an array. Why do you need label?


+
+  

Re: [PATCH 2/9] misc: Add Xilinx AI engine device driver

2020-12-08 Thread Jiaying Liang



On 12/8/20 9:12 AM, Nicolas Dufresne wrote:

Le mercredi 18 novembre 2020 à 00:06 -0800, Wendy Liang a écrit :

Create AI engine device/partition hierarchical structure.

Each AI engine device can have multiple logical partitions (groups of AI
engine tiles). Each partition is column based and has its own node ID
in the system. The AI engine device driver manages its partitions.

Applications can access an AI engine partition through the AI engine
partition driver instance. AI engine register writes are moved to the kernel,
as some registers in the AI engine array need privileged permission.

Hi there, it's nice to see an effort to upstream an AI driver. I'm a little
worried this driver is not obvious to use from its source code itself. Do you
have a reference to some open-source code that demonstrates its usage?


We have an AI engine library which provides cross-platform APIs for other
libraries/applications to use the hardware. Here is the source code:

https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2/src

The cross-platform AI engine library runs in Linux userspace and defines how
to configure the device; the kernel driver controls what can be accessed and
manages errors from the device.



Best Regards,

Wendy





Signed-off-by: Wendy Liang 
Signed-off-by: Hyun Kwon 
---
  MAINTAINERS    |   8 +
  drivers/misc/Kconfig   |  12 +
  drivers/misc/Makefile  |   1 +
  drivers/misc/xilinx-ai-engine/Makefile |  11 +
  drivers/misc/xilinx-ai-engine/ai-engine-aie.c  | 115 +
  drivers/misc/xilinx-ai-engine/ai-engine-dev.c  | 448 ++
  drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 226 ++
  drivers/misc/xilinx-ai-engine/ai-engine-part.c | 498
+
  drivers/misc/xilinx-ai-engine/ai-engine-res.c  | 114 +
  include/uapi/linux/xlnx-ai-engine.h    | 107 +
  10 files changed, 1540 insertions(+)
  create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
  create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
  create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
  create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
  create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
  create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
  create mode 100644 include/uapi/linux/xlnx-ai-engine.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 5cc595a..40e3351 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19283,6 +19283,14 @@ T: git https://github.com/Xilinx/linux-xlnx.git
  F: Documentation/devicetree/bindings/phy/xlnx,zynqmp-psgtr.yaml
  F: drivers/phy/xilinx/phy-zynqmp.c
  
+XILINX AI ENGINE DRIVER

+M: Wendy Liang 
+S: Supported
+F: Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
+F: drivers/misc/xilinx-ai-engine/
+F: include/linux/xlnx-ai-engine.h
+F: include/uapi/linux/xlnx-ai-engine.h
+
  XILLYBUS DRIVER
  M: Eli Billauer 
  L: linux-kernel@vger.kernel.org
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index fafa8b0..0b8ce4d 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -444,6 +444,18 @@ config XILINX_SDFEC
  
   If unsure, say N.
  
+config XILINX_AIE

+   tristate "Xilinx AI engine"
+   depends on ARM64 || COMPILE_TEST
+   help
+ This option enables support for the Xilinx AI engine driver.
+ One Xilinx AI engine device can have multiple partitions (groups of
+ AI engine tiles). Xilinx AI engine device driver instance manages
+ AI engine partitions. User application access its partitions through
+ AI engine partition instance file operations.
+
+ If unsure, say N
+
  config MISC_RTSX
 tristate
 default MISC_RTSX_PCI || MISC_RTSX_USB
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d23231e..2176b18 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -57,3 +57,4 @@ obj-$(CONFIG_HABANA_AI)   += habanalabs/
  obj-$(CONFIG_UACCE)+= uacce/
  obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
  obj-$(CONFIG_HISI_HIKEY_USB)   += hisi_hikey_usb.o
+obj-$(CONFIG_XILINX_AIE)   += xilinx-ai-engine/
diff --git a/drivers/misc/xilinx-ai-engine/Makefile b/drivers/misc/xilinx-ai-engine/Makefile
new file mode 100644
index 000..7827a0a
--- /dev/null
+++ b/drivers/misc/xilinx-ai-engine/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for Xilinx AI engine device driver
+#
+
+obj-$(CONFIG_XILINX_AIE)   += xilinx-aie.o
+
+xilinx-aie-$(CONFIG_XILINX_AIE) := ai-engine-aie.o \
+  ai-engine-dev.o \
+  ai-engine-part.o \
+  ai-engine-res.o
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c


RE: [PATH v7 1/2] mailbox: ZynqMP IPI mailbox controller

2019-01-10 Thread Jiaying Liang


> -----Original Message-----
> From: Jassi Brar [mailto:jassisinghb...@gmail.com]
> Sent: Friday, December 21, 2018 6:00 PM
> To: Jiaying Liang 
> Cc: Michal Simek ; Rob Herring ;
> Mark Rutland ; Linux Kernel Mailing List  ker...@vger.kernel.org>; linux-arm-ker...@lists.infradead.org; Devicetree
> List 
> Subject: Re: [PATH v7 1/2] mailbox: ZynqMP IPI mailbox controller
> 
> On Thu, Dec 20, 2018 at 11:32 AM Wendy Liang 
> wrote:
> 
> 
> > diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c
> > b/drivers/mailbox/zynqmp-ipi-mailbox.c
> > new file mode 100644
> > index 000..bbddfd5
> > --- /dev/null
> > +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
> > @@ -0,0 +1,764 @@
> 
> ..
> > +
> > +/* IPI SMC Macros */
> > +#define IPI_SMC_OPEN_IRQ_MASK  0x0001UL /* IRQ enable bit
> in IPI
> > + * open SMC call
> > + */
> > +#define IPI_SMC_NOTIFY_BLOCK_MASK  0x0001UL /* Flag to
> indicate if
> > + * IPI notification 
> > needs
> > + * to be blocking.
> > + */
> > +#define IPI_SMC_ENQUIRY_DIRQ_MASK  0x0001UL /* Flag to
> indicate if
> > + * notification 
> > interrupt
> > + * to be disabled.
> > + */
> > +#define IPI_SMC_ACK_EIRQ_MASK  0x0001UL /* Flag to indicate
> if
> > + * notification 
> > interrupt
> > + * to be enabled.
> > + */
> > +
> The first two are unused.
[Wendy] Will remove the unused macros

> 
> 
> > +struct zynqmp_ipi_pdata {
> > +   struct device *dev;
> > +   int irq;
> > +   unsigned int method;
> >
> 'method' doesn't track the HVC option in the driver. Please have a look.
[Wendy] I will add one more check in the function implementation to check
for HVC and error out if the method is neither SMC nor HVC.
> 
> ..
> > +
> > +static void zynqmp_ipi_fw_call(struct zynqmp_ipi_mbox *ipi_mbox,
> > +  unsigned long a0, unsigned long a3,
> > +  unsigned long a4, unsigned long a5,
> > +  unsigned long a6, unsigned long a7,
> > +  struct arm_smccc_res *res)
> >
> [a4,a7] are always 0, so maybe just drop them?
[Wendy] Will drop them from the API, and set them to 0.
> 
> 
> > +static bool zynqmp_ipi_last_tx_done(struct mbox_chan *chan) {
> > +   struct device *dev = chan->mbox->dev;
> > +   struct zynqmp_ipi_mbox *ipi_mbox = dev_get_drvdata(dev);
> > +   struct zynqmp_ipi_mchan *mchan = chan->con_priv;
> > +   int ret;
> > +   u64 arg0;
> > +   struct arm_smccc_res res;
> > +   struct zynqmp_ipi_message *msg;
> > +
> > +   if (WARN_ON(!ipi_mbox)) {
> > +   dev_err(dev, "no platform drv data??\n");
> > +   return false;
> > +   }
> > +
> > +   if (mchan->chan_type == IPI_MB_CHNL_TX) {
> > +   /* We only need to check if the message been taken
> > +* by the remote in the TX channel
> > +*/
> > +   arg0 = SMC_IPI_MAILBOX_STATUS_ENQUIRY;
> > +   zynqmp_ipi_fw_call(ipi_mbox, arg0, 0, 0, 0, 0, 0, &res);
> > +   /* Check the SMC call status, a0 of the result */
> > +   ret = (int)(res.a0 & 0xFFFFFFFF);
> > +   if (ret < 0 || ret & IPI_MB_STATUS_SEND_PENDING)
> > +   return false;
> > +
> OK, but ...
> 
> > +   msg = mchan->rx_buf;
> > +   msg->len = mchan->resp_buf_size;
> > +   memcpy_fromio(msg->data, mchan->resp_buf, msg->len);
> > +   mbox_chan_received_data(chan, (void *)msg);
> >
>  wouldn't this be done from zynqmp_ipi_interrupt()?
[Wendy] The IPI hardware supports both synchronous and asynchronous modes.
The RX interrupt is used for the remote to notify the host, or for responding
to asynchronous requests. In synchronous mode, the IPI hardware allow remote
to wr

RE: [PATCH v6 1/2] mailbox: ZynqMP IPI mailbox controller

2018-12-10 Thread Jiaying Liang



> -----Original Message-----
> From: Wendy Liang [mailto:wendy.li...@xilinx.com]
> Sent: Monday, November 19, 2018 1:26 PM
> To: jassisinghb...@gmail.com; Michal Simek ;
> robh...@kernel.org; mark.rutl...@arm.com
> Cc: linux-kernel@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; Jiaying Liang 
> Subject: [PATCH v6 1/2] mailbox: ZynqMP IPI mailbox controller
> 
> This patch is to introduce ZynqMP IPI mailbox controller driver to use the
> ZynqMP IPI block as mailboxes.
> 
> Signed-off-by: Wendy Liang 
> ---
>  drivers/mailbox/Kconfig|   9 +
>  drivers/mailbox/Makefile   |   2 +
>  drivers/mailbox/zynqmp-ipi-mailbox.c   | 762 +
>  include/linux/mailbox/zynqmp-ipi-message.h |  24 +
>  4 files changed, 797 insertions(+)
>  create mode 100644 drivers/mailbox/zynqmp-ipi-mailbox.c
>  create mode 100644 include/linux/mailbox/zynqmp-ipi-message.h
> 
> diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig index
> 3eeb12e9..10bfe3f 100644
> --- a/drivers/mailbox/Kconfig
> +++ b/drivers/mailbox/Kconfig
> @@ -205,4 +205,13 @@ config MTK_CMDQ_MBOX
> mailbox driver. The CMDQ is used to help read/write registers with
> critical time limitation, such as updating display configuration
> during the vblank.
> +
> +config ZYNQMP_IPI_MBOX
> + tristate "Xilinx ZynqMP IPI Mailbox"
> + depends on ARCH_ZYNQMP && OF
> + help
> +   Mailbox implementation for the Xilinx ZynqMP IPI controller. It is
> +   used to send notifications or short messages between processors on
> +   Xilinx UltraScale+ MPSoC platforms. Say Y here if you want to have
> +   this support.
>  endif
> diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile index
> c818b5d..bb3d604 100644
> --- a/drivers/mailbox/Makefile
> +++ b/drivers/mailbox/Makefile
> @@ -44,3 +44,5 @@ obj-$(CONFIG_TEGRA_HSP_MBOX)+= tegra-
> hsp.o
>  obj-$(CONFIG_STM32_IPCC) += stm32-ipcc.o
> 
>  obj-$(CONFIG_MTK_CMDQ_MBOX)  += mtk-cmdq-mailbox.o
> +
> +obj-$(CONFIG_ZYNQMP_IPI_MBOX)  += zynqmp-ipi-mailbox.o
> diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-
> ipi-mailbox.c
> new file mode 100644
> index 000..bc02864
> --- /dev/null
> +++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
> @@ -0,0 +1,762 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Inter Processor Interrupt(IPI) Mailbox Driver
> + *
> + * Copyright (C) 2018 Xilinx Inc.
> + *
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +/* IPI agent ID any */
> +#define IPI_ID_ANY 0xFFUL
> +
> +/* indicate if ZynqMP IPI mailbox driver uses SMC calls or HVC calls */
> +#define USE_SMC 0
> +#define USE_HVC 1
> +
> +/* Default IPI SMC function IDs */
> +#define SMC_IPI_MAILBOX_OPEN0x82001000U
> +#define SMC_IPI_MAILBOX_RELEASE 0x82001001U
> +#define SMC_IPI_MAILBOX_STATUS_ENQUIRY  0x82001002U
> +#define SMC_IPI_MAILBOX_NOTIFY  0x82001003U
> +#define SMC_IPI_MAILBOX_ACK 0x82001004U
> +#define SMC_IPI_MAILBOX_ENABLE_IRQ  0x82001005U
> +#define SMC_IPI_MAILBOX_DISABLE_IRQ 0x82001006U
> +
> +/* IPI SMC Macros */
> +#define IPI_SMC_OPEN_IRQ_MASK		0x0001UL /* IRQ enable bit in IPI
> +						  * open SMC call
> +						  */
> +#define IPI_SMC_NOTIFY_BLOCK_MASK	0x0001UL /* Flag to indicate if
> +						  * IPI notification needs
> +						  * to be blocking.
> +						  */
> +#define IPI_SMC_ENQUIRY_DIRQ_MASK	0x0001UL /* Flag to indicate if
> +						  * notification interrupt
> +						  * to be disabled.
> +						  */
> +#define IPI_SMC_ACK_EIRQ_MASK		0x0001UL /* Flag to indicate if
> +						  * notification interrupt
> +						  * to be enabled.
> +						  */
> +
> +/* IPI mailbox status */
> +#define IPI_MB_STATUS_IDLE  0
> +#define IPI_MB_STATUS_SEND_PENDING  1
> +#define IPI_MB_STATUS_RECV_PENDING  2
> +
> +#define IPI_MB_CHNL_

RE: [PATCH v5 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox

2018-11-12 Thread Jiaying Liang



> -Original Message-
> From: Rob Herring [mailto:r...@kernel.org]
> Sent: Monday, November 12, 2018 9:56 AM
> To: Jiaying Liang 
> Cc: jassisinghb...@gmail.com; Michal Simek ;
> mark.rutl...@arm.com; linux-kernel@vger.kernel.org; linux-arm-
> ker...@lists.infradead.org; devicet...@vger.kernel.org
> Subject: Re: [PATCH v5 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox
> 
> On Mon, Nov 05, 2018 at 02:37:01PM -0800, Wendy Liang wrote:
> > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > ZynqMP SoC used for the communication between various processor
> > systems.
> >
> > Signed-off-by: Wendy Liang 
> > ---
> >  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 128
> +
> >  1 file changed, 128 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> >
> > diff --git
> > a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.tx
> > t
> > b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.tx
> > t
> > new file mode 100644
> > index 000..18fd7b4
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbo
> > +++ x.txt
> > @@ -0,0 +1,128 @@
> > +Xilinx IPI Mailbox Controller
> > +
> > +
> > +The Xilinx IPI (Inter Processor Interrupt) mailbox controller is used to
> > +manage messaging between two Xilinx Zynq UltraScale+ MPSoC IPI
> > +agents. Each IPI agent owns registers used for notification and buffers
> > +for messages.
> > +
> > +   +-+
> > +   | Xilinx ZynqMP IPI Controller|
> > +   +-+
> > ++--+
> > +ATF| |
> > +   | |
> > +   | |
> > ++--+ |
> > +   | |
> > +   | |
> > ++--+
> > ++--+
> > +|  ++   ++ |
> > +Hardware|  |  IPI Agent |   |  IPI Buffers   | |
> > +|  |  Registers |   || |
> > +|  ||   || |
> > +|  ++   ++ |
> > +|  |
> > +| Xilinx IPI Agent Block   |
> > ++--+
> > +
> > +
> > +Controller Device Node:
> > +===
> > +Required properties:
> > +
> > +IPI agent node:
> > +- compatible:  Shall be: "xlnx,zynqmp-ipi-mailbox"
> > +- interrupt-parent:Phandle for the interrupt controller
> > +- interrupts:  Interrupt information corresponding to the
> > +   interrupt-names property.
> > +- xlnx,ipi-id: local Xilinx IPI agent ID
> > +- #address-cells:  number of address cells of internal IPI mailbox nodes
> > +- #size-cells: number of size cells of internal IPI mailbox 
> > nodes
> > +
> > +Internal IPI mailbox node:
> > +- reg: IPI buffers address ranges
> > +- reg-names:   Names of the reg resources. It should have:
> > +   * local_request_region
> > + - IPI request msg buffer written by local and read
> > +   by remote
> > +   * local_response_region
> > + - IPI response msg buffer written by local and read
> > +   by remote
> > +   * remote_request_region
> > + - IPI request msg buffer written by remote and read
> > +   by local
> > +   * remote_response_region
> > + - IPI response msg buffer written by remote and
> read
> > +   by local
> > +- #mbox-cells: Shall be 1. It contains:
> > +   * tx(0) or rx(1) channel
> > +- xlnx,ipi-id: remote Xilinx IPI agent ID of which the mailbox 
> > is
> > +   

RE: [PATCH v5 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox

2018-11-12 Thread Jiaying Liang
Ping, any comments?
Thanks,
Wendy

> -Original Message-
> From: Wendy Liang [mailto:wendy.li...@xilinx.com]
> Sent: Monday, November 05, 2018 2:37 PM
> To: jassisinghb...@gmail.com; Michal Simek ;
> robh...@kernel.org; mark.rutl...@arm.com
> Cc: linux-kernel@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; Jiaying Liang 
> Subject: [PATCH v5 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox
> 
> Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in ZynqMP
> SoC used for the communication between various processor systems.
> 
> Signed-off-by: Wendy Liang 
> ---
>  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 128
> +
>  1 file changed, 128 insertions(+)
>  create mode 100644
> Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> 
> diff --git a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.txt b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.txt
> new file mode 100644
> index 000..18fd7b4
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.
> +++ txt
> @@ -0,0 +1,128 @@
> +Xilinx IPI Mailbox Controller
> +
> +
> +The Xilinx IPI (Inter Processor Interrupt) mailbox controller is used to
> +manage messaging between two Xilinx Zynq UltraScale+ MPSoC IPI agents.
> +Each IPI agent owns registers used for notification and buffers for messages.
> +
> +   +-+
> +   | Xilinx ZynqMP IPI Controller|
> +   +-+
> ++--+
> +ATF| |
> +   | |
> +   | |
> ++--+ |
> +   | |
> +   | |
> ++--+
> ++--+
> +|  ++   ++ |
> +Hardware|  |  IPI Agent |   |  IPI Buffers   | |
> +|  |  Registers |   || |
> +|  ||   || |
> +|  ++   ++ |
> +|  |
> +| Xilinx IPI Agent Block   |
> ++--+
> +
> +
> +Controller Device Node:
> +===
> +Required properties:
> +
> +IPI agent node:
> +- compatible:Shall be: "xlnx,zynqmp-ipi-mailbox"
> +- interrupt-parent:  Phandle for the interrupt controller
> +- interrupts:Interrupt information corresponding to the
> + interrupt-names property.
> +- xlnx,ipi-id:   local Xilinx IPI agent ID
> +- #address-cells:number of address cells of internal IPI mailbox nodes
> +- #size-cells:   number of size cells of internal IPI mailbox 
> nodes
> +
> +Internal IPI mailbox node:
> +- reg:   IPI buffers address ranges
> +- reg-names: Names of the reg resources. It should have:
> + * local_request_region
> +   - IPI request msg buffer written by local and read
> + by remote
> + * local_response_region
> +   - IPI response msg buffer written by local and read
> + by remote
> + * remote_request_region
> +   - IPI request msg buffer written by remote and read
> + by local
> + * remote_response_region
> +   - IPI response msg buffer written by remote and
> read
> + by local
> +- #mbox-cells:   Shall be 1. It contains:
> + * tx(0) or rx(1) channel
> +- xlnx,ipi-id:   remote Xilinx IPI agent ID to which the mailbox
> +                 is connected.
> +
> +Optional properties:
> +
> +- method:  The method of accessing the IPI agent registers.
> +   Permitted values are: "smc" and "hvc". Default is
> +   "smc".
> +
> +Client Device Node:
> +===
> +Required properties:
> +
> +- mboxes:  
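As an illustration of the binding above, a sketch of a controller node with one internal mailbox and a client node (all unit addresses, interrupt numbers, labels and IPI IDs below are hypothetical, not taken from the patch):

```dts
/* Illustrative only -- values are placeholders */
zynqmp_ipi: zynqmp-ipi {
	compatible = "xlnx,zynqmp-ipi-mailbox";
	interrupt-parent = <&gic>;
	interrupts = <0 29 4>;
	xlnx,ipi-id = <0>;		/* local IPI agent ID */
	#address-cells = <1>;
	#size-cells = <1>;

	ipi_mailbox_rpu0: mailbox@ff990400 {
		reg = <0xff990400 0x20>,
		      <0xff990420 0x20>,
		      <0xff990080 0x20>,
		      <0xff9900a0 0x20>;
		reg-names = "local_request_region",
			    "local_response_region",
			    "remote_request_region",
			    "remote_response_region";
		#mbox-cells = <1>;	/* cell selects tx(0) or rx(1) */
		xlnx,ipi-id = <1>;	/* remote IPI agent ID */
	};
};

client {
	mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
	mbox-names = "tx", "rx";
};
```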

RE: [PATCH v4 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox

2018-10-29 Thread Jiaying Liang


> -Original Message-
> From: Wendy Liang [mailto:sunnylian...@gmail.com]
> Sent: Wednesday, October 10, 2018 3:54 PM
> To: Sudeep Holla 
> Cc: Jiaying Liang ; Jassi Brar ;
> Michal Simek ; Rob Herring ;
> Mark Rutland ; Devicetree List
> ; Linux Kernel Mailing List  ker...@vger.kernel.org>; linux-arm-kernel  ker...@lists.infradead.org>
> Subject: Re: [PATCH v4 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox
> 
> On Wed, Oct 10, 2018 at 2:59 AM Sudeep Holla 
> wrote:
> >
> > On Wed, Oct 10, 2018 at 12:18:32AM -0700, Wendy Liang wrote:
> > > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > > ZynqMP SoC used for the communication between various processor
> > > systems.
> > >
> > > Signed-off-by: Wendy Liang 
> >
> > [...]
> >
> > > +Optional properties:
> > > +
> > > +- method:  The method of accessing the IPI agent registers.
> > > +   Permitted values are: "smc" and "hvc". Default is
> > > +   "smc".
> >
> > You are mixing the hardware messaging based mailbox and the software
> > "smc/hvc" based mailbox together here. Please keep them separated.
> > IIUC smc/hvc based mailbox is used for "tx" or, to keep it simple, in
> > one direction and hardware based is used for "rx" or the other
> > direction for communication.
> >
> Hi Sudeep,
> 
> Thanks for your comments.
> 
> The IPI hardware block has both buffers and registers. The hardware block
> has dedicated buffers for each mailbox, and thus, in the implementation,
> we directly access the buffers from the IPI driver. However, the controller
> registers are shared between mailboxes in the hardware; as the ATF will also
> access the registers, we need to use SMC/HVC to access them (control
> or ISR). The SMC/HVC here is for register access.
> 
> I am not clear on why an smc/hvc based mailbox would be used for tx and a
> hardware based one for rx. For both TX and RX, we need to write/read the
> registers (through SMC) and directly write/read the buffers provided by the
> IPI hardware block.
[Wendy] Hi Sudeep,

The SMC/HVC calls are for hardware register access, not for sending messages.
Do you have further comments or are you fine with the explanation?

Thanks,
Wendy

> 
> Thanks,
> Wendy
> 
> > You *should not* mix them as single unit. Also lots of other vendor
> > need SMC/HVC based mailbox. So make it generic and keep it separate.
> >
> > --
> > Regards,
> > Sudeep
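The split described in this thread — SMC/HVC only for the shared control registers, direct access to the per-mailbox buffers — can be sketched as a minimal user-space model (function names and the trigger stub are hypothetical):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stands in for the memory-mapped per-mailbox request buffer. */
static uint8_t req_buf[32];

/* Counts "trigger" SMC calls; the real driver would arm_smccc_smc()/hvc()
 * into ATF, which owns the shared IPI control registers. */
static int smc_trigger_calls;

static void smc_notify(void)
{
	smc_trigger_calls++; /* ATF would write the trigger register here */
}

/* Send path as described above: the payload goes straight into the
 * dedicated per-mailbox buffer; SMC is needed only for the shared
 * notification register. */
static int ipi_send(const uint8_t *data, size_t len)
{
	if (len > sizeof(req_buf))
		return -1;
	memcpy(req_buf, data, len); /* direct buffer access, no SMC */
	smc_notify();               /* SMC only to kick the register */
	return 0;
}
```

This is why the driver is not a pure "SMC mailbox": the message transport is the hardware buffer, and only the register side is mediated by firmware.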


RE: [PATCH 6/7] remoteproc: Add Xilinx ZynqMP R5 remoteproc

2018-09-10 Thread Jiaying Liang



> -Original Message-
> From: Loic PALLARDY [mailto:loic.palla...@st.com]
> Sent: Monday, September 10, 2018 1:25 PM
> To: Jiaying Liang ; o...@wizery.com;
> bjorn.anders...@linaro.org; Michal Simek ;
> robh...@kernel.org; mark.rutl...@arm.com; Rajan Vaja
> ; Jolly Shah 
> Cc: linux-remotep...@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-kernel@vger.kernel.org; Jiaying Liang
> 
> Subject: RE: [PATCH 6/7] remoteproc: Add Xilinx ZynqMP R5 remoteproc
> 
> Hi Wendy,
> Please find below few comments.
> 
> > -Original Message-
> > From: linux-remoteproc-ow...@vger.kernel.org  > ow...@vger.kernel.org> On Behalf Of Wendy Liang
> > Sent: Thursday, August 16, 2018 9:06 AM
> > To: o...@wizery.com; bjorn.anders...@linaro.org;
> > michal.si...@xilinx.com; robh...@kernel.org; mark.rutl...@arm.com;
> > rajan.v...@xilinx.com; jol...@xilinx.com
> > Cc: linux-remotep...@vger.kernel.org; linux-arm-
> > ker...@lists.infradead.org; devicet...@vger.kernel.org; linux-
> > ker...@vger.kernel.org; Wendy Liang 
> > Subject: [PATCH 6/7] remoteproc: Add Xilinx ZynqMP R5 remoteproc
> >
> > There are cortex-r5 processors in Xilinx Zynq UltraScale+ MPSoC
> > platforms. This remoteproc driver is to manage the
> > R5 processors.
> >
> > Signed-off-by: Wendy Liang 
> 
> Jason Wu' signed-off-by missing as he is mentioned as author of this driver?
[Wendy] He was the one who wrote the initial version of the driver,
but he left the company a few years ago. In that case, maybe I should
remove the module author.
> 
> > ---
> >  drivers/remoteproc/Kconfig|   9 +
> >  drivers/remoteproc/Makefile   |   1 +
> >  drivers/remoteproc/zynqmp_r5_remoteproc.c | 692
> > ++
> >  3 files changed, 702 insertions(+)
> >  create mode 100644 drivers/remoteproc/zynqmp_r5_remoteproc.c
> >
> > diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> > index cd1c168..83aac63 100644
> > --- a/drivers/remoteproc/Kconfig
> > +++ b/drivers/remoteproc/Kconfig
> > @@ -158,6 +158,15 @@ config ST_REMOTEPROC  config
> ST_SLIM_REMOTEPROC
> > tristate
> >
> > +config ZYNQMP_R5_REMOTEPROC
> > +   tristate "ZynqMP_r5 remoteproc support"
> > +   depends on ARM64 && PM && ARCH_ZYNQMP
> > +   select RPMSG_VIRTIO
> > +   select ZYNQMP_FIRMWARE
> > +   help
> > + Say y here to support ZynqMP R5 remote processors via the remote
> > + processor framework.
> > +
> >  endif # REMOTEPROC
> >
> >  endmenu
> > diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
> > index 02627ed..147923c 100644
> > --- a/drivers/remoteproc/Makefile
> > +++ b/drivers/remoteproc/Makefile
> > @@ -23,3 +23,4 @@ qcom_wcnss_pil-y  +=
> > qcom_wcnss.o
> >  qcom_wcnss_pil-y   += qcom_wcnss_iris.o
> >  obj-$(CONFIG_ST_REMOTEPROC)+= st_remoteproc.o
> >  obj-$(CONFIG_ST_SLIM_REMOTEPROC)   += st_slim_rproc.o
> > +obj-$(CONFIG_ZYNQMP_R5_REMOTEPROC) += zynqmp_r5_remoteproc.o
> > diff --git a/drivers/remoteproc/zynqmp_r5_remoteproc.c
> > b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> > new file mode 100644
> > index 000..7fc3718
> > --- /dev/null
> > +++ b/drivers/remoteproc/zynqmp_r5_remoteproc.c
> > @@ -0,0 +1,692 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Zynq R5 Remote Processor driver
> > + *
> > + * Copyright (C) 2015 Xilinx, Inc.
> > + *
> > + */
> > +
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> > +#include 
> 
> Includes to be classified in alphabetical order
[Wendy] Will do in the next version
> 
> > +
> > +#include "remoteproc_internal.h"
> > +
> > +/* IPI reg offsets */
> > +#define TRIG_OFFSET0x
> > +#define OBS_OFFSET 0x0004
> > +#define ISR_OFFSET 0x0010
> > +#define IMR_OFFSET 0x0014
> > +#define IER_OFFSET 0x0018
> > +#define IDR_OFFSET 0x001C
> > +#define IPI_ALL_MASK   0x0F0F0301
> > +
> > +/* RPU IPI mask */
> > +#define RPU_IPI_INIT_MASK  0x0100
> > +#

Fwd: [linux-sunxi] Re: [PATCH v2 2/3] mailbox: introduce ARM SMC based mailbox

2018-07-31 Thread Jiaying Liang
Added missing maintainers from the previous reply

On Sunday, 23 July 2017 16:26:55 UTC-7, Andre Przywara wrote:
>
> This mailbox driver implements a mailbox which signals transmitted data
> via an ARM smc (secure monitor call) instruction. The mailbox receiver
> is implemented in firmware and can synchronously return data when it
> returns execution to the non-secure world again.
> An asynchronous receive path is not implemented.
> This allows the usage of a mailbox to trigger firmware actions on SoCs
> which either don't have a separate management processor or on which such
> a core is not available. A user of this mailbox could be the SCP
> interface.
>
> Signed-off-by: Andre Przywara 
> ---
>  drivers/mailbox/Kconfig   |   8 ++
>  drivers/mailbox/Makefile  |   2 +
>  drivers/mailbox/arm-smc-mailbox.c | 155 
> ++
>
>  3 files changed, 165 insertions(+)
>  create mode 100644 drivers/mailbox/arm-smc-mailbox.c
>
> diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
> index c5731e5..5664b7f 100644
> --- a/drivers/mailbox/Kconfig
> +++ b/drivers/mailbox/Kconfig
> @@ -170,4 +170,12 @@ config BCM_FLEXRM_MBOX
>Mailbox implementation of the Broadcom FlexRM ring manager,
>which provides access to various offload engines on Broadcom
>SoCs. Say Y here if you want to use the Broadcom FlexRM.
> +
> +config ARM_SMC_MBOX
> +tristate "Generic ARM smc mailbox"
> +depends on OF && HAVE_ARM_SMCCC
> +help
> +  Generic mailbox driver which uses ARM smc calls to call into
> +  firmware for triggering mailboxes.
> +
>  endif
> diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile
> index d54e412..8ec6869 100644
> --- a/drivers/mailbox/Makefile
> +++ b/drivers/mailbox/Makefile
> @@ -35,3 +35,5 @@ obj-$(CONFIG_BCM_FLEXRM_MBOX)+=
> bcm-flexrm-mailbox.o
>  obj-$(CONFIG_QCOM_APCS_IPC)+= qcom-apcs-ipc-mailbox.o
>
>  obj-$(CONFIG_TEGRA_HSP_MBOX)+= tegra-hsp.o
> +
> +obj-$(CONFIG_ARM_SMC_MBOX)+= arm-smc-mailbox.o
> diff --git a/drivers/mailbox/arm-smc-mailbox.c
> b/drivers/mailbox/arm-smc-mailbox.c
> new file mode 100644
> index 000..d7b61a7
> --- /dev/null
> +++ b/drivers/mailbox/arm-smc-mailbox.c
> @@ -0,0 +1,155 @@
> +/*
> + *  Copyright (C) 2016,2017 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This device provides a mechanism for emulating a mailbox by using
> + * smc calls, allowing a "mailbox" consumer to sit in firmware running
> + * on the same core.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#define ARM_SMC_MBOX_USE_HVCBIT(0)
> +
> +struct arm_smc_chan_data {
> +u32 function_id;
> +u32 flags;
> +};
> +
> +static int arm_smc_send_data(struct mbox_chan *link, void *data)
> +{
> +struct arm_smc_chan_data *chan_data = link->con_priv;
> +u32 function_id = chan_data->function_id;
> +struct arm_smccc_res res;
> +u32 msg = *(u32 *)data;
> +
> +if (chan_data->flags & ARM_SMC_MBOX_USE_HVC)
> +arm_smccc_hvc(function_id, msg, 0, 0, 0, 0, 0, 0, &res);
> +else
> +arm_smccc_smc(function_id, msg, 0, 0, 0, 0, 0, 0, &res);
> +
> +mbox_chan_received_data(link, (void *)res.a0);
> +
> +return 0;
> +}
>
We have a use case that the message to post to the mailbox is larger
than 32bit. Can we change the SMC request to take the pointer as the
message argument instead of the the value?
But in this case, I am not clear on how the ATF to validate if the pointer
is valid. Any suggestions?

Furthermore, the received response can be larger that smc response
a0, any suggestion to solve this issue? reuse the input data pointer for
ATF to write copy the response data?

In case of asynchronous request, the request can be from remote first.
How to solve this issue to use a generic SMC mailbox driver?
Use mailbox mbox_send_message() for a separate rx request channel?

Thanks,
Wendy

+
> +/* This mailbox is synchronous, so we are always done. */
> +static bool arm_smc_last_tx_done(struct mbox_chan *link)
> +{
> +return true;
> +}
> +
> +static const struct mbox_chan_ops arm_smc_mbox_chan_ops = {
> +.send_data= arm_smc_send_data,
> +.last_tx_done= arm_smc_last_tx_done
> +};
> +
> +static int arm_smc_mbox_probe(struct platform_device *pdev)
> +{
> +struct device *dev = >dev;
> +struct mbox_controller *mbox;
> +struct arm_smc_chan_data *chan_data;
> +const char *method;
> +bool use_hvc = false;
> +int ret, i;
> +
> +ret = of_property_count_elems_of_size(dev->of_node,
> "arm,func-ids",
> +  sizeof(u32));
> +   

Fwd: [linux-sunxi] Re: [PATCH v2 2/3] mailbox: introduce ARM SMC based mailbox

2018-07-31 Thread Jiaying Liang
Added missing maintainers from the previous reply

On Sunday, 23 July 2017 16:26:55 UTC-7, Andre Przywara wrote:
>
> This mailbox driver implements a mailbox which signals transmitted data
> via an ARM smc (secure monitor call) instruction. The mailbox receiver
> is implemented in firmware and can synchronously return data when it
> returns execution to the non-secure world again.
> An asynchronous receive path is not implemented.
> This allows the usage of a mailbox to trigger firmware actions on SoCs
> which either don't have a separate management processor or on which such
> a core is not available. A user of this mailbox could be the SCP
> interface.
>
> Signed-off-by: Andre Przywara 
> ---
>  drivers/mailbox/Kconfig   |   8 ++
>  drivers/mailbox/Makefile  |   2 +
>  drivers/mailbox/arm-smc-mailbox.c | 155 
> ++
>
>  3 files changed, 165 insertions(+)
>  create mode 100644 drivers/mailbox/arm-smc-mailbox.c
>
> diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
> index c5731e5..5664b7f 100644
> --- a/drivers/mailbox/Kconfig
> +++ b/drivers/mailbox/Kconfig
> @@ -170,4 +170,12 @@ config BCM_FLEXRM_MBOX
>Mailbox implementation of the Broadcom FlexRM ring manager,
>which provides access to various offload engines on Broadcom
>SoCs. Say Y here if you want to use the Broadcom FlexRM.
> +
> +config ARM_SMC_MBOX
> +tristate "Generic ARM smc mailbox"
> +depends on OF && HAVE_ARM_SMCCC
> +help
> +  Generic mailbox driver which uses ARM smc calls to call into
> +  firmware for triggering mailboxes.
> +
>  endif
> diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile
> index d54e412..8ec6869 100644
> --- a/drivers/mailbox/Makefile
> +++ b/drivers/mailbox/Makefile
> @@ -35,3 +35,5 @@ obj-$(CONFIG_BCM_FLEXRM_MBOX)	+= bcm-flexrm-mailbox.o
>  obj-$(CONFIG_QCOM_APCS_IPC)	+= qcom-apcs-ipc-mailbox.o
>
>  obj-$(CONFIG_TEGRA_HSP_MBOX)	+= tegra-hsp.o
> +
> +obj-$(CONFIG_ARM_SMC_MBOX)	+= arm-smc-mailbox.o
> diff --git a/drivers/mailbox/arm-smc-mailbox.c b/drivers/mailbox/arm-smc-mailbox.c
> new file mode 100644
> index 000..d7b61a7
> --- /dev/null
> +++ b/drivers/mailbox/arm-smc-mailbox.c
> @@ -0,0 +1,155 @@
> +/*
> + *  Copyright (C) 2016,2017 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This device provides a mechanism for emulating a mailbox by using
> + * smc calls, allowing a "mailbox" consumer to sit in firmware running
> + * on the same core.
> + */
> +
> +#include <linux/arm-smccc.h>
> +#include <linux/device.h>
> +#include <linux/kernel.h>
> +#include <linux/mailbox_controller.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +
> +#define ARM_SMC_MBOX_USE_HVC	BIT(0)
> +
> +struct arm_smc_chan_data {
> +u32 function_id;
> +u32 flags;
> +};
> +
> +static int arm_smc_send_data(struct mbox_chan *link, void *data)
> +{
> +struct arm_smc_chan_data *chan_data = link->con_priv;
> +u32 function_id = chan_data->function_id;
> +struct arm_smccc_res res;
> +u32 msg = *(u32 *)data;
> +
> +if (chan_data->flags & ARM_SMC_MBOX_USE_HVC)
> +arm_smccc_hvc(function_id, msg, 0, 0, 0, 0, 0, 0, &res);
> +else
> +arm_smccc_smc(function_id, msg, 0, 0, 0, 0, 0, 0, &res);
> +
> +mbox_chan_received_data(link, (void *)res.a0);
> +
> +return 0;
> +}
>
We have a use case where the message to post to the mailbox is larger
than 32 bits. Can we change the SMC request to take a pointer as the
message argument instead of the value itself?
But in this case, I am not clear on how ATF would validate that the
pointer is valid. Any suggestions?

Furthermore, the received response can be larger than the SMC response
register a0. Any suggestions for solving this? Reuse the input data
pointer for ATF to copy the response data into?

In the case of an asynchronous request, the request can come from the
remote side first. How can a generic SMC mailbox driver handle this?
Use mbox_send_message() on a separate rx request channel?

Thanks,
Wendy

+
> +/* This mailbox is synchronous, so we are always done. */
> +static bool arm_smc_last_tx_done(struct mbox_chan *link)
> +{
> +return true;
> +}
> +
> +static const struct mbox_chan_ops arm_smc_mbox_chan_ops = {
> +.send_data= arm_smc_send_data,
> +.last_tx_done= arm_smc_last_tx_done
> +};
> +
> +static int arm_smc_mbox_probe(struct platform_device *pdev)
> +{
> +struct device *dev = &pdev->dev;
> +struct mbox_controller *mbox;
> +struct arm_smc_chan_data *chan_data;
> +const char *method;
> +bool use_hvc = false;
> +int ret, i;
> +
> +ret = of_property_count_elems_of_size(dev->of_node, "arm,func-ids",
> +  sizeof(u32));

RE: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support

2018-01-10 Thread Jiaying Liang

> -Original Message-
> From: Arnaud Pouliquen [mailto:arnaud.pouliq...@st.com]
> Sent: Tuesday, January 09, 2018 4:56 AM
> To: Jiaying Liang ; o...@wizery.com;
> bjorn.anders...@linaro.org
> Cc: linux-remotep...@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: Re: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support
> 
> 
> 
> On 01/05/2018 11:10 PM, Jiaying Liang wrote:
> >
> >
> >> -Original Message-
> >> From: Arnaud Pouliquen [mailto:arnaud.pouliq...@st.com]
> >> Sent: Friday, January 05, 2018 6:48 AM
> >> To: Jiaying Liang ; o...@wizery.com;
> >> bjorn.anders...@linaro.org
> >> Cc: linux-remotep...@vger.kernel.org; linux-kernel@vger.kernel.org;
> >> Jiaying Liang 
> >> Subject: Re: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support
> >>
> >> Hi Wendy,
> >>
> >> Few remarks on your patch.
> >>
> >> On 01/05/2018 12:18 AM, Wendy Liang wrote:
> >>> virtio rpmsg was not implemented to use RPMsg char driver.
> >>> Each virtio ns announcement will create a new RPMsg device which is
> >>> supposed to be bound to an RPMsg driver. It doesn't support dynamic
> >>> endpoints with name service per RPMsg device.
> >>> With RPMsg char driver, you can have multiple endpoints per RPMsg
> >>> device.
> >>>
> >>> Here is the change from this patch:
> >>> * Introduce a macro to indicate if want to use RPMsg char driver
> >>>   for virtio RPMsg. The RPMsg device can either be bounded to
> >>>   a simple RPMsg driver or the RPMsg char driver.
> >>> * Create Virtio RPMsg char device when the virtio RPMsg driver is
> >>>   probed.
> >>> * when there is a remote service announced, keep it in the virtio
> >>>   proc remote services list.
> >>> * when there is an endpoint created, bind it to a remote service
> >>>   from the remote services list. If the service doesn't exist yet,
> >>>   create one and mark the service address as ANY.
> >> Would be nice to simplify the review if patch was split in several
> >> patches (for instance per feature introduced).
> > [Wendy] These changes are made to use the RPMsg char driver.
> > Some items, such as creating an endpoint while the remote hasn't
> > announced the service yet, are a "new feature", but at the moment I am
> > not 100% sure this change follows the right direction.
> >
> >>
> >>>
> >>> Signed-off-by: Wendy Liang 
> >>> ---
> >>> We have different userspace applications that use RPMsg differently;
> >>> what we need is an RPMsg char driver which can support multiple
> >>> endpoints per remote device.
> >>> The virtio rpmsg driver at the moment doesn't support the RPMsg char
> >>> driver.
> >>> Please advise if this is patch is the right direction. If not, any
> >>> suggestions? Thanks
> >>> ---
> >>>  drivers/rpmsg/Kconfig|   8 +
> >>>  drivers/rpmsg/virtio_rpmsg_bus.c | 364
> >>> ++-
> >>>  2 files changed, 365 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/drivers/rpmsg/Kconfig b/drivers/rpmsg/Kconfig index
> >>> 65a9f6b..746f07e 100644
> >>> --- a/drivers/rpmsg/Kconfig
> >>> +++ b/drivers/rpmsg/Kconfig
> >>> @@ -52,4 +52,12 @@ config RPMSG_VIRTIO
> >>>   select RPMSG
> >>>   select VIRTIO
> >>>
> >>> +config RPMSG_VIRTIO_CHAR
> >>> + bool "Enable Virtio RPMSG char device driver support"
> >>> + default y
> >>> + depends on RPMSG_VIRTIO
> >>> + depends on RPMSG_CHAR
> >>> + help
> >>> +   Say y here to enable the RPMSG char device interface.
> >>> +
> >>>  endmenu
> >>> diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c
> >>> b/drivers/rpmsg/virtio_rpmsg_bus.c
> >>> index 82b8300..6e30a3cc 100644
> >>> --- a/drivers/rpmsg/virtio_rpmsg_bus.c
> >>> +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
> >>> @@ -56,6 +56,7 @@
> >>>   * @sendq:   wait queue of sending contexts waiting for a tx
> buffers
> >>>   * @sleepers:number of senders that are waiting for a tx buffer
> >>>   * @ns_ept:  the bus's name service endpoint
> >>> + * @rsvcs:   remote services
> >>>   *
> >>>   * This structure sto

RE: [PATCH v3 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox

2018-01-09 Thread Jiaying Liang


> -Original Message-
> From: Jassi Brar [mailto:jassisinghb...@gmail.com]
> Sent: Tuesday, January 09, 2018 12:00 AM
> To: Jiaying Liang 
> Cc: Michal Simek ; Rob Herring
> ; Mark Rutland ; linux-arm-
> ker...@lists.infradead.org; Devicetree List ;
> Linux Kernel Mailing List ; Jiaying Liang
> 
> Subject: Re: [PATCH v3 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox
> 
> On Fri, Jan 5, 2018 at 5:21 AM, Wendy Liang  wrote:
> > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > ZynqMP SoC used for the communication between various processor
> > systems.
> >
> > Signed-off-by: Wendy Liang 
> > ---
> >  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 104
> +
> >  1 file changed, 104 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> >
> > diff --git
> > a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.tx
> > t
> > b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.tx
> > t
> > new file mode 100644
> > index 000..5e270a3
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbo
> > +++ x.txt
> > @@ -0,0 +1,104 @@
> > +Xilinx IPI Mailbox Controller
> > +
> > +
> > +The Xilinx IPI (Inter Processor Interrupt) mailbox controller is used
> > +to manage messaging between two Xilinx Zynq UltraScale+ MPSoC IPI
> > +agents. Each IPI agent owns registers used for notification and buffers for
> message.
> > +
> > +              +---------------------------------+
> > +              |  Xilinx ZynqMP IPI Controller   |
> > +              +---------------------------------+
> > +                       |               |
> > +  ATF                  |               |
> > +                       |               |
> > + ----------------------|---------------|---------------
> > +                       |               |
> > +  Hardware             |               |
> > +  +--------------------+---------------+------------+
> > +  |  +--------------+       +--------------+        |
> > +  |  |  IPI Agent   |       |  IPI Buffers |        |
> > +  |  |  Registers   |       |              |        |
> > +  |  +--------------+       +--------------+        |
> > +  |                                                 |
> > +  |              Xilinx IPI Agent Block             |
> > +  +-------------------------------------------------+
> > +
> > +
> > +Controller Device Node:
> > +===
> > +Required properties:
> > +
> > +- compatible:  Shall be: "xlnx,zynqmp-ipi-mailbox"
> > +- reg: IPI buffers address ranges
> > +- reg-names:   Names of the reg resources. It should have:
> > +   * local_request_region
> > + - IPI request msg buffer written by local and read
> > +   by remote
> > +   * local_response_region
> > + - IPI response msg buffer written by local and read
> > +   by remote
> > +   * remote_request_region
> > + - IPI request msg buffer written by remote and read
> > +   by local
> > +   * remote_response_region
> > + - IPI response msg buffer written by remote and read
> > +   by local
> >
> shmem is option and external to the controller. It should be passed via
> client's binding.
> Please have a look at Sudeep's proposed patch
> https://www.spinics.net/lists/arm-kernel/msg626120.html
[Wendy] Thanks for the link, but those "buffers" are registers in the hardware,
not memory. It seems a bit hacky to access them as memory.
> 
> > +- #mbox-cells: Shall be 1. It contains:
> > +   * tx(0) or rx(1) channel
> > +- xlnx,ipi-ids:Xilinx IPI agent IDs of the two peers of the
> > +   Xilinx IPI communication channel.
> > +- interrupt-parent:Phandle for the interrupt controller
> >
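
For illustration, a controller node following the properties above might look like the fragment below. The unit address, register offsets and sizes, interrupt numbers, IPI agent IDs, and the `&gic` phandle are placeholders assumed for the example, not values taken from this thread:

```
zynqmp_ipi_mailbox: mailbox@ff990400 {
	compatible = "xlnx,zynqmp-ipi-mailbox";
	reg = <0x0 0xff990400 0x0 0x20>,
	      <0x0 0xff990420 0x0 0x20>,
	      <0x0 0xff990c80 0x0 0x20>,
	      <0x0 0xff990ca0 0x0 0x20>;
	reg-names = "local_request_region", "local_response_region",
		    "remote_request_region", "remote_response_region";
	#mbox-cells = <1>;
	xlnx,ipi-ids = <0 4>;
	interrupt-parent = <&gic>;
	interrupts = <0 29 4>;
};
```

A client would then reference a channel as, e.g., `mboxes = <&zynqmp_ipi_mailbox 0>;` for tx and `<&zynqmp_ipi_mailbox 1>;` for rx, matching the #mbox-cells description above.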

RE: [PATCH v3 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox

2018-01-05 Thread Jiaying Liang


> -Original Message-
> From: Rob Herring [mailto:r...@kernel.org]
> Sent: Friday, January 05, 2018 7:32 AM
> To: Jiaying Liang <jli...@xilinx.com>
> Cc: jassisinghb...@gmail.com; michal.si...@xilinx.com;
> mark.rutl...@arm.com; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; linux-kernel@vger.kernel.org; Jiaying Liang
> <jli...@xilinx.com>
> Subject: Re: [PATCH v3 2/2] dt-bindings: mailbox: Add Xilinx IPI Mailbox
> 
> On Thu, Jan 04, 2018 at 03:51:31PM -0800, Wendy Liang wrote:
> > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > ZynqMP SoC used for the communication between various processor
> > systems.
> >
> > Signed-off-by: Wendy Liang <jli...@xilinx.com>
> > ---
> >  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 104
> +
> >  1 file changed, 104 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> 
> Please add acks and reviewed-by's when posting new versions.
[Wendy] Thanks, will add this in the next version.


RE: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support

2018-01-05 Thread Jiaying Liang


> -Original Message-
> From: Arnaud Pouliquen [mailto:arnaud.pouliq...@st.com]
> Sent: Friday, January 05, 2018 6:48 AM
> To: Jiaying Liang ; o...@wizery.com;
> bjorn.anders...@linaro.org
> Cc: linux-remotep...@vger.kernel.org; linux-kernel@vger.kernel.org; Jiaying
> Liang 
> Subject: Re: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support
> 
> Hi Wendy,
> 
> Few remarks on your patch.
> 
> On 01/05/2018 12:18 AM, Wendy Liang wrote:
> > virtio rpmsg was not implemented to use RPMsg char driver.
> > Each virtio ns announcement will create a new RPMsg device which is
> > supposed to be bound to an RPMsg driver. It doesn't support dynamic
> > endpoints with name service per RPMsg device.
> > With RPMsg char driver, you can have multiple endpoints per RPMsg
> > device.
> >
> > Here is the change from this patch:
> > * Introduce a macro to indicate if want to use RPMsg char driver
> >   for virtio RPMsg. The RPMsg device can either be bounded to
> >   a simple RPMsg driver or the RPMsg char driver.
> > * Create Virtio RPMsg char device when the virtio RPMsg driver is
> >   probed.
> > * when there is a remote service announced, keep it in the virtio
> >   proc remote services list.
> > * when there is an endpoint created, bind it to a remote service
> >   from the remote services list. If the service doesn't exist yet,
> >   create one and mark the service address as ANY.
> Would be nice to simplify the review if patch was split in several patches 
> (for
> instance per feature introduced).
[Wendy] These changes are made to use the RPMsg char driver.
Some items, such as creating an endpoint while the remote hasn't announced
the service yet, are a "new feature", but at the moment I am not 100% sure
this change follows the right direction.

> 
> >
> > Signed-off-by: Wendy Liang 
> > ---
> > We have different userspace applications that use RPMsg differently;
> > what we need is an RPMsg char driver which can support multiple
> > endpoints per remote device.
> > The virtio rpmsg driver at the moment doesn't support the RPMsg char
> > driver.
> > Please advise if this is patch is the right direction. If not, any
> > suggestions? Thanks
> > ---
> >  drivers/rpmsg/Kconfig|   8 +
> >  drivers/rpmsg/virtio_rpmsg_bus.c | 364
> > ++-
> >  2 files changed, 365 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/rpmsg/Kconfig b/drivers/rpmsg/Kconfig index
> > 65a9f6b..746f07e 100644
> > --- a/drivers/rpmsg/Kconfig
> > +++ b/drivers/rpmsg/Kconfig
> > @@ -52,4 +52,12 @@ config RPMSG_VIRTIO
> > select RPMSG
> > select VIRTIO
> >
> > +config RPMSG_VIRTIO_CHAR
> > +   bool "Enable Virtio RPMSG char device driver support"
> > +   default y
> > +   depends on RPMSG_VIRTIO
> > +   depends on RPMSG_CHAR
> > +   help
> > + Say y here to enable to use RPMSG char device interface.
> > +
> >  endmenu
> > diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c
> > b/drivers/rpmsg/virtio_rpmsg_bus.c
> > index 82b8300..6e30a3cc 100644
> > --- a/drivers/rpmsg/virtio_rpmsg_bus.c
> > +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
> > @@ -56,6 +56,7 @@
> >   * @sendq: wait queue of sending contexts waiting for a tx buffers
> >   * @sleepers:  number of senders that are waiting for a tx buffer
> >   * @ns_ept:the bus's name service endpoint
> > + * @rsvcs: remote services
> >   *
> >   * This structure stores the rpmsg state of a given virtio remote processor
> >   * device (there might be several virtio proc devices for each
> > physical @@ -75,6 +76,9 @@ struct virtproc_info {
> > wait_queue_head_t sendq;
> > atomic_t sleepers;
> > struct rpmsg_endpoint *ns_ept;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > +   struct list_head rsvcs;
> > +#endif
> >  };
> >
> >  /* The feature bitmap for virtio rpmsg */ @@ -141,6 +145,36 @@ struct
> > virtio_rpmsg_channel {  #define to_virtio_rpmsg_channel(_rpdev) \
> > container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > +/**
> > + * struct virtio_rpmsg_rsvc - virtio RPMsg remote service
> > + * @name: name of the RPMsg remote service
> > + * @addr: RPMsg address of the remote service
> > + * @ept:  local endpoint bound to the remote service
> > + * @node: list node
> > + */
> > +struct virtio_rpmsg_rsvc {
> > +   char name[RPMSG_NAME_SIZE];
> > +   u32 addr;
> > +   str

RE: [RFC LINUX PATCH] Documentation: dt: mailbox: Add Xilinx IPI Mailbox

2017-09-22 Thread Jiaying Liang
Hi Sudeep,

> -Original Message-
> From: Sudeep Holla [mailto:sudeep.ho...@arm.com]
> Sent: Friday, September 22, 2017 4:10 AM
> To: Jiaying Liang <jli...@xilinx.com>
> Cc: linux-kernel@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; jassisinghb...@gmail.com; Cyril Chemparathy
> <cyr...@xilinx.com>; Michal Simek <mich...@xilinx.com>;
> robh...@kernel.org; mark.rutl...@arm.com; Soren Brinkmann
> <sor...@xilinx.com>; Sudeep Holla <sudeep.ho...@arm.com>
> Subject: Re: [RFC LINUX PATCH] Dcoumentation: dt: mailbox: Add Xilinx IPI
> Mailbox
> 
> On Fri, Sep 22, 2017 at 06:05:18AM +, Jiaying Liang wrote:
> >
> > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > ZynqMP SoC used for the communication between various processor
> systems.
> >
> > Signed-off-by: Wendy Liang <jli...@xilinx.com>
> > ---
> >  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 88
> ++
> >  1 file changed, 88 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> >
> > diff --git
> > a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> > mailbox.txt
> > b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> > mailbox.txt
> > new file mode 100644
> > index 000..5d915d1
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.
> > +++ txt
> > @@ -0,0 +1,88 @@
> > +Xilinx IPI Mailbox Driver
> > +
> > +
> > +The Xilinx IPI(Inter Processor Interrupt) mailbox driver is a mailbox
> > +controller that manages the messaging between two IPI agents. Each
> > +IPI mailbox has request and response buffers between the two IPI agents.
> > +
> > ++-+
> > +| | Xilinx ZynqMP IPI Mailbox
> > +| Controller|
> > +| |
> > +|   +-+
> > +|   | SMC |
> > +|   | |
> > +++--+--+--+
> > + | |
> > + |  +-+
> > + | |   ATF (ARM trusted firmware)
> 
> I suppose it should work with any EL3 firmware, ATF reference can be
> removed IMO.
> 
> > + | |
> > ++-+
> > + | |   Hardware
> > + | |
> > + +--+
> > +   ||
> > + +--+ +---+ |
> > + | | Buffers between| | IPI Agent | |
> > + | | two IPI agents | | Registers | |
> > + | ++ +---+ |
> > + |  |
> > + |   Xilinx ZynqMP IPI  |
> > + +--+
> > +
> > +
> > +Message Manager Device Node:
> > +===
> > +Required properties:
> > +
> > +- compatible:  Shall be: "xlnx,zynqmp-ipi-mailbox"
> > +- ipi-smc-fid-base Base offset of SMC function IDs for IPI mailbox SMC.
> > +   It contains the IPI IDs of the two IPI agents.
> 
> Why is "SMC" associated with this hardware block ? Is it secure device ?
> Can Linux access it ? If so, why do you need SMC ?
[Wendy] one IPI agent uses its own IPI agent registers to notify (write to 
register to raise interrupt)
Other IPI agents. The IPI agent registers are shared between secure and 
non-secure.
And thus, I think about to access the IPI agent registers in ATF. And thus
Use SMC for registers access.

> 
> > +- reg: IPI request and response buffers address
> range. It
> > +   can be the IPI buffers from the hardware or it can
> > +   be carved out shared memory.
> 
> It sounds like buffer used for communication and not part of this IP.
> Shouldn't this be part of mailbox client binding rather than controller
> binding.
[Wendy] the IPI hardware has IPI buffers (32bytes request buffer and 32bytes 
response buffer per)
But we also want user to be able to use other shared memory.
The reason I am considering to make it part of the mailbox is, we can have 
logical
Channels on top of the physical channels.
Each mailbox controller c

RE: [RFC LINUX PATCH] Dcoumentation: dt: mailbox: Add Xilinx IPI Mailbox

2017-09-22 Thread Jiaying Liang
Hi Sudeep,

> -Original Message-
> From: Sudeep Holla [mailto:sudeep.ho...@arm.com]
> Sent: Friday, September 22, 2017 4:10 AM
> To: Jiaying Liang 
> Cc: linux-kernel@vger.kernel.org; linux-arm-ker...@lists.infradead.org;
> devicet...@vger.kernel.org; jassisinghb...@gmail.com; Cyril Chemparathy
> ; Michal Simek ;
> robh...@kernel.org; mark.rutl...@arm.com; Soren Brinkmann
> ; Sudeep Holla 
> Subject: Re: [RFC LINUX PATCH] Dcoumentation: dt: mailbox: Add Xilinx IPI
> Mailbox
> 
> On Fri, Sep 22, 2017 at 06:05:18AM +, Jiaying Liang wrote:
> >
> > Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in
> > ZynqMP SoC used for the communication between various processor
> systems.
> >
> > Signed-off-by: Wendy Liang 
> > ---
> >  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 88
> ++
> >  1 file changed, 88 insertions(+)
> >  create mode 100644
> > Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> >
> > diff --git
> > a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> > mailbox.txt
> > b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> > mailbox.txt
> > new file mode 100644
> > index 000..5d915d1
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.
> > +++ txt
> > @@ -0,0 +1,88 @@
> > +Xilinx IPI Mailbox Driver
> > +
> > +
> > +The Xilinx IPI(Inter Processor Interrupt) mailbox driver is a mailbox
> > +controller that manages the messaging between two IPI agents. Each
> > +IPI mailbox has request and response buffers between the two IPI agents.
> > +
> > ++-+
> > +| | Xilinx ZynqMP IPI Mailbox
> > +| Controller|
> > +| |
> > +|   +-+
> > +|   | SMC |
> > +|   | |
> > +++--+--+--+
> > + | |
> > + |  +-+
> > + | |   ATF (ARM trusted firmware)
> 
> I suppose it should work with any EL3 firmware, ATF reference can be
> removed IMO.
> 
> > + | |
> > ++-+
> > + | |   Hardware
> > + | |
> > + +--+
> > +   ||
> > + +--+ +---+ |
> > + | | Buffers between| | IPI Agent | |
> > + | | two IPI agents | | Registers | |
> > + | ++ +---+ |
> > + |  |
> > + |   Xilinx ZynqMP IPI  |
> > + +--+
> > +
> > +
> > +Message Manager Device Node:
> > +===
> > +Required properties:
> > +
> > +- compatible:  Shall be: "xlnx,zynqmp-ipi-mailbox"
> > +- ipi-smc-fid-base Base offset of SMC function IDs for IPI mailbox SMC.
> > +   It contains the IPI IDs of the two IPI agents.
> 
> Why is "SMC" associated with this hardware block ? Is it secure device ?
> Can Linux access it ? If so, why do you need SMC ?
[Wendy] One IPI agent uses its own IPI agent registers (writing to a register 
raises an interrupt) to notify other IPI agents. The IPI agent registers are 
shared between the secure and non-secure worlds. That is why I am considering 
accessing the IPI agent registers in ATF, and hence using SMC for register 
access.

> 
> > +- reg: IPI request and response buffers address
> range. It
> > +   can be the IPI buffers from the hardware or it can
> > +   be carved out shared memory.
> 
> It sounds like buffer used for communication and not part of this IP.
> Shouldn't this be part of mailbox client binding rather than controller
> binding.
[Wendy] The IPI hardware has IPI buffers (a 32-byte request buffer and a 
32-byte response buffer per connection), but we also want users to be able to 
use other shared memory. The reason I am considering making it part of the 
mailbox is that we can have logical channels on top of the physical channels: 
each mailbox controller controls the physical connection (IPI agent registers 
and buffers), and each mailbox client requests a logical channel.

> 
> > +- reg-names:   Reg resource name of th

RE: [RFC LINUX PATCH] Dcoumentation: dt: mailbox: Add Xilinx IPI Mailbox

2017-09-22 Thread Jiaying Liang
> -Original Message-
> From: Wendy Liang [mailto:wendy.li...@xilinx.com]
> Sent: Thursday, September 21, 2017 3:59 PM
> To: linux-kernel@vger.kernel.org
> Cc: jassisinghb...@gmail.com; Cyril Chemparathy <cyr...@xilinx.com>;
> Michal Simek <mich...@xilinx.com>; Jiaying Liang <jli...@xilinx.com>
> Subject: [RFC LINUX PATCH] Dcoumentation: dt: mailbox: Add Xilinx IPI
> Mailbox
> 
> Xilinx ZynqMP IPI(Inter Processor Interrupt) is a hardware block in ZynqMP
> SoC used for the communication between various processor systems.
> 
> Signed-off-by: Wendy Liang <jli...@xilinx.com>
> ---
>  .../bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt   | 88
> ++
>  1 file changed, 88 insertions(+)
>  create mode 100644
> Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.txt
> 
> diff --git a/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.txt b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-
> mailbox.txt
> new file mode 100644
> index 000..5d915d1
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/mailbox/xlnx,zynqmp-ipi-mailbox.
> +++ txt
> @@ -0,0 +1,88 @@
> +Xilinx IPI Mailbox Driver
> +
> +
> +The Xilinx IPI(Inter Processor Interrupt) mailbox driver is a mailbox
> +controller that manages the messaging between two IPI agents. Each IPI
> +mailbox has request and response buffers between the two IPI agents.
> +
> ++-+
> +| | Xilinx ZynqMP IPI Mailbox
> +| Controller|
> +| |
> +|   +-+
> +|   | SMC |
> +|   | |
> +++--+--+--+
> + | |
> + |  +-+
> + | |   ATF (ARM trusted firmware)
> + | |
> ++-+
> + | |   Hardware
> + | |
> + +--+
> +   ||
> + +--+ +---+ |
> + | | Buffers between| | IPI Agent | |
> + | | two IPI agents | | Registers | |
> + | ++ +---+ |
> + |  |
> + |   Xilinx ZynqMP IPI  |
> + +--+
> +
> +
> +Message Manager Device Node:
> +===
> +Required properties:
> +
> +- compatible:Shall be: "xlnx,zynqmp-ipi-mailbox"
> +- ipi-smc-fid-base   Base offset of SMC function IDs for IPI mailbox SMC.
> + It contains the IPI IDs of the two IPI agents.
> +- reg:   IPI request and response buffers address range. 
> It
> + can be the IPI buffers from the hardware or it can
> + be carved out shared memory.
> +- reg-names: Reg resource name of the IPI request and response
> + buffers.
> +- #mbox-cells:   Shall be 1. Contains the logical channel IDs of 
> the
> + channels on the IPI mailbox.
> +- interrupt-parent:  Phandle for the interrupt controller.
> +- interrupts:Interrupt mapping.
> +
> +Required properties:
> +
> +- method:The method of accessing the IPI agent registers.
> + Permitted values are: "smc" and "hvc". Default is
> + "smc".
> +Example:
> +
> + /* APU IPI mailbox driver */
> + ipis {
> + #address-cells = <1>;
> + #size-cells = <0>;
> + ipi_mailbox_apu_rpu0: ipi_mailbox@0 {
> + compatible = "xlnx,zynqmp-ipi-mailbox";
> + reg = <0 0xff990400 40>;
> + reg-names = "apu-rpu0";
> + ipi-smc-fid-base = <0x1010>;
> + method = "smc";
> + #mbox-cells = <1>;
> + interrupt-parent = <>;
> + interrupts = <0 35 4>;
> + };
> + ipi_mailbox_apu_rpu1: ipi_mailbox@1 {
> + compatible = "xlnx,zynqmp-ipi-mailbox";
> + reg = <0 0xff990440 40>;
> + reg-names = "apu-rpu1";
> + ipi-smc-fid-base = <0x1020>;
> + method = "smc";
> + #mbox-cells = <1>;
> + interrupt-parent = <>;
> + interrupts = <0 35 4>;
> + };
> + };
> + device0: device0 {
> + ...
> + mbox-names = "rpu0", "rpu1",
> + mboxes = <_mailbox_apu_rpu0 0>,
> +  < _mailbox_apu_rpu1 0>;
> + };
> --
> 2.7.4

cc: device tree and linux arm kernel mailing lists.



RE: [RFC LINUX PATCH 0/3] Allow remote to specify shared memory

2017-03-30 Thread Jiaying Liang
Hi Loic,

> -Original Message-
> From: Loic PALLARDY [mailto:loic.palla...@st.com]
> Sent: Wednesday, March 29, 2017 11:57 AM
> To: Jiaying Liang; Suman Anna; Wendy Liang
> Cc: Bjorn Andersson; linux-remotep...@vger.kernel.org; linux-
> ker...@vger.kernel.org
> Subject: RE: [RFC LINUX PATCH 0/3] Allow remote to specify shared memory
>
>
>
> > -Original Message-
> > From: linux-remoteproc-ow...@vger.kernel.org [mailto:linux-remoteproc-
> > ow...@vger.kernel.org] On Behalf Of Jiaying Liang
> > Sent: Wednesday, March 29, 2017 6:41 PM
> > To: Suman Anna ; Wendy Liang 
> > Cc: Bjorn Andersson ; linux-
> > remotep...@vger.kernel.org; linux-kernel@vger.kernel.org
> > Subject: RE: [RFC LINUX PATCH 0/3] Allow remote to specify shared
> > memory
> >
> > Hi Suman,
> >
> > > -Original Message-
> > > From: Suman Anna [mailto:s-a...@ti.com]
> > > Sent: Tuesday, March 28, 2017 4:24 PM
> > > To: Wendy Liang
> > > Cc: Jiaying Liang; Bjorn Andersson;
> > > linux-remotep...@vger.kernel.org;
> > > linux- ker...@vger.kernel.org; Jiaying Liang
> > > Subject: Re: [RFC LINUX PATCH 0/3] Allow remote to specify shared
> > > memory
> > >
> > > Hi Wendy,
> > >
> > > On 03/28/2017 01:52 PM, Wendy Liang wrote:
> > > > Thanks Suman for your comments.
> > > >
> > > > On Mon, Mar 27, 2017 at 8:54 AM, Suman Anna 
> wrote:
> > > >> Hi Wendy,
> > > >>
> > > >> On 03/24/2017 02:22 PM, Wendy Liang wrote:
> > > >>> This patch enables the remoteproc to specify the shared memory.
> > > >>> Remoteproc declared this memory as DMA memory.
> > > >>> It can be used for virtio, or shared buffers.
> > > >>
> > > >> You should be able to achieve this without any remoteproc core
> > changes.
> > > >> You can do this by defining a reserved-memory node in your DTS
> > > >> file (can be a CMA pool or a DMA pool), assigning the node using
> > > >> memory-region in your remoteproc DT node and using the function,
> > > >> of_reserved_mem_device_init() in your remoteproc driver.
> > > >
> > > > The idea to introduce the rproc_mem is to let the remote to
> > > > specify the shared memory.
> > > > I am trying to see if there is a way to specify this software
> > > > attribute without touching the device tree as it doesn't look like
> > > > it is
> > > hardware related.
> > > > And try to see if there is a way that when I change the firmware,
> > > > i don't need to change the device tree.
> > >
> > > So is this shared memory going to be accessed through an MMU by the
> > > remote processor? If not, don't you need a specific carveout, which
> > > would then in turn mean boot-time memory reservation?
> > [Wendy] This memory is not accessed through MMU by remote.
> > Here is the usecase, the number of remotes can be changed at run time,
> > Also the firmware running on the remotes can be changed. And the
> > remote will Need to memory map those memory before it can use it.
> >
> > From what you have suggested, we reserved memory from the device node,
> > and then remoteproc driver linked to that reserved memory with
> > "memory- region", I suppose different remoteproc drivers can share one
> > "memory- region". However, now the question is how the remote knows
> > the shared memory.
> > Let say I use rpmsg for the communication between the two. But how can
> > remote knows about the shared buffers before it can used it.
> >
> > I  saw Loic has a patch to add virtio config to specify this buffer,
> > however, it is not in the latest linux kernel master. And thus, trying
> > to see if there is another way to solve this issue. Use existing
> > carveout to specify this memory?
> Hi Wendy,
>
> Potential issue with this proposal is the order in which the resource table
> is processed. This memory region should be assigned to the device driver
> before starting any allocation.
[Wendy] The user will need to put it in the first entry of the resource table, 
so that it is declared before the device driver allocates any memory.
> Moreover you can assign only one region to a device. But you need different
> memory regions with different attributes: one for firmware and one for vring
> for example.
> That's why we propose sub-dev mechanism some months ago.
[Wendy]  Just in our case, we only use virtio devices. But I agree, there ca

RE: [RFC LINUX PATCH 0/3] Allow remote to specify shared memory

2017-03-29 Thread Jiaying Liang
Hi Suman,

> -Original Message-
> From: Suman Anna [mailto:s-a...@ti.com]
> Sent: Tuesday, March 28, 2017 4:24 PM
> To: Wendy Liang
> Cc: Jiaying Liang; Bjorn Andersson; linux-remotep...@vger.kernel.org; linux-
> ker...@vger.kernel.org; Jiaying Liang
> Subject: Re: [RFC LINUX PATCH 0/3] Allow remote to specify shared memory
>
> Hi Wendy,
>
> On 03/28/2017 01:52 PM, Wendy Liang wrote:
> > Thanks Suman for your comments.
> >
> > On Mon, Mar 27, 2017 at 8:54 AM, Suman Anna <s-a...@ti.com> wrote:
> >> Hi Wendy,
> >>
> >> On 03/24/2017 02:22 PM, Wendy Liang wrote:
> >>> This patch enables the remoteproc to specify the shared memory.
> >>> Remoteproc declared this memory as DMA memory.
> >>> It can be used for virtio, or shared buffers.
> >>
> >> You should be able to achieve this without any remoteproc core changes.
> >> You can do this by defining a reserved-memory node in your DTS file
> >> (can be a CMA pool or a DMA pool), assigning the node using
> >> memory-region in your remoteproc DT node and using the function,
> >> of_reserved_mem_device_init() in your remoteproc driver.
> >
> > The idea to introduce the rproc_mem is to let the remote to specify
> > the shared memory.
> > I am trying to see if there is a way to specify this software
> > attribute without touching the device tree as it doesn't look like it is
> hardware related.
> > And try to see if there is a way that when I change the firmware, i
> > don't need to change the device tree.
>
> So is this shared memory going to be accessed through an MMU by the
> remote processor? If not, don't you need a specific carveout, which would
> then in turn mean boot-time memory reservation?
[Wendy] This memory is not accessed through an MMU by the remote. Here is the 
use case: the number of remotes can change at run time, and so can the 
firmware running on the remotes. The remote will need to memory-map that 
memory before it can use it.

From what you have suggested, we reserve memory in a device-tree node and then 
link the remoteproc driver to that reserved memory with "memory-region"; I 
suppose different remoteproc drivers can share one "memory-region". However, 
the question now is how the remote learns about the shared memory.
Say I use rpmsg for the communication between the two: how can the remote know 
about the shared buffers before it can use them?

I saw Loic has a patch to add a virtio config field to specify this buffer, 
but it is not in the latest Linux kernel master. So I am trying to see if 
there is another way to solve this issue. Use an existing carveout to specify 
this memory?

Thanks,
Wendy

>
> regards
> Suman
>
> >
> > Thanks,
> > Wendy
> >
> >>
> >> regards
> >> Suman
> >>
> >>>
> >>> Wendy Liang (3):
> >>>   remoteproc: add rproc mem resource entry
> >>>   remoteproc: add rproc_mem resource entry handler
> >>>   remoteproc: Release DMA declare mem when cleanup rsc
> >>>
> >>>  drivers/remoteproc/remoteproc_core.c | 40
> 
> >>>  include/linux/remoteproc.h   | 23 -
> >>>  2 files changed, 62 insertions(+), 1 deletion(-)
> >>>
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe
> >> linux-remoteproc" in the body of a message to
> >> majord...@vger.kernel.org More majordomo info at
> http://vger.kernel.org/majordomo-info.html



This email and any attachments are intended for the sole use of the named 
recipient(s) and contain(s) confidential information that may be proprietary, 
privileged or copyrighted under applicable law. If you are not the intended 
recipient, do not read, copy, or forward this email message or any attachments. 
Delete this email message and any attachments immediately.
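For reference, the reserved-memory approach Suman suggests above would look roughly like the following; every node name, address, size, and compatible string here is made up for illustration:

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Carved-out pool shared with the remote processor */
		rproc_shm: rproc-shm@3ed00000 {
			compatible = "shared-dma-pool";
			no-map;
			reg = <0x0 0x3ed00000 0x0 0x100000>;
		};
	};

	remoteproc0: remoteproc@ff9a0000 {
		compatible = "vendor,example-rproc";	/* hypothetical */
		memory-region = <&rproc_shm>;
	};
};
```

The remoteproc driver then claims the region in probe with `of_reserved_mem_device_init(dev)`, after which DMA allocations for vrings and shared buffers come out of that pool. The open question in the thread, how the remote firmware learns these addresses without a device-tree change, is what the proposed resource-table entry is meant to answer.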


