Re: [PATCH v3 0/3] Imagis touch keys and FIELD_GET cleanup

2024-03-09 Thread Dmitry Torokhov
On Wed, Mar 06, 2024 at 03:40:05PM +0100, Duje Mihanović wrote:
> Tiny series to clean up the field extraction and add touch key support.
> This version is based on the next branch of Dmitry's input tree.
> 
> Signed-off-by: Duje Mihanović 
> ---
> Changes in v3:
> - Rebase on input/next
> - Add changelog to binding patch
> - Fix binding constraint
> - Allow changing keycodes in userspace as in 872e57abd171 ("Input:
>   tm2-touchkey - allow changing keycodes from userspace")
> - Allow up to 5 keycodes (the key status field has 5 bits)
> - Link to v2: https://lore.kernel.org/r/20240120-b4-imagis-keys-v2-0-d7fc16f2e...@skole.hr
> 
> Changes in v2:
> - Fix compile error
> - Add FIELD_GET patch
> - Allow specifying custom keycodes
> - Link to v1: https://lore.kernel.org/20231112194124.24916-1-duje.mihano...@skole.hr
> 
> ---
> Duje Mihanović (3):
>   input: touchscreen: imagis: use FIELD_GET where applicable
>   dt-bindings: input: imagis: Document touch keys
>   input: touchscreen: imagis: Add touch key support
> 
>  .../input/touchscreen/imagis,ist3038c.yaml | 19 +++--
>  drivers/input/touchscreen/imagis.c         | 46 --
>  2 files changed, 50 insertions(+), 15 deletions(-)

Applied the lot, thank you.

-- 
Dmitry



Re: [PATCH v4 1/4] remoteproc: Add TEE support

2024-03-09 Thread kernel test robot
Hi Arnaud,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 62210f7509e13a2caa7b080722a45229b8f17a0a]

url:    https://github.com/intel-lab-lkp/linux/commits/Arnaud-Pouliquen/remoteproc-Add-TEE-support/20240308-225116
base:   62210f7509e13a2caa7b080722a45229b8f17a0a
patch link:    https://lore.kernel.org/r/20240308144708.62362-2-arnaud.pouliquen%40foss.st.com
patch subject: [PATCH v4 1/4] remoteproc: Add TEE support
config: arm-randconfig-r123-20240310 (https://download.01.org/0day-ci/archive/20240310/202403101139.nizjmqwp-...@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 13.2.0
reproduce: (https://download.01.org/0day-ci/archive/20240310/202403101139.nizjmqwp-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: https://lore.kernel.org/oe-kbuild-all/202403101139.nizjmqwp-...@intel.com/

sparse warnings: (new ones prefixed by >>)
>> drivers/remoteproc/tee_remoteproc.c:163:19: sparse: sparse: incorrect type in assignment (different address spaces) @@ expected struct resource_table *rsc_table @@ got void [noderef] __iomem * @@
   drivers/remoteproc/tee_remoteproc.c:163:19: sparse: expected struct resource_table *rsc_table
   drivers/remoteproc/tee_remoteproc.c:163:19: sparse: got void [noderef] __iomem *
>> drivers/remoteproc/tee_remoteproc.c:276:23: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void volatile [noderef] __iomem *io_addr @@ got struct resource_table *rsc_table @@
   drivers/remoteproc/tee_remoteproc.c:276:23: sparse: expected void volatile [noderef] __iomem *io_addr
   drivers/remoteproc/tee_remoteproc.c:276:23: sparse: got struct resource_table *rsc_table
   drivers/remoteproc/tee_remoteproc.c:399:38: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected void volatile [noderef] __iomem *io_addr @@ got struct resource_table *rsc_table @@
   drivers/remoteproc/tee_remoteproc.c:399:38: sparse: expected void volatile [noderef] __iomem *io_addr
   drivers/remoteproc/tee_remoteproc.c:399:38: sparse: got struct resource_table *rsc_table
   drivers/remoteproc/tee_remoteproc.c: note: in included file (through arch/arm/include/asm/traps.h, arch/arm/include/asm/thread_info.h, include/linux/thread_info.h, ...):
   include/linux/list.h:83:21: sparse: sparse: self-comparison always evaluates to true

vim +163 drivers/remoteproc/tee_remoteproc.c

   131  
   132  struct resource_table *tee_rproc_get_loaded_rsc_table(struct rproc 
*rproc, size_t *table_sz)
   133  {
   134  struct tee_ioctl_invoke_arg arg;
   135  struct tee_param param[MAX_TEE_PARAM_ARRY_MEMBER];
   136  struct tee_rproc *trproc = rproc->tee_interface;
   137  struct resource_table *rsc_table;
   138  int ret;
   139  
   140  if (!trproc)
   141  return ERR_PTR(-EINVAL);
   142  
   143  tee_rproc_prepare_args(trproc, TA_RPROC_FW_CMD_GET_RSC_TABLE, &arg, param, 2);
   144  
   145  param[1].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
   146  param[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_OUTPUT;
   147  
   148  ret = tee_client_invoke_func(tee_rproc_ctx->tee_ctx, &arg, param);
   149  if (ret < 0 || arg.ret != 0) {
   150  dev_err(tee_rproc_ctx->dev,
   151  "TA_RPROC_FW_CMD_GET_RSC_TABLE invoke failed TEE err: %x, ret:%x\n",
   152  arg.ret, ret);
   153  return ERR_PTR(-EIO);
   154  }
   155  
   156  *table_sz = param[2].u.value.a;
   157  
   158  /* If the size is null no resource table defined in the image */
   159  if (!*table_sz)
   160  return NULL;
   161  
   162  /* Store the resource table address that would be updated by the remote core. */
 > 163  rsc_table = ioremap_wc(param[1].u.value.a, *table_sz);
   164  if (IS_ERR_OR_NULL(rsc_table)) {
   165  dev_err(tee_rproc_ctx->dev, "Unable to map memory region: %lld+%zx\n",
   166  param[1].u.value.a, *table_sz);
   167  return ERR_PTR(-ENOMEM);
   168  }
   169  
   170  return rsc_table;
   171  }
   172  EXPORT_SYMBOL_GPL(tee_rproc_get_loaded_rsc_table);
   173  
   174  int tee_rproc_parse_fw(struct rproc *rproc, const struct firmware *fw)
   175  {
   176  struct tee_rproc *trproc = rproc->tee_interface;
   177  struct resource_table *rsc_table;
   178  size_t table_sz;
   179  int ret;
   180  
   181  ret = tee_rproc_load_fw(rproc, fw);
   182  if (ret)
   183  return ret;
   184  
   185  rsc_table = tee_rproc_get_loaded_rsc_table(rproc, 

Re: [RFC PATCH] riscv: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS

2024-03-09 Thread Guo Ren
On Fri, Mar 08, 2024 at 05:18:21PM +0800, Andy Chiu wrote:
> Hi Puranjay,
> 
> On Fri, Mar 8, 2024 at 3:53 AM Puranjay Mohan  wrote:
> >
> > Hi Björn,
> >
> > On Thu, Mar 7, 2024 at 8:27 PM Björn Töpel  wrote:
> > >
> > > Puranjay!
> > >
> > > Puranjay Mohan  writes:
> > >
> > > > This patch enables support for DYNAMIC_FTRACE_WITH_CALL_OPS on RISC-V.
> > > > This allows each ftrace callsite to provide an ftrace_ops to the common
> > > > ftrace trampoline, allowing each callsite to invoke distinct tracer
> > > > functions without the need to fall back to list processing or to
> > > > allocate custom trampolines for each callsite. This significantly speeds
> > > > up cases where multiple distinct trace functions are used and callsites
> > > > are mostly traced by a single tracer.
> > > >
> > > > The idea and most of the implementation is taken from the ARM64's
> > > > implementation of the same feature. The idea is to place a pointer to
> > > > the ftrace_ops as a literal at a fixed offset from the function entry
> > > > point, which can be recovered by the common ftrace trampoline.
> > >
> > > Not really a review, but some more background; Another rationale (on-top
> > > of the improved per-call performance!) for CALL_OPS was to use it to
> > > build ftrace direct call support (which BPF uses a lot!). Mark, please
> > > correct me if I'm lying here!
> > >
> > > On Arm64, CALL_OPS makes it possible to implement direct calls, while
> > > only patching one BL instruction -- nice!
> > >
> > > On RISC-V we cannot use the same ideas as Arm64 straight off,
> > > because the range of jal (compare to BL) is simply too short (+/-1M).
> > > So, on RISC-V we need to use a full auipc/jal pair (the text patching
> > > story is another chapter, but let's leave that aside for now). Since we
> > > have to patch multiple instructions, the cmodx situation doesn't really
> > > improve with CALL_OPS.
> > >
> > > Let's say that we continue building on your patch and implement direct
> > > calls on CALL_OPS for RISC-V as well.
> > >
> > > From Florent's commit message for direct calls:
> > >
> > >   |There are a few cases to distinguish:
> > >   |- If a direct call ops is the only one tracing a function:
> > >   |  - If the direct called trampoline is within the reach of a BL
> > >   |instruction
> > >   | -> the ftrace patchsite jumps to the trampoline
> > >   |  - Else
> > >   | -> the ftrace patchsite jumps to the ftrace_caller trampoline 
> > > which
> > >   |reads the ops pointer in the patchsite and jumps to the 
> > > direct
> > >   |call address stored in the ops
> > >   |- Else
> > >   |  -> the ftrace patchsite jumps to the ftrace_caller trampoline 
> > > and its
> > >   | ops literal points to ftrace_list_ops so it iterates over all
> > >   | registered ftrace ops, including the direct call ops and 
> > > calls its
> > >   | call_direct_funcs handler which stores the direct called
> > >   | trampoline's address in the ftrace_regs and the ftrace_caller
> > >   | trampoline will return to that address instead of returning 
> > > to the
> > >   | traced function
> > >
> > > On RISC-V, where auipc/jalr is used, the direct called trampoline would
> > > always be reachable, and then the first Else-clause would never be entered.
> > > This means that the performance for direct calls would be the same as the
> > > one we have today (i.e. no regression!).
> > >
> > > RISC-V does like x86 does (-ish) -- patch multiple instructions, long
> > > reach.
> > >
> > > Arm64 uses CALL_OPS and patch one instruction BL.
> > >
> > > Now, with this background in mind, compared to what we have today,
> > > CALL_OPS would give us (again assuming we're using it for direct calls):
> > >
> > > * Better performance for tracer per-call (faster ops lookup) GOOD
> >
> > ^ this was the only motivation for me to implement this patch.
> >
> > I don't think implementing direct calls over call ops is fruitful for
> > RISC-V because once
> > the auipc/jalr can be patched atomically, the direct call trampoline
> > is always reachable.
> 
> Yes, the auipc/jalr instruction pair can be patched atomically as long
> as it is naturally aligned. However, we cannot guarantee that the two
> instructions are fetched atomically :P
There are some micro-architectural ways to handle this, such as:
 - Disable interrupts when the auipc retires.
 - When one auipc follows another auipc, the second one could still
   take an interrupt.

> 
> > Solving the atomic text patching problem would be fun!! I am eager to
> > see how it will be
> > solved.
> 
> I have a patch series to solve the atomic code patching issue, which I
> am about to respin [1]. The idea is to solve it with yet another layer
> of indirection. We add an 8-byte aligned space at each function entry.
> The space holds a pointer to the ftrace entry. During boot, each
> function entry code is updated to perform a 

Re: [PATCH v9 04/15] x86/sgx: Implement basic EPC misc cgroup functionality

2024-03-09 Thread Haitao Huang
On Tue, 27 Feb 2024 15:35:38 -0600, Haitao Huang wrote:
> On Mon, 26 Feb 2024 12:25:58 -0600, Michal Koutný wrote:
> > On Mon, Feb 05, 2024 at 01:06:27PM -0800, Haitao Huang wrote:
> > > +static int sgx_epc_cgroup_alloc(struct misc_cg *cg);
> > > +
> > > +const struct misc_res_ops sgx_epc_cgroup_ops = {
> > > +   .alloc = sgx_epc_cgroup_alloc,
> > > +   .free = sgx_epc_cgroup_free,
> > > +};
> > > +
> > > +static void sgx_epc_misc_init(struct misc_cg *cg, struct sgx_epc_cgroup *epc_cg)
> > > +{
> > > +   cg->res[MISC_CG_RES_SGX_EPC].priv = epc_cg;
> > > +   epc_cg->cg = cg;
> > > +}
> >
> > This is possibly a nitpick, but I share it here for consideration.
> >
> > Would it be more prudent to have the signature like
> >   alloc(struct misc_res *res, struct misc_cg *cg)
> > so that implementations are free of the assumption of how cg and res
> > are stored?
> >
> > Thanks,
> > Michal
>
> Will do.
>
> Thanks
> Haitao

Actually, because the root node is initialized in sgx_cgroup_init(), which
only has access to misc_cg_root(), we can't pass a misc_res struct without
knowing the cg->res relationship. We could hide it with a getter, but I
think that's a little overkill at the moment. I can sign up for adding
this improvement if we feel it is needed in the future.

Thanks
Haitao



Re: [PATCH 0/8] tracing: Persistent traces across a reboot or crash

2024-03-09 Thread Kees Cook
On Sat, Mar 09, 2024 at 01:51:16PM -0500, Steven Rostedt wrote:
> On Sat, 9 Mar 2024 10:27:47 -0800
> Kees Cook  wrote:
> 
> > On Tue, Mar 05, 2024 at 08:59:10PM -0500, Steven Rostedt wrote:
> > > This is a way to map a ring buffer instance across reboots.  
> > 
> > As mentioned on Fedi, check out the persistent storage subsystem
> > (pstore)[1]. It already does what you're starting to construct for RAM
> > backends (but also supports reed-solomon ECC), and supports several
> > other backends including EFI storage (which is default enabled on at
> > least Fedora[2]), block devices, etc. It has an existing mechanism for
> > handling reservations (including via device tree), and supports multiple
> > "frontends" including the Oops handler, console output, and even ftrace
> > which does per-cpu recording and event reconstruction (Joel wrote this
> > frontend).
> 
> Mathieu was telling me about the pmem infrastructure.

I use nvdimm to back my RAM backend testing with qemu so I can examine
the storage "externally":

RAM_SIZE=16384
NVDIMM_SIZE=200
MAX_SIZE=$(( RAM_SIZE + NVDIMM_SIZE ))
...
qemu-system-x86_64 \
...
-machine pc,nvdimm=on \
-m ${RAM_SIZE}M,slots=2,maxmem=${MAX_SIZE}M \
-object memory-backend-file,id=mem1,share=on,mem-path=$IMAGES/x86/nvdimm.img,size=${NVDIMM_SIZE}M,align=128M \
-device nvdimm,id=nvdimm1,memdev=mem1,label-size=1M \
...
-append 'console=uart,io,0x3f8,115200n8 loglevel=8 root=/dev/vda1 ro ramoops.mem_size=1048576 ramoops.ecc=1 ramoops.mem_address=0x44000 ramoops.console_size=16384 ramoops.ftrace_size=16384 ramoops.pmsg_size=16384 ramoops.record_size=32768 panic=-1 init=/root/resume.sh '"$@"


The part I'd like to get wired up sanely is having pstore find the
nvdimm area automatically, but it never quite happened:
https://lore.kernel.org/lkml/CAGXu5jLtmb3qinZnX3rScUJLUFdf+pRDVPjy=cs4kutw9tl...@mail.gmail.com/

> Thanks for the info. We use pstore on ChromeOS, but it is currently
> restricted to 1MB which is too small for the tracing buffers. From what
> I understand, it's also in a specific location where there's only 1MB
> available for contiguous memory.

That's the area that is specifically hardware backed with persistent
RAM.

> I'm looking at finding a way to get consistent memory outside that
> range. That's what I'll be doing next week ;-)
> 
> But this code was just to see if I could get a single contiguous range
> of memory mapped to ftrace, and this patch set does exactly that.

Well, please take a look at pstore. It should be able to do everything
you mention already; it just needs a way to define multiple regions if
you want to use an area outside of the persistent ram area defined by
Chrome OS's platform driver.

-Kees

-- 
Kees Cook



Re: [PATCH 0/8] tracing: Persistent traces across a reboot or crash

2024-03-09 Thread Steven Rostedt
On Sat, 9 Mar 2024 10:27:47 -0800
Kees Cook  wrote:

> On Tue, Mar 05, 2024 at 08:59:10PM -0500, Steven Rostedt wrote:
> > This is a way to map a ring buffer instance across reboots.  
> 
> As mentioned on Fedi, check out the persistent storage subsystem
> (pstore)[1]. It already does what you're starting to construct for RAM
> backends (but also supports reed-solomon ECC), and supports several
> other backends including EFI storage (which is default enabled on at
> least Fedora[2]), block devices, etc. It has an existing mechanism for
> handling reservations (including via device tree), and supports multiple
> "frontends" including the Oops handler, console output, and even ftrace
> which does per-cpu recording and event reconstruction (Joel wrote this
> frontend).

Mathieu was telling me about the pmem infrastructure.

This patch set doesn't care where the memory comes from. You just give
it an address and size, and it will do the rest.

> 
> It should be pretty straight forward to implement a new frontend if the
> ftrace one isn't flexible enough. It's a bit clunky still to add one,
> but search for "ftrace" in fs/pstore/ram.c to see how to plumb a new
> frontend into the RAM backend.
> 
> I continue to want to lift the frontend configuration options up into
> the pstore core, since it would avoid a bunch of redundancy, but this is
> where we are currently. :)

Thanks for the info. We use pstore on ChromeOS, but it is currently
restricted to 1MB which is too small for the tracing buffers. From what
I understand, it's also in a specific location where there's only 1MB
available for contiguous memory.

I'm looking at finding a way to get consistent memory outside that
range. That's what I'll be doing next week ;-)

But this code was just to see if I could get a single contiguous range
of memory mapped to ftrace, and this patch set does exactly that.

> 
> -Kees
> 
> [1] CONFIG_PSTORE et. al. in fs/pstore/ 
> https://docs.kernel.org/admin-guide/ramoops.html
> [2] 
> https://www.freedesktop.org/software/systemd/man/latest/systemd-pstore.service.html
> 

Thanks!

-- Steve



Re: [PATCH 0/8] tracing: Persistent traces across a reboot or crash

2024-03-09 Thread Kees Cook
On Tue, Mar 05, 2024 at 08:59:10PM -0500, Steven Rostedt wrote:
> This is a way to map a ring buffer instance across reboots.

As mentioned on Fedi, check out the persistent storage subsystem
(pstore)[1]. It already does what you're starting to construct for RAM
backends (but also supports reed-solomon ECC), and supports several
other backends including EFI storage (which is default enabled on at
least Fedora[2]), block devices, etc. It has an existing mechanism for
handling reservations (including via device tree), and supports multiple
"frontends" including the Oops handler, console output, and even ftrace
which does per-cpu recording and event reconstruction (Joel wrote this
frontend).

It should be pretty straight forward to implement a new frontend if the
ftrace one isn't flexible enough. It's a bit clunky still to add one,
but search for "ftrace" in fs/pstore/ram.c to see how to plumb a new
frontend into the RAM backend.

I continue to want to lift the frontend configuration options up into
the pstore core, since it would avoid a bunch of redundancy, but this is
where we are currently. :)

-Kees

[1] CONFIG_PSTORE et. al. in fs/pstore/ 
https://docs.kernel.org/admin-guide/ramoops.html
[2] 
https://www.freedesktop.org/software/systemd/man/latest/systemd-pstore.service.html

-- 
Kees Cook
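[Editor's note: for reference, the device-tree reservation mechanism Kees mentions looks roughly like the fragment below, adapted from the ramoops admin-guide. The base address, sizes, and unit name are placeholders; a real platform picks a region its firmware leaves untouched across reboot.]

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Placeholder address/size; platform-specific in practice. */
		ramoops@8f000000 {
			compatible = "ramoops";
			reg = <0x0 0x8f000000 0x0 0x100000>; /* 1 MB */
			record-size = <0x8000>;   /* oops/panic dumps */
			console-size = <0x4000>;
			ftrace-size = <0x4000>;   /* per-CPU ftrace frontend */
			pmsg-size = <0x4000>;
		};
	};
};
```

Steven's 1 MB ChromeOS constraint mentioned above is exactly this kind of fixed reservation, which is why a larger tracing buffer needs either a bigger carve-out or a second region.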



Re: [PATCH v12 2/4] dt-bindings: remoteproc: add Tightly Coupled Memory (TCM) bindings

2024-03-09 Thread Krzysztof Kozlowski
On 01/03/2024 19:16, Tanmay Shah wrote:
> From: Radhey Shyam Pandey 
> 
> Introduce bindings for the TCM memory address space on the AMD-Xilinx
> Zynq UltraScale+ platform. They help define TCM in the device tree and
> make its access platform agnostic and data-driven.
> 
> Tightly-coupled memories (TCMs) are low-latency memories that provide
> predictable instruction execution and predictable data load/store
> timing. Each Cortex-R5F processor contains two 64-bit wide 64 KB memory
> banks on the ATCM and BTCM ports, for a total of 128 KB of memory.
> 
> The TCM resources (reg, reg-names and power-domains) are documented for
> each TCM in the R5 node. reg and reg-names are made required properties
> because we don't want to hardcode TCM addresses for future platforms;
> for ZU+, legacy handling ensures that old DTs without reg/reg-names keep
> working and the stable ABI is maintained.
> 
> It also extends the examples for TCM split and lockstep modes.
> 
> Signed-off-by: Radhey Shyam Pandey 
> Signed-off-by: Tanmay Shah 
> ---
> 
> Changes in v12:
>   - add "reg", "reg-names" and "power-domains" in pattern properties
>   - add "reg" and "reg-names" in required list
>   - keep "power-domains" in required list as it was before the change
> 
> Changes in v11:
>   - Fix yamllint warning and reduce indentation as needed
> 
>  .../remoteproc/xlnx,zynqmp-r5fss.yaml | 188 --
>  1 file changed, 168 insertions(+), 20 deletions(-)
> 
> diff --git 
> a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml 
> b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
> index 78aac69f1060..dc6ce308688f 100644
> --- a/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
> +++ b/Documentation/devicetree/bindings/remoteproc/xlnx,zynqmp-r5fss.yaml
> @@ -20,9 +20,21 @@ properties:
>compatible:
>  const: xlnx,zynqmp-r5fss
>  
> +  "#address-cells":
> +const: 2
> +
> +  "#size-cells":
> +const: 2
> +
> +  ranges:
> +description: |
> +  Standard ranges definition providing address translations for
> +  local R5F TCM address spaces to bus addresses.
> +
>xlnx,cluster-mode:
>  $ref: /schemas/types.yaml#/definitions/uint32
>  enum: [0, 1, 2]
> +default: 1
>  description: |
>The RPU MPCore can operate in split mode (Dual-processor performance), 
> Safety
>lock-step mode(Both RPU cores execute the same code in lock-step,
> @@ -37,7 +49,7 @@ properties:
>2: single cpu mode
>  
>  patternProperties:
> -  "^r5f-[a-f0-9]+$":
> +  "^r5f@[0-9a-f]+$":
>  type: object
>  description: |
>The RPU is located in the Low Power Domain of the Processor Subsystem.
> @@ -54,8 +66,17 @@ patternProperties:
>compatible:
>  const: xlnx,zynqmp-r5f
>  
> +  reg:
> +minItems: 1
> +maxItems: 4
> +
> +  reg-names:
> +minItems: 1
> +maxItems: 4
> +
>power-domains:
> -maxItems: 1
> +minItems: 2
> +maxItems: 5
>  
>mboxes:
>  minItems: 1
> @@ -101,35 +122,162 @@ patternProperties:
>  
>  required:
>- compatible
> +  - reg
> +  - reg-names
>- power-domains
>  
> -unevaluatedProperties: false
> -
>  required:
>- compatible
> +  - "#address-cells"
> +  - "#size-cells"
> +  - ranges
> +
> +allOf:
> +  - if:
> +  properties:
> +xlnx,cluster-mode:
> +  enum:
> +- 1
> +then:
> +  patternProperties:
> +"^r5f@[0-9a-f]+$":
> +  type: object
> +
> +  properties:
> +reg:
> +  minItems: 1
> +  items:
> +- description: ATCM internal memory
> +- description: BTCM internal memory
> +- description: extra ATCM memory in lockstep mode
> +- description: extra BTCM memory in lockstep mode
> +
> +reg-names:
> +  minItems: 1
> +  items:
> +- const: atcm0
> +- const: btcm0
> +- const: atcm1
> +- const: btcm1

Why are the power domains flexible?

> +
> +else:
> +  patternProperties:
> +"^r5f@[0-9a-f]+$":
> +  type: object
> +
> +  properties:
> +reg:
> +  minItems: 1
> +  items:
> +- description: ATCM internal memory
> +- description: BTCM internal memory
> +
> +reg-names:
> +  minItems: 1
> +  items:
> +- const: atcm0
> +- const: btcm0
> +
> +power-domains:
> +  maxItems: 3

Please list power domains.

>  
>  additionalProperties: false


Best regards,
Krzysztof
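[Editor's note: to make the binding discussion above concrete, a split-mode node matching the revised "^r5f@[0-9a-f]+$" pattern might look like the sketch below. The TCM sizes follow the commit message (64 KB ATCM + 64 KB BTCM per core); the bus addresses, ranges translation, phandle, and power-domain identifiers are illustrative placeholders, not taken from the actual patch example.]

```dts
remoteproc@ffe00000 {
	compatible = "xlnx,zynqmp-r5fss";
	xlnx,cluster-mode = <0>; /* split mode */
	#address-cells = <2>;
	#size-cells = <2>;
	/* Translate local TCM addresses to (assumed) bus addresses. */
	ranges = <0x0 0x0 0x0 0xffe00000 0x0 0x10000>,
		 <0x0 0x20000 0x0 0xffe20000 0x0 0x10000>;

	r5f@0 {
		compatible = "xlnx,zynqmp-r5f";
		reg = <0x0 0x0 0x0 0x10000>,       /* 64 KB ATCM */
		      <0x0 0x20000 0x0 0x10000>;  /* 64 KB BTCM */
		reg-names = "atcm0", "btcm0";
		/* One domain per core plus one per TCM bank (placeholders). */
		power-domains = <&zynqmp_firmware PD_RPU_0>,
				<&zynqmp_firmware PD_R5_0_ATCM>,
				<&zynqmp_firmware PD_R5_0_BTCM>;
	};
};
```

In lockstep mode the node would instead list the extra atcm1/btcm1 banks, which is what the binding's if/then/else block above distinguishes.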




[PATCH] openrisc: Use asm-generic's version of fix_to_virt() & virt_to_fix()

2024-03-09 Thread Dawei Li
OpenRISC's implementations of fix_to_virt() & virt_to_fix() share the
same functionality as the asm-generic ones.

Plus, the generic version of fix_to_virt() can trap an invalid index at
compile time.

Thus, replace the arch-specific implementations with the asm-generic ones.

Signed-off-by: Dawei Li 
---
 arch/openrisc/include/asm/fixmap.h | 31 +-
 1 file changed, 1 insertion(+), 30 deletions(-)

diff --git a/arch/openrisc/include/asm/fixmap.h 
b/arch/openrisc/include/asm/fixmap.h
index ad78e50b7ba3..ecdb98a5839f 100644
--- a/arch/openrisc/include/asm/fixmap.h
+++ b/arch/openrisc/include/asm/fixmap.h
@@ -50,35 +50,6 @@ enum fixed_addresses {
 /* FIXADDR_BOTTOM might be a better name here... */
 #define FIXADDR_START  (FIXADDR_TOP - FIXADDR_SIZE)
 
-#define __fix_to_virt(x)   (FIXADDR_TOP - ((x) << PAGE_SHIFT))
-#define __virt_to_fix(x)   ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
-
-/*
- * 'index to address' translation. If anyone tries to use the idx
- * directly without tranlation, we catch the bug with a NULL-deference
- * kernel oops. Illegal ranges of incoming indices are caught too.
- */
-static __always_inline unsigned long fix_to_virt(const unsigned int idx)
-{
-   /*
-* this branch gets completely eliminated after inlining,
-* except when someone tries to use fixaddr indices in an
-* illegal way. (such as mixing up address types or using
-* out-of-range indices).
-*
-* If it doesn't get removed, the linker will complain
-* loudly with a reasonably clear error message..
-*/
-   if (idx >= __end_of_fixed_addresses)
-   BUG();
-
-   return __fix_to_virt(idx);
-}
-
-static inline unsigned long virt_to_fix(const unsigned long vaddr)
-{
-   BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
-   return __virt_to_fix(vaddr);
-}
+#include <asm-generic/fixmap.h>
 
 #endif
-- 
2.25.1