Re: [PATCH v2 4/4] target/ppc: Add POWER10 exception model

2021-04-16 Thread Nicholas Piggin
Excerpts from David Gibson's message of April 16, 2021 2:28 pm:
> On Thu, Apr 15, 2021 at 03:42:27PM +1000, Nicholas Piggin wrote:
>> POWER10 adds a new bit that modifies interrupt behaviour, LPCR[HAIL],
>> and it removes support for the LPCR[AIL]=0b10 mode.
>> 
>> Reviewed-by: Cédric Le Goater 
>> Tested-by: Cédric Le Goater 
>> Signed-off-by: Nicholas Piggin 
>> ---
>>  hw/ppc/spapr_hcall.c|  7 -
>>  target/ppc/cpu-qom.h|  2 ++
>>  target/ppc/cpu.h|  5 ++--
>>  target/ppc/excp_helper.c| 51 +++--
>>  target/ppc/translate.c  |  3 +-
>>  target/ppc/translate_init.c.inc |  2 +-
>>  6 files changed, 62 insertions(+), 8 deletions(-)
>> 
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index 2fbe04a689..6802cd4dc8 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1396,7 +1396,12 @@ static target_ulong 
>> h_set_mode_resource_addr_trans_mode(PowerPCCPU *cpu,
>>  }
>>  
>>  if (mflags == 1) {
>> -/* AIL=1 is reserved */
>> +/* AIL=1 is reserved in POWER8/POWER9 */
>> +return H_UNSUPPORTED_FLAG;
>> +}
>> +
>> +if (mflags == 2 && (pcc->insns_flags2 & PPC2_ISA310)) {
>> +/* AIL=2 is also reserved in POWER10 (ISA v3.1) */
>>  return H_UNSUPPORTED_FLAG;
>>  }
>>  
>> diff --git a/target/ppc/cpu-qom.h b/target/ppc/cpu-qom.h
>> index 118baf8d41..06b6571bc9 100644
>> --- a/target/ppc/cpu-qom.h
>> +++ b/target/ppc/cpu-qom.h
>> @@ -116,6 +116,8 @@ enum powerpc_excp_t {
>>  POWERPC_EXCP_POWER8,
>>  /* POWER9 exception model   */
>>  POWERPC_EXCP_POWER9,
>> +/* POWER10 exception model   */
>> +POWERPC_EXCP_POWER10,
>>  };
>>  
>>  
>> /*/
>> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
>> index 5200a16d23..9d35cdfa92 100644
>> --- a/target/ppc/cpu.h
>> +++ b/target/ppc/cpu.h
>> @@ -354,10 +354,11 @@ typedef struct ppc_v3_pate_t {
>>  #define LPCR_PECE_U_SHIFT (63 - 19)
>>  #define LPCR_PECE_U_MASK  (0x7ull << LPCR_PECE_U_SHIFT)
>>  #define LPCR_HVEE PPC_BIT(17) /* Hypervisor Virt Exit Enable */
>> -#define LPCR_RMLS_SHIFT   (63 - 37)
>> +#define LPCR_RMLS_SHIFT   (63 - 37)   /* RMLS (removed in ISA v3.0) */
>>  #define LPCR_RMLS (0xfull << LPCR_RMLS_SHIFT)
>> +#define LPCR_HAIL PPC_BIT(37) /* ISA v3.1 HV AIL=3 equivalent */
>>  #define LPCR_ILE  PPC_BIT(38)
>> -#define LPCR_AIL_SHIFT(63 - 40)  /* Alternate interrupt location */
>> +#define LPCR_AIL_SHIFT(63 - 40)   /* Alternate interrupt location */
>>  #define LPCR_AIL  (3ull << LPCR_AIL_SHIFT)
>>  #define LPCR_UPRT PPC_BIT(41) /* Use Process Table */
>>  #define LPCR_EVIRTPPC_BIT(42) /* Enhanced Virtualisation */
>> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
>> index 964a58cfdc..38a1482519 100644
>> --- a/target/ppc/excp_helper.c
>> +++ b/target/ppc/excp_helper.c
>> @@ -170,7 +170,27 @@ static int powerpc_reset_wakeup(CPUState *cs, 
>> CPUPPCState *env, int excp,
>>   * +---+
>>   *
>>   * The difference with POWER9 being that MSR[HV] 0->1 interrupts can be 
>> sent to
>> - * the hypervisor in AIL mode if the guest is radix.
>> + * the hypervisor in AIL mode if the guest is radix. This is good for
>> + * performance but allows the guest to influence the AIL of hypervisor
>> + * interrupts using its MSR, and also the hypervisor must disallow guest
>> + * interrupts (MSR[HV] 0->0) from using AIL if the hypervisor does not want 
>> to
>> + * use AIL for its MSR[HV] 0->1 interrupts.
>> + *
>> + * POWER10 addresses those issues with a new LPCR[HAIL] bit that is applied 
>> to
>> + * interrupts that begin execution with MSR[HV]=1 (so both MSR[HV] 0->1 and
>> + * MSR[HV] 1->1).
>> + *
>> + * HAIL=1 is equivalent to AIL=3, for interrupts delivered with MSR[HV]=1.
>> + *
>> + * POWER10 behaviour is
>> + * | LPCR[AIL] | LPCR[HAIL] | MSR[IR||DR] | MSR[HV] | new MSR[HV] | AIL |
>> + * +---++-+-+-+-+
>> + * | a | h  | 00/01/10| 0   | 0   | 0   |
>> + * | a | h  | 11  | 0   | 0   | a   |
>> + * | a | h  | x   | 0   | 1   | h   |
>> + * | a | h  | 00/01/10| 1   | 1   | 0   |
>> + * | a | h  | 11  | 1   | 1   | h   |
>> + * ++
>>   */
>>  static inline void ppc_excp_apply_ail(PowerPCCPU *cpu, int excp_model, int 
>> excp,
>>target_ulong msr,
>> @@ -210,6 +230,29 @@ static inline void ppc_excp_apply_ail(PowerPCCPU *cpu, 
>> int excp_model, int excp,
>>  /* AIL=1 is reserved */
>>  return;
>>  

Re: [PATCH v2 3/4] target/ppc: Rework AIL logic in interrupt delivery

2021-04-16 Thread Nicholas Piggin
Excerpts from David Gibson's message of April 16, 2021 2:24 pm:
> On Thu, Apr 15, 2021 at 03:42:26PM +1000, Nicholas Piggin wrote:
>> The AIL logic is becoming unmanageable spread all over powerpc_excp(),
>> and it is slated to get even worse with POWER10 support.
>> 
>> Move it all to a new helper function.
>> 
>> Reviewed-by: Cédric Le Goater 
>> Tested-by: Cédric Le Goater 
>> Signed-off-by: Nicholas Piggin 
> 
> Looks like a nice cleanup overall, just a few minor comments.
> 
>> ---
>>  hw/ppc/spapr_hcall.c|   3 +-
>>  target/ppc/cpu.h|   8 --
>>  target/ppc/excp_helper.c| 159 
>>  target/ppc/translate_init.c.inc |   2 +-
>>  4 files changed, 102 insertions(+), 70 deletions(-)
>> 
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index 7b5cd3553c..2fbe04a689 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -1395,7 +1395,8 @@ static target_ulong 
>> h_set_mode_resource_addr_trans_mode(PowerPCCPU *cpu,
>>  return H_P4;
>>  }
>>  
>> -if (mflags == AIL_RESERVED) {
>> +if (mflags == 1) {
>> +/* AIL=1 is reserved */
>>  return H_UNSUPPORTED_FLAG;
>>  }
>>  
>> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
>> index e73416da68..5200a16d23 100644
>> --- a/target/ppc/cpu.h
>> +++ b/target/ppc/cpu.h
>> @@ -2375,14 +2375,6 @@ enum {
>>  HMER_XSCOM_STATUS_MASK  = PPC_BITMASK(21, 23),
>>  };
>>  
>> -/* Alternate Interrupt Location (AIL) */
>> -enum {
>> -AIL_NONE= 0,
>> -AIL_RESERVED= 1,
>> -AIL_0001_8000   = 2,
>> -AIL_C000___4000 = 3,
>> -};
> 
> Yeah, I always thought these particular constants were a bit
> pointless.
> 
>> -
>>  
>> /*/
>>  
>>  #define is_isa300(ctx) (!!(ctx->insns_flags2 & PPC2_ISA300))
>> diff --git a/target/ppc/excp_helper.c b/target/ppc/excp_helper.c
>> index b8881c0f85..964a58cfdc 100644
>> --- a/target/ppc/excp_helper.c
>> +++ b/target/ppc/excp_helper.c
>> @@ -136,25 +136,105 @@ static int powerpc_reset_wakeup(CPUState *cs, 
>> CPUPPCState *env, int excp,
>>  return POWERPC_EXCP_RESET;
>>  }
>>  
>> -static uint64_t ppc_excp_vector_offset(CPUState *cs, int ail)
>> +/*
>> + * AIL - Alternate Interrupt Location, a mode that allows interrupts to be
>> + * taken with the MMU on, and which uses an alternate location (e.g., so the
>> + * kernel/hv can map the vectors there with an effective address).
>> + *
>> + * An interrupt is considered to be taken "with AIL" or "AIL applies" if it
>> + * is delivered in this way. AIL requires the LPCR to be set to enable this
>> + * mode, and then a number of conditions have to be true for AIL to apply.
>> + *
>> + * First of all, SRESET, MCE, and HMI are always delivered without AIL, 
>> because
>> + * they specifically want to be in real mode (e.g., the MCE might be 
>> signaling
>> + * a SLB multi-hit which requires SLB flush before the MMU can be enabled).
>> + *
>> + * After that, behaviour depends on the current MSR[IR], MSR[DR], MSR[HV],
>> + * whether or not the interrupt changes MSR[HV] from 0 to 1, and the current
>> + * radix mode (LPCR[HR]).
>> + *
>> + * POWER8, POWER9 with LPCR[HR]=0
>> + * | LPCR[AIL] | MSR[IR||DR] | MSR[HV] | new MSR[HV] | AIL |
>> + * +---+-+-+-+-+
>> + * | a | 00/01/10| x   | x   | 0   |
>> + * | a | 11  | 0   | 1   | 0   |
>> + * | a | 11  | 1   | 1   | a   |
>> + * | a | 11  | 0   | 0   | a   |
>> + * +---+
>> + *
>> + * POWER9 with LPCR[HR]=1
>> + * | LPCR[AIL] | MSR[IR||DR] | MSR[HV] | new MSR[HV] | AIL |
>> + * +---+-+-+-+-+
>> + * | a | 00/01/10| x   | x   | 0   |
>> + * | a | 11  | x   | x   | a   |
>> + * +---+
>> + *
>> + * The difference with POWER9 being that MSR[HV] 0->1 interrupts can be 
>> sent to
>> + * the hypervisor in AIL mode if the guest is radix.
>> + */
>> +static inline void ppc_excp_apply_ail(PowerPCCPU *cpu, int excp_model, int 
>> excp,
>> +  target_ulong msr,
>> +  target_ulong *new_msr,
>> +  target_ulong *vector)
>>  {
>> -uint64_t offset = 0;
>> +#if defined(TARGET_PPC64)
>> +CPUPPCState *env = &cpu->env;
>> +bool mmu_all_on = ((msr >> MSR_IR) & 1) && ((msr >> MSR_DR) & 1);
>> +bool hv_escalation = !(msr & MSR_HVB) && (*new_msr & MSR_HVB);
>> +int ail = 0;
>> +
>> +if (excp == POWERPC_EXCP_MCHECK ||
>> +excp == POWERPC_EXCP_RESET ||
>> +excp == POWERPC_EXCP_HV_MAINT) {
>> +/* SRESET, MCE, HMI never 

Re: [RFC PATCH v2 1/6] device_tree: Add qemu_fdt_add_path

2021-04-16 Thread wangyanan (Y)

Hi David,

On 2021/4/16 12:52, David Gibson wrote:

On Tue, Apr 13, 2021 at 04:07:40PM +0800, Yanan Wang wrote:

From: Andrew Jones 

qemu_fdt_add_path() works like qemu_fdt_add_subnode(), except
it also adds any missing subnodes in the path. We also tweak
an error message of qemu_fdt_add_subnode().

We'll make use of this new function in a coming patch.

Signed-off-by: Andrew Jones 
Signed-off-by: Yanan Wang 
---
  include/sysemu/device_tree.h |  1 +
  softmmu/device_tree.c| 45 ++--
  2 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/include/sysemu/device_tree.h b/include/sysemu/device_tree.h
index 8a2fe55622..ef060a9759 100644
--- a/include/sysemu/device_tree.h
+++ b/include/sysemu/device_tree.h
@@ -121,6 +121,7 @@ uint32_t qemu_fdt_get_phandle(void *fdt, const char *path);
  uint32_t qemu_fdt_alloc_phandle(void *fdt);
  int qemu_fdt_nop_node(void *fdt, const char *node_path);
  int qemu_fdt_add_subnode(void *fdt, const char *name);
+int qemu_fdt_add_path(void *fdt, const char *path);
  
  #define qemu_fdt_setprop_cells(fdt, node_path, property, ...) \

  do {  
\
diff --git a/softmmu/device_tree.c b/softmmu/device_tree.c
index 2691c58cf6..8592c7aa1b 100644
--- a/softmmu/device_tree.c
+++ b/softmmu/device_tree.c
@@ -541,8 +541,8 @@ int qemu_fdt_add_subnode(void *fdt, const char *name)
  
  retval = fdt_add_subnode(fdt, parent, basename);

  if (retval < 0) {
-error_report("FDT: Failed to create subnode %s: %s", name,
- fdt_strerror(retval));
+error_report("%s: Failed to create subnode %s: %s",
+ __func__, name, fdt_strerror(retval));
  exit(1);
  }
  
@@ -550,6 +550,47 @@ int qemu_fdt_add_subnode(void *fdt, const char *name)

  return retval;
  }
  
+/*

+ * Like qemu_fdt_add_subnode(), but will add all missing
+ * subnodes in the path.
+ */
+int qemu_fdt_add_path(void *fdt, const char *path)
+{
+char *dupname, *basename, *p;
+int parent, retval = -1;
+
+if (path[0] != '/') {
+return retval;
+}
+
+parent = fdt_path_offset(fdt, "/");

Getting the offset for "/" is never needed - it's always 0.

Thanks, will fix it.

+p = dupname = g_strdup(path);

You shouldn't need the strdup(), see below.


+
+while (p) {
+*p = '/';
+basename = p + 1;
+p = strchr(p + 1, '/');
+if (p) {
+*p = '\0';
+}
+retval = fdt_path_offset(fdt, dupname);

The fdt_path_offset_namelen() function exists *exactly* so that you
can look up partial parths without having to mangle your input
string.  Just set the namelen right, and it will ignore anything to
the right of that.

Function fdt_path_offset_namelen() seems more reasonable.

After we successfully call qemu_fdt_add_path() to add
"/cpus/cpu-map/socket0/core0", if we then want to add another path like
"/cpus/cpu-map/socket0/core1", we will get -FDT_ERR_NOTFOUND for each
partial path. But "/cpus/cpu-map/socket0" already exists, so by using
fdt_path_offset_namelen() with the right namelen we can avoid the error
retval for that part.

+if (retval < 0 && retval != -FDT_ERR_NOTFOUND) {
+error_report("%s: Invalid path %s: %s",
+ __func__, path, fdt_strerror(retval));

If you're getting an error other than FDT_ERR_NOTFOUND here, chances
are it's not an invalid path, but a corrupted fdt blob or something
else.


Right, there can be various reasons for the failure besides an invalid
path.



+exit(1);
+} else if (retval == -FDT_ERR_NOTFOUND) {
+retval = fdt_add_subnode(fdt, parent, basename);
+if (retval < 0) {
+break;
+}
I found another question here. If path "/cpus/cpu-map/socket0/core0" has
already been added, then when we want to add another path
"/cpus/cpu-map/socket0/core1" and reach retval = fdt_add_subnode(fdt,
parent, "cpus"), retval will be -FDT_ERR_EXISTS, but we can't just break
out of the loop in this case.

Is my explanation right?

Thanks,
Yanan

+}
+parent = retval;
+}
+
+g_free(dupname);
+return retval;
+}
+
  void qemu_fdt_dumpdtb(void *fdt, int size)
  {
  const char *dumpdtb = current_machine->dumpdtb;




Re: [PATCH v4 16/19] qapi/expr.py: Add docstrings

2021-04-16 Thread John Snow

On 4/14/21 11:04 AM, Markus Armbruster wrote:

John Snow  writes:



Thanks for taking this on. I realize it's a slog.

(Update: much later: AUUUGH WHY DID I DECIDE TO WRITE DOCS. MY HUBRIS)


Signed-off-by: John Snow 
---
  scripts/qapi/expr.py | 213 ++-
  1 file changed, 208 insertions(+), 5 deletions(-)

diff --git a/scripts/qapi/expr.py b/scripts/qapi/expr.py
index 1869ddf815..adc5b903bc 100644
--- a/scripts/qapi/expr.py
+++ b/scripts/qapi/expr.py
@@ -1,7 +1,5 @@
  # -*- coding: utf-8 -*-
  #
-# Check (context-free) QAPI schema expression structure
-#
  # Copyright IBM, Corp. 2011
  # Copyright (c) 2013-2019 Red Hat Inc.
  #
@@ -14,6 +12,25 @@
  # This work is licensed under the terms of the GNU GPL, version 2.
  # See the COPYING file in the top-level directory.
  
+"""

+Normalize and validate (context-free) QAPI schema expression structures.
+
+After QAPI expressions are parsed from disk, they are stored in
+recursively nested Python data structures using Dict, List, str, bool,
+and int. This module ensures that those nested structures have the
+correct type(s) and key(s) where appropriate for the QAPI context-free
+grammar.


"from disk"?  Perhaps something like "QAPISchemaParser parses the QAPI
schema into abstract syntax trees consisting of dict, list, str, bool
and int nodes."  This isn't quite accurate; it parses into a list of
{'expr': AST, 'info': INFO}, but that's detail.



Let's skip the detail; it doesn't help communicate purpose in the first 
paragraph here at the high level. The bulk of this module IS primarily 
concerned with the structures named.


Edited to:

`QAPISchemaParser` parses a QAPI schema into abstract syntax trees 
consisting of dict, list, str, bool, and int nodes. This module ensures 
that these nested structures have the correct type(s) and key(s) where 
appropriate for the QAPI context-free grammar.


(I replaced "the QAPI schema" with "a QAPI schema" as we have several; 
and I wanted to hint at (somehow) that this processes configurable input 
(i.e. "from disk") and not something indelibly linked.)


((What's wrong with "from disk?"))


PEP 8: You should use two spaces after a sentence-ending period in
multi- sentence comments, except after the final sentence.



Is this a demand?


+
+The QAPI schema expression language allows for syntactic sugar; this


suggest "certain syntactic sugar".



OK


+module also handles the normalization process of these nested
+structures.
+
+See `check_exprs` for the main entry point.
+
+See `schema.QAPISchema` for processing into native Python data
+structures and contextual semantic validation.
+"""
+
  import re
  from typing import (
  Collection,
@@ -31,9 +48,10 @@
  from .source import QAPISourceInfo
  
  
-# Deserialized JSON objects as returned by the parser;

-# The values of this mapping are not necessary to exhaustively type
-# here, because the purpose of this module is to interrogate that type.
+#: Deserialized JSON objects as returned by the parser.
+#:
+#: The values of this mapping are not necessary to exhaustively type
+#: here, because the purpose of this module is to interrogate that type.


First time I see #: comments; pardon my ignorance.  What's the deal?



Sphinx-ese: It indicates that this comment is actually a block relating 
to the entity below. It also means that I can cross-reference 
`_JSONObject` in docstrings.


... which, because of the rewrite where I stopped calling this object an 
Expression to avoid overloading a term, is something I actually don't 
try to cross-reference anymore.


So this block can be dropped now, actually.

(Also: It came up in part one, actually: I snuck one in for EATSPACE, 
and reference it in the comment for cgen. We cannot cross-reference 
constants unless they are documented, and this is how we accomplish that.)



  _JSONObject = Dict[str, object]
  
  
@@ -48,11 +66,29 @@

  def check_name_is_str(name: object,
info: QAPISourceInfo,
source: str) -> None:
+"""Ensures that ``name`` is a string."""


PEP 257: The docstring [...] prescribes the function or method's effect
as a command ("Do this", "Return that"), not as a description;
e.g. don't write "Returns the pathname ...".

More of the same below.



Alright.

While we're here, then ...

I take this to mean that you prefer:

:raise: to :raises:, and
:return: to :returns: ?

And since I need to adjust the verb anyway, I might as well use "Check" 
instead of "Ensure".


""" 

Check that ``name`` is a string. 




:raise: QAPISemError when ``name`` is an incorrect type. 


"""

which means our preferred spellings should be:

:param: (and not parameter, arg, argument, key, keyword)
:raise: (and not raises, except, exception)
:var/ivar/cvar: (variable, instance variable, class variable)
:return: (and not returns)

Disallow these, as covered by the mypy signature:

:type:
:vartype:
:rtype:
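Put together, a function following those preferred spellings would look something like this (a self-contained sketch; QAPISemError is stubbed rather than imported from qapi/error.py, and the parameter descriptions are invented for the example):

```python
# Illustrative stub: the real QAPISemError lives in scripts/qapi/error.py.
class QAPISemError(Exception):
    pass


def check_name_is_str(name: object,
                      info: object,
                      source: str) -> None:
    """
    Check that ``name`` is a string.

    :param name: Candidate name value.
    :param info: QAPI source location information.
    :param source: Error-message prefix describing ``name``'s role.
    :raise QAPISemError: When ``name`` is not a string.
    """
    if not isinstance(name, str):
        raise QAPISemError(f"{source} requires a string name")
```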


  if not 

[PATCH v1 07/11] target/arm: Implement bfloat16 dot product (indexed)

2021-04-16 Thread Richard Henderson
This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  2 ++
 target/arm/neon-shared.decode   |  2 ++
 target/arm/sve.decode   |  3 +++
 target/arm/translate-a64.c  | 41 +
 target/arm/translate-sve.c  | 12 ++
 target/arm/vec_helper.c | 20 
 target/arm/translate-neon.c.inc |  9 
 7 files changed, 80 insertions(+), 9 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index eb4cb2b65b..af0ee8f693 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1005,6 +1005,8 @@ DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index 31a0839bbb..fa3cf14e3a 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -81,6 +81,8 @@ VUSDOT_scalar   1110 1 . 00   1101 . q:1 index:1 
0 vm:4 \
vn=%vn_dp vd=%vd_dp
 VSUDOT_scalar   1110 1 . 00   1101 . q:1 index:1 1 vm:4 \
vn=%vn_dp vd=%vd_dp
+VDOT_b16_scal   1110 0 . 00   1101 . q:1 index:1 0 vm:4 \
+   vn=%vn_dp vd=%vd_dp
 
 %vfml_scalar_q0_rm 0:3 5:1
 %vfml_scalar_q1_index 5:1 3:1
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 523140ca56..d5e1e5d400 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1584,3 +1584,6 @@ FMLALB_zzxw 01100100 10 1 . 0100.0 . .
 @rrxr_3a esz=2
 FMLALT_zzxw 01100100 10 1 . 0100.1 . . @rrxr_3a esz=2
 FMLSLB_zzxw 01100100 10 1 . 0110.0 . . @rrxr_3a esz=2
 FMLSLT_zzxw 01100100 10 1 . 0110.1 . . @rrxr_3a esz=2
+
+### SVE2 floating-point bfloat16 dot-product (indexed)
+BFDOT_zzxz  01100100 01 1 . 01 . . @rrxr_2 esz=2
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index fc16e0a126..f60afbbd06 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -13457,8 +13457,22 @@ static void disas_simd_indexed(DisasContext *s, 
uint32_t insn)
 return;
 }
 break;
-case 0x0f: /* SUDOT, USDOT */
-if (is_scalar || (size & 1) || !dc_isar_feature(aa64_i8mm, s)) {
+case 0x0f:
+switch (size) {
+case 0: /* SUDOT */
+case 2: /* USDOT */
+if (is_scalar || !dc_isar_feature(aa64_i8mm, s)) {
+unallocated_encoding(s);
+return;
+}
+break;
+case 1: /* BFDOT */
+if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
+unallocated_encoding(s);
+return;
+}
+break;
+default:
 unallocated_encoding(s);
 return;
 }
@@ -13578,13 +13592,22 @@ static void disas_simd_indexed(DisasContext *s, 
uint32_t insn)
  u ? gen_helper_gvec_udot_idx_b
  : gen_helper_gvec_sdot_idx_b);
 return;
-case 0x0f: /* SUDOT, USDOT */
-gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
- extract32(insn, 23, 1)
- ? gen_helper_gvec_usdot_idx_b
- : gen_helper_gvec_sudot_idx_b);
-return;
-
+case 0x0f:
+switch (extract32(insn, 22, 2)) {
+case 0: /* SUDOT */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
+ gen_helper_gvec_sudot_idx_b);
+return;
+case 1: /* BFDOT */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
+ gen_helper_gvec_bfdot_idx);
+return;
+case 2: /* USDOT */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
+ gen_helper_gvec_usdot_idx_b);
+return;
+}
+g_assert_not_reached();
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
 case 0x15: /* FCMLA #180 */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 3527430c1a..ef6828c632 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8598,3 +8598,15 @@ static bool trans_BFDOT_zzzz(DisasContext *s, arg_rrrr_esz *a)
 }
 return true;
 }
+
+static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve_bf16, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_zzzz(s, gen_helper_gvec_bfdot_idx,
+  a->rd, a->rn, a->rm, a->ra, a->index);
+}
+return true;
+}
diff --git 

[PATCH v1 09/11] target/arm: Implement bfloat widening fma (vector)

2021-04-16 Thread Richard Henderson
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  3 +++
 target/arm/neon-shared.decode   |  3 +++
 target/arm/sve.decode   |  3 +++
 target/arm/translate-a64.c  | 13 +
 target/arm/translate-sve.c  | 30 ++
 target/arm/vec_helper.c | 16 
 target/arm/translate-neon.c.inc |  9 +
 7 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 74f8bc766f..2c6f0cecfa 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1011,6 +1011,9 @@ DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index 4e0a25d27c..b61addd98b 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -70,6 +70,9 @@ VUSMMLA 1100 1.10   1100 .1.0  \
 VMMLA_b16   1100 0.00   1100 .1.0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
 
+VFMA_b16    110 0 0.11   1000 . q:1 . 1  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+
 VCMLA_scalar    1110 0 . rot:2   1000 . q:1 index:1 0 vm:4 \
vn=%vn_dp vd=%vd_dp size=1
 VCMLA_scalar    1110 1 . rot:2   1000 . q:1 . 0  \
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index aa8d5e4b8f..322bef24cf 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1578,6 +1578,9 @@ FMLALT_zzzw 01100100 10 1 . 10 0 00 1 . . 
 @rda_rn_rm_e0
 FMLSLB_zzzw 01100100 10 1 . 10 1 00 0 . .  @rda_rn_rm_e0
 FMLSLT_zzzw 01100100 10 1 . 10 1 00 1 . .  @rda_rn_rm_e0
 
+BFMLALB_zzzw01100100 11 1 . 10 0 00 0 . .  @rda_rn_rm_e0
+BFMLALT_zzzw01100100 11 1 . 10 0 00 1 . .  @rda_rn_rm_e0
+
 ### SVE2 floating-point bfloat16 dot-product
BFDOT_zzzz  01100100 01 1 . 10 0 00 0 . .  @rda_rn_rm_e0
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 8636eac4a8..74794e3da3 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -12250,9 +12250,10 @@ static void 
disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = dc_isar_feature(aa64_bf16, s);
 break;
-case 0x1f: /* BFDOT */
+case 0x1f:
 switch (size) {
-case 1:
+case 1: /* BFDOT */
+case 3: /* BFMLAL{B,T} */
 feature = dc_isar_feature(aa64_bf16, s);
 break;
 default:
@@ -12346,11 +12347,15 @@ static void 
disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 case 0xd: /* BFMMLA */
 gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
 return;
-case 0xf: /* BFDOT */
+case 0xf:
 switch (size) {
-case 1:
+case 1: /* BFDOT */
 gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, 
gen_helper_gvec_bfdot);
 break;
+case 3: /* BFMLAL{B,T} */
+gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, false, is_q,
+  gen_helper_gvec_bfmlal);
+break;
 default:
 g_assert_not_reached();
 }
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 9ade521705..3af980caba 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8622,3 +8622,33 @@ static bool trans_BFMMLA(DisasContext *s, arg_rrrr_esz *a)
 }
 return true;
 }
+
+static bool do_BFMLAL_zzzw(DisasContext *s, arg_rrrr_esz *a, bool sel)
+{
+if (!dc_isar_feature(aa64_sve_bf16, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
+unsigned vsz = vec_full_reg_size(s);
+
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   status, vsz, vsz, sel,
+   gen_helper_gvec_bfmlal);
+tcg_temp_free_ptr(status);
+}
+return true;
+}
+
+static bool trans_BFMLALB_zzzw(DisasContext *s, arg_rrrr_esz *a)
+{
+return do_BFMLAL_zzzw(s, a, false);
+}
+
+static bool trans_BFMLALT_zzzw(DisasContext *s, arg_rrrr_esz *a)
+{
+return do_BFMLAL_zzzw(s, a, true);
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 623a0872f3..646a364c94 100644
--- a/target/arm/vec_helper.c
+++ 

[PATCH v1 04/11] target/arm: Implement vector float32 to bfloat16 conversion

2021-04-16 Thread Richard Henderson
This is BFCVT{N,T} for both AArch64 AdvSIMD and SVE,
and VCVT.BF16.F32 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h |  4 +++
 target/arm/helper.h |  1 +
 target/arm/neon-dp.decode   |  1 +
 target/arm/sve.decode   |  2 ++
 target/arm/sve_helper.c |  2 ++
 target/arm/translate-a64.c  | 17 +
 target/arm/translate-sve.c  | 16 
 target/arm/vfp_helper.c |  7 +
 target/arm/translate-neon.c.inc | 45 +
 9 files changed, 95 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index fa7418e706..9287e6f26c 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -1197,6 +1197,8 @@ DEF_HELPER_FLAGS_5(sve_fcvt_hd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_fcvt_sd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_bfcvt, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve_fcvtzs_hh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
@@ -2744,6 +2746,8 @@ DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve_bfcvtnt, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 0892207f80..0b52ee6256 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -144,6 +144,7 @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
 DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
 DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
 DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
+DEF_HELPER_FLAGS_2(bfcvt_pair, TCG_CALL_NO_RWG, i32, i64, ptr)
 
 DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
 DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
diff --git a/target/arm/neon-dp.decode b/target/arm/neon-dp.decode
index ec83f10ab3..fd3a01bfa0 100644
--- a/target/arm/neon-dp.decode
+++ b/target/arm/neon-dp.decode
@@ -521,6 +521,7 @@ Vimm_1r   001 . 1 . 000 ...  cmode:4 0 . 
op:1 1  @1reg_imm
 VRINTZ    001 11 . 11 .. 10  0 1011 . . 0  @2misc
 
 VCVT_F16_F32  001 11 . 11 .. 10  0 1100 0 . 0  @2misc_q0
+VCVT_B16_F32  001 11 . 11 .. 10  0 1100 1 . 0  @2misc_q0
 
 VRINTM    001 11 . 11 .. 10  0 1101 . . 0  @2misc
 
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 3d7c4fa6e5..bad81580c5 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -987,6 +987,7 @@ FNMLS_zpzzz 01100101 .. 1 . 111 ... . . 
@rdn_pg_rm_ra
 # SVE floating-point convert precision
 FCVT_sh 01100101 10 0010 00 101 ... . . @rd_pg_rn_e0
 FCVT_hs 01100101 10 0010 01 101 ... . . @rd_pg_rn_e0
+BFCVT   01100101 10 0010 10 101 ... . . @rd_pg_rn_e0
 FCVT_dh 01100101 11 0010 00 101 ... . . @rd_pg_rn_e0
 FCVT_hd 01100101 11 0010 01 101 ... . . @rd_pg_rn_e0
 FCVT_ds 01100101 11 0010 10 101 ... . . @rd_pg_rn_e0
@@ -1561,6 +1562,7 @@ RAX101000101 00 1 . 0 1 . .  
@rd_rn_rm_e0
 FCVTXNT_ds  01100100 00 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTX_ds01100101 00 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTNT_sh   01100100 10 0010 00 101 ... . .  @rd_pg_rn_e0
+BFCVTNT 01100100 10 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTLT_hs   01100100 10 0010 01 101 ... . .  @rd_pg_rn_e0
 FCVTNT_ds   01100100 11 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTLT_sd   01100100 11 0010 11 101 ... . .  @rd_pg_rn_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index c5c3017745..ae3db11c0d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4570,6 +4570,7 @@ static inline uint64_t vfp_float64_to_uint64_rtz(float64 
f, float_status *s)
 
 DO_ZPZ_FP(sve_fcvt_sh, uint32_t, H1_4, sve_f32_to_f16)
 DO_ZPZ_FP(sve_fcvt_hs, uint32_t, H1_4, sve_f16_to_f32)
+DO_ZPZ_FP(sve_bfcvt,   uint32_t, H1_4, float32_to_bfloat16)
 DO_ZPZ_FP(sve_fcvt_dh, uint64_t, , sve_f64_to_f16)
 DO_ZPZ_FP(sve_fcvt_hd, uint64_t, , sve_f16_to_f64)
 DO_ZPZ_FP(sve_fcvt_ds, uint64_t, , float64_to_float32)
@@ -7567,6 +7568,7 @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void 
*status, uint32_t desc)  \
 }
 
 DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
+DO_FCVTNT(sve_bfcvtnt,uint32_t, uint16_t, H1_4, H1_2, float32_to_bfloat16)
 DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, H1_4, H1_2, float64_to_float32)
 
 #define DO_FCVTLT(NAME, TYPEW, 

[PATCH v1 06/11] target/arm: Implement bfloat16 dot product (vector)

2021-04-16 Thread Richard Henderson
This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  3 +++
 target/arm/neon-shared.decode   |  2 ++
 target/arm/sve.decode   |  3 +++
 target/arm/translate-a64.c  | 20 +
 target/arm/translate-sve.c  | 12 ++
 target/arm/vec_helper.c | 40 +
 target/arm/translate-neon.c.inc |  9 
 7 files changed, 89 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 0b52ee6256..eb4cb2b65b 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1003,6 +1003,9 @@ DEF_HELPER_FLAGS_5(gvec_ummla_b, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index cc9f4cdd85..31a0839bbb 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -52,6 +52,8 @@ VUDOT   110 00 . 10   1101 . q:1 . 1  
\
vm=%vm_dp vn=%vn_dp vd=%vd_dp
 VUSDOT  110 01 . 10   1101 . q:1 . 0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VDOT_b16    110 00 . 00   1101 . q:1 . 0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
 
 # VFM[AS]L
 VFML    110 0 s:1 . 10   1000 . 0 . 1  \
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index bad81580c5..523140ca56 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1576,6 +1576,9 @@ FMLALT_zzzw 01100100 10 1 . 10 0 00 1 . .  @rda_rn_rm_e0
 FMLSLB_zzzw 01100100 10 1 . 10 1 00 0 . .  @rda_rn_rm_e0
 FMLSLT_zzzw 01100100 10 1 . 10 1 00 1 . .  @rda_rn_rm_e0
 
+### SVE2 floating-point bfloat16 dot-product
+BFDOT_  01100100 01 1 . 10 0 00 0 . .  @rda_rn_rm_e0
+
 ### SVE2 floating-point multiply-add long (indexed)
 FMLALB_zzxw 01100100 10 1 . 0100.0 . . @rrxr_3a esz=2
 FMLALT_zzxw 01100100 10 1 . 0100.1 . . @rrxr_3a esz=2
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index c528fb2cf0..fc16e0a126 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -12243,6 +12243,16 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = dc_isar_feature(aa64_fcma, s);
 break;
+case 0x1f: /* BFDOT */
+switch (size) {
+case 1:
+feature = dc_isar_feature(aa64_bf16, s);
+break;
+default:
+unallocated_encoding(s);
+return;
+}
+break;
 default:
 unallocated_encoding(s);
 return;
@@ -12326,6 +12336,16 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 return;
 
+case 0xf: /* BFDOT */
+switch (size) {
+case 1:
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfdot);
+break;
+default:
+g_assert_not_reached();
+}
+return;
+
 default:
 g_assert_not_reached();
 }
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index aacbabd11e..3527430c1a 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8586,3 +8586,15 @@ static bool trans_UMMLA(DisasContext *s, arg__esz *a)
 {
 return do_i8mm__ool(s, a, gen_helper_gvec_ummla_b, 0);
 }
+
+static bool trans_BFDOT_(DisasContext *s, arg__esz *a)
+{
+if (!dc_isar_feature(aa64_sve_bf16, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_(s, gen_helper_gvec_bfdot,
+  a->rd, a->rn, a->rm, a->ra, 0);
+}
+return true;
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 6c9f1e5146..e227ba6590 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -2655,3 +2655,43 @@ static void do_mmla_b(void *vd, void *vn, void *vm, void *va, uint32_t desc,
 DO_MMLA_B(gvec_smmla_b, do_smmla_b)
 DO_MMLA_B(gvec_ummla_b, do_ummla_b)
 DO_MMLA_B(gvec_usmmla_b, do_usmmla_b)
+
+/*
+ * BFloat16 Dot Product
+ */
+
+static float32 bfdotadd(float32 sum, uint32_t e1, uint32_t e2)
+{
+/* FPCR is ignored for BFDOT and BFMMLA. */
+float_status bf_status = {
+.tininess_before_rounding = float_tininess_before_rounding,
+.float_rounding_mode = float_round_to_odd_inf,
+.flush_to_zero = true,
+.flush_inputs_to_zero = true,
+.default_nan_mode = true,
+};
+float32 t1, t2;
+
+/*
+ * Extract each BFloat16 from the element pair, and shift
+ * them such that 

[PATCH v1 11/11] target/arm: Enable BFloat16 extensions

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu64.c   | 3 +++
 target/arm/cpu_tcg.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 379f90fab8..db4f48edcf 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -660,6 +660,7 @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64ISAR1, FCMA, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, SB, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
+t = FIELD_DP64(t, ID_AA64ISAR1, BF16, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
 t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
@@ -707,6 +708,7 @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
 t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2);  /* PMULL */
 t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, BFLOAT16, 1);
 t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
 t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
 t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
@@ -730,6 +732,7 @@ static void aarch64_max_initfn(Object *obj)
 u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
 u = FIELD_DP32(u, ID_ISAR6, SB, 1);
 u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
+u = FIELD_DP32(u, ID_ISAR6, BF16, 1);
 u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
 cpu->isar.id_isar6 = u;
 
diff --git a/target/arm/cpu_tcg.c b/target/arm/cpu_tcg.c
index 046e476f65..b2463cf109 100644
--- a/target/arm/cpu_tcg.c
+++ b/target/arm/cpu_tcg.c
@@ -968,6 +968,7 @@ static void arm_max_initfn(Object *obj)
 t = FIELD_DP32(t, ID_ISAR6, FHM, 1);
 t = FIELD_DP32(t, ID_ISAR6, SB, 1);
 t = FIELD_DP32(t, ID_ISAR6, SPECRES, 1);
+t = FIELD_DP32(t, ID_ISAR6, BF16, 1);
 cpu->isar.id_isar6 = t;
 
 t = cpu->isar.mvfr1;
-- 
2.25.1




[PATCH v1 05/11] fpu: Add float_round_to_odd_inf

2021-04-16 Thread Richard Henderson
For Arm BFDOT and BFMMLA, we need a version of round-to-odd
that overflows to infinity, instead of the max normal number.

Signed-off-by: Richard Henderson 
---
 include/fpu/softfloat-types.h | 4 +++-
 fpu/softfloat.c   | 8 ++--
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/fpu/softfloat-types.h b/include/fpu/softfloat-types.h
index 8a3f20fae9..3b757c3d6a 100644
--- a/include/fpu/softfloat-types.h
+++ b/include/fpu/softfloat-types.h
@@ -134,8 +134,10 @@ typedef enum __attribute__((__packed__)) {
 float_round_up   = 2,
 float_round_to_zero  = 3,
 float_round_ties_away= 4,
-/* Not an IEEE rounding mode: round to the closest odd mantissa value */
+/* Not an IEEE rounding mode: round to closest odd, overflow to max */
 float_round_to_odd   = 5,
+/* Not an IEEE rounding mode: round to closest odd, overflow to inf */
+float_round_to_odd_inf   = 6,
 } FloatRoundMode;
 
 /*
diff --git a/fpu/softfloat.c b/fpu/softfloat.c
index 67cfa0fd82..76097679b0 100644
--- a/fpu/softfloat.c
+++ b/fpu/softfloat.c
@@ -694,13 +694,12 @@ static FloatParts round_canonical(FloatParts p, float_status *s,
 
 switch (p.cls) {
 case float_class_normal:
+overflow_norm = false;
 switch (s->float_rounding_mode) {
 case float_round_nearest_even:
-overflow_norm = false;
 inc = ((frac & roundeven_mask) != frac_lsbm1 ? frac_lsbm1 : 0);
 break;
 case float_round_ties_away:
-overflow_norm = false;
 inc = frac_lsbm1;
 break;
 case float_round_to_zero:
@@ -717,6 +716,8 @@ static FloatParts round_canonical(FloatParts p, float_status *s,
 break;
 case float_round_to_odd:
 overflow_norm = true;
+/* fall through */
+case float_round_to_odd_inf:
 inc = frac & frac_lsb ? 0 : round_mask;
 break;
 default:
@@ -771,6 +772,7 @@ static FloatParts round_canonical(FloatParts p, float_status *s,
? frac_lsbm1 : 0);
 break;
 case float_round_to_odd:
+case float_round_to_odd_inf:
 inc = frac & frac_lsb ? 0 : round_mask;
 break;
 default:
@@ -6860,6 +6862,8 @@ float128 float128_round_to_int(float128 a, float_status *status)
 
 case float_round_to_zero:
 break;
+default:
+g_assert_not_reached();
 }
 return packFloat128( aSign, 0, 0, 0 );
 }
-- 
2.25.1




[PATCH v1 08/11] target/arm: Implement bfloat16 matrix multiply accumulate

2021-04-16 Thread Richard Henderson
This is BFMMLA for both AArch64 AdvSIMD and SVE,
and VMMLA.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  3 +++
 target/arm/neon-shared.decode   |  2 ++
 target/arm/sve.decode   |  6 +++--
 target/arm/translate-a64.c  | 10 +
 target/arm/translate-sve.c  | 12 ++
 target/arm/vec_helper.c | 40 +
 target/arm/translate-neon.c.inc |  9 
 7 files changed, 80 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index af0ee8f693..74f8bc766f 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1008,6 +1008,9 @@ DEF_HELPER_FLAGS_5(gvec_bfdot, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_bfdot_idx, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index fa3cf14e3a..4e0a25d27c 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -67,6 +67,8 @@ VUMMLA  1100 0.10   1100 .1.1  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
 VUSMMLA 1100 1.10   1100 .1.0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VMMLA_b16   1100 0.00   1100 .1.0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
 
 VCMLA_scalar    1110 0 . rot:2   1000 . q:1 index:1 0 vm:4 \
vn=%vn_dp vd=%vd_dp size=1
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index d5e1e5d400..aa8d5e4b8f 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1519,8 +1519,10 @@ SQRDCMLAH_  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
 USDOT_  01000100 .. 0 . 011 110 . .  @rda_rn_rm
 
 ### SVE2 floating point matrix multiply accumulate
-
-FMMLA   01100100 .. 1 . 111001 . .  @rda_rn_rm
+{
+  BFMMLA01100100 01 1 . 111 001 . .  @rda_rn_rm_e0
+  FMMLA 01100100 .. 1 . 111 001 . .  @rda_rn_rm
+}
 
 ### SVE2 Memory Gather Load Group
 
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f60afbbd06..8636eac4a8 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -12243,6 +12243,13 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = dc_isar_feature(aa64_fcma, s);
 break;
+case 0x1d: /* BFMMLA */
+if (size != MO_16 || !is_q) {
+unallocated_encoding(s);
+return;
+}
+feature = dc_isar_feature(aa64_bf16, s);
+break;
 case 0x1f: /* BFDOT */
 switch (size) {
 case 1:
@@ -12336,6 +12343,9 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 return;
 
+case 0xd: /* BFMMLA */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_bfmmla);
+return;
 case 0xf: /* BFDOT */
 switch (size) {
 case 1:
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ef6828c632..9ade521705 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8610,3 +8610,15 @@ static bool trans_BFDOT_zzxz(DisasContext *s, arg_rrxr_esz *a)
 }
 return true;
 }
+
+static bool trans_BFMMLA(DisasContext *s, arg__esz *a)
+{
+if (!dc_isar_feature(aa64_sve_bf16, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_(s, gen_helper_gvec_bfmmla,
+  a->rd, a->rn, a->rm, a->ra, 0);
+}
+return true;
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 3e26fb0e5f..623a0872f3 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -2715,3 +2715,43 @@ void HELPER(gvec_bfdot_idx)(void *vd, void *vn, void *vm,
 }
 clear_tail(d, opr_sz, simd_maxsz(desc));
 }
+
+void HELPER(gvec_bfmmla)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
+{
+intptr_t s, opr_sz = simd_oprsz(desc);
+float32 *d = vd, *a = va;
+uint32_t *n = vn, *m = vm;
+
+for (s = 0; s < opr_sz / 4; s += 4) {
+float32 sum00, sum01, sum10, sum11;
+
+/*
+ * Process the entire segment at once, writing back the
+ * results only after we've consumed all of the inputs.
+ *
+ * Key to indices by column:
+ *   i   j   i   k j   k
+ */
+sum00 = a[s + H4(0 + 0)];
+sum00 = bfdotadd(sum00, n[s + H4(0 + 0)], m[s + H4(0 + 0)]);
+sum00 = bfdotadd(sum00, n[s + H4(0 + 1)], m[s + H4(0 + 1)]);
+
+sum01 = a[s + H4(0 + 1)];
+sum01 = bfdotadd(sum01, n[s + H4(0 + 0)], m[s + H4(2 + 0)]);
+sum01 = bfdotadd(sum01, n[s + 

[PATCH v1 03/11] target/arm: Implement scalar float32 to bfloat16 conversion

2021-04-16 Thread Richard Henderson
This is the 64-bit BFCVT and the 32-bit VCVT{B,T}.BF16.F32.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  1 +
 target/arm/vfp.decode  |  2 ++
 target/arm/translate-a64.c | 19 +++
 target/arm/vfp_helper.c|  5 +
 target/arm/translate-vfp.c.inc | 24 
 5 files changed, 51 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 33df62f44d..0892207f80 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -143,6 +143,7 @@ DEF_HELPER_3(vfp_cmped, void, f64, f64, env)
 
 DEF_HELPER_2(vfp_fcvtds, f64, f32, env)
 DEF_HELPER_2(vfp_fcvtsd, f32, f64, env)
+DEF_HELPER_FLAGS_2(bfcvt, TCG_CALL_NO_RWG, i32, f32, ptr)
 
 DEF_HELPER_2(vfp_uitoh, f16, i32, ptr)
 DEF_HELPER_2(vfp_uitos, f32, i32, ptr)
diff --git a/target/arm/vfp.decode b/target/arm/vfp.decode
index 6f7f28f9a4..52535d9b0b 100644
--- a/target/arm/vfp.decode
+++ b/target/arm/vfp.decode
@@ -205,6 +205,8 @@ VCVT_f64_f16  1110 1.11 0010  1011 t:1 1.0  \
 
 # VCVTB and VCVTT to f16: Vd format is always vd_sp;
 # Vm format depends on size bit
+VCVT_b16_f32  1110 1.11 0011  1001 t:1 1.0  \
+ vd=%vd_sp vm=%vm_sp
 VCVT_f16_f32  1110 1.11 0011  1010 t:1 1.0  \
  vd=%vd_sp vm=%vm_sp
 VCVT_f16_f64  1110 1.11 0011  1011 t:1 1.0  \
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d8ec219bb2..d767194cc7 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -6288,6 +6288,9 @@ static void handle_fp_1src_single(DisasContext *s, int opcode, int rd, int rn)
 case 0x3: /* FSQRT */
 gen_helper_vfp_sqrts(tcg_res, tcg_op, cpu_env);
 goto done;
+case 0x6: /* BFCVT */
+gen_fpst = gen_helper_bfcvt;
+break;
 case 0x8: /* FRINTN */
 case 0x9: /* FRINTP */
 case 0xa: /* FRINTM */
@@ -6565,6 +6568,22 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 }
 break;
 
+case 0x6:
+switch (type) {
+case 1: /* BFCVT */
+if (!dc_isar_feature(aa64_bf16, s)) {
+goto do_unallocated;
+}
+if (!fp_access_check(s)) {
+return;
+}
+handle_fp_1src_single(s, opcode, rd, rn);
+break;
+default:
+goto do_unallocated;
+}
+break;
+
 default:
 do_unallocated:
 unallocated_encoding(s);
diff --git a/target/arm/vfp_helper.c b/target/arm/vfp_helper.c
index 01b9d8557f..fe7a2a5daa 100644
--- a/target/arm/vfp_helper.c
+++ b/target/arm/vfp_helper.c
@@ -408,6 +408,11 @@ float32 VFP_HELPER(fcvts, d)(float64 x, CPUARMState *env)
 return float64_to_float32(x, &env->vfp.fp_status);
 }
 
+uint32_t HELPER(bfcvt)(float32 x, void *status)
+{
+return float32_to_bfloat16(x, status);
+}
+
 /*
  * VFP3 fixed point conversion. The AArch32 versions of fix-to-float
  * must always round-to-nearest; the AArch64 ones honour the FPSCR
diff --git a/target/arm/translate-vfp.c.inc b/target/arm/translate-vfp.c.inc
index e20d9c7ba6..709d1fddcf 100644
--- a/target/arm/translate-vfp.c.inc
+++ b/target/arm/translate-vfp.c.inc
@@ -3003,6 +3003,30 @@ static bool trans_VCVT_f64_f16(DisasContext *s, arg_VCVT_f64_f16 *a)
 return true;
 }
 
+static bool trans_VCVT_b16_f32(DisasContext *s, arg_VCVT_b16_f32 *a)
+{
+TCGv_ptr fpst;
+TCGv_i32 tmp;
+
+if (!dc_isar_feature(aa32_bf16, s)) {
+return false;
+}
+
+if (!vfp_access_check(s)) {
+return true;
+}
+
+fpst = fpstatus_ptr(FPST_FPCR);
+tmp = tcg_temp_new_i32();
+
+vfp_load_reg32(tmp, a->vm);
+gen_helper_bfcvt(tmp, tmp, fpst);
+tcg_gen_st16_i32(tmp, cpu_env, vfp_f16_offset(a->vd, a->t));
+tcg_temp_free_ptr(fpst);
+tcg_temp_free_i32(tmp);
+return true;
+}
+
 static bool trans_VCVT_f16_f32(DisasContext *s, arg_VCVT_f16_f32 *a)
 {
 TCGv_ptr fpst;
-- 
2.25.1




[PATCH v1 02/11] target/arm: Unify unallocated path in disas_fp_1src

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/translate-a64.c | 15 ++-
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 668edf3a00..d8ec219bb2 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -6509,8 +6509,7 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 int rd = extract32(insn, 0, 5);
 
 if (mos) {
-unallocated_encoding(s);
-return;
+goto do_unallocated;
 }
 
 switch (opcode) {
@@ -6519,8 +6518,7 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 /* FCVT between half, single and double precision */
 int dtype = extract32(opcode, 0, 2);
 if (type == 2 || dtype == type) {
-unallocated_encoding(s);
-return;
+goto do_unallocated;
 }
 if (!fp_access_check(s)) {
 return;
@@ -6532,8 +6530,7 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 
 case 0x10 ... 0x13: /* FRINT{32,64}{X,Z} */
 if (type > 1 || !dc_isar_feature(aa64_frint, s)) {
-unallocated_encoding(s);
-return;
+goto do_unallocated;
 }
 /* fall through */
 case 0x0 ... 0x3:
@@ -6555,8 +6552,7 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 break;
 case 3:
 if (!dc_isar_feature(aa64_fp16, s)) {
-unallocated_encoding(s);
-return;
+goto do_unallocated;
 }
 
 if (!fp_access_check(s)) {
@@ -6565,11 +6561,12 @@ static void disas_fp_1src(DisasContext *s, uint32_t insn)
 handle_fp_1src_half(s, opcode, rd, rn);
 break;
 default:
-unallocated_encoding(s);
+goto do_unallocated;
 }
 break;
 
 default:
+do_unallocated:
 unallocated_encoding(s);
 break;
 }
-- 
2.25.1




[PATCH v1 10/11] target/arm: Implement bfloat widening fma (indexed)

2021-04-16 Thread Richard Henderson
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  2 ++
 target/arm/neon-shared.decode   |  2 ++
 target/arm/sve.decode   |  2 ++
 target/arm/translate-a64.c  | 15 ++-
 target/arm/translate-sve.c  | 30 ++
 target/arm/vec_helper.c | 22 ++
 target/arm/translate-neon.c.inc | 10 ++
 7 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 2c6f0cecfa..cbcaab2ce0 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -1013,6 +1013,8 @@ DEF_HELPER_FLAGS_5(gvec_bfmmla, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_6(gvec_bfmlal, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_bfmlal_idx, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index b61addd98b..df80e6ebf6 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -95,3 +95,5 @@ VFML_scalar 1110 0 . 0 s:1   1000 . 0 . 1 index:1 ... \
rm=%vfml_scalar_q0_rm vn=%vn_sp vd=%vd_dp q=0
 VFML_scalar 1110 0 . 0 s:1   1000 . 1 . 1 . rm:3 \
index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp q=1
+VFMA_b16_scal   1110 0.11   1000 . q:1 . 1 . vm:3 \
+   index=%vfml_scalar_q1_index vn=%vn_dp vd=%vd_dp
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 322bef24cf..69f979fb47 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1589,6 +1589,8 @@ FMLALB_zzxw 01100100 10 1 . 0100.0 . . @rrxr_3a esz=2
 FMLALT_zzxw 01100100 10 1 . 0100.1 . . @rrxr_3a esz=2
 FMLSLB_zzxw 01100100 10 1 . 0110.0 . . @rrxr_3a esz=2
 FMLSLT_zzxw 01100100 10 1 . 0110.1 . . @rrxr_3a esz=2
+BFMLALB_zzxw01100100 11 1 . 0100.0 . . @rrxr_3a esz=2
+BFMLALT_zzxw01100100 11 1 . 0100.1 . . @rrxr_3a esz=2
 
 ### SVE2 floating-point bfloat16 dot-product (indexed)
 BFDOT_zzxz  01100100 01 1 . 01 . . @rrxr_2 esz=2
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 74794e3da3..7842dd51be 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -13480,18 +13480,27 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 unallocated_encoding(s);
 return;
 }
+size = MO_32;
 break;
 case 1: /* BFDOT */
 if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
 unallocated_encoding(s);
 return;
 }
+size = MO_32;
+break;
+case 3: /* BFMLAL{B,T} */
+if (is_scalar || !dc_isar_feature(aa64_bf16, s)) {
+unallocated_encoding(s);
+return;
+}
+/* can't set is_fp without other incorrect size checks */
+size = MO_16;
 break;
 default:
 unallocated_encoding(s);
 return;
 }
-size = MO_32;
 break;
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
@@ -13621,6 +13630,10 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
  gen_helper_gvec_usdot_idx_b);
 return;
+case 3: /* BFMLAL{B,T} */
+gen_gvec_op4_fpst(s, 1, rd, rn, rm, rd, 0, (index << 1) | is_q,
+  gen_helper_gvec_bfmlal_idx);
+return;
 }
 g_assert_not_reached();
 case 0x11: /* FCMLA #0 */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 3af980caba..7f33bc4682 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8652,3 +8652,33 @@ static bool trans_BFMLALT_zzzw(DisasContext *s, arg__esz *a)
 {
 return do_BFMLAL_zzzw(s, a, true);
 }
+
+static bool do_BFMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sel)
+{
+if (!dc_isar_feature(aa64_sve_bf16, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+TCGv_ptr status = fpstatus_ptr(FPST_FPCR);
+unsigned vsz = vec_full_reg_size(s);
+
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   status, vsz, vsz, (a->index << 1) | sel,
+   gen_helper_gvec_bfmlal_idx);
+tcg_temp_free_ptr(status);
+}
+return true;
+}
+

[PATCH v1 for-6.1 00/11] target/arm: Implement BFloat16

2021-04-16 Thread Richard Henderson
Based-on: 20210416210240.1591291-1-richard.hender...@linaro.org
("[PATCH v5 for-6.1 00/81] target/arm: Implement SVE2")

https://gitlab.com/rth7680/qemu/-/tree/tgt-arm-bf16
https://gitlab.com/rth7680/qemu/-/commit/2ecc372b672d11fdc4e2573d789bfb3f5e6cba48

Bfloat16 is a set of two tightly-coupled features that extend AArch32 NEON,
AArch64 AdvSIMD, and AArch64 SVE1.  That said, the SVE2 patch set already
provides helper functions and decode patterns that are useful here, so I've
based this patchset on that series.

Tested against FVP 11.13.36 via RISU.


r~


Richard Henderson (11):
  target/arm: Add isar_feature_{aa32,aa64,aa64_sve}_bf16
  target/arm: Unify unallocated path in disas_fp_1src
  target/arm: Implement scalar float32 to bfloat16 conversion
  target/arm: Implement vector float32 to bfloat16 conversion
  fpu: Add float_round_to_odd_inf
  target/arm: Implement bfloat16 dot product (vector)
  target/arm: Implement bfloat16 dot product (indexed)
  target/arm: Implement bfloat16 matrix multiply accumulate
  target/arm: Implement bfloat widening fma (vector)
  target/arm: Implement bfloat widening fma (indexed)
  target/arm: Enable BFloat16 extensions

 include/fpu/softfloat-types.h   |   4 +-
 target/arm/cpu.h|  15 
 target/arm/helper-sve.h |   4 +
 target/arm/helper.h |  15 
 target/arm/neon-dp.decode   |   1 +
 target/arm/neon-shared.decode   |  11 +++
 target/arm/sve.decode   |  19 -
 target/arm/vfp.decode   |   2 +
 fpu/softfloat.c |   8 +-
 target/arm/cpu64.c  |   3 +
 target/arm/cpu_tcg.c|   1 +
 target/arm/sve_helper.c |   2 +
 target/arm/translate-a64.c  | 142 +++-
 target/arm/translate-sve.c  | 112 +
 target/arm/vec_helper.c | 138 +++
 target/arm/vfp_helper.c |  12 +++
 target/arm/translate-neon.c.inc |  91 
 target/arm/translate-vfp.c.inc  |  24 ++
 18 files changed, 580 insertions(+), 24 deletions(-)

-- 
2.25.1




[PATCH v1 01/11] target/arm: Add isar_feature_{aa32, aa64, aa64_sve}_bf16

2021-04-16 Thread Richard Henderson
Note that the SVE BFLOAT16 support does not require SVE2;
it is an independent extension.

Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h | 15 +++
 1 file changed, 15 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 134dc65e34..38db20c721 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3783,6 +3783,11 @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
 return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
 }
 
+static inline bool isar_feature_aa32_bf16(const ARMISARegisters *id)
+{
+return FIELD_EX32(id->id_isar6, ID_ISAR6, BF16) != 0;
+}
+
 static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
 {
 return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
@@ -4112,6 +4117,11 @@ static inline bool isar_feature_aa64_dcpodp(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, DPB) >= 2;
 }
 
+static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
+}
+
 static inline bool isar_feature_aa64_fp_simd(const ARMISARegisters *id)
 {
 /* We always set the AdvSIMD and FP fields identically.  */
@@ -4256,6 +4266,11 @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
 }
 
+static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
+}
+
 static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
-- 
2.25.1




Re: [PATCH v5 for-6.1 00/81] target/arm: Implement SVE2

2021-04-16 Thread no-reply
Patchew URL: https://patchew.org/QEMU/20210416210240.1591291-1-richard.hender...@linaro.org/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Message-id: 20210416210240.1591291-1-richard.hender...@linaro.org
Subject: [PATCH v5 for-6.1 00/81] target/arm: Implement SVE2

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag] patchew/20210416210240.1591291-1-richard.hender...@linaro.org -> patchew/20210416210240.1591291-1-richard.hender...@linaro.org
Switched to a new branch 'test'
9b57850 target/arm: Enable SVE2 and some extensions
ef27737 target/arm: Implement integer matrix multiply accumulate
37e3a60 target/arm: Implement aarch32 VSUDOT, VUSDOT
2418ef0 target/arm: Split decode of VSDOT and VUDOT
36777eb target/arm: Fix decode for VDOT (indexed)
85ce0a8 target/arm: Remove unused fpst from VDOT_scalar
5a12019 target/arm: Split out do_neon_ddda_fpst
52cd36c target/arm: Implement aarch64 SUDOT, USDOT
6212a60 target/arm: Implement SVE2 fp multiply-add long
fc626ae target/arm: Implement SVE2 bitwise shift immediate
ea844f4 target/arm: Implement 128-bit ZIP, UZP, TRN
cb4aff9 target/arm: Implement SVE2 LD1RO
406b15e target/arm: Share table of sve load functions
2aaa71d target/arm: Implement SVE2 FLOGB
43874d7 target/arm: Implement SVE2 FCVTXNT, FCVTX
3b7c16b target/arm: Implement SVE2 FCVTLT
db7cc0d target/arm: Implement SVE2 FCVTNT
f178783 target/arm: Implement SVE2 TBL, TBX
7855ae6 target/arm: Implement SVE2 crypto constructive binary operations
95dbbf6 target/arm: Implement SVE2 crypto destructive binary operations
8271888 target/arm: Implement SVE2 crypto unary operations
5a952bf target/arm: Implement SVE mixed sign dot product
f6bb4a2 target/arm: Implement SVE mixed sign dot product (indexed)
baedaa8 target/arm: Implement SVE2 saturating multiply high (indexed)
bf9a562 target/arm: Implement SVE2 signed saturating doubling multiply high
41d13d4 target/arm: Implement SVE2 saturating multiply (indexed)
7599063 target/arm: Implement SVE2 saturating multiply-add (indexed)
f6e99e5 target/arm: Implement SVE2 saturating multiply-add high (indexed)
976c0b7 target/arm: Implement SVE2 integer multiply-add (indexed)
d10bf6c target/arm: Implement SVE2 integer multiply (indexed)
606d6a4 target/arm: Split out formats for 3 vectors + 1 index
d29fbe1 target/arm: Split out formats for 2 vectors + 1 index
25606ae target/arm: Pass separate addend to FCMLA helpers
94b092e target/arm: Pass separate addend to {U, S}DOT helpers
1246231 target/arm: Implement SVE2 SPLICE, EXT
a23046a target/arm: Implement SVE2 FMMLA
861aad4 target/arm: Implement SVE2 gather load insns
2a53508 target/arm: Implement SVE2 scatter store insns
eb709e4 target/arm: Implement SVE2 XAR
44a5278 target/arm: Implement SVE2 HISTCNT, HISTSEG
7d2eb26 target/arm: Implement SVE2 RSUBHNB, RSUBHNT
2ea5ccf target/arm: Implement SVE2 SUBHNB, SUBHNT
14e62a9 target/arm: Implement SVE2 RADDHNB, RADDHNT
3573c67 target/arm: Implement SVE2 ADDHNB, ADDHNT
23fa167 target/arm: Implement SVE2 complex integer multiply-add
72c82f6 target/arm: Implement SVE2 integer multiply-add long
4213a0c target/arm: Implement SVE2 saturating multiply-add high
a80cca3 target/arm: Implement SVE2 saturating multiply-add long
17d9528 target/arm: Implement SVE2 MATCH, NMATCH
908209b target/arm: Implement SVE2 bitwise ternary operations
37197bf target/arm: Implement SVE2 WHILERW, WHILEWR
fb03afc target/arm: Implement SVE2 WHILEGT, WHILEGE, WHILEHI, WHILEHS
e445d2f target/arm: Implement SVE2 SQSHRN, SQRSHRN
a77122a target/arm: Implement SVE2 UQSHRN, UQRSHRN
b4f0efd target/arm: Implement SVE2 SQSHRUN, SQRSHRUN
9f3f360 target/arm: Implement SVE2 SHRN, RSHRN
a3021fd target/arm: Implement SVE2 floating-point pairwise
f34ad54 target/arm: Implement SVE2 saturating extract narrow
551d3aa target/arm: Implement SVE2 integer absolute difference and accumulate
63c7d93 target/arm: Implement SVE2 bitwise shift and insert
4a1d236 target/arm: Implement SVE2 bitwise shift right and accumulate
376b2fa target/arm: Implement SVE2 integer add/subtract long with carry
4ead521 target/arm: Implement SVE2 integer absolute difference and accumulate long
90015d6 target/arm: Implement SVE2 complex integer add
d706d5b target/arm: Implement SVE2 bitwise permute
f0dd83e target/arm: Implement SVE2 bitwise exclusive-or interleaved
327320f target/arm: Implement SVE2 bitwise shift left long
d3ef34e target/arm: Implement PMULLB and PMULLT
2445ba3 target/arm: Implement SVE2 integer multiply long
6eda9e2 target/arm: Implement SVE2 integer add/subtract wide
a9bacf4 target/arm: Implement SVE2 integer add/subtract interleaved long
e57041a target/arm: 

[PATCH v5 81/81] target/arm: Enable SVE2 and some extensions

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.c   |  1 +
 target/arm/cpu64.c | 13 +
 2 files changed, 14 insertions(+)

diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 0dd623e590..30fd5d5ff7 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -1464,6 +1464,7 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
 
 u = cpu->isar.id_isar6;
 u = FIELD_DP32(u, ID_ISAR6, JSCVT, 0);
+u = FIELD_DP32(u, ID_ISAR6, I8MM, 0);
 cpu->isar.id_isar6 = u;
 
 u = cpu->isar.mvfr0;
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index f0a9e968c9..379f90fab8 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -662,6 +662,7 @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64ISAR1, SPECRES, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, FRINTTS, 1);
 t = FIELD_DP64(t, ID_AA64ISAR1, LRCPC, 2); /* ARMv8.4-RCPC */
+t = FIELD_DP64(t, ID_AA64ISAR1, I8MM, 1);
 cpu->isar.id_aa64isar1 = t;
 
 t = cpu->isar.id_aa64pfr0;
@@ -702,6 +703,17 @@ static void aarch64_max_initfn(Object *obj)
 t = FIELD_DP64(t, ID_AA64MMFR2, ST, 1); /* TTST */
 cpu->isar.id_aa64mmfr2 = t;
 
+t = cpu->isar.id_aa64zfr0;
+t = FIELD_DP64(t, ID_AA64ZFR0, SVEVER, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, AES, 2);  /* PMULL */
+t = FIELD_DP64(t, ID_AA64ZFR0, BITPERM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, SHA3, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, SM4, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, I8MM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, F32MM, 1);
+t = FIELD_DP64(t, ID_AA64ZFR0, F64MM, 1);
+cpu->isar.id_aa64zfr0 = t;
+
 /* Replicate the same data to the 32-bit id registers.  */
 u = cpu->isar.id_isar5;
 u = FIELD_DP32(u, ID_ISAR5, AES, 2); /* AES + PMULL */
@@ -718,6 +730,7 @@ static void aarch64_max_initfn(Object *obj)
 u = FIELD_DP32(u, ID_ISAR6, FHM, 1);
 u = FIELD_DP32(u, ID_ISAR6, SB, 1);
 u = FIELD_DP32(u, ID_ISAR6, SPECRES, 1);
+u = FIELD_DP32(u, ID_ISAR6, I8MM, 1);
 cpu->isar.id_isar6 = u;
 
 u = cpu->isar.id_pfr0;
-- 
2.25.1




[PATCH v5 76/81] target/arm: Remove unused fpst from VDOT_scalar

2021-04-16 Thread Richard Henderson
Cut and paste error from another pattern.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-neon.c.inc | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index 6385d13a7e..c1fbe21ae6 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -316,7 +316,6 @@ static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
 {
 gen_helper_gvec_4 *fn_gvec;
 int opr_sz;
-TCGv_ptr fpst;
 
 if (!dc_isar_feature(aa32_dp, s)) {
 return false;
@@ -338,13 +337,11 @@ static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
 
 fn_gvec = a->u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
 opr_sz = (1 + a->q) * 8;
-fpst = fpstatus_ptr(FPST_STD);
 tcg_gen_gvec_4_ool(vfp_reg_offset(1, a->vd),
vfp_reg_offset(1, a->vn),
vfp_reg_offset(1, a->rm),
vfp_reg_offset(1, a->vd),
opr_sz, opr_sz, a->index, fn_gvec);
-tcg_temp_free_ptr(fpst);
 return true;
 }
 
-- 
2.25.1
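For context on gen_helper_gvec_sdot_idx_b, which trans_VDOT_scalar dispatches to above: within each 128-bit segment, every 32-bit lane accumulates four signed 8-bit products against the single 4-byte group of the index operand selected by `index`. A simplified Python sketch of one segment (flat byte lists, an assumption made for readability; not QEMU's actual memory layout):

```python
def sdot_idx_b(acc, n, m, index):
    """One 16-byte segment of SDOT (indexed): each 32-bit lane i adds
    dot(n[4i:4i+4], m[4*index:4*index+4]) to its accumulator."""
    def s8(b):
        return b - 256 if b >= 128 else b
    sel = [s8(b) for b in m[4 * index : 4 * index + 4]]  # reused by all lanes
    return [acc[i] + sum(s8(n[4 * i + k]) * sel[k] for k in range(4))
            for i in range(4)]

out = sdot_idx_b([0, 0, 0, 0], list(range(16)), [1] * 16, 0)
assert out == [6, 22, 38, 54]
```

Because the operation is purely integer, the removed `fpst` pointer really was dead weight: no float-status state is consulted anywhere in this path.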




[PATCH v5 72/81] target/arm: Implement SVE2 bitwise shift immediate

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Implements SQSHL/UQSHL, SRSHR/URSHR, and SQSHLU

Signed-off-by: Stephen Long 
Message-Id: <20200430194159.24064-1-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 33 +
 target/arm/sve.decode  |  5 
 target/arm/sve_helper.c| 35 ++
 target/arm/translate-sve.c | 60 ++
 4 files changed, 133 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 6e9479800d..fa7418e706 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2707,6 +2707,39 @@ DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_sqshl_zpzi_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshl_zpzi_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshl_zpzi_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshl_zpzi_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_uqshl_zpzi_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uqshl_zpzi_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uqshl_zpzi_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uqshl_zpzi_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_srshr_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_srshr_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_srshr_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_srshr_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_urshr_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_urshr_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_urshr_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_urshr_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqshlu_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshlu_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshlu_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqshlu_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 32e11301a5..cfdee8955b 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
ASR_zpzi    0100 .. 000 000 100 ... .. ... .  @rdn_pg_tszimm_shr
LSR_zpzi    0100 .. 000 001 100 ... .. ... .  @rdn_pg_tszimm_shr
LSL_zpzi    0100 .. 000 011 100 ... .. ... .  @rdn_pg_tszimm_shl
ASRD        0100 .. 000 100 100 ... .. ... .  @rdn_pg_tszimm_shr
+SQSHL_zpzi  0100 .. 000 110 100 ... .. ... .  @rdn_pg_tszimm_shl
+UQSHL_zpzi  0100 .. 000 111 100 ... .. ... .  @rdn_pg_tszimm_shl
+SRSHR   0100 .. 001 100 100 ... .. ... .  @rdn_pg_tszimm_shr
+URSHR   0100 .. 001 101 100 ... .. ... .  @rdn_pg_tszimm_shr
+SQSHLU  0100 .. 001 111 100 ... .. ... .  @rdn_pg_tszimm_shl
 
 # SVE bitwise shift by vector (predicated)
 ASR_zpzz0100 .. 010 000 100 ... . .   @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d5701cb4e8..c5c3017745 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2084,6 +2084,41 @@ DO_ZPZI(sve_asrd_h, int16_t, H1_2, DO_ASRD)
 DO_ZPZI(sve_asrd_s, int32_t, H1_4, DO_ASRD)
 DO_ZPZI_D(sve_asrd_d, int64_t, DO_ASRD)
 
+/* SVE2 bitwise shift by immediate */
+DO_ZPZI(sve2_sqshl_zpzi_b, int8_t, H1, do_sqshl_b)
+DO_ZPZI(sve2_sqshl_zpzi_h, int16_t, H1_2, do_sqshl_h)
+DO_ZPZI(sve2_sqshl_zpzi_s, int32_t, H1_4, do_sqshl_s)
+DO_ZPZI_D(sve2_sqshl_zpzi_d, int64_t, do_sqshl_d)
+
+DO_ZPZI(sve2_uqshl_zpzi_b, uint8_t, H1, do_uqshl_b)
+DO_ZPZI(sve2_uqshl_zpzi_h, uint16_t, H1_2, do_uqshl_h)
+DO_ZPZI(sve2_uqshl_zpzi_s, uint32_t, H1_4, do_uqshl_s)
+DO_ZPZI_D(sve2_uqshl_zpzi_d, uint64_t, do_uqshl_d)
+
+DO_ZPZI(sve2_srshr_b, int8_t, H1, do_srshr)
+DO_ZPZI(sve2_srshr_h, int16_t, H1_2, do_srshr)
+DO_ZPZI(sve2_srshr_s, int32_t, H1_4, do_srshr)
+DO_ZPZI_D(sve2_srshr_d, int64_t, do_srshr)
+
+DO_ZPZI(sve2_urshr_b, uint8_t, H1, do_urshr)
+DO_ZPZI(sve2_urshr_h, uint16_t, H1_2, do_urshr)
+DO_ZPZI(sve2_urshr_s, uint32_t, H1_4, do_urshr)
+DO_ZPZI_D(sve2_urshr_d, uint64_t, do_urshr)
+
+#define do_suqrshl_b(n, m) \
+   ({ uint32_t discard; do_suqrshl_bhs(n, (int8_t)m, 8, false, &discard); })
+#define do_suqrshl_h(n, m) \
+   ({ uint32_t discard; do_suqrshl_bhs(n, (int16_t)m, 16, false, &discard); })

[PATCH v5 67/81] target/arm: Implement SVE2 FCVTXNT, FCVTX

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200428174332.17162-4-stepl...@quicinc.com>
[rth: Use do_frint_mode, which avoids a specific runtime helper.]
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  2 ++
 target/arm/translate-sve.c | 49 ++
 2 files changed, 41 insertions(+), 10 deletions(-)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index fb998f5f34..46153d6a84 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1533,6 +1533,8 @@ SM4EKEY 01000101 00 1 . 0 0 . .  @rd_rn_rm_e0
 RAX101000101 00 1 . 0 1 . .  @rd_rn_rm_e0
 
 ### SVE2 floating-point convert precision odd elements
+FCVTXNT_ds  01100100 00 0010 10 101 ... . .  @rd_pg_rn_e0
+FCVTX_ds01100101 00 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTNT_sh   01100100 10 0010 00 101 ... . .  @rd_pg_rn_e0
 FCVTLT_hs   01100100 10 0010 01 101 ... . .  @rd_pg_rn_e0
 FCVTNT_ds   01100100 11 0010 10 101 ... . .  @rd_pg_rn_e0
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 9cad93cb98..5b78298777 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4715,11 +4715,9 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a)
return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
 }
 
-static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
+static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a,
+  int mode, gen_helper_gvec_3_ptr *fn)
 {
-if (a->esz == 0) {
-return false;
-}
 if (sve_access_check(s)) {
 unsigned vsz = vec_full_reg_size(s);
 TCGv_i32 tmode = tcg_const_i32(mode);
@@ -4730,7 +4728,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
 tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
pred_full_reg_offset(s, a->pg),
-   status, vsz, vsz, 0, frint_fns[a->esz - 1]);
+   status, vsz, vsz, 0, fn);
 
 gen_helper_set_rmode(tmode, tmode, status);
 tcg_temp_free_i32(tmode);
@@ -4741,27 +4739,42 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
 
 static bool trans_FRINTN(DisasContext *s, arg_rpr_esz *a)
 {
-return do_frint_mode(s, a, float_round_nearest_even);
+if (a->esz == 0) {
+return false;
+}
+return do_frint_mode(s, a, float_round_nearest_even, frint_fns[a->esz - 1]);
 }
 
 static bool trans_FRINTP(DisasContext *s, arg_rpr_esz *a)
 {
-return do_frint_mode(s, a, float_round_up);
+if (a->esz == 0) {
+return false;
+}
+return do_frint_mode(s, a, float_round_up, frint_fns[a->esz - 1]);
 }
 
 static bool trans_FRINTM(DisasContext *s, arg_rpr_esz *a)
 {
-return do_frint_mode(s, a, float_round_down);
+if (a->esz == 0) {
+return false;
+}
+return do_frint_mode(s, a, float_round_down, frint_fns[a->esz - 1]);
 }
 
 static bool trans_FRINTZ(DisasContext *s, arg_rpr_esz *a)
 {
-return do_frint_mode(s, a, float_round_to_zero);
+if (a->esz == 0) {
+return false;
+}
+return do_frint_mode(s, a, float_round_to_zero, frint_fns[a->esz - 1]);
 }
 
 static bool trans_FRINTA(DisasContext *s, arg_rpr_esz *a)
 {
-return do_frint_mode(s, a, float_round_ties_away);
+if (a->esz == 0) {
+return false;
+}
+return do_frint_mode(s, a, float_round_ties_away, frint_fns[a->esz - 1]);
 }
 
 static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a)
@@ -8202,3 +8215,19 @@ static bool trans_FCVTLT_sd(DisasContext *s, arg_rpr_esz *a)
 }
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtlt_sd);
 }
+
+static bool trans_FCVTX_ds(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_frint_mode(s, a, float_round_to_odd, gen_helper_sve_fcvt_ds);
+}
+
+static bool trans_FCVTXNT_ds(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_frint_mode(s, a, float_round_to_odd, gen_helper_sve2_fcvtnt_ds);
+}
-- 
2.25.1
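On float_round_to_odd used in the patch above: FCVTX exists so a double-to-single conversion can later be narrowed again (e.g. to half precision) without double-rounding error. Round-to-odd truncates and then ORs any discarded (sticky) bits into the result's LSB, preserving the "inexact" information in the value itself. A rough Python model of the bit manipulation, restricted by assumption to inputs whose float32 result is normal and in range:

```python
import struct

def f32_round_to_odd(x):
    """float64 -> float32 with round-to-odd: truncate, then OR any
    discarded (sticky) fraction bits into the LSB of the result.
    Sketch only: assumes a normal, in-range float32 result."""
    b64 = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = b64 >> 63
    exp = (b64 >> 52) & 0x7ff
    frac = b64 & ((1 << 52) - 1)
    e32 = exp - 1023 + 127           # rebias the exponent
    assert 1 <= e32 <= 254, "sketch handles normal results only"
    frac32 = frac >> 29              # keep the top 23 fraction bits
    if frac & ((1 << 29) - 1):       # any discarded bit set?
        frac32 |= 1                  # -> force the result LSB odd
    b32 = (sign << 31) | (e32 << 23) | frac32
    return struct.unpack('<f', struct.pack('<I', b32))[0]

assert f32_round_to_odd(1.5) == 1.5
assert f32_round_to_odd(1.0 + 2.0**-40) == 1.0 + 2.0**-23
```

An exactly representable input passes through unchanged, while any inexact input lands on an odd-LSB float32, which is what makes a second rounding step safe.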




[PATCH v5 68/81] target/arm: Implement SVE2 FLOGB

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200430191405.21641-1-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
v2: Fixed esz index and c++ comments
v3: Fixed denormal arithmetic and raise invalid.
---
 target/arm/helper-sve.h|  4 +++
 target/arm/sve.decode  |  3 +++
 target/arm/sve_helper.c| 52 ++
 target/arm/translate-sve.c | 24 ++
 4 files changed, 83 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 30b6dc49c8..96bd200e73 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2713,3 +2713,7 @@ DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_fcvtlt_sd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(flogb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(flogb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(flogb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 46153d6a84..17adb393ff 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1539,3 +1539,6 @@ FCVTNT_sh   01100100 10 0010 00 101 ... . .  @rd_pg_rn_e0
 FCVTLT_hs   01100100 10 0010 01 101 ... . .  @rd_pg_rn_e0
 FCVTNT_ds   01100100 11 0010 10 101 ... . .  @rd_pg_rn_e0
 FCVTLT_sd   01100100 11 0010 11 101 ... . .  @rd_pg_rn_e0
+
+### SVE2 floating-point convert to integer
+FLOGB   01100101 00 011 esz:2 0101 pg:3 rn:5 rd:5  &rpr_esz
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 2684f40a62..754301a3a6 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -4575,6 +4575,58 @@ DO_ZPZ_FP(sve_ucvt_dh, uint64_t, , uint64_to_float16)
 DO_ZPZ_FP(sve_ucvt_ds, uint64_t, , uint64_to_float32)
 DO_ZPZ_FP(sve_ucvt_dd, uint64_t, , uint64_to_float64)
 
+static int16_t do_float16_logb_as_int(float16 a, float_status *s)
+{
+if (float16_is_normal(a)) {
+return extract16(a, 10, 5) - 15;
+} else if (float16_is_infinity(a)) {
+return INT16_MAX;
+} else if (float16_is_any_nan(a) || float16_is_zero(a)) {
+float_raise(float_flag_invalid, s);
+return INT16_MIN;
+} else {
+/*
+ * denormal: bias - fractional_zeros
+ * = bias + masked_zeros - uint32_zeros
+ */
+return -15 + 22 - clz32(extract16(a, 0, 10));
+}
+}
+
+static int32_t do_float32_logb_as_int(float32 a, float_status *s)
+{
+if (float32_is_normal(a)) {
+return extract32(a, 23, 8) - 127;
+} else if (float32_is_infinity(a)) {
+return INT32_MAX;
+} else if (float32_is_any_nan(a) || float32_is_zero(a)) {
+float_raise(float_flag_invalid, s);
+return INT32_MIN;
+} else {
+/* denormal (see above) */
+return -127 + 9 - clz32(extract32(a, 0, 23));
+}
+}
+
+static int64_t do_float64_logb_as_int(float64 a, float_status *s)
+{
+if (float64_is_normal(a)) {
+return extract64(a, 52, 11) - 1023;
+} else if (float64_is_infinity(a)) {
+return INT64_MAX;
+} else if (float64_is_any_nan(a) || float64_is_zero(a)) {
+float_raise(float_flag_invalid, s);
+return INT64_MIN;
+} else {
+/* denormal (see above) */
+return -1023 + 12 - clz64(extract64(a, 0, 52));
+}
+}
+
+DO_ZPZ_FP(flogb_h, float16, H1_2, do_float16_logb_as_int)
+DO_ZPZ_FP(flogb_s, float32, H1_4, do_float32_logb_as_int)
+DO_ZPZ_FP(flogb_d, float64, , do_float64_logb_as_int)
+
 #undef DO_ZPZ_FP
 
 static void do_fmla_zpzzz_h(void *vd, void *vn, void *vm, void *va, void *vg,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 5b78298777..fe8f87d55e 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8231,3 +8231,27 @@ static bool trans_FCVTXNT_ds(DisasContext *s, arg_rpr_esz *a)
 }
 return do_frint_mode(s, a, float_round_to_odd, gen_helper_sve2_fcvtnt_ds);
 }
+
+static bool trans_FLOGB(DisasContext *s, arg_rpr_esz *a)
+{
+static gen_helper_gvec_3_ptr * const fns[] = {
+NULL,   gen_helper_flogb_h,
+gen_helper_flogb_s, gen_helper_flogb_d
+};
+
+if (!dc_isar_feature(aa64_sve2, s) || fns[a->esz] == NULL) {
+return false;
+}
+if (sve_access_check(s)) {
+TCGv_ptr status =
+fpstatus_ptr(a->esz == MO_16 ? FPST_FPCR_F16 : FPST_FPCR);
+unsigned vsz = vec_full_reg_size(s);
+
+tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   pred_full_reg_offset(s, a->pg),
+   status, vsz, vsz, 0, fns[a->esz]);
+tcg_temp_free_ptr(status);
+}
+return true;
+}
-- 
2.25.1
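The three helpers above share one idea: for a normal input, FLOGB is just the unbiased exponent field; a denormal needs a count-leading-zeros correction; and NaN, zero, and infinity get saturated integer results (raising invalid for NaN/zero). A Python cross-check of the float32 branch, operating on raw IEEE-754 bits (written to mirror the C logic, not taken from QEMU):

```python
import struct

def float32_logb_as_int(bits):
    """Mirror of do_float32_logb_as_int: floor(log2 |x|) as an integer."""
    exp = (bits >> 23) & 0xff
    frac = bits & 0x7fffff
    if exp == 0xff:
        # NaN -> INT32_MIN (invalid op), infinity -> INT32_MAX
        return -(2**31) if frac else 2**31 - 1
    if exp == 0:
        if frac == 0:
            return -(2**31)           # zero -> INT32_MIN (invalid op)
        clz = 32 - frac.bit_length()  # denormal: -127 + 9 - clz32(frac)
        return -127 + 9 - clz
    return exp - 127                  # normal: unbiased exponent

def bits_of(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

assert float32_logb_as_int(bits_of(8.0)) == 3
assert float32_logb_as_int(1) == -149   # smallest denormal, 2**-149
```

The `-127 + 9 - clz32(frac)` form is the "bias + masked_zeros - uint32_zeros" identity from the comment: a 23-bit fraction sits in a 32-bit word, so 9 of the leading zeros are layout, not significance.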




[PATCH v5 59/81] target/arm: Implement SVE mixed sign dot product (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  5 +++
 target/arm/helper.h|  4 +++
 target/arm/sve.decode  |  4 +++
 target/arm/translate-sve.c | 16 +
 target/arm/vec_helper.c| 68 ++
 5 files changed, 97 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index e44bb8973a..132ac5d8ec 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4246,6 +4246,11 @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
 }
 
+static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
+}
+
 static inline bool isar_feature_aa64_sve2_f32mm(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
diff --git a/target/arm/helper.h b/target/arm/helper.h
index e7c463fff5..e4c6458f98 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -621,6 +621,10 @@ DEF_HELPER_FLAGS_5(gvec_sdot_idx_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_udot_idx_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sudot_idx_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usdot_idx_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 35010d755f..05360e2608 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -813,6 +813,10 @@ SQRDMLSH_zzxz_h 01000100 0. 1 . 000101 . .   @rrxr_3 esz=1
 SQRDMLSH_zzxz_s 01000100 10 1 . 000101 . .   @rrxr_2 esz=2
 SQRDMLSH_zzxz_d 01000100 11 1 . 000101 . .   @rrxr_1 esz=3
 
+# SVE mixed sign dot product (indexed)
+USDOT_zzxw_s01000100 10 1 . 000110 . .   @rrxr_2 esz=2
+SUDOT_zzxw_s01000100 10 1 . 000111 . .   @rrxr_2 esz=2
+
 # SVE2 saturating multiply-add (indexed)
 SQDMLALB_zzxw_s 01000100 10 1 . 0010.0 . .   @rrxr_3a esz=2
 SQDMLALB_zzxw_d 01000100 11 1 . 0010.0 . .   @rrxr_2a esz=3
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index b43bf939f5..1f07131cff 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3838,6 +3838,22 @@ DO_RRXR(trans_SDOT_zzxw_d, gen_helper_gvec_sdot_idx_h)
 DO_RRXR(trans_UDOT_zzxw_s, gen_helper_gvec_udot_idx_b)
 DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
 
+static bool trans_SUDOT_zzxw_s(DisasContext *s, arg_rrxr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve_i8mm, s)) {
+return false;
+}
+return do_zzxz_data(s, a, gen_helper_gvec_sudot_idx_b, a->index);
+}
+
+static bool trans_USDOT_zzxw_s(DisasContext *s, arg_rrxr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve_i8mm, s)) {
+return false;
+}
+return do_zzxz_data(s, a, gen_helper_gvec_usdot_idx_b, a->index);
+}
+
 #undef DO_RRXR
 
 static bool do_sve2_zzx_data(DisasContext *s, arg_rrx_esz *a,
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 8b7269d8e1..98b707f4f5 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -677,6 +677,74 @@ void HELPER(gvec_udot_idx_b)(void *vd, void *vn, void *vm,
 clear_tail(d, opr_sz, simd_maxsz(desc));
 }
 
+void HELPER(gvec_sudot_idx_b)(void *vd, void *vn, void *vm,
+  void *va, uint32_t desc)
+{
+intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
+intptr_t index = simd_data(desc);
+int32_t *d = vd, *a = va;
+int8_t *n = vn;
+uint8_t *m_indexed = (uint8_t *)vm + index * 4;
+
+/*
+ * Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
+ * Otherwise opr_sz is a multiple of 16.
+ */
+segend = MIN(4, opr_sz_4);
+i = 0;
+do {
+uint8_t m0 = m_indexed[i * 4 + 0];
+uint8_t m1 = m_indexed[i * 4 + 1];
+uint8_t m2 = m_indexed[i * 4 + 2];
+uint8_t m3 = m_indexed[i * 4 + 3];
+
+do {
+d[i] = (a[i] +
+n[i * 4 + 0] * m0 +
+n[i * 4 + 1] * m1 +
+n[i * 4 + 2] * m2 +
+n[i * 4 + 3] * m3);
+} while (++i < segend);
+segend = i + 4;
+} while (i < opr_sz_4);
+
+clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_usdot_idx_b)(void *vd, void *vn, void *vm,
+  void *va, uint32_t desc)
+{
+intptr_t i, segend, opr_sz = simd_oprsz(desc), opr_sz_4 = opr_sz / 4;
+intptr_t index = simd_data(desc);
+uint32_t *d = vd, *a = va;
+uint8_t *n = vn;
+int8_t *m_indexed = (int8_t *)vm + index * 4;
+
+/*
+ * Notice the special case of opr_sz == 8, from aa64/aa32 advsimd.
+ * Otherwise opr_sz is a multiple of 16.
+ */

[PATCH v5 78/81] target/arm: Split decode of VSDOT and VUDOT

2021-04-16 Thread Richard Henderson
Now that we have a common helper, sharing decode does not
save much.  Also, this will solve an upcoming naming problem.

Signed-off-by: Richard Henderson 
---
 target/arm/neon-shared.decode   |  9 ++---
 target/arm/translate-neon.c.inc | 30 ++
 2 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index facb621450..2d94369750 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -46,8 +46,9 @@ VCMLA   110 rot:2 . 1 .   1000 . q:1 . 0  \
 VCADD   110 rot:1 1 . 0 .   1000 . q:1 . 0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp size=%vcadd_size
 
-# VUDOT and VSDOT
-VDOT    110 00 . 10   1101 . q:1 . u:1  \
+VSDOT   110 00 . 10   1101 . q:1 . 0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VUDOT   110 00 . 10   1101 . q:1 . 1  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
 
 # VFM[AS]L
@@ -61,7 +62,9 @@ VCMLA_scalar    1110 0 . rot:2   1000 . q:1 index:1 0 vm:4 \
 VCMLA_scalar    1110 1 . rot:2   1000 . q:1 . 0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp size=2 index=0
 
-VDOT_scalar 1110 0 . 10   1101 . q:1 index:1 u:1 vm:4 \
+VSDOT_scalar    1110 0 . 10   1101 . q:1 index:1 0 vm:4 \
+   vn=%vn_dp vd=%vd_dp
+VUDOT_scalar    1110 0 . 10   1101 . q:1 index:1 1 vm:4 \
vn=%vn_dp vd=%vd_dp
 
 %vfml_scalar_q0_rm 0:3 5:1
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index d9901c0153..2fd6478d3c 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -260,15 +260,22 @@ static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
 return true;
 }
 
-static bool trans_VDOT(DisasContext *s, arg_VDOT *a)
+static bool trans_VSDOT(DisasContext *s, arg_VSDOT *a)
 {
 if (!dc_isar_feature(aa32_dp, s)) {
 return false;
 }
 return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
-a->u
-? gen_helper_gvec_udot_b
-: gen_helper_gvec_sdot_b);
+gen_helper_gvec_sdot_b);
+}
+
+static bool trans_VUDOT(DisasContext *s, arg_VUDOT *a)
+{
+if (!dc_isar_feature(aa32_dp, s)) {
+return false;
+}
+return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
+gen_helper_gvec_udot_b);
 }
 
 static bool trans_VFML(DisasContext *s, arg_VFML *a)
@@ -320,15 +327,22 @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
  FPST_STD, gen_helper_gvec_fcmlas_idx);
 }
 
-static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
+static bool trans_VSDOT_scalar(DisasContext *s, arg_VSDOT_scalar *a)
 {
 if (!dc_isar_feature(aa32_dp, s)) {
 return false;
 }
 return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
-a->u
-? gen_helper_gvec_udot_idx_b
-: gen_helper_gvec_sdot_idx_b);
+gen_helper_gvec_sdot_idx_b);
+}
+
+static bool trans_VUDOT_scalar(DisasContext *s, arg_VUDOT_scalar *a)
+{
+if (!dc_isar_feature(aa32_dp, s)) {
+return false;
+}
+return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
+gen_helper_gvec_udot_idx_b);
 }
 
 static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
-- 
2.25.1
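For readers tracing the split above: each trans function now dispatches straight to a fixed-signedness helper, and per 32-bit lane the dot product accumulates four 8-bit products. A minimal sketch of the signed variant (one lane, plain byte lists as an assumed simplification):

```python
def sdot_lane(acc, n4, m4):
    """One 32-bit lane of VSDOT: acc += sum of four signed 8-bit products."""
    def s8(b):
        return b - 256 if b >= 128 else b   # reinterpret byte as int8
    return acc + sum(s8(x) * s8(y) for x, y in zip(n4, m4))

assert sdot_lane(10, [1, 2, 3, 4], [5, 6, 7, 8]) == 80
assert sdot_lane(0, [255, 0, 0, 0], [255, 0, 0, 0]) == 1  # (-1) * (-1)
```

The unsigned variant differs only in dropping the `s8` reinterpretation, which is exactly why splitting the decode by the `u` bit costs so little once a common helper exists.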




[PATCH v5 66/81] target/arm: Implement SVE2 FCVTLT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200428174332.17162-3-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 +
 target/arm/sve.decode  |  2 ++
 target/arm/sve_helper.c| 23 +++
 target/arm/translate-sve.c | 16 
 4 files changed, 46 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index d6b064bdc9..30b6dc49c8 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2708,3 +2708,8 @@ DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_fcvtlt_hs, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_fcvtlt_sd, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index afc53639ac..fb998f5f34 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1534,4 +1534,6 @@ RAX1    01000101 00 1 . 0 1 . .  @rd_rn_rm_e0
 
 ### SVE2 floating-point convert precision odd elements
 FCVTNT_sh   01100100 10 0010 00 101 ... . .  @rd_pg_rn_e0
+FCVTLT_hs   01100100 10 0010 01 101 ... . .  @rd_pg_rn_e0
 FCVTNT_ds   01100100 11 0010 10 101 ... . .  @rd_pg_rn_e0
+FCVTLT_sd   01100100 11 0010 11 101 ... . .  @rd_pg_rn_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 6164ae17cc..2684f40a62 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -7468,3 +7468,26 @@ void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc)  \
 
 DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
 DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, H1_4, H1_2, float64_to_float32)
+
+#define DO_FCVTLT(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc)  \
+{ \
+intptr_t i = simd_oprsz(desc);\
+uint64_t *g = vg; \
+do {  \
+uint64_t pg = g[(i - 1) >> 6];\
+do {  \
+i -= sizeof(TYPEW);   \
+if (likely((pg >> (i & 63)) & 1)) {   \
+TYPEN nn = *(TYPEN *)(vn + HN(i + sizeof(TYPEN)));\
+*(TYPEW *)(vd + HW(i)) = OP(nn, status);  \
+} \
+} while (i & 63); \
+} while (i != 0); \
+}
+
+DO_FCVTLT(sve2_fcvtlt_hs, uint32_t, uint16_t, H1_4, H1_2, sve_f16_to_f32)
+DO_FCVTLT(sve2_fcvtlt_sd, uint64_t, uint32_t, H1_4, H1_2, float32_to_float64)
+
+#undef DO_FCVTLT
+#undef DO_FCVTNT
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index df52736e3b..9cad93cb98 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8186,3 +8186,19 @@ static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
 }
return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_ds);
 }
+
+static bool trans_FCVTLT_hs(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtlt_hs);
+}
+
+static bool trans_FCVTLT_sd(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtlt_sd);
+}
-- 
2.25.1
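A note on the DO_FCVTLT loop above: it walks from the top of the vector downward so that an in-place widen (vd == vn) never overwrites a narrow element before it has been read. Semantically, wide element j is converted from narrow element 2*j+1 (the "top half"). A tiny sketch of that element mapping, with the actual float conversion abstracted to `float()`:

```python
def fcvtlt(narrow):
    """Widen the odd-numbered (top-half) narrow elements, like FCVTLT.
    Conversion itself is stubbed out with float() for illustration."""
    return [float(narrow[2 * j + 1]) for j in range(len(narrow) // 2)]

assert fcvtlt([1, 2, 3, 4]) == [2.0, 4.0]
```

FCVTNT is the mirror image: it narrows wide elements back into the odd-numbered slots, leaving the even slots untouched, which is why the two pair up for lossless round trips.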




[PATCH v5 55/81] target/arm: Implement SVE2 saturating multiply-add (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  9 +
 target/arm/sve.decode  | 18 ++
 target/arm/sve_helper.c| 30 ++
 target/arm/translate-sve.c | 32 
 4 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index fe67574741..08398800bd 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2679,3 +2679,12 @@ DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 1956d96ad5..8d2709d3cc 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -30,6 +30,8 @@
%size_23        23:2
%dtype_23_13    23:2 13:2
 %index3_22_19   22:1 19:2
+%index3_19_11   19:2 11:1
+%index2_20_11   20:1 11:1
 
 # A combination of tsz:imm3 -- extract esize.
 %tszimm_esz 22:2 5:5 !function=tszimm_esz
@@ -261,6 +263,12 @@
 @rrxr_1  .. . index:1 rm:4 .. rn:5 rd:5 \
&rrxr_esz ra=%reg_movprfx
 
+# Three registers and a scalar by N-bit index, alternate
+@rrxr_3a .. ... rm:3 .. rn:5 rd:5 \
+   &rrxr_esz ra=%reg_movprfx index=%index3_19_11
+@rrxr_2a .. ..  rm:4 .. rn:5 rd:5 \
+   &rrxr_esz ra=%reg_movprfx index=%index2_20_11
+
 ###
 # Instruction patterns.  Grouped according to the SVE encodingindex.xhtml.
 
@@ -799,6 +807,16 @@ SQRDMLSH_zzxz_h 01000100 0. 1 . 000101 . .   @rrxr_3 esz=1
 SQRDMLSH_zzxz_s 01000100 10 1 . 000101 . .   @rrxr_2 esz=2
 SQRDMLSH_zzxz_d 01000100 11 1 . 000101 . .   @rrxr_1 esz=3
 
+# SVE2 saturating multiply-add (indexed)
+SQDMLALB_zzxw_s 01000100 10 1 . 0010.0 . .   @rrxr_3a esz=2
+SQDMLALB_zzxw_d 01000100 11 1 . 0010.0 . .   @rrxr_2a esz=3
+SQDMLALT_zzxw_s 01000100 10 1 . 0010.1 . .   @rrxr_3a esz=2
+SQDMLALT_zzxw_d 01000100 11 1 . 0010.1 . .   @rrxr_2a esz=3
+SQDMLSLB_zzxw_s 01000100 10 1 . 0011.0 . .   @rrxr_3a esz=2
+SQDMLSLB_zzxw_d 01000100 11 1 . 0011.0 . .   @rrxr_2a esz=3
+SQDMLSLT_zzxw_s 01000100 10 1 . 0011.1 . .   @rrxr_3a esz=2
+SQDMLSLT_zzxw_d 01000100 11 1 . 0011.1 . .   @rrxr_2a esz=3
+
 # SVE2 integer multiply (indexed)
 MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
 MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fc4a943029..c43c38044b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1530,6 +1530,36 @@ DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t,   , DO_SQRDMLSH_D)
 
 #undef DO_ZZXZ
 
+#define DO_ZZXW(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc)  \
+{ \
+intptr_t i, j, oprsz = simd_oprsz(desc);  \
+intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN);   \
+intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 1, 3) * sizeof(TYPEN); \
+for (i = 0; i < oprsz; i += 16) { \
+TYPEW mm = *(TYPEN *)(vm + i + idx);  \
+for (j = 0; j < 16; j += sizeof(TYPEW)) { \
+TYPEW nn = *(TYPEN *)(vn + HN(i + j + sel));  \
+TYPEW aa = *(TYPEW *)(va + HW(i + j));\
+*(TYPEW *)(vd + HW(i + j)) = OP(nn, mm, aa);  \
+} \
+} \
+}
+
+#define DO_SQDMLAL_S(N, M, A)  DO_SQADD_S(A, do_sqdmull_s(N, M))
+#define DO_SQDMLAL_D(N, M, A)  do_sqadd_d(A, do_sqdmull_d(N, M))
+
+DO_ZZXW(sve2_sqdmlal_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLAL_S)
+DO_ZZXW(sve2_sqdmlal_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLAL_D)
+
+#define DO_SQDMLSL_S(N, M, A)  DO_SQSUB_S(A, do_sqdmull_s(N, M))
+#define DO_SQDMLSL_D(N, M, A)  do_sqsub_d(A, do_sqdmull_d(N, M))
+
+DO_ZZXW(sve2_sqdmlsl_idx_s, int32_t, int16_t, H1_4, H1_2, DO_SQDMLSL_S)
+DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)

[PATCH v5 80/81] target/arm: Implement integer matrix multiply accumulate

2021-04-16 Thread Richard Henderson
This is {S,U,US}MMLA for both AArch64 AdvSIMD and SVE,
and V{S,U,US}MMLA.S8 for AArch32 NEON.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h |  7 +++
 target/arm/neon-shared.decode   |  7 +++
 target/arm/sve.decode   |  6 +++
 target/arm/translate-a64.c  | 18 
 target/arm/translate-sve.c  | 27 
 target/arm/vec_helper.c | 77 +
 target/arm/translate-neon.c.inc | 27 
 7 files changed, 169 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index e8b16a401f..33df62f44d 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -994,6 +994,13 @@ DEF_HELPER_FLAGS_6(sve2_fmlal_zzxw_s, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(gvec_smmla_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_ummla_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usmmla_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index 5befaec87b..cc9f4cdd85 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -59,6 +59,13 @@ VFML    110 0 s:1 . 10   1000 . 0 . 1  \
 VFML    110 0 s:1 . 10   1000 . 1 . 1  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp q=1
 
+VSMMLA  1100 0.10   1100 .1.0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VUMMLA  1100 0.10   1100 .1.1  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VUSMMLA 1100 1.10   1100 .1.0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+
 VCMLA_scalar    1110 0 . rot:2   1000 . q:1 index:1 0 vm:4 \
vn=%vn_dp vd=%vd_dp size=1
 VCMLA_scalar    1110 1 . rot:2   1000 . q:1 . 0  \
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 63870b7539..3d7c4fa6e5 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1364,6 +1364,12 @@ USHLLT  01000101 .. 0 . 1010 11 . .  
@rd_rn_tszimm_shl
 EORBT   01000101 .. 0 . 10010 0 . .  @rd_rn_rm
 EORTB   01000101 .. 0 . 10010 1 . .  @rd_rn_rm
 
+## SVE integer matrix multiply accumulate
+
+SMMLA   01000101 00 0 . 10011 0 . .  @rda_rn_rm_e0
+USMMLA  01000101 10 0 . 10011 0 . .  @rda_rn_rm_e0
+UMMLA   01000101 11 0 . 10011 0 . .  @rda_rn_rm_e0
+
 ## SVE2 bitwise permute
 
 BEXT01000101 .. 0 . 1011 00 . .  @rd_rn_rm
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0d45a44f51..668edf3a00 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -12197,6 +12197,15 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = dc_isar_feature(aa64_i8mm, s);
 break;
+case 0x04: /* SMMLA */
+case 0x14: /* UMMLA */
+case 0x05: /* USMMLA */
+if (!is_q || size != MO_32) {
+unallocated_encoding(s);
+return;
+}
+feature = dc_isar_feature(aa64_i8mm, s);
+break;
 case 0x18: /* FCMLA, #0 */
 case 0x19: /* FCMLA, #90 */
 case 0x1a: /* FCMLA, #180 */
@@ -12241,6 +12250,15 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_usdot_b);
 return;
 
+case 0x04: /* SMMLA, UMMLA */
+gen_gvec_op4_ool(s, 1, rd, rn, rm, rd, 0,
+ u ? gen_helper_gvec_ummla_b
+ : gen_helper_gvec_smmla_b);
+return;
+case 0x05: /* USMMLA */
+gen_gvec_op4_ool(s, 1, rd, rn, rm, rd, 0, gen_helper_gvec_usmmla_b);
+return;
+
 case 0x8: /* FCMLA, #0 */
 case 0x9: /* FCMLA, #90 */
 case 0xa: /* FCMLA, #180 */
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ae628968da..cb0e7a1f68 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8543,3 +8543,30 @@ static bool trans_FMLSLT_zzxw(DisasContext *s, arg_rrxr_esz *a)
 {
 return do_FMLAL_zzxw(s, a, true, true);
 }
+
+static bool do_i8mm__ool(DisasContext *s, arg__esz *a,
+ gen_helper_gvec_4 *fn, int data)
+{
+if (!dc_isar_feature(aa64_sve_i8mm, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_(s, fn, a->rd, a->rn, a->rm, a->ra, data);
+}
+return true;
+}
+
+static bool trans_SMMLA(DisasContext *s, arg__esz *a)
+{
+return do_i8mm__ool(s, a, gen_helper_gvec_smmla_b, 0);
+}
+
+static bool trans_USMMLA(DisasContext *s, 

[PATCH 1/1] spapr_drc.c: handle hotunplug errors in drc_unisolate_logical()

2021-04-16 Thread Daniel Henrique Barboza
The Linux kernel will call set-indicator to move a DRC to 'unisolate' in
case a device removal fails. Setting a DRC that is already unisolated or
configured to 'unisolate' is a no-op for the current hypervisors that
support pSeries guests, namely QEMU and phyp, so the transition can be
used to signal hotunplug errors when the hypervisor has the code for it.

This patch changes drc_unisolate_logical() to implement this behavior in
the pSeries machine. For CPUs it's a simple matter of setting
drc->unplug_requested to 'false', while for LMBs the process is similar
to the rollback done in rtas_ibm_configure_connector(). Although at this
moment the Linux kernel only reports CPU removal errors, let's get the
code ready to handle LMBs as well.

Signed-off-by: Daniel Henrique Barboza 
---
 hw/ppc/spapr_drc.c | 23 +++
 1 file changed, 23 insertions(+)

diff --git a/hw/ppc/spapr_drc.c b/hw/ppc/spapr_drc.c
index 9e16505fa1..6918e0c9d1 100644
--- a/hw/ppc/spapr_drc.c
+++ b/hw/ppc/spapr_drc.c
@@ -151,9 +151,32 @@ static uint32_t drc_isolate_logical(SpaprDrc *drc)
 
 static uint32_t drc_unisolate_logical(SpaprDrc *drc)
 {
+SpaprMachineState *spapr = NULL;
+
 switch (drc->state) {
 case SPAPR_DRC_STATE_LOGICAL_UNISOLATE:
 case SPAPR_DRC_STATE_LOGICAL_CONFIGURED:
+/*
+ * Unisolating a logical DRC that was marked for unplug
+ * means that the kernel is refusing the removal.
+ */
+if (drc->unplug_requested && drc->dev) {
+if (spapr_drc_type(drc) == SPAPR_DR_CONNECTOR_TYPE_LMB) {
+spapr = SPAPR_MACHINE(qdev_get_machine());
+
+spapr_memory_unplug_rollback(spapr, drc->dev);
+}
+
+drc->unplug_requested = false;
+error_report("Device hotunplug rejected by the guest "
+ "for device %s", drc->dev->id);
+
+/*
+ * TODO: send a QAPI DEVICE_UNPLUG_ERROR event when
+ * it is implemented.
+ */
+}
+
 return RTAS_OUT_SUCCESS; /* Nothing to do */
 case SPAPR_DRC_STATE_LOGICAL_AVAILABLE:
 break; /* see below */
-- 
2.30.2




[PATCH v5 70/81] target/arm: Implement SVE2 LD1RO

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  4 ++
 target/arm/translate-sve.c | 97 ++
 2 files changed, 101 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 17adb393ff..df870ce23b 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1077,11 +1077,15 @@ LD_zpri 1010010 .. nreg:2 0 111 ... . . @rpri_load_msz
 # SVE load and broadcast quadword (scalar plus scalar)
 LD1RQ_zprr  1010010 .. 00 . 000 ... . . \
 @rprr_load_msz nreg=0
+LD1RO_zprr  1010010 .. 01 . 000 ... . . \
+@rprr_load_msz nreg=0
 
 # SVE load and broadcast quadword (scalar plus immediate)
 # LD1RQB, LD1RQH, LD1RQS, LD1RQD
 LD1RQ_zpri  1010010 .. 00 0 001 ... . . \
 @rpri_load_msz nreg=0
+LD1RO_zpri  1010010 .. 01 0 001 ... . . \
+@rpri_load_msz nreg=0
 
 # SVE 32-bit gather prefetch (scalar plus 32-bit scaled offsets)
 PRF 110 00 -1 - 0-- --- - 0 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 04efa037f2..1cc98a1447 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -5586,6 +5586,103 @@ static bool trans_LD1RQ_zpri(DisasContext *s, arg_rpri_load *a)
 return true;
 }
 
+static void do_ldro(DisasContext *s, int zt, int pg, TCGv_i64 addr, int dtype)
+{
+unsigned vsz = vec_full_reg_size(s);
+unsigned vsz_r32;
+TCGv_ptr t_pg;
+TCGv_i32 t_desc;
+int desc, poff, doff;
+
+if (vsz < 32) {
+/*
+ * Note that this UNDEFINED check comes after CheckSVEEnabled()
+ * in the ARM pseudocode, which is the sve_access_check() done
+ * in our caller.  We should not now return false from the caller.
+ */
+unallocated_encoding(s);
+return;
+}
+
+/* Load the first octaword using the normal predicated load helpers.  */
+
+poff = pred_full_reg_offset(s, pg);
+if (vsz > 32) {
+/*
+ * Zero-extend the first 32 bits of the predicate into a temporary.
+ * This avoids triggering an assert making sure we don't have bits
+ * set within a predicate beyond VQ, but we have lowered VQ to 2
+ * for this load operation.
+ */
+TCGv_i64 tmp = tcg_temp_new_i64();
+#ifdef HOST_WORDS_BIGENDIAN
+poff += 4;
+#endif
+tcg_gen_ld32u_i64(tmp, cpu_env, poff);
+
+poff = offsetof(CPUARMState, vfp.preg_tmp);
+tcg_gen_st_i64(tmp, cpu_env, poff);
+tcg_temp_free_i64(tmp);
+}
+
+t_pg = tcg_temp_new_ptr();
+tcg_gen_addi_ptr(t_pg, cpu_env, poff);
+
+desc = simd_desc(32, 32, zt);
+t_desc = tcg_const_i32(desc);
+
+gen_helper_gvec_mem *fn
+= ldr_fns[s->mte_active[0]][s->be_data == MO_BE][dtype][0];
+fn(cpu_env, t_pg, addr, t_desc);
+
+tcg_temp_free_ptr(t_pg);
+tcg_temp_free_i32(t_desc);
+
+/*
+ * Replicate that first octaword.
+ * The replication happens in units of 32; if the full vector size
+ * is not a multiple of 32, the final bits are zeroed.
+ */
+doff = vec_full_reg_offset(s, zt);
+vsz_r32 = QEMU_ALIGN_DOWN(vsz, 32);
+if (vsz >= 64) {
+tcg_gen_gvec_dup_mem(5, doff + 32, doff, vsz_r32 - 32, vsz - 32);
+} else if (vsz > vsz_r32) {
+/* Nop move, with side effect of clearing the tail. */
+tcg_gen_gvec_mov(MO_64, doff, doff, vsz_r32, vsz);
+}
+}
+
+static bool trans_LD1RO_zprr(DisasContext *s, arg_rprr_load *a)
+{
+if (!dc_isar_feature(aa64_sve2_f64mm, s)) {
+return false;
+}
+if (a->rm == 31) {
+return false;
+}
+if (sve_access_check(s)) {
+TCGv_i64 addr = new_tmp_a64(s);
+tcg_gen_shli_i64(addr, cpu_reg(s, a->rm), dtype_msz(a->dtype));
+tcg_gen_add_i64(addr, addr, cpu_reg_sp(s, a->rn));
+do_ldro(s, a->rd, a->pg, addr, a->dtype);
+}
+return true;
+}
+
+static bool trans_LD1RO_zpri(DisasContext *s, arg_rpri_load *a)
+{
+if (!dc_isar_feature(aa64_sve2_f64mm, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+TCGv_i64 addr = new_tmp_a64(s);
+tcg_gen_addi_i64(addr, cpu_reg_sp(s, a->rn), a->imm * 32);
+do_ldro(s, a->rd, a->pg, addr, a->dtype);
+}
+return true;
+}
+
 /* Load and broadcast element.  */
 static bool trans_LD1R_zpri(DisasContext *s, arg_rpri_load *a)
 {
-- 
2.25.1




[PATCH v5 49/81] target/arm: Pass separate addend to FCMLA helpers

2021-04-16 Thread Richard Henderson
For SVE, we potentially have a 4th argument coming from the
movprfx instruction.  Currently we do not optimize movprfx,
so the problem is not visible.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h | 20 ++---
 target/arm/translate-a64.c  | 28 ++
 target/arm/translate-sve.c  |  5 ++--
 target/arm/vec_helper.c | 50 +
 target/arm/translate-neon.c.inc | 10 ---
 5 files changed, 62 insertions(+), 51 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index f4b092ee1c..72c5bf6aca 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -629,16 +629,16 @@ DEF_HELPER_FLAGS_5(gvec_fcadds, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_fcaddd, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_5(gvec_fcmlah, TCG_CALL_NO_RWG,
-   void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
-   void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fcmlas, TCG_CALL_NO_RWG,
-   void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
-   void, ptr, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_5(gvec_fcmlad, TCG_CALL_NO_RWG,
-   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fcmlah, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fcmlah_idx, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fcmlas, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fcmlas_idx, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(gvec_fcmlad, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(neon_paddh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(neon_pmaxh, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 004ff8c019..f45b81e56d 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -709,6 +709,23 @@ static void gen_gvec_op4_ool(DisasContext *s, bool is_q, int rd, int rn,
is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
 }
 
+/*
+ * Expand a 4-operand + fpstatus pointer + simd data value operation using
+ * an out-of-line helper.
+ */
+static void gen_gvec_op4_fpst(DisasContext *s, bool is_q, int rd, int rn,
+  int rm, int ra, bool is_fp16, int data,
+  gen_helper_gvec_4_ptr *fn)
+{
+TCGv_ptr fpst = fpstatus_ptr(is_fp16 ? FPST_FPCR_F16 : FPST_FPCR);
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, rd),
+   vec_full_reg_offset(s, rn),
+   vec_full_reg_offset(s, rm),
+   vec_full_reg_offset(s, ra), fpst,
+   is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
+tcg_temp_free_ptr(fpst);
+}
+
 /* Set ZF and NF based on a 64 bit result. This is alas fiddlier
  * than the 32 bit equivalent.
  */
@@ -12220,15 +12237,15 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 rot = extract32(opcode, 0, 2);
 switch (size) {
 case 1:
-gen_gvec_op3_fpst(s, is_q, rd, rn, rm, true, rot,
+gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, true, rot,
   gen_helper_gvec_fcmlah);
 break;
 case 2:
-gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
+gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, false, rot,
   gen_helper_gvec_fcmlas);
 break;
 case 3:
-gen_gvec_op3_fpst(s, is_q, rd, rn, rm, false, rot,
+gen_gvec_op4_fpst(s, is_q, rd, rn, rm, rd, false, rot,
   gen_helper_gvec_fcmlad);
 break;
 default:
@@ -13479,9 +13496,10 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 {
 int rot = extract32(insn, 13, 2);
 int data = (index << 2) | rot;
-tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, rd),
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, rd),
vec_full_reg_offset(s, rn),
-   vec_full_reg_offset(s, rm), fpst,
+   vec_full_reg_offset(s, rm),
+   vec_full_reg_offset(s, rd), fpst,
is_q ? 16 : 8, vec_full_reg_size(s), data,
size == MO_64
? gen_helper_gvec_fcmlas_idx
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0317d5386a..ffae6884d2 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4383,7 +4383,7 @@ 

[PATCH 0/1] pSeries: handle hotunplug errors in drc_unisolate_logical()

2021-04-16 Thread Daniel Henrique Barboza
Hi,

This is the QEMU side of a kernel change being proposed in [1],
which implements a CPU hotunplug error report mechanism.

The idea was first discussed in this mailing list [2]: the RTAS
set-indicator call would be used to signal QEMU when a kernel-side
error happens during the unplug process.

Using the modified kernel and this patch, this is the result of a
failed CPU hotunplug attempt when trying to unplug the last online
CPU of the guest:

( QEMU command line: qemu-system-ppc64 -machine pseries,accel=kvm,usb=off
-smp 1,maxcpus=2,threads=1,cores=2,sockets=1 ... )

[root@localhost ~]# QEMU 5.2.92 monitor - type 'help' for more information
(qemu) device_add host-spapr-cpu-core,core-id=1,id=core1
(qemu) 

[root@localhost ~]# echo 0 > /sys/devices/system/cpu/cpu0/online
[   77.548442][   T13] IRQ 19: no longer affine to CPU0
[   77.548452][   T13] IRQ 20: no longer affine to CPU0
[   77.548458][   T13] IRQ 256: no longer affine to CPU0
[   77.548465][   T13] IRQ 258: no longer affine to CPU0
[   77.548472][   T13] IRQ 259: no longer affine to CPU0
[   77.548479][   T13] IRQ 260: no longer affine to CPU0
[   77.548485][   T13] IRQ 261: no longer affine to CPU0
[   77.548590][T0] cpu 0 (hwid 0) Ready to die...
[root@localhost ~]# (qemu) 
(qemu) device_del core1
(qemu) [   83.214073][  T100] pseries-hotplug-cpu: Failed to offline CPU PowerPC,POWER9, rc: -16
qemu-system-ppc64: Device hotunplug rejected by the guest for device core1

(qemu) 


As mentioned in the kernel change, if this is accepted I'll push
for a PAPR change to make this an official device removal error
report mechanism.


[1] https://lore.kernel.org/linuxppc-dev/20210416210216.380291-3-danielhb...@gmail.com/
[2] https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg06395.html

Daniel Henrique Barboza (1):
  spapr_drc.c: handle hotunplug errors in drc_unisolate_logical()

 hw/ppc/spapr_drc.c | 23 +++
 1 file changed, 23 insertions(+)

-- 
2.30.2




[PATCH v5 63/81] target/arm: Implement SVE2 crypto constructive binary operations

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  5 +
 target/arm/sve.decode  |  4 
 target/arm/translate-sve.c | 16 
 3 files changed, 25 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 904f5da290..b43fd066ba 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4246,6 +4246,11 @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
 }
 
+static inline bool isar_feature_aa64_sve2_sha3(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SHA3) != 0;
+}
+
 static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index fb4d32691e..7a2770cb0c 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1522,3 +1522,7 @@ AESMC   01000101 00 1011100 decrypt:1 0 rd:5
 AESE01000101 00 10001 0 11100 0 . .  @rdn_rm_e0
 AESD01000101 00 10001 0 11100 1 . .  @rdn_rm_e0
 SM4E01000101 00 10001 1 11100 0 . .  @rdn_rm_e0
+
+# SVE2 crypto constructive binary operations
+SM4EKEY 01000101 00 1 . 0 0 . .  @rd_rn_rm_e0
+RAX101000101 00 1 . 0 1 . .  @rd_rn_rm_e0
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 681bbc6174..de8a6b2a15 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8121,3 +8121,19 @@ static bool trans_SM4E(DisasContext *s, arg_rrr_esz *a)
 {
 return do_sm4(s, a, gen_helper_crypto_sm4e);
 }
+
+static bool trans_SM4EKEY(DisasContext *s, arg_rrr_esz *a)
+{
+return do_sm4(s, a, gen_helper_crypto_sm4ekey);
+}
+
+static bool trans_RAX1(DisasContext *s, arg_rrr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2_sha3, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_fn_zzz(s, gen_gvec_rax1, MO_64, a->rd, a->rn, a->rm);
+}
+return true;
+}
-- 
2.25.1




[PATCH v5 79/81] target/arm: Implement aarch32 VSUDOT, VUSDOT

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h|  5 +
 target/arm/neon-shared.decode   |  6 ++
 target/arm/translate-neon.c.inc | 27 +++
 3 files changed, 38 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index a0865e224c..134dc65e34 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3783,6 +3783,11 @@ static inline bool isar_feature_aa32_predinv(const ARMISARegisters *id)
 return FIELD_EX32(id->id_isar6, ID_ISAR6, SPECRES) != 0;
 }
 
+static inline bool isar_feature_aa32_i8mm(const ARMISARegisters *id)
+{
+return FIELD_EX32(id->id_isar6, ID_ISAR6, I8MM) != 0;
+}
+
 static inline bool isar_feature_aa32_ras(const ARMISARegisters *id)
 {
 return FIELD_EX32(id->id_pfr0, ID_PFR0, RAS) != 0;
diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index 2d94369750..5befaec87b 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -50,6 +50,8 @@ VSDOT   110 00 . 10   1101 . q:1 . 0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
 VUDOT   110 00 . 10   1101 . q:1 . 1  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VUSDOT  110 01 . 10   1101 . q:1 . 0  \
+   vm=%vm_dp vn=%vn_dp vd=%vd_dp
 
 # VFM[AS]L
 VFML    110 0 s:1 . 10   1000 . 0 . 1  \
@@ -66,6 +68,10 @@ VSDOT_scalar    1110 0 . 10   1101 . q:1 index:1 0 vm:4 \
vn=%vn_dp vd=%vd_dp
 VUDOT_scalar    1110 0 . 10   1101 . q:1 index:1 1 vm:4 \
vn=%vn_dp vd=%vd_dp
+VUSDOT_scalar   1110 1 . 00   1101 . q:1 index:1 0 vm:4 \
+   vn=%vn_dp vd=%vd_dp
+VSUDOT_scalar   1110 1 . 00   1101 . q:1 index:1 1 vm:4 \
+   vn=%vn_dp vd=%vd_dp
 
 %vfml_scalar_q0_rm 0:3 5:1
 %vfml_scalar_q1_index 5:1 3:1
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index 2fd6478d3c..c322615915 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -278,6 +278,15 @@ static bool trans_VUDOT(DisasContext *s, arg_VUDOT *a)
 gen_helper_gvec_udot_b);
 }
 
+static bool trans_VUSDOT(DisasContext *s, arg_VUSDOT *a)
+{
+if (!dc_isar_feature(aa32_i8mm, s)) {
+return false;
+}
+return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
+gen_helper_gvec_usdot_b);
+}
+
 static bool trans_VFML(DisasContext *s, arg_VFML *a)
 {
 int opr_sz;
@@ -345,6 +354,24 @@ static bool trans_VUDOT_scalar(DisasContext *s, arg_VUDOT_scalar *a)
 gen_helper_gvec_udot_idx_b);
 }
 
+static bool trans_VUSDOT_scalar(DisasContext *s, arg_VUSDOT_scalar *a)
+{
+if (!dc_isar_feature(aa32_i8mm, s)) {
+return false;
+}
+return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
+gen_helper_gvec_usdot_idx_b);
+}
+
+static bool trans_VSUDOT_scalar(DisasContext *s, arg_VSUDOT_scalar *a)
+{
+if (!dc_isar_feature(aa32_i8mm, s)) {
+return false;
+}
+return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
+gen_helper_gvec_sudot_idx_b);
+}
+
 static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
 {
 int opr_sz;
-- 
2.25.1




[PATCH v5 51/81] target/arm: Split out formats for 3 vectors + 1 index

2021-04-16 Thread Richard Henderson
Used by FMLA and DOT, but will shortly be used more.
Split FMLA from FMLS to avoid an extra sub field;
similarly for SDOT from UDOT.

Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 29 +++--
 target/arm/translate-sve.c | 38 --
 2 files changed, 47 insertions(+), 20 deletions(-)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index a504b55dad..74ac72bdbd 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -73,6 +73,7 @@
 _s rd pg rn rm s
 _esz   rd pg rn rm esz
 _esz   rd ra rn rm esz
+_esz   rd rn rm ra index esz
 _esz  rd pg rn rm ra esz
 _esz   rd pg rn imm esz
   rd esz pat s
@@ -252,6 +253,14 @@
 @rrx_2   .. . index:2 rm:3 .. rn:5 rd:5  _esz
 @rrx_1   .. . index:1 rm:4 .. rn:5 rd:5  _esz
 
+# Three registers and a scalar by N-bit index
+@rrxr_3  .. . ..  rm:3 .. rn:5 rd:5 \
+_esz ra=%reg_movprfx index=%index3_22_19
+@rrxr_2  .. . index:2 rm:3 .. rn:5 rd:5 \
+_esz ra=%reg_movprfx
+@rrxr_1  .. . index:1 rm:4 .. rn:5 rd:5 \
+_esz ra=%reg_movprfx
+
 ###
 # Instruction patterns.  Grouped according to the SVE encodingindex.xhtml.
 
@@ -767,10 +776,10 @@ DOT_01000100 1 sz:1 0 rm:5 0 u:1 rn:5 rd:5 \
 ra=%reg_movprfx
 
 # SVE integer dot product (indexed)
-DOT_zzxw01000100 101 index:2 rm:3 0 u:1 rn:5 rd:5 \
-sz=0 ra=%reg_movprfx
-DOT_zzxw01000100 111 index:1 rm:4 0 u:1 rn:5 rd:5 \
-sz=1 ra=%reg_movprfx
+SDOT_zzxw_s 01000100 10 1 . 00 . .   @rrxr_2 esz=2
+SDOT_zzxw_d 01000100 11 1 . 00 . .   @rrxr_1 esz=3
+UDOT_zzxw_s 01000100 10 1 . 01 . .   @rrxr_2 esz=2
+UDOT_zzxw_d 01000100 11 1 . 01 . .   @rrxr_1 esz=3
 
 # SVE floating-point complex add (predicated)
 FCADD   01100100 esz:2 0 rot:1 100 pg:3 rm:5 rd:5 \
@@ -789,12 +798,12 @@ FCMLA_zzxz  01100100 11 1 index:1 rm:4 0001 rot:2 rn:5 rd:5 \
 ### SVE FP Multiply-Add Indexed Group
 
 # SVE floating-point multiply-add (indexed)
-FMLA_zzxz   01100100 0.1 .. rm:3 0 sub:1 rn:5 rd:5 \
-ra=%reg_movprfx index=%index3_22_19 esz=1
-FMLA_zzxz   01100100 101 index:2 rm:3 0 sub:1 rn:5 rd:5 \
-ra=%reg_movprfx esz=2
-FMLA_zzxz   01100100 111 index:1 rm:4 0 sub:1 rn:5 rd:5 \
-ra=%reg_movprfx esz=3
+FMLA_zzxz   01100100 0. 1 . 00 . .  @rrxr_3 esz=1
+FMLA_zzxz   01100100 10 1 . 00 . .  @rrxr_2 esz=2
+FMLA_zzxz   01100100 11 1 . 00 . .  @rrxr_1 esz=3
+FMLS_zzxz   01100100 0. 1 . 01 . .  @rrxr_3 esz=1
+FMLS_zzxz   01100100 10 1 . 01 . .  @rrxr_2 esz=2
+FMLS_zzxz   01100100 11 1 . 01 . .  @rrxr_1 esz=3
 
 ### SVE FP Multiply Indexed Group
 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ffae6884d2..2eb21b28e1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3813,26 +3813,34 @@ static bool trans_DOT_(DisasContext *s, arg_DOT_ *a)
 return true;
 }
 
-static bool trans_DOT_zzxw(DisasContext *s, arg_DOT_zzxw *a)
+static bool do_zzxz_ool(DisasContext *s, arg_rrxr_esz *a,
+gen_helper_gvec_4 *fn)
 {
-static gen_helper_gvec_4 * const fns[2][2] = {
-{ gen_helper_gvec_sdot_idx_b, gen_helper_gvec_sdot_idx_h },
-{ gen_helper_gvec_udot_idx_b, gen_helper_gvec_udot_idx_h }
-};
-
+if (fn == NULL) {
+return false;
+}
 if (sve_access_check(s)) {
-gen_gvec_ool_(s, fns[a->u][a->sz], a->rd, a->rn, a->rm,
-  a->ra, a->index);
+gen_gvec_ool_(s, fn, a->rd, a->rn, a->rm, a->ra, a->index);
 }
 return true;
 }
 
+#define DO_RRXR(NAME, FUNC) \
+static bool NAME(DisasContext *s, arg_rrxr_esz *a)  \
+{ return do_zzxz_ool(s, a, FUNC); }
+
+DO_RRXR(trans_SDOT_zzxw_s, gen_helper_gvec_sdot_idx_b)
+DO_RRXR(trans_SDOT_zzxw_d, gen_helper_gvec_sdot_idx_h)
+DO_RRXR(trans_UDOT_zzxw_s, gen_helper_gvec_udot_idx_b)
+DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
+
+#undef DO_RRXR
 
 /*
  *** SVE Floating Point Multiply-Add Indexed Group
  */
 
-static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
+static bool do_FMLA_zzxz(DisasContext *s, arg_rrxr_esz *a, bool sub)
 {
 static gen_helper_gvec_4_ptr * const fns[3] = {
 gen_helper_gvec_fmla_idx_h,
@@ -3847,13 +3855,23 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),

[PATCH v5 69/81] target/arm: Share table of sve load functions

2021-04-16 Thread Richard Henderson
The table used by do_ldrq is a subset of the table used by do_ld_zpa;
we can share them by passing dtype instead of msz to do_ldrq.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-sve.c | 254 ++---
 1 file changed, 126 insertions(+), 128 deletions(-)

diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fe8f87d55e..04efa037f2 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -5153,128 +5153,130 @@ static void do_mem_zpa(DisasContext *s, int zt, int pg, TCGv_i64 addr,
 tcg_temp_free_i32(t_desc);
 }
 
+/* Indexed by [mte][be][dtype][nreg] */
+static gen_helper_gvec_mem * const ldr_fns[2][2][16][4] = {
+{ /* mte inactive, little-endian */
+  { { gen_helper_sve_ld1bb_r, gen_helper_sve_ld2bb_r,
+  gen_helper_sve_ld3bb_r, gen_helper_sve_ld4bb_r },
+{ gen_helper_sve_ld1bhu_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bsu_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bdu_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1sds_le_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hh_le_r, gen_helper_sve_ld2hh_le_r,
+  gen_helper_sve_ld3hh_le_r, gen_helper_sve_ld4hh_le_r },
+{ gen_helper_sve_ld1hsu_le_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hdu_le_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1hds_le_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hss_le_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1ss_le_r, gen_helper_sve_ld2ss_le_r,
+  gen_helper_sve_ld3ss_le_r, gen_helper_sve_ld4ss_le_r },
+{ gen_helper_sve_ld1sdu_le_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1bds_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bss_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bhs_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1dd_le_r, gen_helper_sve_ld2dd_le_r,
+  gen_helper_sve_ld3dd_le_r, gen_helper_sve_ld4dd_le_r } },
+
+  /* mte inactive, big-endian */
+  { { gen_helper_sve_ld1bb_r, gen_helper_sve_ld2bb_r,
+  gen_helper_sve_ld3bb_r, gen_helper_sve_ld4bb_r },
+{ gen_helper_sve_ld1bhu_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bsu_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bdu_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1sds_be_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hh_be_r, gen_helper_sve_ld2hh_be_r,
+  gen_helper_sve_ld3hh_be_r, gen_helper_sve_ld4hh_be_r },
+{ gen_helper_sve_ld1hsu_be_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hdu_be_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1hds_be_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hss_be_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1ss_be_r, gen_helper_sve_ld2ss_be_r,
+  gen_helper_sve_ld3ss_be_r, gen_helper_sve_ld4ss_be_r },
+{ gen_helper_sve_ld1sdu_be_r, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1bds_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bss_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bhs_r, NULL, NULL, NULL },
+{ gen_helper_sve_ld1dd_be_r, gen_helper_sve_ld2dd_be_r,
+  gen_helper_sve_ld3dd_be_r, gen_helper_sve_ld4dd_be_r } } },
+
+{ /* mte active, little-endian */
+  { { gen_helper_sve_ld1bb_r_mte,
+  gen_helper_sve_ld2bb_r_mte,
+  gen_helper_sve_ld3bb_r_mte,
+  gen_helper_sve_ld4bb_r_mte },
+{ gen_helper_sve_ld1bhu_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bsu_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bdu_r_mte, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1sds_le_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hh_le_r_mte,
+  gen_helper_sve_ld2hh_le_r_mte,
+  gen_helper_sve_ld3hh_le_r_mte,
+  gen_helper_sve_ld4hh_le_r_mte },
+{ gen_helper_sve_ld1hsu_le_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hdu_le_r_mte, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1hds_le_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1hss_le_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1ss_le_r_mte,
+  gen_helper_sve_ld2ss_le_r_mte,
+  gen_helper_sve_ld3ss_le_r_mte,
+  gen_helper_sve_ld4ss_le_r_mte },
+{ gen_helper_sve_ld1sdu_le_r_mte, NULL, NULL, NULL },
+
+{ gen_helper_sve_ld1bds_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bss_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bhs_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1dd_le_r_mte,
+  gen_helper_sve_ld2dd_le_r_mte,
+  gen_helper_sve_ld3dd_le_r_mte,
+  gen_helper_sve_ld4dd_le_r_mte } },
+
+  /* mte active, big-endian */
+  { { gen_helper_sve_ld1bb_r_mte,
+  gen_helper_sve_ld2bb_r_mte,
+  gen_helper_sve_ld3bb_r_mte,
+  gen_helper_sve_ld4bb_r_mte },
+{ gen_helper_sve_ld1bhu_r_mte, NULL, NULL, NULL },
+{ gen_helper_sve_ld1bsu_r_mte, NULL, NULL, NULL },
+{ 

[PATCH v5 77/81] target/arm: Fix decode for VDOT (indexed)

2021-04-16 Thread Richard Henderson
We were extracting the M register twice, once incorrectly
as M:vm and once correctly as rm.  Remove the incorrect
name and the incorrect decode.

Signed-off-by: Richard Henderson 
---
 target/arm/neon-shared.decode   |  4 +-
 target/arm/translate-neon.c.inc | 90 ++---
 2 files changed, 40 insertions(+), 54 deletions(-)

diff --git a/target/arm/neon-shared.decode b/target/arm/neon-shared.decode
index ca0c699072..facb621450 100644
--- a/target/arm/neon-shared.decode
+++ b/target/arm/neon-shared.decode
@@ -61,8 +61,8 @@ VCMLA_scalar    1110 0 . rot:2   1000 . q:1 index:1 0 vm:4 \
 VCMLA_scalar    1110 1 . rot:2   1000 . q:1 . 0  \
vm=%vm_dp vn=%vn_dp vd=%vd_dp size=2 index=0
 
-VDOT_scalar 1110 0 . 10   1101 . q:1 index:1 u:1 rm:4 \
-   vm=%vm_dp vn=%vn_dp vd=%vd_dp
+VDOT_scalar 1110 0 . 10   1101 . q:1 index:1 u:1 vm:4 \
+   vn=%vn_dp vd=%vd_dp
 
 %vfml_scalar_q0_rm 0:3 5:1
 %vfml_scalar_q1_index 5:1 3:1
diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index c1fbe21ae6..d9901c0153 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -142,6 +142,36 @@ static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 }
 }
 
+static bool do_neon_ddda(DisasContext *s, int q, int vd, int vn, int vm,
+ int data, gen_helper_gvec_4 *fn_gvec)
+{
+/* UNDEF accesses to D16-D31 if they don't exist. */
+if (((vd | vn | vm) & 0x10) && !dc_isar_feature(aa32_simd_r32, s)) {
+return false;
+}
+
+/*
+ * UNDEF accesses to odd registers for each bit of Q.
+ * Q will be 0b111 for all Q-reg instructions, otherwise
+ * when we have mixed Q- and D-reg inputs.
+ */
+if (((vd & 1) * 4 | (vn & 1) * 2 | (vm & 1)) & q) {
+return false;
+}
+
+if (!vfp_access_check(s)) {
+return true;
+}
+
+int opr_sz = q ? 16 : 8;
+tcg_gen_gvec_4_ool(vfp_reg_offset(1, vd),
+   vfp_reg_offset(1, vn),
+   vfp_reg_offset(1, vm),
+   vfp_reg_offset(1, vd),
+   opr_sz, opr_sz, data, fn_gvec);
+return true;
+}
+
 static bool do_neon_ddda_fpst(DisasContext *s, int q, int vd, int vn, int vm,
   int data, ARMFPStatusFlavour fp_flavor,
   gen_helper_gvec_4_ptr *fn_gvec_ptr)
@@ -232,35 +262,13 @@ static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
 
 static bool trans_VDOT(DisasContext *s, arg_VDOT *a)
 {
-int opr_sz;
-gen_helper_gvec_4 *fn_gvec;
-
 if (!dc_isar_feature(aa32_dp, s)) {
 return false;
 }
-
-/* UNDEF accesses to D16-D31 if they don't exist. */
-if (!dc_isar_feature(aa32_simd_r32, s) &&
-((a->vd | a->vn | a->vm) & 0x10)) {
-return false;
-}
-
-if ((a->vn | a->vm | a->vd) & a->q) {
-return false;
-}
-
-if (!vfp_access_check(s)) {
-return true;
-}
-
-opr_sz = (1 + a->q) * 8;
-fn_gvec = a->u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b;
-tcg_gen_gvec_4_ool(vfp_reg_offset(1, a->vd),
-   vfp_reg_offset(1, a->vn),
-   vfp_reg_offset(1, a->vm),
-   vfp_reg_offset(1, a->vd),
-   opr_sz, opr_sz, 0, fn_gvec);
-return true;
+return do_neon_ddda(s, a->q * 7, a->vd, a->vn, a->vm, 0,
+a->u
+? gen_helper_gvec_udot_b
+: gen_helper_gvec_sdot_b);
 }
 
 static bool trans_VFML(DisasContext *s, arg_VFML *a)
@@ -314,35 +322,13 @@ static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
 
 static bool trans_VDOT_scalar(DisasContext *s, arg_VDOT_scalar *a)
 {
-gen_helper_gvec_4 *fn_gvec;
-int opr_sz;
-
 if (!dc_isar_feature(aa32_dp, s)) {
 return false;
 }
-
-/* UNDEF accesses to D16-D31 if they don't exist. */
-if (!dc_isar_feature(aa32_simd_r32, s) &&
-((a->vd | a->vn) & 0x10)) {
-return false;
-}
-
-if ((a->vd | a->vn) & a->q) {
-return false;
-}
-
-if (!vfp_access_check(s)) {
-return true;
-}
-
-fn_gvec = a->u ? gen_helper_gvec_udot_idx_b : gen_helper_gvec_sdot_idx_b;
-opr_sz = (1 + a->q) * 8;
-tcg_gen_gvec_4_ool(vfp_reg_offset(1, a->vd),
-   vfp_reg_offset(1, a->vn),
-   vfp_reg_offset(1, a->rm),
-   vfp_reg_offset(1, a->vd),
-   opr_sz, opr_sz, a->index, fn_gvec);
-return true;
+return do_neon_ddda(s, a->q * 6, a->vd, a->vn, a->vm, a->index,
+a->u
+? gen_helper_gvec_udot_idx_b
+: gen_helper_gvec_sdot_idx_b);
 }
 
 static bool trans_VFML_scalar(DisasContext *s, arg_VFML_scalar *a)
-- 
2.25.1




[PATCH v5 46/81] target/arm: Implement SVE2 FMMLA

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200422165503.13511-1-stepl...@quicinc.com>
[rth: Fix indexing in helpers, expand macro to straight functions.]
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 10 ++
 target/arm/helper-sve.h|  3 ++
 target/arm/sve.decode  |  4 +++
 target/arm/sve_helper.c| 74 ++
 target/arm/translate-sve.c | 34 ++
 5 files changed, 125 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index ae787fac8a..e44bb8973a 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4246,6 +4246,16 @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
 }
 
+static inline bool isar_feature_aa64_sve2_f32mm(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F32MM) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_f64mm(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, F64MM) != 0;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 28b8f00201..7e99dcd119 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2662,3 +2662,6 @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index c3958bed6a..cb2ee86228 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1389,6 +1389,10 @@ UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  @rda_rn_rm
 CMLA_   01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5  ra=%reg_movprfx
 SQRDCMLAH_  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
 
+### SVE2 floating point matrix multiply accumulate
+
+FMMLA   01100100 .. 1 . 111001 . .  @rda_rn_rm
+
 ### SVE2 Memory Gather Load Group
 
 # SVE2 64-bit gather non-temporal load
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index c77003217e..f285c90b70 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -7232,3 +7232,77 @@ void HELPER(sve2_xar_s)(void *vd, void *vn, void *vm, uint32_t desc)
 d[i] = ror32(n[i] ^ m[i], shr);
 }
 }
+
+void HELPER(fmmla_s)(void *vd, void *vn, void *vm, void *va,
+ void *status, uint32_t desc)
+{
+intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float32) * 4);
+
+for (s = 0; s < opr_sz; ++s) {
+float32 *n = vn + s * sizeof(float32) * 4;
+float32 *m = vm + s * sizeof(float32) * 4;
+float32 *a = va + s * sizeof(float32) * 4;
+float32 *d = vd + s * sizeof(float32) * 4;
+float32 n00 = n[H4(0)], n01 = n[H4(1)];
+float32 n10 = n[H4(2)], n11 = n[H4(3)];
+float32 m00 = m[H4(0)], m01 = m[H4(1)];
+float32 m10 = m[H4(2)], m11 = m[H4(3)];
+float32 p0, p1;
+
+/* i = 0, j = 0 */
+p0 = float32_mul(n00, m00, status);
+p1 = float32_mul(n01, m01, status);
+d[H4(0)] = float32_add(a[H4(0)], float32_add(p0, p1, status), status);
+
+/* i = 0, j = 1 */
+p0 = float32_mul(n00, m10, status);
+p1 = float32_mul(n01, m11, status);
+d[H4(1)] = float32_add(a[H4(1)], float32_add(p0, p1, status), status);
+
+/* i = 1, j = 0 */
+p0 = float32_mul(n10, m00, status);
+p1 = float32_mul(n11, m01, status);
+d[H4(2)] = float32_add(a[H4(2)], float32_add(p0, p1, status), status);
+
+/* i = 1, j = 1 */
+p0 = float32_mul(n10, m10, status);
+p1 = float32_mul(n11, m11, status);
+d[H4(3)] = float32_add(a[H4(3)], float32_add(p0, p1, status), status);
+}
+}
+
+void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
+ void *status, uint32_t desc)
+{
+intptr_t s, opr_sz = simd_oprsz(desc) / (sizeof(float64) * 4);
+
+for (s = 0; s < opr_sz; ++s) {
+float64 *n = vn + s * sizeof(float64) * 4;
+float64 *m = vm + s * sizeof(float64) * 4;
+float64 *a = va + s * sizeof(float64) * 4;
+float64 *d = vd + s * sizeof(float64) * 4;
+float64 n00 = n[0], n01 = n[1], n10 = n[2], n11 = n[3];
+float64 m00 = m[0], m01 = m[1], m10 = m[2], m11 = m[3];
+float64 p0, p1;
+
+/* i = 0, j = 0 */
+p0 = float64_mul(n00, m00, status);
+p1 = float64_mul(n01, m01, status);
+d[0] = float64_add(a[0], float64_add(p0, p1, status), status);
+
+/* i = 0, j = 1 */
+p0 = float64_mul(n00, m10, status);
+p1 = float64_mul(n01, m11, 

[PATCH v5 73/81] target/arm: Implement SVE2 fp multiply-add long

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Implements both the vector and indexed forms of FMLALB, FMLALT, FMLSLB and FMLSLT.

Signed-off-by: Stephen Long 
Message-Id: <20200504171240.11220-1-stepl...@quicinc.com>
[rth: Rearrange to use float16_to_float32_by_bits.]
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  5 +++
 target/arm/sve.decode  | 14 +++
 target/arm/translate-sve.c | 75 ++
 target/arm/vec_helper.c| 51 ++
 4 files changed, 145 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 86f938c938..e8b16a401f 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -987,6 +987,11 @@ DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_6(sve2_fmlal_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fmlal_zzxw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index cfdee8955b..63870b7539 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -132,6 +132,8 @@
 _esz ra=%reg_movprfx
 
 # Four operand with unused vector element size
+@rda_rn_rm_e0    ... rm:5 ... ... rn:5 rd:5 \
+_esz esz=0 ra=%reg_movprfx
 @rdn_ra_rm_e0    ... rm:5 ... ... ra:5 rd:5 \
 _esz esz=0 rn=%reg_movprfx
 
@@ -1559,3 +1561,15 @@ FCVTLT_sd   01100100 11 0010 11 101 ... . .  @rd_pg_rn_e0
 
 ### SVE2 floating-point convert to integer
 FLOGB   01100101 00 011 esz:2 0101 pg:3 rn:5 rd:5  _esz
+
+### SVE2 floating-point multiply-add long (vectors)
+FMLALB_zzzw 01100100 10 1 . 10 0 00 0 . .  @rda_rn_rm_e0
+FMLALT_zzzw 01100100 10 1 . 10 0 00 1 . .  @rda_rn_rm_e0
+FMLSLB_zzzw 01100100 10 1 . 10 1 00 0 . .  @rda_rn_rm_e0
+FMLSLT_zzzw 01100100 10 1 . 10 1 00 1 . .  @rda_rn_rm_e0
+
+### SVE2 floating-point multiply-add long (indexed)
+FMLALB_zzxw 01100100 10 1 . 0100.0 . . @rrxr_3a esz=2
+FMLALT_zzxw 01100100 10 1 . 0100.1 . . @rrxr_3a esz=2
+FMLSLB_zzxw 01100100 10 1 . 0110.0 . . @rrxr_3a esz=2
+FMLSLT_zzxw 01100100 10 1 . 0110.1 . . @rrxr_3a esz=2
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 26377b1227..ae628968da 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8468,3 +8468,78 @@ static bool trans_FLOGB(DisasContext *s, arg_rpr_esz *a)
 }
 return true;
 }
+
+static bool do_FMLAL_zzzw(DisasContext *s, arg__esz *a, bool sub, bool sel)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   cpu_env, vsz, vsz, (sel << 1) | sub,
+   gen_helper_sve2_fmlal_zzzw_s);
+}
+return true;
+}
+
+static bool trans_FMLALB_zzzw(DisasContext *s, arg__esz *a)
+{
+return do_FMLAL_zzzw(s, a, false, false);
+}
+
+static bool trans_FMLALT_zzzw(DisasContext *s, arg__esz *a)
+{
+return do_FMLAL_zzzw(s, a, false, true);
+}
+
+static bool trans_FMLSLB_zzzw(DisasContext *s, arg__esz *a)
+{
+return do_FMLAL_zzzw(s, a, true, false);
+}
+
+static bool trans_FMLSLT_zzzw(DisasContext *s, arg__esz *a)
+{
+return do_FMLAL_zzzw(s, a, true, true);
+}
+
+static bool do_FMLAL_zzxw(DisasContext *s, arg_rrxr_esz *a, bool sub, bool sel)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   cpu_env, vsz, vsz,
+   (a->index << 2) | (sel << 1) | sub,
+   gen_helper_sve2_fmlal_zzxw_s);
+}
+return true;
+}
+
+static bool trans_FMLALB_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+return do_FMLAL_zzxw(s, a, false, false);
+}
+
+static bool trans_FMLALT_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+return do_FMLAL_zzxw(s, a, false, true);
+}
+
+static bool trans_FMLSLB_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+return do_FMLAL_zzxw(s, a, true, false);
+}
+
+static bool trans_FMLSLT_zzxw(DisasContext *s, arg_rrxr_esz *a)
+{
+return do_FMLAL_zzxw(s, a, true, true);
+}

[PATCH v5 56/81] target/arm: Implement SVE2 saturating multiply (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 +
 target/arm/sve.decode  | 12 
 target/arm/sve_helper.c| 20 
 target/arm/translate-sve.c | 19 +++
 4 files changed, 52 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 08398800bd..0be0d90bee 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2688,3 +2688,8 @@ DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_sqdmlsl_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 8d2709d3cc..a3b9fb95f9 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -255,6 +255,12 @@
 @rrx_2   .. . index:2 rm:3 .. rn:5 rd:5  _esz
 @rrx_1   .. . index:1 rm:4 .. rn:5 rd:5  _esz
 
+# Two registers and a scalar by N-bit index, alternate
+@rrx_3a  .. . .. rm:3 .. rn:5 rd:5 \
+_esz index=%index3_19_11
+@rrx_2a  .. . .  rm:4 .. rn:5 rd:5 \
+_esz index=%index2_20_11
+
 # Three registers and a scalar by N-bit index
 @rrxr_3  .. . ..  rm:3 .. rn:5 rd:5 \
 _esz ra=%reg_movprfx index=%index3_22_19
@@ -817,6 +823,12 @@ SQDMLSLB_zzxw_d 01000100 11 1 . 0011.0 . .   @rrxr_2a esz=3
 SQDMLSLT_zzxw_s 01000100 10 1 . 0011.1 . .   @rrxr_3a esz=2
 SQDMLSLT_zzxw_d 01000100 11 1 . 0011.1 . .   @rrxr_2a esz=3
 
+# SVE2 saturating multiply (indexed)
+SQDMULLB_zzx_s  01000100 10 1 . 1110.0 . .   @rrx_3a esz=2
+SQDMULLB_zzx_d  01000100 11 1 . 1110.0 . .   @rrx_2a esz=3
+SQDMULLT_zzx_s  01000100 10 1 . 1110.1 . .   @rrx_3a esz=2
+SQDMULLT_zzx_d  01000100 11 1 . 1110.1 . .   @rrx_2a esz=3
+
 # SVE2 integer multiply (indexed)
 MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
 MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index c43c38044b..e8a8425522 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1560,6 +1560,26 @@ DO_ZZXW(sve2_sqdmlsl_idx_d, int64_t, int32_t, , H1_4, DO_SQDMLSL_D)
 
 #undef DO_ZZXW
 
+#define DO_ZZX(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)\
+{ \
+intptr_t i, j, oprsz = simd_oprsz(desc);  \
+intptr_t sel = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN);   \
+intptr_t idx = extract32(desc, SIMD_DATA_SHIFT + 1, 3) * sizeof(TYPEN); \
+for (i = 0; i < oprsz; i += 16) { \
+TYPEW mm = *(TYPEN *)(vm + i + idx);  \
+for (j = 0; j < 16; j += sizeof(TYPEW)) { \
+TYPEW nn = *(TYPEN *)(vn + HN(i + j + sel));  \
+*(TYPEW *)(vd + HW(i + j)) = OP(nn, mm);  \
+} \
+} \
+}
+
+DO_ZZX(sve2_sqdmull_idx_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
+DO_ZZX(sve2_sqdmull_idx_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
+
+#undef DO_ZZX
+
 #define DO_BITPERM(NAME, TYPE, OP) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 4c304c0124..d3fcf2e4c1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3840,8 +3840,8 @@ DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
 
 #undef DO_RRXR
 
-static bool do_sve2_zzx_ool(DisasContext *s, arg_rrx_esz *a,
-gen_helper_gvec_3 *fn)
+static bool do_sve2_zzx_data(DisasContext *s, arg_rrx_esz *a,
+ gen_helper_gvec_3 *fn, int data)
 {
 if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
 return false;
@@ -3851,14 +3851,14 @@ static bool do_sve2_zzx_ool(DisasContext *s, arg_rrx_esz *a,
 tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
vec_full_reg_offset(s, a->rn),
vec_full_reg_offset(s, a->rm),
-   vsz, vsz, a->index, fn);
+   vsz, vsz, data, fn);
 }
 return true;
 }
 
 #define DO_SVE2_RRX(NAME, FUNC) \
 static bool 

[PATCH v5 75/81] target/arm: Split out do_neon_ddda_fpst

2021-04-16 Thread Richard Henderson
Split out a helper that can handle the 4-register
format, for use by helpers shared with SVE.

Signed-off-by: Richard Henderson 
---
 target/arm/translate-neon.c.inc | 98 +++--
 1 file changed, 43 insertions(+), 55 deletions(-)

diff --git a/target/arm/translate-neon.c.inc b/target/arm/translate-neon.c.inc
index 6f418bd8de..6385d13a7e 100644
--- a/target/arm/translate-neon.c.inc
+++ b/target/arm/translate-neon.c.inc
@@ -142,24 +142,21 @@ static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 }
 }
 
-static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
+static bool do_neon_ddda_fpst(DisasContext *s, int q, int vd, int vn, int vm,
+  int data, ARMFPStatusFlavour fp_flavor,
+  gen_helper_gvec_4_ptr *fn_gvec_ptr)
 {
-int opr_sz;
-TCGv_ptr fpst;
-gen_helper_gvec_4_ptr *fn_gvec_ptr;
-
-if (!dc_isar_feature(aa32_vcma, s)
-|| (a->size == MO_16 && !dc_isar_feature(aa32_fp16_arith, s))) {
-return false;
-}
-
 /* UNDEF accesses to D16-D31 if they don't exist. */
-if (!dc_isar_feature(aa32_simd_r32, s) &&
-((a->vd | a->vn | a->vm) & 0x10)) {
+if (((vd | vn | vm) & 0x10) && !dc_isar_feature(aa32_simd_r32, s)) {
 return false;
 }
 
-if ((a->vn | a->vm | a->vd) & a->q) {
+/*
+ * UNDEF accesses to odd registers for each bit of Q.
+ * Q will be 0b111 for all Q-reg instructions, otherwise
+ * when we have mixed Q- and D-reg inputs.
+ */
+if (((vd & 1) * 4 | (vn & 1) * 2 | (vm & 1)) & q) {
 return false;
 }
 
@@ -167,20 +164,34 @@ static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
 return true;
 }
 
-opr_sz = (1 + a->q) * 8;
-fpst = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
-fn_gvec_ptr = (a->size == MO_16) ?
-gen_helper_gvec_fcmlah : gen_helper_gvec_fcmlas;
-tcg_gen_gvec_4_ptr(vfp_reg_offset(1, a->vd),
-   vfp_reg_offset(1, a->vn),
-   vfp_reg_offset(1, a->vm),
-   vfp_reg_offset(1, a->vd),
-   fpst, opr_sz, opr_sz, a->rot,
-   fn_gvec_ptr);
+int opr_sz = q ? 16 : 8;
+TCGv_ptr fpst = fpstatus_ptr(fp_flavor);
+
+tcg_gen_gvec_4_ptr(vfp_reg_offset(1, vd),
+   vfp_reg_offset(1, vn),
+   vfp_reg_offset(1, vm),
+   vfp_reg_offset(1, vd),
+   fpst, opr_sz, opr_sz, data, fn_gvec_ptr);
 tcg_temp_free_ptr(fpst);
 return true;
 }
 
+static bool trans_VCMLA(DisasContext *s, arg_VCMLA *a)
+{
+if (!dc_isar_feature(aa32_vcma, s)) {
+return false;
+}
+if (a->size == MO_16) {
+if (!dc_isar_feature(aa32_fp16_arith, s)) {
+return false;
+}
+return do_neon_ddda_fpst(s, a->q * 7, a->vd, a->vn, a->vm, a->rot,
+ FPST_STD_F16, gen_helper_gvec_fcmlah);
+}
+return do_neon_ddda_fpst(s, a->q * 7, a->vd, a->vn, a->vm, a->rot,
+ FPST_STD, gen_helper_gvec_fcmlas);
+}
+
 static bool trans_VCADD(DisasContext *s, arg_VCADD *a)
 {
 int opr_sz;
@@ -285,43 +296,20 @@ static bool trans_VFML(DisasContext *s, arg_VFML *a)
 
 static bool trans_VCMLA_scalar(DisasContext *s, arg_VCMLA_scalar *a)
 {
-gen_helper_gvec_4_ptr *fn_gvec_ptr;
-int opr_sz;
-TCGv_ptr fpst;
+int data = (a->index << 2) | a->rot;
 
 if (!dc_isar_feature(aa32_vcma, s)) {
 return false;
 }
-if (a->size == MO_16 && !dc_isar_feature(aa32_fp16_arith, s)) {
-return false;
+if (a->size == MO_16) {
+if (!dc_isar_feature(aa32_fp16_arith, s)) {
+return false;
+}
+return do_neon_ddda_fpst(s, a->q * 6, a->vd, a->vn, a->vm, data,
+ FPST_STD_F16, gen_helper_gvec_fcmlah_idx);
 }
-
-/* UNDEF accesses to D16-D31 if they don't exist. */
-if (!dc_isar_feature(aa32_simd_r32, s) &&
-((a->vd | a->vn | a->vm) & 0x10)) {
-return false;
-}
-
-if ((a->vd | a->vn) & a->q) {
-return false;
-}
-
-if (!vfp_access_check(s)) {
-return true;
-}
-
-fn_gvec_ptr = (a->size == MO_16) ?
-gen_helper_gvec_fcmlah_idx : gen_helper_gvec_fcmlas_idx;
-opr_sz = (1 + a->q) * 8;
-fpst = fpstatus_ptr(a->size == MO_16 ? FPST_STD_F16 : FPST_STD);
-tcg_gen_gvec_4_ptr(vfp_reg_offset(1, a->vd),
-   vfp_reg_offset(1, a->vn),
-   vfp_reg_offset(1, a->vm),
-   vfp_reg_offset(1, a->vd),
-   fpst, opr_sz, opr_sz,
-   (a->index << 2) | a->rot, fn_gvec_ptr);
-tcg_temp_free_ptr(fpst);
-return true;
+return do_neon_ddda_fpst(s, a->q * 6, a->vd, a->vn, a->vm, data,
+ FPST_STD, gen_helper_gvec_fcmlas_idx);
 }
 
 static bool 

[PATCH v5 43/81] target/arm: Implement SVE2 XAR

2021-04-16 Thread Richard Henderson
In addition, use the same vector generator interface for AdvSIMD.
This fixes a bug in which the AdvSIMD insn failed to clear the
high bits of the SVE register.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   4 ++
 target/arm/helper.h|   2 +
 target/arm/translate-a64.h |   3 ++
 target/arm/sve.decode  |   4 ++
 target/arm/sve_helper.c|  39 ++
 target/arm/translate-a64.c |  25 ++---
 target/arm/translate-sve.c | 104 +
 target/arm/vec_helper.c|  12 +
 8 files changed, 172 insertions(+), 21 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 507a2fea8e..28b8f00201 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2558,6 +2558,10 @@ DEF_HELPER_FLAGS_5(sve2_histcnt_d, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_4(sve2_histseg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_xar_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_xar_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_xar_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/helper.h b/target/arm/helper.h
index 6bb0b0ddc0..23a7ec5638 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -953,6 +953,8 @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 #ifdef TARGET_AARCH64
 #include "helper-a64.h"
 #include "helper-sve.h"
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 868d355048..b4d391207f 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -122,5 +122,8 @@ bool disas_sve(DisasContext *, uint32_t);
 
 void gen_gvec_rax1(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
uint32_t rm_ofs, uint32_t opr_sz, uint32_t max_sz);
+void gen_gvec_xar(unsigned vece, uint32_t rd_ofs, uint32_t rn_ofs,
+  uint32_t rm_ofs, int64_t shift,
+  uint32_t opr_sz, uint32_t max_sz);
 
 #endif /* TARGET_ARM_TRANSLATE_A64_H */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 8f501a083c..7645587469 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -65,6 +65,7 @@
 _dbm rd rn dbm
rd rn rm imm
 _eszrd rn imm esz
+_esz   rd rn rm imm esz
 _eszrd rn rm esz
 _eszrd pg rn esz
 _s  rd pg rn s
@@ -384,6 +385,9 @@ ORR_zzz 0100 01 1 . 001 100 . . @rd_rn_rm_e0
 EOR_zzz 0100 10 1 . 001 100 . . @rd_rn_rm_e0
 BIC_zzz 0100 11 1 . 001 100 . . @rd_rn_rm_e0
 
+XAR 0100 .. 1 . 001 101 rm:5  rd:5   _esz \
+rn=%reg_movprfx esz=%tszimm16_esz imm=%tszimm16_shr
+
 # SVE2 bitwise ternary operations
 EOR30100 00 1 . 001 110 . . @rdn_ra_rm_e0
 BSL 0100 00 1 . 001 111 . . @rdn_ra_rm_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 8d002fdb65..c77003217e 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -7193,3 +7193,42 @@ void HELPER(sve2_histseg)(void *vd, void *vn, void *vm, uint32_t desc)
 *(uint64_t *)(vd + i + 8) = out1;
 }
 }
+
+void HELPER(sve2_xar_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+int shr = simd_data(desc);
+int shl = 8 - shr;
+uint64_t mask = dup_const(MO_8, 0xff >> shr);
+uint64_t *d = vd, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz; ++i) {
+uint64_t t = n[i] ^ m[i];
+d[i] = ((t >> shr) & mask) | ((t << shl) & ~mask);
+}
+}
+
+void HELPER(sve2_xar_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+int shr = simd_data(desc);
+int shl = 16 - shr;
+uint64_t mask = dup_const(MO_16, 0x >> shr);
+uint64_t *d = vd, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz; ++i) {
+uint64_t t = n[i] ^ m[i];
+d[i] = ((t >> shr) & mask) | ((t << shl) & ~mask);
+}
+}
+
+void HELPER(sve2_xar_s)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 4;
+int shr = simd_data(desc);
+uint32_t *d = vd, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = ror32(n[i] ^ m[i], shr);
+}
+}
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 95897e63af..cd8408e84c 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -14364,8 +14364,6 @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
   

[PATCH v5 71/81] target/arm: Implement 128-bit ZIP, UZP, TRN

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  3 ++
 target/arm/sve.decode  |  8 ++
 target/arm/sve_helper.c| 29 +--
 target/arm/translate-sve.c | 58 ++
 4 files changed, 90 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 96bd200e73..6e9479800d 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -689,16 +689,19 @@ DEF_HELPER_FLAGS_4(sve_zip_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_zip_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_zip_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_zip_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_zip_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_uzp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_uzp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_uzp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_uzp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uzp_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_trn_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_trn_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_trn_q, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve_compact_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_compact_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index df870ce23b..32e11301a5 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -590,6 +590,14 @@ UZP2_z  0101 .. 1 . 011 011 . . @rd_rn_rm
 TRN1_z  0101 .. 1 . 011 100 . . @rd_rn_rm
 TRN2_z  0101 .. 1 . 011 101 . . @rd_rn_rm
 
+# SVE2 permute vector segments
+ZIP1_q  0101 10 1 . 000 000 . . @rd_rn_rm_e0
+ZIP2_q  0101 10 1 . 000 001 . . @rd_rn_rm_e0
+UZP1_q  0101 10 1 . 000 010 . . @rd_rn_rm_e0
+UZP2_q  0101 10 1 . 000 011 . . @rd_rn_rm_e0
+TRN1_q  0101 10 1 . 000 110 . . @rd_rn_rm_e0
+TRN2_q  0101 10 1 . 000 111 . . @rd_rn_rm_e0
+
 ### SVE Permute - Predicated Group
 
 # SVE compress active elements
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 754301a3a6..d5701cb4e8 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3338,36 +3338,45 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)   \
 *(TYPE *)(vd + H(2 * i + 0)) = *(TYPE *)(vn + H(i)); \
 *(TYPE *)(vd + H(2 * i + sizeof(TYPE))) = *(TYPE *)(vm + H(i)); \
 }\
+if (sizeof(TYPE) == 16 && unlikely(oprsz & 16)) {\
+memset(vd + oprsz - 16, 0, 16);  \
+}\
 }
 
 DO_ZIP(sve_zip_b, uint8_t, H1)
 DO_ZIP(sve_zip_h, uint16_t, H1_2)
 DO_ZIP(sve_zip_s, uint32_t, H1_4)
 DO_ZIP(sve_zip_d, uint64_t, )
+DO_ZIP(sve2_zip_q, Int128, )
 
 #define DO_UZP(NAME, TYPE, H) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
 intptr_t oprsz = simd_oprsz(desc); \
-intptr_t oprsz_2 = oprsz / 2;  \
 intptr_t odd_ofs = simd_data(desc);\
-intptr_t i;\
+intptr_t i, p; \
 ARMVectorReg tmp_m;\
 if (unlikely((vm - vd) < (uintptr_t)oprsz)) {  \
 vm = memcpy(_m, vm, oprsz);\
 }  \
-for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {  \
-*(TYPE *)(vd + H(i)) = *(TYPE *)(vn + H(2 * i + odd_ofs)); \
-}  \
-for (i = 0; i < oprsz_2; i += sizeof(TYPE)) {  \
-*(TYPE *)(vd + H(oprsz_2 + i)) = *(TYPE *)(vm + H(2 * i + odd_ofs)); \
-}  \
+i = 0, p = odd_ofs;\
+do { 

[PATCH v5 62/81] target/arm: Implement SVE2 crypto destructive binary operations

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  5 +
 target/arm/sve.decode  |  7 +++
 target/arm/translate-sve.c | 38 ++
 3 files changed, 50 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 132ac5d8ec..904f5da290 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4246,6 +4246,11 @@ static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
 }
 
+static inline bool isar_feature_aa64_sve2_sm4(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SM4) != 0;
+}
+
 static inline bool isar_feature_aa64_sve_i8mm(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, I8MM) != 0;
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 6ab13b2f78..fb4d32691e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -118,6 +118,8 @@
 @pd_pn_pm    esz:2 .. rm:4 ... rn:4 . rd:4  _esz
 @rdn_rm  esz:2 .. .. rm:5 rd:5 \
 _esz rn=%reg_movprfx
+@rdn_rm_e0   .. .. .. rm:5 rd:5 \
+_esz rn=%reg_movprfx esz=0
 @rdn_sh_i8u  esz:2 .. .. . rd:5 \
 _esz rn=%reg_movprfx imm=%sh8_i8u
 @rdn_i8u esz:2 .. ... imm:8 rd:5 \
@@ -1515,3 +1517,8 @@ STNT1_zprz  1110010 .. 10 . 001 ... . . \
 # SVE2 crypto unary operations
 # AESMC and AESIMC
 AESMC   01000101 00 1011100 decrypt:1 0 rd:5
+
+# SVE2 crypto destructive binary operations
+AESE01000101 00 10001 0 11100 0 . .  @rdn_rm_e0
+AESD01000101 00 10001 0 11100 1 . .  @rdn_rm_e0
+SM4E01000101 00 10001 1 11100 0 . .  @rdn_rm_e0
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 4213411caa..681bbc6174 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8083,3 +8083,41 @@ static bool trans_AESMC(DisasContext *s, arg_AESMC *a)
 }
 return true;
 }
+
+static bool do_aese(DisasContext *s, arg_rrr_esz *a, bool decrypt)
+{
+if (!dc_isar_feature(aa64_sve2_aes, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_zzz(s, gen_helper_crypto_aese,
+ a->rd, a->rn, a->rm, decrypt);
+}
+return true;
+}
+
+static bool trans_AESE(DisasContext *s, arg_rrr_esz *a)
+{
+return do_aese(s, a, false);
+}
+
+static bool trans_AESD(DisasContext *s, arg_rrr_esz *a)
+{
+return do_aese(s, a, true);
+}
+
+static bool do_sm4(DisasContext *s, arg_rrr_esz *a, gen_helper_gvec_3 *fn)
+{
+if (!dc_isar_feature(aa64_sve2_sm4, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, 0);
+}
+return true;
+}
+
+static bool trans_SM4E(DisasContext *s, arg_rrr_esz *a)
+{
+return do_sm4(s, a, gen_helper_crypto_sm4e);
+}




[PATCH v5 74/81] target/arm: Implement aarch64 SUDOT, USDOT

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  5 +
 target/arm/translate-a64.c | 25 +
 2 files changed, 30 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index b43fd066ba..a0865e224c 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4206,6 +4206,11 @@ static inline bool isar_feature_aa64_rcpc_8_4(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, LRCPC) >= 2;
 }
 
+static inline bool isar_feature_aa64_i8mm(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, I8MM) != 0;
+}
+
 static inline bool isar_feature_aa64_ccidx(const ARMISARegisters *id)
 {
 return FIELD_EX64(id->id_aa64mmfr2, ID_AA64MMFR2, CCIDX) != 0;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f45b81e56d..0d45a44f51 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -12190,6 +12190,13 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 }
 feature = dc_isar_feature(aa64_dp, s);
 break;
+case 0x03: /* USDOT */
+if (size != MO_32) {
+unallocated_encoding(s);
+return;
+}
+feature = dc_isar_feature(aa64_i8mm, s);
+break;
 case 0x18: /* FCMLA, #0 */
 case 0x19: /* FCMLA, #90 */
 case 0x1a: /* FCMLA, #180 */
@@ -12230,6 +12237,10 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
  u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b);
 return;
 
+case 0x3: /* USDOT */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0, gen_helper_gvec_usdot_b);
+return;
+
 case 0x8: /* FCMLA, #0 */
 case 0x9: /* FCMLA, #90 */
 case 0xa: /* FCMLA, #180 */
@@ -13375,6 +13386,13 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 return;
 }
 break;
+case 0x0f: /* SUDOT, USDOT */
+if (is_scalar || (size & 1) || !dc_isar_feature(aa64_i8mm, s)) {
+unallocated_encoding(s);
+return;
+}
+size = MO_32;
+break;
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
 case 0x15: /* FCMLA #180 */
@@ -13489,6 +13507,13 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
  u ? gen_helper_gvec_udot_idx_b
  : gen_helper_gvec_sdot_idx_b);
 return;
+case 0x0f: /* SUDOT, USDOT */
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
+ extract32(insn, 23, 1)
+ ? gen_helper_gvec_usdot_idx_b
+ : gen_helper_gvec_sudot_idx_b);
+return;
+
 case 0x11: /* FCMLA #0 */
 case 0x13: /* FCMLA #90 */
 case 0x15: /* FCMLA #180 */




[PATCH v5 65/81] target/arm: Implement SVE2 FCVTNT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200428174332.17162-2-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 +
 target/arm/sve.decode  |  4 
 target/arm/sve_helper.c| 20 
 target/arm/translate-sve.c | 16 
 4 files changed, 45 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 41c08a963b..d6b064bdc9 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2703,3 +2703,8 @@ DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqdmull_idx_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_fcvtnt_sh, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_fcvtnt_ds, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 38aaf1b37e..afc53639ac 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1531,3 +1531,7 @@ SM4E01000101 00 10001 1 11100 0 . .  @rdn_rm_e0
 # SVE2 crypto constructive binary operations
 SM4EKEY 01000101 00 1 . 0 0 . .  @rd_rn_rm_e0
 RAX101000101 00 1 . 0 1 . .  @rd_rn_rm_e0
+
+### SVE2 floating-point convert precision odd elements
+FCVTNT_sh   01100100 10 0010 00 101 ... . .  @rd_pg_rn_e0
+FCVTNT_ds   01100100 11 0010 10 101 ... . .  @rd_pg_rn_e0
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 8dc04441aa..6164ae17cc 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -7448,3 +7448,23 @@ void HELPER(fmmla_d)(void *vd, void *vn, void *vm, void *va,
 d[3] = float64_add(a[3], float64_add(p0, p1, status), status);
 }
 }
+
+#define DO_FCVTNT(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vg, void *status, uint32_t desc)  \
+{ \
+intptr_t i = simd_oprsz(desc);\
+uint64_t *g = vg; \
+do {  \
+uint64_t pg = g[(i - 1) >> 6];\
+do {  \
+i -= sizeof(TYPEW);   \
+if (likely((pg >> (i & 63)) & 1)) {   \
+TYPEW nn = *(TYPEW *)(vn + HW(i));\
+*(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, status);  \
+} \
+} while (i & 63); \
+} while (i != 0); \
+}
+
+DO_FCVTNT(sve2_fcvtnt_sh, uint32_t, uint16_t, H1_4, H1_2, sve_f32_to_f16)
+DO_FCVTNT(sve2_fcvtnt_ds, uint64_t, uint32_t, H1_4, H1_2, float64_to_float32)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 63e79fafe5..df52736e3b 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8170,3 +8170,19 @@ static bool trans_RAX1(DisasContext *s, arg_rrr_esz *a)
 }
 return true;
 }
+
+static bool trans_FCVTNT_sh(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_sh);
+}
+
+static bool trans_FCVTNT_ds(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_zpz_ptr(s, a->rd, a->rn, a->pg, false, gen_helper_sve2_fcvtnt_ds);
+}
-- 
2.25.1
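For readers following the FCVTNT helper above, here is a hedged scalar sketch of the double-to-single "convert and narrow to top" semantics. The function name `ref_fcvtnt_ds` and the byte-per-element predicate array are illustrative simplifications (SVE predicates are really bit-per-byte), not QEMU's API.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of FCVTNT (double -> single): each active 64-bit
 * source element is narrowed to float and written into the odd 32-bit
 * lane of the destination; even lanes and inactive elements are left
 * untouched, matching the predicated store in DO_FCVTNT. */
static void ref_fcvtnt_ds(float *zd, const double *zn,
                          const unsigned char *pg, int nelem)
{
    for (int i = 0; i < nelem; i++) {
        if (pg[i]) {
            zd[2 * i + 1] = (float)zn[i];
        }
    }
}
```

A merging destination like this is why the helper takes `vd` as both source and destination operand.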




[PATCH v5 37/81] target/arm: Implement SVE2 complex integer multiply-add

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix do_sqrdmlah_d (laurent desnogues)
---
 target/arm/helper-sve.h| 18 
 target/arm/vec_internal.h  |  5 +
 target/arm/sve.decode  |  5 +
 target/arm/sve_helper.c| 42 ++
 target/arm/translate-sve.c | 32 +
 target/arm/vec_helper.c| 15 +++---
 6 files changed, 109 insertions(+), 8 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 457a421455..d154218452 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2601,3 +2601,21 @@ DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_cmla__b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_cmla__h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_cmla__s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_cmla__d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index 0102547a10..200fb55909 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -168,4 +168,9 @@ static inline int64_t do_suqrshl_d(int64_t src, int64_t shift,
 return do_uqrshl_d(src, shift, round, sat);
 }
 
+int8_t do_sqrdmlah_b(int8_t, int8_t, int8_t, bool, bool);
+int16_t do_sqrdmlah_h(int16_t, int16_t, int16_t, bool, bool, uint32_t *);
+int32_t do_sqrdmlah_s(int32_t, int32_t, int32_t, bool, bool, uint32_t *);
+int64_t do_sqrdmlah_d(int64_t, int64_t, int64_t, bool, bool);
+
 #endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index b28b50e05c..936977eacb 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1362,3 +1362,8 @@ SMLSLB_zzzw 01000100 .. 0 . 010 100 . .  @rda_rn_rm
 SMLSLT_zzzw 01000100 .. 0 . 010 101 . .  @rda_rn_rm
 UMLSLB_zzzw 01000100 .. 0 . 010 110 . .  @rda_rn_rm
 UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  @rda_rn_rm
+
+## SVE2 complex integer multiply-add
+
+CMLA_   01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5  ra=%reg_movprfx
+SQRDCMLAH_  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 8b86e7ecd6..572d41a26c 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1448,6 +1448,48 @@ DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, , H1_4,
 
 #undef DO_SQDMLAL
 
+#define DO_CMLA(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc) / sizeof(TYPE);   \
+int rot = simd_data(desc);  \
+int sel_a = rot & 1, sel_b = sel_a ^ 1; \
+bool sub_r = rot == 1 || rot == 2;  \
+bool sub_i = rot >= 2;  \
+TYPE *d = vd, *n = vn, *m = vm, *a = va;\
+for (i = 0; i < opr_sz; i += 2) {   \
+TYPE elt1_a = n[H(i + sel_a)];  \
+TYPE elt2_a = m[H(i + sel_a)];  \
+TYPE elt2_b = m[H(i + sel_b)];  \
+d[H(i)] = OP(elt1_a, elt2_a, a[H(i)], sub_r);   \
+d[H(i + 1)] = OP(elt1_a, elt2_b, a[H(i + 1)], sub_i);   \
+}   \
+}
+
+#define do_cmla(N, M, A, S) (A + (N * M) * (S ? -1 : 1))
+
+DO_CMLA(sve2_cmla__b, uint8_t, H1, do_cmla)
+DO_CMLA(sve2_cmla__h, uint16_t, H2, do_cmla)
+DO_CMLA(sve2_cmla__s, uint32_t, H4, do_cmla)
+DO_CMLA(sve2_cmla__d, uint64_t,   , do_cmla)
+
+#define DO_SQRDMLAH_B(N, M, A, S) \
+do_sqrdmlah_b(N, M, A, S, true)
+#define DO_SQRDMLAH_H(N, M, A, S) \
+({ uint32_t discard; do_sqrdmlah_h(N, M, A, S, true, ); })
+#define DO_SQRDMLAH_S(N, M, A, S) \
+({ uint32_t discard; do_sqrdmlah_s(N, M, A, S, true, ); })
+#define DO_SQRDMLAH_D(N, M, A, S) \
+do_sqrdmlah_d(N, M, A, S, true)
+
+DO_CMLA(sve2_sqrdcmlah__b, int8_t, H1, DO_SQRDMLAH_B)
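The rotation decoding in DO_CMLA above can be sketched on a single (real, imag) element pair. This is a hedged, illustrative model (the name `ref_cmla_pair` and the two-element array convention are mine, not QEMU's): `rot` encodes 0/90/180/270 degrees, `sel_a` picks which half of n feeds both products, and `sub_r`/`sub_i` choose add versus subtract into the accumulator.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of one complex element pair of SVE2 CMLA,
 * mirroring the sel_a/sub_r/sub_i decoding in DO_CMLA. Index 0 is the
 * real part, index 1 the imaginary part. */
static void ref_cmla_pair(int32_t d[2], const int32_t n[2],
                          const int32_t m[2], const int32_t a[2], int rot)
{
    int sel_a = rot & 1;
    int sel_b = sel_a ^ 1;
    int sub_r = (rot == 1 || rot == 2) ? -1 : 1;
    int sub_i = (rot >= 2) ? -1 : 1;

    d[0] = a[0] + n[sel_a] * m[sel_a] * sub_r;
    d[1] = a[1] + n[sel_a] * m[sel_b] * sub_i;
}
```

Note that only one half of n (selected by `sel_a`) participates in both products, which is what makes the four-rotation form sum to a full complex multiply across two instructions.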

[PATCH v5 54/81] target/arm: Implement SVE2 saturating multiply-add high (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 +
 target/arm/sve.decode  |  8 
 target/arm/sve_helper.c| 40 ++
 target/arm/translate-sve.c |  8 
 4 files changed, 70 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 7e99dcd119..fe67574741 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2665,3 +2665,17 @@ DEF_HELPER_FLAGS_5(sve2_sqrdcmlah__d, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_6(fmmla_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(fmmla_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 9bfaf737b7..1956d96ad5 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -791,6 +791,14 @@ MLS_zzxz_h  01000100 0. 1 . 11 . .   @rrxr_3 esz=1
 MLS_zzxz_s  01000100 10 1 . 11 . .   @rrxr_2 esz=2
 MLS_zzxz_d  01000100 11 1 . 11 . .   @rrxr_1 esz=3
 
+# SVE2 saturating multiply-add high (indexed)
+SQRDMLAH_zzxz_h 01000100 0. 1 . 000100 . .   @rrxr_3 esz=1
+SQRDMLAH_zzxz_s 01000100 10 1 . 000100 . .   @rrxr_2 esz=2
+SQRDMLAH_zzxz_d 01000100 11 1 . 000100 . .   @rrxr_1 esz=3
+SQRDMLSH_zzxz_h 01000100 0. 1 . 000101 . .   @rrxr_3 esz=1
+SQRDMLSH_zzxz_s 01000100 10 1 . 000101 . .   @rrxr_2 esz=2
+SQRDMLSH_zzxz_d 01000100 11 1 . 000101 . .   @rrxr_1 esz=3
+
 # SVE2 integer multiply (indexed)
 MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
 MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index f285c90b70..fc4a943029 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1487,9 +1487,49 @@ DO_CMLA(sve2_sqrdcmlah__h, int16_t, H2, DO_SQRDMLAH_H)
 DO_CMLA(sve2_sqrdcmlah__s, int32_t, H4, DO_SQRDMLAH_S)
 DO_CMLA(sve2_sqrdcmlah__d, int64_t,   , DO_SQRDMLAH_D)
 
+#undef DO_SQRDMLAH_B
+#undef DO_SQRDMLAH_H
+#undef DO_SQRDMLAH_S
+#undef DO_SQRDMLAH_D
 #undef do_cmla
 #undef DO_CMLA
 
+#define DO_ZZXZ(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
+{   \
+intptr_t oprsz = simd_oprsz(desc), segment = 16 / sizeof(TYPE); \
+intptr_t i, j, idx = simd_data(desc);   \
+TYPE *d = vd, *a = va, *n = vn, *m = (TYPE *)vm + H(idx);   \
+for (i = 0; i < oprsz / sizeof(TYPE); i += segment) {   \
+TYPE mm = m[i]; \
+for (j = 0; j < segment; j++) { \
+d[i + j] = OP(n[i + j], mm, a[i + j]);  \
+}   \
+}   \
+}
+
+#define DO_SQRDMLAH_H(N, M, A) \
+({ uint32_t discard; do_sqrdmlah_h(N, M, A, false, true, ); })
+#define DO_SQRDMLAH_S(N, M, A) \
+({ uint32_t discard; do_sqrdmlah_s(N, M, A, false, true, ); })
+#define DO_SQRDMLAH_D(N, M, A) do_sqrdmlah_d(N, M, A, false, true)
+
+DO_ZZXZ(sve2_sqrdmlah_idx_h, int16_t, H2, DO_SQRDMLAH_H)
+DO_ZZXZ(sve2_sqrdmlah_idx_s, int32_t, H4, DO_SQRDMLAH_S)
+DO_ZZXZ(sve2_sqrdmlah_idx_d, int64_t,   , DO_SQRDMLAH_D)
+
+#define DO_SQRDMLSH_H(N, M, A) \
+({ uint32_t discard; do_sqrdmlah_h(N, M, A, true, true, ); })
+#define DO_SQRDMLSH_S(N, M, A) \
+({ uint32_t discard; do_sqrdmlah_s(N, M, A, true, true, ); })
+#define DO_SQRDMLSH_D(N, M, A) do_sqrdmlah_d(N, M, A, true, true)
+
+DO_ZZXZ(sve2_sqrdmlsh_idx_h, int16_t, H2, DO_SQRDMLSH_H)
+DO_ZZXZ(sve2_sqrdmlsh_idx_s, int32_t, H4, DO_SQRDMLSH_S)
+DO_ZZXZ(sve2_sqrdmlsh_idx_d, int64_t,   , DO_SQRDMLSH_D)
+
+#undef DO_ZZXZ
+
 #define DO_BITPERM(NAME, TYPE, OP) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 25dadabe28..42d8582707 100644
--- a/target/arm/translate-sve.c
+++ 
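The DO_ZZXZ traversal above walks the vector in 128-bit segments and broadcasts one element of m (selected by the index) within each segment. A hedged sketch for 16-bit elements follows; a plain multiply-add stands in for the saturating `do_sqrdmlah_h` of the real helper, and the name `ref_zzxz_mla_h` is illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the DO_ZZXZ loop for 16-bit elements: within
 * each 128-bit segment (8 x int16), the single m element selected by
 * idx is multiplied against every n element and added to a. */
static void ref_zzxz_mla_h(int16_t *d, const int16_t *n, const int16_t *m,
                           const int16_t *a, int nelem, int idx)
{
    const int seg = 16 / (int)sizeof(int16_t);  /* elements per segment */

    for (int i = 0; i < nelem; i += seg) {
        int16_t mm = m[i + idx];                /* broadcast within segment */
        for (int j = 0; j < seg; j++) {
            d[i + j] = (int16_t)(a[i + j] + n[i + j] * mm);
        }
    }
}
```

The per-segment broadcast is why the helper offsets `vm` by `H(idx)` once and then reads `m[i]` at each segment base.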

[PATCH v5 60/81] target/arm: Implement SVE mixed sign dot product

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  2 ++
 target/arm/sve.decode  |  4 
 target/arm/translate-sve.c | 16 
 target/arm/vec_helper.c| 18 ++
 4 files changed, 40 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index e4c6458f98..86f938c938 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -612,6 +612,8 @@ DEF_HELPER_FLAGS_5(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_usdot_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(gvec_sdot_idx_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 05360e2608..73f1348313 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1481,6 +1481,10 @@ UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  @rda_rn_rm
 CMLA_   01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5  ra=%reg_movprfx
 SQRDCMLAH_  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
 
+## SVE mixed sign dot product
+
+USDOT_  01000100 .. 0 . 011 110 . .  @rda_rn_rm
+
 ### SVE2 floating point matrix multiply accumulate
 
 FMMLA   01100100 .. 1 . 111001 . .  @rda_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 1f07131cff..0da4a48199 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8056,3 +8056,19 @@ static bool trans_SQRDCMLAH_(DisasContext *s, arg_SQRDCMLAH_ *a)
 }
 return true;
 }
+
+static bool trans_USDOT_(DisasContext *s, arg_USDOT_ *a)
+{
+if (a->esz != 2 || !dc_isar_feature(aa64_sve_i8mm, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_4_ool(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vec_full_reg_offset(s, a->ra),
+   vsz, vsz, 0, gen_helper_gvec_usdot_b);
+}
+return true;
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 98b707f4f5..9b2a4d5b7e 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -579,6 +579,24 @@ void HELPER(gvec_udot_b)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
 clear_tail(d, opr_sz, simd_maxsz(desc));
 }
 
+void HELPER(gvec_usdot_b)(void *vd, void *vn, void *vm,
+  void *va, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int32_t *d = vd, *a = va;
+uint8_t *n = vn;
+int8_t *m = vm;
+
+for (i = 0; i < opr_sz / 4; ++i) {
+d[i] = (a[i] +
+n[i * 4 + 0] * m[i * 4 + 0] +
+n[i * 4 + 1] * m[i * 4 + 1] +
+n[i * 4 + 2] * m[i * 4 + 2] +
+n[i * 4 + 3] * m[i * 4 + 3]);
+}
+clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
 void HELPER(gvec_sdot_h)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
 {
 intptr_t i, opr_sz = simd_oprsz(desc);
-- 
2.25.1
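One 32-bit lane of the mixed-sign dot product implemented by `gvec_usdot_b` above can be modelled in isolation. This is a hedged sketch (the name `ref_usdot_lane` is illustrative): four unsigned bytes from n multiply four signed bytes from m, summed into a 32-bit accumulator.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of one USDOT lane: n is treated as unsigned and
 * m as signed, so the integer promotions give a signed product per
 * byte pair, accumulated into the 32-bit addend a. */
static int32_t ref_usdot_lane(const uint8_t n[4], const int8_t m[4], int32_t a)
{
    int32_t sum = a;
    for (int i = 0; i < 4; i++) {
        sum += n[i] * m[i];   /* both promote to int before multiplying */
    }
    return sum;
}
```

The asymmetry (one unsigned, one signed operand) is exactly what distinguishes USDOT from the existing SDOT/UDOT helpers.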




[PATCH v5 64/81] target/arm: Implement SVE2 TBL, TBX

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200428144352.9275-1-stepl...@quicinc.com>
[rth: rearrange the macros a little and rebase]
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 10 +
 target/arm/sve.decode  |  5 +++
 target/arm/sve_helper.c| 90 ++
 target/arm/translate-sve.c | 33 ++
 4 files changed, 119 insertions(+), 19 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 0be0d90bee..41c08a963b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -661,6 +661,16 @@ DEF_HELPER_FLAGS_4(sve_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_tbl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_tbl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_tbl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_tbl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_tbx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_tbx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_tbx_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_tbx_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(sve_sunpk_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_sunpk_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve_sunpk_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 7a2770cb0c..38aaf1b37e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -558,6 +558,11 @@ TBL 0101 .. 1 . 001100 . .  @rd_rn_rm
 # SVE unpack vector elements
 UNPK0101 esz:2 1100 u:1 h:1 001110 rn:5 rd:5
 
+# SVE2 Table Lookup (three sources)
+
+TBL_sve20101 .. 1 . 001010 . .  @rd_rn_rm
+TBX 0101 .. 1 . 001011 . .  @rd_rn_rm
+
 ### SVE Permute - Predicates Group
 
 # SVE permute predicate elements
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index e8a8425522..8dc04441aa 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2915,28 +2915,80 @@ void HELPER(sve_rev_d)(void *vd, void *vn, uint32_t desc)
 }
 }
 
-#define DO_TBL(NAME, TYPE, H) \
-void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
-{  \
-intptr_t i, opr_sz = simd_oprsz(desc); \
-uintptr_t elem = opr_sz / sizeof(TYPE);\
-TYPE *d = vd, *n = vn, *m = vm;\
-ARMVectorReg tmp;  \
-if (unlikely(vd == vn)) {  \
-n = memcpy(, vn, opr_sz);  \
-}  \
-for (i = 0; i < elem; i++) {   \
-TYPE j = m[H(i)];  \
-d[H(i)] = j < elem ? n[H(j)] : 0;  \
-}  \
+typedef void tb_impl_fn(void *, void *, void *, void *, uintptr_t, bool);
+
+static inline void do_tbl1(void *vd, void *vn, void *vm, uint32_t desc,
+   bool is_tbx, tb_impl_fn *fn)
+{
+ARMVectorReg scratch;
+uintptr_t oprsz = simd_oprsz(desc);
+
+if (unlikely(vd == vn)) {
+vn = memcpy(, vn, oprsz);
+}
+
+fn(vd, vn, NULL, vm, oprsz, is_tbx);
 }
 
-DO_TBL(sve_tbl_b, uint8_t, H1)
-DO_TBL(sve_tbl_h, uint16_t, H2)
-DO_TBL(sve_tbl_s, uint32_t, H4)
-DO_TBL(sve_tbl_d, uint64_t, )
+static inline void do_tbl2(void *vd, void *vn0, void *vn1, void *vm,
+   uint32_t desc, bool is_tbx, tb_impl_fn *fn)
+{
+ARMVectorReg scratch;
+uintptr_t oprsz = simd_oprsz(desc);
 
-#undef TBL
+if (unlikely(vd == vn0)) {
+vn0 = memcpy(, vn0, oprsz);
+if (vd == vn1) {
+vn1 = vn0;
+}
+} else if (unlikely(vd == vn1)) {
+vn1 = memcpy(, vn1, oprsz);
+}
+
+fn(vd, vn0, vn1, vm, oprsz, is_tbx);
+}
+
+#define DO_TB(SUFF, TYPE, H)\
+static inline void do_tb_##SUFF(void *vd, void *vt0, void *vt1, \
+void *vm, uintptr_t oprsz, bool is_tbx) \
+{   \
+TYPE *d = vd, *tbl0 = vt0, *tbl1 = vt1, *indexes = vm;  \
+uintptr_t i, nelem = oprsz / sizeof(TYPE);  \
+for (i = 0; i < nelem; ++i) {   \
+TYPE index = 
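The unified table-lookup body that `do_tbl1`/`do_tbl2` dispatch to can be sketched for byte elements as follows. This is a hedged, illustrative model (the name `ref_tb_b` and the two-table calling convention are mine): indexes select from the concatenation of the two tables, and an out-of-range index writes zero for TBL but leaves the destination element unchanged for TBX.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the shared TBL/TBX body for byte elements.
 * For the single-table forms, think of tbl1 as absent and indexes
 * restricted to the first table's range. */
static void ref_tb_b(uint8_t *d, const uint8_t *tbl0, const uint8_t *tbl1,
                     const uint8_t *indexes, int nelem, int is_tbx)
{
    for (int i = 0; i < nelem; i++) {
        unsigned j = indexes[i];
        if (j < (unsigned)nelem) {
            d[i] = tbl0[j];
        } else if (j < 2u * (unsigned)nelem) {
            d[i] = tbl1[j - nelem];
        } else if (!is_tbx) {
            d[i] = 0;                 /* TBL zeroes; TBX keeps d[i] */
        }
    }
}
```

The merging behaviour of TBX is why the real code must copy `vn` to scratch when it aliases `vd`, as the `do_tbl1`/`do_tbl2` wrappers above do.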

[PATCH v5 53/81] target/arm: Implement SVE2 integer multiply-add (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  8 
 target/arm/translate-sve.c | 23 +++
 2 files changed, 31 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 65cb0a2206..9bfaf737b7 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -783,6 +783,14 @@ SDOT_zzxw_d 01000100 11 1 . 00 . .   @rrxr_1 esz=3
 UDOT_zzxw_s 01000100 10 1 . 01 . .   @rrxr_2 esz=2
 UDOT_zzxw_d 01000100 11 1 . 01 . .   @rrxr_1 esz=3
 
+# SVE2 integer multiply-add (indexed)
+MLA_zzxz_h  01000100 0. 1 . 10 . .   @rrxr_3 esz=1
+MLA_zzxz_s  01000100 10 1 . 10 . .   @rrxr_2 esz=2
+MLA_zzxz_d  01000100 11 1 . 10 . .   @rrxr_1 esz=3
+MLS_zzxz_h  01000100 0. 1 . 11 . .   @rrxr_3 esz=1
+MLS_zzxz_s  01000100 10 1 . 11 . .   @rrxr_2 esz=2
+MLS_zzxz_d  01000100 11 1 . 11 . .   @rrxr_1 esz=3
+
 # SVE2 integer multiply (indexed)
 MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
 MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0de8445fb4..25dadabe28 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3866,6 +3866,29 @@ DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
 
 #undef DO_SVE2_RRX
 
+static bool do_sve2_zzxz_ool(DisasContext *s, arg_rrxr_esz *a,
+ gen_helper_gvec_4 *fn)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_zzxz_ool(s, a, fn);
+}
+
+#define DO_SVE2_RRXR(NAME, FUNC) \
+static bool NAME(DisasContext *s, arg_rrxr_esz *a)  \
+{ return do_sve2_zzxz_ool(s, a, FUNC); }
+
+DO_SVE2_RRXR(trans_MLA_zzxz_h, gen_helper_gvec_mla_idx_h)
+DO_SVE2_RRXR(trans_MLA_zzxz_s, gen_helper_gvec_mla_idx_s)
+DO_SVE2_RRXR(trans_MLA_zzxz_d, gen_helper_gvec_mla_idx_d)
+
+DO_SVE2_RRXR(trans_MLS_zzxz_h, gen_helper_gvec_mls_idx_h)
+DO_SVE2_RRXR(trans_MLS_zzxz_s, gen_helper_gvec_mls_idx_s)
+DO_SVE2_RRXR(trans_MLS_zzxz_d, gen_helper_gvec_mls_idx_d)
+
+#undef DO_SVE2_RRXR
+
 /*
  *** SVE Floating Point Multiply-Add Indexed Group
  */
-- 
2.25.1




[PATCH v5 36/81] target/arm: Implement SVE2 integer multiply-add long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 28 ++
 target/arm/sve.decode  | 11 ++
 target/arm/sve_helper.c| 18 +
 target/arm/translate-sve.c | 76 ++
 4 files changed, 133 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index d8f390617c..457a421455 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2573,3 +2573,31 @@ DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smlal_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umlal_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smlsl_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umlsl_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 8308c9238a..b28b50e05c 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1351,3 +1351,14 @@ SQDMLSLBT   01000100 .. 0 . 1 1 . .  @rda_rn_rm
 
 SQRDMLAH_   01000100 .. 0 . 01110 0 . .  @rda_rn_rm
 SQRDMLSH_   01000100 .. 0 . 01110 1 . .  @rda_rn_rm
+
+## SVE2 integer multiply-add long
+
+SMLALB_zzzw 01000100 .. 0 . 010 000 . .  @rda_rn_rm
+SMLALT_zzzw 01000100 .. 0 . 010 001 . .  @rda_rn_rm
+UMLALB_zzzw 01000100 .. 0 . 010 010 . .  @rda_rn_rm
+UMLALT_zzzw 01000100 .. 0 . 010 011 . .  @rda_rn_rm
+SMLSLB_zzzw 01000100 .. 0 . 010 100 . .  @rda_rn_rm
+SMLSLT_zzzw 01000100 .. 0 . 010 101 . .  @rda_rn_rm
+UMLSLB_zzzw 01000100 .. 0 . 010 110 . .  @rda_rn_rm
+UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  @rda_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 859091b7cf..8b86e7ecd6 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1308,6 +1308,24 @@ DO_ZZZW_ACC(sve2_uabal_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
 DO_ZZZW_ACC(sve2_uabal_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
 DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
 
+DO_ZZZW_ACC(sve2_smlal_zzzw_h, int16_t, int8_t, H1_2, H1, DO_MUL)
+DO_ZZZW_ACC(sve2_smlal_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
+DO_ZZZW_ACC(sve2_smlal_zzzw_d, int64_t, int32_t, , H1_4, DO_MUL)
+
+DO_ZZZW_ACC(sve2_umlal_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
+DO_ZZZW_ACC(sve2_umlal_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
+DO_ZZZW_ACC(sve2_umlal_zzzw_d, uint64_t, uint32_t, , H1_4, DO_MUL)
+
+#define DO_NMUL(N, M)  -(N * M)
+
+DO_ZZZW_ACC(sve2_smlsl_zzzw_h, int16_t, int8_t, H1_2, H1, DO_NMUL)
+DO_ZZZW_ACC(sve2_smlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2, DO_NMUL)
+DO_ZZZW_ACC(sve2_smlsl_zzzw_d, int64_t, int32_t, , H1_4, DO_NMUL)
+
+DO_ZZZW_ACC(sve2_umlsl_zzzw_h, uint16_t, uint8_t, H1_2, H1, DO_NMUL)
+DO_ZZZW_ACC(sve2_umlsl_zzzw_s, uint32_t, uint16_t, H1_4, H1_2, DO_NMUL)
+DO_ZZZW_ACC(sve2_umlsl_zzzw_d, uint64_t, uint32_t, , H1_4, DO_NMUL)
+
 #undef DO_ZZZW_ACC
 
 #define DO_XTNB(NAME, TYPE, OP) \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 4326b597e6..0fdfd1e9e0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7580,3 +7580,79 @@ static bool trans_SQRDMLSH_(DisasContext *s, arg__esz *a)
 };
 return do_sve2__ool(s, a, fns[a->esz], 0);
 }
+
+static bool do_smlal_zzzw(DisasContext *s, arg__esz *a, bool sel)
+{
+static gen_helper_gvec_4 * const fns[] = {
+NULL, gen_helper_sve2_smlal_zzzw_h,
+gen_helper_sve2_smlal_zzzw_s, gen_helper_sve2_smlal_zzzw_d,
+};
+return do_sve2__ool(s, a, fns[a->esz], sel);
+}
+
+static bool trans_SMLALB_zzzw(DisasContext *s, arg__esz *a)
+{
+
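One wide lane of the widening multiply-accumulate instantiated through DO_ZZZW_ACC above can be sketched in isolation. This is a hedged model (the name `ref_smlal_lane_s` is illustrative): `sel = 0` corresponds to the bottom (B, even) narrow elements and `sel = 1` to the top (T, odd) ones.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of one 32-bit lane of SMLALB/SMLALT: the selected
 * 16-bit elements of n and m are widened, multiplied, and accumulated
 * into a. Negating the product gives the SMLSL variants, as DO_NMUL
 * does in the helpers above. */
static int32_t ref_smlal_lane_s(const int16_t *n, const int16_t *m,
                                int32_t a, int lane, int sel)
{
    int32_t prod = (int32_t)n[2 * lane + sel] * m[2 * lane + sel];
    return a + prod;
}
```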

[PATCH v5 57/81] target/arm: Implement SVE2 signed saturating doubling multiply high

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h| 10 +
 target/arm/sve.decode  |  4 ++
 target/arm/translate-sve.c | 18 
 target/arm/vec_helper.c| 84 ++
 4 files changed, 116 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 72c5bf6aca..eb94b6b1e6 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -957,6 +957,16 @@ DEF_HELPER_FLAGS_5(neon_sqrdmulh_h, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(neon_sqrdmulh_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index a3b9fb95f9..407d3019d1 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1202,6 +1202,10 @@ SMULH_zzz   0100 .. 1 . 0110 10 . .  @rd_rn_rm
 UMULH_zzz   0100 .. 1 . 0110 11 . .  @rd_rn_rm
 PMUL_zzz0100 00 1 . 0110 01 . .  @rd_rn_rm_e0
 
+# SVE2 signed saturating doubling multiply high (unpredicated)
+SQDMULH_zzz 0100 .. 1 . 0111 00 . .  @rd_rn_rm
+SQRDMULH_zzz0100 .. 1 . 0111 01 . .  @rd_rn_rm
+
 ### SVE2 Integer - Predicated
 
 SADALP_zpzz 01000100 .. 000 100 101 ... . .  @rdm_pg_rn
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index d3fcf2e4c1..dd4de9e57f 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6419,6 +6419,24 @@ static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
 return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
 }
 
+static bool trans_SQDMULH_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+gen_helper_sve2_sqdmulh_b, gen_helper_sve2_sqdmulh_h,
+gen_helper_sve2_sqdmulh_s, gen_helper_sve2_sqdmulh_d,
+};
+return do_sve2_zzz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_SQRDMULH_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+gen_helper_sve2_sqrdmulh_b, gen_helper_sve2_sqrdmulh_h,
+gen_helper_sve2_sqrdmulh_s, gen_helper_sve2_sqrdmulh_d,
+};
+return do_sve2_zzz_ool(s, a, fns[a->esz]);
+}
+
 /*
  * SVE2 Integer - Predicated
  */
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index b19877e0d3..25061c15e1 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -81,6 +81,26 @@ void HELPER(sve2_sqrdmlsh_b)(void *vd, void *vn, void *vm,
 }
 }
 
+void HELPER(sve2_sqdmulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int8_t *d = vd, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = do_sqrdmlah_b(n[i], m[i], 0, false, false);
+}
+}
+
+void HELPER(sve2_sqrdmulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int8_t *d = vd, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = do_sqrdmlah_b(n[i], m[i], 0, false, true);
+}
+}
+
 /* Signed saturating rounding doubling multiply-accumulate high half, 16-bit */
 int16_t do_sqrdmlah_h(int16_t src1, int16_t src2, int16_t src3,
   bool neg, bool round, uint32_t *sat)
@@ -198,6 +218,28 @@ void HELPER(sve2_sqrdmlsh_h)(void *vd, void *vn, void *vm,
 }
 }
 
+void HELPER(sve2_sqdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int16_t *d = vd, *n = vn, *m = vm;
+uint32_t discard;
+
+for (i = 0; i < opr_sz / 2; ++i) {
+d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, false, );
+}
+}
+
+void HELPER(sve2_sqrdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int16_t *d = vd, *n = vn, *m = vm;
+uint32_t discard;
+
+for (i = 0; i < opr_sz / 2; ++i) {
+d[i] = do_sqrdmlah_h(n[i], m[i], 0, false, true, );
+}
+}
+
 /* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
 int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
   bool neg, bool round, uint32_t *sat)
@@ -309,6 +351,28 @@ void HELPER(sve2_sqrdmlsh_s)(void *vd, void *vn, void *vm,
 }
 }
 
+void 
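The saturating doubling multiply high that these helpers compute via `do_sqrdmlah_*` with a zero addend can be sketched for one 16-bit lane. This is a hedged model (the name `ref_sqdmulh_h` is illustrative, and it assumes arithmetic right shift of negative values, as on the usual targets): double the product, optionally round, take the high half, and saturate.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of a 16-bit SQDMULH (round = 0) / SQRDMULH
 * (round = 1) lane. Computing in 64 bits keeps the doubling exact;
 * only INT16_MIN * INT16_MIN can actually saturate upward. */
static int16_t ref_sqdmulh_h(int16_t n, int16_t m, int round)
{
    int64_t p = 2 * (int64_t)n * m;

    if (round) {
        p += 1 << 15;
    }
    p >>= 16;   /* assumes arithmetic right shift for negative p */
    if (p > INT16_MAX) {
        p = INT16_MAX;
    } else if (p < INT16_MIN) {
        p = INT16_MIN;
    }
    return (int16_t)p;
}
```

In Q15 terms this is fixed-point multiplication: 0.5 × 0.5 yields 0.25, and -1 × -1 saturates because +1 is not representable.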

[PATCH v5 58/81] target/arm: Implement SVE2 saturating multiply high (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper.h| 14 ++
 target/arm/sve.decode  |  8 
 target/arm/translate-sve.c |  8 
 target/arm/vec_helper.c| 88 ++
 4 files changed, 118 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index eb94b6b1e6..e7c463fff5 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -967,6 +967,20 @@ DEF_HELPER_FLAGS_4(sve2_sqrdmulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqrdmulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqrdmulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmulh_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqrdmulh_idx_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(gvec_xar_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 #ifdef TARGET_AARCH64
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 407d3019d1..35010d755f 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -829,6 +829,14 @@ SQDMULLB_zzx_d  01000100 11 1 . 1110.0 . .   @rrx_2a esz=3
 SQDMULLT_zzx_s  01000100 10 1 . 1110.1 . .   @rrx_3a esz=2
 SQDMULLT_zzx_d  01000100 11 1 . 1110.1 . .   @rrx_2a esz=3
 
+# SVE2 saturating multiply high (indexed)
+SQDMULH_zzx_h   01000100 0. 1 . 00 . .   @rrx_3 esz=1
+SQDMULH_zzx_s   01000100 10 1 . 00 . .   @rrx_2 esz=2
+SQDMULH_zzx_d   01000100 11 1 . 00 . .   @rrx_1 esz=3
+SQRDMULH_zzx_h  01000100 0. 1 . 01 . .   @rrx_3 esz=1
+SQRDMULH_zzx_s  01000100 10 1 . 01 . .   @rrx_2 esz=2
+SQRDMULH_zzx_d  01000100 11 1 . 01 . .   @rrx_1 esz=3
+
 # SVE2 integer multiply (indexed)
 MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
 MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index dd4de9e57f..b43bf939f5 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3864,6 +3864,14 @@ DO_SVE2_RRX(trans_MUL_zzx_h, gen_helper_gvec_mul_idx_h)
 DO_SVE2_RRX(trans_MUL_zzx_s, gen_helper_gvec_mul_idx_s)
 DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
 
+DO_SVE2_RRX(trans_SQDMULH_zzx_h, gen_helper_sve2_sqdmulh_idx_h)
+DO_SVE2_RRX(trans_SQDMULH_zzx_s, gen_helper_sve2_sqdmulh_idx_s)
+DO_SVE2_RRX(trans_SQDMULH_zzx_d, gen_helper_sve2_sqdmulh_idx_d)
+
+DO_SVE2_RRX(trans_SQRDMULH_zzx_h, gen_helper_sve2_sqrdmulh_idx_h)
+DO_SVE2_RRX(trans_SQRDMULH_zzx_s, gen_helper_sve2_sqrdmulh_idx_s)
+DO_SVE2_RRX(trans_SQRDMULH_zzx_d, gen_helper_sve2_sqrdmulh_idx_d)
+
 #undef DO_SVE2_RRX
 
 #define DO_SVE2_RRX_TB(NAME, FUNC, TOP) \
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 25061c15e1..8b7269d8e1 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -240,6 +240,36 @@ void HELPER(sve2_sqrdmulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
 }
 }
 
+void HELPER(sve2_sqdmulh_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, j, opr_sz = simd_oprsz(desc);
+int idx = simd_data(desc);
+int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
+uint32_t discard;
+
+for (i = 0; i < opr_sz / 2; i += 16 / 2) {
+int16_t mm = m[i];
+for (j = 0; j < 16 / 2; ++j) {
+d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, false, );
+}
+}
+}
+
+void HELPER(sve2_sqrdmulh_idx_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, j, opr_sz = simd_oprsz(desc);
+int idx = simd_data(desc);
+int16_t *d = vd, *n = vn, *m = (int16_t *)vm + H2(idx);
+uint32_t discard;
+
+for (i = 0; i < opr_sz / 2; i += 16 / 2) {
+int16_t mm = m[i];
+for (j = 0; j < 16 / 2; ++j) {
+d[i + j] = do_sqrdmlah_h(n[i + j], mm, 0, false, true, );
+}
+}
+}
+
 /* Signed saturating rounding doubling multiply-accumulate high half, 32-bit */
 int32_t do_sqrdmlah_s(int32_t src1, int32_t src2, int32_t src3,
   bool neg, bool round, uint32_t *sat)
@@ -373,6 +403,36 @@ void HELPER(sve2_sqrdmulh_s)(void *vd, void *vn, void *vm, uint32_t desc)
 }
 }
 
+void HELPER(sve2_sqdmulh_idx_s)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, j, opr_sz = simd_oprsz(desc);
+int idx = simd_data(desc);
+int32_t *d = vd, *n = vn, *m = (int32_t *)vm + H4(idx);
+uint32_t 

[PATCH v5 48/81] target/arm: Pass separate addend to {U, S}DOT helpers

2021-04-16 Thread Richard Henderson
For SVE, we potentially have a 4th argument coming from the
movprfx instruction.  Currently we do not optimize movprfx,
so the problem is not visible.

Signed-off-by: Richard Henderson 
---
v4: Fix double addition (zhiwei).
---
 target/arm/helper.h |  20 +++---
 target/arm/sve.decode   |   7 +-
 target/arm/translate-a64.c  |  15 +++-
 target/arm/translate-sve.c  |  13 ++--
 target/arm/vec_helper.c | 120 ++--
 target/arm/translate-neon.c.inc |  10 +--
 6 files changed, 109 insertions(+), 76 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 23a7ec5638..f4b092ee1c 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -608,15 +608,19 @@ DEF_HELPER_FLAGS_5(sve2_sqrdmlah_d, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_4(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_udot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_4(gvec_sdot_idx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_udot_idx_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_sdot_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
-DEF_HELPER_FLAGS_4(gvec_udot_idx_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sdot_idx_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_udot_idx_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_sdot_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(gvec_udot_idx_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(gvec_fcaddh, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 67b6466a1e..04ef38f148 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -756,12 +756,13 @@ UMIN_zzi00100101 .. 101 011 110  . @rdn_i8u
 MUL_zzi 00100101 .. 110 000 110  .  @rdn_i8s
 
 # SVE integer dot product (unpredicated)
-DOT_zzz 01000100 1 sz:1 0 rm:5 0 u:1 rn:5 rd:5  ra=%reg_movprfx
+DOT_01000100 1 sz:1 0 rm:5 0 u:1 rn:5 rd:5 \
+ra=%reg_movprfx
 
 # SVE integer dot product (indexed)
-DOT_zzx 01000100 101 index:2 rm:3 0 u:1 rn:5 rd:5 \
+DOT_zzxw01000100 101 index:2 rm:3 0 u:1 rn:5 rd:5 \
 sz=0 ra=%reg_movprfx
-DOT_zzx 01000100 111 index:1 rm:4 0 u:1 rn:5 rd:5 \
+DOT_zzxw01000100 111 index:1 rm:4 0 u:1 rn:5 rd:5 \
 sz=1 ra=%reg_movprfx
 
 # SVE floating-point complex add (predicated)
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index cd8408e84c..004ff8c019 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -698,6 +698,17 @@ static void gen_gvec_op3_qc(DisasContext *s, bool is_q, int rd, int rn,
 tcg_temp_free_ptr(qc_ptr);
 }
 
+/* Expand a 4-operand operation using an out-of-line helper.  */
+static void gen_gvec_op4_ool(DisasContext *s, bool is_q, int rd, int rn,
+ int rm, int ra, int data, gen_helper_gvec_4 *fn)
+{
+tcg_gen_gvec_4_ool(vec_full_reg_offset(s, rd),
+   vec_full_reg_offset(s, rn),
+   vec_full_reg_offset(s, rm),
+   vec_full_reg_offset(s, ra),
+   is_q ? 16 : 8, vec_full_reg_size(s), data, fn);
+}
+
 /* Set ZF and NF based on a 64 bit result. This is alas fiddlier
  * than the 32 bit equivalent.
  */
@@ -12198,7 +12209,7 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
 return;
 
 case 0x2: /* SDOT / UDOT */
-gen_gvec_op3_ool(s, is_q, rd, rn, rm, 0,
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, 0,
  u ? gen_helper_gvec_udot_b : gen_helper_gvec_sdot_b);
 return;
 
@@ -13457,7 +13468,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
 switch (16 * u + opcode) {
 case 0x0e: /* SDOT */
 case 0x1e: /* UDOT */
-gen_gvec_op3_ool(s, is_q, rd, rn, rm, index,
+gen_gvec_op4_ool(s, is_q, rd, rn, rm, rd, index,
  u ? gen_helper_gvec_udot_idx_b
  : 
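The point of the fourth operand is easiest to see in a scalar model of the byte-wide dot product: the accumulator input (the `ra` operand that movprfx can supply) is now distinct from the destination. This is a hedged Python sketch of the per-element semantics, not QEMU code; the real helpers operate on whole vectors.

```python
def sdot_b(d, n, m, a):
    """Signed byte dot product with a separate addend: for each 32-bit
    element e, sum four signed 8-bit products and add the addend a[e].
    With movprfx, a may come from a different register than d."""
    for e in range(len(d)):
        acc = a[e]
        for k in range(4):
            acc += n[4 * e + k] * m[4 * e + k]
        # wrap to int32, matching the helper's 32-bit accumulator
        d[e] = (acc + 2**31) % 2**32 - 2**31

n = [1, 2, 3, 4]          # signed bytes
m = [5, 6, 7, 8]
a = [100]                 # separate addend (previously implicitly d)
d = [0]
sdot_b(d, n, m, a)        # 1*5 + 2*6 + 3*7 + 4*8 = 70, plus the addend
```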

[PATCH v5 34/81] target/arm: Implement SVE2 saturating multiply-add long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 ++
 target/arm/sve.decode  | 14 ++
 target/arm/sve_helper.c| 30 +
 target/arm/translate-sve.c | 54 ++
 4 files changed, 112 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 11dc6870de..d8f390617c 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2559,3 +2559,17 @@ DEF_HELPER_FLAGS_5(sve2_bcax, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_bsl1n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_bsl2n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_nbsl, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlal_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqdmlsl_zzzw_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 47fca5e12d..52f615b39e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1332,3 +1332,17 @@ FMAXNMP 01100100 .. 010 10 0 100 ... . . @rdn_pg_rm
 FMINNMP 01100100 .. 010 10 1 100 ... . . @rdn_pg_rm
 FMAXP   01100100 .. 010 11 0 100 ... . . @rdn_pg_rm
 FMINP   01100100 .. 010 11 1 100 ... . . @rdn_pg_rm
+
+ SVE Integer Multiply-Add (unpredicated)
+
+## SVE2 saturating multiply-add long
+
+SQDMLALB_zzzw   01000100 .. 0 . 0110 00 . .  @rda_rn_rm
+SQDMLALT_zzzw   01000100 .. 0 . 0110 01 . .  @rda_rn_rm
+SQDMLSLB_zzzw   01000100 .. 0 . 0110 10 . .  @rda_rn_rm
+SQDMLSLT_zzzw   01000100 .. 0 . 0110 11 . .  @rda_rn_rm
+
+## SVE2 saturating multiply-add interleaved long
+
+SQDMLALBT   01000100 .. 0 . 1 0 . .  @rda_rn_rm
+SQDMLSLBT   01000100 .. 0 . 1 1 . .  @rda_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 010d8b260a..859091b7cf 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1400,6 +1400,36 @@ void HELPER(sve2_adcl_d)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
 }
 }
 
+#define DO_SQDMLAL(NAME, TYPEW, TYPEN, HW, HN, DMUL_OP, SUM_OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+int sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
+int sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPEN); \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {   \
+TYPEW nn = *(TYPEN *)(vn + HN(i + sel1));   \
+TYPEW mm = *(TYPEN *)(vm + HN(i + sel2));   \
+TYPEW aa = *(TYPEW *)(va + HW(i));  \
+*(TYPEW *)(vd + HW(i)) = SUM_OP(aa, DMUL_OP(nn, mm));   \
+}   \
+}
+
+DO_SQDMLAL(sve2_sqdmlal_zzzw_h, int16_t, int8_t, H1_2, H1,
+   do_sqdmull_h, DO_SQADD_H)
+DO_SQDMLAL(sve2_sqdmlal_zzzw_s, int32_t, int16_t, H1_4, H1_2,
+   do_sqdmull_s, DO_SQADD_S)
+DO_SQDMLAL(sve2_sqdmlal_zzzw_d, int64_t, int32_t, , H1_4,
+   do_sqdmull_d, do_sqadd_d)
+
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_h, int16_t, int8_t, H1_2, H1,
+   do_sqdmull_h, DO_SQSUB_H)
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_s, int32_t, int16_t, H1_4, H1_2,
+   do_sqdmull_s, DO_SQSUB_S)
+DO_SQDMLAL(sve2_sqdmlsl_zzzw_d, int64_t, int32_t, , H1_4,
+   do_sqdmull_d, do_sqsub_d)
+
+#undef DO_SQDMLAL
+
 #define DO_BITPERM(NAME, TYPE, OP) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index bdf1da8424..27f9cdb891 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7508,3 +7508,57 @@ DO_SVE2_ZPZZ_FP(FMAXNMP, fmaxnmp)
 DO_SVE2_ZPZZ_FP(FMINNMP, fminnmp)
 DO_SVE2_ZPZZ_FP(FMAXP, fmaxp)
 DO_SVE2_ZPZZ_FP(FMINP, fminp)
+
+/*
+ * SVE Integer Multiply-Add (unpredicated)
+ */
+
+static bool do_sqdmlal_zzzw(DisasContext *s, arg__esz *a,
+bool sel1, bool sel2)
+{
+static gen_helper_gvec_4 * const fns[] = {
+NULL,  
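Element-wise, SQDMLAL composes the two saturating steps named in the DO_SQDMLAL macro: a doubling widening multiply (the do_sqdmull step) and a saturating accumulate (the DO_SQADD step). A rough Python model for a single element with int8 inputs and an int16 accumulator — a sketch of the arithmetic, not QEMU code:

```python
def ssat(v, bits):
    """Signed saturation to a given width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, v))

def sqdmlal_elem(a, n, m, wide_bits=16):
    """One SQDMLAL element: saturating doubling widening multiply of the
    narrow inputs, then saturating add into the wide addend."""
    prod = ssat(2 * n * m, wide_bits)   # the do_sqdmull step
    return ssat(a + prod, wide_bits)    # the DO_SQADD step
```

Only n = m = -128 makes the doubling product itself saturate at this width; the accumulate step can saturate on its own.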

[PATCH v5 61/81] target/arm: Implement SVE2 crypto unary operations

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  6 ++
 target/arm/translate-sve.c | 11 +++
 2 files changed, 17 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 73f1348313..6ab13b2f78 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1509,3 +1509,9 @@ STNT1_zprz  1110010 .. 00 . 001 ... . . \
 # SVE2 32-bit scatter non-temporal store (vector plus scalar)
 STNT1_zprz  1110010 .. 10 . 001 ... . . \
 @rprr_scatter_store xs=0 esz=2 scale=0
+
+### SVE2 Crypto Extensions
+
+# SVE2 crypto unary operations
+# AESMC and AESIMC
+AESMC   01000101 00 1011100 decrypt:1 0 rd:5
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0da4a48199..4213411caa 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -8072,3 +8072,14 @@ static bool trans_USDOT_(DisasContext *s, arg_USDOT_ *a)
 }
 return true;
 }
+
+static bool trans_AESMC(DisasContext *s, arg_AESMC *a)
+{
+if (!dc_isar_feature(aa64_sve2_aes, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_zz(s, gen_helper_crypto_aesmc, a->rd, a->rd, a->decrypt);
+}
+return true;
+}
-- 
2.25.1




[PATCH v5 50/81] target/arm: Split out formats for 2 vectors + 1 index

2021-04-16 Thread Richard Henderson
Currently only used by FMUL, but will shortly be used more.

Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 04ef38f148..a504b55dad 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -67,6 +67,7 @@
 _eszrd rn imm esz
 _esz   rd rn rm imm esz
 _eszrd rn rm esz
+_eszrd rn rm index esz
 _eszrd pg rn esz
 _s  rd pg rn s
 _s rd pg rn rm s
@@ -245,6 +246,12 @@
 @rpri_scatter_store ... msz:2 ..imm:5 ... pg:3 rn:5 rd:5 \
 _scatter_store
 
+# Two registers and a scalar by N-bit index
+@rrx_3   .. . ..  rm:3 .. rn:5 rd:5 \
+_esz index=%index3_22_19
+@rrx_2   .. . index:2 rm:3 .. rn:5 rd:5  _esz
+@rrx_1   .. . index:1 rm:4 .. rn:5 rd:5  _esz
+
 ###
 # Instruction patterns.  Grouped according to the SVE encodingindex.xhtml.
 
@@ -792,10 +799,9 @@ FMLA_zzxz   01100100 111 index:1 rm:4 0 sub:1 rn:5 rd:5 \
 ### SVE FP Multiply Indexed Group
 
 # SVE floating-point multiply (indexed)
-FMUL_zzx01100100 0.1 .. rm:3 001000 rn:5 rd:5 \
-index=%index3_22_19 esz=1
-FMUL_zzx01100100 101 index:2 rm:3 001000 rn:5 rd:5  esz=2
-FMUL_zzx01100100 111 index:1 rm:4 001000 rn:5 rd:5  esz=3
+FMUL_zzx01100100 0. 1 . 001000 . .   @rrx_3 esz=1
+FMUL_zzx01100100 10 1 . 001000 . .   @rrx_2 esz=2
+FMUL_zzx01100100 11 1 . 001000 . .   @rrx_1 esz=3
 
 ### SVE FP Fast Reduction Group
 
-- 
2.25.1
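Assuming %index3_22_19 concatenates instruction bit 22 (high) with bits 20:19 — which the field's name suggests but this hunk does not show — the three @rrx formats decode the index roughly as follows. The helper names here are hypothetical, for illustration only:

```python
def extract(insn, pos, length):
    """Pull an unsigned bit-field out of a 32-bit instruction word."""
    return (insn >> pos) & ((1 << length) - 1)

def fmul_idx_index(insn, esz):
    """Index field for the three FMUL (indexed) encodings."""
    if esz == 1:                 # @rrx_3: 3-bit index, 3-bit rm
        return (extract(insn, 22, 1) << 2) | extract(insn, 19, 2)
    elif esz == 2:               # @rrx_2: 2-bit index, 3-bit rm
        return extract(insn, 19, 2)
    else:                        # @rrx_1: 1-bit index, 4-bit rm
        return extract(insn, 20, 1)
```

Narrower elements need more index bits (more elements per 128-bit segment), which is why the rm field shrinks as esz drops.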




[PATCH v5 33/81] target/arm: Implement SVE2 MATCH, NMATCH

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Reviewed-by: Richard Henderson 
Signed-off-by: Stephen Long 
Message-Id: <20200415145915.2859-1-stepl...@quicinc.com>
[rth: Expanded comment for do_match2]
Signed-off-by: Richard Henderson 
---
v2: Apply esz_mask to input pg to fix output flags.
---
 target/arm/helper-sve.h| 10 ++
 target/arm/sve.decode  |  5 +++
 target/arm/sve_helper.c| 64 ++
 target/arm/translate-sve.c | 22 +
 4 files changed, 101 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index df617e3351..11dc6870de 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2509,6 +2509,16 @@ DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
+   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
+   i32, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_b, TCG_CALL_NO_RWG,
+   i32, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_h, TCG_CALL_NO_RWG,
+   i32, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index bf673e2f16..47fca5e12d 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1320,6 +1320,11 @@ UQSHRNT 01000101 .. 1 . 00 1101 . .  @rd_rn_tszimm_shr
 UQRSHRNB01000101 .. 1 . 00 1110 . .  @rd_rn_tszimm_shr
 UQRSHRNT01000101 .. 1 . 00  . .  @rd_rn_tszimm_shr
 
+### SVE2 Character Match
+
+MATCH   01000101 .. 1 . 100 ... . 0  @pd_pg_rn_rm
+NMATCH  01000101 .. 1 . 100 ... . 1  @pd_pg_rn_rm
+
 ## SVE2 floating-point pairwise operations
 
 FADDP   01100100 .. 010 00 0 100 ... . . @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index b0598f9097..010d8b260a 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -6842,3 +6842,67 @@ void HELPER(sve2_nbsl)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
 d[i] = ~((n[i] & k[i]) | (m[i] & ~k[i]));
 }
 }
+
+/*
+ * Returns true if m0 or m1 contains the low uint8_t/uint16_t in n.
+ * See hasless(v,1) from
+ *   https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord
+ */
+static inline bool do_match2(uint64_t n, uint64_t m0, uint64_t m1, int esz)
+{
+int bits = 8 << esz;
+uint64_t ones = dup_const(esz, 1);
+uint64_t signs = ones << (bits - 1);
+uint64_t cmp0, cmp1;
+
+cmp1 = dup_const(esz, n);
+cmp0 = cmp1 ^ m0;
+cmp1 = cmp1 ^ m1;
+cmp0 = (cmp0 - ones) & ~cmp0;
+cmp1 = (cmp1 - ones) & ~cmp1;
+return (cmp0 | cmp1) & signs;
+}
+
+static inline uint32_t do_match(void *vd, void *vn, void *vm, void *vg,
+uint32_t desc, int esz, bool nmatch)
+{
+uint16_t esz_mask = pred_esz_masks[esz];
+intptr_t opr_sz = simd_oprsz(desc);
+uint32_t flags = PREDTEST_INIT;
+intptr_t i, j, k;
+
+for (i = 0; i < opr_sz; i += 16) {
+uint64_t m0 = *(uint64_t *)(vm + i);
+uint64_t m1 = *(uint64_t *)(vm + i + 8);
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)) & esz_mask;
+uint16_t out = 0;
+
+for (j = 0; j < 16; j += 8) {
+uint64_t n = *(uint64_t *)(vn + i + j);
+
+for (k = 0; k < 8; k += 1 << esz) {
+if (pg & (1 << (j + k))) {
+bool o = do_match2(n >> (k * 8), m0, m1, esz);
+out |= (o ^ nmatch) << (j + k);
+}
+}
+}
+*(uint16_t *)(vd + H1_2(i >> 3)) = out;
+flags = iter_predtest_fwd(out, pg, flags);
+}
+return flags;
+}
+
+#define DO_PPZZ_MATCH(NAME, ESZ, INV) \
+uint32_t HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc)  \
+{ \
+return do_match(vd, vn, vm, vg, desc, ESZ, INV);  \
+}
+
+DO_PPZZ_MATCH(sve2_match_ppzz_b, MO_8, false)
+DO_PPZZ_MATCH(sve2_match_ppzz_h, MO_16, false)
+
+DO_PPZZ_MATCH(sve2_nmatch_ppzz_b, MO_8, true)
+DO_PPZZ_MATCH(sve2_nmatch_ppzz_h, MO_16, true)
+
+#undef DO_PPZZ_MATCH
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ab290b9025..bdf1da8424 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7462,6 +7462,28 @@ static bool trans_UQRSHRNT(DisasContext *s, arg_rri_esz *a)
 return do_sve2_shr_narrow(s, a, ops);
 }
 
+static bool 
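The do_match2 comment above cites the haszero/hasless bit trick, which is exact (no false positives or negatives), so a straightforward element search makes a good cross-check. A Python reference model of the helper — a sketch, not QEMU code:

```python
MASK64 = (1 << 64) - 1

def dup_const(esz, x):
    """Replicate the low (8 << esz)-bit value x across a 64-bit word."""
    bits = 8 << esz
    x &= (1 << bits) - 1
    out = 0
    for i in range(0, 64, bits):
        out |= x << i
    return out

def do_match2(n, m0, m1, esz):
    """Mirror of the C helper: true if m0 or m1 contains the low
    uint8/uint16 element of n, via the haszero bit trick."""
    bits = 8 << esz
    ones = dup_const(esz, 1)
    signs = (ones << (bits - 1)) & MASK64
    cmp1 = dup_const(esz, n)
    cmp0 = cmp1 ^ m0
    cmp1 = cmp1 ^ m1
    cmp0 = ((cmp0 - ones) & MASK64) & (~cmp0 & MASK64)
    cmp1 = ((cmp1 - ones) & MASK64) & (~cmp1 & MASK64)
    return bool((cmp0 | cmp1) & signs)

def match_ref(n, m0, m1, esz):
    """Naive element-by-element reference."""
    bits = 8 << esz
    mask = (1 << bits) - 1
    elems = [(m >> i) & mask for m in (m0, m1) for i in range(0, 64, bits)]
    return (n & mask) in elems
```

The trick is exact because a byte can only produce a set sign bit after the subtract-and-invert step when it was zero; a borrow from a lower element can only occur when that lower element was itself zero, in which case the answer is already true.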

[PATCH v5 41/81] target/arm: Implement SVE2 RSUBHNB, RSUBHNT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

This completes the section 'SVE2 integer add/subtract narrow high part'.
Signed-off-by: Stephen Long 
Message-Id: <20200417162231.10374-5-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
v2: Fix round bit type (laurent desnogues)
---
 target/arm/helper-sve.h|  8 
 target/arm/sve.decode  |  2 ++
 target/arm/sve_helper.c| 10 ++
 target/arm/translate-sve.c |  2 ++
 4 files changed, 22 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 3642e7c820..98e6b57e38 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2533,6 +2533,14 @@ DEF_HELPER_FLAGS_4(sve2_subhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_subhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_subhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_rsubhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_rsubhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index c68bfcf6ed..388bf92acf 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1328,6 +1328,8 @@ RADDHNB 01000101 .. 1 . 011 010 . .  @rd_rn_rm
 RADDHNT 01000101 .. 1 . 011 011 . .  @rd_rn_rm
 SUBHNB  01000101 .. 1 . 011 100 . .  @rd_rn_rm
 SUBHNT  01000101 .. 1 . 011 101 . .  @rd_rn_rm
+RSUBHNB 01000101 .. 1 . 011 110 . .  @rd_rn_rm
+RSUBHNT 01000101 .. 1 . 011 111 . .  @rd_rn_rm
 
 ### SVE2 Character Match
 
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 0df70effe3..12acc4fb0b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2137,6 +2137,7 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
 #define DO_ADDHN(N, M, SH)  ((N + M) >> SH)
 #define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
 #define DO_SUBHN(N, M, SH)  ((N - M) >> SH)
+#define DO_RSUBHN(N, M, SH) ((N - M + ((__typeof(N))1 << (SH - 1))) >> SH)
 
 DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
 DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
@@ -2162,6 +2163,15 @@ DO_BINOPNT(sve2_subhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_SUBHN)
 DO_BINOPNT(sve2_subhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_SUBHN)
 DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_SUBHN)
 
+DO_BINOPNB(sve2_rsubhnb_h, uint16_t, uint8_t, 8, DO_RSUBHN)
+DO_BINOPNB(sve2_rsubhnb_s, uint32_t, uint16_t, 16, DO_RSUBHN)
+DO_BINOPNB(sve2_rsubhnb_d, uint64_t, uint32_t, 32, DO_RSUBHN)
+
+DO_BINOPNT(sve2_rsubhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RSUBHN)
+DO_BINOPNT(sve2_rsubhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RSUBHN)
+DO_BINOPNT(sve2_rsubhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RSUBHN)
+
+#undef DO_RSUBHN
 #undef DO_SUBHN
 #undef DO_RADDHN
 #undef DO_ADDHN
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 55303ba41d..49d7a45a50 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7479,6 +7479,8 @@ DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
 
 DO_SVE2_ZZZ_NARROW(SUBHNB, subhnb)
 DO_SVE2_ZZZ_NARROW(SUBHNT, subhnt)
+DO_SVE2_ZZZ_NARROW(RSUBHNB, rsubhnb)
+DO_SVE2_ZZZ_NARROW(RSUBHNT, rsubhnt)
 
 static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
gen_helper_gvec_flags_4 *fn)
-- 
2.25.1
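The only difference from SUBHN is DO_RSUBHN's rounding constant — half the weight of the narrowing shift. A small Python model of one element, computed modulo the wide width like the C macro once the result is truncated; a sketch, not QEMU code:

```python
def rsubhn(n, m, elem_bits):
    """Rounding subtract-narrow-high: (n - m + round_const) >> shift,
    as in DO_RSUBHN, with the result wrapped to the narrow width."""
    shift = elem_bits // 2
    round_const = 1 << (shift - 1)     # the bit DO_RSUBHN adds
    mask = (1 << elem_bits) - 1
    return ((n - m + round_const) & mask) >> shift
```

For example, rsubhn(0x0180, 0, 16) rounds the high half 1.5 up to 2, where the non-rounding SUBHN form would give 1.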




[PATCH v5 52/81] target/arm: Implement SVE2 integer multiply (indexed)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  7 +++
 target/arm/translate-sve.c | 30 ++
 2 files changed, 37 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 74ac72bdbd..65cb0a2206 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -775,12 +775,19 @@ MUL_zzi 00100101 .. 110 000 110  . @rdn_i8s
 DOT_01000100 1 sz:1 0 rm:5 0 u:1 rn:5 rd:5 \
 ra=%reg_movprfx
 
+ SVE Multiply - Indexed
+
 # SVE integer dot product (indexed)
 SDOT_zzxw_s 01000100 10 1 . 00 . .   @rrxr_2 esz=2
 SDOT_zzxw_d 01000100 11 1 . 00 . .   @rrxr_1 esz=3
 UDOT_zzxw_s 01000100 10 1 . 01 . .   @rrxr_2 esz=2
 UDOT_zzxw_d 01000100 11 1 . 01 . .   @rrxr_1 esz=3
 
+# SVE2 integer multiply (indexed)
+MUL_zzx_h   01000100 0. 1 . 10 . .   @rrx_3 esz=1
+MUL_zzx_s   01000100 10 1 . 10 . .   @rrx_2 esz=2
+MUL_zzx_d   01000100 11 1 . 10 . .   @rrx_1 esz=3
+
 # SVE floating-point complex add (predicated)
 FCADD   01100100 esz:2 0 rot:1 100 pg:3 rm:5 rd:5 \
 rn=%reg_movprfx
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 2eb21b28e1..0de8445fb4 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3813,6 +3813,10 @@ static bool trans_DOT_(DisasContext *s, arg_DOT_ *a)
 return true;
 }
 
+/*
+ * SVE Multiply - Indexed
+ */
+
 static bool do_zzxz_ool(DisasContext *s, arg_rrxr_esz *a,
 gen_helper_gvec_4 *fn)
 {
@@ -3836,6 +3840,32 @@ DO_RRXR(trans_UDOT_zzxw_d, gen_helper_gvec_udot_idx_h)
 
 #undef DO_RRXR
 
+static bool do_sve2_zzx_ool(DisasContext *s, arg_rrx_esz *a,
+gen_helper_gvec_3 *fn)
+{
+if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+unsigned vsz = vec_full_reg_size(s);
+tcg_gen_gvec_3_ool(vec_full_reg_offset(s, a->rd),
+   vec_full_reg_offset(s, a->rn),
+   vec_full_reg_offset(s, a->rm),
+   vsz, vsz, a->index, fn);
+}
+return true;
+}
+
+#define DO_SVE2_RRX(NAME, FUNC) \
+static bool NAME(DisasContext *s, arg_rrx_esz *a)  \
+{ return do_sve2_zzx_ool(s, a, FUNC); }
+
+DO_SVE2_RRX(trans_MUL_zzx_h, gen_helper_gvec_mul_idx_h)
+DO_SVE2_RRX(trans_MUL_zzx_s, gen_helper_gvec_mul_idx_s)
+DO_SVE2_RRX(trans_MUL_zzx_d, gen_helper_gvec_mul_idx_d)
+
+#undef DO_SVE2_RRX
+
 /*
  *** SVE Floating Point Multiply-Add Indexed Group
  */
-- 
2.25.1
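Semantically, the indexed multiply works per 128-bit segment: every element of zn in a segment is scaled by element `index` of the corresponding segment of zm (the index reaches the helper via the simd_data field that do_sve2_zzx_ool passes through). A hedged Python model over element lists, not QEMU code:

```python
def mul_zzx(zn, zm, index, esz_bytes, seg_bytes=16):
    """SVE2 MUL (indexed): within each 128-bit segment, multiply every
    element of zn by element `index` of the matching segment of zm,
    wrapping to the element width."""
    per_seg = seg_bytes // esz_bytes
    mask = (1 << (8 * esz_bytes)) - 1
    out = []
    for base in range(0, len(zn), per_seg):
        mm = zm[base + index]          # one multiplier per segment
        out.extend((zn[base + j] * mm) & mask for j in range(per_seg))
    return out

# two 128-bit segments of 32-bit elements, index = 1:
# segment 0 multiplies by zm[1], segment 1 by zm[5]
zn = [1, 2, 3, 4, 5, 6, 7, 8]
zm = [10, 20, 30, 40, 50, 60, 70, 80]
```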




[PATCH v5 32/81] target/arm: Implement SVE2 bitwise ternary operations

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   6 ++
 target/arm/sve.decode  |  12 +++
 target/arm/sve_helper.c|  50 +
 target/arm/translate-sve.c | 213 +
 4 files changed, 281 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 5bf9fdc7a3..df617e3351 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2543,3 +2543,9 @@ DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_eor3, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_bcax, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_bsl1n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_bsl2n, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_nbsl, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index f365907518..bf673e2f16 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -124,6 +124,10 @@
 @rda_rn_rm   esz:2 . rm:5 ... ... rn:5 rd:5 \
 _esz ra=%reg_movprfx
 
+# Four operand with unused vector element size
+@rdn_ra_rm_e0    ... rm:5 ... ... ra:5 rd:5 \
+_esz esz=0 rn=%reg_movprfx
+
 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm    ... rm:5  imm:2 rn:5 rd:5  
 
@@ -379,6 +383,14 @@ ORR_zzz 0100 01 1 . 001 100 . . @rd_rn_rm_e0
 EOR_zzz 0100 10 1 . 001 100 . . @rd_rn_rm_e0
 BIC_zzz 0100 11 1 . 001 100 . . @rd_rn_rm_e0
 
+# SVE2 bitwise ternary operations
+EOR30100 00 1 . 001 110 . . @rdn_ra_rm_e0
+BSL 0100 00 1 . 001 111 . . @rdn_ra_rm_e0
+BCAX0100 01 1 . 001 110 . . @rdn_ra_rm_e0
+BSL1N   0100 01 1 . 001 111 . . @rdn_ra_rm_e0
+BSL2N   0100 10 1 . 001 111 . . @rdn_ra_rm_e0
+NBSL0100 11 1 . 001 111 . . @rdn_ra_rm_e0
+
 ### SVE Index Generation Group
 
 # SVE index generation (immediate start, immediate increment)
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fb38f2c57e..b0598f9097 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -6792,3 +6792,53 @@ DO_ST1_ZPZ_D(dd_be, zd, MO_64)
 
 #undef DO_ST1_ZPZ_S
 #undef DO_ST1_ZPZ_D
+
+void HELPER(sve2_eor3)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = n[i] ^ m[i] ^ k[i];
+}
+}
+
+void HELPER(sve2_bcax)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = n[i] ^ (m[i] & ~k[i]);
+}
+}
+
+void HELPER(sve2_bsl1n)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = (~n[i] & k[i]) | (m[i] & ~k[i]);
+}
+}
+
+void HELPER(sve2_bsl2n)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = (n[i] & k[i]) | (~m[i] & ~k[i]);
+}
+}
+
+void HELPER(sve2_nbsl)(void *vd, void *vn, void *vm, void *vk, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc) / 8;
+uint64_t *d = vd, *n = vn, *m = vm, *k = vk;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = ~((n[i] & k[i]) | (m[i] & ~k[i]));
+}
+}
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 97e113ceec..ab290b9025 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -217,6 +217,17 @@ static void gen_gvec_fn_zzz(DisasContext *s, GVecGen3Fn *gvec_fn,
 vec_full_reg_offset(s, rm), vsz, vsz);
 }
 
+/* Invoke a vector expander on four Zregs.  */
+static void gen_gvec_fn_(DisasContext *s, GVecGen4Fn *gvec_fn,
+ int esz, int rd, int rn, int rm, int ra)
+{
+unsigned vsz = vec_full_reg_size(s);
+gvec_fn(esz, vec_full_reg_offset(s, rd),
+vec_full_reg_offset(s, rn),
+vec_full_reg_offset(s, rm),
+vec_full_reg_offset(s, ra), vsz, vsz);
+}
+
 /* Invoke a vector move on two Zregs.  */
 static bool do_mov_z(DisasContext *s, int rd, int rn)
 {
@@ -329,6 +340,208 @@ static bool 
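The five helpers above are easy to cross-check with a bit-level reference model in which k acts as the select mask (BSL picks n where k is set, m elsewhere) and the 1N/2N/N variants invert the first input, the second input, or the result. A Python sketch, not QEMU code:

```python
MASK64 = (1 << 64) - 1

def eor3(n, m, k):   return n ^ m ^ k                               # sve2_eor3
def bcax(n, m, k):   return n ^ (m & ~k & MASK64)                   # sve2_bcax
def bsl(n, m, k):    return (n & k) | (m & ~k & MASK64)             # bit select
def bsl1n(n, m, k):  return (~n & MASK64 & k) | (m & ~k & MASK64)   # sve2_bsl1n
def bsl2n(n, m, k):  return (n & k) | (~m & ~k & MASK64)            # sve2_bsl2n
def nbsl(n, m, k):   return ~bsl(n, m, k) & MASK64                  # sve2_nbsl
```

The identities bsl1n(n, m, k) == bsl(~n, m, k), bsl2n(n, m, k) == bsl(n, ~m, k), and nbsl == ~bsl follow directly from the loop bodies in sve_helper.c.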

[PATCH v5 40/81] target/arm: Implement SVE2 SUBHNB, SUBHNT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200417162231.10374-4-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  8 
 target/arm/sve.decode  |  2 ++
 target/arm/sve_helper.c| 10 ++
 target/arm/translate-sve.c |  3 +++
 4 files changed, 23 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 8d95c87694..3642e7c820 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2525,6 +2525,14 @@ DEF_HELPER_FLAGS_4(sve2_raddhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_raddhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_raddhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_subhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_subhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_subhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_subhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_subhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_subhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index dfcfab4bc0..c68bfcf6ed 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1326,6 +1326,8 @@ ADDHNB  01000101 .. 1 . 011 000 . .  @rd_rn_rm
 ADDHNT  01000101 .. 1 . 011 001 . .  @rd_rn_rm
 RADDHNB 01000101 .. 1 . 011 010 . .  @rd_rn_rm
 RADDHNT 01000101 .. 1 . 011 011 . .  @rd_rn_rm
+SUBHNB  01000101 .. 1 . 011 100 . .  @rd_rn_rm
+SUBHNT  01000101 .. 1 . 011 101 . .  @rd_rn_rm
 
 ### SVE2 Character Match
 
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index e6f6e3d5fa..0df70effe3 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2136,6 +2136,7 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
 
 #define DO_ADDHN(N, M, SH)  ((N + M) >> SH)
 #define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
+#define DO_SUBHN(N, M, SH)  ((N - M) >> SH)
 
 DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
 DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
@@ -2153,6 +2154,15 @@ DO_BINOPNT(sve2_raddhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RADDHN)
 DO_BINOPNT(sve2_raddhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RADDHN)
 DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RADDHN)
 
+DO_BINOPNB(sve2_subhnb_h, uint16_t, uint8_t, 8, DO_SUBHN)
+DO_BINOPNB(sve2_subhnb_s, uint32_t, uint16_t, 16, DO_SUBHN)
+DO_BINOPNB(sve2_subhnb_d, uint64_t, uint32_t, 32, DO_SUBHN)
+
+DO_BINOPNT(sve2_subhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_SUBHN)
+DO_BINOPNT(sve2_subhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_SUBHN)
+DO_BINOPNT(sve2_subhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_SUBHN)
+
+#undef DO_SUBHN
 #undef DO_RADDHN
 #undef DO_ADDHN
 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index af0d0ab279..55303ba41d 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7477,6 +7477,9 @@ DO_SVE2_ZZZ_NARROW(ADDHNT, addhnt)
 DO_SVE2_ZZZ_NARROW(RADDHNB, raddhnb)
 DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
 
+DO_SVE2_ZZZ_NARROW(SUBHNB, subhnb)
+DO_SVE2_ZZZ_NARROW(SUBHNT, subhnt)
+
 static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
gen_helper_gvec_flags_4 *fn)
 {
-- 
2.25.1
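A reference model of the bottom/top placement may help here — assuming, per the DO_BINOPNB/DO_BINOPNT naming, that the B form writes the even narrow elements and zeroes the odd ones, while the T form writes the odd narrow elements and merges with the existing destination. A Python sketch under those assumptions, not QEMU code:

```python
def subhnb(zn, zm, elem_bits):
    """SUBHNB: each wide difference's high half lands in the even
    narrow element; the odd narrow element is zeroed."""
    shift = elem_bits // 2
    nmask = (1 << shift) - 1
    out = []
    for n, m in zip(zn, zm):
        out += [((n - m) >> shift) & nmask, 0]
    return out

def subhnt(d, zn, zm, elem_bits):
    """SUBHNT: writes the odd narrow elements, preserving the evens
    already present in the destination d."""
    shift = elem_bits // 2
    nmask = (1 << shift) - 1
    out = list(d)
    for i, (n, m) in enumerate(zip(zn, zm)):
        out[2 * i + 1] = ((n - m) >> shift) & nmask
    return out
```

A typical idiom is SUBHNB into a fresh register followed by SUBHNT into the same register to interleave two narrowed results.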




[PATCH v5 44/81] target/arm: Implement SVE2 scatter store insns

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Add decoding logic for SVE2 64-bit/32-bit scatter non-temporal
store insns.

64-bit
* STNT1B (vector plus scalar)
* STNT1H (vector plus scalar)
* STNT1W (vector plus scalar)
* STNT1D (vector plus scalar)

32-bit
* STNT1B (vector plus scalar)
* STNT1H (vector plus scalar)
* STNT1W (vector plus scalar)

Signed-off-by: Stephen Long 
Message-Id: <20200422141553.8037-1-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 10 ++
 target/arm/translate-sve.c |  8 
 2 files changed, 18 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 7645587469..5cfe6df0d2 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1388,3 +1388,13 @@ UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  @rda_rn_rm
 
 CMLA_   01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5  ra=%reg_movprfx
 SQRDCMLAH_  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
+
+### SVE2 Memory Store Group
+
+# SVE2 64-bit scatter non-temporal store (vector plus scalar)
+STNT1_zprz  1110010 .. 00 . 001 ... . . \
+@rprr_scatter_store xs=2 esz=3 scale=0
+
+# SVE2 32-bit scatter non-temporal store (vector plus scalar)
+STNT1_zprz  1110010 .. 10 . 001 ... . . \
+@rprr_scatter_store xs=0 esz=2 scale=0
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index eea8b6f1d0..0356b6a124 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6167,6 +6167,14 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
 return true;
 }
 
+static bool trans_STNT1_zprz(DisasContext *s, arg_ST1_zprz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return trans_ST1_zprz(s, a);
+}
+
 /*
  * Prefetches
  */
-- 
2.25.1
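Semantically (a sketch only - the QEMU translator above simply reuses the existing scatter-store path), STNT1 (vector plus scalar) stores each active element at the per-element vector address plus the scalar register. A hypothetical Python model with a dict as byte-addressed memory:

```python
def stnt1_vector_plus_scalar(mem, base, offsets, data, pred, esize):
    # Store each active element data[i] at offsets[i] + base.
    # mem is a dict modelling byte-addressed memory keyed by address.
    for off, val, active in zip(offsets, data, pred):
        if active:
            addr = (off + base) & (2 ** 64 - 1)  # 64-bit address wrap
            for b in range(esize):               # little-endian byte store
                mem[addr + b] = (val >> (8 * b)) & 0xff
    return mem
```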




[PATCH v5 47/81] target/arm: Implement SVE2 SPLICE, EXT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200423180347.9403-1-stepl...@quicinc.com>
[rth: Rename the trans_* functions to *_sve2.]
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 11 +--
 target/arm/translate-sve.c | 35 ++-
 2 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index cb2ee86228..67b6466a1e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -494,10 +494,14 @@ CPY_z_i 0101 .. 01  00 .  .   
@rdn_pg4 imm=%sh8_i8s
 
 ### SVE Permute - Extract Group
 
-# SVE extract vector (immediate offset)
+# SVE extract vector (destructive)
 EXT 00000101 001 ..... 000 ... rm:5 rd:5 \
  rn=%reg_movprfx imm=%imm8_16_10
 
+# SVE2 extract vector (constructive)
+EXT_sve2    00000101 011 ..... 000 ... rn:5 rd:5 \
+ imm=%imm8_16_10
+
 ### SVE Permute - Unpredicated Group
 
 # SVE broadcast general register
@@ -588,9 +592,12 @@ REVH0101 .. 1001 01 100 ... . .
 @rd_pg_rn
 REVW0101 .. 1001 10 100 ... . . @rd_pg_rn
 RBIT0101 .. 1001 11 100 ... . . @rd_pg_rn
 
-# SVE vector splice (predicated)
+# SVE vector splice (predicated, destructive)
 SPLICE  00000101 .. 101 100 100 ... ..... ..... @rdn_pg_rm
 
+# SVE2 vector splice (predicated, constructive)
+SPLICE_sve2 00000101 .. 101 101 100 ... ..... ..... @rd_pg_rn
+
 ### SVE Select Vectors Group
 
 # SVE select vector elements (predicated)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0afae9646f..e9feb05da3 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -2266,18 +2266,18 @@ static bool trans_CPY_z_i(DisasContext *s, arg_CPY_z_i 
*a)
  *** SVE Permute Extract Group
  */
 
-static bool trans_EXT(DisasContext *s, arg_EXT *a)
+static bool do_EXT(DisasContext *s, int rd, int rn, int rm, int imm)
 {
 if (!sve_access_check(s)) {
 return true;
 }
 
 unsigned vsz = vec_full_reg_size(s);
-unsigned n_ofs = a->imm >= vsz ? 0 : a->imm;
+unsigned n_ofs = imm >= vsz ? 0 : imm;
 unsigned n_siz = vsz - n_ofs;
-unsigned d = vec_full_reg_offset(s, a->rd);
-unsigned n = vec_full_reg_offset(s, a->rn);
-unsigned m = vec_full_reg_offset(s, a->rm);
+unsigned d = vec_full_reg_offset(s, rd);
+unsigned n = vec_full_reg_offset(s, rn);
+unsigned m = vec_full_reg_offset(s, rm);
 
 /* Use host vector move insns if we have appropriate sizes
  * and no unfortunate overlap.
@@ -2296,6 +2296,19 @@ static bool trans_EXT(DisasContext *s, arg_EXT *a)
 return true;
 }
 
+static bool trans_EXT(DisasContext *s, arg_EXT *a)
+{
+return do_EXT(s, a->rd, a->rn, a->rm, a->imm);
+}
+
+static bool trans_EXT_sve2(DisasContext *s, arg_rri *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return do_EXT(s, a->rd, a->rn, (a->rn + 1) % 32, a->imm);
+}
+
 /*
  *** SVE Permute - Unpredicated Group
  */
@@ -3013,6 +3026,18 @@ static bool trans_SPLICE(DisasContext *s, arg_rprr_esz 
*a)
 return true;
 }
 
+static bool trans_SPLICE_sve2(DisasContext *s, arg_rpr_esz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_ool_zzzp(s, gen_helper_sve_splice,
+  a->rd, a->rn, (a->rn + 1) % 32, a->pg, a->esz);
+}
+return true;
+}
+
 /*
  *** SVE Integer Compare - Vectors Group
  */
-- 
2.25.1
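The constructive form reuses do_EXT with the register pair Zn:Z(n+1) mod 32, so both EXT variants reduce to the same byte-extract operation. A small Python model of that operation on byte lists (names are mine):

```python
def ext(n_bytes, m_bytes, imm):
    # Concatenate the two vectors and extract vsz bytes starting at byte imm.
    # As in do_EXT above, an out-of-range imm is treated as offset 0.
    vsz = len(n_bytes)
    n_ofs = imm if imm < vsz else 0
    return (n_bytes + m_bytes)[n_ofs:n_ofs + vsz]
```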




[PATCH v5 42/81] target/arm: Implement SVE2 HISTCNT, HISTSEG

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200416173109.8856-1-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
v2: Fix overlap between output and input vectors.
v4: Fix histseg counting (zhiwei).
---
 target/arm/helper-sve.h|   7 ++
 target/arm/sve.decode  |   6 ++
 target/arm/sve_helper.c| 131 +
 target/arm/translate-sve.c |  19 ++
 4 files changed, 163 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 98e6b57e38..507a2fea8e 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2551,6 +2551,13 @@ DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_b, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve2_nmatch_ppzz_h, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_histcnt_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_histcnt_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_histseg, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 388bf92acf..8f501a083c 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -146,6 +146,7 @@
 &rprr_esz rn=%reg_movprfx
 @rdn_pg_rm_ra   ........ esz:2 . ra:5  ... pg:3 rm:5 rd:5 \
 &rprrr_esz rn=%reg_movprfx
+@rd_pg_rn_rm    ........ esz:2 . rm:5 ... pg:3 rn:5 rd:5   &rprr_esz
 
 # One register operand, with governing predicate, vector element size
 @rd_pg_rn       ........ esz:2 ... ... ... pg:3 rn:5 rd:5   &rpr_esz
@@ -1336,6 +1337,11 @@ RSUBHNT 01000101 .. 1 . 011 111 . .  
@rd_rn_rm
 MATCH   01000101 .. 1 . 100 ... . 0  @pd_pg_rn_rm
 NMATCH  01000101 .. 1 . 100 ... . 1  @pd_pg_rn_rm
 
+### SVE2 Histogram Computation
+
+HISTCNT 01000101 .. 1 ..... 110 ... ..... .....  @rd_pg_rn_rm
+HISTSEG 01000101 .. 1 ..... 101 000 ..... .....  @rd_rn_rm
+
 ## SVE2 floating-point pairwise operations
 
 FADDP   01100100 .. 010 00 0 100 ... . . @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 12acc4fb0b..8d002fdb65 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -7062,3 +7062,134 @@ DO_PPZZ_MATCH(sve2_nmatch_ppzz_b, MO_8, true)
 DO_PPZZ_MATCH(sve2_nmatch_ppzz_h, MO_16, true)
 
 #undef DO_PPZZ_MATCH
+
+void HELPER(sve2_histcnt_s)(void *vd, void *vn, void *vm, void *vg,
+uint32_t desc)
+{
+ARMVectorReg scratch;
+intptr_t i, j;
+intptr_t opr_sz = simd_oprsz(desc);
+uint32_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+if (d == n) {
+n = memcpy(&scratch, n, opr_sz);
+if (d == m) {
+m = n;
+}
+} else if (d == m) {
+m = memcpy(&scratch, m, opr_sz);
+}
+
+for (i = 0; i < opr_sz; i += 4) {
+uint64_t count = 0;
+uint8_t pred;
+
+pred = pg[H1(i >> 3)] >> (i & 7);
+if (pred & 1) {
+uint32_t nn = n[H4(i >> 2)];
+
+for (j = 0; j <= i; j += 4) {
+pred = pg[H1(j >> 3)] >> (j & 7);
+if ((pred & 1) && nn == m[H4(j >> 2)]) {
+++count;
+}
+}
+}
+d[H4(i >> 2)] = count;
+}
+}
+
+void HELPER(sve2_histcnt_d)(void *vd, void *vn, void *vm, void *vg,
+uint32_t desc)
+{
+ARMVectorReg scratch;
+intptr_t i, j;
+intptr_t opr_sz = simd_oprsz(desc);
+uint64_t *d = vd, *n = vn, *m = vm;
+uint8_t *pg = vg;
+
+if (d == n) {
+n = memcpy(&scratch, n, opr_sz);
+if (d == m) {
+m = n;
+}
+} else if (d == m) {
+m = memcpy(&scratch, m, opr_sz);
+}
+
+for (i = 0; i < opr_sz / 8; ++i) {
+uint64_t count = 0;
+if (pg[H1(i)] & 1) {
+uint64_t nn = n[i];
+for (j = 0; j <= i; ++j) {
+if ((pg[H1(j)] & 1) && nn == m[j]) {
+++count;
+}
+}
+}
+d[i] = count;
+}
+}
+
+/*
+ * Returns the number of bytes in m0 and m1 that match n.
+ * Unlike do_match2 we don't just need true/false, we need an exact count.
+ * This requires two extra logical operations.
+ */
+static inline uint64_t do_histseg_cnt(uint8_t n, uint64_t m0, uint64_t m1)
+{
+const uint64_t mask = dup_const(MO_8, 0x7f);
+uint64_t cmp0, cmp1;
+
+cmp1 = dup_const(MO_8, n);
+cmp0 = cmp1 ^ m0;
+cmp1 = cmp1 ^ m1;
+
+/*
+ * 1: clear msb of each byte to avoid carry to next byte (& mask)
+ * 2: carry in to msb if byte != 0 (+ mask)
+ * 3: set msb if cmp has msb set (| cmp)
+ * 4: set ~msb to ignore them (| mask)
+ * We now have 0xff for byte != 0 
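The HISTCNT helper loops above implement, per element: if the governing predicate bit is set, count the active elements at or before this position whose value in Zm equals this element of Zn; otherwise write 0. As executable pseudocode (hypothetical names, plain Python lists):

```python
def histcnt(n, m, pred):
    # d[i] = number of active j <= i with m[j] == n[i], or 0 if i is inactive.
    d = []
    for i in range(len(n)):
        count = 0
        if pred[i]:
            count = sum(1 for j in range(i + 1) if pred[j] and n[i] == m[j])
        d.append(count)
    return d
```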

[PATCH v5 35/81] target/arm: Implement SVE2 saturating multiply-add high

2021-04-16 Thread Richard Henderson
SVE2 has two additional sizes of the operation and unlike NEON,
there is no saturation flag.  Create new entry points for SVE2
that do not set QC.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h|  17 
 target/arm/sve.decode  |   5 ++
 target/arm/translate-sve.c |  18 +
 target/arm/vec_helper.c| 161 +++--
 4 files changed, 195 insertions(+), 6 deletions(-)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index 2c412ffd3b..6bb0b0ddc0 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -591,6 +591,23 @@ DEF_HELPER_FLAGS_5(gvec_qrdmlah_s32, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(gvec_qrdmlsh_s32, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlah_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrdmlsh_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(gvec_sdot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_udot_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_sdot_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 52f615b39e..8308c9238a 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1346,3 +1346,8 @@ SQDMLSLT_zzzw   01000100 .. 0 . 0110 11 . .  
@rda_rn_rm
 
 SQDMLALBT   01000100 .. 0 . 1 0 . .  @rda_rn_rm
 SQDMLSLBT   01000100 .. 0 . 1 1 . .  @rda_rn_rm
+
+## SVE2 saturating multiply-add high
+
+SQRDMLAH_zzzz   01000100 .. 0 ..... 01110 0 ..... .....  @rda_rn_rm
+SQRDMLSH_zzzz   01000100 .. 0 ..... 01110 1 ..... .....  @rda_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 27f9cdb891..4326b597e6 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7562,3 +7562,21 @@ static bool trans_SQDMLSLBT(DisasContext *s, 
arg_rrrr_esz *a)
 {
 return do_sqdmlsl_zzzw(s, a, false, true);
 }
+
+static bool trans_SQRDMLAH_zzzz(DisasContext *s, arg_rrrr_esz *a)
+{
+static gen_helper_gvec_4 * const fns[] = {
+gen_helper_sve2_sqrdmlah_b, gen_helper_sve2_sqrdmlah_h,
+gen_helper_sve2_sqrdmlah_s, gen_helper_sve2_sqrdmlah_d,
+};
+return do_sve2_zzzz_ool(s, a, fns[a->esz], 0);
+}
+
+static bool trans_SQRDMLSH_zzzz(DisasContext *s, arg_rrrr_esz *a)
+{
+static gen_helper_gvec_4 * const fns[] = {
+gen_helper_sve2_sqrdmlsh_b, gen_helper_sve2_sqrdmlsh_h,
+gen_helper_sve2_sqrdmlsh_s, gen_helper_sve2_sqrdmlsh_d,
+};
+return do_sve2_zzzz_ool(s, a, fns[a->esz], 0);
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index b0ce597060..c56337e724 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -22,6 +22,7 @@
 #include "exec/helper-proto.h"
 #include "tcg/tcg-gvec-desc.h"
 #include "fpu/softfloat.h"
+#include "qemu/int128.h"
 #include "vec_internal.h"
 
 /* Note that vector data is stored in host-endian 64-bit chunks,
@@ -36,15 +37,55 @@
 #define H4(x)  (x)
 #endif
 
+/* Signed saturating rounding doubling multiply-accumulate high half, 8-bit */
+static int8_t do_sqrdmlah_b(int8_t src1, int8_t src2, int8_t src3,
+bool neg, bool round)
+{
+/*
+ * Simplify:
+ * = ((a3 << 8) + ((e1 * e2) << 1) + (round << 7)) >> 8
+ * = ((a3 << 7) + (e1 * e2) + (round << 6)) >> 7
+ */
+int32_t ret = (int32_t)src1 * src2;
+if (neg) {
+ret = -ret;
+}
+ret += ((int32_t)src3 << 7) + (round << 6);
+ret >>= 7;
+
+if (ret != (int8_t)ret) {
+ret = (ret < 0 ? INT8_MIN : INT8_MAX);
+}
+return ret;
+}
+
+void HELPER(sve2_sqrdmlah_b)(void *vd, void *vn, void *vm,
+ void *va, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int8_t *d = vd, *n = vn, *m = vm, *a = va;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = do_sqrdmlah_b(n[i], m[i], a[i], false, true);
+}
+}
+
+void HELPER(sve2_sqrdmlsh_b)(void *vd, void *vn, void *vm,
+ void *va, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int8_t *d = vd, *n = vn, *m = vm, *a = va;
+
+for (i = 0; i < opr_sz; ++i) {
+d[i] = do_sqrdmlah_b(n[i], m[i], a[i], true, 
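The scalar helper's arithmetic can be checked against a direct Python transcription of the comment in do_sqrdmlah_b: ((a3 << 7) ± e1*e2 + (round << 6)) >> 7, saturated to the signed 8-bit range. Names here are mine; saturation replaces the NEON QC flag, as the commit message notes.

```python
def sqrdmlah_b(src1, src2, src3, neg=False, round_=True):
    # ((src3 << 7) + (+/- src1 * src2) + (round << 6)) >> 7, saturated to int8.
    ret = src1 * src2
    if neg:
        ret = -ret
    ret += (src3 << 7) + (64 if round_ else 0)
    ret >>= 7                        # Python's >> is an arithmetic shift
    return max(-128, min(127, ret))  # saturate instead of setting QC
```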

[PATCH v5 27/81] target/arm: Implement SVE2 SQSHRUN, SQRSHRUN

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 16 +++
 target/arm/sve.decode  |  4 ++
 target/arm/sve_helper.c| 35 ++
 target/arm/translate-sve.c | 98 ++
 4 files changed, 153 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 2b2ebea631..2e80d9d27b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2460,6 +2460,22 @@ DEF_HELPER_FLAGS_3(sve2_rshrnt_h, TCG_CALL_NO_RWG, void, 
ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_rshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_rshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 169486ecb2..18faa900ca 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1288,6 +1288,10 @@ SQXTUNT 01000101 .. 1 . 010 101 . .  
@rd_rn_tszimm_shl
 ## SVE2 bitwise shift right narrow
 
 # Bit 23 == 0 is handled by esz > 0 in the translator.
+SQSHRUNB    01000101 .. 1 ..... 00 0000 ..... .....  @rd_rn_tszimm_shr
+SQSHRUNT    01000101 .. 1 ..... 00 0001 ..... .....  @rd_rn_tszimm_shr
+SQRSHRUNB   01000101 .. 1 ..... 00 0010 ..... .....  @rd_rn_tszimm_shr
+SQRSHRUNT   01000101 .. 1 ..... 00 0011 ..... .....  @rd_rn_tszimm_shr
 SHRNB   01000101 .. 1 ..... 00 0100 ..... .....  @rd_rn_tszimm_shr
 SHRNT   01000101 .. 1 ..... 00 0101 ..... .....  @rd_rn_tszimm_shr
 RSHRNB  01000101 .. 1 ..... 00 0110 ..... .....  @rd_rn_tszimm_shr
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 3f864da3ab..d6b6293ab0 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1874,6 +1874,16 @@ static inline uint64_t do_urshr(uint64_t x, unsigned sh)
 }
 }
 
+static inline int64_t do_srshr(int64_t x, unsigned sh)
+{
+if (likely(sh < 64)) {
+return (x >> sh) + ((x >> (sh - 1)) & 1);
+} else {
+/* Rounding the sign bit always produces 0. */
+return 0;
+}
+}
+
 DO_ZPZI(sve_asr_zpzi_b, int8_t, H1, DO_SHR)
 DO_ZPZI(sve_asr_zpzi_h, int16_t, H1_2, DO_SHR)
 DO_ZPZI(sve_asr_zpzi_s, int32_t, H1_4, DO_SHR)
@@ -1936,6 +1946,31 @@ DO_SHRNT(sve2_rshrnt_h, uint16_t, uint8_t, H1_2, H1, 
do_urshr)
 DO_SHRNT(sve2_rshrnt_s, uint32_t, uint16_t, H1_4, H1_2, do_urshr)
 DO_SHRNT(sve2_rshrnt_d, uint64_t, uint32_t, , H1_4, do_urshr)
 
+#define DO_SQSHRUN_H(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT8_MAX)
+#define DO_SQSHRUN_S(x, sh) do_sat_bhs((int64_t)(x) >> sh, 0, UINT16_MAX)
+#define DO_SQSHRUN_D(x, sh) \
+do_sat_bhs((int64_t)(x) >> (sh < 64 ? sh : 63), 0, UINT32_MAX)
+
+DO_SHRNB(sve2_sqshrunb_h, int16_t, uint8_t, DO_SQSHRUN_H)
+DO_SHRNB(sve2_sqshrunb_s, int32_t, uint16_t, DO_SQSHRUN_S)
+DO_SHRNB(sve2_sqshrunb_d, int64_t, uint32_t, DO_SQSHRUN_D)
+
+DO_SHRNT(sve2_sqshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRUN_H)
+DO_SHRNT(sve2_sqshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRUN_S)
+DO_SHRNT(sve2_sqshrunt_d, int64_t, uint32_t, , H1_4, DO_SQSHRUN_D)
+
+#define DO_SQRSHRUN_H(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT8_MAX)
+#define DO_SQRSHRUN_S(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT16_MAX)
+#define DO_SQRSHRUN_D(x, sh) do_sat_bhs(do_srshr(x, sh), 0, UINT32_MAX)
+
+DO_SHRNB(sve2_sqrshrunb_h, int16_t, uint8_t, DO_SQRSHRUN_H)
+DO_SHRNB(sve2_sqrshrunb_s, int32_t, uint16_t, DO_SQRSHRUN_S)
+DO_SHRNB(sve2_sqrshrunb_d, int64_t, uint32_t, DO_SQRSHRUN_D)
+
+DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
+DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
+DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
+
 #undef DO_SHRNB
 #undef DO_SHRNT
 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index c1a081acaa..5ff6b8ffb6 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6858,6 +6858,104 @@ static 
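do_srshr and the DO_SQSHRUN_* macros compose a rounding (or plain) signed shift with an unsigned saturating narrow. A Python model of the two pieces (hypothetical names; the 64-bit guard mirrors the sh < 64 test above):

```python
def srshr(x, sh):
    # Signed rounding shift right; rounding the sign bit always produces 0.
    if sh < 64:
        return (x >> sh) + ((x >> (sh - 1)) & 1)
    return 0

def sqshrun(x, sh, nmax):
    # Signed-to-unsigned saturating shift-right-narrow: shift, clamp to [0, nmax].
    return max(0, min(nmax, x >> sh))
```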

[PATCH v5 31/81] target/arm: Implement SVE2 WHILERW, WHILEWR

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix decodetree typo
v3: Fix iteration counts (zhiwei).
v4: Update for PREDDESC.
---
 target/arm/sve.decode  |  3 ++
 target/arm/translate-sve.c | 67 ++
 2 files changed, 70 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index ae853d21f2..f365907518 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -702,6 +702,9 @@ CTERM   00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 
 # SVE integer compare scalar count and limit
 WHILE   00100101 esz:2 1 rm:5 000 sf:1 u:1 lt:1 rn:5 eq:1 rd:4
 
+# SVE2 pointer conflict compare
+WHILE_ptr   00100101 esz:2 1 rm:5 001 100 rn:5 rw:1 rd:4
+
 ### SVE Integer Wide Immediate - Unpredicated Group
 
 # SVE broadcast floating-point immediate (unpredicated)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index aff85b0220..97e113ceec 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3218,6 +3218,73 @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
 return true;
 }
 
+static bool trans_WHILE_ptr(DisasContext *s, arg_WHILE_ptr *a)
+{
+TCGv_i64 op0, op1, diff, t1, tmax;
+TCGv_i32 t2, t3;
+TCGv_ptr ptr;
+unsigned vsz = vec_full_reg_size(s);
+unsigned desc = 0;
+
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (!sve_access_check(s)) {
+return true;
+}
+
+op0 = read_cpu_reg(s, a->rn, 1);
+op1 = read_cpu_reg(s, a->rm, 1);
+
+tmax = tcg_const_i64(vsz);
+diff = tcg_temp_new_i64();
+
+if (a->rw) {
+/* WHILERW */
+/* diff = abs(op1 - op0), noting that op0/1 are unsigned. */
+t1 = tcg_temp_new_i64();
+tcg_gen_sub_i64(diff, op0, op1);
+tcg_gen_sub_i64(t1, op1, op0);
+tcg_gen_movcond_i64(TCG_COND_GEU, diff, op0, op1, diff, t1);
+tcg_temp_free_i64(t1);
+/* Round down to a multiple of ESIZE.  */
+tcg_gen_andi_i64(diff, diff, -1 << a->esz);
+/* If op1 == op0, diff == 0, and the condition is always true. */
+tcg_gen_movcond_i64(TCG_COND_EQ, diff, op0, op1, tmax, diff);
+} else {
+/* WHILEWR */
+tcg_gen_sub_i64(diff, op1, op0);
+/* Round down to a multiple of ESIZE.  */
+tcg_gen_andi_i64(diff, diff, -1 << a->esz);
+/* If op0 >= op1, diff <= 0, the condition is always true. */
+tcg_gen_movcond_i64(TCG_COND_GEU, diff, op0, op1, tmax, diff);
+}
+
+/* Bound to the maximum.  */
+tcg_gen_umin_i64(diff, diff, tmax);
+tcg_temp_free_i64(tmax);
+
+/* Since we're bounded, pass as a 32-bit type.  */
+t2 = tcg_temp_new_i32();
+tcg_gen_extrl_i64_i32(t2, diff);
+tcg_temp_free_i64(diff);
+
+desc = FIELD_DP32(desc, PREDDESC, OPRSZ, vsz / 8);
+desc = FIELD_DP32(desc, PREDDESC, ESZ, a->esz);
+t3 = tcg_const_i32(desc);
+
+ptr = tcg_temp_new_ptr();
+tcg_gen_addi_ptr(ptr, cpu_env, pred_full_reg_offset(s, a->rd));
+
+gen_helper_sve_whilel(t2, ptr, t2, t3);
+do_pred_flags(t2);
+
+tcg_temp_free_ptr(ptr);
+tcg_temp_free_i32(t2);
+tcg_temp_free_i32(t3);
+return true;
+}
+
 /*
  *** SVE Integer Wide Immediate - Unpredicated Group
  */
-- 
2.25.1
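The WHILERW path computes |op0 - op1| without a branch: both subtractions are evaluated and a movcond on the unsigned comparison selects the non-negative one. The same idea in Python, modelling 64-bit wraparound (names are mine):

```python
M = 2 ** 64

def abs_diff_u64(op0, op1):
    # Evaluate both wrapped differences and select with an unsigned compare,
    # mirroring the sub/sub/movcond(GEU) sequence above.
    diff = (op0 - op1) % M
    t1 = (op1 - op0) % M
    return diff if op0 >= op1 else t1
```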




[PATCH v5 45/81] target/arm: Implement SVE2 gather load insns

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Add decoding logic for SVE2 64-bit/32-bit gather non-temporal
load insns.

64-bit
* LDNT1SB
* LDNT1B (vector plus scalar)
* LDNT1SH
* LDNT1H (vector plus scalar)
* LDNT1SW
* LDNT1W (vector plus scalar)
* LDNT1D (vector plus scalar)

32-bit
* LDNT1SB
* LDNT1B (vector plus scalar)
* LDNT1SH
* LDNT1H (vector plus scalar)
* LDNT1W (vector plus scalar)

Signed-off-by: Stephen Long 
Message-Id: <20200422152343.12493-1-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 11 +++
 target/arm/translate-sve.c |  8 
 2 files changed, 19 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 5cfe6df0d2..c3958bed6a 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1389,6 +1389,17 @@ UMLSLT_zzzw 01000100 .. 0 . 010 111 . .  
@rda_rn_rm
 CMLA_zzzz   01000100 esz:2 0 rm:5 0010 rot:2 rn:5 rd:5  ra=%reg_movprfx
 SQRDCMLAH_zzzz  01000100 esz:2 0 rm:5 0011 rot:2 rn:5 rd:5  ra=%reg_movprfx
 
+### SVE2 Memory Gather Load Group
+
+# SVE2 64-bit gather non-temporal load
+#   (scalar plus unpacked 32-bit unscaled offsets)
+LDNT1_zprz  1100010 msz:2 00 rm:5 1 u:1 0 pg:3 rn:5 rd:5 \
+&rprr_gather_load xs=0 esz=3 scale=0 ff=0
+
+# SVE2 32-bit gather non-temporal load (scalar plus 32-bit unscaled offsets)
+LDNT1_zprz  110 msz:2 00 rm:5 10 u:1 pg:3 rn:5 rd:5 \
+&rprr_gather_load xs=0 esz=2 scale=0 ff=0
+
 ### SVE2 Memory Store Group
 
 # SVE2 64-bit scatter non-temporal store (vector plus scalar)
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 0356b6a124..a74c15b23f 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6015,6 +6015,14 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz 
*a)
 return true;
 }
 
+static bool trans_LDNT1_zprz(DisasContext *s, arg_LD1_zprz *a)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+return trans_LD1_zprz(s, a);
+}
+
 /* Indexed by [mte][be][xs][msz].  */
 static gen_helper_gvec_mem_scatter * const scatter_store_fn32[2][2][2][3] = {
 { /* MTE Inactive */
-- 
2.25.1




[PATCH v5 38/81] target/arm: Implement SVE2 ADDHNB, ADDHNT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200417162231.10374-2-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  8 
 target/arm/sve.decode  |  5 +
 target/arm/sve_helper.c| 36 
 target/arm/translate-sve.c | 13 +
 4 files changed, 62 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index d154218452..a369fd2391 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2509,6 +2509,14 @@ DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, 
void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_addhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_addhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_addhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_addhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_addhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_addhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 936977eacb..72dd36a5c8 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1320,6 +1320,11 @@ UQSHRNT 01000101 .. 1 . 00 1101 . .  
@rd_rn_tszimm_shr
 UQRSHRNB    01000101 .. 1 ..... 00 1110 ..... .....  @rd_rn_tszimm_shr
 UQRSHRNT    01000101 .. 1 ..... 00 1111 ..... .....  @rd_rn_tszimm_shr
 
+## SVE2 integer add/subtract narrow high part
+
+ADDHNB  01000101 .. 1 ..... 011 000 ..... .....  @rd_rn_rm
+ADDHNT  01000101 .. 1 ..... 011 001 ..... .....  @rd_rn_rm
+
 ### SVE2 Character Match
 
 MATCH   01000101 .. 1 . 100 ... . 0  @pd_pg_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 572d41a26c..2dead1f056 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2112,6 +2112,42 @@ DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, , 
H1_4, DO_UQRSHRN_D)
 #undef DO_SHRNB
 #undef DO_SHRNT
 
+#define DO_BINOPNB(NAME, TYPEW, TYPEN, SHIFT, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {   \
+TYPEW nn = *(TYPEW *)(vn + i);  \
+TYPEW mm = *(TYPEW *)(vm + i);  \
+*(TYPEW *)(vd + i) = (TYPEN)OP(nn, mm, SHIFT);  \
+}   \
+}
+
+#define DO_BINOPNT(NAME, TYPEW, TYPEN, SHIFT, HW, HN, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {   \
+TYPEW nn = *(TYPEW *)(vn + HW(i));  \
+TYPEW mm = *(TYPEW *)(vm + HW(i));  \
+*(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, mm, SHIFT); \
+}   \
+}
+
+#define DO_ADDHN(N, M, SH)  ((N + M) >> SH)
+
+DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
+DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
+DO_BINOPNB(sve2_addhnb_d, uint64_t, uint32_t, 32, DO_ADDHN)
+
+DO_BINOPNT(sve2_addhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_ADDHN)
+DO_BINOPNT(sve2_addhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_ADDHN)
+DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_ADDHN)
+
+#undef DO_ADDHN
+
+#undef DO_BINOPNB
+
 /* Fully general four-operand expander, controlled by a predicate.
  */
 #define DO_ZPZZZ(NAME, TYPE, H, OP)   \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 6e92abbd8f..86f8a24b5b 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7462,6 +7462,19 @@ static bool trans_UQRSHRNT(DisasContext *s, arg_rri_esz 
*a)
 return do_sve2_shr_narrow(s, a, ops);
 }
 
+#define DO_SVE2_ZZZ_NARROW(NAME, name)\
+static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
+{ \
+   
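DO_BINOPNB writes the narrowed high half through a widened store, which zeroes the adjacent top lane; DO_BINOPNT writes only the top lane. A Python model of the ADDHNB case (hypothetical names; wbits is the wide element width):

```python
def addhnb(n, m, wbits):
    # High half of (n + m) to the even (bottom) narrow lanes; because the
    # result is stored as a wide element, the odd (top) lanes become zero.
    shift = wbits // 2
    mask = (1 << wbits) - 1
    out = []
    for nn, mm in zip(n, m):
        out.extend([((nn + mm) & mask) >> shift, 0])
    return out
```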

[PATCH v5 28/81] target/arm: Implement SVE2 UQSHRN, UQRSHRN

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 16 +++
 target/arm/sve.decode  |  4 ++
 target/arm/sve_helper.c| 24 ++
 target/arm/translate-sve.c | 93 ++
 4 files changed, 137 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 2e80d9d27b..ba6a24fc8b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2476,6 +2476,22 @@ DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, 
void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqrshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 18faa900ca..13b5da0856 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1296,6 +1296,10 @@ SHRNB   01000101 .. 1 . 00 0100 . .  
@rd_rn_tszimm_shr
 SHRNT   01000101 .. 1 ..... 00 0101 ..... .....  @rd_rn_tszimm_shr
 RSHRNB  01000101 .. 1 ..... 00 0110 ..... .....  @rd_rn_tszimm_shr
 RSHRNT  01000101 .. 1 ..... 00 0111 ..... .....  @rd_rn_tszimm_shr
+UQSHRNB 01000101 .. 1 ..... 00 1100 ..... .....  @rd_rn_tszimm_shr
+UQSHRNT 01000101 .. 1 ..... 00 1101 ..... .....  @rd_rn_tszimm_shr
+UQRSHRNB    01000101 .. 1 ..... 00 1110 ..... .....  @rd_rn_tszimm_shr
+UQRSHRNT    01000101 .. 1 ..... 00 1111 ..... .....  @rd_rn_tszimm_shr
 
 ## SVE2 floating-point pairwise operations
 
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d6b6293ab0..175f3de08f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1971,6 +1971,30 @@ DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, 
DO_SQRSHRUN_H)
 DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
 DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
 
+#define DO_UQSHRN_H(x, sh) MIN(x >> sh, UINT8_MAX)
+#define DO_UQSHRN_S(x, sh) MIN(x >> sh, UINT16_MAX)
+#define DO_UQSHRN_D(x, sh) MIN(x >> sh, UINT32_MAX)
+
+DO_SHRNB(sve2_uqshrnb_h, uint16_t, uint8_t, DO_UQSHRN_H)
+DO_SHRNB(sve2_uqshrnb_s, uint32_t, uint16_t, DO_UQSHRN_S)
+DO_SHRNB(sve2_uqshrnb_d, uint64_t, uint32_t, DO_UQSHRN_D)
+
+DO_SHRNT(sve2_uqshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQSHRN_H)
+DO_SHRNT(sve2_uqshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQSHRN_S)
+DO_SHRNT(sve2_uqshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQSHRN_D)
+
+#define DO_UQRSHRN_H(x, sh) MIN(do_urshr(x, sh), UINT8_MAX)
+#define DO_UQRSHRN_S(x, sh) MIN(do_urshr(x, sh), UINT16_MAX)
+#define DO_UQRSHRN_D(x, sh) MIN(do_urshr(x, sh), UINT32_MAX)
+
+DO_SHRNB(sve2_uqrshrnb_h, uint16_t, uint8_t, DO_UQRSHRN_H)
+DO_SHRNB(sve2_uqrshrnb_s, uint32_t, uint16_t, DO_UQRSHRN_S)
+DO_SHRNB(sve2_uqrshrnb_d, uint64_t, uint32_t, DO_UQRSHRN_D)
+
+DO_SHRNT(sve2_uqrshrnt_h, uint16_t, uint8_t, H1_2, H1, DO_UQRSHRN_H)
+DO_SHRNT(sve2_uqrshrnt_s, uint32_t, uint16_t, H1_4, H1_2, DO_UQRSHRN_S)
+DO_SHRNT(sve2_uqrshrnt_d, uint64_t, uint32_t, , H1_4, DO_UQRSHRN_D)
+
 #undef DO_SHRNB
 #undef DO_SHRNT
 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 5ff6b8ffb6..12733d9b4f 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6956,6 +6956,99 @@ static bool trans_SQRSHRUNT(DisasContext *s, arg_rri_esz 
*a)
 return do_sve2_shr_narrow(s, a, ops);
 }
 
+static void gen_uqshrnb_vec(unsigned vece, TCGv_vec d,
+TCGv_vec n, int64_t shr)
+{
+TCGv_vec t = tcg_temp_new_vec_matching(d);
+int halfbits = 4 << vece;
+
+tcg_gen_shri_vec(vece, n, n, shr);
+tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(0, halfbits));
+tcg_gen_umin_vec(vece, d, n, t);
+tcg_temp_free_vec(t);
+}
+
+static bool trans_UQSHRNB(DisasContext *s, arg_rri_esz *a)
+{
+static const TCGOpcode vec_list[] = {
+INDEX_op_shri_vec, INDEX_op_umin_vec, 0
+};
+
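The vector expansion above implements UQSHRNB as a plain shift followed by an unsigned minimum against the narrow element's maximum - exactly the shri + umin pair in gen_uqshrnb_vec. As a one-line Python model (name is mine):

```python
def uqshrn(x, sh, halfbits):
    # Unsigned shift right, then saturate to the halfbits-wide maximum.
    return min(x >> sh, (1 << halfbits) - 1)
```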

[PATCH v5 22/81] target/arm: Implement SVE2 bitwise shift and insert

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  5 +
 target/arm/translate-sve.c | 10 ++
 2 files changed, 15 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index d3c4ec6dd1..695a16551e 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1261,3 +1261,8 @@ SSRA01000101 .. 0 . 1110 00 . .  
@rd_rn_tszimm_shr
 USRA01000101 .. 0 . 1110 01 . .  @rd_rn_tszimm_shr
 SRSRA   01000101 .. 0 . 1110 10 . .  @rd_rn_tszimm_shr
 URSRA   01000101 .. 0 . 1110 11 . .  @rd_rn_tszimm_shr
+
+## SVE2 bitwise shift and insert
+
+SRI 01000101 .. 0 ..... 11110 0 ..... .....  @rd_rn_tszimm_shr
+SLI 01000101 .. 0 ..... 11110 1 ..... .....  @rd_rn_tszimm_shl
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index c11074..d74a15d8b8 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6428,3 +6428,13 @@ static bool trans_URSRA(DisasContext *s, arg_rri_esz *a)
 {
 return do_sve2_fn2i(s, a, gen_gvec_ursra);
 }
+
+static bool trans_SRI(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_sri);
+}
+
+static bool trans_SLI(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_sli);
+}
-- 
2.25.1
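Per element, the `gen_gvec_sri`/`gen_gvec_sli` expansions used above shift the source and insert the result into the destination, preserving the bits the shift vacated. A minimal scalar sketch for 8-bit elements (`sri8`/`sli8` are illustrative names only):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of shift-right-insert and shift-left-insert:
 * only the bits covered by the shifted source are written; the
 * destination's remaining bits are kept. */
static uint8_t sri8(uint8_t d, uint8_t n, unsigned sh)   /* sh in 1..8 */
{
    uint8_t mask = (uint8_t)(0xffu >> sh);     /* bits written by n >> sh */
    return (uint8_t)((d & ~mask) | ((n >> sh) & mask));
}

static uint8_t sli8(uint8_t d, uint8_t n, unsigned sh)   /* sh in 0..7 */
{
    uint8_t mask = (uint8_t)(0xffu << sh);     /* bits written by n << sh */
    return (uint8_t)((d & ~mask) | ((n << sh) & mask));
}
```

Note that SRI's shift range runs to the element size, at which point the mask is empty and the destination is unchanged.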




[PATCH v5 39/81] target/arm: Implement SVE2 RADDHNB, RADDHNT

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Message-Id: <20200417162231.10374-3-stepl...@quicinc.com>
Signed-off-by: Richard Henderson 
---
v2: Fix round bit type (laurent desnogues)
---
 target/arm/helper-sve.h|  8 
 target/arm/sve.decode  |  2 ++
 target/arm/sve_helper.c| 10 ++
 target/arm/translate-sve.c |  2 ++
 4 files changed, 22 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index a369fd2391..8d95c87694 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2517,6 +2517,14 @@ DEF_HELPER_FLAGS_4(sve2_addhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_addhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_addhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_raddhnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_raddhnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_raddhnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_raddhnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_raddhnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_raddhnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_b, TCG_CALL_NO_RWG,
i32, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_match_ppzz_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 72dd36a5c8..dfcfab4bc0 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1324,6 +1324,8 @@ UQRSHRNT 01000101 .. 1 . 00  . .  @rd_rn_tszimm_shr
 
 ADDHNB  01000101 .. 1 . 011 000 . .  @rd_rn_rm
 ADDHNT  01000101 .. 1 . 011 001 . .  @rd_rn_rm
+RADDHNB 01000101 .. 1 . 011 010 . .  @rd_rn_rm
+RADDHNT 01000101 .. 1 . 011 011 . .  @rd_rn_rm
 
 ### SVE2 Character Match
 
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 2dead1f056..e6f6e3d5fa 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -2135,6 +2135,7 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
 }
 
 #define DO_ADDHN(N, M, SH)  ((N + M) >> SH)
+#define DO_RADDHN(N, M, SH) ((N + M + ((__typeof(N))1 << (SH - 1))) >> SH)
 
 DO_BINOPNB(sve2_addhnb_h, uint16_t, uint8_t, 8, DO_ADDHN)
 DO_BINOPNB(sve2_addhnb_s, uint32_t, uint16_t, 16, DO_ADDHN)
@@ -2144,6 +2145,15 @@ DO_BINOPNT(sve2_addhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_ADDHN)
 DO_BINOPNT(sve2_addhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_ADDHN)
 DO_BINOPNT(sve2_addhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_ADDHN)
 
+DO_BINOPNB(sve2_raddhnb_h, uint16_t, uint8_t, 8, DO_RADDHN)
+DO_BINOPNB(sve2_raddhnb_s, uint32_t, uint16_t, 16, DO_RADDHN)
+DO_BINOPNB(sve2_raddhnb_d, uint64_t, uint32_t, 32, DO_RADDHN)
+
+DO_BINOPNT(sve2_raddhnt_h, uint16_t, uint8_t, 8, H1_2, H1, DO_RADDHN)
+DO_BINOPNT(sve2_raddhnt_s, uint32_t, uint16_t, 16, H1_4, H1_2, DO_RADDHN)
+DO_BINOPNT(sve2_raddhnt_d, uint64_t, uint32_t, 32, , H1_4, DO_RADDHN)
+
+#undef DO_RADDHN
 #undef DO_ADDHN
 
 #undef DO_BINOPNB
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 86f8a24b5b..af0d0ab279 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -7474,6 +7474,8 @@ static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a) \
 
 DO_SVE2_ZZZ_NARROW(ADDHNB, addhnb)
 DO_SVE2_ZZZ_NARROW(ADDHNT, addhnt)
+DO_SVE2_ZZZ_NARROW(RADDHNB, raddhnb)
+DO_SVE2_ZZZ_NARROW(RADDHNT, raddhnt)
 
 static bool do_sve2_ppzz_flags(DisasContext *s, arg_rprr_esz *a,
gen_helper_gvec_flags_4 *fn)
-- 
2.25.1
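The difference between `DO_ADDHN` and `DO_RADDHN` in the patch above is only the rounding constant `1 << (SH - 1)` added before taking the high half. A standalone scalar sketch for the 16-to-8-bit case (`addhn16`/`raddhn16` are illustrative names):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of DO_ADDHN vs DO_RADDHN, narrowing a 16-bit sum to
 * its high 8 bits; the sum wraps modulo the element width, as the
 * helpers' TYPEW arithmetic does after truncation. */
static uint8_t addhn16(uint16_t n, uint16_t m)
{
    return (uint8_t)((uint16_t)(n + m) >> 8);
}

static uint8_t raddhn16(uint16_t n, uint16_t m)
{
    /* Round by adding half of the discarded weight before shifting. */
    return (uint8_t)((uint16_t)(n + m + (1u << 7)) >> 8);
}
```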




[PATCH v5 25/81] target/arm: Implement SVE2 floating-point pairwise

2021-04-16 Thread Richard Henderson
From: Stephen Long 

Signed-off-by: Stephen Long 
Reviewed-by: Richard Henderson 
Signed-off-by: Richard Henderson 
---
v2: Load all inputs before writing any output (laurent desnogues)
---
 target/arm/helper-sve.h| 35 +
 target/arm/sve.decode  |  8 +++
 target/arm/sve_helper.c| 46 ++
 target/arm/translate-sve.c | 25 +
 4 files changed, 114 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index b302203ce8..a033b5f6b2 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2443,3 +2443,38 @@ DEF_HELPER_FLAGS_3(sve2_uqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fmaxnmp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fminnmp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fmaxp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_6(sve2_fminp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 19866ec4c6..9c75ac94c0 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1284,3 +1284,11 @@ UQXTNB  01000101 .. 1 . 010 010 . .  @rd_rn_tszimm_shl
 UQXTNT  01000101 .. 1 . 010 011 . .  @rd_rn_tszimm_shl
 SQXTUNB 01000101 .. 1 . 010 100 . .  @rd_rn_tszimm_shl
 SQXTUNT 01000101 .. 1 . 010 101 . .  @rd_rn_tszimm_shl
+
+## SVE2 floating-point pairwise operations
+
+FADDP   01100100 .. 010 00 0 100 ... . . @rdn_pg_rm
+FMAXNMP 01100100 .. 010 10 0 100 ... . . @rdn_pg_rm
+FMINNMP 01100100 .. 010 10 1 100 ... . . @rdn_pg_rm
+FMAXP   01100100 .. 010 11 0 100 ... . . @rdn_pg_rm
+FMINP   01100100 .. 010 11 1 100 ... . . @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 7dca67785a..11f228144c 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -890,6 +890,52 @@ DO_ZPZZ_PAIR_D(sve2_sminp_zpzz_d, int64_t, DO_MIN)
 #undef DO_ZPZZ_PAIR
 #undef DO_ZPZZ_PAIR_D
 
+#define DO_ZPZZ_PAIR_FP(NAME, TYPE, H, OP)  \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg,   \
+  void *status, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+for (i = 0; i < opr_sz; ) { \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+do {\
+TYPE n0 = *(TYPE *)(vn + H(i)); \
+TYPE m0 = *(TYPE *)(vm + H(i)); \
+TYPE n1 = *(TYPE *)(vn + H(i + sizeof(TYPE)));  \
+TYPE m1 = *(TYPE *)(vm + H(i + sizeof(TYPE)));  \
+if (pg & 1) {   \
+*(TYPE *)(vd + H(i)) = OP(n0, n1, status);  \
+}   \
+i += sizeof(TYPE), pg >>= sizeof(TYPE); \
+

[PATCH v5 26/81] target/arm: Implement SVE2 SHRN, RSHRN

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix typo in gen_shrnb_vec (laurent desnogues)
v3: Replace DO_RSHR with an inline function
---
 target/arm/helper-sve.h|  16 
 target/arm/sve.decode  |   8 ++
 target/arm/sve_helper.c|  54 -
 target/arm/translate-sve.c | 160 +
 4 files changed, 236 insertions(+), 2 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index a033b5f6b2..2b2ebea631 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2444,6 +2444,22 @@ DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve2_shrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_shrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_shrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_shrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_shrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_shrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_rshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_rshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_rshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_rshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_rshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_rshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_h, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_6(sve2_faddp_zpzz_s, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 9c75ac94c0..169486ecb2 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1285,6 +1285,14 @@ UQXTNT  01000101 .. 1 . 010 011 . .  @rd_rn_tszimm_shl
 SQXTUNB 01000101 .. 1 . 010 100 . .  @rd_rn_tszimm_shl
 SQXTUNT 01000101 .. 1 . 010 101 . .  @rd_rn_tszimm_shl
 
+## SVE2 bitwise shift right narrow
+
+# Bit 23 == 0 is handled by esz > 0 in the translator.
+SHRNB   01000101 .. 1 . 00 0100 . .  @rd_rn_tszimm_shr
+SHRNT   01000101 .. 1 . 00 0101 . .  @rd_rn_tszimm_shr
+RSHRNB  01000101 .. 1 . 00 0110 . .  @rd_rn_tszimm_shr
+RSHRNT  01000101 .. 1 . 00 0111 . .  @rd_rn_tszimm_shr
+
 ## SVE2 floating-point pairwise operations
 
 FADDP   01100100 .. 010 00 0 100 ... . . @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 11f228144c..3f864da3ab 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1863,6 +1863,17 @@ void HELPER(NAME)(void *vd, void *vn, void *vg, uint32_t desc)  \
when N is negative, add 2**M-1.  */
 #define DO_ASRD(N, M) ((N + (N < 0 ? ((__typeof(N))1 << M) - 1 : 0)) >> M)
 
+static inline uint64_t do_urshr(uint64_t x, unsigned sh)
+{
+if (likely(sh < 64)) {
+return (x >> sh) + ((x >> (sh - 1)) & 1);
+} else if (sh == 64) {
+return x >> 63;
+} else {
+return 0;
+}
+}
+
 DO_ZPZI(sve_asr_zpzi_b, int8_t, H1, DO_SHR)
 DO_ZPZI(sve_asr_zpzi_h, int16_t, H1_2, DO_SHR)
 DO_ZPZI(sve_asr_zpzi_s, int32_t, H1_4, DO_SHR)
@@ -1883,12 +1894,51 @@ DO_ZPZI(sve_asrd_h, int16_t, H1_2, DO_ASRD)
 DO_ZPZI(sve_asrd_s, int32_t, H1_4, DO_ASRD)
 DO_ZPZI_D(sve_asrd_d, int64_t, DO_ASRD)
 
-#undef DO_SHR
-#undef DO_SHL
 #undef DO_ASRD
 #undef DO_ZPZI
 #undef DO_ZPZI_D
 
+#define DO_SHRNB(NAME, TYPEW, TYPEN, OP) \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
+{\
+intptr_t i, opr_sz = simd_oprsz(desc);   \
+int shift = simd_data(desc); \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {\
+TYPEW nn = *(TYPEW *)(vn + i);   \
+*(TYPEW *)(vd + i) = (TYPEN)OP(nn, shift);   \
+}\
+}
+
+#define DO_SHRNT(NAME, TYPEW, TYPEN, HW, HN, OP)  \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc)  \
+{ \
+intptr_t i, opr_sz = simd_oprsz(desc);\
+int shift = simd_data(desc);  \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) { \
+TYPEW nn = *(TYPEW *)(vn + HW(i));\
+*(TYPEN *)(vd + HN(i + sizeof(TYPEN))) = OP(nn, shift);   \
+} \
+}
+
+DO_SHRNB(sve2_shrnb_h, uint16_t, uint8_t, DO_SHR)
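The rounding performed by `do_urshr` in the hunk above can be exercised standalone; this is the function as introduced by the patch, reproduced for reference:

```c
#include <assert.h>
#include <stdint.h>

/* do_urshr from this patch: unsigned rounding shift right.  The
 * rounding increment is the last bit shifted out, (x >> (sh - 1)) & 1;
 * sh is expected to be >= 1. */
static uint64_t do_urshr(uint64_t x, unsigned sh)
{
    if (sh < 64) {
        return (x >> sh) + ((x >> (sh - 1)) & 1);
    } else if (sh == 64) {
        return x >> 63;     /* only the rounding bit remains */
    } else {
        return 0;
    }
}
```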

[PATCH v5 18/81] target/arm: Implement SVE2 complex integer add

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix subtraction ordering (laurent desnogues).
---
 target/arm/helper-sve.h| 10 +
 target/arm/sve.decode  |  9 
 target/arm/sve_helper.c| 42 ++
 target/arm/translate-sve.c | 31 
 4 files changed, 92 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 4861481fe0..c2155cc544 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2392,3 +2392,13 @@ DEF_HELPER_FLAGS_4(sve2_bgrp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_bgrp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_bgrp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_bgrp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_cadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_cadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_cadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_cadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqcadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqcadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqcadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqcadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 7cb89a0d47..7508b901d0 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1226,3 +1226,12 @@ EORTB   01000101 .. 0 . 10010 1 . .  @rd_rn_rm
 BEXT01000101 .. 0 . 1011 00 . .  @rd_rn_rm
 BDEP01000101 .. 0 . 1011 01 . .  @rd_rn_rm
 BGRP01000101 .. 0 . 1011 10 . .  @rd_rn_rm
+
+#### SVE2 Accumulate
+
+## SVE2 complex integer add
+
+CADD_rot90  01000101 .. 0 0 11011 0 . .  @rdn_rm
+CADD_rot270 01000101 .. 0 0 11011 1 . .  @rdn_rm
+SQCADD_rot90 01000101 .. 0 1 11011 0 . .  @rdn_rm
+SQCADD_rot270   01000101 .. 0 1 11011 1 . .  @rdn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index d692d2fe3d..2e09c3e55b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1314,6 +1314,48 @@ DO_BITPERM(sve2_bgrp_d, uint64_t, bitgroup)
 
 #undef DO_BITPERM
 
+#define DO_CADD(NAME, TYPE, H, ADD_OP, SUB_OP)  \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+int sub_r = simd_data(desc);\
+if (sub_r) {\
+for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) {\
+TYPE acc_r = *(TYPE *)(vn + H(i));  \
+TYPE acc_i = *(TYPE *)(vn + H(i + sizeof(TYPE)));   \
+TYPE el2_r = *(TYPE *)(vm + H(i));  \
+TYPE el2_i = *(TYPE *)(vm + H(i + sizeof(TYPE)));   \
+acc_r = ADD_OP(acc_r, el2_i);   \
+acc_i = SUB_OP(acc_i, el2_r);   \
+*(TYPE *)(vd + H(i)) = acc_r;   \
+*(TYPE *)(vd + H(i + sizeof(TYPE))) = acc_i;\
+}   \
+} else {\
+for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) {\
+TYPE acc_r = *(TYPE *)(vn + H(i));  \
+TYPE acc_i = *(TYPE *)(vn + H(i + sizeof(TYPE)));   \
+TYPE el2_r = *(TYPE *)(vm + H(i));  \
+TYPE el2_i = *(TYPE *)(vm + H(i + sizeof(TYPE)));   \
+acc_r = SUB_OP(acc_r, el2_i);   \
+acc_i = ADD_OP(acc_i, el2_r);   \
+*(TYPE *)(vd + H(i)) = acc_r;   \
+*(TYPE *)(vd + H(i + sizeof(TYPE))) = acc_i;\
+}   \
+}   \
+}
+
+DO_CADD(sve2_cadd_b, int8_t, H1, DO_ADD, DO_SUB)
+DO_CADD(sve2_cadd_h, int16_t, H1_2, DO_ADD, DO_SUB)
+DO_CADD(sve2_cadd_s, int32_t, H1_4, DO_ADD, DO_SUB)
+DO_CADD(sve2_cadd_d, int64_t, , DO_ADD, DO_SUB)
+
+DO_CADD(sve2_sqcadd_b, int8_t, H1, DO_SQADD_B, DO_SQSUB_B)
+DO_CADD(sve2_sqcadd_h, int16_t, H1_2, DO_SQADD_H, DO_SQSUB_H)
+DO_CADD(sve2_sqcadd_s, int32_t, H1_4, DO_SQADD_S, DO_SQSUB_S)
+DO_CADD(sve2_sqcadd_d, int64_t, , do_sqadd_d, do_sqsub_d)
+
+#undef DO_CADD
+
 #define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
 void HELPER(NAME)(void *vd, void *vn, uint32_t desc)   \
 {  
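Per complex pair, the rotate selection in `DO_CADD` above amounts to the following (a minimal scalar sketch; `cadd_pair` is an illustrative name, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* One complex pair of sve2_cadd: add the second operand rotated by
 * 90 degrees ((r, i) * i = (-i, r)) or by 270 degrees
 * ((r, i) * -i = (i, -r)).  sub_r selects the 270-degree rotation,
 * matching simd_data(desc) in the helper. */
static void cadd_pair(int32_t *acc_r, int32_t *acc_i,
                      int32_t el2_r, int32_t el2_i, int sub_r)
{
    if (sub_r) {                /* rot270 */
        *acc_r += el2_i;
        *acc_i -= el2_r;
    } else {                    /* rot90 */
        *acc_r -= el2_i;
        *acc_i += el2_r;
    }
}
```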

[PATCH v5 30/81] target/arm: Implement SVE2 WHILEGT, WHILEGE, WHILEHI, WHILEHS

2021-04-16 Thread Richard Henderson
Rename the existing sve_while (less-than) helper to sve_whilel
to make room for a new sve_whileg helper for greater-than.

Signed-off-by: Richard Henderson 
---
v2: Use a new helper function to implement this.
v4: Update for PREDDESC.
---
 target/arm/helper-sve.h|  3 +-
 target/arm/sve.decode  |  2 +-
 target/arm/sve_helper.c| 38 +-
 target/arm/translate-sve.c | 56 --
 4 files changed, 82 insertions(+), 17 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 1c7fe8e417..5bf9fdc7a3 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -913,7 +913,8 @@ DEF_HELPER_FLAGS_4(sve_brkns, TCG_CALL_NO_RWG, i32, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_3(sve_cntp, TCG_CALL_NO_RWG, i64, ptr, ptr, i32)
 
-DEF_HELPER_FLAGS_3(sve_while, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+DEF_HELPER_FLAGS_3(sve_whilel, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
+DEF_HELPER_FLAGS_3(sve_whileg, TCG_CALL_NO_RWG, i32, ptr, i32, i32)
 
 DEF_HELPER_FLAGS_4(sve_subri_b, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
 DEF_HELPER_FLAGS_4(sve_subri_h, TCG_CALL_NO_RWG, void, ptr, ptr, i64, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 0674464695..ae853d21f2 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -700,7 +700,7 @@ SINCDECP_z  00100101 .. 1010 d:1 u:1 1 00  . @incdec2_pred
 CTERM   00100101 1 sf:1 1 rm:5 001000 rn:5 ne:1 
 
 # SVE integer compare scalar count and limit
-WHILE   00100101 esz:2 1 rm:5 000 sf:1 u:1 1 rn:5 eq:1 rd:4
+WHILE   00100101 esz:2 1 rm:5 000 sf:1 u:1 lt:1 rn:5 eq:1 rd:4
 
 ### SVE Integer Wide Immediate - Unpredicated Group
 
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 02e87c535d..fb38f2c57e 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -3745,7 +3745,7 @@ uint64_t HELPER(sve_cntp)(void *vn, void *vg, uint32_t pred_desc)
 return sum;
 }
 
-uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
+uint32_t HELPER(sve_whilel)(void *vd, uint32_t count, uint32_t pred_desc)
 {
 intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
 intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
@@ -3771,6 +3771,42 @@ uint32_t HELPER(sve_while)(void *vd, uint32_t count, uint32_t pred_desc)
 return predtest_ones(d, oprsz, esz_mask);
 }
 
+uint32_t HELPER(sve_whileg)(void *vd, uint32_t count, uint32_t pred_desc)
+{
+intptr_t oprsz = FIELD_EX32(pred_desc, PREDDESC, OPRSZ);
+intptr_t esz = FIELD_EX32(pred_desc, PREDDESC, ESZ);
+uint64_t esz_mask = pred_esz_masks[esz];
+ARMPredicateReg *d = vd;
+intptr_t i, invcount, oprbits;
+uint64_t bits;
+
+if (count == 0) {
+return do_zero(d, oprsz);
+}
+
+oprbits = oprsz * 8;
+tcg_debug_assert(count <= oprbits);
+
+bits = esz_mask;
+if (oprbits & 63) {
+bits &= MAKE_64BIT_MASK(0, oprbits & 63);
+}
+
+invcount = oprbits - count;
+for (i = (oprsz - 1) / 8; i > invcount / 64; --i) {
+d->p[i] = bits;
+bits = esz_mask;
+}
+
+d->p[i] = bits & MAKE_64BIT_MASK(invcount & 63, 64);
+
+while (--i >= 0) {
+d->p[i] = 0;
+}
+
+return predtest_ones(d, oprsz, esz_mask);
+}
+
 /* Recursive reduction on a function;
  * C.f. the ARM ARM function ReducePredicated.
  *
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 218f1ca5ce..aff85b0220 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -3112,7 +3112,14 @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
 unsigned vsz = vec_full_reg_size(s);
 unsigned desc = 0;
 TCGCond cond;
+uint64_t maxval;
+/* Note that GE/HS has a->eq == 0 and GT/HI has a->eq == 1. */
+bool eq = a->eq == a->lt;
 
+/* The greater-than conditions are all SVE2. */
+if (!a->lt && !dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
 if (!sve_access_check(s)) {
 return true;
 }
@@ -3135,22 +3142,42 @@ static bool trans_WHILE(DisasContext *s, arg_WHILE *a)
  */
 t0 = tcg_temp_new_i64();
 t1 = tcg_temp_new_i64();
-tcg_gen_sub_i64(t0, op1, op0);
+
+if (a->lt) {
+tcg_gen_sub_i64(t0, op1, op0);
+if (a->u) {
+maxval = a->sf ? UINT64_MAX : UINT32_MAX;
+cond = eq ? TCG_COND_LEU : TCG_COND_LTU;
+} else {
+maxval = a->sf ? INT64_MAX : INT32_MAX;
+cond = eq ? TCG_COND_LE : TCG_COND_LT;
+}
+} else {
+tcg_gen_sub_i64(t0, op0, op1);
+if (a->u) {
+maxval = 0;
+cond = eq ? TCG_COND_GEU : TCG_COND_GTU;
+} else {
+maxval = a->sf ? INT64_MIN : INT32_MIN;
+cond = eq ? TCG_COND_GE : TCG_COND_GT;
+}
+}
 
 tmax = tcg_const_i64(vsz >> a->esz);
-if (a->eq) {
+if (eq) {
 /* Equality means one more iteration.  */
  

[PATCH v5 23/81] target/arm: Implement SVE2 integer absolute difference and accumulate

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  6 ++
 target/arm/translate-sve.c | 21 +
 2 files changed, 27 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 695a16551e..32b15e4192 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1266,3 +1266,9 @@ URSRA   01000101 .. 0 . 1110 11 . .  @rd_rn_tszimm_shr
 
 SRI 01000101 .. 0 . 0 0 . .  @rd_rn_tszimm_shr
 SLI 01000101 .. 0 . 0 1 . .  @rd_rn_tszimm_shl
+
+## SVE2 integer absolute difference and accumulate
+
+# TODO: Use @rda and %reg_movprfx here.
+SABA01000101 .. 0 . 1 0 . .  @rd_rn_rm
+UABA01000101 .. 0 . 1 1 . .  @rd_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index d74a15d8b8..ba1953118b 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6438,3 +6438,24 @@ static bool trans_SLI(DisasContext *s, arg_rri_esz *a)
 {
 return do_sve2_fn2i(s, a, gen_gvec_sli);
 }
+
+static bool do_sve2_fn_zzz(DisasContext *s, arg_rrr_esz *a, GVecGen3Fn *fn)
+{
+if (!dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+gen_gvec_fn_zzz(s, fn, a->esz, a->rd, a->rn, a->rm);
+}
+return true;
+}
+
+static bool trans_SABA(DisasContext *s, arg_rrr_esz *a)
+{
+return do_sve2_fn_zzz(s, a, gen_gvec_saba);
+}
+
+static bool trans_UABA(DisasContext *s, arg_rrr_esz *a)
+{
+return do_sve2_fn_zzz(s, a, gen_gvec_uaba);
+}
-- 
2.25.1




[PATCH v5 19/81] target/arm: Implement SVE2 integer absolute difference and accumulate long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix select offsetting and argument order (laurent desnogues).
---
 target/arm/helper-sve.h| 14 ++
 target/arm/sve.decode  | 12 +
 target/arm/sve_helper.c| 23 
 target/arm/translate-sve.c | 55 ++
 4 files changed, 104 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index c2155cc544..229fb396b2 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2402,3 +2402,17 @@ DEF_HELPER_FLAGS_4(sve2_sqcadd_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqcadd_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqcadd_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_sqcadd_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sabal_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sabal_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sabal_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uabal_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uabal_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 7508b901d0..56b7353bfa 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -70,6 +70,7 @@
 _s  rd pg rn s
 _s rd pg rn rm s
 _esz   rd pg rn rm esz
+&rrrr_esz   rd ra rn rm esz
 _esz  rd pg rn rm ra esz
 _esz   rd pg rn imm esz
   rd esz pat s
@@ -119,6 +120,10 @@
 @rdn_i8s esz:2 .. ... imm:s8 rd:5 \
 _esz rn=%reg_movprfx
 
+# Four operand, vector element size
+@rda_rn_rm   esz:2 . rm:5 ... ... rn:5 rd:5 \
+&rrrr_esz ra=%reg_movprfx
+
 # Three operand with "memory" size, aka immediate left shift
 @rd_rn_msz_rm    ... rm:5  imm:2 rn:5 rd:5  
 
@@ -1235,3 +1240,10 @@ CADD_rot90  01000101 .. 0 0 11011 0 . .  @rdn_rm
 CADD_rot270 01000101 .. 0 0 11011 1 . .  @rdn_rm
 SQCADD_rot90 01000101 .. 0 1 11011 0 . .  @rdn_rm
 SQCADD_rot270   01000101 .. 0 1 11011 1 . .  @rdn_rm
+
+## SVE2 integer absolute difference and accumulate long
+
+SABALB  01000101 .. 0 . 1100 00 . .  @rda_rn_rm
+SABALT  01000101 .. 0 . 1100 01 . .  @rda_rn_rm
+UABALB  01000101 .. 0 . 1100 10 . .  @rda_rn_rm
+UABALT  01000101 .. 0 . 1100 11 . .  @rda_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 2e09c3e55b..4871e90d9b 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1241,6 +1241,29 @@ DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
 
 #undef DO_ZZZ_NTB
 
+#define DO_ZZZW_ACC(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *va, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+intptr_t sel1 = simd_data(desc) * sizeof(TYPEN);\
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {   \
+TYPEW nn = *(TYPEN *)(vn + HN(i + sel1));   \
+TYPEW mm = *(TYPEN *)(vm + HN(i + sel1));   \
+TYPEW aa = *(TYPEW *)(va + HW(i));  \
+*(TYPEW *)(vd + HW(i)) = OP(nn, mm) + aa;   \
+}   \
+}
+
+DO_ZZZW_ACC(sve2_sabal_h, int16_t, int8_t, H1_2, H1, DO_ABD)
+DO_ZZZW_ACC(sve2_sabal_s, int32_t, int16_t, H1_4, H1_2, DO_ABD)
+DO_ZZZW_ACC(sve2_sabal_d, int64_t, int32_t, , H1_4, DO_ABD)
+
+DO_ZZZW_ACC(sve2_uabal_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
+DO_ZZZW_ACC(sve2_uabal_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
+DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
+
+#undef DO_ZZZW_ACC
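One lane of `DO_ZZZW_ACC` with `DO_ABD` widens both narrow inputs, takes the absolute difference, and accumulates into the wide lane. A scalar sketch of the signed byte-to-halfword case (`sabal_lane` is an illustrative name, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of one SABALB/SABALT lane: sign-extend the narrow
 * inputs to the wide type, take the absolute difference, accumulate. */
static int16_t sabal_lane(int16_t acc, int8_t n, int8_t m)
{
    int16_t nn = n, mm = m;     /* sign-extend to the wide type */
    int16_t abd = nn > mm ? (int16_t)(nn - mm) : (int16_t)(mm - nn);
    return (int16_t)(acc + abd);
}
```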
+
 #define DO_BITPERM(NAME, TYPE, OP) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index c594c59954..6ac50fd61f 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -163,6 +163,18 @@ static void gen_gvec_ool_zzz(DisasContext *s, gen_helper_gvec_3 *fn,
vsz, vsz, data, fn);
 }
 
+/* Invoke an out-of-line helper on 4 Zregs. */
+static void gen_gvec_ool_zzzz(DisasContext *s, gen_helper_gvec_4 *fn,
+  int rd, int rn, int rm, int ra, int data)
+{
+unsigned vsz = vec_full_reg_size(s);
+

[PATCH v5 14/81] target/arm: Implement PMULLB and PMULLT

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   | 10 ++
 target/arm/helper-sve.h|  1 +
 target/arm/sve.decode  |  2 ++
 target/arm/translate-sve.c | 22 ++
 target/arm/vec_helper.c| 24 
 5 files changed, 59 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index a6e1fa6333..902579d24b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4231,6 +4231,16 @@ static inline bool isar_feature_aa64_sve2(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, SVEVER) != 0;
 }
 
+static inline bool isar_feature_aa64_sve2_aes(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) != 0;
+}
+
+static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ad8121eec6..bf3e533eb4 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2363,3 +2363,4 @@ DEF_HELPER_FLAGS_4(sve2_umull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_pmull_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index d9a72b7661..016c15ebb6 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1201,6 +1201,8 @@ USUBWT  01000101 .. 0 . 010 111 . .  @rd_rn_rm
 
 SQDMULLB_zzz 01000101 .. 0 . 011 000 . .  @rd_rn_rm
 SQDMULLT_zzz 01000101 .. 0 . 011 001 . .  @rd_rn_rm
+PMULLB  01000101 .. 0 . 011 010 . .  @rd_rn_rm
+PMULLT  01000101 .. 0 . 011 011 . .  @rd_rn_rm
 SMULLB_zzz  01000101 .. 0 . 011 100 . .  @rd_rn_rm
 SMULLT_zzz  01000101 .. 0 . 011 101 . .  @rd_rn_rm
 UMULLB_zzz  01000101 .. 0 . 011 110 . .  @rd_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 19a1f289d8..fbdccc1c68 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6030,6 +6030,28 @@ DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
 DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
 DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
 
+static bool do_trans_pmull(DisasContext *s, arg_rrr_esz *a, bool sel)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+gen_helper_gvec_pmull_q, gen_helper_sve2_pmull_h,
+NULL,gen_helper_sve2_pmull_d,
+};
+if (a->esz == 0 && !dc_isar_feature(aa64_sve2_pmull128, s)) {
+return false;
+}
+return do_sve2_zzw_ool(s, a, fns[a->esz], sel);
+}
+
+static bool trans_PMULLB(DisasContext *s, arg_rrr_esz *a)
+{
+return do_trans_pmull(s, a, false);
+}
+
+static bool trans_PMULLT(DisasContext *s, arg_rrr_esz *a)
+{
+return do_trans_pmull(s, a, true);
+}
+
 #define DO_SVE2_ZZZ_WTB(NAME, name, SEL2) \
 static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a)   \
 {   \
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 40b92100bf..b0ce597060 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -1750,6 +1750,30 @@ void HELPER(sve2_pmull_h)(void *vd, void *vn, void *vm, uint32_t desc)
 d[i] = pmull_h(nn, mm);
 }
 }
+
+static uint64_t pmull_d(uint64_t op1, uint64_t op2)
+{
+uint64_t result = 0;
+int i;
+
+for (i = 0; i < 32; ++i) {
+uint64_t mask = -((op1 >> i) & 1);
+result ^= (op2 << i) & mask;
+}
+return result;
+}
+
+void HELPER(sve2_pmull_d)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t sel = H4(simd_data(desc));
+intptr_t i, opr_sz = simd_oprsz(desc);
+uint32_t *n = vn, *m = vm;
+uint64_t *d = vd;
+
+for (i = 0; i < opr_sz / 8; ++i) {
+d[i] = pmull_d(n[2 * i + sel], m[2 * i + sel]);
+}
+}
 #endif
 
 #define DO_CMP0(NAME, TYPE, OP) \
-- 
2.25.1
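The `pmull_d` routine above is a 32x32-to-64-bit carry-less (polynomial over GF(2)) multiplication; it can be exercised standalone, reproduced here for reference:

```c
#include <assert.h>
#include <stdint.h>

/* pmull_d from this patch: for each set bit i of op1, XOR in op2
 * shifted left by i -- addition without carries, i.e. polynomial
 * multiplication over GF(2). */
static uint64_t pmull_d(uint64_t op1, uint64_t op2)
{
    uint64_t result = 0;
    int i;

    for (i = 0; i < 32; ++i) {
        uint64_t mask = -((op1 >> i) & 1);   /* all-ones if bit i of op1 set */
        result ^= (op2 << i) & mask;
    }
    return result;
}
```

For example, (x + 1)^2 over GF(2) is x^2 + 1, so 3 * 3 gives 5, not 9.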




[PATCH v5 29/81] target/arm: Implement SVE2 SQSHRN, SQRSHRN

2021-04-16 Thread Richard Henderson
This completes the section "SVE2 bitwise shift right narrow".

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  16 ++
 target/arm/sve.decode  |   4 ++
 target/arm/sve_helper.c|  24 +
 target/arm/translate-sve.c | 105 +
 4 files changed, 149 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index ba6a24fc8b..1c7fe8e417 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2476,6 +2476,22 @@ DEF_HELPER_FLAGS_3(sve2_sqrshrunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqrshrunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_sqrshrunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqrshrnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_3(sve2_uqshrnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqshrnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_uqshrnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 13b5da0856..0674464695 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1296,6 +1296,10 @@ SHRNB           01000101 .. 1 ..... 00 0100 ..... .....  @rd_rn_tszimm_shr
 SHRNT           01000101 .. 1 ..... 00 0101 ..... .....  @rd_rn_tszimm_shr
 RSHRNB          01000101 .. 1 ..... 00 0110 ..... .....  @rd_rn_tszimm_shr
 RSHRNT          01000101 .. 1 ..... 00 0111 ..... .....  @rd_rn_tszimm_shr
+SQSHRNB         01000101 .. 1 ..... 00 1000 ..... .....  @rd_rn_tszimm_shr
+SQSHRNT         01000101 .. 1 ..... 00 1001 ..... .....  @rd_rn_tszimm_shr
+SQRSHRNB        01000101 .. 1 ..... 00 1010 ..... .....  @rd_rn_tszimm_shr
+SQRSHRNT        01000101 .. 1 ..... 00 1011 ..... .....  @rd_rn_tszimm_shr
 UQSHRNB         01000101 .. 1 ..... 00 1100 ..... .....  @rd_rn_tszimm_shr
 UQSHRNT         01000101 .. 1 ..... 00 1101 ..... .....  @rd_rn_tszimm_shr
 UQRSHRNB        01000101 .. 1 ..... 00 1110 ..... .....  @rd_rn_tszimm_shr
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 175f3de08f..02e87c535d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1971,6 +1971,30 @@ DO_SHRNT(sve2_sqrshrunt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRUN_H)
 DO_SHRNT(sve2_sqrshrunt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRUN_S)
 DO_SHRNT(sve2_sqrshrunt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRUN_D)
 
+#define DO_SQSHRN_H(x, sh) do_sat_bhs(x >> sh, INT8_MIN, INT8_MAX)
+#define DO_SQSHRN_S(x, sh) do_sat_bhs(x >> sh, INT16_MIN, INT16_MAX)
+#define DO_SQSHRN_D(x, sh) do_sat_bhs(x >> sh, INT32_MIN, INT32_MAX)
+
+DO_SHRNB(sve2_sqshrnb_h, int16_t, uint8_t, DO_SQSHRN_H)
+DO_SHRNB(sve2_sqshrnb_s, int32_t, uint16_t, DO_SQSHRN_S)
+DO_SHRNB(sve2_sqshrnb_d, int64_t, uint32_t, DO_SQSHRN_D)
+
+DO_SHRNT(sve2_sqshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQSHRN_H)
+DO_SHRNT(sve2_sqshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQSHRN_S)
+DO_SHRNT(sve2_sqshrnt_d, int64_t, uint32_t, , H1_4, DO_SQSHRN_D)
+
+#define DO_SQRSHRN_H(x, sh) do_sat_bhs(do_srshr(x, sh), INT8_MIN, INT8_MAX)
+#define DO_SQRSHRN_S(x, sh) do_sat_bhs(do_srshr(x, sh), INT16_MIN, INT16_MAX)
+#define DO_SQRSHRN_D(x, sh) do_sat_bhs(do_srshr(x, sh), INT32_MIN, INT32_MAX)
+
+DO_SHRNB(sve2_sqrshrnb_h, int16_t, uint8_t, DO_SQRSHRN_H)
+DO_SHRNB(sve2_sqrshrnb_s, int32_t, uint16_t, DO_SQRSHRN_S)
+DO_SHRNB(sve2_sqrshrnb_d, int64_t, uint32_t, DO_SQRSHRN_D)
+
+DO_SHRNT(sve2_sqrshrnt_h, int16_t, uint8_t, H1_2, H1, DO_SQRSHRN_H)
+DO_SHRNT(sve2_sqrshrnt_s, int32_t, uint16_t, H1_4, H1_2, DO_SQRSHRN_S)
+DO_SHRNT(sve2_sqrshrnt_d, int64_t, uint32_t, , H1_4, DO_SQRSHRN_D)
+
 #define DO_UQSHRN_H(x, sh) MIN(x >> sh, UINT8_MAX)
 #define DO_UQSHRN_S(x, sh) MIN(x >> sh, UINT16_MAX)
 #define DO_UQSHRN_D(x, sh) MIN(x >> sh, UINT32_MAX)
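As a scalar cross-check of the `DO_SQSHRN_*` step above: shift right first, then clamp into the narrow signed range. `sat_bhs` below mirrors `do_sat_bhs` from earlier in this series; this is a sketch of one element, not the helper itself:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a wide intermediate into [min, max] (cf. do_sat_bhs). */
static int64_t sat_bhs(int64_t val, int64_t min, int64_t max)
{
    return val >= max ? max : val <= min ? min : val;
}

/* One SQSHRN .h element: arithmetic shift of the wide int16 input,
 * then signed saturation into the int8 result range. */
static int8_t sqshrn_h(int16_t x, int sh)
{
    return (int8_t)sat_bhs(x >> sh, INT8_MIN, INT8_MAX);
}
```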
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 12733d9b4f..218f1ca5ce 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6956,6 +6956,111 @@ static bool trans_SQRSHRUNT(DisasContext *s, arg_rri_esz *a)
 return do_sve2_shr_narrow(s, a, ops);
 }
 
+static void 

[PATCH v5 20/81] target/arm: Implement SVE2 integer add/subtract long with carry

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix sel indexing and argument order (laurent desnogues).
---
 target/arm/helper-sve.h|  3 +++
 target/arm/sve.decode  |  6 ++
 target/arm/sve_helper.c| 34 ++
 target/arm/translate-sve.c | 23 +++
 4 files changed, 66 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 229fb396b2..4a62012850 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2416,3 +2416,6 @@ DEF_HELPER_FLAGS_5(sve2_uabal_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_adcl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_adcl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 56b7353bfa..79046d81e3 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1247,3 +1247,9 @@ SABALB          01000101 .. 0 ..... 1100 00 ..... .....  @rda_rn_rm
 SABALT          01000101 .. 0 ..... 1100 01 ..... .....  @rda_rn_rm
 UABALB          01000101 .. 0 ..... 1100 10 ..... .....  @rda_rn_rm
 UABALT          01000101 .. 0 ..... 1100 11 ..... .....  @rda_rn_rm
+
+## SVE2 integer add/subtract long with carry
+
+# ADC and SBC decoded via size in helper dispatch.
+ADCLB           01000101 .. 0 ..... 11010 0 ..... .....  @rda_rn_rm
+ADCLT           01000101 .. 0 ..... 11010 1 ..... .....  @rda_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4871e90d9b..0049ad861f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1264,6 +1264,40 @@ DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
 
 #undef DO_ZZZW_ACC
 
+void HELPER(sve2_adcl_s)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int sel = H4(extract32(desc, SIMD_DATA_SHIFT, 1));
+uint32_t inv = -extract32(desc, SIMD_DATA_SHIFT + 1, 1);
+uint32_t *a = va, *n = vn;
+uint64_t *d = vd, *m = vm;
+
+for (i = 0; i < opr_sz / 8; ++i) {
+uint32_t e1 = a[2 * i + H4(0)];
+uint32_t e2 = n[2 * i + sel] ^ inv;
+uint64_t c = extract64(m[i], 32, 1);
+/* Compute and store the entire 33-bit result at once. */
+d[i] = c + e1 + e2;
+}
+}
+
+void HELPER(sve2_adcl_d)(void *vd, void *vn, void *vm, void *va, uint32_t desc)
+{
+intptr_t i, opr_sz = simd_oprsz(desc);
+int sel = extract32(desc, SIMD_DATA_SHIFT, 1);
+uint64_t inv = -(uint64_t)extract32(desc, SIMD_DATA_SHIFT + 1, 1);
+uint64_t *d = vd, *a = va, *n = vn, *m = vm;
+
+for (i = 0; i < opr_sz / 8; i += 2) {
+Int128 e1 = int128_make64(a[i]);
+Int128 e2 = int128_make64(n[i + sel] ^ inv);
+Int128 c = int128_make64(m[i + 1] & 1);
+Int128 r = int128_add(int128_add(e1, e2), c);
+d[i + 0] = int128_getlo(r);
+d[i + 1] = int128_gethi(r);
+}
+}
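The 32-bit variant above leans on a width trick: each sum is formed in a 64-bit lane, so the carry-out naturally lands in bit 32 — exactly the bit the next instruction in an ADCLB/ADCLT chain extracts as its carry-in. A scalar sketch of one element (`adcl_step` is an illustrative name):

```c
#include <assert.h>
#include <stdint.h>

/* One 32-bit ADCLB-style element: read the carry-in from bit 32 of the
 * incoming 64-bit lane, then store the full 33-bit sum so the new
 * carry-out sits in bit 32 of the result. */
static uint64_t adcl_step(uint32_t e1, uint32_t e2, uint64_t carry_lane)
{
    uint64_t c = (carry_lane >> 32) & 1;
    return c + (uint64_t)e1 + (uint64_t)e2;
}
```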
+
 #define DO_BITPERM(NAME, TYPE, OP) \
 void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 6ac50fd61f..6f5e39b741 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6371,3 +6371,26 @@ static bool trans_UABALT(DisasContext *s, arg_rrrr_esz *a)
 {
 return do_abal(s, a, true, true);
 }
+
+static bool do_adcl(DisasContext *s, arg_rrrr_esz *a, bool sel)
+{
+static gen_helper_gvec_4 * const fns[2] = {
+gen_helper_sve2_adcl_s,
+gen_helper_sve2_adcl_d,
+};
+/*
+ * Note that in this case the ESZ field encodes both size and sign.
+ * Split out 'subtract' into bit 1 of the data field for the helper.
+ */
+return do_sve2_zzzz_ool(s, a, fns[a->esz & 1], (a->esz & 2) | sel);
+}
+
+static bool trans_ADCLB(DisasContext *s, arg_rrrr_esz *a)
+{
+return do_adcl(s, a, false);
+}
+
+static bool trans_ADCLT(DisasContext *s, arg_rrrr_esz *a)
+{
+return do_adcl(s, a, true);
+}




[PATCH v5 16/81] target/arm: Implement SVE2 bitwise exclusive-or interleaved

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  5 +
 target/arm/sve.decode  |  5 +
 target/arm/sve_helper.c| 20 
 target/arm/translate-sve.c | 19 +++
 4 files changed, 49 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 740939e7a8..f65818da05 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2372,3 +2372,8 @@ DEF_HELPER_FLAGS_3(sve2_sshll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_ushll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_ushll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(sve2_ushll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_eoril_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_eoril_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_eoril_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_eoril_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index a3191eba7b..0922a44829 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1215,3 +1215,8 @@ SSHLLB          01000101 .. 0 ..... 1010 00 ..... .....  @rd_rn_tszimm_shl
 SSHLLT          01000101 .. 0 ..... 1010 01 ..... .....  @rd_rn_tszimm_shl
 USHLLB          01000101 .. 0 ..... 1010 10 ..... .....  @rd_rn_tszimm_shl
 USHLLT          01000101 .. 0 ..... 1010 11 ..... .....  @rd_rn_tszimm_shl
+
+## SVE2 bitwise exclusive-or interleaved
+
+EORBT           01000101 .. 0 ..... 10010 0 ..... .....  @rd_rn_rm
+EORTB           01000101 .. 0 ..... 10010 1 ..... .....  @rd_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 73aa670a77..1de0a9bdc3 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1221,6 +1221,26 @@ DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
 
 #undef DO_ZZZ_WTB
 
+#define DO_ZZZ_NTB(NAME, TYPE, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+intptr_t sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPE); \
+intptr_t sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPE); \
+for (i = 0; i < opr_sz; i += 2 * sizeof(TYPE)) {\
+TYPE nn = *(TYPE *)(vn + H(i + sel1));  \
+TYPE mm = *(TYPE *)(vm + H(i + sel2));  \
+*(TYPE *)(vd + H(i + sel1)) = OP(nn, mm);   \
+}   \
+}
+
+DO_ZZZ_NTB(sve2_eoril_b, uint8_t, H1, DO_EOR)
+DO_ZZZ_NTB(sve2_eoril_h, uint16_t, H1_2, DO_EOR)
+DO_ZZZ_NTB(sve2_eoril_s, uint32_t, H1_4, DO_EOR)
+DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
+
+#undef DO_ZZZ_NTB
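Concretely, the `sel1`/`sel2` offsets pick the even ("bottom") or odd ("top") element of each adjacent pair. For EORBT on bytes (`sel1 = 0`, `sel2 = 1`, per the translator below) each step reduces to the sketch that follows; note that only the selected destination lane is written:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* EORBT-style pass over byte elements: each even lane of d is set to
 * n.even ^ m.odd; odd lanes of d are left untouched, mirroring the
 * DO_ZZZ_NTB expander writing only the sel1-selected lane. */
static void eorbt_bytes(uint8_t *d, const uint8_t *n, const uint8_t *m,
                        size_t len)
{
    for (size_t i = 0; i < len; i += 2) {
        d[i] = n[i] ^ m[i + 1];
    }
}
```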
+
 #define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
 void HELPER(NAME)(void *vd, void *vn, uint32_t desc)   \
 {  \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index afd208212b..509b3bc68c 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6030,6 +6030,25 @@ DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
 DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
 DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
 
+static bool do_eor_tb(DisasContext *s, arg_rrr_esz *a, bool sel1)
+{
+static gen_helper_gvec_3 * const fns[4] = {
+gen_helper_sve2_eoril_b, gen_helper_sve2_eoril_h,
+gen_helper_sve2_eoril_s, gen_helper_sve2_eoril_d,
+};
+return do_sve2_zzw_ool(s, a, fns[a->esz], (!sel1 << 1) | sel1);
+}
+
+static bool trans_EORBT(DisasContext *s, arg_rrr_esz *a)
+{
+return do_eor_tb(s, a, false);
+}
+
+static bool trans_EORTB(DisasContext *s, arg_rrr_esz *a)
+{
+return do_eor_tb(s, a, true);
+}
+
 static bool do_trans_pmull(DisasContext *s, arg_rrr_esz *a, bool sel)
 {
 static gen_helper_gvec_3 * const fns[4] = {




[PATCH v5 17/81] target/arm: Implement SVE2 bitwise permute

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/cpu.h   |  5 +++
 target/arm/helper-sve.h| 15 
 target/arm/sve.decode  |  6 
 target/arm/sve_helper.c| 73 ++
 target/arm/translate-sve.c | 36 +++
 5 files changed, 135 insertions(+)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 902579d24b..ae787fac8a 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -4241,6 +4241,11 @@ static inline bool isar_feature_aa64_sve2_pmull128(const ARMISARegisters *id)
 return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, AES) >= 2;
 }
 
+static inline bool isar_feature_aa64_sve2_bitperm(const ARMISARegisters *id)
+{
+return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BITPERM) != 0;
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index f65818da05..4861481fe0 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2377,3 +2377,18 @@ DEF_HELPER_FLAGS_4(sve2_eoril_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_eoril_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_eoril_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_eoril_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_bext_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bext_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bext_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bext_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_bdep_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bdep_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bdep_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bdep_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_bgrp_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bgrp_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bgrp_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_bgrp_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 0922a44829..7cb89a0d47 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1220,3 +1220,9 @@ USHLLT          01000101 .. 0 ..... 1010 11 ..... .....  @rd_rn_tszimm_shl
 
 EORBT           01000101 .. 0 ..... 10010 0 ..... .....  @rd_rn_rm
 EORTB           01000101 .. 0 ..... 10010 1 ..... .....  @rd_rn_rm
+
+## SVE2 bitwise permute
+
+BEXT            01000101 .. 0 ..... 1011 00 ..... .....  @rd_rn_rm
+BDEP            01000101 .. 0 ..... 1011 01 ..... .....  @rd_rn_rm
+BGRP            01000101 .. 0 ..... 1011 10 ..... .....  @rd_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 1de0a9bdc3..d692d2fe3d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1241,6 +1241,79 @@ DO_ZZZ_NTB(sve2_eoril_d, uint64_t, , DO_EOR)
 
 #undef DO_ZZZ_NTB
 
+#define DO_BITPERM(NAME, TYPE, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{  \
+intptr_t i, opr_sz = simd_oprsz(desc); \
+for (i = 0; i < opr_sz; i += sizeof(TYPE)) {   \
+TYPE nn = *(TYPE *)(vn + i);   \
+TYPE mm = *(TYPE *)(vm + i);   \
+*(TYPE *)(vd + i) = OP(nn, mm, sizeof(TYPE) * 8);  \
+}  \
+}
+
+static uint64_t bitextract(uint64_t data, uint64_t mask, int n)
+{
+uint64_t res = 0;
+int db, rb = 0;
+
+for (db = 0; db < n; ++db) {
+if ((mask >> db) & 1) {
+res |= ((data >> db) & 1) << rb;
+++rb;
+}
+}
+return res;
+}
+
+DO_BITPERM(sve2_bext_b, uint8_t, bitextract)
+DO_BITPERM(sve2_bext_h, uint16_t, bitextract)
+DO_BITPERM(sve2_bext_s, uint32_t, bitextract)
+DO_BITPERM(sve2_bext_d, uint64_t, bitextract)
+
+static uint64_t bitdeposit(uint64_t data, uint64_t mask, int n)
+{
+uint64_t res = 0;
+int rb, db = 0;
+
+for (rb = 0; rb < n; ++rb) {
+if ((mask >> rb) & 1) {
+res |= ((data >> db) & 1) << rb;
+++db;
+}
+}
+return res;
+}
+
+DO_BITPERM(sve2_bdep_b, uint8_t, bitdeposit)
+DO_BITPERM(sve2_bdep_h, uint16_t, bitdeposit)
+DO_BITPERM(sve2_bdep_s, uint32_t, bitdeposit)
+DO_BITPERM(sve2_bdep_d, uint64_t, bitdeposit)
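`bitextract` and `bitdeposit` are the classic bit gather/scatter primitives (compare x86 PEXT/PDEP). A self-contained copy with a couple of sanity checks makes the loop direction easy to see — `db` walks the data for extract, but walks the result for deposit:

```c
#include <assert.h>
#include <stdint.h>

/* BEXT: gather the data bits selected by mask into contiguous low bits. */
static uint64_t bext(uint64_t data, uint64_t mask, int n)
{
    uint64_t res = 0;
    int rb = 0;
    for (int db = 0; db < n; ++db) {
        if ((mask >> db) & 1) {
            res |= ((data >> db) & 1) << rb++;
        }
    }
    return res;
}

/* BDEP: scatter contiguous low data bits out to the mask positions. */
static uint64_t bdep(uint64_t data, uint64_t mask, int n)
{
    uint64_t res = 0;
    int db = 0;
    for (int rb = 0; rb < n; ++rb) {
        if ((mask >> rb) & 1) {
            res |= ((data >> db++) & 1) << rb;
        }
    }
    return res;
}
```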
+
+static uint64_t bitgroup(uint64_t data, uint64_t mask, int n)
+{
+uint64_t resm = 0, resu = 0;
+int db, rbm = 0, rbu = 0;
+
+for (db = 0; db < n; ++db) {
+uint64_t val = (data >> db) & 1;
+if ((mask >> db) & 1) {
+resm |= val << rbm++;
+} else {
+resu |= val << rbu++;
+}
+

[PATCH v5 24/81] target/arm: Implement SVE2 saturating extract narrow

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  24 
 target/arm/sve.decode  |  12 ++
 target/arm/sve_helper.c|  56 +
 target/arm/translate-sve.c | 238 +
 4 files changed, 330 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 4a62012850..b302203ce8 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2419,3 +2419,27 @@ DEF_HELPER_FLAGS_5(sve2_uabal_d, TCG_CALL_NO_RWG,
 
 DEF_HELPER_FLAGS_5(sve2_adcl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve2_adcl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqxtnb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtunb_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_uqxtnt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sqxtunt_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 32b15e4192..19866ec4c6 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1272,3 +1272,15 @@ SLI             01000101 .. 0 ..... 11110 1 ..... .....  @rd_rn_tszimm_shl
 # TODO: Use @rda and %reg_movprfx here.
 SABA            01000101 .. 0 ..... 11111 0 ..... .....  @rd_rn_rm
 UABA            01000101 .. 0 ..... 11111 1 ..... .....  @rd_rn_rm
+
+#### SVE2 Narrowing
+
+## SVE2 saturating extract narrow
+
+# Bits 23, 18-16 are zero, limited in the translator via esz < 3 & imm == 0.
+SQXTNB          01000101 .. 1 ..... 010 000 ..... .....  @rd_rn_tszimm_shl
+SQXTNT          01000101 .. 1 ..... 010 001 ..... .....  @rd_rn_tszimm_shl
+UQXTNB          01000101 .. 1 ..... 010 010 ..... .....  @rd_rn_tszimm_shl
+UQXTNT          01000101 .. 1 ..... 010 011 ..... .....  @rd_rn_tszimm_shl
+SQXTUNB         01000101 .. 1 ..... 010 100 ..... .....  @rd_rn_tszimm_shl
+SQXTUNT         01000101 .. 1 ..... 010 101 ..... .....  @rd_rn_tszimm_shl
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 0049ad861f..7dca67785a 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1264,6 +1264,62 @@ DO_ZZZW_ACC(sve2_uabal_d, uint64_t, uint32_t, , H1_4, DO_ABD)
 
 #undef DO_ZZZW_ACC
 
+#define DO_XTNB(NAME, TYPE, OP) \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc) \
+{\
+intptr_t i, opr_sz = simd_oprsz(desc);   \
+for (i = 0; i < opr_sz; i += sizeof(TYPE)) { \
+TYPE nn = *(TYPE *)(vn + i); \
+nn = OP(nn) & MAKE_64BIT_MASK(0, sizeof(TYPE) * 4);  \
+*(TYPE *)(vd + i) = nn;  \
+}\
+}
+
+#define DO_XTNT(NAME, TYPE, TYPEN, H, OP)   \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc)\
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc), odd = H(sizeof(TYPEN));  \
+for (i = 0; i < opr_sz; i += sizeof(TYPE)) {\
+TYPE nn = *(TYPE *)(vn + i);\
+*(TYPEN *)(vd + i + odd) = OP(nn);  \
+}   \
+}
+
+#define DO_SQXTN_H(n)  do_sat_bhs(n, INT8_MIN, INT8_MAX)
+#define DO_SQXTN_S(n)  do_sat_bhs(n, INT16_MIN, INT16_MAX)
+#define DO_SQXTN_D(n)  do_sat_bhs(n, INT32_MIN, INT32_MAX)
+
+DO_XTNB(sve2_sqxtnb_h, int16_t, DO_SQXTN_H)
+DO_XTNB(sve2_sqxtnb_s, int32_t, DO_SQXTN_S)
+DO_XTNB(sve2_sqxtnb_d, int64_t, DO_SQXTN_D)
+
+DO_XTNT(sve2_sqxtnt_h, int16_t, int8_t, H1, DO_SQXTN_H)
+DO_XTNT(sve2_sqxtnt_s, int32_t, int16_t, H1_2, DO_SQXTN_S)
+DO_XTNT(sve2_sqxtnt_d, int64_t, int32_t, H1_4, DO_SQXTN_D)
+
+#define DO_UQXTN_H(n)  do_sat_bhs(n, 0, UINT8_MAX)

[PATCH v5 09/81] target/arm: Implement SVE2 saturating add/subtract (predicated)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|  54 +++
 target/arm/sve.decode  |  11 +++
 target/arm/sve_helper.c| 194 ++---
 target/arm/translate-sve.c |   7 ++
 4 files changed, 210 insertions(+), 56 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 09bc067dd4..37461c9927 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -371,6 +371,60 @@ DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqsub_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqsub_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_suqadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_usqadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 61a3321325..cd4f73265f 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1148,3 +1148,14 @@ SMAXP           01000100 .. 010 100 101 ... ..... .....  @rdn_pg_rm
 UMAXP           01000100 .. 010 101 101 ... ..... .....  @rdn_pg_rm
 SMINP           01000100 .. 010 110 101 ... ..... .....  @rdn_pg_rm
 UMINP           01000100 .. 010 111 101 ... ..... .....  @rdn_pg_rm
+
+### SVE2 saturating add/subtract (predicated)
+
+SQADD_zpzz      01000100 .. 011 000 100 ... ..... .....  @rdn_pg_rm
+UQADD_zpzz      01000100 .. 011 001 100 ... ..... .....  @rdn_pg_rm
+SQSUB_zpzz      01000100 .. 011 010 100 ... ..... .....  @rdn_pg_rm
+UQSUB_zpzz      01000100 .. 011 011 100 ... ..... .....  @rdn_pg_rm
+SUQADD          01000100 .. 011 100 100 ... ..... .....  @rdn_pg_rm
+USQADD          01000100 .. 011 101 100 ... ..... .....  @rdn_pg_rm
+SQSUB_zpzz      01000100 .. 011 110 100 ... ..... .....  @rdm_pg_rn # SQSUBR
+UQSUB_zpzz      01000100 .. 011 111 100 ... ..... .....  @rdm_pg_rn # UQSUBR
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 7cc559d950..12a2078edb 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -678,6 +678,135 @@ DO_ZPZZ(sve2_uhsub_zpzz_h, uint16_t, H1_2, DO_HSUB_BHS)
 DO_ZPZZ(sve2_uhsub_zpzz_s, uint32_t, H1_4, DO_HSUB_BHS)
 DO_ZPZZ_D(sve2_uhsub_zpzz_d, uint64_t, DO_HSUB_D)
 
+static inline int32_t do_sat_bhs(int64_t val, int64_t min, int64_t max)
+{
+return val >= max ? max : val <= min ? min : val;
+}
+
+#define DO_SQADD_B(n, m) do_sat_bhs((int64_t)n + m, INT8_MIN, 
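(The message is truncated here.) The pattern the `DO_SQADD_B` line begins is: widen both operands to 64 bits so the true sum cannot wrap, then clamp with `do_sat_bhs`. As a scalar sketch of the byte case:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of do_sat_bhs(): clamp a 64-bit intermediate into [min, max]. */
static int64_t sat_bhs(int64_t val, int64_t min, int64_t max)
{
    return val >= max ? max : val <= min ? min : val;
}

/* Byte-sized signed saturating add, the DO_SQADD_B shape: the widening
 * cast guarantees the exact sum is representable before clamping. */
static int8_t sqadd_b(int8_t n, int8_t m)
{
    return (int8_t)sat_bhs((int64_t)n + m, INT8_MIN, INT8_MAX);
}
```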

[PATCH v5 11/81] target/arm: Implement SVE2 integer add/subtract interleaved long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  | 6 ++
 target/arm/translate-sve.c | 4 
 2 files changed, 10 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index fbfd57b23a..12be0584a8 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1178,3 +1178,9 @@ SABDLB          01000101 .. 0 ..... 00 1100 ..... .....  @rd_rn_rm
 SABDLT          01000101 .. 0 ..... 00 1101 ..... .....  @rd_rn_rm
 UABDLB          01000101 .. 0 ..... 00 1110 ..... .....  @rd_rn_rm
 UABDLT          01000101 .. 0 ..... 00 1111 ..... .....  @rd_rn_rm
+
+## SVE2 integer add/subtract interleaved long
+
+SADDLBT         01000101 .. 0 ..... 1000 00 ..... .....  @rd_rn_rm
+SSUBLBT         01000101 .. 0 ..... 1000 10 ..... .....  @rd_rn_rm
+SSUBLTB         01000101 .. 0 ..... 1000 11 ..... .....  @rd_rn_rm
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 22983b3b85..ae8323adb7 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6016,3 +6016,7 @@ DO_SVE2_ZZZ_TB(SABDLT, sabdl, true, true)
 DO_SVE2_ZZZ_TB(UADDLT, uaddl, true, true)
 DO_SVE2_ZZZ_TB(USUBLT, usubl, true, true)
 DO_SVE2_ZZZ_TB(UABDLT, uabdl, true, true)
+
+DO_SVE2_ZZZ_TB(SADDLBT, saddl, false, true)
+DO_SVE2_ZZZ_TB(SSUBLBT, ssubl, false, true)
+DO_SVE2_ZZZ_TB(SSUBLTB, ssubl, true, false)




[PATCH v5 10/81] target/arm: Implement SVE2 integer add/subtract long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix select offsets (laurent desnogues).
---
 target/arm/helper-sve.h| 24 
 target/arm/sve.decode  | 19 
 target/arm/sve_helper.c| 43 +++
 target/arm/translate-sve.c | 46 ++
 4 files changed, 132 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 37461c9927..a81297b387 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -1367,6 +1367,30 @@ DEF_HELPER_FLAGS_5(sve_ftmad_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_ftmad_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_ftmad_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_saddl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_saddl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_saddl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_ssubl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_ssubl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_ssubl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_uaddl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uaddl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uaddl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_usubl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_usubl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_usubl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_uabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index cd4f73265f..fbfd57b23a 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1159,3 +1159,22 @@ SUQADD          01000100 .. 011 100 100 ... ..... .....  @rdn_pg_rm
 USQADD          01000100 .. 011 101 100 ... ..... .....  @rdn_pg_rm
 SQSUB_zpzz      01000100 .. 011 110 100 ... ..... .....  @rdm_pg_rn # SQSUBR
 UQSUB_zpzz      01000100 .. 011 111 100 ... ..... .....  @rdm_pg_rn # UQSUBR
+
+#### SVE2 Widening Integer Arithmetic
+
+## SVE2 integer add/subtract long
+
+SADDLB          01000101 .. 0 ..... 00 0000 ..... .....  @rd_rn_rm
+SADDLT          01000101 .. 0 ..... 00 0001 ..... .....  @rd_rn_rm
+UADDLB          01000101 .. 0 ..... 00 0010 ..... .....  @rd_rn_rm
+UADDLT          01000101 .. 0 ..... 00 0011 ..... .....  @rd_rn_rm
+
+SSUBLB          01000101 .. 0 ..... 00 0100 ..... .....  @rd_rn_rm
+SSUBLT          01000101 .. 0 ..... 00 0101 ..... .....  @rd_rn_rm
+USUBLB          01000101 .. 0 ..... 00 0110 ..... .....  @rd_rn_rm
+USUBLT          01000101 .. 0 ..... 00 0111 ..... .....  @rd_rn_rm
+
+SABDLB          01000101 .. 0 ..... 00 1100 ..... .....  @rd_rn_rm
+SABDLT          01000101 .. 0 ..... 00 1101 ..... .....  @rd_rn_rm
+UABDLB          01000101 .. 0 ..... 00 1110 ..... .....  @rd_rn_rm
+UABDLT          01000101 .. 0 ..... 00 1111 ..... .....  @rd_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 12a2078edb..3d0ee76411 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1113,6 +1113,49 @@ DO_ZZW(sve_lsl_zzw_s, uint32_t, uint64_t, H1_4, DO_LSL)
 #undef DO_ZPZ
 #undef DO_ZPZ_D
 
+/*
+ * Three-operand expander, unpredicated, in which the two inputs are
+ * selected from the top or bottom half of the wide column.
+ */
+#define DO_ZZZ_TB(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc)  \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+int sel1 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
+int sel2 = extract32(desc, SIMD_DATA_SHIFT + 1, 1) * sizeof(TYPEN); \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {   \
+TYPEW nn = *(TYPEN *)(vn + HN(i + sel1));   \
+TYPEW mm = *(TYPEN *)(vm + HN(i + sel2));   \
+*(TYPEW *)(vd + HW(i)) = OP(nn, mm);\
+}   \

[PATCH v5 21/81] target/arm: Implement SVE2 bitwise shift right and accumulate

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/sve.decode  |  8 
 target/arm/translate-sve.c | 34 ++
 2 files changed, 42 insertions(+)

diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 79046d81e3..d3c4ec6dd1 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1253,3 +1253,11 @@ UABALT          01000101 .. 0 ..... 1100 11 ..... .....  @rda_rn_rm
 # ADC and SBC decoded via size in helper dispatch.
 ADCLB           01000101 .. 0 ..... 11010 0 ..... .....  @rda_rn_rm
 ADCLT           01000101 .. 0 ..... 11010 1 ..... .....  @rda_rn_rm
+
+## SVE2 bitwise shift right and accumulate
+
+# TODO: Use @rda and %reg_movprfx here.
+SSRA            01000101 .. 0 ..... 1110 00 ..... .....  @rd_rn_tszimm_shr
+USRA            01000101 .. 0 ..... 1110 01 ..... .....  @rd_rn_tszimm_shr
+SRSRA           01000101 .. 0 ..... 1110 10 ..... .....  @rd_rn_tszimm_shr
+URSRA           01000101 .. 0 ..... 1110 11 ..... .....  @rd_rn_tszimm_shr
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 6f5e39b741..c11074 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6394,3 +6394,37 @@ static bool trans_ADCLT(DisasContext *s, arg_rrrr_esz *a)
 {
 return do_adcl(s, a, true);
 }
+
+static bool do_sve2_fn2i(DisasContext *s, arg_rri_esz *a, GVecGen2iFn *fn)
+{
+if (a->esz < 0 || !dc_isar_feature(aa64_sve2, s)) {
+return false;
+}
+if (sve_access_check(s)) {
+unsigned vsz = vec_full_reg_size(s);
+unsigned rd_ofs = vec_full_reg_offset(s, a->rd);
+unsigned rn_ofs = vec_full_reg_offset(s, a->rn);
+fn(a->esz, rd_ofs, rn_ofs, a->imm, vsz, vsz);
+}
+return true;
+}
+
+static bool trans_SSRA(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_ssra);
+}
+
+static bool trans_USRA(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_usra);
+}
+
+static bool trans_SRSRA(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_srsra);
+}
+
+static bool trans_URSRA(DisasContext *s, arg_rri_esz *a)
+{
+return do_sve2_fn2i(s, a, gen_gvec_ursra);
+}
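The four gvec generators differ only in signedness and rounding. Scalar models of one 32-bit lane (names are illustrative; `n >> sh` relies on the usual arithmetic-shift behaviour for signed values):

```c
#include <assert.h>
#include <stdint.h>

/* SSRA: accumulate an arithmetic shift right into the destination. */
static int32_t ssra32(int32_t d, int32_t n, int sh)
{
    return d + (n >> sh);
}

/* SRSRA: as SSRA, but round by adding 1 << (sh - 1) before shifting
 * (sh >= 1); the 64-bit intermediate keeps the rounding add from
 * overflowing. */
static int32_t srsra32(int32_t d, int32_t n, int sh)
{
    return d + (int32_t)(((int64_t)n + (1ll << (sh - 1))) >> sh);
}
```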




[PATCH v5 15/81] target/arm: Implement SVE2 bitwise shift left long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h|   8 ++
 target/arm/sve.decode  |   8 ++
 target/arm/sve_helper.c|  26 ++
 target/arm/translate-sve.c | 159 +
 4 files changed, 201 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index bf3e533eb4..740939e7a8 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2364,3 +2364,11 @@ DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_pmull_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_sshll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sshll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_sshll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_3(sve2_ushll_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_ushll_s, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
+DEF_HELPER_FLAGS_3(sve2_ushll_d, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 016c15ebb6..a3191eba7b 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1207,3 +1207,11 @@ SMULLB_zzz  01000101 .. 0 ..... 011 100 ..... .....  @rd_rn_rm
 SMULLT_zzz      01000101 .. 0 ..... 011 101 ..... .....  @rd_rn_rm
 UMULLB_zzz      01000101 .. 0 ..... 011 110 ..... .....  @rd_rn_rm
 UMULLT_zzz      01000101 .. 0 ..... 011 111 ..... .....  @rd_rn_rm
+
+## SVE2 bitwise shift left long
+
+# Note bit23 == 0 is handled by esz > 0 in do_sve2_shll_tb.
+SSHLLB          01000101 .. 0 ..... 1010 00 ..... .....  @rd_rn_tszimm_shl
+SSHLLT          01000101 .. 0 ..... 1010 01 ..... .....  @rd_rn_tszimm_shl
+USHLLB          01000101 .. 0 ..... 1010 10 ..... .....  @rd_rn_tszimm_shl
+USHLLT          01000101 .. 0 ..... 1010 11 ..... .....  @rd_rn_tszimm_shl
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index f30f3722af..73aa670a77 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -625,6 +625,8 @@ DO_ZPZZ(sve2_sqrshl_zpzz_h, int16_t, H1_2, do_sqrshl_h)
 DO_ZPZZ(sve2_sqrshl_zpzz_s, int32_t, H1_4, do_sqrshl_s)
 DO_ZPZZ_D(sve2_sqrshl_zpzz_d, int64_t, do_sqrshl_d)
 
+#undef do_sqrshl_d
+
 #define do_uqrshl_b(n, m) \
({ uint32_t discard; do_uqrshl_bhs(n, (int8_t)m, 8, true, ); })
 #define do_uqrshl_h(n, m) \
@@ -639,6 +641,8 @@ DO_ZPZZ(sve2_uqrshl_zpzz_h, uint16_t, H1_2, do_uqrshl_h)
 DO_ZPZZ(sve2_uqrshl_zpzz_s, uint32_t, H1_4, do_uqrshl_s)
 DO_ZPZZ_D(sve2_uqrshl_zpzz_d, uint64_t, do_uqrshl_d)
 
+#undef do_uqrshl_d
+
 #define DO_HADD_BHS(n, m)  (((int64_t)n + m) >> 1)
 #define DO_HADD_D(n, m)((n >> 1) + (m >> 1) + (n & m & 1))
 
@@ -1217,6 +1221,28 @@ DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
 
 #undef DO_ZZZ_WTB
 
+#define DO_ZZI_SHLL(NAME, TYPEW, TYPEN, HW, HN) \
+void HELPER(NAME)(void *vd, void *vn, uint32_t desc)   \
+{  \
+intptr_t i, opr_sz = simd_oprsz(desc); \
+intptr_t sel = (simd_data(desc) & 1) * sizeof(TYPEN);  \
+int shift = simd_data(desc) >> 1;  \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {  \
+TYPEW nn = *(TYPEN *)(vn + HN(i + sel));   \
+*(TYPEW *)(vd + HW(i)) = nn << shift;  \
+}  \
+}
+
+DO_ZZI_SHLL(sve2_sshll_h, int16_t, int8_t, H1_2, H1)
+DO_ZZI_SHLL(sve2_sshll_s, int32_t, int16_t, H1_4, H1_2)
+DO_ZZI_SHLL(sve2_sshll_d, int64_t, int32_t, , H1_4)
+
+DO_ZZI_SHLL(sve2_ushll_h, uint16_t, uint8_t, H1_2, H1)
+DO_ZZI_SHLL(sve2_ushll_s, uint32_t, uint16_t, H1_4, H1_2)
+DO_ZZI_SHLL(sve2_ushll_d, uint64_t, uint32_t, , H1_4)
+
+#undef DO_ZZI_SHLL
+
 /* Two-operand reduction expander, controlled by a predicate.
  * The difference between TYPERED and TYPERET has to do with
  * sign-extension.  E.g. for SMAX, TYPERED must be signed,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fbdccc1c68..afd208212b 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6071,3 +6071,162 @@ DO_SVE2_ZZZ_WTB(UADDWB, uaddw, false)
 DO_SVE2_ZZZ_WTB(UADDWT, uaddw, true)
 DO_SVE2_ZZZ_WTB(USUBWB, usubw, false)
 DO_SVE2_ZZZ_WTB(USUBWT, usubw, true)
+
+static void gen_sshll_vec(unsigned vece, TCGv_vec d, TCGv_vec n, int64_t imm)
+{
+int top = imm & 1;
+int shl = imm >> 1;
+int halfbits = 4 << vece;
+
+if (top) {
+if (shl == halfbits) {
+TCGv_vec t = tcg_temp_new_vec_matching(d);
+tcg_gen_dupi_vec(vece, t, MAKE_64BIT_MASK(halfbits, halfbits));
+tcg_gen_and_vec(vece, d, n, t);
+tcg_temp_free_vec(t);
+} else {
+tcg_gen_sari_vec(vece, d, n, halfbits);
+tcg_gen_shli_vec(vece, d, d, 

[PATCH v5 08/81] target/arm: Implement SVE2 integer pairwise arithmetic

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Load all inputs before writing any output (laurent desnogues)
---
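The v2 note matters because the destination register may alias an input. A minimal sketch of the pairwise-add semantics with hypothetical names (the real helper is predicated and macro-generated, unlike this):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not QEMU code): even result slots pair adjacent
 * elements of N, odd result slots pair adjacent elements of M.  Both
 * inputs of a pair are loaded before either output is stored, so d may
 * alias n or m. */
static void addp_u8(uint8_t *d, const uint8_t *n, const uint8_t *m, int len)
{
    for (int i = 0; i < len; i += 2) {
        uint8_t n0 = n[i], n1 = n[i + 1];   /* read all inputs first */
        uint8_t m0 = m[i], m1 = m[i + 1];
        d[i] = n0 + n1;                     /* even slot: pair from N */
        d[i + 1] = m0 + m1;                 /* odd slot: pair from M */
    }
}
```

Calling `addp_u8(n, n, m, 4)` in place still gives the right answer, because each pair is read before the corresponding output is written.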
 target/arm/helper-sve.h| 45 ++
 target/arm/sve.decode  |  8 
 target/arm/sve_helper.c| 76 ++
 target/arm/translate-sve.c |  6 +++
 4 files changed, 135 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 5fdc0d223a..09bc067dd4 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -326,6 +326,51 @@ DEF_HELPER_FLAGS_5(sve_sel_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_sel_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_addp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_smaxp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_umaxp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sminp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uminp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_b, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_asr_zpzw_h, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 58c3f7ede4..61a3321325 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1140,3 +1140,11 @@ SRHADD  01000100 .. 010 100 100 ... ..... .....  @rdn_pg_rm
 URHADD          01000100 .. 010 101 100 ... ..... .....  @rdn_pg_rm
 SHSUB           01000100 .. 010 110 100 ... ..... .....  @rdm_pg_rn # SHSUBR
 UHSUB           01000100 .. 010 111 100 ... ..... .....  @rdm_pg_rn # UHSUBR
+
+### SVE2 integer pairwise arithmetic
+
+ADDP            01000100 .. 010 001 101 ... ..... .....  @rdn_pg_rm
+SMAXP           01000100 .. 010 100 101 ... ..... .....  @rdn_pg_rm
+UMAXP           01000100 .. 010 101 101 ... ..... .....  @rdn_pg_rm
+SMINP           01000100 .. 010 110 101 ... ..... .....  @rdn_pg_rm
+UMINP           01000100 .. 010 111 101 ... ..... .....  @rdn_pg_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 3703b96eb4..7cc559d950 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -681,6 +681,82 @@ DO_ZPZZ_D(sve2_uhsub_zpzz_d, uint64_t, DO_HSUB_D)
 #undef DO_ZPZZ
 #undef DO_ZPZZ_D
 
+/*
+ * Three operand expander, operating on element pairs.
+ * If the slot I is even, the elements are from VN {I, I+1}.
+ * If the slot I is odd, the elements are from VM {I-1, I}.
+ * Load all of the input elements in each pair before overwriting output.
+ */
+#define DO_ZPZZ_PAIR(NAME, TYPE, H, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
+{   \
+intptr_t i, opr_sz = simd_oprsz(desc);  \
+for (i = 0; i < opr_sz; ) { \
+uint16_t pg = *(uint16_t *)(vg + H1_2(i >> 3)); \
+do {\
+TYPE n0 = *(TYPE *)(vn + H(i)); \
+TYPE m0 = *(TYPE *)(vm + H(i)); \
+TYPE n1 = *(TYPE *)(vn 

[PATCH v5 06/81] target/arm: Implement SVE2 saturating/rounding bitwise shift left (predicated)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Shift values are always signed (laurent desnogues).
---
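Since the shift count is signed (the v2 fix above), one operand encodes both direction and amount. A simplified sketch of the SRSHL-style behavior, without the saturating variants; the function name is mine, not the patch's:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not the QEMU helper): a positive count shifts
 * left, a negative count shifts right with rounding. */
static int32_t srshl32(int32_t src, int32_t shift)
{
    if (shift <= -32) {
        return 0;                       /* rounding the sign bit gives 0 */
    } else if (shift < 0) {
        int32_t tmp = src >> (-shift - 1);
        return (tmp >> 1) + (tmp & 1);  /* rounded right shift */
    } else if (shift < 32) {
        return (int32_t)((uint32_t)src << shift);
    }
    return 0;                           /* everything shifted out */
}
```

For example, a count of -2 computes the rounded value of src / 4: srshl32(10, -2) is 3, and srshl32(-10, -2) is -2.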
 target/arm/helper-sve.h| 54 ++
 target/arm/sve.decode  | 17 +
 target/arm/sve_helper.c| 78 ++
 target/arm/translate-sve.c | 18 +
 4 files changed, 167 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 9992e93e2b..62106c74be 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -172,6 +172,60 @@ DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sqrshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 5ba542969b..93f2479693 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1112,3 +1112,20 @@ URECPE  01000100 .. 000 000 101 ... ..... .....  @rd_pg_rn
 URSQRTE         01000100 .. 000 001 101 ... ..... .....  @rd_pg_rn
 SQABS           01000100 .. 001 000 101 ... ..... .....  @rd_pg_rn
 SQNEG           01000100 .. 001 001 101 ... ..... .....  @rd_pg_rn
+
+### SVE2 saturating/rounding bitwise shift left (predicated)
+
+SRSHL           01000100 .. 000 010 100 ... ..... .....  @rdn_pg_rm
+URSHL           01000100 .. 000 011 100 ... ..... .....  @rdn_pg_rm
+SRSHL           01000100 .. 000 110 100 ... ..... .....  @rdm_pg_rn # SRSHLR
+URSHL           01000100 .. 000 111 100 ... ..... .....  @rdm_pg_rn # URSHLR
+
+SQSHL           01000100 .. 001 000 100 ... ..... .....  @rdn_pg_rm
+UQSHL           01000100 .. 001 001 100 ... ..... .....  @rdn_pg_rm
+SQSHL           01000100 .. 001 100 100 ... ..... .....  @rdm_pg_rn # SQSHLR
+UQSHL           01000100 .. 001 101 100 ... ..... .....  @rdm_pg_rn # UQSHLR
+
+SQRSHL          01000100 .. 001 010 100 ... ..... .....  @rdn_pg_rm
+UQRSHL          01000100 .. 001 011 100 ... ..... .....  @rdn_pg_rm
+SQRSHL          01000100 .. 001 110 100 ... ..... .....  @rdm_pg_rn # SQRSHLR
+UQRSHL          01000100 .. 001 111 100 ... ..... .....  @rdm_pg_rn # UQRSHLR
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index bbab84e81d..7eff204c3b 100644
--- a/target/arm/sve_helper.c
+++ 

[PATCH v5 13/81] target/arm: Implement SVE2 integer multiply long

2021-04-16 Thread Richard Henderson
Exclude PMULL from this category for the moment.

Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 15 +++
 target/arm/sve.decode  |  9 +
 target/arm/sve_helper.c| 31 +++
 target/arm/translate-sve.c |  9 +
 4 files changed, 64 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 3286a9c205..ad8121eec6 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -2347,4 +2347,19 @@ DEF_HELPER_FLAGS_6(sve_stdd_le_zd_mte, TCG_CALL_NO_WG,
 DEF_HELPER_FLAGS_6(sve_stdd_be_zd_mte, TCG_CALL_NO_WG,
void, env, ptr, ptr, ptr, tl, i32)
 
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqdmull_zzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_smull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_umull_zzz_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve2_pmull_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index f6f21426ef..d9a72b7661 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1196,3 +1196,12 @@ SSUBWB  01000101 .. 0 ..... 010 100 ..... .....  @rd_rn_rm
 SSUBWT          01000101 .. 0 ..... 010 101 ..... .....  @rd_rn_rm
 USUBWB          01000101 .. 0 ..... 010 110 ..... .....  @rd_rn_rm
 USUBWT          01000101 .. 0 ..... 010 111 ..... .....  @rd_rn_rm
+
+## SVE2 integer multiply long
+
+SQDMULLB_zzz    01000101 .. 0 ..... 011 000 ..... .....  @rd_rn_rm
+SQDMULLT_zzz    01000101 .. 0 ..... 011 001 ..... .....  @rd_rn_rm
+SMULLB_zzz      01000101 .. 0 ..... 011 100 ..... .....  @rd_rn_rm
+SMULLT_zzz      01000101 .. 0 ..... 011 101 ..... .....  @rd_rn_rm
+UMULLB_zzz      01000101 .. 0 ..... 011 110 ..... .....  @rd_rn_rm
+UMULLT_zzz      01000101 .. 0 ..... 011 111 ..... .....  @rd_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index cf0dbb3987..f30f3722af 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1154,6 +1154,37 @@ DO_ZZZ_TB(sve2_uabdl_h, uint16_t, uint8_t, H1_2, H1, DO_ABD)
 DO_ZZZ_TB(sve2_uabdl_s, uint32_t, uint16_t, H1_4, H1_2, DO_ABD)
 DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
 
+DO_ZZZ_TB(sve2_smull_zzz_h, int16_t, int8_t, H1_2, H1, DO_MUL)
+DO_ZZZ_TB(sve2_smull_zzz_s, int32_t, int16_t, H1_4, H1_2, DO_MUL)
+DO_ZZZ_TB(sve2_smull_zzz_d, int64_t, int32_t, , H1_4, DO_MUL)
+
+DO_ZZZ_TB(sve2_umull_zzz_h, uint16_t, uint8_t, H1_2, H1, DO_MUL)
+DO_ZZZ_TB(sve2_umull_zzz_s, uint32_t, uint16_t, H1_4, H1_2, DO_MUL)
+DO_ZZZ_TB(sve2_umull_zzz_d, uint64_t, uint32_t, , H1_4, DO_MUL)
+
+/* Note that the multiply cannot overflow, but the doubling can. */
+static inline int16_t do_sqdmull_h(int16_t n, int16_t m)
+{
+int16_t val = n * m;
+return DO_SQADD_H(val, val);
+}
+
+static inline int32_t do_sqdmull_s(int32_t n, int32_t m)
+{
+int32_t val = n * m;
+return DO_SQADD_S(val, val);
+}
+
+static inline int64_t do_sqdmull_d(int64_t n, int64_t m)
+{
+int64_t val = n * m;
+return do_sqadd_d(val, val);
+}
+
+DO_ZZZ_TB(sve2_sqdmull_zzz_h, int16_t, int8_t, H1_2, H1, do_sqdmull_h)
+DO_ZZZ_TB(sve2_sqdmull_zzz_s, int32_t, int16_t, H1_4, H1_2, do_sqdmull_s)
+DO_ZZZ_TB(sve2_sqdmull_zzz_d, int64_t, int32_t, , H1_4, do_sqdmull_d)
+
 #undef DO_ZZZ_TB
 
 #define DO_ZZZ_WTB(NAME, TYPEW, TYPEN, HW, HN, OP) \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 70900c122f..19a1f289d8 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -6021,6 +6021,15 @@ DO_SVE2_ZZZ_TB(SADDLBT, saddl, false, true)
 DO_SVE2_ZZZ_TB(SSUBLBT, ssubl, false, true)
 DO_SVE2_ZZZ_TB(SSUBLTB, ssubl, true, false)
 
+DO_SVE2_ZZZ_TB(SQDMULLB_zzz, sqdmull_zzz, false, false)
+DO_SVE2_ZZZ_TB(SQDMULLT_zzz, sqdmull_zzz, true, true)
+
+DO_SVE2_ZZZ_TB(SMULLB_zzz, smull_zzz, false, false)
+DO_SVE2_ZZZ_TB(SMULLT_zzz, smull_zzz, true, true)
+
+DO_SVE2_ZZZ_TB(UMULLB_zzz, umull_zzz, false, false)
+DO_SVE2_ZZZ_TB(UMULLT_zzz, umull_zzz, true, true)
+
 #define DO_SVE2_ZZZ_WTB(NAME, name, SEL2) \
 static bool trans_##NAME(DisasContext *s, arg_rrr_esz *a)   \
 {   \
-- 
2.25.1




[PATCH v5 12/81] target/arm: Implement SVE2 integer add/subtract wide

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix select offsets (laurent desnogues).
---
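The select-offset fix above is about which narrow element of each pair gets widened. A little-endian sketch of the SADDWB/SADDWT semantics with hypothetical names (the patch's real expansion is the DO_ZZZ_WTB macro):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch (not QEMU code): widen the bottom (sel = 0) or
 * top (sel = 1) narrow element of each pair in m and add it to the wide
 * element in n.  Little-endian: m holds two int8_t per int16_t of n. */
static void saddw_h(int16_t *d, const int16_t *n, const int8_t *m,
                    int elems, int sel)
{
    for (int i = 0; i < elems; i++) {
        d[i] = n[i] + m[2 * i + sel];
    }
}
```

With n = {100, 200} and m bytes {1, 2, 3, 4}, the bottom form gives {101, 203} and the top form {102, 204}.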
 target/arm/helper-sve.h| 16 
 target/arm/sve.decode  | 12 
 target/arm/sve_helper.c| 30 ++
 target/arm/translate-sve.c | 20 
 4 files changed, 78 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index a81297b387..3286a9c205 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -1391,6 +1391,22 @@ DEF_HELPER_FLAGS_4(sve2_uabdl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_uabdl_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve2_uabdl_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_saddw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_saddw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_saddw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_ssubw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_ssubw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_ssubw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_uaddw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uaddw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_uaddw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_usubw_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_usubw_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_usubw_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(sve_ld1bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld2bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
 DEF_HELPER_FLAGS_4(sve_ld3bb_r, TCG_CALL_NO_WG, void, env, ptr, tl, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 12be0584a8..f6f21426ef 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1184,3 +1184,15 @@ UABDLT  01000101 .. 0 ..... 00 1111 ..... .....  @rd_rn_rm
 SADDLBT         01000101 .. 0 ..... 1000 00 ..... .....  @rd_rn_rm
 SSUBLBT         01000101 .. 0 ..... 1000 10 ..... .....  @rd_rn_rm
 SSUBLTB         01000101 .. 0 ..... 1000 11 ..... .....  @rd_rn_rm
+
+## SVE2 integer add/subtract wide
+
+SADDWB          01000101 .. 0 ..... 010 000 ..... .....  @rd_rn_rm
+SADDWT          01000101 .. 0 ..... 010 001 ..... .....  @rd_rn_rm
+UADDWB          01000101 .. 0 ..... 010 010 ..... .....  @rd_rn_rm
+UADDWT          01000101 .. 0 ..... 010 011 ..... .....  @rd_rn_rm
+
+SSUBWB          01000101 .. 0 ..... 010 100 ..... .....  @rd_rn_rm
+SSUBWT          01000101 .. 0 ..... 010 101 ..... .....  @rd_rn_rm
+USUBWB          01000101 .. 0 ..... 010 110 ..... .....  @rd_rn_rm
+USUBWT          01000101 .. 0 ..... 010 111 ..... .....  @rd_rn_rm
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 3d0ee76411..cf0dbb3987 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1156,6 +1156,36 @@ DO_ZZZ_TB(sve2_uabdl_d, uint64_t, uint32_t, , H1_4, DO_ABD)
 
 #undef DO_ZZZ_TB
 
+#define DO_ZZZ_WTB(NAME, TYPEW, TYPEN, HW, HN, OP) \
+void HELPER(NAME)(void *vd, void *vn, void *vm, uint32_t desc) \
+{  \
+intptr_t i, opr_sz = simd_oprsz(desc); \
+int sel2 = extract32(desc, SIMD_DATA_SHIFT, 1) * sizeof(TYPEN); \
+for (i = 0; i < opr_sz; i += sizeof(TYPEW)) {  \
+TYPEW nn = *(TYPEW *)(vn + HW(i)); \
+TYPEW mm = *(TYPEN *)(vm + HN(i + sel2));  \
+*(TYPEW *)(vd + HW(i)) = OP(nn, mm);   \
+}  \
+}
+
+DO_ZZZ_WTB(sve2_saddw_h, int16_t, int8_t, H1_2, H1, DO_ADD)
+DO_ZZZ_WTB(sve2_saddw_s, int32_t, int16_t, H1_4, H1_2, DO_ADD)
+DO_ZZZ_WTB(sve2_saddw_d, int64_t, int32_t, , H1_4, DO_ADD)
+
+DO_ZZZ_WTB(sve2_ssubw_h, int16_t, int8_t, H1_2, H1, DO_SUB)
+DO_ZZZ_WTB(sve2_ssubw_s, int32_t, int16_t, H1_4, H1_2, DO_SUB)
+DO_ZZZ_WTB(sve2_ssubw_d, int64_t, int32_t, , H1_4, DO_SUB)
+
+DO_ZZZ_WTB(sve2_uaddw_h, uint16_t, uint8_t, H1_2, H1, DO_ADD)
+DO_ZZZ_WTB(sve2_uaddw_s, uint32_t, uint16_t, H1_4, H1_2, DO_ADD)
+DO_ZZZ_WTB(sve2_uaddw_d, uint64_t, uint32_t, , H1_4, DO_ADD)
+
+DO_ZZZ_WTB(sve2_usubw_h, uint16_t, uint8_t, H1_2, H1, DO_SUB)
+DO_ZZZ_WTB(sve2_usubw_s, uint32_t, uint16_t, H1_4, H1_2, DO_SUB)
+DO_ZZZ_WTB(sve2_usubw_d, uint64_t, uint32_t, , H1_4, DO_SUB)
+
+#undef DO_ZZZ_WTB
+
 /* Two-operand reduction expander, controlled by a predicate.
  * The difference between TYPERED and TYPERET has to do with
  * sign-extension.  E.g. for SMAX, TYPERED must be signed,
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ae8323adb7..70900c122f 100644
--- a/target/arm/translate-sve.c
+++ 

[PATCH v5 05/81] target/arm: Split out saturating/rounding shifts from neon

2021-04-16 Thread Richard Henderson
Split these operations out into a header that can be shared
between neon and sve.  The "sat" pointer acts both as a boolean
controlling saturating behavior and as the selector between neon
and sve behavior -- QC bit or no QC bit.

Widen the shift operand in the new helpers, as the SVE2 insns treat
the whole input element as significant.  For the neon uses, truncate
the shift to int8_t while passing the parameter.

Implement right-shift rounding as

tmp = src >> (shift - 1);
dst = (tmp >> 1) + (tmp & 1);

This is the same number of instructions as the current

tmp = 1 << (shift - 1);
dst = (src + tmp) >> shift;

without any possibility of intermediate overflow.
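The equivalence of the two forms (whenever the old form does not overflow) is easy to check directly; these standalone functions are illustrative, not the patch's helpers:

```c
#include <assert.h>
#include <stdint.h>

/* New form: only ever shifts right, so no intermediate overflow. */
static int64_t rshr_round_new(int64_t src, int shift)
{
    int64_t tmp = src >> (shift - 1);
    return (tmp >> 1) + (tmp & 1);
}

/* Old form: src + (1 << (shift - 1)) can overflow for src near INT64_MAX. */
static int64_t rshr_round_old(int64_t src, int shift)
{
    int64_t tmp = (int64_t)1 << (shift - 1);
    return (src + tmp) >> shift;
}
```

Both give, say, 13 for a rounded shift of 100 right by 3, but only the new form stays in range for src = INT64_MAX.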

Signed-off-by: Richard Henderson 
---
v2: Widen the shift operand (laurent desnogues)
---
 target/arm/vec_internal.h | 138 +++
 target/arm/neon_helper.c  | 507 +++---
 2 files changed, 221 insertions(+), 424 deletions(-)

diff --git a/target/arm/vec_internal.h b/target/arm/vec_internal.h
index e3eb3e7a6b..0102547a10 100644
--- a/target/arm/vec_internal.h
+++ b/target/arm/vec_internal.h
@@ -30,4 +30,142 @@ static inline void clear_tail(void *vd, uintptr_t opr_sz, uintptr_t max_sz)
 }
 }
 
+static inline int32_t do_sqrshl_bhs(int32_t src, int32_t shift, int bits,
+bool round, uint32_t *sat)
+{
+if (shift <= -bits) {
+/* Rounding the sign bit always produces 0. */
+if (round) {
+return 0;
+}
+return src >> 31;
+} else if (shift < 0) {
+if (round) {
+src >>= -shift - 1;
+return (src >> 1) + (src & 1);
+}
+return src >> -shift;
+} else if (shift < bits) {
+int32_t val = src << shift;
+if (bits == 32) {
+if (!sat || val >> shift == src) {
+return val;
+}
+} else {
+int32_t extval = sextract32(val, 0, bits);
+if (!sat || val == extval) {
+return extval;
+}
+}
+} else if (!sat || src == 0) {
+return 0;
+}
+
+*sat = 1;
+return (1u << (bits - 1)) - (src >= 0);
+}
+
+static inline uint32_t do_uqrshl_bhs(uint32_t src, int32_t shift, int bits,
+ bool round, uint32_t *sat)
+{
+if (shift <= -(bits + round)) {
+return 0;
+} else if (shift < 0) {
+if (round) {
+src >>= -shift - 1;
+return (src >> 1) + (src & 1);
+}
+return src >> -shift;
+} else if (shift < bits) {
+uint32_t val = src << shift;
+if (bits == 32) {
+if (!sat || val >> shift == src) {
+return val;
+}
+} else {
+uint32_t extval = extract32(val, 0, bits);
+if (!sat || val == extval) {
+return extval;
+}
+}
+} else if (!sat || src == 0) {
+return 0;
+}
+
+*sat = 1;
+return MAKE_64BIT_MASK(0, bits);
+}
+
+static inline int32_t do_suqrshl_bhs(int32_t src, int32_t shift, int bits,
+ bool round, uint32_t *sat)
+{
+if (src < 0) {
+*sat = 1;
+return 0;
+}
+return do_uqrshl_bhs(src, shift, bits, round, sat);
+}
+
+static inline int64_t do_sqrshl_d(int64_t src, int64_t shift,
+  bool round, uint32_t *sat)
+{
+if (shift <= -64) {
+/* Rounding the sign bit always produces 0. */
+if (round) {
+return 0;
+}
+return src >> 63;
+} else if (shift < 0) {
+if (round) {
+src >>= -shift - 1;
+return (src >> 1) + (src & 1);
+}
+return src >> -shift;
+} else if (shift < 64) {
+int64_t val = src << shift;
+if (!sat || val >> shift == src) {
+return val;
+}
+} else if (!sat || src == 0) {
+return 0;
+}
+
+*sat = 1;
+return src < 0 ? INT64_MIN : INT64_MAX;
+}
+
+static inline uint64_t do_uqrshl_d(uint64_t src, int64_t shift,
+   bool round, uint32_t *sat)
+{
+if (shift <= -(64 + round)) {
+return 0;
+} else if (shift < 0) {
+if (round) {
+src >>= -shift - 1;
+return (src >> 1) + (src & 1);
+}
+return src >> -shift;
+} else if (shift < 64) {
+uint64_t val = src << shift;
+if (!sat || val >> shift == src) {
+return val;
+}
+} else if (!sat || src == 0) {
+return 0;
+}
+
+*sat = 1;
+return UINT64_MAX;
+}
+
+static inline int64_t do_suqrshl_d(int64_t src, int64_t shift,
+   bool round, uint32_t *sat)
+{
+if (src < 0) {
+*sat = 1;
+return 0;
+}
+return do_uqrshl_d(src, shift, round, sat);
+}
+
 #endif /* TARGET_ARM_VEC_INTERNALS_H */
diff --git 

[PATCH v5 04/81] target/arm: Implement SVE2 integer unary operations (predicated)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
v2: Fix sqabs, sqneg (laurent desnogues)
---
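The v2 fix concerns the one input that overflows: the minimum value. A sketch of what the DO_SQABS/DO_SQNEG macros compute, specialized to int8_t; the function names are mine:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative int8_t versions of the generic macros: only INT8_MIN
 * would overflow a plain abs/neg, so it saturates to INT8_MAX. */
static int8_t sqabs8(int8_t x)
{
    return x >= 0 ? x : (x == INT8_MIN ? INT8_MAX : -x);
}

static int8_t sqneg8(int8_t x)
{
    return x == INT8_MIN ? INT8_MAX : -x;
}
```

So sqabs8(-128) and sqneg8(-128) both return 127 instead of wrapping back to -128.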
 target/arm/helper-sve.h| 13 +++
 target/arm/sve.decode  |  7 ++
 target/arm/sve_helper.c| 29 +++
 target/arm/translate-sve.c | 47 ++
 4 files changed, 92 insertions(+), 4 deletions(-)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index b2a274b40b..9992e93e2b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -502,6 +502,19 @@ DEF_HELPER_FLAGS_4(sve_rbit_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(sve_rbit_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(sve2_sqabs_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqabs_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqabs_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqabs_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_sqneg_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqneg_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqneg_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_sqneg_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(sve2_urecpe_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(sve2_ursqrte_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_splice, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, ptr, i32)
 
 DEF_HELPER_FLAGS_5(sve_cmpeq_ppzz_b, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 0524c01fcf..5ba542969b 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1105,3 +1105,10 @@ PMUL_zzz        00000100 00 1 ..... 0110 01 ..... .....  @rd_rn_rm_e0
 
 SADALP_zpzz     01000100 .. 000 100 101 ... ..... .....  @rdm_pg_rn
 UADALP_zpzz     01000100 .. 000 101 101 ... ..... .....  @rdm_pg_rn
+
+### SVE2 integer unary operations (predicated)
+
+URECPE          01000100 .. 000 000 101 ... ..... .....  @rd_pg_rn
+URSQRTE         01000100 .. 000 001 101 ... ..... .....  @rd_pg_rn
+SQABS           01000100 .. 001 000 101 ... ..... .....  @rd_pg_rn
+SQNEG           01000100 .. 001 001 101 ... ..... .....  @rd_pg_rn
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 42fe315485..bbab84e81d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -535,8 +535,8 @@ static inline uint64_t do_sadalp_d(uint64_t n, uint64_t m)
 return m + n1 + n2;
 }
 
-DO_ZPZZ(sve2_sadalp_zpzz_h, int16_t, H1_2, do_sadalp_h)
-DO_ZPZZ(sve2_sadalp_zpzz_s, int32_t, H1_4, do_sadalp_s)
+DO_ZPZZ(sve2_sadalp_zpzz_h, uint16_t, H1_2, do_sadalp_h)
+DO_ZPZZ(sve2_sadalp_zpzz_s, uint32_t, H1_4, do_sadalp_s)
 DO_ZPZZ_D(sve2_sadalp_zpzz_d, uint64_t, do_sadalp_d)
 
 static inline uint16_t do_uadalp_h(uint16_t n, uint16_t m)
@@ -557,8 +557,8 @@ static inline uint64_t do_uadalp_d(uint64_t n, uint64_t m)
 return m + n1 + n2;
 }
 
-DO_ZPZZ(sve2_uadalp_zpzz_h, int16_t, H1_2, do_uadalp_h)
-DO_ZPZZ(sve2_uadalp_zpzz_s, int32_t, H1_4, do_uadalp_s)
+DO_ZPZZ(sve2_uadalp_zpzz_h, uint16_t, H1_2, do_uadalp_h)
+DO_ZPZZ(sve2_uadalp_zpzz_s, uint32_t, H1_4, do_uadalp_s)
 DO_ZPZZ_D(sve2_uadalp_zpzz_d, uint64_t, do_uadalp_d)
 
 #undef DO_ZPZZ
@@ -728,6 +728,27 @@ DO_ZPZ(sve_rbit_h, uint16_t, H1_2, revbit16)
 DO_ZPZ(sve_rbit_s, uint32_t, H1_4, revbit32)
 DO_ZPZ_D(sve_rbit_d, uint64_t, revbit64)
 
+#define DO_SQABS(X) \
+({ __typeof(X) x_ = (X), min_ = 1ull << (sizeof(X) * 8 - 1); \
+   x_ >= 0 ? x_ : x_ == min_ ? -min_ - 1 : -x_; })
+
+DO_ZPZ(sve2_sqabs_b, int8_t, H1, DO_SQABS)
+DO_ZPZ(sve2_sqabs_h, int16_t, H1_2, DO_SQABS)
+DO_ZPZ(sve2_sqabs_s, int32_t, H1_4, DO_SQABS)
+DO_ZPZ_D(sve2_sqabs_d, int64_t, DO_SQABS)
+
+#define DO_SQNEG(X) \
+({ __typeof(X) x_ = (X), min_ = 1ull << (sizeof(X) * 8 - 1); \
+   x_ == min_ ? -min_ - 1 : -x_; })
+
+DO_ZPZ(sve2_sqneg_b, uint8_t, H1, DO_SQNEG)
+DO_ZPZ(sve2_sqneg_h, uint16_t, H1_2, DO_SQNEG)
+DO_ZPZ(sve2_sqneg_s, uint32_t, H1_4, DO_SQNEG)
+DO_ZPZ_D(sve2_sqneg_d, uint64_t, DO_SQNEG)
+
+DO_ZPZ(sve2_urecpe_s, uint32_t, H1_4, helper_recpe_u32)
+DO_ZPZ(sve2_ursqrte_s, uint32_t, H1_4, helper_rsqrte_u32)
+
 /* Three-operand expander, unpredicated, in which the third operand is "wide".
  */
 #define DO_ZZW(NAME, TYPE, TYPEW, H, OP)   \
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 208d9ea7e0..c30b3c476e 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -5884,3 +5884,50 @@ static bool trans_UADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
 }
 return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
 }
+
+/*
+ * SVE2 integer unary operations (predicated)
+ */
+
+static bool do_sve2_zpz_ool(DisasContext *s, arg_rpr_esz *a,
+gen_helper_gvec_3 *fn)
+{
+if 

[PATCH v5 03/81] target/arm: Implement SVE2 integer pairwise add and accumulate long

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 14 
 target/arm/sve.decode  |  5 +
 target/arm/sve_helper.c| 44 ++
 target/arm/translate-sve.c | 39 +
 4 files changed, 102 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index e4cadd2a65..b2a274b40b 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -158,6 +158,20 @@ DEF_HELPER_FLAGS_5(sve_umulh_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve_umulh_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_sadalp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uadalp_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 557706cacb..0524c01fcf 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1100,3 +1100,8 @@ MUL_zzz         00000100 .. 1 ..... 0110 00 ..... .....  @rd_rn_rm
 SMULH_zzz       00000100 .. 1 ..... 0110 10 ..... .....  @rd_rn_rm
 UMULH_zzz       00000100 .. 1 ..... 0110 11 ..... .....  @rd_rn_rm
 PMUL_zzz        00000100 00 1 ..... 0110 01 ..... .....  @rd_rn_rm_e0
+
+### SVE2 Integer - Predicated
+
+SADALP_zpzz 01000100 .. 000 100 101 ... . .  @rdm_pg_rn
+UADALP_zpzz 01000100 .. 000 101 101 ... . .  @rdm_pg_rn
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index c068dfa0d5..42fe315485 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -517,6 +517,50 @@ DO_ZPZZ_D(sve_asr_zpzz_d, int64_t, DO_ASR)
 DO_ZPZZ_D(sve_lsr_zpzz_d, uint64_t, DO_LSR)
 DO_ZPZZ_D(sve_lsl_zpzz_d, uint64_t, DO_LSL)
 
+static inline uint16_t do_sadalp_h(uint16_t n, uint16_t m)
+{
+    int8_t n1 = n, n2 = n >> 8;
+    return m + n1 + n2;
+}
+
+static inline uint32_t do_sadalp_s(uint32_t n, uint32_t m)
+{
+    int16_t n1 = n, n2 = n >> 16;
+    return m + n1 + n2;
+}
+
+static inline uint64_t do_sadalp_d(uint64_t n, uint64_t m)
+{
+    int32_t n1 = n, n2 = n >> 32;
+    return m + n1 + n2;
+}
+
+DO_ZPZZ(sve2_sadalp_zpzz_h, int16_t, H1_2, do_sadalp_h)
+DO_ZPZZ(sve2_sadalp_zpzz_s, int32_t, H1_4, do_sadalp_s)
+DO_ZPZZ_D(sve2_sadalp_zpzz_d, uint64_t, do_sadalp_d)
+
+static inline uint16_t do_uadalp_h(uint16_t n, uint16_t m)
+{
+    uint8_t n1 = n, n2 = n >> 8;
+    return m + n1 + n2;
+}
+
+static inline uint32_t do_uadalp_s(uint32_t n, uint32_t m)
+{
+    uint16_t n1 = n, n2 = n >> 16;
+    return m + n1 + n2;
+}
+
+static inline uint64_t do_uadalp_d(uint64_t n, uint64_t m)
+{
+    uint32_t n1 = n, n2 = n >> 32;
+    return m + n1 + n2;
+}
+
+DO_ZPZZ(sve2_uadalp_zpzz_h, int16_t, H1_2, do_uadalp_h)
+DO_ZPZZ(sve2_uadalp_zpzz_s, int32_t, H1_4, do_uadalp_s)
+DO_ZPZZ_D(sve2_uadalp_zpzz_d, uint64_t, do_uadalp_d)
+
 #undef DO_ZPZZ
 #undef DO_ZPZZ_D
 
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index f82d7d96f6..208d9ea7e0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -5845,3 +5845,42 @@ static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
 {
 return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
 }
+
+/*
+ * SVE2 Integer - Predicated
+ */
+
+static bool do_sve2_zpzz_ool(DisasContext *s, arg_rprr_esz *a,
+                             gen_helper_gvec_4 *fn)
+{
+    if (!dc_isar_feature(aa64_sve2, s)) {
+        return false;
+    }
+    return do_zpzz_ool(s, a, fn);
+}
+
+static bool trans_SADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
+{
+    static gen_helper_gvec_4 * const fns[3] = {
+        gen_helper_sve2_sadalp_zpzz_h,
+        gen_helper_sve2_sadalp_zpzz_s,
+        gen_helper_sve2_sadalp_zpzz_d,
+    };
+    if (a->esz == 0) {
+        return false;
+    }
+    return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
+}
+
+static bool trans_UADALP_zpzz(DisasContext *s, arg_rprr_esz *a)
+{
+    static gen_helper_gvec_4 * const fns[3] = {
+        gen_helper_sve2_uadalp_zpzz_h,
+        gen_helper_sve2_uadalp_zpzz_s,
+        gen_helper_sve2_uadalp_zpzz_d,
+    };
+    if (a->esz == 0) {
+        return false;
+    }
+    return do_sve2_zpzz_ool(s, a, fns[a->esz - 1]);
+}
-- 
2.25.1




[PATCH v5 07/81] target/arm: Implement SVE2 integer halving add/subtract (predicated)

2021-04-16 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/arm/helper-sve.h| 54 ++
 target/arm/sve.decode  | 11 
 target/arm/sve_helper.c| 39 +++
 target/arm/translate-sve.c |  8 ++
 4 files changed, 112 insertions(+)

diff --git a/target/arm/helper-sve.h b/target/arm/helper-sve.h
index 62106c74be..5fdc0d223a 100644
--- a/target/arm/helper-sve.h
+++ b/target/arm/helper-sve.h
@@ -226,6 +226,60 @@ DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_s, TCG_CALL_NO_RWG,
 DEF_HELPER_FLAGS_5(sve2_uqrshl_zpzz_d, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_srhadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_urhadd_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_shsub_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_b, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_h, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_s, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_5(sve2_uhsub_zpzz_d, TCG_CALL_NO_RWG,
+   void, ptr, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_s, TCG_CALL_NO_RWG,
void, ptr, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_5(sve_sdiv_zpzz_d, TCG_CALL_NO_RWG,
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 93f2479693..58c3f7ede4 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1129,3 +1129,14 @@ SQRSHL  01000100 .. 001 010 100 ... . .  @rdn_pg_rm
 UQRSHL  01000100 .. 001 011 100 ... . .  @rdn_pg_rm
 SQRSHL  01000100 .. 001 110 100 ... . .  @rdm_pg_rn # SQRSHLR
 UQRSHL  01000100 .. 001 111 100 ... . .  @rdm_pg_rn # UQRSHLR
+
+### SVE2 integer halving add/subtract (predicated)
+
+SHADD   01000100 .. 010 000 100 ... . .  @rdn_pg_rm
+UHADD   01000100 .. 010 001 100 ... . .  @rdn_pg_rm
+SHSUB   01000100 .. 010 010 100 ... . .  @rdn_pg_rm
+UHSUB   01000100 .. 010 011 100 ... . .  @rdn_pg_rm
+SRHADD  01000100 .. 010 100 100 ... . .  @rdn_pg_rm
+URHADD  01000100 .. 010 101 100 ... . .  @rdn_pg_rm
+SHSUB   01000100 .. 010 110 100 ... . .  @rdm_pg_rn # SHSUBR
+UHSUB   01000100 .. 010 111 100 ... . .  @rdm_pg_rn # UHSUBR
diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 7eff204c3b..3703b96eb4 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -639,6 +639,45 @@ DO_ZPZZ(sve2_uqrshl_zpzz_h, uint16_t, H1_2, do_uqrshl_h)
 DO_ZPZZ(sve2_uqrshl_zpzz_s, uint32_t, H1_4, do_uqrshl_s)
 DO_ZPZZ_D(sve2_uqrshl_zpzz_d, uint64_t, do_uqrshl_d)
 
+#define DO_HADD_BHS(n, m)  (((int64_t)n + m) >> 1)
+#define DO_HADD_D(n, m)((n >> 1) + (m >> 1) + (n & m & 1))
+
+DO_ZPZZ(sve2_shadd_zpzz_b, int8_t, H1_2, 

[PATCH v5 02/81] target/arm: Implement SVE2 Integer Multiply - Unpredicated

2021-04-16 Thread Richard Henderson
For MUL, we can rely on generic support.  For SMULH and UMULH,
create some trivial helpers.  For PMUL, back in a21bb78e5817,
we organized helper_gvec_pmul_b in preparation for this use.

Signed-off-by: Richard Henderson 
---
 target/arm/helper.h| 10 
 target/arm/sve.decode  | 10 
 target/arm/translate-sve.c | 50 
 target/arm/vec_helper.c| 96 ++
 4 files changed, 166 insertions(+)

diff --git a/target/arm/helper.h b/target/arm/helper.h
index ff8148ddc6..2c412ffd3b 100644
--- a/target/arm/helper.h
+++ b/target/arm/helper.h
@@ -828,6 +828,16 @@ DEF_HELPER_FLAGS_3(gvec_cgt0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(gvec_cge0_b, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 DEF_HELPER_FLAGS_3(gvec_cge0_h, TCG_CALL_NO_RWG, void, ptr, ptr, i32)
 
+DEF_HELPER_FLAGS_4(gvec_smulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_smulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
+DEF_HELPER_FLAGS_4(gvec_umulh_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umulh_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umulh_s, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+DEF_HELPER_FLAGS_4(gvec_umulh_d, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
+
 DEF_HELPER_FLAGS_4(gvec_sshl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_sshl_h, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
 DEF_HELPER_FLAGS_4(gvec_ushl_b, TCG_CALL_NO_RWG, void, ptr, ptr, ptr, i32)
diff --git a/target/arm/sve.decode b/target/arm/sve.decode
index 5c90603358..557706cacb 100644
--- a/target/arm/sve.decode
+++ b/target/arm/sve.decode
@@ -1090,3 +1090,13 @@ ST1_zprz1110010 .. 00 . 100 ... . . \
 @rprr_scatter_store xs=0 esz=3 scale=0
 ST1_zprz1110010 .. 00 . 110 ... . . \
 @rprr_scatter_store xs=1 esz=3 scale=0
+
+#### SVE2 Support
+
+### SVE2 Integer Multiply - Unpredicated
+
+# SVE2 integer multiply vectors (unpredicated)
+MUL_zzz 0100 .. 1 . 0110 00 . .  @rd_rn_rm
+SMULH_zzz   0100 .. 1 . 0110 10 . .  @rd_rn_rm
+UMULH_zzz   0100 .. 1 . 0110 11 . .  @rd_rn_rm
+PMUL_zzz0100 00 1 . 0110 01 . .  @rd_rn_rm_e0
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 864ed669c4..f82d7d96f6 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -5795,3 +5795,53 @@ static bool trans_MOVPRFX_z(DisasContext *s, arg_rpr_esz *a)
 {
 return do_movz_zpz(s, a->rd, a->rn, a->pg, a->esz, false);
 }
+
+/*
+ * SVE2 Integer Multiply - Unpredicated
+ */
+
+static bool trans_MUL_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+    if (!dc_isar_feature(aa64_sve2, s)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        gen_gvec_fn_zzz(s, tcg_gen_gvec_mul, a->esz, a->rd, a->rn, a->rm);
+    }
+    return true;
+}
+
+static bool do_sve2_zzz_ool(DisasContext *s, arg_rrr_esz *a,
+                            gen_helper_gvec_3 *fn)
+{
+    if (fn == NULL || !dc_isar_feature(aa64_sve2, s)) {
+        return false;
+    }
+    if (sve_access_check(s)) {
+        gen_gvec_ool_zzz(s, fn, a->rd, a->rn, a->rm, 0);
+    }
+    return true;
+}
+
+static bool trans_SMULH_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_gvec_smulh_b, gen_helper_gvec_smulh_h,
+        gen_helper_gvec_smulh_s, gen_helper_gvec_smulh_d,
+    };
+    return do_sve2_zzz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_UMULH_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+    static gen_helper_gvec_3 * const fns[4] = {
+        gen_helper_gvec_umulh_b, gen_helper_gvec_umulh_h,
+        gen_helper_gvec_umulh_s, gen_helper_gvec_umulh_d,
+    };
+    return do_sve2_zzz_ool(s, a, fns[a->esz]);
+}
+
+static bool trans_PMUL_zzz(DisasContext *s, arg_rrr_esz *a)
+{
+    return do_sve2_zzz_ool(s, a, gen_helper_gvec_pmul_b);
+}
diff --git a/target/arm/vec_helper.c b/target/arm/vec_helper.c
index 3fbeae87cb..40b92100bf 100644
--- a/target/arm/vec_helper.c
+++ b/target/arm/vec_helper.c
@@ -1985,3 +1985,99 @@ void HELPER(simd_tblx)(void *vd, void *vm, void *venv, uint32_t desc)
 clear_tail(vd, oprsz, simd_maxsz(desc));
 }
 #endif
+
+/*
+ * NxN -> N highpart multiply
+ *
+ * TODO: expose this as a generic vector operation.
+ */
+
+void HELPER(gvec_smulh_b)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+    intptr_t i, opr_sz = simd_oprsz(desc);
+    int8_t *d = vd, *n = vn, *m = vm;
+
+    for (i = 0; i < opr_sz; ++i) {
+        d[i] = ((int32_t)n[i] * m[i]) >> 8;
+    }
+    clear_tail(d, opr_sz, simd_maxsz(desc));
+}
+
+void HELPER(gvec_smulh_h)(void *vd, void *vn, void *vm, uint32_t desc)
+{
+intptr_t i, opr_sz = 

[PATCH v5 for-6.1 00/81] target/arm: Implement SVE2

2021-04-16 Thread Richard Henderson
Based-on: 20210416185959.1520974-1-richard.hender...@linaro.org
("[PATCH v4 for-6.1 00/39] target/arm: enforce alignment")

And of course, since I messed up the alignment subject, our tooling
isn't going to thread this properly.  So:

https://gitlab.com/rth7680/qemu/-/tree/tgt-arm-sve2
https://gitlab.com/rth7680/qemu/-/commit/cccb2c67e975322f006e81adb3cf5e235254f254

Changes since v4:
  * Rebased on mte + alignment changes.
  * Implement integer matrix multiply accumulate.
  * Change to decode to facilitate bfloat16.


r~


Richard Henderson (63):
  target/arm: Add ID_AA64ZFR0 fields and isar_feature_aa64_sve2
  target/arm: Implement SVE2 Integer Multiply - Unpredicated
  target/arm: Implement SVE2 integer pairwise add and accumulate long
  target/arm: Implement SVE2 integer unary operations (predicated)
  target/arm: Split out saturating/rounding shifts from neon
  target/arm: Implement SVE2 saturating/rounding bitwise shift left
(predicated)
  target/arm: Implement SVE2 integer halving add/subtract (predicated)
  target/arm: Implement SVE2 integer pairwise arithmetic
  target/arm: Implement SVE2 saturating add/subtract (predicated)
  target/arm: Implement SVE2 integer add/subtract long
  target/arm: Implement SVE2 integer add/subtract interleaved long
  target/arm: Implement SVE2 integer add/subtract wide
  target/arm: Implement SVE2 integer multiply long
  target/arm: Implement PMULLB and PMULLT
  target/arm: Implement SVE2 bitwise shift left long
  target/arm: Implement SVE2 bitwise exclusive-or interleaved
  target/arm: Implement SVE2 bitwise permute
  target/arm: Implement SVE2 complex integer add
  target/arm: Implement SVE2 integer absolute difference and accumulate
long
  target/arm: Implement SVE2 integer add/subtract long with carry
  target/arm: Implement SVE2 bitwise shift right and accumulate
  target/arm: Implement SVE2 bitwise shift and insert
  target/arm: Implement SVE2 integer absolute difference and accumulate
  target/arm: Implement SVE2 saturating extract narrow
  target/arm: Implement SVE2 SHRN, RSHRN
  target/arm: Implement SVE2 SQSHRUN, SQRSHRUN
  target/arm: Implement SVE2 UQSHRN, UQRSHRN
  target/arm: Implement SVE2 SQSHRN, SQRSHRN
  target/arm: Implement SVE2 WHILEGT, WHILEGE, WHILEHI, WHILEHS
  target/arm: Implement SVE2 WHILERW, WHILEWR
  target/arm: Implement SVE2 bitwise ternary operations
  target/arm: Implement SVE2 saturating multiply-add long
  target/arm: Implement SVE2 saturating multiply-add high
  target/arm: Implement SVE2 integer multiply-add long
  target/arm: Implement SVE2 complex integer multiply-add
  target/arm: Implement SVE2 XAR
  target/arm: Pass separate addend to {U,S}DOT helpers
  target/arm: Pass separate addend to FCMLA helpers
  target/arm: Split out formats for 2 vectors + 1 index
  target/arm: Split out formats for 3 vectors + 1 index
  target/arm: Implement SVE2 integer multiply (indexed)
  target/arm: Implement SVE2 integer multiply-add (indexed)
  target/arm: Implement SVE2 saturating multiply-add high (indexed)
  target/arm: Implement SVE2 saturating multiply-add (indexed)
  target/arm: Implement SVE2 saturating multiply (indexed)
  target/arm: Implement SVE2 signed saturating doubling multiply high
  target/arm: Implement SVE2 saturating multiply high (indexed)
  target/arm: Implement SVE mixed sign dot product (indexed)
  target/arm: Implement SVE mixed sign dot product
  target/arm: Implement SVE2 crypto unary operations
  target/arm: Implement SVE2 crypto destructive binary operations
  target/arm: Implement SVE2 crypto constructive binary operations
  target/arm: Share table of sve load functions
  target/arm: Implement SVE2 LD1RO
  target/arm: Implement 128-bit ZIP, UZP, TRN
  target/arm: Implement aarch64 SUDOT, USDOT
  target/arm: Split out do_neon_ddda_fpst
  target/arm: Remove unused fpst from VDOT_scalar
  target/arm: Fix decode for VDOT (indexed)
  target/arm: Split decode of VSDOT and VUDOT
  target/arm: Implement aarch32 VSUDOT, VUSDOT
  target/arm: Implement integer matrix multiply accumulate
  target/arm: Enable SVE2 and some extensions

Stephen Long (18):
  target/arm: Implement SVE2 floating-point pairwise
  target/arm: Implement SVE2 MATCH, NMATCH
  target/arm: Implement SVE2 ADDHNB, ADDHNT
  target/arm: Implement SVE2 RADDHNB, RADDHNT
  target/arm: Implement SVE2 SUBHNB, SUBHNT
  target/arm: Implement SVE2 RSUBHNB, RSUBHNT
  target/arm: Implement SVE2 HISTCNT, HISTSEG
  target/arm: Implement SVE2 scatter store insns
  target/arm: Implement SVE2 gather load insns
  target/arm: Implement SVE2 FMMLA
  target/arm: Implement SVE2 SPLICE, EXT
  target/arm: Implement SVE2 TBL, TBX
  target/arm: Implement SVE2 FCVTNT
  target/arm: Implement SVE2 FCVTLT
  target/arm: Implement SVE2 FCVTXNT, FCVTX
  target/arm: Implement SVE2 FLOGB
  target/arm: Implement SVE2 bitwise shift immediate
  target/arm: Implement SVE2 fp multiply-add long

 target/arm/cpu.h|   66 +
 target/arm/helper-sve.h |  681 ++-
 
