Re: [9fans] one weird trick to break p9sk1 ?

2024-05-13 Thread Richard Miller
23h...@gmail.com:
> ... the server and client keys are the
> same in p9sk1, as far as I understood. I would welcome a public/private
> key system though (is that what you were thinking of when separating
> "server key" and "client key"?). That would add yet another set of
> features that are currently missing.

Have a look at authsrv(6) in the manual. The authenticator sends a
pair of tickets to the client, one encrypted with the client's own
key and one encrypted with the server's key. That's what allows
both the client and server to authenticate each other.
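To sketch the shape of that exchange (field names here are illustrative
only, not the exact libauthsrv declarations):

/* Sketch of the ticket pair described in authsrv(6).
   Illustrative only - not the real libauthsrv structs. */
struct ticket {
	char chal[8];	/* challenge from the ticket request */
	char cuid[28];	/* client's name */
	char suid[28];	/* server's name */
	char key[7];	/* freshly generated DES session key */
};

struct ticket_reply {
	struct ticket tc;	/* sealed with Kc, the client's key */
	struct ticket ts;	/* sealed with Ks, the server's key */
};

The client can open tc, so it knows the auth server, which shares Kc
with it, vouches for the session key; it passes ts along unopened, and
only a server holding Ks can recover the same session key. That is
what lets each side convince the other.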

23h...@gmail.com:
> ... it seems to me that
> concentrating on 3DES just for the sake of similarity to DES is taking
> Occam's razor slightly too far.

Yes, I think you're probably right. I was thinking in terms of minimum
lines of code to change, but other factors are also important.


--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T56397eff6269af27-Mbc9a161e11837e5c464b2cd7
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


RE: [PATCH] Allow patterns in SLP reductions

2024-05-13 Thread Richard Biener
On Mon, 13 May 2024, Tamar Christina wrote:

> > -Original Message-
> > From: Richard Biener 
> > Sent: Friday, May 10, 2024 2:07 PM
> > To: Richard Biener 
> > Cc: gcc-patches@gcc.gnu.org
> > Subject: Re: [PATCH] Allow patterns in SLP reductions
> > 
> > On Fri, Mar 1, 2024 at 10:21 AM Richard Biener  wrote:
> > >
> > > The following removes the over-broad rejection of patterns for SLP
> > > reductions which is done by removing them from LOOP_VINFO_REDUCTIONS
> > > during pattern detection.  That's also insufficient in case the
> > > pattern only appears on the reduction path.  Instead this implements
> > > the proper correctness check in vectorizable_reduction and guides
> > > SLP discovery to heuristically avoid forming later invalid groups.
> > >
> > > I also couldn't find any testcase that FAILs when allowing the SLP
> > > reductions to form so I've added one.
> > >
> > > I came across this for single-lane SLP reductions with the all-SLP
> > > work where we rely on patterns to properly vectorize COND_EXPR
> > > reductions.
> > >
> > > Bootstrapped and tested on x86_64-unknown-linux-gnu, queued for stage1.
> > 
> > Re-bootstrapped/tested, r15-361-g52d4691294c847
> 
> Awesome!
> 
> Does this now allow us to write new reductions using patterns? i.e. 
> widening reductions?

Yes (SLP reductions, that is).  This is really only for SLP reductions
(not SLP reduction chains, not non-SLP reductions).  So it's just
a corner case, but since with SLP-only, non-SLP reductions become
SLP reductions with a single lane, that was important to fix ;)
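
As a concrete illustration (a sketch only; whether it actually
vectorizes still depends on the target providing a widening-sum
pattern), the kind of loop this affects is:

/* Two independent widening reductions in one loop: the sums are int
   but the elements are short, so the operation is only exposed via
   pattern recognition.  SLP discovery can now consider forming a
   two-lane SLP reduction group here.  */
int
widen_sum_pair (short *a, short *b, int n, int *res2)
{
  int s0 = 0, s1 = 0;
  for (int i = 0; i < n; i++)
    {
      s0 += a[i];
      s1 += b[i];
    }
  *res2 = s1;
  return s0;
}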

Richard.

> Cheers,
> Tamar
> > 
> > Richard.
> > 
> > > Richard.
> > >
> > > * tree-vect-patterns.cc (vect_pattern_recog_1): Do not
> > > remove reductions involving patterns.
> > > * tree-vect-loop.cc (vectorizable_reduction): Reject SLP
> > > reduction groups with multiple lane-reducing reductions.
> > > * tree-vect-slp.cc (vect_analyze_slp_instance): When discovering
> > > SLP reduction groups avoid including lane-reducing ones.
> > >
> > > * gcc.dg/vect/vect-reduc-sad-9.c: New testcase.
> > > ---
> > >  gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c | 68 
> > >  gcc/tree-vect-loop.cc| 15 +
> > >  gcc/tree-vect-patterns.cc| 13 
> > >  gcc/tree-vect-slp.cc | 26 +---
> > >  4 files changed, 101 insertions(+), 21 deletions(-)
> > >  create mode 100644 gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > >
> > > diff --git a/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > b/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > > new file mode 100644
> > > index 000..3c6af4510f4
> > > --- /dev/null
> > > +++ b/gcc/testsuite/gcc.dg/vect/vect-reduc-sad-9.c
> > > @@ -0,0 +1,68 @@
> > > +/* Disabling epilogues until we find a better way to deal with scans.  */
> > > +/* { dg-additional-options "--param vect-epilogues-nomask=0" } */
> > > +/* { dg-additional-options "-msse4.2" { target { x86_64-*-* i?86-*-* } } 
> > > } */
> > > +/* { dg-require-effective-target vect_usad_char } */
> > > +
> > > +#include 
> > > +#include "tree-vect.h"
> > > +
> > > +#define N 64
> > > +
> > > +unsigned char X[N] __attribute__ ((__aligned__(__BIGGEST_ALIGNMENT__)));
> > > +unsigned char Y[N] __attribute__ ((__aligned__(__BIGGEST_ALIGNMENT__)));
> > > +int abs (int);
> > > +
> > > +/* Sum of absolute differences between arrays of unsigned char types.
> > > +   Detected as a sad pattern.
> > > +   Vectorized on targets that support sad for unsigned chars.  */
> > > +
> > > +__attribute__ ((noinline)) int
> > > +foo (int len, int *res2)
> > > +{
> > > +  int i;
> > > +  int result = 0;
> > > +  int result2 = 0;
> > > +
> > > +  for (i = 0; i < len; i++)
> > > +{
> > > +  /* Make sure we are not using an SLP reduction for this.  */
> > > +  result += abs (X[2*i] - Y[2*i]);
> > > +  result2 += abs (X[2*i + 1] - Y[2*i + 1]);
> > > +}
> > > +
> > > +  *res2 = result2;
> > > +  return result;
> > > +}
> > > +
> > > +
> > > +int
> > > +main (void)
> > > +{
> > > +  i

Re: [9fans] one weird trick to break p9sk1 ?

2024-05-13 Thread Richard Miller
Jacob and Ori, thank you for filling in some more details. Without
the specifics I had been making some wrong assumptions about where
the exact threat was.

I think I now have a clearer picture:

It's not particularly p9sk1 which is vulnerable, but the protocol
for ticket request / response, which leaks enough information to
allow offline exploration of user keys. The contribution of p9sk1
is that its handshake protocol helpfully reveals a valid user name -
i.e. the authid - which can be used by an attacker to make a legitimate
ticket request, without any need for eavesdropping or guessing at
user names.
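
In other words, once a ticket-request reply for that authid is in
hand, the key search runs entirely offline, along these lines (a
sketch only; derive_des_key() and ticket_decrypts_ok() are
hypothetical stand-ins, not libauthsrv functions):

/* Hypothetical helpers: the password-to-DES-key transform, and a
   known-plaintext check against the captured ticket reply. */
void derive_des_key(unsigned char key[7], const char *password);
int ticket_decrypts_ok(const unsigned char *reply, const unsigned char key[7]);

int
offline_search(const unsigned char *reply, char **dict, int ndict)
{
	unsigned char k[7];

	for (int i = 0; i < ndict; i++) {
		derive_des_key(k, dict[i]);	/* candidate password -> key */
		if (ticket_decrypts_ok(reply, k))
			return i;	/* found; no further traffic to the auth server needed */
	}
	return -1;
}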

So, if you have an authentication service exposed to the ipv4
internet (or to the ipv6 internet with a findable address), and
your authid or a known or guessable userid has a weak enough
password to succumb to a dictionary search, it's probably right
to say that a random attacker could make a cpu connection or
mount your file service with an afternoon's work on consumer
hardware.

Nobody needs to have weak passwords, though. Using the !hex attribute
instead of !password with factotum, and/or using secstore(1), makes it
easy to have a randomly generated DES key with the full 56 bits of
entropy. This makes the attacker do more work ...  but not all that
much more. I hadn't kept up with how powerful commodity GPUs have
become. (My most recent experience with High Performance Computing
involved transputer arrays and Cray T3Ds.  Nowadays I specialise in
low performance computing.) It appears that investment of a few
thousand dollars and a few days compute time (maybe less if using
cloud services) is enough for a full brute-force exploration of the
single-DES keyspace.
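
To put a rough number on it (the per-GPU rate and rig size below are
assumptions for illustration, not measured figures):

#include <stdio.h>

int
main(void)
{
	double keyspace = 72057594037927936.0;	/* 2^56 DES keys */
	double rate = 1e10;	/* assumed DES trials per second per GPU */
	int ngpu = 20;	/* assumed size of the rig */
	double days = keyspace / (rate * ngpu) / 86400.0;

	printf("exhaustive search: about %.1f days\n", days);
	return 0;
}

With those assumptions the whole keyspace falls in about four days,
which is consistent with the estimate above.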


--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T56397eff6269af27-M2867926d1deafb39060269df
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [BVARC] Satellites -- 2m -- 70cm -- antennas

2024-05-13 Thread Richard Bonica via BVARC
Thank you ALL

I got so much info and will try them all. Sounds like I am on the right path,
just a long one...

Again.  Thank you everyone.

On Sun, May 12, 2024, 9:20 PM Walter Holmes via BVARC 
wrote:

> Richard,
>
>
>
> Working Satellites can be a fantastic part of our great hobby.
>
>
>
> And depending on how deep you're planning to get into it, it can be as cheap as
> $20.00 to $100.00 for a very nice antenna.
>
>
>
> Then you can use a dual band FM HT, BUT, this will only work late at
> night or early in the morning, when you're not competing for air time with
> the 50 and 100 watt satellite stations.
>
>
>
> If you want the best experience possible, it can cost a few thousand
> dollars real quick, for a nice satellite radio capable of LSB/USB full
> duplex, and a pair of VHF/UHF beams with a nice az/el rotor.
>
>
>
> We have been working with a few people over the past few days, using HT’s
> and $15.00 antennas, to digipeat APRS packets via the ISS, with great
> success. For a quick and simple example of entry level access.
>
>
>
> The bottom line is, it doesn’t take a lot of money at all to test the
> waters, but once you get bitten by the bug, like several of us, it will
> take a great deal more $ for full station automation, and hands free
> operation. As well as getting the maximum time on each satellite pass.
>
>
>
> I hope you get the chance to give this a try, at any level, as it’s tons
> of fun.
>
>
>
> And, as always, anything I can do to help, just give a yell. We can get
> you on the birds, one way or the other.
>
>
>
> Walter/K5WH
>
>
>
> *From:* BVARC  *On Behalf Of *Richard Bonica via
> BVARC
> *Sent:* Sunday, May 12, 2024 5:53 PM
> *To:* BRAZOS VALLEY AMATEUR RADIO CLUB 
> *Cc:* Richard Bonica 
> *Subject:* [BVARC] Satellites -- 2m -- 70cm -- antennas
>
>
>
> To all...
>
>
>
> As I go down the path of satellites and antennas, I have now realized how
> much I really don't have a clue.
>
> The things I need to learn about are :
>
> 1) Yagis -- how they work beyond the normal parts and basic point and
> listen.
>
> 2) satellites and orbits and solar winds and solar interference in general
> and what it takes to track beyond software
>
> 3) up and down links -- beyond the 2m and 70cm frequencies and how Doppler
> and angle affect them.
>
>
>
> Basically -- looks to me that the magic smoke we work to keep in
> electronics is also in the antenna.
>
>
>
> Does someone know where I can start to figure this stuff out? I am almost to
> the point of calling Mr Daniel and CPT Jack.
>
>
>
> Thank you for help in advance.
>
>
>
> Richard Bonica
>
> KG5YCU
> 
> Brazos Valley Amateur Radio Club
>
> BVARC mailing list
> BVARC@bvarc.org
> http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
> Publicly available archives are available here:
> https://www.mail-archive.com/bvarc@bvarc.org/
>

Brazos Valley Amateur Radio Club

BVARC mailing list
BVARC@bvarc.org
http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
Publicly available archives are available here: 
https://www.mail-archive.com/bvarc@bvarc.org/ 


Licensing of the Redis Module

2024-05-13 Thread Richard Zowalla
Hi all,

REDIS switched its license, which isn't compatible anymore [1].

We have two options here (imho):

(1) Drop the REDIS module
(2) Migrate to the OSS fork: https://github.com/valkey-io/valkey


WDYT?

Regards
Richard

[1] https://redis.io/blog/redis-adopts-dual-source-available-licensing/




announcements page on website

2024-05-13 Thread Richard Hainsworth

Hi all,

We've added a new announcements page to the Raku documentation website.

As soon as the changes propagate through the tool chain - probably after 
about 10.30am UTC - whenever you open the website after a new 
announcement has been made that you have not read, you will get a popup 
of the announcement.


Once cancelled, that message should not normally appear again.

You will only see a new announcement when one is added to the top of the 
announcements page.


All announcements can be seen by looking at the new announcement page 
(link can be found in Introduction/Beginning)


You can suppress all announcement popups using a toggle that can be 
found in the navigation's More dropdown.


Richard

aka finanalyst



Re: [PATCH] tree-ssa-math-opts: Pattern recognize yet another .ADD_OVERFLOW pattern [PR113982]

2024-05-13 Thread Richard Biener
On Mon, 13 May 2024, Jakub Jelinek wrote:

> Hi!
> 
> We pattern recognize already many different patterns, and closest to the
> requested one also
>    yc = (type) y;
>    zc = (type) z;
>    x = yc + zc;
>    w = (typeof_y) x;
>    if (x > max)
> where y/z has the same unsigned type and type is a wider unsigned type
> and max is maximum value of the narrower unsigned type.
> But apparently people are creative in writing this in different ways,
> this requests
>    yc = (type) y;
>    zc = (type) z;
>    x = yc + zc;
>    w = (typeof_y) x;
>    if (x >> narrower_type_bits)
> 
> The following patch implements that.
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

OK.  Seeing the large matching code I wonder if using a match
in match.pd might be easier to maintain (eh, and I'd still
like to somehow see "inline" match patterns in source files, not
sure how, but it would require some gen* program to extract them).
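
For reference, the newly recognized source shape is roughly the
following (a hedged sketch assuming a 32-bit unsigned int and a wider
unsigned long long):

/* Overflow-checked narrow addition written with a shift test rather
   than a compare against the narrow type's maximum; with the patch
   this form is matched to .ADD_OVERFLOW as well. */
unsigned int
add_check (unsigned int y, unsigned int z, int *overflowed)
{
  unsigned long long x = (unsigned long long) y + z;
  *overflowed = (x >> 32) != 0;	/* instead of x > 0xffffffffULL */
  return (unsigned int) x;
}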

Thanks,
Richard.

> 2024-05-13  Jakub Jelinek  
> 
>   PR middle-end/113982
>   * tree-ssa-math-opts.cc (arith_overflow_check_p): Also return 1
>   for RSHIFT_EXPR by precision of maxval if shift result is only
>   used in a cast or comparison against zero.
>   (match_arith_overflow): Handle the RSHIFT_EXPR use case.
> 
>   * gcc.dg/pr113982.c: New test.
> 
> --- gcc/tree-ssa-math-opts.cc.jj  2024-04-11 09:26:36.318369218 +0200
> +++ gcc/tree-ssa-math-opts.cc 2024-05-10 18:17:08.795744811 +0200
> @@ -3947,6 +3947,66 @@ arith_overflow_check_p (gimple *stmt, gi
>else
>  return 0;
>  
> +  if (maxval
> +  && ccode == RSHIFT_EXPR
> +  && crhs1 == lhs
> +  && TREE_CODE (crhs2) == INTEGER_CST
> +  && wi::to_widest (crhs2) == TYPE_PRECISION (TREE_TYPE (maxval)))
> +{
> +  tree shiftlhs = gimple_assign_lhs (use_stmt);
> +  if (!shiftlhs)
> + return 0;
> +  use_operand_p use;
> +  if (!single_imm_use (shiftlhs, &use, &cur_use_stmt))
> + return 0;
> +  if (gimple_code (cur_use_stmt) == GIMPLE_COND)
> + {
> +   ccode = gimple_cond_code (cur_use_stmt);
> +   crhs1 = gimple_cond_lhs (cur_use_stmt);
> +   crhs2 = gimple_cond_rhs (cur_use_stmt);
> + }
> +  else if (is_gimple_assign (cur_use_stmt))
> + {
> +   if (gimple_assign_rhs_class (cur_use_stmt) == GIMPLE_BINARY_RHS)
> + {
> +   ccode = gimple_assign_rhs_code (cur_use_stmt);
> +   crhs1 = gimple_assign_rhs1 (cur_use_stmt);
> +   crhs2 = gimple_assign_rhs2 (cur_use_stmt);
> + }
> +   else if (gimple_assign_rhs_code (cur_use_stmt) == COND_EXPR)
> + {
> +   tree cond = gimple_assign_rhs1 (cur_use_stmt);
> +   if (COMPARISON_CLASS_P (cond))
> + {
> +   ccode = TREE_CODE (cond);
> +   crhs1 = TREE_OPERAND (cond, 0);
> +   crhs2 = TREE_OPERAND (cond, 1);
> + }
> +   else
> + return 0;
> + }
> +   else
> + {
> +   enum tree_code sc = gimple_assign_rhs_code (cur_use_stmt);
> +   tree castlhs = gimple_assign_lhs (cur_use_stmt);
> +   if (!CONVERT_EXPR_CODE_P (sc)
> +   || !castlhs
> +   || !INTEGRAL_TYPE_P (TREE_TYPE (castlhs))
> +   || (TYPE_PRECISION (TREE_TYPE (castlhs))
> +   > TYPE_PRECISION (TREE_TYPE (maxval))))
> + return 0;
> +   return 1;
> + }
> + }
> +  else
> + return 0;
> +  if ((ccode != EQ_EXPR && ccode != NE_EXPR)
> +   || crhs1 != shiftlhs
> +   || !integer_zerop (crhs2))
> + return 0;
> +  return 1;
> +}
> +
>if (TREE_CODE_CLASS (ccode) != tcc_comparison)
>  return 0;
>  
> @@ -4049,6 +4109,7 @@ arith_overflow_check_p (gimple *stmt, gi
> _8 = IMAGPART_EXPR <_7>;
> if (_8)
> and replace (utype) x with _9.
> +   Or with x >> popcount (max) instead of x > max.
>  
> Also recognize:
> x = ~z;
> @@ -4481,10 +4542,62 @@ match_arith_overflow (gimple_stmt_iterat
> gcc_checking_assert (is_gimple_assign (use_stmt));
> if (gimple_assign_rhs_class (use_stmt) == GIMPLE_BINARY_RHS)
>   {
> -   gimple_assign_set_rhs1 (use_stmt, ovf);
> -   gimple_assign_set_rhs2 (use_stmt, build_int_cst (type, 0));
> -   gimple_assign_set_rhs_code (use_stmt,
> -   ovf_use == 1 ? NE_EXPR : EQ_EXPR);
> +   if (gimple_assign_rhs_code (use_stmt) == RSHIFT_EXPR)
> +   

[PATCH v2 23/45] target/hppa: Use TCG_COND_TST* in do_unit_addsub

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 50cc6decd8..47f4b23d1b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1419,8 +1419,8 @@ static void do_unit_addsub(DisasContext *ctx, unsigned 
rt, TCGv_i64 in1,
 tcg_gen_shri_i64(cb, cb, 1);
 }
 
-tcg_gen_andi_i64(cb, cb, test_cb);
-cond = cond_make_ti(cf & 1 ? TCG_COND_EQ : TCG_COND_NE, cb, 0);
+cond = cond_make_ti(cf & 1 ? TCG_COND_TSTEQ : TCG_COND_TSTNE,
+cb, test_cb);
 }
 
 if (is_tc) {
-- 
2.34.1




[PATCH v2 34/45] target/hppa: Improve hppa_cpu_dump_state

2024-05-13 Thread Richard Henderson
Print both raw IAQ_Front and IAQ_Back as well as the GVAs.
Print control registers in system mode.
Print floating point registers if CPU_DUMP_FPU.

Signed-off-by: Richard Henderson 
---
 target/hppa/helper.c | 60 +++-
 1 file changed, 54 insertions(+), 6 deletions(-)

diff --git a/target/hppa/helper.c b/target/hppa/helper.c
index 9d217d051c..7d22c248fb 100644
--- a/target/hppa/helper.c
+++ b/target/hppa/helper.c
@@ -102,6 +102,19 @@ void cpu_hppa_put_psw(CPUHPPAState *env, target_ulong psw)
 
 void hppa_cpu_dump_state(CPUState *cs, FILE *f, int flags)
 {
+#ifndef CONFIG_USER_ONLY
+static const char cr_name[32][5] = {
+"RC","CR1",   "CR2",   "CR3",
+"CR4",   "CR5",   "CR6",   "CR7",
+"PID1",  "PID2",  "CCR",   "SAR",
+"PID3",  "PID4",  "IVA",   "EIEM",
+"ITMR",  "ISQF",  "IOQF",  "IIR",
+"ISR",   "IOR",   "IPSW",  "EIRR",
+"TR0",   "TR1",   "TR2",   "TR3",
+"TR4",   "TR5",   "TR6",   "TR7",
+};
+#endif
+
 CPUHPPAState *env = cpu_env(cs);
 target_ulong psw = cpu_hppa_get_psw(env);
 target_ulong psw_cb;
@@ -117,11 +130,12 @@ void hppa_cpu_dump_state(CPUState *cs, FILE *f, int flags)
 m = UINT32_MAX;
 }
 
-qemu_fprintf(f, "IA_F " TARGET_FMT_lx " IA_B " TARGET_FMT_lx
- " IIR %0*" PRIx64 "\n",
+qemu_fprintf(f, "IA_F %08" PRIx64 ":%0*" PRIx64 " (" TARGET_FMT_lx ")\n"
+"IA_B %08" PRIx64 ":%0*" PRIx64 " (" TARGET_FMT_lx ")\n",
+ env->iasq_f >> 32, w, m & env->iaoq_f,
  hppa_form_gva_psw(psw, env->iasq_f, env->iaoq_f),
- hppa_form_gva_psw(psw, env->iasq_b, env->iaoq_b),
- w, m & env->cr[CR_IIR]);
+ env->iasq_b >> 32, w, m & env->iaoq_b,
+ hppa_form_gva_psw(psw, env->iasq_b, env->iaoq_b));
 
 psw_c[0]  = (psw & PSW_W ? 'W' : '-');
 psw_c[1]  = (psw & PSW_E ? 'E' : '-');
@@ -154,12 +168,46 @@ void hppa_cpu_dump_state(CPUState *cs, FILE *f, int flags)
  (i & 3) == 3 ? '\n' : ' ');
 }
 #ifndef CONFIG_USER_ONLY
+for (i = 0; i < 32; i++) {
+qemu_fprintf(f, "%-4s %0*" PRIx64 "%c",
+ cr_name[i], w, m & env->cr[i],
+ (i & 3) == 3 ? '\n' : ' ');
+}
+qemu_fprintf(f, "ISQB %0*" PRIx64 " IOQB %0*" PRIx64 "\n",
+ w, m & env->cr_back[0], w, m & env->cr_back[1]);
 for (i = 0; i < 8; i++) {
 qemu_fprintf(f, "SR%02d %08x%c", i, (uint32_t)(env->sr[i] >> 32),
  (i & 3) == 3 ? '\n' : ' ');
 }
 #endif
- qemu_fprintf(f, "\n");
 
-/* ??? FR */
+if (flags & CPU_DUMP_FPU) {
+static const char rm[4][4] = { "RN", "RZ", "R+", "R-" };
+char flg[6], ena[6];
+uint32_t fpsr = env->fr0_shadow;
+
+flg[0] = (fpsr & R_FPSR_FLG_V_MASK ? 'V' : '-');
+flg[1] = (fpsr & R_FPSR_FLG_Z_MASK ? 'Z' : '-');
+flg[2] = (fpsr & R_FPSR_FLG_O_MASK ? 'O' : '-');
+flg[3] = (fpsr & R_FPSR_FLG_U_MASK ? 'U' : '-');
+flg[4] = (fpsr & R_FPSR_FLG_I_MASK ? 'I' : '-');
+flg[5] = '\0';
+
+ena[0] = (fpsr & R_FPSR_ENA_V_MASK ? 'V' : '-');
+ena[1] = (fpsr & R_FPSR_ENA_Z_MASK ? 'Z' : '-');
+ena[2] = (fpsr & R_FPSR_ENA_O_MASK ? 'O' : '-');
+ena[3] = (fpsr & R_FPSR_ENA_U_MASK ? 'U' : '-');
+ena[4] = (fpsr & R_FPSR_ENA_I_MASK ? 'I' : '-');
+ena[5] = '\0';
+
+qemu_fprintf(f, "FPSR %08x flag%s enable  %s %s\n",
+ fpsr, flg, ena, rm[FIELD_EX32(fpsr, FPSR, RM)]);
+
+for (i = 0; i < 32; i++) {
+qemu_fprintf(f, "FR%02d %016" PRIx64 "%c",
+ i, env->fr[i], (i & 3) == 3 ? '\n' : ' ');
+}
+}
+
+qemu_fprintf(f, "\n");
 }
-- 
2.34.1




[PATCH v2 39/45] target/hppa: Drop tlb_entry return from hppa_get_physical_address

2024-05-13 Thread Richard Henderson
The return-by-reference is never used.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h|  3 +--
 target/hppa/int_helper.c |  2 +-
 target/hppa/mem_helper.c | 19 ---
 target/hppa/op_helper.c  |  3 +--
 4 files changed, 7 insertions(+), 20 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index f247ad56d7..78ab0adcd0 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -371,8 +371,7 @@ bool hppa_cpu_tlb_fill(CPUState *cs, vaddr address, int 
size,
 void hppa_cpu_do_interrupt(CPUState *cpu);
 bool hppa_cpu_exec_interrupt(CPUState *cpu, int int_req);
 int hppa_get_physical_address(CPUHPPAState *env, vaddr addr, int mmu_idx,
-  int type, hwaddr *pphys, int *pprot,
-  HPPATLBEntry **tlb_entry);
+  int type, hwaddr *pphys, int *pprot);
 void hppa_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
  vaddr addr, unsigned size,
  MMUAccessType access_type,
diff --git a/target/hppa/int_helper.c b/target/hppa/int_helper.c
index 97e5f0b9a7..b82f32fd12 100644
--- a/target/hppa/int_helper.c
+++ b/target/hppa/int_helper.c
@@ -167,7 +167,7 @@ void hppa_cpu_do_interrupt(CPUState *cs)
 
 vaddr = hppa_form_gva_psw(old_psw, env->iasq_f, vaddr);
 t = hppa_get_physical_address(env, vaddr, MMU_KERNEL_IDX,
-  0, , , NULL);
+  0, , );
 if (t >= 0) {
 /* We can't re-load the instruction.  */
 env->cr[CR_IIR] = 0;
diff --git a/target/hppa/mem_helper.c b/target/hppa/mem_helper.c
index ca7bbe0a7c..2929226874 100644
--- a/target/hppa/mem_helper.c
+++ b/target/hppa/mem_helper.c
@@ -197,18 +197,13 @@ static int match_prot_id64(CPUHPPAState *env, uint32_t 
access_id)
 }
 
 int hppa_get_physical_address(CPUHPPAState *env, vaddr addr, int mmu_idx,
-  int type, hwaddr *pphys, int *pprot,
-  HPPATLBEntry **tlb_entry)
+  int type, hwaddr *pphys, int *pprot)
 {
 hwaddr phys;
 int prot, r_prot, w_prot, x_prot, priv;
 HPPATLBEntry *ent;
 int ret = -1;
 
-if (tlb_entry) {
-*tlb_entry = NULL;
-}
-
 /* Virtual translation disabled.  Map absolute to physical.  */
 if (MMU_IDX_MMU_DISABLED(mmu_idx)) {
 switch (mmu_idx) {
@@ -238,10 +233,6 @@ int hppa_get_physical_address(CPUHPPAState *env, vaddr 
addr, int mmu_idx,
 goto egress;
 }
 
-if (tlb_entry) {
-*tlb_entry = ent;
-}
-
 /* We now know the physical address.  */
 phys = ent->pa + (addr - ent->itree.start);
 
@@ -350,7 +341,7 @@ hwaddr hppa_cpu_get_phys_page_debug(CPUState *cs, vaddr 
addr)
cpu->env.psw & PSW_W ? MMU_ABS_W_IDX : MMU_ABS_IDX);
 
 excp = hppa_get_physical_address(&cpu->env, addr, mmu_idx, 0,
- &phys, &prot, NULL);
+ &phys, &prot);
 
 /* Since we're translating for debugging, the only error that is a
hard error is no translation at all.  Otherwise, while a real cpu
@@ -432,7 +423,6 @@ bool hppa_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
 {
 HPPACPU *cpu = HPPA_CPU(cs);
CPUHPPAState *env = &cpu->env;
-HPPATLBEntry *ent;
 int prot, excp, a_prot;
 hwaddr phys;
 
@@ -448,8 +438,7 @@ bool hppa_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
 break;
 }
 
-excp = hppa_get_physical_address(env, addr, mmu_idx,
- a_prot, &phys, &prot, &ent);
+excp = hppa_get_physical_address(env, addr, mmu_idx, a_prot, &phys, &prot);
 if (unlikely(excp >= 0)) {
 if (probe) {
 return false;
@@ -690,7 +679,7 @@ target_ulong HELPER(lpa)(CPUHPPAState *env, target_ulong 
addr)
 int prot, excp;
 
 excp = hppa_get_physical_address(env, addr, MMU_KERNEL_IDX, 0,
- &phys, &prot, NULL);
+ &phys, &prot);
 if (excp >= 0) {
 if (excp == EXCP_DTLB_MISS) {
 excp = EXCP_NA_DTLB_MISS;
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
index 66cad78a57..7f79196fff 100644
--- a/target/hppa/op_helper.c
+++ b/target/hppa/op_helper.c
@@ -334,8 +334,7 @@ target_ulong HELPER(probe)(CPUHPPAState *env, target_ulong 
addr,
 }
 
 mmu_idx = PRIV_P_TO_MMU_IDX(level, env->psw & PSW_P);
-excp = hppa_get_physical_address(env, addr, mmu_idx, 0, ,
- , NULL);
+excp = hppa_get_physical_address(env, addr, mmu_idx, 0, , );
 if (excp >= 0) {
 cpu_restore_state(env_cpu(env), GETPC());
 hppa_set_ior_and_isr(env, addr, MMU_IDX_MMU_DISABLED(mmu_idx));
-- 
2.34.1




[PATCH v2 20/45] target/hppa: Use TCG_COND_TST* in do_cond

2024-05-13 Thread Richard Henderson
We can directly test bits of a 32-bit comparison without
zero or sign-extending an intermediate result.
We can directly test bit 0 for odd/even.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 64 ++---
 1 file changed, 28 insertions(+), 36 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 5e32d985c9..421b0df9d4 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -775,28 +775,36 @@ static bool cond_need_cb(int c)
 static DisasCond do_cond(DisasContext *ctx, unsigned cf, bool d,
  TCGv_i64 res, TCGv_i64 uv, TCGv_i64 sv)
 {
+TCGCond sign_cond, zero_cond;
+uint64_t sign_imm, zero_imm;
 DisasCond cond;
 TCGv_i64 tmp;
 
+if (d) {
+/* 64-bit condition. */
+sign_imm = 0;
+sign_cond = TCG_COND_LT;
+zero_imm = 0;
+zero_cond = TCG_COND_EQ;
+} else {
+/* 32-bit condition. */
+sign_imm = 1ull << 31;
+sign_cond = TCG_COND_TSTNE;
+zero_imm = UINT32_MAX;
+zero_cond = TCG_COND_TSTEQ;
+}
+
 switch (cf >> 1) {
 case 0: /* Never / TR(0 / 1) */
 cond = cond_make_f();
 break;
 case 1: /* = / <>(Z / !Z) */
-if (!d) {
-tmp = tcg_temp_new_i64();
-tcg_gen_ext32u_i64(tmp, res);
-res = tmp;
-}
-cond = cond_make_vi(TCG_COND_EQ, res, 0);
+cond = cond_make_vi(zero_cond, res, zero_imm);
 break;
 case 2: /* < / >=(N ^ V / !(N ^ V) */
 tmp = tcg_temp_new_i64();
 tcg_gen_xor_i64(tmp, res, sv);
-if (!d) {
-tcg_gen_ext32s_i64(tmp, tmp);
-}
-cond = cond_make_ti(TCG_COND_LT, tmp, 0);
+cond = cond_make_ti(sign_cond, tmp, sign_imm);
 break;
 case 3: /* <= / >(N ^ V) | Z / !((N ^ V) | Z) */
 /*
@@ -804,21 +812,15 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
  *   (N ^ V) | Z
  *   ((res < 0) ^ (sv < 0)) | !res
  *   ((res ^ sv) < 0) | !res
- *   (~(res ^ sv) >= 0) | !res
- *   !(~(res ^ sv) >> 31) | !res
- *   !(~(res ^ sv) >> 31 & res)
+ *   ((res ^ sv) < 0 ? 1 : !res)
+ *   !((res ^ sv) < 0 ? 0 : res)
  */
 tmp = tcg_temp_new_i64();
-tcg_gen_eqv_i64(tmp, res, sv);
-if (!d) {
-tcg_gen_sextract_i64(tmp, tmp, 31, 1);
-tcg_gen_and_i64(tmp, tmp, res);
-tcg_gen_ext32u_i64(tmp, tmp);
-} else {
-tcg_gen_sari_i64(tmp, tmp, 63);
-tcg_gen_and_i64(tmp, tmp, res);
-}
-cond = cond_make_ti(TCG_COND_EQ, tmp, 0);
+tcg_gen_xor_i64(tmp, res, sv);
+tcg_gen_movcond_i64(sign_cond, tmp,
+tmp, tcg_constant_i64(sign_imm),
+ctx->zero, res);
+cond = cond_make_ti(zero_cond, tmp, zero_imm);
 break;
 case 4: /* NUV / UV  (!UV / UV) */
 cond = cond_make_vi(TCG_COND_EQ, uv, 0);
@@ -826,23 +828,13 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 case 5: /* ZNV / VNZ (!UV | Z / UV & !Z) */
 tmp = tcg_temp_new_i64();
 tcg_gen_movcond_i64(TCG_COND_EQ, tmp, uv, ctx->zero, ctx->zero, res);
-if (!d) {
-tcg_gen_ext32u_i64(tmp, tmp);
-}
-cond = cond_make_ti(TCG_COND_EQ, tmp, 0);
+cond = cond_make_ti(zero_cond, tmp, zero_imm);
 break;
 case 6: /* SV / NSV  (V / !V) */
-if (!d) {
-tmp = tcg_temp_new_i64();
-tcg_gen_ext32s_i64(tmp, sv);
-sv = tmp;
-}
-cond = cond_make_ti(TCG_COND_LT, sv, 0);
+cond = cond_make_vi(sign_cond, sv, sign_imm);
 break;
 case 7: /* OD / EV */
-tmp = tcg_temp_new_i64();
-tcg_gen_andi_i64(tmp, res, 1);
-cond = cond_make_ti(TCG_COND_NE, tmp, 0);
+cond = cond_make_vi(TCG_COND_TSTNE, res, 1);
 break;
 default:
 g_assert_not_reached();
-- 
2.34.1




[PATCH v2 27/45] target/hppa: Remove cond_free

2024-05-13 Thread Richard Henderson
Now that we do not need to free tcg temporaries, the only
thing cond_free does is reset the condition to never.
Instead, simply write a new condition over the old, which
may be simply cond_make_f() for the never condition.

The do_*_cond functions do the right thing with c or cf == 0,
so there's no need for a special case anymore.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 102 +++-
 1 file changed, 27 insertions(+), 75 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ef62cd7e94..e06c14dd15 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -373,21 +373,6 @@ static DisasCond cond_make_vv(TCGCond c, TCGv_i64 a0, 
TCGv_i64 a1)
 return cond_make_tt(c, t0, t1);
 }
 
-static void cond_free(DisasCond *cond)
-{
-switch (cond->c) {
-default:
-cond->a0 = NULL;
-cond->a1 = NULL;
-/* fallthru */
-case TCG_COND_ALWAYS:
-cond->c = TCG_COND_NEVER;
-break;
-case TCG_COND_NEVER:
-break;
-}
-}
-
 static TCGv_i64 load_gpr(DisasContext *ctx, unsigned reg)
 {
 if (reg == 0) {
@@ -537,7 +522,7 @@ static void nullify_over(DisasContext *ctx)
 
 tcg_gen_brcond_i64(ctx->null_cond.c, ctx->null_cond.a0,
ctx->null_cond.a1, ctx->null_lab);
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 }
 }
 
@@ -555,7 +540,7 @@ static void nullify_save(DisasContext *ctx)
 ctx->null_cond.a0, ctx->null_cond.a1);
 ctx->psw_n_nonzero = true;
 }
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 }
 
 /* Set a PSW[N] to X.  The intention is that this is used immediately
@@ -1165,7 +1150,6 @@ static void do_add(DisasContext *ctx, unsigned rt, 
TCGv_i64 orig_in1,
 save_gpr(ctx, rt, dest);
 
 /* Install the new nullification.  */
-cond_free(&ctx->null_cond);
 ctx->null_cond = cond;
 }
 
@@ -1262,7 +1246,6 @@ static void do_sub(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
 save_gpr(ctx, rt, dest);
 
 /* Install the new nullification.  */
-cond_free(&ctx->null_cond);
 ctx->null_cond = cond;
 }
 
@@ -1317,7 +1300,6 @@ static void do_cmpclr(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
 save_gpr(ctx, rt, dest);
 
 /* Install the new nullification.  */
-cond_free(&ctx->null_cond);
 ctx->null_cond = cond;
 }
 
@@ -1332,10 +1314,7 @@ static void do_log(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
 save_gpr(ctx, rt, dest);
 
 /* Install the new nullification.  */
-cond_free(&ctx->null_cond);
-if (cf) {
-ctx->null_cond = do_log_cond(ctx, cf, d, dest);
-}
+ctx->null_cond = do_log_cond(ctx, cf, d, dest);
 }
 
 static bool do_log_reg(DisasContext *ctx, arg_rrr_cf_d *a,
@@ -1430,7 +1409,6 @@ static void do_unit_addsub(DisasContext *ctx, unsigned 
rt, TCGv_i64 in1,
 }
 save_gpr(ctx, rt, dest);
 
-cond_free(&ctx->null_cond);
 ctx->null_cond = cond;
 }
 
@@ -1853,7 +1831,6 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 
 taken = gen_new_label();
 tcg_gen_brcond_i64(c, cond->a0, cond->a1, taken);
-cond_free(cond);
 
 /* Not taken: Condition not satisfied; nullify on backward branches. */
 n = is_n && disp < 0;
@@ -2035,7 +2012,7 @@ static void do_page_zero(DisasContext *ctx)
 
 static bool trans_nop(DisasContext *ctx, arg_nop *a)
 {
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2049,7 +2026,7 @@ static bool trans_sync(DisasContext *ctx, arg_sync *a)
 /* No point in nullifying the memory barrier.  */
 tcg_gen_mb(TCG_BAR_SC | TCG_MO_ALL);
 
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2061,7 +2038,7 @@ static bool trans_mfia(DisasContext *ctx, arg_mfia *a)
 tcg_gen_andi_i64(dest, dest, -4);
 
 save_gpr(ctx, a->t, dest);
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2076,7 +2053,7 @@ static bool trans_mfsp(DisasContext *ctx, arg_mfsp *a)
 
 save_gpr(ctx, rt, t0);
 
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2121,7 +2098,7 @@ static bool trans_mfctl(DisasContext *ctx, arg_mfctl *a)
 save_gpr(ctx, rt, tmp);
 
  done:
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2161,7 +2138,7 @@ static bool trans_mtctl(DisasContext *ctx, arg_mtctl *a)
 tcg_gen_andi_i64(tmp, reg, ctx->is_pa20 ? 63 : 31);
 save_or_nullify(ctx, cpu_sar, tmp);
 
-cond_free(&ctx->null_cond);
+ctx->null_cond = cond_make_f();
 return true;
 }
 
@@ -2235,7 +2212,7 @@ static bool trans_mtsarcm(DisasContext *ctx, arg_mtsarcm 
*a)
 tcg_gen_andi_i64(tmp, tmp, ctx->is_p

[PATCH v2 10/45] target/hppa: Skip nullified insns in unconditional dbranch path

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index a9196050dc..ca979f4137 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1805,11 +1805,17 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 install_link(ctx, link, false);
-ctx->iaoq_n = dest;
-ctx->iaoq_n_var = NULL;
 if (is_n) {
+if (use_nullify_skip(ctx)) {
+nullify_set(ctx, 0);
+gen_goto_tb(ctx, 0, dest, dest + 4);
+ctx->base.is_jmp = DISAS_NORETURN;
+return true;
+}
 ctx->null_cond.c = TCG_COND_ALWAYS;
 }
+ctx->iaoq_n = dest;
+ctx->iaoq_n_var = NULL;
 } else {
 nullify_over(ctx);
 
-- 
2.34.1




[PATCH v2 32/45] target/hppa: Store full iaoq_f and page offset of iaoq_b in TB

2024-05-13 Thread Richard Henderson
In preparation for CF_PCREL, store the iaoq_f in 3 parts: high
bits in cs_base, middle bits in pc, and low bits in priv.
For iaoq_b, set a bit for either of space or page differing,
else the page offset.

Install iaq entries before goto_tb. The change to not record
the full direct branch difference in the TB means that we have to
store at least iaoq_b before goto_tb.  But since we'll need
both updated before goto_tb for CF_PCREL, do that now.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h   |  2 ++
 target/hppa/cpu.c   | 72 ++---
 target/hppa/translate.c | 29 +
 3 files changed, 48 insertions(+), 55 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index 5a1e720bb6..1232a4cef2 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -341,6 +341,8 @@ hwaddr hppa_abs_to_phys_pa2_w1(vaddr addr);
 #define TB_FLAG_SR_SAME PSW_I
 #define TB_FLAG_PRIV_SHIFT  8
 #define TB_FLAG_UNALIGN 0x400
+#define CS_BASE_DIFFPAGE(1 << 12)
+#define CS_BASE_DIFFSPACE   (1 << 13)
 
 void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
   uint64_t *cs_base, uint32_t *pflags);
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index a007de5521..003af63e20 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -48,36 +48,43 @@ static vaddr hppa_cpu_get_pc(CPUState *cs)
 }
 
 void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
-  uint64_t *cs_base, uint32_t *pflags)
+  uint64_t *pcsbase, uint32_t *pflags)
 {
 uint32_t flags = env->psw_n * PSW_N;
+uint64_t cs_base = 0;
+
+/*
+ * TB lookup assumes that PC contains the complete virtual address.
+ * If we leave space+offset separate, we'll get ITLB misses to an
+ * incomplete virtual address.  This also means that we must separate
+ * out current cpu privilege from the low bits of IAOQ_F.
+ */
+*pc = hppa_cpu_get_pc(env_cpu(env));
+flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
+
+if (hppa_is_pa20(env)) {
+cs_base = env->iaoq_f & MAKE_64BIT_MASK(32, 32);
+}
+
+/*
+ * The only really interesting case is if IAQ_Back is on the same page
+ * as IAQ_Front, so that we can use goto_tb between the blocks.  In all
+ * other cases, we'll be ending the TranslationBlock with one insn and
+ * not linking between them.
+ */
+if (env->iasq_f != env->iasq_b) {
+cs_base |= CS_BASE_DIFFSPACE;
+} else if ((env->iaoq_f ^ env->iaoq_b) & TARGET_PAGE_MASK) {
+cs_base |= CS_BASE_DIFFPAGE;
+} else {
+cs_base |= env->iaoq_b & ~TARGET_PAGE_MASK;
+}
 
-/* TB lookup assumes that PC contains the complete virtual address.
-   If we leave space+offset separate, we'll get ITLB misses to an
-   incomplete virtual address.  This also means that we must separate
-   out current cpu privilege from the low bits of IAOQ_F.  */
 #ifdef CONFIG_USER_ONLY
-*pc = env->iaoq_f & -4;
-*cs_base = env->iaoq_b & -4;
 flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
 #else
 /* ??? E, T, H, L, B bits need to be here, when implemented.  */
 flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
-flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
-
-*pc = hppa_cpu_get_pc(env_cpu(env));
-*cs_base = env->iasq_f;
-
-/* Insert a difference between IAOQ_B and IAOQ_F within the otherwise zero
-   low 32-bits of CS_BASE.  This will succeed for all direct branches,
-   which is the primary case we care about -- using goto_tb within a page.
-   Failure is indicated by a zero difference.  */
-if (env->iasq_f == env->iasq_b) {
-target_long diff = env->iaoq_b - env->iaoq_f;
-if (diff == (int32_t)diff) {
-*cs_base |= (uint32_t)diff;
-}
-}
 if ((env->sr[4] == env->sr[5])
 & (env->sr[4] == env->sr[6])
 & (env->sr[4] == env->sr[7])) {
@@ -85,6 +92,7 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 }
 #endif
 
+*pcsbase = cs_base;
 *pflags = flags;
 }
 
@@ -93,25 +101,7 @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
 {
 HPPACPU *cpu = HPPA_CPU(cs);
 
-tcg_debug_assert(!tcg_cflags_has(cs, CF_PCREL));
-
-#ifdef CONFIG_USER_ONLY
-cpu->env.iaoq_f = tb->pc | PRIV_USER;
-cpu->env.iaoq_b = tb->cs_base | PRIV_USER;
-#else
-/* Recover the IAOQ values from the GVA + PRIV.  */
-uint32_t priv = (tb->flags >> TB_FLAG_PRIV_SHIFT) & 3;
-target_ulong cs_base = tb->cs_base;
-target_ulong iasq_f = cs_base & ~0xull;
-int32_t diff = cs_base;
-
-cpu->env.iasq_f = iasq_f;
-cpu->env.iaoq_f = (tb->pc & ~iasq_f) + priv;
-if (diff) {
-cpu->env.iaoq_b = cpu->env.

[PATCH v2 12/45] target/hppa: Add IASQ entries to DisasContext

2024-05-13 Thread Richard Henderson
Add a variable to track space changes to IAQ.  So far, no such changes
are introduced, but the new checks vs ctx->iasq_b may eliminate an
unnecessary copy to cpu_iasq_f with e.g. BLR.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 39 ++-
 1 file changed, 30 insertions(+), 9 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 13a48d1b6c..d24220c60f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -50,6 +50,13 @@ typedef struct DisasContext {
 uint64_t iaoq_b;
 uint64_t iaoq_n;
 TCGv_i64 iaoq_n_var;
+/*
+ * Null when IASQ_Back unchanged from IASQ_Front,
+ * or cpu_iasq_b, when IASQ_Back has been changed.
+ */
+TCGv_i64 iasq_b;
+/* Null when IASQ_Next unchanged from IASQ_Back, or set by branch. */
+TCGv_i64 iasq_n;
 
 DisasCond null_cond;
 TCGLabel *null_lab;
@@ -3916,12 +3923,12 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 if (a->n && use_nullify_skip(ctx)) {
 install_iaq_entries(ctx, -1, tmp, -1, NULL);
 tcg_gen_mov_i64(cpu_iasq_f, new_spc);
-tcg_gen_mov_i64(cpu_iasq_b, cpu_iasq_f);
+tcg_gen_mov_i64(cpu_iasq_b, new_spc);
 nullify_set(ctx, 0);
 } else {
 install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, tmp);
-if (ctx->iaoq_b == -1) {
-tcg_gen_mov_i64(cpu_iasq_f, cpu_iasq_b);
+if (ctx->iasq_b) {
+tcg_gen_mov_i64(cpu_iasq_f, ctx->iasq_b);
 }
 tcg_gen_mov_i64(cpu_iasq_b, new_spc);
 nullify_set(ctx, a->n);
@@ -4035,8 +4042,8 @@ static bool trans_bve(DisasContext *ctx, arg_bve *a)
 
 install_link(ctx, a->l, false);
 install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, dest);
-if (ctx->iaoq_b == -1) {
-tcg_gen_mov_i64(cpu_iasq_f, cpu_iasq_b);
+if (ctx->iasq_b) {
+tcg_gen_mov_i64(cpu_iasq_f, ctx->iasq_b);
 }
 tcg_gen_mov_i64(cpu_iasq_b, space_select(ctx, 0, dest));
 nullify_set(ctx, a->n);
@@ -4617,6 +4624,7 @@ static void hppa_tr_init_disas_context(DisasContextBase 
*dcbase, CPUState *cs)
 ctx->mmu_idx = MMU_USER_IDX;
 ctx->iaoq_f = ctx->base.pc_first | ctx->privilege;
 ctx->iaoq_b = ctx->base.tb->cs_base | ctx->privilege;
+ctx->iasq_b = NULL;
 ctx->unalign = (ctx->tb_flags & TB_FLAG_UNALIGN ? MO_UNALN : MO_ALIGN);
 #else
 ctx->privilege = (ctx->tb_flags >> TB_FLAG_PRIV_SHIFT) & 3;
@@ -4631,6 +4639,7 @@ static void hppa_tr_init_disas_context(DisasContextBase 
*dcbase, CPUState *cs)
 
 ctx->iaoq_f = (ctx->base.pc_first & ~iasq_f) + ctx->privilege;
 ctx->iaoq_b = (diff ? ctx->iaoq_f + diff : -1);
+ctx->iasq_b = (diff ? NULL : cpu_iasq_b);
 #endif
 
 ctx->zero = tcg_constant_i64(0);
@@ -4683,6 +4692,7 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 
 /* Set up the IA queue for the next insn.
This will be overwritten by a branch.  */
+ctx->iasq_n = NULL;
 ctx->iaoq_n_var = NULL;
 ctx->iaoq_n = ctx->iaoq_b == -1 ? -1 : ctx->iaoq_b + 4;
 
@@ -4705,7 +4715,7 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 return;
 }
 /* Note this also detects a priority change. */
-if (ctx->iaoq_b != ctx->iaoq_f + 4) {
+if (ctx->iaoq_b != ctx->iaoq_f + 4 || ctx->iasq_b) {
 ctx->base.is_jmp = DISAS_IAQ_N_STALE;
 return;
 }
@@ -4725,6 +4735,10 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
  gva_offset_mask(ctx->tb_flags));
 }
 }
+if (ctx->iasq_n) {
+tcg_gen_mov_i64(cpu_iasq_b, ctx->iasq_n);
+ctx->iasq_b = cpu_iasq_b;
+}
 }
 
 static void hppa_tr_tb_stop(DisasContextBase *dcbase, CPUState *cs)
@@ -4733,14 +4747,15 @@ static void hppa_tr_tb_stop(DisasContextBase *dcbase, 
CPUState *cs)
 DisasJumpType is_jmp = ctx->base.is_jmp;
 uint64_t fi, bi;
 TCGv_i64 fv, bv;
-TCGv_i64 fs;
+TCGv_i64 fs, bs;
 
 /* Assume the insn queue has not been advanced. */
 fi = ctx->iaoq_b;
 fv = cpu_iaoq_b;
-fs = fi == -1 ? cpu_iasq_b : NULL;
+fs = ctx->iasq_b;
 bi = ctx->iaoq_n;
 bv = ctx->iaoq_n_var;
+bs = ctx->iasq_n;
 
 switch (is_jmp) {
 case DISAS_NORETURN:
@@ -4749,12 +4764,15 @@ static void hppa_tr_tb_stop(DisasContextBase *dcbase, 
CPUState *cs)
 /* The insn queue has not been advanced. */
 bi = fi;
 bv = fv;
+bs = fs;
 fi = ctx->iaoq_f;
 fv = NULL;
 fs = NULL;
 /* FALLTHRU */
 case DISAS_IAQ_N_STALE:
-if (use_goto_tb(ctx, fi, bi)
+if (fs == NULL
+&& bs == NULL
+&& use_goto

[PATCH v2 44/45] target/hppa: Log cpu state at interrupt

2024-05-13 Thread Richard Henderson
This contains all of the information logged before, plus more.

Signed-off-by: Richard Henderson 
---
 target/hppa/int_helper.c | 27 ++-
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/target/hppa/int_helper.c b/target/hppa/int_helper.c
index b82f32fd12..391f32f27d 100644
--- a/target/hppa/int_helper.c
+++ b/target/hppa/int_helper.c
@@ -241,21 +241,22 @@ void hppa_cpu_do_interrupt(CPUState *cs)
 [EXCP_SYSCALL_LWS]   = "syscall-lws",
 [EXCP_TOC]   = "TOC (transfer of control)",
 };
-static int count;
-const char *name = NULL;
-char unknown[16];
 
-if (i >= 0 && i < ARRAY_SIZE(names)) {
-name = names[i];
+FILE *logfile = qemu_log_trylock();
+if (logfile) {
+const char *name = NULL;
+
+if (i >= 0 && i < ARRAY_SIZE(names)) {
+name = names[i];
+}
+if (name) {
+fprintf(logfile, "INT: cpu %d %s\n", cs->cpu_index, name);
+} else {
+fprintf(logfile, "INT: cpu %d unknown %d\n", cs->cpu_index, i);
+}
+hppa_cpu_dump_state(cs, logfile, 0);
+qemu_log_unlock(logfile);
 }
-if (!name) {
-snprintf(unknown, sizeof(unknown), "unknown %d", i);
-name = unknown;
-}
-qemu_log("INT %6d: %s @ " TARGET_FMT_lx ":" TARGET_FMT_lx
- " for " TARGET_FMT_lx ":" TARGET_FMT_lx "\n",
- ++count, name, env->cr[CR_IIASQ], env->cr[CR_IIAOQ],
- env->cr[CR_ISR], env->cr[CR_IOR]);
 }
 cs->exception_index = -1;
 }
-- 
2.34.1




[PATCH v2 28/45] target/hppa: Introduce DisasDelayException

2024-05-13 Thread Richard Henderson
Allow an exception to be emitted at the end of the TranslationBlock,
leaving only the conditional branch inline.  Use it for simple
exception instructions like break, which happen to be nullified.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 60 +
 1 file changed, 55 insertions(+), 5 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index e06c14dd15..e75e7e5b54 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -51,6 +51,17 @@ typedef struct DisasIAQE {
 int64_t disp;
 } DisasIAQE;
 
+typedef struct DisasDelayException {
+struct DisasDelayException *next;
+TCGLabel *lab;
+uint32_t insn;
+bool set_iir;
+int8_t set_n;
+uint8_t excp;
+/* Saved state at parent insn. */
+DisasIAQE iaq_f, iaq_b;
+} DisasDelayException;
+
 typedef struct DisasContext {
 DisasContextBase base;
 CPUState *cs;
@@ -66,6 +77,7 @@ typedef struct DisasContext {
 DisasCond null_cond;
 TCGLabel *null_lab;
 
+DisasDelayException *delay_excp_list;
 TCGv_i64 zero;
 
 uint32_t insn;
@@ -683,13 +695,38 @@ static void gen_excp(DisasContext *ctx, int exception)
 ctx->base.is_jmp = DISAS_NORETURN;
 }
 
+static DisasDelayException *delay_excp(DisasContext *ctx, uint8_t excp)
+{
+DisasDelayException *e = tcg_malloc(sizeof(DisasDelayException));
+
+memset(e, 0, sizeof(*e));
+e->next = ctx->delay_excp_list;
+ctx->delay_excp_list = e;
+
+e->lab = gen_new_label();
+e->insn = ctx->insn;
+e->set_iir = true;
+e->set_n = ctx->psw_n_nonzero ? 0 : -1;
+e->excp = excp;
+e->iaq_f = ctx->iaq_f;
+e->iaq_b = ctx->iaq_b;
+
+return e;
+}
+
 static bool gen_excp_iir(DisasContext *ctx, int exc)
 {
-nullify_over(ctx);
-tcg_gen_st_i64(tcg_constant_i64(ctx->insn),
-   tcg_env, offsetof(CPUHPPAState, cr[CR_IIR]));
-gen_excp(ctx, exc);
-return nullify_end(ctx);
+if (ctx->null_cond.c == TCG_COND_NEVER) {
+tcg_gen_st_i64(tcg_constant_i64(ctx->insn),
+   tcg_env, offsetof(CPUHPPAState, cr[CR_IIR]));
+gen_excp(ctx, exc);
+} else {
+DisasDelayException *e = delay_excp(ctx, exc);
+tcg_gen_brcond_i64(tcg_invert_cond(ctx->null_cond.c),
+   ctx->null_cond.a0, ctx->null_cond.a1, e->lab);
+ctx->null_cond = cond_make_f();
+}
+return true;
 }
 
 static bool gen_illegal(DisasContext *ctx)
@@ -4696,6 +4733,19 @@ static void hppa_tr_tb_stop(DisasContextBase *dcbase, 
CPUState *cs)
 default:
 g_assert_not_reached();
 }
+
+for (DisasDelayException *e = ctx->delay_excp_list; e ; e = e->next) {
+gen_set_label(e->lab);
+if (e->set_n >= 0) {
+tcg_gen_movi_i64(cpu_psw_n, e->set_n);
+}
+if (e->set_iir) {
+tcg_gen_st_i64(tcg_constant_i64(e->insn), tcg_env,
+   offsetof(CPUHPPAState, cr[CR_IIR]));
+}
+install_iaq_entries(ctx, &e->iaq_f, &e->iaq_b);
+gen_excp_1(e->excp);
+}
 }
 
 static void hppa_tr_disas_log(const DisasContextBase *dcbase,
-- 
2.34.1




[PATCH v2 11/45] target/hppa: Simplify TB end

2024-05-13 Thread Richard Henderson
Minimize the amount of code in hppa_tr_translate_insn advancing the
insn queue for the next insn.  Move the goto_tb path to hppa_tr_tb_stop.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 109 +---
 1 file changed, 57 insertions(+), 52 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ca979f4137..13a48d1b6c 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -4699,54 +4699,31 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 }
 }
 
-/* Advance the insn queue.  Note that this check also detects
-   a priority change within the instruction queue.  */
-if (ret == DISAS_NEXT && ctx->iaoq_b != ctx->iaoq_f + 4) {
-if (use_goto_tb(ctx, ctx->iaoq_b, ctx->iaoq_n)
-&& (ctx->null_cond.c == TCG_COND_NEVER
-|| ctx->null_cond.c == TCG_COND_ALWAYS)) {
-nullify_set(ctx, ctx->null_cond.c == TCG_COND_ALWAYS);
-gen_goto_tb(ctx, 0, ctx->iaoq_b, ctx->iaoq_n);
-ctx->base.is_jmp = ret = DISAS_NORETURN;
-} else {
-ctx->base.is_jmp = ret = DISAS_IAQ_N_STALE;
-}
+/* If the TranslationBlock must end, do so. */
+ctx->base.pc_next += 4;
+if (ret != DISAS_NEXT) {
+return;
 }
+/* Note this also detects a priority change. */
+if (ctx->iaoq_b != ctx->iaoq_f + 4) {
+ctx->base.is_jmp = DISAS_IAQ_N_STALE;
+return;
+}
+
+/*
+ * Advance the insn queue.
+ * The only exit now is DISAS_TOO_MANY from the translator loop.
+ */
 ctx->iaoq_f = ctx->iaoq_b;
 ctx->iaoq_b = ctx->iaoq_n;
-ctx->base.pc_next += 4;
-
-switch (ret) {
-case DISAS_NORETURN:
-case DISAS_IAQ_N_UPDATED:
-break;
-
-case DISAS_NEXT:
-case DISAS_IAQ_N_STALE:
-case DISAS_IAQ_N_STALE_EXIT:
-if (ctx->iaoq_f == -1) {
-install_iaq_entries(ctx, -1, cpu_iaoq_b,
-ctx->iaoq_n, ctx->iaoq_n_var);
-#ifndef CONFIG_USER_ONLY
-tcg_gen_mov_i64(cpu_iasq_f, cpu_iasq_b);
-#endif
-nullify_save(ctx);
-ctx->base.is_jmp = (ret == DISAS_IAQ_N_STALE_EXIT
-? DISAS_EXIT
-: DISAS_IAQ_N_UPDATED);
-} else if (ctx->iaoq_b == -1) {
-if (ctx->iaoq_n_var) {
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, ctx->iaoq_n_var);
-} else {
-tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_b, 4);
-tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
- gva_offset_mask(ctx->tb_flags));
-}
+if (ctx->iaoq_b == -1) {
+if (ctx->iaoq_n_var) {
+copy_iaoq_entry(ctx, cpu_iaoq_b, -1, ctx->iaoq_n_var);
+} else {
+tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_b, 4);
+tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
+ gva_offset_mask(ctx->tb_flags));
 }
-break;
-
-default:
-g_assert_not_reached();
 }
 }
 
@@ -4754,23 +4731,51 @@ static void hppa_tr_tb_stop(DisasContextBase *dcbase, 
CPUState *cs)
 {
 DisasContext *ctx = container_of(dcbase, DisasContext, base);
 DisasJumpType is_jmp = ctx->base.is_jmp;
+uint64_t fi, bi;
+TCGv_i64 fv, bv;
+TCGv_i64 fs;
+
+/* Assume the insn queue has not been advanced. */
+fi = ctx->iaoq_b;
+fv = cpu_iaoq_b;
+fs = fi == -1 ? cpu_iasq_b : NULL;
+bi = ctx->iaoq_n;
+bv = ctx->iaoq_n_var;
 
 switch (is_jmp) {
 case DISAS_NORETURN:
 break;
 case DISAS_TOO_MANY:
-case DISAS_IAQ_N_STALE:
-case DISAS_IAQ_N_STALE_EXIT:
-install_iaq_entries(ctx, ctx->iaoq_f, cpu_iaoq_f,
-ctx->iaoq_b, cpu_iaoq_b);
-nullify_save(ctx);
+/* The insn queue has not been advanced. */
+bi = fi;
+bv = fv;
+fi = ctx->iaoq_f;
+fv = NULL;
+fs = NULL;
 /* FALLTHRU */
-case DISAS_IAQ_N_UPDATED:
-if (is_jmp != DISAS_IAQ_N_STALE_EXIT) {
-tcg_gen_lookup_and_goto_ptr();
+case DISAS_IAQ_N_STALE:
+if (use_goto_tb(ctx, fi, bi)
+&& (ctx->null_cond.c == TCG_COND_NEVER
+|| ctx->null_cond.c == TCG_COND_ALWAYS)) {
+nullify_set(ctx, ctx->null_cond.c == TCG_COND_ALWAYS);
+gen_goto_tb(ctx, 0, fi, bi);
 break;
 }
 /* FALLTHRU */
+case DISAS_IAQ_N_STALE_EXIT:
+install_iaq_entries(ctx, fi, fv, bi, bv);
+if (fs) {
+tcg_gen_mov_i64(cpu_iasq_f, fs);
+}
+nullify_save(ctx);
+if (is_jmp == DISAS_IAQ_N_STALE_EXIT) {
+tcg_gen_exit_tb(NULL, 0);
+   

[PATCH v2 45/45] target/hppa: Log cpu state on return-from-interrupt

2024-05-13 Thread Richard Henderson
Inverse of the logging on taking an interrupt.

Signed-off-by: Richard Henderson 
---
 target/hppa/sys_helper.c | 12 
 1 file changed, 12 insertions(+)

diff --git a/target/hppa/sys_helper.c b/target/hppa/sys_helper.c
index 22d6c89964..9b43b556fd 100644
--- a/target/hppa/sys_helper.c
+++ b/target/hppa/sys_helper.c
@@ -18,6 +18,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/log.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
@@ -93,6 +94,17 @@ void HELPER(rfi)(CPUHPPAState *env)
 env->iaoq_b = env->cr_back[1];
 env->iasq_f = (env->cr[CR_IIASQ] << 32) & ~(env->iaoq_f & mask);
 env->iasq_b = (env->cr_back[0] << 32) & ~(env->iaoq_b & mask);
+
+if (qemu_loglevel_mask(CPU_LOG_INT)) {
+FILE *logfile = qemu_log_trylock();
+if (logfile) {
+CPUState *cs = env_cpu(env);
+
+fprintf(logfile, "RFI: cpu %d\n", cs->cpu_index);
+hppa_cpu_dump_state(cs, logfile, 0);
+qemu_log_unlock(logfile);
+}
+}
 }
 
 static void getshadowregs(CPUHPPAState *env)
-- 
2.34.1




[PATCH v2 22/45] target/hppa: Use TCG_COND_TST* in do_unit_zero_cond

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index e4e8034c5f..50cc6decd8 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1005,9 +1005,8 @@ static DisasCond do_unit_zero_cond(unsigned cf, bool d, 
TCGv_i64 res)
 tmp = tcg_temp_new_i64();
 tcg_gen_subi_i64(tmp, res, ones);
 tcg_gen_andc_i64(tmp, tmp, res);
-tcg_gen_andi_i64(tmp, tmp, sgns);
 
-return cond_make_ti(cf & 1 ? TCG_COND_EQ : TCG_COND_NE, tmp, 0);
+return cond_make_ti(cf & 1 ? TCG_COND_TSTEQ : TCG_COND_TSTNE, tmp, sgns);
 }
 
 static TCGv_i64 get_carry(DisasContext *ctx, bool d,
-- 
2.34.1




[PATCH v2 41/45] target/hppa: Implement CF_PCREL

2024-05-13 Thread Richard Henderson
Now that the groundwork has been laid, enabling CF_PCREL within the
translator proper is a simple matter of updating copy_iaoq_entry
and install_iaq_entries.

We also need to modify the unwind info, since we no longer have
absolute addresses to install.

As expected, this reduces the runtime overhead of compilation when
running a Linux kernel with address space randomization enabled.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.c   | 19 ++--
 target/hppa/translate.c | 68 -
 2 files changed, 55 insertions(+), 32 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 5f0df0697a..f0507874ce 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -62,10 +62,6 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 *pc = hppa_cpu_get_pc(env_cpu(env));
 flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
 
-if (hppa_is_pa20(env)) {
-cs_base = env->iaoq_f & MAKE_64BIT_MASK(32, 32);
-}
-
 /*
  * The only really interesting case is if IAQ_Back is on the same page
  * as IAQ_Front, so that we can use goto_tb between the blocks.  In all
@@ -113,19 +109,19 @@ static void hppa_restore_state_to_opc(CPUState *cs,
   const TranslationBlock *tb,
   const uint64_t *data)
 {
-HPPACPU *cpu = HPPA_CPU(cs);
+CPUHPPAState *env = cpu_env(cs);
 
-cpu->env.iaoq_f = data[0];
-if (data[1] != (target_ulong)-1) {
-cpu->env.iaoq_b = data[1];
+env->iaoq_f = (env->iaoq_f & TARGET_PAGE_MASK) | data[0];
+if (data[1] != INT32_MIN) {
+env->iaoq_b = env->iaoq_f + data[1];
 }
-cpu->env.unwind_breg = data[2];
+env->unwind_breg = data[2];
 /*
  * Since we were executing the instruction at IAOQ_F, and took some
  * sort of action that provoked the cpu_restore_state, we can infer
  * that the instruction was not nullified.
  */
-cpu->env.psw_n = 0;
+env->psw_n = 0;
 }
 
 static bool hppa_cpu_has_work(CPUState *cs)
@@ -191,6 +187,9 @@ static void hppa_cpu_realizefn(DeviceState *dev, Error 
**errp)
 hppa_ptlbe(>env);
 }
 #endif
+
+/* Use pc-relative instructions always to simplify the translator. */
+tcg_cflags_set(cs, CF_PCREL);
 }
 
 static void hppa_cpu_initfn(Object *obj)
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index fa79116d5b..79e29d722f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -47,7 +47,7 @@ typedef struct DisasIAQE {
 TCGv_i64 space;
 /* IAOQ base; may be null for relative address. */
 TCGv_i64 base;
-/* IAOQ addend; if base is null, relative to ctx->iaoq_first. */
+/* IAOQ addend; if base is null, relative to cpu_iaoq_f. */
 int64_t disp;
 } DisasIAQE;
 
@@ -664,11 +664,7 @@ static DisasIAQE iaqe_next_absv(DisasContext *ctx, 
TCGv_i64 var)
 static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 dest,
 const DisasIAQE *src)
 {
-if (src->base == NULL) {
-tcg_gen_movi_i64(dest, ctx->iaoq_first + src->disp);
-} else {
-tcg_gen_addi_i64(dest, src->base, src->disp);
-}
+tcg_gen_addi_i64(dest, src->base ? : cpu_iaoq_f, src->disp);
 }
 
 static void install_iaq_entries(DisasContext *ctx, const DisasIAQE *f,
@@ -680,8 +676,28 @@ static void install_iaq_entries(DisasContext *ctx, const 
DisasIAQE *f,
 b_next = iaqe_incr(f, 4);
b = &b_next;
 }
-copy_iaoq_entry(ctx, cpu_iaoq_f, f);
-copy_iaoq_entry(ctx, cpu_iaoq_b, b);
+
+/*
+ * There is an edge case
+ *bv   r0(rN)
+ *b,l  disp,r0
+ * for which F will use cpu_iaoq_b (from the indirect branch),
+ * and B will use cpu_iaoq_f (from the direct branch).
+ * In this case we need an extra temporary.
+ */
+if (f->base != cpu_iaoq_b) {
+copy_iaoq_entry(ctx, cpu_iaoq_b, b);
+copy_iaoq_entry(ctx, cpu_iaoq_f, f);
+} else if (f->base == b->base) {
+copy_iaoq_entry(ctx, cpu_iaoq_f, f);
+tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_f, b->disp - f->disp);
+} else {
+TCGv_i64 tmp = tcg_temp_new_i64();
+copy_iaoq_entry(ctx, tmp, b);
+copy_iaoq_entry(ctx, cpu_iaoq_f, f);
+tcg_gen_mov_i64(cpu_iaoq_b, tmp);
+}
+
 if (f->space) {
 tcg_gen_mov_i64(cpu_iasq_f, f->space);
 }
@@ -3979,9 +3995,8 @@ static bool trans_b_gate(DisasContext *ctx, arg_b_gate *a)
 /* Adjust the dest offset for the privilege change from the PTE. */
 TCGv_i64 off = tcg_temp_new_i64();
 
-gen_helper_b_gate_priv(off, tcg_env,
-   tcg_constant_i64(ctx->iaoq_first
-+ ctx->iaq_f.disp));
+copy_iaoq_entry(ctx, off, &ctx->iaq_f);
+gen_helper_b_gate_priv(off, tcg_env,

[PATCH v2 31/45] linux-user/hppa: Force all code addresses to PRIV_USER

2024-05-13 Thread Richard Henderson
The kernel does this along the return path to user mode.

Signed-off-by: Richard Henderson 
---
 linux-user/hppa/target_cpu.h |  4 ++--
 target/hppa/cpu.h|  3 +++
 linux-user/elfload.c |  4 ++--
 linux-user/hppa/cpu_loop.c   | 14 +++---
 linux-user/hppa/signal.c |  6 --
 target/hppa/cpu.c|  7 +--
 target/hppa/gdbstub.c|  6 ++
 target/hppa/translate.c  |  4 ++--
 8 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/linux-user/hppa/target_cpu.h b/linux-user/hppa/target_cpu.h
index aacf3e9e02..4b84422a90 100644
--- a/linux-user/hppa/target_cpu.h
+++ b/linux-user/hppa/target_cpu.h
@@ -28,8 +28,8 @@ static inline void cpu_clone_regs_child(CPUHPPAState *env, 
target_ulong newsp,
 /* Indicate child in return value.  */
 env->gr[28] = 0;
 /* Return from the syscall.  */
-env->iaoq_f = env->gr[31];
-env->iaoq_b = env->gr[31] + 4;
+env->iaoq_f = env->gr[31] | PRIV_USER;
+env->iaoq_b = env->iaoq_f + 4;
 }
 
 static inline void cpu_clone_regs_parent(CPUHPPAState *env, unsigned flags)
diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index c37b4e12fb..5a1e720bb6 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -42,6 +42,9 @@
 #define MMU_IDX_TO_P(MIDX)  (((MIDX) - MMU_KERNEL_IDX) & 1)
 #define PRIV_P_TO_MMU_IDX(PRIV, P)  ((PRIV) * 2 + !!(P) + MMU_KERNEL_IDX)
 
+#define PRIV_KERNEL   0
+#define PRIV_USER 3
+
 #define TARGET_INSN_START_EXTRA_WORDS 2
 
 /* No need to flush MMU_ABS*_IDX  */
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index b473cda6b4..c1e1511ff2 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1887,8 +1887,8 @@ static inline void init_thread(struct target_pt_regs 
*regs,
 static inline void init_thread(struct target_pt_regs *regs,
struct image_info *infop)
 {
-regs->iaoq[0] = infop->entry;
-regs->iaoq[1] = infop->entry + 4;
+regs->iaoq[0] = infop->entry | PRIV_USER;
+regs->iaoq[1] = regs->iaoq[0] + 4;
 regs->gr[23] = 0;
 regs->gr[24] = infop->argv;
 regs->gr[25] = infop->argc;
diff --git a/linux-user/hppa/cpu_loop.c b/linux-user/hppa/cpu_loop.c
index d5232f37fe..bc093b8fe8 100644
--- a/linux-user/hppa/cpu_loop.c
+++ b/linux-user/hppa/cpu_loop.c
@@ -129,8 +129,8 @@ void cpu_loop(CPUHPPAState *env)
 default:
 env->gr[28] = ret;
 /* We arrived here by faking the gateway page.  Return.  */
-env->iaoq_f = env->gr[31];
-env->iaoq_b = env->gr[31] + 4;
+env->iaoq_f = env->gr[31] | PRIV_USER;
+env->iaoq_b = env->iaoq_f + 4;
 break;
 case -QEMU_ERESTARTSYS:
 case -QEMU_ESIGRETURN:
@@ -140,8 +140,8 @@ void cpu_loop(CPUHPPAState *env)
 case EXCP_SYSCALL_LWS:
 env->gr[21] = hppa_lws(env);
 /* We arrived here by faking the gateway page.  Return.  */
-env->iaoq_f = env->gr[31];
-env->iaoq_b = env->gr[31] + 4;
+env->iaoq_f = env->gr[31] | PRIV_USER;
+env->iaoq_b = env->iaoq_f + 4;
 break;
 case EXCP_IMP:
 force_sig_fault(TARGET_SIGSEGV, TARGET_SEGV_MAPERR, env->iaoq_f);
@@ -152,9 +152,9 @@ void cpu_loop(CPUHPPAState *env)
 case EXCP_PRIV_OPR:
 /* check for glibc ABORT_INSTRUCTION "iitlbp %r0,(%sr0, %r0)" */
 if (env->cr[CR_IIR] == 0x0400) {
-   force_sig_fault(TARGET_SIGILL, TARGET_ILL_ILLOPC, 
env->iaoq_f);
+force_sig_fault(TARGET_SIGILL, TARGET_ILL_ILLOPC, env->iaoq_f);
 } else {
-   force_sig_fault(TARGET_SIGILL, TARGET_ILL_PRVOPC, 
env->iaoq_f);
+force_sig_fault(TARGET_SIGILL, TARGET_ILL_PRVOPC, env->iaoq_f);
 }
 break;
 case EXCP_PRIV_REG:
@@ -170,7 +170,7 @@ void cpu_loop(CPUHPPAState *env)
 force_sig_fault(TARGET_SIGFPE, 0, env->iaoq_f);
 break;
 case EXCP_BREAK:
-force_sig_fault(TARGET_SIGTRAP, TARGET_TRAP_BRKPT, env->iaoq_f & 
~3);
+force_sig_fault(TARGET_SIGTRAP, TARGET_TRAP_BRKPT, env->iaoq_f);
 break;
 case EXCP_DEBUG:
 force_sig_fault(TARGET_SIGTRAP, TARGET_TRAP_BRKPT, env->iaoq_f);
diff --git a/linux-user/hppa/signal.c b/linux-user/hppa/signal.c
index 682ba25922..f6f094c960 100644
--- a/linux-user/hppa/signal.c
+++ b/linux-user/hppa/signal.c
@@ -101,7 +101,9 @@ static void restore_sigcontext(CPUArchState *env, struct 
target_sigcontext *sc)
 cpu_hppa_loaded_fr0(env);
 
 __get_user(env->iaoq_f, &sc->sc_iaoq[0]);
+env->iaoq_f |= PRIV_USER;
 __get_user(env->iaoq_b, &sc->sc_iaoq[1]);
+env->iaoq_b |= PRIV_USER;
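
The convention relied on throughout this patch is that HPPA instruction
offsets carry the privilege level in their low two bits, so user-mode code
addresses are simply ORed with PRIV_USER (3).  A minimal standalone sketch
of that encoding, using the PRIV_USER constant from the cpu.h hunk above
and an invented entry address, purely for illustration:

#include <assert.h>
#include <stdint.h>

#define PRIV_USER 3

static uint64_t to_user_iaoq(uint64_t entry)
{
    return entry | PRIV_USER;       /* tag the offset with user privilege */
}

int main(void)
{
    uint64_t iaoq_f = to_user_iaoq(0x10000);   /* hypothetical entry point */
    uint64_t iaoq_b = iaoq_f + 4;              /* back of the queue */

    assert((iaoq_f & 3) == PRIV_USER);         /* priv lives in bits 1:0 */
    assert((iaoq_b & 3) == PRIV_USER);         /* +4 preserves the tag */
    return 0;
}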

[PATCH v2 40/45] target/hppa: Adjust priv for B,GATE at runtime

2024-05-13 Thread Richard Henderson
Do not compile in the priv change based on the first
translation; look up the PTE at execution time.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h|  1 -
 target/hppa/helper.h |  1 +
 target/hppa/mem_helper.c | 34 +++---
 target/hppa/translate.c  | 36 +++-
 4 files changed, 47 insertions(+), 25 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index 78ab0adcd0..2bcb3b602b 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -380,7 +380,6 @@ void hppa_cpu_do_transaction_failed(CPUState *cs, hwaddr 
physaddr,
 extern const MemoryRegionOps hppa_io_eir_ops;
 extern const VMStateDescription vmstate_hppa_cpu;
 void hppa_cpu_alarm_timer(void *);
-int hppa_artype_for_page(CPUHPPAState *env, target_ulong vaddr);
 #endif
 G_NORETURN void hppa_dynamic_excp(CPUHPPAState *env, int excp, uintptr_t ra);
 
diff --git a/target/hppa/helper.h b/target/hppa/helper.h
index c12b48a04a..de411923d9 100644
--- a/target/hppa/helper.h
+++ b/target/hppa/helper.h
@@ -86,6 +86,7 @@ DEF_HELPER_1(halt, noreturn, env)
 DEF_HELPER_1(reset, noreturn, env)
 DEF_HELPER_1(rfi, void, env)
 DEF_HELPER_1(rfi_r, void, env)
+DEF_HELPER_FLAGS_2(b_gate_priv, TCG_CALL_NO_WG, i64, env, i64)
 DEF_HELPER_FLAGS_2(write_interval_timer, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_FLAGS_2(write_eirr, TCG_CALL_NO_RWG, void, env, tl)
 DEF_HELPER_FLAGS_2(swap_system_mask, TCG_CALL_NO_RWG, tl, env, tl)
diff --git a/target/hppa/mem_helper.c b/target/hppa/mem_helper.c
index 2929226874..b984f730aa 100644
--- a/target/hppa/mem_helper.c
+++ b/target/hppa/mem_helper.c
@@ -691,13 +691,6 @@ target_ulong HELPER(lpa)(CPUHPPAState *env, target_ulong 
addr)
 return phys;
 }
 
-/* Return the ar_type of the TLB at VADDR, or -1.  */
-int hppa_artype_for_page(CPUHPPAState *env, target_ulong vaddr)
-{
-HPPATLBEntry *ent = hppa_find_tlb(env, vaddr);
-return ent ? ent->ar_type : -1;
-}
-
 /*
  * diag_btlb() emulates the PDC PDC_BLOCK_TLB firmware call to
  * allow operating systems to modify the Block TLB (BTLB) entries.
@@ -793,3 +786,30 @@ void HELPER(diag_btlb)(CPUHPPAState *env)
 break;
 }
 }
+
+uint64_t HELPER(b_gate_priv)(CPUHPPAState *env, uint64_t iaoq_f)
+{
+uint64_t gva = hppa_form_gva(env, env->iasq_f, iaoq_f);
+HPPATLBEntry *ent = hppa_find_tlb(env, gva);
+
+if (ent == NULL) {
+raise_exception_with_ior(env, EXCP_ITLB_MISS, GETPC(), gva, false);
+}
+
+/*
+ * There should be no need to check page permissions, as that will
+ * already have been done by tb_lookup via get_page_addr_code.
+ * All we need at this point is to check the ar_type.
+ *
+ * No change for non-gateway pages or for priv decrease.
+ */
+if (ent->ar_type & 4) {
+int old_priv = iaoq_f & 3;
+int new_priv = ent->ar_type & 3;
+
+if (new_priv < old_priv) {
+iaoq_f = (iaoq_f & -4) | new_priv;
+}
+}
+return iaoq_f;
+}
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 2d8410b8ea..fa79116d5b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -3960,6 +3960,7 @@ static bool trans_bl(DisasContext *ctx, arg_bl *a)
 static bool trans_b_gate(DisasContext *ctx, arg_b_gate *a)
 {
 int64_t disp = a->disp;
+bool indirect = false;
 
 /* Trap if PSW[B] is set. */
 if (ctx->psw_xb & PSW_B) {
@@ -3969,24 +3970,22 @@ static bool trans_b_gate(DisasContext *ctx, arg_b_gate 
*a)
 nullify_over(ctx);
 
 #ifndef CONFIG_USER_ONLY
-if (ctx->tb_flags & PSW_C) {
-int type = hppa_artype_for_page(cpu_env(ctx->cs), ctx->base.pc_next);
-/* If we could not find a TLB entry, then we need to generate an
-   ITLB miss exception so the kernel will provide it.
-   The resulting TLB fill operation will invalidate this TB and
-   we will re-translate, at which point we *will* be able to find
-   the TLB entry and determine if this is in fact a gateway page.  */
-if (type < 0) {
-gen_excp(ctx, EXCP_ITLB_MISS);
-return true;
-}
-/* No change for non-gateway pages or for priv decrease.  */
-if (type >= 4 && type - 4 < ctx->privilege) {
-disp -= ctx->privilege;
-disp += type - 4;
-}
+if (ctx->privilege == 0) {
+/* Privilege cannot decrease. */
+} else if (!(ctx->tb_flags & PSW_C)) {
+/* With paging disabled, priv becomes 0. */
+disp -= ctx->privilege;
 } else {
-disp -= ctx->privilege;  /* priv = 0 */
+/* Adjust the dest offset for the privilege change from the PTE. */
+TCGv_i64 off = tcg_temp_new_i64();
+
+gen_helper_b_gate_priv(off, tcg_env,
+   tcg_constant_i64(ctx->iaoq_first
+  
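
For reference, the arithmetic that helper_b_gate_priv performs above can be
exercised outside of QEMU.  The sketch below mirrors the logic of the hunk
(gateway pages have bit 2 of ar_type set, bits 1:0 give the new privilege,
and privilege only ever increases, i.e. the number only decreases); the
sample addresses are invented:

#include <assert.h>
#include <stdint.h>

static uint64_t gate_promote(uint64_t iaoq_f, int ar_type)
{
    if (ar_type & 4) {                        /* gateway page */
        int old_priv = iaoq_f & 3;
        int new_priv = ar_type & 3;
        if (new_priv < old_priv) {            /* smaller value = more priv */
            iaoq_f = (iaoq_f & -4) | new_priv;
        }
    }
    return iaoq_f;
}

int main(void)
{
    assert(gate_promote(0x1003, 4 | 0) == 0x1000);  /* user -> kernel */
    assert(gate_promote(0x1000, 4 | 3) == 0x1000);  /* never demotes */
    assert(gate_promote(0x1003, 0) == 0x1003);      /* non-gateway page */
    return 0;
}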

Re: [PATCH] internal-fn: Do not force vcond operand to reg.

2024-05-13 Thread Richard Biener
On Mon, May 13, 2024 at 8:18 AM Robin Dapp  wrote:
>
> > How does this make a difference in the end?  I'd expect say forwprop to
> > fix things?
>
> In general we try to only add the masking "boilerplate" of our
> instructions at split time so fwprop, combine et al. can do their
> work uninhibited of it (and we don't need numerous
> (if_then_else ... (if_then_else) ...) combinations in our patterns).
> A vec constant we expand directly to a masked representation, though
> which makes further simplification difficult.  I can experiment with
> changing that if preferred.
>
> My thinking was, however, that for other operations like binops we
> directly emit the right variant via expand_operands without
> forcing to a reg and don't even need to fwprop so I wanted to
> imitate that.

Ah, so yeah, it probably makes sense for constants.  Btw,
there's prepare_operand which I think might be better for
its CONST_INT handling?  I can also see we usually do not
bother with force_reg, the force_reg was added with the
initial r6-4696-ga414c77f2a30bb already.

What happens if we simply remove all of the force_reg here?

Thanks,
Richard.

> Regards
>  Robin
>


[PATCH v2 33/45] target/hppa: Do not mask in copy_iaoq_entry

2024-05-13 Thread Richard Henderson
As with loads and stores, code offsets are kept intact until the
full gva is formed.  In qemu, this is in cpu_get_tb_cpu_state.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 67197e98b3..abb21b05c8 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -637,15 +637,10 @@ static DisasIAQE iaqe_next_absv(DisasContext *ctx, 
TCGv_i64 var)
 static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 dest,
 const DisasIAQE *src)
 {
-uint64_t mask = gva_offset_mask(ctx->tb_flags);
-
 if (src->base == NULL) {
-tcg_gen_movi_i64(dest, (ctx->iaoq_first + src->disp) & mask);
-} else if (src->disp == 0) {
-tcg_gen_andi_i64(dest, src->base, mask);
+tcg_gen_movi_i64(dest, ctx->iaoq_first + src->disp);
 } else {
 tcg_gen_addi_i64(dest, src->base, src->disp);
-tcg_gen_andi_i64(dest, dest, mask);
 }
 }
 
-- 
2.34.1
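
The point of keeping code offsets unmasked is that the offset mask is
applied exactly once, when the full global virtual address is put together
(the real code uses gva_offset_mask() and, at execution time,
hppa_form_gva()).  A rough stand-in for that final step, with an invented
32-bit mask rather than the real width-dependent one:

#include <stdint.h>
#include <stdio.h>

static uint64_t form_gva(uint64_t space, uint64_t offset, uint64_t mask)
{
    return space | (offset & mask);   /* mask the offset only at this point */
}

int main(void)
{
    uint64_t iasq_f = 0x12ull << 32;          /* hypothetical space bits */
    uint64_t iaoq_f = 0x00010003;             /* offset, low bits intact */
    printf("gva = %#llx\n",
           (unsigned long long)form_gva(iasq_f, iaoq_f, 0xffffffffull));
    return 0;
}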




[PATCH v2 36/45] target/hppa: Manage PSW_X and PSW_B in translator

2024-05-13 Thread Richard Henderson
PSW_X is cleared after every instruction, and only set by RFI.
PSW_B is cleared after every non-branch, or branch not taken,
and only set by taken branches.  We can clear both bits with a
single store, at most once per TB.  Taken branches set PSW_B,
at most once per TB.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.c   | 10 ++---
 target/hppa/translate.c | 50 +
 2 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 003af63e20..5f0df0697a 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -50,7 +50,7 @@ static vaddr hppa_cpu_get_pc(CPUState *cs)
 void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
   uint64_t *pcsbase, uint32_t *pflags)
 {
-uint32_t flags = env->psw_n * PSW_N;
+uint32_t flags = 0;
 uint64_t cs_base = 0;
 
 /*
@@ -80,11 +80,14 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 cs_base |= env->iaoq_b & ~TARGET_PAGE_MASK;
 }
 
+/* ??? E, T, H, L bits need to be here, when implemented.  */
+flags |= env->psw_n * PSW_N;
+flags |= env->psw_xb;
+flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
+
 #ifdef CONFIG_USER_ONLY
 flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
 #else
-/* ??? E, T, H, L, B bits need to be here, when implemented.  */
-flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
 if ((env->sr[4] == env->sr[5])
 & (env->sr[4] == env->sr[6])
 & (env->sr[4] == env->sr[7])) {
@@ -103,6 +106,7 @@ static void hppa_cpu_synchronize_from_tb(CPUState *cs,
 
 /* IAQ is always up-to-date before goto_tb. */
 cpu->env.psw_n = (tb->flags & PSW_N) != 0;
+cpu->env.psw_xb = tb->flags & (PSW_X | PSW_B);
 }
 
 static void hppa_restore_state_to_opc(CPUState *cs,
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index abb21b05c8..1a6a140d6f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -84,7 +84,9 @@ typedef struct DisasContext {
 uint32_t tb_flags;
 int mmu_idx;
 int privilege;
+uint32_t psw_xb;
 bool psw_n_nonzero;
+bool psw_b_next;
 bool is_pa20;
 bool insn_start_updated;
 
@@ -263,6 +265,7 @@ static TCGv_i64 cpu_psw_n;
 static TCGv_i64 cpu_psw_v;
 static TCGv_i64 cpu_psw_cb;
 static TCGv_i64 cpu_psw_cb_msb;
+static TCGv_i32 cpu_psw_xb;
 
 void hppa_translate_init(void)
 {
@@ -315,6 +318,9 @@ void hppa_translate_init(void)
 *v->var = tcg_global_mem_new(tcg_env, v->ofs, v->name);
 }
 
+cpu_psw_xb = tcg_global_mem_new_i32(tcg_env,
+offsetof(CPUHPPAState, psw_xb),
+"psw_xb");
 cpu_iasq_f = tcg_global_mem_new_i64(tcg_env,
 offsetof(CPUHPPAState, iasq_f),
 "iasq_f");
@@ -509,6 +515,25 @@ static void load_spr(DisasContext *ctx, TCGv_i64 dest, 
unsigned reg)
 #endif
 }
 
+/*
+ * Write a value to psw_xb, bearing in mind the known value.
+ * To be used just before exiting the TB, so do not update the known value.
+ */
+static void store_psw_xb(DisasContext *ctx, uint32_t xb)
+{
+tcg_debug_assert(xb == 0 || xb == PSW_B);
+if (ctx->psw_xb != xb) {
+tcg_gen_movi_i32(cpu_psw_xb, xb);
+}
+}
+
+/* Write a value to psw_xb, and update the known value. */
+static void set_psw_xb(DisasContext *ctx, uint32_t xb)
+{
+store_psw_xb(ctx, xb);
+ctx->psw_xb = xb;
+}
+
 /* Skip over the implementation of an insn that has been nullified.
Use this when the insn is too complex for a conditional move.  */
 static void nullify_over(DisasContext *ctx)
@@ -576,6 +601,8 @@ static bool nullify_end(DisasContext *ctx)
 /* For NEXT, NORETURN, STALE, we can easily continue (or exit).
For UPDATED, we cannot update on the nullified path.  */
 assert(status != DISAS_IAQ_N_UPDATED);
+/* Taken branches are handled manually. */
+assert(!ctx->psw_b_next);
 
 if (likely(null_lab == NULL)) {
 /* The current insn wasn't conditional or handled the condition
@@ -1842,6 +1869,7 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 if (is_n) {
 if (use_nullify_skip(ctx)) {
 nullify_set(ctx, 0);
+store_psw_xb(ctx, 0);
 gen_goto_tb(ctx, 0, &ctx->iaq_j, NULL);
 ctx->base.is_jmp = DISAS_NORETURN;
 return true;
@@ -1849,20 +1877,24 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 ctx->null_cond.c = TCG_COND_ALWAYS;
 }
 ctx->iaq_n = &ctx->iaq_j;
+ctx->psw_b_next = true;
 } else {
 nullify_over(ctx);
 
 install_link(ctx, link, false);
 if (is_n && use_nullify_sk

[PATCH v2 25/45] target/hppa: Use registerfields.h for FPSR

2024-05-13 Thread Richard Henderson
Define all of the context dependent field definitions.
Use FIELD_EX32 and FIELD_DP32 with named fields instead
of extract32 and deposit32 with raw constants.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h| 25 +
 target/hppa/fpu_helper.c | 26 +-
 target/hppa/translate.c  | 18 --
 3 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index 61f1353133..c37b4e12fb 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -24,6 +24,7 @@
 #include "exec/cpu-defs.h"
 #include "qemu/cpu-float.h"
 #include "qemu/interval-tree.h"
+#include "hw/registerfields.h"
 
 #define MMU_ABS_W_IDX 6
 #define MMU_ABS_IDX   7
@@ -152,6 +153,30 @@
 #define CR_IPSW  22
 #define CR_EIRR  23
 
+FIELD(FPSR, ENA_I, 0, 1)
+FIELD(FPSR, ENA_U, 1, 1)
+FIELD(FPSR, ENA_O, 2, 1)
+FIELD(FPSR, ENA_Z, 3, 1)
+FIELD(FPSR, ENA_V, 4, 1)
+FIELD(FPSR, ENABLES, 0, 5)
+FIELD(FPSR, D, 5, 1)
+FIELD(FPSR, T, 6, 1)
+FIELD(FPSR, RM, 9, 2)
+FIELD(FPSR, CQ, 11, 11)
+FIELD(FPSR, CQ0_6, 15, 7)
+FIELD(FPSR, CQ0_4, 17, 5)
+FIELD(FPSR, CQ0_2, 19, 3)
+FIELD(FPSR, CQ0, 21, 1)
+FIELD(FPSR, CA, 15, 7)
+FIELD(FPSR, CA0, 21, 1)
+FIELD(FPSR, C, 26, 1)
+FIELD(FPSR, FLG_I, 27, 1)
+FIELD(FPSR, FLG_U, 28, 1)
+FIELD(FPSR, FLG_O, 29, 1)
+FIELD(FPSR, FLG_Z, 30, 1)
+FIELD(FPSR, FLG_V, 31, 1)
+FIELD(FPSR, FLAGS, 27, 5)
+
 typedef struct HPPATLBEntry {
 union {
 IntervalTreeNode itree;
diff --git a/target/hppa/fpu_helper.c b/target/hppa/fpu_helper.c
index 576f283b04..deaed2b65d 100644
--- a/target/hppa/fpu_helper.c
+++ b/target/hppa/fpu_helper.c
@@ -30,7 +30,7 @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
 
 env->fr0_shadow = shadow;
 
-switch (extract32(shadow, 9, 2)) {
+switch (FIELD_EX32(shadow, FPSR, RM)) {
 default:
 rm = float_round_nearest_even;
 break;
@@ -46,7 +46,7 @@ void HELPER(loaded_fr0)(CPUHPPAState *env)
 }
 set_float_rounding_mode(rm, &env->fp_status);
 
-d = extract32(shadow, 5, 1);
+d = FIELD_EX32(shadow, FPSR, D);
 set_flush_to_zero(d, &env->fp_status);
 set_flush_inputs_to_zero(d, &env->fp_status);
 }
@@ -57,7 +57,7 @@ void cpu_hppa_loaded_fr0(CPUHPPAState *env)
 }
 
 #define CONVERT_BIT(X, SRC, DST)\
-((SRC) > (DST)  \
+((unsigned)(SRC) > (unsigned)(DST)  \
  ? (X) / ((SRC) / (DST)) & (DST)\
  : ((X) & (SRC)) * ((DST) / (SRC)))
 
@@ -73,12 +73,12 @@ static void update_fr0_op(CPUHPPAState *env, uintptr_t ra)
 }
 set_float_exception_flags(0, &env->fp_status);
 
-hard_exp |= CONVERT_BIT(soft_exp, float_flag_inexact,   1u << 0);
-hard_exp |= CONVERT_BIT(soft_exp, float_flag_underflow, 1u << 1);
-hard_exp |= CONVERT_BIT(soft_exp, float_flag_overflow,  1u << 2);
-hard_exp |= CONVERT_BIT(soft_exp, float_flag_divbyzero, 1u << 3);
-hard_exp |= CONVERT_BIT(soft_exp, float_flag_invalid,   1u << 4);
-shadow |= hard_exp << (32 - 5);
+hard_exp |= CONVERT_BIT(soft_exp, float_flag_inexact,   R_FPSR_ENA_I_MASK);
+hard_exp |= CONVERT_BIT(soft_exp, float_flag_underflow, R_FPSR_ENA_U_MASK);
+hard_exp |= CONVERT_BIT(soft_exp, float_flag_overflow,  R_FPSR_ENA_O_MASK);
+hard_exp |= CONVERT_BIT(soft_exp, float_flag_divbyzero, R_FPSR_ENA_Z_MASK);
+hard_exp |= CONVERT_BIT(soft_exp, float_flag_invalid,   R_FPSR_ENA_V_MASK);
+shadow |= hard_exp << (R_FPSR_FLAGS_SHIFT - R_FPSR_ENABLES_SHIFT);
 env->fr0_shadow = shadow;
 env->fr[0] = (uint64_t)shadow << 32;
 
@@ -378,15 +378,15 @@ static void update_fr0_cmp(CPUHPPAState *env, uint32_t y,
 if (y) {
 /* targeted comparison */
 /* set fpsr[ca[y - 1]] to current compare */
-shadow = deposit32(shadow, 21 - (y - 1), 1, c);
+shadow = deposit32(shadow, R_FPSR_CA0_SHIFT - (y - 1), 1, c);
 } else {
 /* queued comparison */
 /* shift cq right by one place */
-shadow = deposit32(shadow, 11, 10, extract32(shadow, 12, 10));
+shadow = (shadow & ~R_FPSR_CQ_MASK) | ((shadow >> 1) & R_FPSR_CQ_MASK);
 /* move fpsr[c] to fpsr[cq[0]] */
-shadow = deposit32(shadow, 21, 1, extract32(shadow, 26, 1));
+shadow = FIELD_DP32(shadow, FPSR, CQ0, FIELD_EX32(shadow, FPSR, C));
 /* set fpsr[c] to current compare */
-shadow = deposit32(shadow, 26, 1, c);
+shadow = FIELD_DP32(shadow, FPSR, C, c);
 }
 
 env->fr0_shadow = shadow;
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index d8973a63df..6d76599ea0 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -4323,29 +4323,28 @@ static bool trans_ftest(DisasContext *ctx, arg_ftest *a)
 
 switch (a->c) {
 case 0: /* simple */
-tcg_gen_andi_i64(t, t, 0x400);
-ctx->n
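
For reference, FIELD(FPSR, RM, 9, 2) from hw/registerfields.h generates the
constants R_FPSR_RM_SHIFT, R_FPSR_RM_LENGTH and R_FPSR_RM_MASK, and
FIELD_EX32/FIELD_DP32 are then extract32/deposit32 over those constants.
A self-contained equivalent with plain shifts and masks (the helper
functions below are illustrative stand-ins, not the QEMU macros):

#include <assert.h>
#include <stdint.h>

#define R_FPSR_RM_SHIFT   9
#define R_FPSR_RM_LENGTH  2
#define R_FPSR_RM_MASK    (((1u << R_FPSR_RM_LENGTH) - 1) << R_FPSR_RM_SHIFT)

static uint32_t fpsr_get_rm(uint32_t fpsr)               /* ~ FIELD_EX32 */
{
    return (fpsr & R_FPSR_RM_MASK) >> R_FPSR_RM_SHIFT;
}

static uint32_t fpsr_set_rm(uint32_t fpsr, uint32_t rm)  /* ~ FIELD_DP32 */
{
    return (fpsr & ~R_FPSR_RM_MASK)
         | ((rm << R_FPSR_RM_SHIFT) & R_FPSR_RM_MASK);
}

int main(void)
{
    uint32_t fpsr = fpsr_set_rm(0, 2);   /* 2 is just a sample field value */
    assert(fpsr == (2u << R_FPSR_RM_SHIFT));
    assert(fpsr_get_rm(fpsr) == 2);
    return 0;
}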

[PATCH v2 42/45] target/hppa: Implement PSW_T

2024-05-13 Thread Richard Henderson
PSW_T enables a trap on taken branches, at the very end of the
execution of the branch instruction.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.c   |  4 +--
 target/hppa/translate.c | 55 +++--
 2 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index f0507874ce..2a3b5fc498 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -76,10 +76,10 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 cs_base |= env->iaoq_b & ~TARGET_PAGE_MASK;
 }
 
-/* ??? E, T, H, L bits need to be here, when implemented.  */
+/* ??? E, H, L bits need to be here, when implemented.  */
 flags |= env->psw_n * PSW_N;
 flags |= env->psw_xb;
-flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
+flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P | PSW_T);
 
 #ifdef CONFIG_USER_ONLY
 flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 79e29d722f..2290dc8533 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1873,6 +1873,23 @@ static bool do_fop_dedd(DisasContext *ctx, unsigned rt,
 return nullify_end(ctx);
 }
 
+static bool do_taken_branch_trap(DisasContext *ctx, DisasIAQE *next, bool n)
+{
+if (unlikely(ctx->tb_flags & PSW_T)) {
+/*
+ * The X, B and N bits are updated, and the instruction queue
+ * is advanced before the trap is recognized.
+ */
+nullify_set(ctx, n);
+store_psw_xb(ctx, PSW_B);
+install_iaq_entries(ctx, &ctx->iaq_b, next);
+gen_excp_1(EXCP_TB);
+ctx->base.is_jmp = DISAS_NORETURN;
+return true;
+}
+return false;
+}
+
 /* Emit an unconditional branch to a direct target, which may or may not
have already had nullification handled.  */
 static bool do_dbranch(DisasContext *ctx, int64_t disp,
@@ -1882,6 +1899,9 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 install_link(ctx, link, false);
+if (do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
+return true;
+}
 if (is_n) {
 if (use_nullify_skip(ctx)) {
 nullify_set(ctx, 0);
@@ -1898,7 +1918,9 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 nullify_over(ctx);
 
 install_link(ctx, link, false);
-if (is_n && use_nullify_skip(ctx)) {
+if (do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
+/* done */
+} else if (is_n && use_nullify_skip(ctx)) {
 nullify_set(ctx, 0);
 store_psw_xb(ctx, 0);
 gen_goto_tb(ctx, 0, &ctx->iaq_j, NULL);
@@ -1960,7 +1982,9 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 n = is_n && disp >= 0;
 
 next = iaqe_branchi(ctx, disp);
-if (n && use_nullify_skip(ctx)) {
+if (do_taken_branch_trap(ctx, &next, is_n)) {
+/* done */
+} else if (n && use_nullify_skip(ctx)) {
 nullify_set(ctx, 0);
 store_psw_xb(ctx, 0);
 gen_goto_tb(ctx, 1, &next, NULL);
@@ -1990,6 +2014,9 @@ static bool do_ibranch(DisasContext *ctx, unsigned link,
 {
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 install_link(ctx, link, with_sr0);
+if (do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
+return true;
+}
 if (is_n) {
 if (use_nullify_skip(ctx)) {
 install_iaq_entries(ctx, &ctx->iaq_j, NULL);
@@ -2005,20 +2032,22 @@ static bool do_ibranch(DisasContext *ctx, unsigned link,
 }
 
 nullify_over(ctx);
-
 install_link(ctx, link, with_sr0);
-if (is_n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, &ctx->iaq_j, NULL);
-nullify_set(ctx, 0);
-store_psw_xb(ctx, 0);
-} else {
-install_iaq_entries(ctx, &ctx->iaq_b, &ctx->iaq_j);
-nullify_set(ctx, is_n);
-store_psw_xb(ctx, PSW_B);
+
+if (!do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
+if (is_n && use_nullify_skip(ctx)) {
+install_iaq_entries(ctx, &ctx->iaq_j, NULL);
+nullify_set(ctx, 0);
+store_psw_xb(ctx, 0);
+} else {
+install_iaq_entries(ctx, &ctx->iaq_b, &ctx->iaq_j);
+nullify_set(ctx, is_n);
+store_psw_xb(ctx, PSW_B);
+}
+tcg_gen_lookup_and_goto_ptr();
+ctx->base.is_jmp = DISAS_NORETURN;
 }
 
-tcg_gen_lookup_and_goto_ptr();
-ctx->base.is_jmp = DISAS_NORETURN;
 return nullify_end(ctx);
 }
 
-- 
2.34.1




[PATCH v2 26/45] target/hppa: Use TCG_COND_TST* in trans_ftest

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 22 ++
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 6d76599ea0..ef62cd7e94 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -4310,6 +4310,8 @@ static bool trans_fcmp_d(DisasContext *ctx, arg_fclass2 
*a)
 
 static bool trans_ftest(DisasContext *ctx, arg_ftest *a)
 {
+TCGCond tc = TCG_COND_TSTNE;
+uint32_t mask;
 TCGv_i64 t;
 
 nullify_over(ctx);
@@ -4318,21 +4320,18 @@ static bool trans_ftest(DisasContext *ctx, arg_ftest *a)
 tcg_gen_ld32u_i64(t, tcg_env, offsetof(CPUHPPAState, fr0_shadow));
 
 if (a->y == 1) {
-int mask;
-bool inv = false;
-
 switch (a->c) {
 case 0: /* simple */
 mask = R_FPSR_C_MASK;
 break;
 case 2: /* rej */
-inv = true;
+tc = TCG_COND_TSTEQ;
 /* fallthru */
 case 1: /* acc */
 mask = R_FPSR_C_MASK | R_FPSR_CQ_MASK;
 break;
 case 6: /* rej8 */
-inv = true;
+tc = TCG_COND_TSTEQ;
 /* fallthru */
 case 5: /* acc8 */
 mask = R_FPSR_C_MASK | R_FPSR_CQ0_6_MASK;
@@ -4350,21 +4349,12 @@ static bool trans_ftest(DisasContext *ctx, arg_ftest *a)
 gen_illegal(ctx);
 return true;
 }
-if (inv) {
-TCGv_i64 c = tcg_constant_i64(mask);
-tcg_gen_or_i64(t, t, c);
-ctx->null_cond = cond_make_tt(TCG_COND_EQ, t, c);
-} else {
-tcg_gen_andi_i64(t, t, mask);
-ctx->null_cond = cond_make_ti(TCG_COND_EQ, t, 0);
-}
 } else {
 unsigned cbit = (a->y ^ 1) - 1;
-
-tcg_gen_extract_i64(t, t, R_FPSR_CA0_SHIFT - cbit, 1);
-ctx->null_cond = cond_make_ti(TCG_COND_NE, t, 0);
+mask = R_FPSR_CA0_MASK >> cbit;
 }
 
+ctx->null_cond = cond_make_ti(tc, t, mask);
 return nullify_end(ctx);
 }
 
-- 
2.34.1




[PATCH v2 37/45] target/hppa: Implement PSW_B

2024-05-13 Thread Richard Henderson
PSW_B causes B,GATE to trap as an illegal instruction, removing
the sequential execution test that was merely an approximation.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 25 ++---
 1 file changed, 6 insertions(+), 19 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 1a6a140d6f..2d8410b8ea 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -2061,11 +2061,8 @@ static void do_page_zero(DisasContext *ctx)
 g_assert_not_reached();
 }
 
-/* Check that we didn't arrive here via some means that allowed
-   non-sequential instruction execution.  Normally the PSW[B] bit
-   detects this by disallowing the B,GATE instruction to execute
-   under such conditions.  */
-if (iaqe_variable(&ctx->iaq_b) || ctx->iaq_b.disp != 4) {
+/* If PSW[B] is set, the B,GATE insn would trap. */
+if (ctx->psw_xb & PSW_B) {
 goto do_sigill;
 }
 
@@ -3964,23 +3961,13 @@ static bool trans_b_gate(DisasContext *ctx, arg_b_gate 
*a)
 {
 int64_t disp = a->disp;
 
-nullify_over(ctx);
-
-/* Make sure the caller hasn't done something weird with the queue.
- * ??? This is not quite the same as the PSW[B] bit, which would be
- * expensive to track.  Real hardware will trap for
- *b  gateway
- *b  gateway+4  (in delay slot of first branch)
- * However, checking for a non-sequential instruction queue *will*
- * diagnose the security hole
- *b  gateway
- *b  evil
- * in which instructions at evil would run with increased privs.
- */
-if (iaqe_variable(&ctx->iaq_b) || ctx->iaq_b.disp != ctx->iaq_f.disp + 4) {
+/* Trap if PSW[B] is set. */
+if (ctx->psw_xb & PSW_B) {
 return gen_illegal(ctx);
 }
 
+nullify_over(ctx);
+
 #ifndef CONFIG_USER_ONLY
 if (ctx->tb_flags & PSW_C) {
 int type = hppa_artype_for_page(cpu_env(ctx->cs), ctx->base.pc_next);
-- 
2.34.1




[PATCH v2 43/45] target/hppa: Implement PSW_H, PSW_L

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.c   |  4 +--
 target/hppa/translate.c | 68 +
 2 files changed, 64 insertions(+), 8 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 2a3b5fc498..f53e5a2788 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -76,10 +76,10 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 cs_base |= env->iaoq_b & ~TARGET_PAGE_MASK;
 }
 
-/* ??? E, H, L bits need to be here, when implemented.  */
+/* ??? E bits need to be here, when implemented.  */
 flags |= env->psw_n * PSW_N;
 flags |= env->psw_xb;
-flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P | PSW_T);
+flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_H | PSW_L | PSW_P | 
PSW_T);
 
 #ifdef CONFIG_USER_ONLY
 flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 2290dc8533..3ccec023af 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -56,6 +56,7 @@ typedef struct DisasDelayException {
 TCGLabel *lab;
 uint32_t insn;
 bool set_iir;
+bool set_b;
 int8_t set_n;
 uint8_t excp;
 /* Saved state at parent insn. */
@@ -745,6 +746,7 @@ static DisasDelayException *delay_excp(DisasContext *ctx, 
uint8_t excp)
 e->insn = ctx->insn;
 e->set_iir = true;
 e->set_n = ctx->psw_n_nonzero ? 0 : -1;
+e->set_b = false;
 e->excp = excp;
 e->iaq_f = ctx->iaq_f;
 e->iaq_b = ctx->iaq_b;
@@ -1873,6 +1875,54 @@ static bool do_fop_dedd(DisasContext *ctx, unsigned rt,
 return nullify_end(ctx);
 }
 
+/*
+ * Since B,GATE can only increase priv, and other indirect branches can
+ * only decrease priv, we only need to test in one direction.
+ * If maybe_priv == 0, no priv is possible with the current insn;
+ * if maybe_priv < 0, priv might increase, otherwise priv might decrease.
+ */
+static void do_priv_branch_trap(DisasContext *ctx, int maybe_priv,
+DisasIAQE *next, bool n)
+{
+DisasDelayException *e;
+uint32_t psw_bit, excp;
+TCGv_i64 new_priv;
+TCGCond cond;
+
+if (likely(maybe_priv == 0)) {
+return;
+}
+if (maybe_priv < 0) {
+psw_bit = PSW_H;
+excp = EXCP_HPT;
+cond = TCG_COND_LTU;
+} else {
+psw_bit = PSW_L;
+excp = EXCP_LPT;
+cond = TCG_COND_GTU;
+}
+if (likely(!(ctx->tb_flags & psw_bit))) {
+return;
+}
+
+e = tcg_malloc(sizeof(DisasDelayException));
+memset(e, 0, sizeof(*e));
+e->next = ctx->delay_excp_list;
+ctx->delay_excp_list = e;
+
+e->lab = gen_new_label();
+e->set_n = n ? 1 : ctx->psw_n_nonzero ? 0 : -1;
+e->set_b = ctx->psw_xb != PSW_B;
+e->excp = excp;
+e->iaq_f = ctx->iaq_b;
+e->iaq_b = *next;
+
+new_priv = tcg_temp_new_i64();
+copy_iaoq_entry(ctx, new_priv, next);
+tcg_gen_andi_i64(new_priv, new_priv, 3);
+tcg_gen_brcondi_i64(cond, new_priv, ctx->privilege, e->lab);
+}
+
 static bool do_taken_branch_trap(DisasContext *ctx, DisasIAQE *next, bool n)
 {
 if (unlikely(ctx->tb_flags & PSW_T)) {
@@ -2010,10 +2060,12 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
  * This handles nullification of the branch itself.
  */
 static bool do_ibranch(DisasContext *ctx, unsigned link,
-   bool with_sr0, bool is_n)
+   bool with_sr0, bool is_n, int maybe_priv)
 {
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 install_link(ctx, link, with_sr0);
+
+do_priv_branch_trap(ctx, maybe_priv, &ctx->iaq_j, is_n);
 if (do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
 return true;
 }
@@ -2034,6 +2086,7 @@ static bool do_ibranch(DisasContext *ctx, unsigned link,
 nullify_over(ctx);
 install_link(ctx, link, with_sr0);
 
+do_priv_branch_trap(ctx, maybe_priv, &ctx->iaq_j, is_n);
 if (!do_taken_branch_trap(ctx, &ctx->iaq_j, is_n)) {
 if (is_n && use_nullify_skip(ctx)) {
 install_iaq_entries(ctx, &ctx->iaq_j, NULL);
@@ -3994,7 +4047,7 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 tcg_gen_addi_i64(ctx->iaq_j.base, load_gpr(ctx, a->b), a->disp);
 ctx->iaq_j.base = do_ibranch_priv(ctx, ctx->iaq_j.base);
 
-return do_ibranch(ctx, a->l, true, a->n);
+return do_ibranch(ctx, a->l, true, a->n, ctx->privilege == 3 ? 0 : 1);
 }
 
 static bool trans_bl(DisasContext *ctx, arg_bl *a)
@@ -4043,7 +4096,7 @@ static bool trans_b_gate(DisasContext *ctx, arg_b_gate *a)
 }
 
 if (indirect) {
-return do_ibranch(ctx, 0, false, a->n);
+return do_ibranch(ctx, 0, false, a->n, -1);
 }
 return do_dbranch(ctx, disp, 0, a-&g

[PATCH v2 21/45] target/hppa: Use TCG_COND_TST* in do_log_cond

2024-05-13 Thread Richard Henderson
We can directly test bits of a 32-bit comparison without
zero or sign-extending an intermediate result.
We can directly test bit 0 for odd/even.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 78 ++---
 1 file changed, 27 insertions(+), 51 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 421b0df9d4..e4e8034c5f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -914,65 +914,41 @@ static DisasCond do_log_cond(DisasContext *ctx, unsigned 
cf, bool d,
  TCGv_i64 res)
 {
 TCGCond tc;
-bool ext_uns;
+uint64_t imm;
 
-switch (cf) {
-case 0:  /* never */
-case 9:  /* undef, C */
-case 11: /* undef, C & !Z */
-case 12: /* undef, V */
-return cond_make_f();
-
-case 1:  /* true */
-case 8:  /* undef, !C */
-case 10: /* undef, !C | Z */
-case 13: /* undef, !V */
-return cond_make_t();
-
-case 2:  /* == */
-tc = TCG_COND_EQ;
-ext_uns = true;
+switch (cf >> 1) {
+case 0:  /* never / always */
+case 4:  /* undef, C */
+case 5:  /* undef, C & !Z */
+case 6:  /* undef, V */
+return cf & 1 ? cond_make_t() : cond_make_f();
+case 1:  /* == / <> */
+tc = d ? TCG_COND_EQ : TCG_COND_TSTEQ;
+imm = d ? 0 : UINT32_MAX;
 break;
-case 3:  /* <> */
-tc = TCG_COND_NE;
-ext_uns = true;
+case 2:  /* < / >= */
+tc = d ? TCG_COND_LT : TCG_COND_TSTNE;
+imm = d ? 0 : 1ull << 31;
 break;
-case 4:  /* < */
-tc = TCG_COND_LT;
-ext_uns = false;
+case 3:  /* <= / > */
+tc = cf & 1 ? TCG_COND_GT : TCG_COND_LE;
+if (!d) {
+TCGv_i64 tmp = tcg_temp_new_i64();
+tcg_gen_ext32s_i64(tmp, res);
+return cond_make_ti(tc, tmp, 0);
+}
+return cond_make_vi(tc, res, 0);
+case 7: /* OD / EV */
+tc = TCG_COND_TSTNE;
+imm = 1;
 break;
-case 5:  /* >= */
-tc = TCG_COND_GE;
-ext_uns = false;
-break;
-case 6:  /* <= */
-tc = TCG_COND_LE;
-ext_uns = false;
-break;
-case 7:  /* > */
-tc = TCG_COND_GT;
-ext_uns = false;
-break;
-
-case 14: /* OD */
-case 15: /* EV */
-return do_cond(ctx, cf, d, res, NULL, NULL);
-
 default:
 g_assert_not_reached();
 }
-
-if (!d) {
-TCGv_i64 tmp = tcg_temp_new_i64();
-
-if (ext_uns) {
-tcg_gen_ext32u_i64(tmp, res);
-} else {
-tcg_gen_ext32s_i64(tmp, res);
-}
-return cond_make_ti(tc, tmp, 0);
+if (cf & 1) {
+tc = tcg_invert_cond(tc);
 }
-return cond_make_vi(tc, res, 0);
+return cond_make_vi(tc, res, imm);
 }
 
 /* Similar, but for shift/extract/deposit conditions.  */
-- 
2.34.1
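
The rewrite rests on simple identities for the narrow (d == 0) case:
equality to zero of the low 32 bits is a test against UINT32_MAX, the sign
of the low 32 bits is a test of bit 31, and odd/even is a test of bit 0.
A small standalone check in plain C (not TCG), over a few arbitrary sample
values:

#include <assert.h>
#include <stdint.h>

static int tst_ne(uint64_t res, uint64_t imm)
{
    return (res & imm) != 0;            /* what TCG_COND_TSTNE evaluates */
}

int main(void)
{
    uint64_t samples[] = { 0, 1, 2, 0x80000000ull, 0xffffffffull,
                           0x1ffffffffull, 0xffff0000ffffffffull };
    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        uint64_t r = samples[i];
        /* low 32 bits == 0  <=>  no bit under UINT32_MAX set */
        assert(((uint32_t)r == 0) == !tst_ne(r, UINT32_MAX));
        /* low 32 bits < 0   <=>  bit 31 set */
        assert(((int32_t)r < 0) == tst_ne(r, 1ull << 31));
        /* odd               <=>  bit 0 set */
        assert(((r & 1) != 0) == tst_ne(r, 1));
    }
    return 0;
}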




[PATCH v2 29/45] target/hppa: Use delay_excp for conditional traps

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/helper.h |  1 -
 target/hppa/int_helper.c |  2 +-
 target/hppa/op_helper.c  |  7 ---
 target/hppa/translate.c  | 41 ++--
 4 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/target/hppa/helper.h b/target/hppa/helper.h
index 5900fd70bc..3d0d143aed 100644
--- a/target/hppa/helper.h
+++ b/target/hppa/helper.h
@@ -1,6 +1,5 @@
 DEF_HELPER_2(excp, noreturn, env, int)
 DEF_HELPER_FLAGS_2(tsv, TCG_CALL_NO_WG, void, env, tl)
-DEF_HELPER_FLAGS_2(tcond, TCG_CALL_NO_WG, void, env, tl)
 
 DEF_HELPER_FLAGS_3(stby_b, TCG_CALL_NO_WG, void, env, tl, tl)
 DEF_HELPER_FLAGS_3(stby_b_parallel, TCG_CALL_NO_WG, void, env, tl, tl)
diff --git a/target/hppa/int_helper.c b/target/hppa/int_helper.c
index a667ee380d..1aa3e88ef1 100644
--- a/target/hppa/int_helper.c
+++ b/target/hppa/int_helper.c
@@ -134,13 +134,13 @@ void hppa_cpu_do_interrupt(CPUState *cs)
 switch (i) {
 case EXCP_ILL:
 case EXCP_BREAK:
+case EXCP_COND:
 case EXCP_PRIV_REG:
 case EXCP_PRIV_OPR:
 /* IIR set via translate.c.  */
 break;
 
 case EXCP_OVERFLOW:
-case EXCP_COND:
 case EXCP_ASSIST:
 case EXCP_DTLB_MISS:
 case EXCP_NA_ITLB_MISS:
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
index 6cf49f33b7..a8b69fd481 100644
--- a/target/hppa/op_helper.c
+++ b/target/hppa/op_helper.c
@@ -49,13 +49,6 @@ void HELPER(tsv)(CPUHPPAState *env, target_ulong cond)
 }
 }
 
-void HELPER(tcond)(CPUHPPAState *env, target_ulong cond)
-{
-if (unlikely(cond)) {
-hppa_dynamic_excp(env, EXCP_COND, GETPC());
-}
-}
-
 static void atomic_store_mask32(CPUHPPAState *env, target_ulong addr,
 uint32_t val, uint32_t mask, uintptr_t ra)
 {
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index e75e7e5b54..7fa0c86a8f 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1116,6 +1116,25 @@ static TCGv_i64 do_sub_sv(DisasContext *ctx, TCGv_i64 
res,
 return sv;
 }
 
+static void gen_tc(DisasContext *ctx, DisasCond *cond)
+{
+DisasDelayException *e;
+
+switch (cond->c) {
+case TCG_COND_NEVER:
+break;
+case TCG_COND_ALWAYS:
+gen_excp_iir(ctx, EXCP_COND);
+break;
+default:
+e = delay_excp(ctx, EXCP_COND);
+tcg_gen_brcond_i64(cond->c, cond->a0, cond->a1, e->lab);
+/* In the non-trap path, the condition is known false. */
+*cond = cond_make_f();
+break;
+}
+}
+
 static void do_add(DisasContext *ctx, unsigned rt, TCGv_i64 orig_in1,
TCGv_i64 in2, unsigned shift, bool is_l,
bool is_tsv, bool is_tc, bool is_c, unsigned cf, bool d)
@@ -1174,9 +1193,7 @@ static void do_add(DisasContext *ctx, unsigned rt, 
TCGv_i64 orig_in1,
 /* Emit any conditional trap before any writeback.  */
 cond = do_cond(ctx, cf, d, dest, uv, sv);
 if (is_tc) {
-tmp = tcg_temp_new_i64();
-tcg_gen_setcond_i64(cond.c, tmp, cond.a0, cond.a1);
-gen_helper_tcond(tcg_env, tmp);
+gen_tc(ctx, &cond);
 }
 
 /* Write back the result.  */
@@ -1195,6 +1212,10 @@ static bool do_add_reg(DisasContext *ctx, 
arg_rrr_cf_d_sh *a,
 {
 TCGv_i64 tcg_r1, tcg_r2;
 
+if (unlikely(is_tc && a->cf == 1)) {
+/* Unconditional trap on condition. */
+return gen_excp_iir(ctx, EXCP_COND);
+}
 if (a->cf) {
 nullify_over(ctx);
 }
@@ -1210,6 +1231,10 @@ static bool do_add_imm(DisasContext *ctx, arg_rri_cf *a,
 {
 TCGv_i64 tcg_im, tcg_r2;
 
+if (unlikely(is_tc && a->cf == 1)) {
+/* Unconditional trap on condition. */
+return gen_excp_iir(ctx, EXCP_COND);
+}
 if (a->cf) {
 nullify_over(ctx);
 }
@@ -1224,7 +1249,7 @@ static void do_sub(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
TCGv_i64 in2, bool is_tsv, bool is_b,
bool is_tc, unsigned cf, bool d)
 {
-TCGv_i64 dest, sv, cb, cb_msb, tmp;
+TCGv_i64 dest, sv, cb, cb_msb;
 unsigned c = cf >> 1;
 DisasCond cond;
 
@@ -1272,9 +1297,7 @@ static void do_sub(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
 
 /* Emit any conditional trap before any writeback.  */
 if (is_tc) {
-tmp = tcg_temp_new_i64();
-tcg_gen_setcond_i64(cond.c, tmp, cond.a0, cond.a1);
-gen_helper_tcond(tcg_env, tmp);
+gen_tc(ctx, &cond);
 }
 
 /* Write back the result.  */
@@ -1440,9 +1463,7 @@ static void do_unit_addsub(DisasContext *ctx, unsigned 
rt, TCGv_i64 in1,
 }
 
 if (is_tc) {
-TCGv_i64 tmp = tcg_temp_new_i64();
-tcg_gen_setcond_i64(cond.c, tmp, cond.a0, cond.a1);
-gen_helper_tcond(tcg_env, tmp);
+gen_tc(ctx, &cond);
 }
 save_gpr(ctx, rt, dest);
 
-- 
2.34.1




[PATCH v2 09/45] target/hppa: Delay computation of IAQ_Next

2024-05-13 Thread Richard Henderson
We no longer have to allocate a temp and perform an
addition before translation of the rest of the insn.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 26 ++
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index f816b337ee..a9196050dc 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1806,6 +1806,7 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 install_link(ctx, link, false);
 ctx->iaoq_n = dest;
+ctx->iaoq_n_var = NULL;
 if (is_n) {
 ctx->null_cond.c = TCG_COND_ALWAYS;
 }
@@ -1862,11 +1863,6 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 ctx->null_lab = NULL;
 }
 nullify_set(ctx, n);
-if (ctx->iaoq_n == -1) {
-/* The temporary iaoq_n_var died at the branch above.
-   Regenerate it here instead of saving it.  */
-tcg_gen_addi_i64(ctx->iaoq_n_var, cpu_iaoq_b, 4);
-}
 gen_goto_tb(ctx, 0, ctx->iaoq_b, ctx->iaoq_n);
 }
 
@@ -4630,8 +4626,6 @@ static void hppa_tr_init_disas_context(DisasContextBase 
*dcbase, CPUState *cs)
 ctx->iaoq_f = (ctx->base.pc_first & ~iasq_f) + ctx->privilege;
 ctx->iaoq_b = (diff ? ctx->iaoq_f + diff : -1);
 #endif
-ctx->iaoq_n = -1;
-ctx->iaoq_n_var = NULL;
 
 ctx->zero = tcg_constant_i64(0);
 
@@ -4683,14 +4677,8 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 
 /* Set up the IA queue for the next insn.
This will be overwritten by a branch.  */
-if (ctx->iaoq_b == -1) {
-ctx->iaoq_n = -1;
-ctx->iaoq_n_var = tcg_temp_new_i64();
-tcg_gen_addi_i64(ctx->iaoq_n_var, cpu_iaoq_b, 4);
-} else {
-ctx->iaoq_n = ctx->iaoq_b + 4;
-ctx->iaoq_n_var = NULL;
-}
+ctx->iaoq_n_var = NULL;
+ctx->iaoq_n = ctx->iaoq_b == -1 ? -1 : ctx->iaoq_b + 4;
 
 if (unlikely(ctx->null_cond.c == TCG_COND_ALWAYS)) {
 ctx->null_cond.c = TCG_COND_NEVER;
@@ -4741,7 +4729,13 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 ? DISAS_EXIT
 : DISAS_IAQ_N_UPDATED);
 } else if (ctx->iaoq_b == -1) {
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, ctx->iaoq_n_var);
+if (ctx->iaoq_n_var) {
+copy_iaoq_entry(ctx, cpu_iaoq_b, -1, ctx->iaoq_n_var);
+} else {
+tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_b, 4);
+tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
+ gva_offset_mask(ctx->tb_flags));
+}
 }
 break;
 
-- 
2.34.1




[PATCH v2 38/45] target/hppa: Implement PSW_X

2024-05-13 Thread Richard Henderson
Use PAGE_WRITE_INV to temporarily enable write permission
on a given page, driven by PSW_X being set.

Signed-off-by: Richard Henderson 
---
 target/hppa/mem_helper.c | 46 +++-
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/target/hppa/mem_helper.c b/target/hppa/mem_helper.c
index d09877afd7..ca7bbe0a7c 100644
--- a/target/hppa/mem_helper.c
+++ b/target/hppa/mem_helper.c
@@ -296,30 +296,38 @@ int hppa_get_physical_address(CPUHPPAState *env, vaddr 
addr, int mmu_idx,
 goto egress;
 }
 
-/* In reverse priority order, check for conditions which raise faults.
-   As we go, remove PROT bits that cover the condition we want to check.
-   In this way, the resulting PROT will force a re-check of the
-   architectural TLB entry for the next access.  */
-if (unlikely(!ent->d)) {
-if (type & PAGE_WRITE) {
-/* The D bit is not set -- TLB Dirty Bit Fault.  */
-ret = EXCP_TLB_DIRTY;
-}
-prot &= PAGE_READ | PAGE_EXEC;
-}
-if (unlikely(ent->b)) {
-if (type & PAGE_WRITE) {
-/* The B bit is set -- Data Memory Break Fault.  */
-ret = EXCP_DMB;
-}
-prot &= PAGE_READ | PAGE_EXEC;
-}
+/*
+ * In priority order, check for conditions which raise faults.
+ * Remove PROT bits that cover the condition we want to check,
+ * so that the resulting PROT will force a re-check of the
+ * architectural TLB entry for the next access.
+ */
 if (unlikely(ent->t)) {
+prot &= PAGE_EXEC;
 if (!(type & PAGE_EXEC)) {
 /* The T bit is set -- Page Reference Fault.  */
 ret = EXCP_PAGE_REF;
 }
-prot &= PAGE_EXEC;
+} else if (!ent->d) {
+prot &= PAGE_READ | PAGE_EXEC;
+if (type & PAGE_WRITE) {
+/* The D bit is not set -- TLB Dirty Bit Fault.  */
+ret = EXCP_TLB_DIRTY;
+}
+} else if (unlikely(ent->b)) {
+prot &= PAGE_READ | PAGE_EXEC;
+if (type & PAGE_WRITE) {
+/*
+ * The B bit is set -- Data Memory Break Fault.
+ * Except when PSW_X is set, allow this single access to succeed.
+ * The write bit will be invalidated for subsequent accesses.
+ */
+if (env->psw_xb & PSW_X) {
+prot |= PAGE_WRITE_INV;
+} else {
+ret = EXCP_DMB;
+}
+}
 }
 
  egress:
-- 
2.34.1
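
The reordering above also encodes the fault priority: the T (reference
trap) bit wins over a clear D (dirty) bit, which wins over the B (break)
bit, and PSW_X turns the B case into a one-shot write grant.  A rough
standalone sketch of that decision tree; the PG_* flags and the
"grant once" marker are invented stand-ins for QEMU's PAGE_* bits:

#include <stdio.h>

enum { PG_READ = 1, PG_WRITE = 2, PG_EXEC = 4, PG_WRITE_ONCE = 8 };

struct tlbe { int t, d, b; };   /* reference-trap, dirty, break bits */

static int check_access(const struct tlbe *e, int type, int psw_x,
                        const char **excp)
{
    int prot = PG_READ | PG_WRITE | PG_EXEC;

    *excp = NULL;
    if (e->t) {                                 /* highest priority */
        prot &= PG_EXEC;
        if (!(type & PG_EXEC)) {
            *excp = "page reference fault";
        }
    } else if (!e->d) {
        prot &= PG_READ | PG_EXEC;
        if (type & PG_WRITE) {
            *excp = "TLB dirty bit fault";
        }
    } else if (e->b) {
        prot &= PG_READ | PG_EXEC;
        if (type & PG_WRITE) {
            if (psw_x) {
                prot |= PG_WRITE_ONCE;          /* allow this single write */
            } else {
                *excp = "data memory break fault";
            }
        }
    }
    return prot;
}

int main(void)
{
    struct tlbe e = { 0, 1, 1 };                /* dirty set, break set */
    const char *excp;

    check_access(&e, PG_WRITE, 0, &excp);
    printf("PSW_X clear: %s\n", excp);
    check_access(&e, PG_WRITE, 1, &excp);
    printf("PSW_X set:   %s\n", excp ? excp : "write allowed once");
    return 0;
}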




[PATCH v2 13/45] target/hppa: Add space arguments to install_iaq_entries

2024-05-13 Thread Richard Henderson
Move space assignments to a central location.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 58 +++--
 1 file changed, 27 insertions(+), 31 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index d24220c60f..05383dcd04 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -624,8 +624,9 @@ static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 
dest,
 }
 }
 
-static void install_iaq_entries(DisasContext *ctx, uint64_t bi, TCGv_i64 bv,
-uint64_t ni, TCGv_i64 nv)
+static void install_iaq_entries(DisasContext *ctx,
+uint64_t bi, TCGv_i64 bv, TCGv_i64 bs,
+uint64_t ni, TCGv_i64 nv, TCGv_i64 ns)
 {
 copy_iaoq_entry(ctx, cpu_iaoq_f, bi, bv);
 
@@ -639,6 +640,12 @@ static void install_iaq_entries(DisasContext *ctx, 
uint64_t bi, TCGv_i64 bv,
 tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
  gva_offset_mask(ctx->tb_flags));
 }
+if (bs) {
+tcg_gen_mov_i64(cpu_iasq_f, bs);
+}
+if (ns || bs) {
+tcg_gen_mov_i64(cpu_iasq_b, ns ? ns : bs);
+}
 }
 
 static void install_link(DisasContext *ctx, unsigned link, bool with_sr0)
@@ -670,7 +677,8 @@ static void gen_excp_1(int exception)
 
 static void gen_excp(DisasContext *ctx, int exception)
 {
-install_iaq_entries(ctx, ctx->iaoq_f, cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
+install_iaq_entries(ctx, ctx->iaoq_f, cpu_iaoq_f, NULL,
+ctx->iaoq_b, cpu_iaoq_b, NULL);
 nullify_save(ctx);
 gen_excp_1(exception);
 ctx->base.is_jmp = DISAS_NORETURN;
@@ -724,10 +732,11 @@ static void gen_goto_tb(DisasContext *ctx, int which,
 {
 if (use_goto_tb(ctx, b, n)) {
 tcg_gen_goto_tb(which);
-install_iaq_entries(ctx, b, NULL, n, NULL);
+install_iaq_entries(ctx, b, NULL, NULL, n, NULL, NULL);
 tcg_gen_exit_tb(ctx->base.tb, which);
 } else {
-install_iaq_entries(ctx, b, cpu_iaoq_b, n, ctx->iaoq_n_var);
+install_iaq_entries(ctx, b, cpu_iaoq_b, ctx->iasq_b,
+n, ctx->iaoq_n_var, ctx->iasq_n);
 tcg_gen_lookup_and_goto_ptr();
 }
 }
@@ -1916,7 +1925,7 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 install_link(ctx, link, false);
 if (is_n) {
 if (use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, next, -1, NULL);
+install_iaq_entries(ctx, -1, next, NULL, -1, NULL, NULL);
 nullify_set(ctx, 0);
 ctx->base.is_jmp = DISAS_IAQ_N_UPDATED;
 return true;
@@ -1935,10 +1944,11 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 
 install_link(ctx, link, false);
 if (is_n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, next, -1, NULL);
+install_iaq_entries(ctx, -1, next, NULL, -1, NULL, NULL);
 nullify_set(ctx, 0);
 } else {
-install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, next);
+install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, ctx->iasq_b,
+-1, next, NULL);
 nullify_set(ctx, is_n);
 }
 
@@ -2026,7 +2036,7 @@ static void do_page_zero(DisasContext *ctx)
 tcg_gen_st_i64(cpu_gr[26], tcg_env, offsetof(CPUHPPAState, cr[27]));
 tmp = tcg_temp_new_i64();
 tcg_gen_ori_i64(tmp, cpu_gr[31], 3);
-install_iaq_entries(ctx, -1, tmp, -1, NULL);
+install_iaq_entries(ctx, -1, tmp, NULL, -1, NULL, NULL);
 ctx->base.is_jmp = DISAS_IAQ_N_UPDATED;
 break;
 
@@ -2770,8 +2780,8 @@ static bool trans_or(DisasContext *ctx, arg_rrr_cf_d *a)
 nullify_over(ctx);
 
 /* Advance the instruction queue.  */
-install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b,
-ctx->iaoq_n, ctx->iaoq_n_var);
+install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, ctx->iasq_b,
+ctx->iaoq_n, ctx->iaoq_n_var, ctx->iasq_n);
 nullify_set(ctx, 0);
 
 /* Tell the qemu main loop to halt until this cpu has work.  */
@@ -3921,16 +3931,11 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 load_spr(ctx, new_spc, a->sp);
 install_link(ctx, a->l, true);
 if (a->n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, tmp, -1, NULL);
-tcg_gen_mov_i64(cpu_iasq_f, new_spc);
-tcg_gen_mov_i64(cpu_iasq_b, new_spc);
+install_iaq_entries(ctx, -1, tmp, new_spc, -1, NULL, new_spc);
 nullify_set(ctx, 0);
 } else {
-install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, tmp);
-if (ctx->iasq_b) {
-tcg_gen_mov_i64(cpu_iasq_f, ctx->iasq_b);
-}
-tc

[PATCH v2 18/45] target/hppa: Use displacements in DisasIAQE

2024-05-13 Thread Richard Henderson
This is a first step in enabling CF_PCREL, but for now
we regenerate the absolute address before writeback.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 43 ++---
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 4e2e35f9cc..196297422b 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -45,9 +45,9 @@ typedef struct DisasCond {
 typedef struct DisasIAQE {
 /* IASQ; may be null for no change from TB. */
 TCGv_i64 space;
-/* IAOQ base; may be null for immediate absolute address. */
+/* IAOQ base; may be null for relative address. */
 TCGv_i64 base;
-/* IAOQ addend; absolute immedate address if base is null. */
+/* IAOQ addend; if base is null, relative to ctx->iaoq_first. */
 int64_t disp;
 } DisasIAQE;
 
@@ -60,6 +60,9 @@ typedef struct DisasContext {
 /* IAQ_Next, for jumps, otherwise null for simple advance. */
 DisasIAQE iaq_j, *iaq_n;
 
+/* IAOQ_Front at entry to TB. */
+uint64_t iaoq_first;
+
 DisasCond null_cond;
 TCGLabel *null_lab;
 
@@ -640,7 +643,7 @@ static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 
dest,
 uint64_t mask = gva_offset_mask(ctx->tb_flags);
 
 if (src->base == NULL) {
-tcg_gen_movi_i64(dest, src->disp & mask);
+tcg_gen_movi_i64(dest, (ctx->iaoq_first + src->disp) & mask);
 } else if (src->disp == 0) {
 tcg_gen_andi_i64(dest, src->base, mask);
 } else {
@@ -672,12 +675,8 @@ static void install_link(DisasContext *ctx, unsigned link, 
bool with_sr0)
 {
 tcg_debug_assert(ctx->null_cond.c == TCG_COND_NEVER);
 if (link) {
-if (ctx->iaq_b.base) {
-tcg_gen_addi_i64(cpu_gr[link], ctx->iaq_b.base,
- ctx->iaq_b.disp + 4);
-} else {
-tcg_gen_movi_i64(cpu_gr[link], ctx->iaq_b.disp + 4);
-}
+DisasIAQE next = iaqe_incr(&ctx->iaq_b, 4);
+copy_iaoq_entry(ctx, cpu_gr[link], &next);
 #ifndef CONFIG_USER_ONLY
 if (with_sr0) {
 tcg_gen_mov_i64(cpu_sr[0], cpu_iasq_b);
@@ -730,7 +729,7 @@ static bool use_goto_tb(DisasContext *ctx, const DisasIAQE 
*f,
 {
 return (!iaqe_variable(f) &&
 (b == NULL || !iaqe_variable(b)) &&
-translator_use_goto_tb(&ctx->base, f->disp));
+translator_use_goto_tb(&ctx->base, ctx->iaoq_first + f->disp));
 }
 
 /* If the next insn is to be nullified, and it's on the same page,
@@ -741,7 +740,8 @@ static bool use_nullify_skip(DisasContext *ctx)
 {
 return (!(tb_cflags(ctx->base.tb) & CF_BP_PAGE)
 && !iaqe_variable(&ctx->iaq_b)
-&& is_same_page(&ctx->base, ctx->iaq_b.disp));
+&& (((ctx->iaoq_first + ctx->iaq_b.disp) ^ ctx->iaoq_first)
+& TARGET_PAGE_MASK) == 0);
 }
 
 static void gen_goto_tb(DisasContext *ctx, int which,
@@ -2004,6 +2004,8 @@ static TCGv_i64 do_ibranch_priv(DisasContext *ctx, 
TCGv_i64 offset)
aforementioned BE.  */
 static void do_page_zero(DisasContext *ctx)
 {
+assert(ctx->iaq_f.disp == 0);
+
 /* If by some means we get here with PSW[N]=1, that implies that
the B,GATE instruction would be skipped, and we'd fault on the
next insn within the privileged page.  */
@@ -2023,11 +2025,11 @@ static void do_page_zero(DisasContext *ctx)
non-sequential instruction execution.  Normally the PSW[B] bit
detects this by disallowing the B,GATE instruction to execute
under such conditions.  */
-if (iaqe_variable(&ctx->iaq_b) || ctx->iaq_b.disp != ctx->iaq_f.disp + 4) {
+if (iaqe_variable(&ctx->iaq_b) || ctx->iaq_b.disp != 4) {
 goto do_sigill;
 }
 
-switch (ctx->iaq_f.disp & -4) {
+switch (ctx->base.pc_first) {
 case 0x00: /* Null pointer call */
 gen_excp_1(EXCP_IMP);
 ctx->base.is_jmp = DISAS_NORETURN;
@@ -4618,8 +4620,8 @@ static void hppa_tr_init_disas_context(DisasContextBase 
*dcbase, CPUState *cs)
 #ifdef CONFIG_USER_ONLY
 ctx->privilege = MMU_IDX_TO_PRIV(MMU_USER_IDX);
 ctx->mmu_idx = MMU_USER_IDX;
-ctx->iaq_f.disp = ctx->base.pc_first | ctx->privilege;
-ctx->iaq_b.disp = ctx->base.tb->cs_base | ctx->privilege;
+ctx->iaoq_first = ctx->base.pc_first | ctx->privilege;
+ctx->iaq_b.disp = ctx->base.tb->cs_base - ctx->base.pc_first;
 ctx->unalign = (ctx->tb_flags & TB_FLAG_UNALIGN ? MO_UNALN : MO_ALIGN);
 #else
 ctx->privilege = (ctx->tb_flags >> TB_FLAG_PRIV_SHIFT) & 3;
@@ -4632,9 +4634,10 @@ static void hppa_tr_init_disas_context(DisasContextBase 
*dcbase, CPUState *cs)
 uint64_t iasq_f = cs_base & ~0xull;
 int32_t diff = cs_base;
 
-ctx->iaq_f.disp = (ctx->base.pc

[PATCH v2 14/45] target/hppa: Add space argument to do_ibranch

2024-05-13 Thread Richard Henderson
This allows unification of BE, BLR, BV, BVE with a common helper.
Since we can now track space with IAQ_Next, we can let the
TranslationBlock continue across the delay slot with BE, BVE.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 76 ++---
 1 file changed, 26 insertions(+), 50 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 05383dcd04..ae66068123 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1913,8 +1913,8 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 
 /* Emit an unconditional branch to an indirect target.  This handles
nullification of the branch itself.  */
-static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
-   unsigned link, bool is_n)
+static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest, TCGv_i64 dspc,
+   unsigned link, bool with_sr0, bool is_n)
 {
 TCGv_i64 next;
 
@@ -1922,10 +1922,10 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 next = tcg_temp_new_i64();
 tcg_gen_mov_i64(next, dest);
 
-install_link(ctx, link, false);
+install_link(ctx, link, with_sr0);
 if (is_n) {
 if (use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, next, NULL, -1, NULL, NULL);
+install_iaq_entries(ctx, -1, next, dspc, -1, NULL, NULL);
 nullify_set(ctx, 0);
 ctx->base.is_jmp = DISAS_IAQ_N_UPDATED;
 return true;
@@ -1934,6 +1934,7 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 }
 ctx->iaoq_n = -1;
 ctx->iaoq_n_var = next;
+ctx->iasq_n = dspc;
 return true;
 }
 
@@ -1942,13 +1943,13 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 next = tcg_temp_new_i64();
 tcg_gen_mov_i64(next, dest);
 
-install_link(ctx, link, false);
+install_link(ctx, link, with_sr0);
 if (is_n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, next, NULL, -1, NULL, NULL);
+install_iaq_entries(ctx, -1, next, dspc, -1, NULL, NULL);
 nullify_set(ctx, 0);
 } else {
 install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, ctx->iasq_b,
--1, next, NULL);
+-1, next, dspc);
 nullify_set(ctx, is_n);
 }
 
@@ -3915,33 +3916,18 @@ static bool trans_depi_sar(DisasContext *ctx, 
arg_depi_sar *a)
 
 static bool trans_be(DisasContext *ctx, arg_be *a)
 {
-TCGv_i64 tmp;
+TCGv_i64 dest = tcg_temp_new_i64();
+TCGv_i64 space = NULL;
 
-tmp = tcg_temp_new_i64();
-tcg_gen_addi_i64(tmp, load_gpr(ctx, a->b), a->disp);
-tmp = do_ibranch_priv(ctx, tmp);
+tcg_gen_addi_i64(dest, load_gpr(ctx, a->b), a->disp);
+dest = do_ibranch_priv(ctx, dest);
 
-#ifdef CONFIG_USER_ONLY
-return do_ibranch(ctx, tmp, a->l, a->n);
-#else
-TCGv_i64 new_spc = tcg_temp_new_i64();
-
-nullify_over(ctx);
-
-load_spr(ctx, new_spc, a->sp);
-install_link(ctx, a->l, true);
-if (a->n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, tmp, new_spc, -1, NULL, new_spc);
-nullify_set(ctx, 0);
-} else {
-install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, ctx->iasq_b,
--1, tmp, new_spc);
-nullify_set(ctx, a->n);
-}
-tcg_gen_lookup_and_goto_ptr();
-ctx->base.is_jmp = DISAS_NORETURN;
-return nullify_end(ctx);
+#ifndef CONFIG_USER_ONLY
+space = tcg_temp_new_i64();
+load_spr(ctx, space, a->sp);
 #endif
+
+return do_ibranch(ctx, dest, space, a->l, true, a->n);
 }
 
 static bool trans_bl(DisasContext *ctx, arg_bl *a)
@@ -4010,7 +3996,7 @@ static bool trans_blr(DisasContext *ctx, arg_blr *a)
 tcg_gen_shli_i64(tmp, load_gpr(ctx, a->x), 3);
 tcg_gen_addi_i64(tmp, tmp, ctx->iaoq_f + 8);
 /* The computation here never changes privilege level.  */
-return do_ibranch(ctx, tmp, a->l, a->n);
+return do_ibranch(ctx, tmp, NULL, a->l, false, a->n);
 } else {
 /* BLR R0,RX is a good way to load PC+8 into RX.  */
 return do_dbranch(ctx, 0, a->l, a->n);
@@ -4029,30 +4015,20 @@ static bool trans_bv(DisasContext *ctx, arg_bv *a)
 tcg_gen_add_i64(dest, dest, load_gpr(ctx, a->b));
 }
 dest = do_ibranch_priv(ctx, dest);
-return do_ibranch(ctx, dest, 0, a->n);
+return do_ibranch(ctx, dest, NULL, 0, false, a->n);
 }
 
 static bool trans_bve(DisasContext *ctx, arg_bve *a)
 {
-TCGv_i64 dest;
+TCGv_i64 b = load_gpr(ctx, a->b);
+TCGv_i64 dest = do_ibranch_priv(ctx, b);
+TCGv_i64 space = NULL;
 
-#ifdef CONFIG_USER_ONLY
-dest = do_ibranch_priv(ctx, load_gpr(ctx, a->b));
-return do_ibranch(ctx, dest, a->l, a->n);
-#else
-

[PATCH v2 35/45] target/hppa: Split PSW X and B into their own field

2024-05-13 Thread Richard Henderson
Generally, both of these bits are cleared at the end of each
instruction.  By separating these, we will be able to clear
both with a single insn, instead of 2 or 3.

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h| 3 ++-
 target/hppa/helper.c | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index 1232a4cef2..f247ad56d7 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -208,7 +208,8 @@ typedef struct CPUArchState {
 uint64_t fr[32];
 uint64_t sr[8];  /* stored shifted into place for gva */
 
-target_ulong psw;/* All psw bits except the following:  */
+uint32_t psw;/* All psw bits except the following:  */
+uint32_t psw_xb; /* X and B, in their normal positions */
 target_ulong psw_n;  /* boolean */
 target_long psw_v;   /* in most significant bit */
 
diff --git a/target/hppa/helper.c b/target/hppa/helper.c
index 7d22c248fb..b79ddd8184 100644
--- a/target/hppa/helper.c
+++ b/target/hppa/helper.c
@@ -54,7 +54,7 @@ target_ulong cpu_hppa_get_psw(CPUHPPAState *env)
 
 psw |= env->psw_n * PSW_N;
 psw |= (env->psw_v < 0) * PSW_V;
-psw |= env->psw;
+psw |= env->psw | env->psw_xb;
 
 return psw;
 }
@@ -76,8 +76,8 @@ void cpu_hppa_put_psw(CPUHPPAState *env, target_ulong psw)
 }
 psw &= ~reserved;
 
-env->psw = psw & (uint32_t)~(PSW_N | PSW_V | PSW_CB);
-
+env->psw = psw & (uint32_t)~(PSW_B | PSW_N | PSW_V | PSW_X | PSW_CB);
+env->psw_xb = psw & (PSW_X | PSW_B);
 env->psw_n = (psw / PSW_N) & 1;
 env->psw_v = -((psw / PSW_V) & 1);
 
-- 
2.34.1




[PATCH v2 24/45] target/hppa: Use TCG_COND_TST* in trans_bb_imm

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 12 +++-
 1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 47f4b23d1b..d8973a63df 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -3515,18 +3515,12 @@ static bool trans_bb_sar(DisasContext *ctx, arg_bb_sar 
*a)
 
 static bool trans_bb_imm(DisasContext *ctx, arg_bb_imm *a)
 {
-TCGv_i64 tmp, tcg_r;
 DisasCond cond;
-int p;
+int p = a->p | (a->d ? 0 : 32);
 
 nullify_over(ctx);
-
-tmp = tcg_temp_new_i64();
-tcg_r = load_gpr(ctx, a->r);
-p = a->p | (a->d ? 0 : 32);
-tcg_gen_shli_i64(tmp, tcg_r, p);
-
-cond = cond_make_ti(a->c ? TCG_COND_GE : TCG_COND_LT, tmp, 0);
+cond = cond_make_vi(a->c ? TCG_COND_TSTEQ : TCG_COND_TSTNE,
+load_gpr(ctx, a->r), 1ull << (63 - p));
 return do_cbranch(ctx, a->disp, a->n, &cond);
 }
 
-- 
2.34.1




[PATCH v2 03/45] target/hppa: Move constant destination check into use_goto_tb

2024-05-13 Thread Richard Henderson
Share this check between gen_goto_tb and hppa_tr_translate_insn.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 6d45611888..398803981c 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -662,9 +662,10 @@ static bool gen_illegal(DisasContext *ctx)
 } while (0)
 #endif
 
-static bool use_goto_tb(DisasContext *ctx, uint64_t dest)
+static bool use_goto_tb(DisasContext *ctx, uint64_t bofs, uint64_t nofs)
 {
-return translator_use_goto_tb(&ctx->base, dest);
+return (bofs != -1 && nofs != -1 &&
+translator_use_goto_tb(&ctx->base, bofs));
 }
 
 /* If the next insn is to be nullified, and it's on the same page,
@@ -678,16 +679,16 @@ static bool use_nullify_skip(DisasContext *ctx)
 }
 
 static void gen_goto_tb(DisasContext *ctx, int which,
-uint64_t f, uint64_t b)
+uint64_t b, uint64_t n)
 {
-if (f != -1 && b != -1 && use_goto_tb(ctx, f)) {
+if (use_goto_tb(ctx, b, n)) {
 tcg_gen_goto_tb(which);
-copy_iaoq_entry(ctx, cpu_iaoq_f, f, NULL);
-copy_iaoq_entry(ctx, cpu_iaoq_b, b, NULL);
+copy_iaoq_entry(ctx, cpu_iaoq_f, b, NULL);
+copy_iaoq_entry(ctx, cpu_iaoq_b, n, NULL);
 tcg_gen_exit_tb(ctx->base.tb, which);
 } else {
-copy_iaoq_entry(ctx, cpu_iaoq_f, f, cpu_iaoq_b);
-copy_iaoq_entry(ctx, cpu_iaoq_b, b, ctx->iaoq_n_var);
+copy_iaoq_entry(ctx, cpu_iaoq_f, b, cpu_iaoq_b);
+copy_iaoq_entry(ctx, cpu_iaoq_b, n, ctx->iaoq_n_var);
 tcg_gen_lookup_and_goto_ptr();
 }
 }
@@ -4744,8 +4745,7 @@ static void hppa_tr_translate_insn(DisasContextBase 
*dcbase, CPUState *cs)
 /* Advance the insn queue.  Note that this check also detects
a priority change within the instruction queue.  */
 if (ret == DISAS_NEXT && ctx->iaoq_b != ctx->iaoq_f + 4) {
-if (ctx->iaoq_b != -1 && ctx->iaoq_n != -1
-&& use_goto_tb(ctx, ctx->iaoq_b)
+if (use_goto_tb(ctx, ctx->iaoq_b, ctx->iaoq_n)
 && (ctx->null_cond.c == TCG_COND_NEVER
 || ctx->null_cond.c == TCG_COND_ALWAYS)) {
 nullify_set(ctx, ctx->null_cond.c == TCG_COND_ALWAYS);
-- 
2.34.1




[PATCH v2 15/45] target/hppa: Use umax in do_ibranch_priv

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index ae66068123..22935f4645 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1981,7 +1981,7 @@ static TCGv_i64 do_ibranch_priv(DisasContext *ctx, 
TCGv_i64 offset)
 dest = tcg_temp_new_i64();
 tcg_gen_andi_i64(dest, offset, -4);
 tcg_gen_ori_i64(dest, dest, ctx->privilege);
-tcg_gen_movcond_i64(TCG_COND_GTU, dest, dest, offset, dest, offset);
+tcg_gen_umax_i64(dest, dest, offset);
 break;
 }
 return dest;
-- 
2.34.1




[PATCH v2 06/45] target/hppa: Use CF_BP_PAGE instead of cpu_breakpoint_test

2024-05-13 Thread Richard Henderson
The generic tcg driver will have already checked for breakpoints.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 140dfb747a..d272be0e6e 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -674,8 +674,9 @@ static bool use_goto_tb(DisasContext *ctx, uint64_t bofs, 
uint64_t nofs)
executing a TB that merely branches to the next TB.  */
 static bool use_nullify_skip(DisasContext *ctx)
 {
-return (((ctx->iaoq_b ^ ctx->iaoq_f) & TARGET_PAGE_MASK) == 0
-&& !cpu_breakpoint_test(ctx->cs, ctx->iaoq_b, BP_ANY));
+return (!(tb_cflags(ctx->base.tb) & CF_BP_PAGE)
+&& ctx->iaoq_b != -1
+&& is_same_page(>base, ctx->iaoq_b));
 }
 
 static void gen_goto_tb(DisasContext *ctx, int which,
-- 
2.34.1




[PATCH v2 30/45] target/hppa: Use delay_excp for conditional trap on overflow

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/helper.h |  1 -
 target/hppa/int_helper.c |  2 +-
 target/hppa/op_helper.c  |  7 ---
 target/hppa/translate.c  | 21 +
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/target/hppa/helper.h b/target/hppa/helper.h
index 3d0d143aed..c12b48a04a 100644
--- a/target/hppa/helper.h
+++ b/target/hppa/helper.h
@@ -1,5 +1,4 @@
 DEF_HELPER_2(excp, noreturn, env, int)
-DEF_HELPER_FLAGS_2(tsv, TCG_CALL_NO_WG, void, env, tl)
 
 DEF_HELPER_FLAGS_3(stby_b, TCG_CALL_NO_WG, void, env, tl, tl)
 DEF_HELPER_FLAGS_3(stby_b_parallel, TCG_CALL_NO_WG, void, env, tl, tl)
diff --git a/target/hppa/int_helper.c b/target/hppa/int_helper.c
index 1aa3e88ef1..97e5f0b9a7 100644
--- a/target/hppa/int_helper.c
+++ b/target/hppa/int_helper.c
@@ -134,13 +134,13 @@ void hppa_cpu_do_interrupt(CPUState *cs)
 switch (i) {
 case EXCP_ILL:
 case EXCP_BREAK:
+case EXCP_OVERFLOW:
 case EXCP_COND:
 case EXCP_PRIV_REG:
 case EXCP_PRIV_OPR:
 /* IIR set via translate.c.  */
 break;
 
-case EXCP_OVERFLOW:
 case EXCP_ASSIST:
 case EXCP_DTLB_MISS:
 case EXCP_NA_ITLB_MISS:
diff --git a/target/hppa/op_helper.c b/target/hppa/op_helper.c
index a8b69fd481..66cad78a57 100644
--- a/target/hppa/op_helper.c
+++ b/target/hppa/op_helper.c
@@ -42,13 +42,6 @@ G_NORETURN void hppa_dynamic_excp(CPUHPPAState *env, int 
excp, uintptr_t ra)
 cpu_loop_exit_restore(cs, ra);
 }
 
-void HELPER(tsv)(CPUHPPAState *env, target_ulong cond)
-{
-if (unlikely((target_long)cond < 0)) {
-hppa_dynamic_excp(env, EXCP_OVERFLOW, GETPC());
-}
-}
-
 static void atomic_store_mask32(CPUHPPAState *env, target_ulong addr,
 uint32_t val, uint32_t mask, uintptr_t ra)
 {
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 7fa0c86a8f..d061b30191 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1135,6 +1135,17 @@ static void gen_tc(DisasContext *ctx, DisasCond *cond)
 }
 }
 
+static void gen_tsv(DisasContext *ctx, TCGv_i64 *sv, bool d)
+{
+DisasCond cond = do_cond(ctx, /* SV */ 12, d, NULL, NULL, *sv);
+DisasDelayException *e = delay_excp(ctx, EXCP_OVERFLOW);
+
+tcg_gen_brcond_i64(cond.c, cond.a0, cond.a1, e->lab);
+
+/* In the non-trap path, V is known zero. */
+*sv = tcg_constant_i64(0);
+}
+
 static void do_add(DisasContext *ctx, unsigned rt, TCGv_i64 orig_in1,
TCGv_i64 in2, unsigned shift, bool is_l,
bool is_tsv, bool is_tc, bool is_c, unsigned cf, bool d)
@@ -1177,10 +1188,7 @@ static void do_add(DisasContext *ctx, unsigned rt, 
TCGv_i64 orig_in1,
 if (is_tsv || cond_need_sv(c)) {
 sv = do_add_sv(ctx, dest, in1, in2, orig_in1, shift, d);
 if (is_tsv) {
-if (!d) {
-tcg_gen_ext32s_i64(sv, sv);
-}
-gen_helper_tsv(tcg_env, sv);
+gen_tsv(ctx, &sv, d);
 }
 }
 
@@ -1281,10 +1289,7 @@ static void do_sub(DisasContext *ctx, unsigned rt, 
TCGv_i64 in1,
 if (is_tsv || cond_need_sv(c)) {
 sv = do_sub_sv(ctx, dest, in1, in2);
 if (is_tsv) {
-if (!d) {
-tcg_gen_ext32s_i64(sv, sv);
-}
-gen_helper_tsv(tcg_env, sv);
+gen_tsv(ctx, &sv, d);
 }
 }
 
-- 
2.34.1




[PATCH v2 08/45] target/hppa: Add install_link

2024-05-13 Thread Richard Henderson
Add a common routine for writing the return address.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 54 +++--
 1 file changed, 31 insertions(+), 23 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 08d5e2a4bc..f816b337ee 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -634,6 +634,23 @@ static void install_iaq_entries(DisasContext *ctx, 
uint64_t bi, TCGv_i64 bv,
 }
 }
 
+static void install_link(DisasContext *ctx, unsigned link, bool with_sr0)
+{
+tcg_debug_assert(ctx->null_cond.c == TCG_COND_NEVER);
+if (link) {
+if (ctx->iaoq_b == -1) {
+tcg_gen_addi_i64(cpu_gr[link], cpu_iaoq_b, 4);
+} else {
+tcg_gen_movi_i64(cpu_gr[link], ctx->iaoq_b + 4);
+}
+#ifndef CONFIG_USER_ONLY
+if (with_sr0) {
+tcg_gen_mov_i64(cpu_sr[0], cpu_iasq_b);
+}
+#endif
+}
+}
+
 static inline uint64_t iaoq_dest(DisasContext *ctx, int64_t disp)
 {
 return ctx->iaoq_f + disp + 8;
@@ -1787,9 +1804,7 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 uint64_t dest = iaoq_dest(ctx, disp);
 
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
-if (link != 0) {
-copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
-}
+install_link(ctx, link, false);
 ctx->iaoq_n = dest;
 if (is_n) {
 ctx->null_cond.c = TCG_COND_ALWAYS;
@@ -1797,10 +1812,7 @@ static bool do_dbranch(DisasContext *ctx, int64_t disp,
 } else {
 nullify_over(ctx);
 
-if (link != 0) {
-copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
-}
-
+install_link(ctx, link, false);
 if (is_n && use_nullify_skip(ctx)) {
 nullify_set(ctx, 0);
 gen_goto_tb(ctx, 0, dest, dest + 4);
@@ -1892,9 +1904,7 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 next = tcg_temp_new_i64();
 tcg_gen_mov_i64(next, dest);
 
-if (link != 0) {
-copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
-}
+install_link(ctx, link, false);
 if (is_n) {
 if (use_nullify_skip(ctx)) {
 install_iaq_entries(ctx, -1, next, -1, NULL);
@@ -1911,16 +1921,17 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 
 nullify_over(ctx);
 
+next = tcg_temp_new_i64();
+tcg_gen_mov_i64(next, dest);
+
+install_link(ctx, link, false);
 if (is_n && use_nullify_skip(ctx)) {
-install_iaq_entries(ctx, -1, dest, -1, NULL);
+install_iaq_entries(ctx, -1, next, -1, NULL);
 nullify_set(ctx, 0);
 } else {
-install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, dest);
+install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, next);
 nullify_set(ctx, is_n);
 }
-if (link != 0) {
-copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
-}
 
 tcg_gen_lookup_and_goto_ptr();
 ctx->base.is_jmp = DISAS_NORETURN;
@@ -3899,10 +3910,7 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 nullify_over(ctx);
 
 load_spr(ctx, new_spc, a->sp);
-if (a->l) {
-copy_iaoq_entry(ctx, cpu_gr[31], ctx->iaoq_n, ctx->iaoq_n_var);
-tcg_gen_mov_i64(cpu_sr[0], cpu_iasq_b);
-}
+install_link(ctx, a->l, true);
 if (a->n && use_nullify_skip(ctx)) {
 install_iaq_entries(ctx, -1, tmp, -1, NULL);
 tcg_gen_mov_i64(cpu_iasq_f, new_spc);
@@ -4019,16 +4027,16 @@ static bool trans_bve(DisasContext *ctx, arg_bve *a)
 return do_ibranch(ctx, dest, a->l, a->n);
 #else
 nullify_over(ctx);
-dest = do_ibranch_priv(ctx, load_gpr(ctx, a->b));
+dest = tcg_temp_new_i64();
+tcg_gen_mov_i64(dest, load_gpr(ctx, a->b));
+dest = do_ibranch_priv(ctx, dest);
 
+install_link(ctx, a->l, false);
 install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, dest);
 if (ctx->iaoq_b == -1) {
 tcg_gen_mov_i64(cpu_iasq_f, cpu_iasq_b);
 }
 tcg_gen_mov_i64(cpu_iasq_b, space_select(ctx, 0, dest));
-if (a->l) {
-copy_iaoq_entry(ctx, cpu_gr[a->l], ctx->iaoq_n, ctx->iaoq_n_var);
-}
 nullify_set(ctx, a->n);
 tcg_gen_lookup_and_goto_ptr();
 ctx->base.is_jmp = DISAS_NORETURN;
-- 
2.34.1




[PATCH v2 04/45] target/hppa: Pass displacement to do_dbranch

2024-05-13 Thread Richard Henderson
Pass a displacement instead of an absolute value.

In trans_be, remove the user-only do_dbranch case.  The branch we are
attempting to optimize is to the zero page, which is perforce on a
different page than the code currently executing, which means that
we will *not* use a goto_tb.  Use a plain indirect branch instead,
which is what we got out of the attempted direct branch anyway.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 33 +
 1 file changed, 9 insertions(+), 24 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 398803981c..4c42b518c5 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1766,9 +1766,11 @@ static bool do_fop_dedd(DisasContext *ctx, unsigned rt,
 
 /* Emit an unconditional branch to a direct target, which may or may not
have already had nullification handled.  */
-static bool do_dbranch(DisasContext *ctx, uint64_t dest,
+static bool do_dbranch(DisasContext *ctx, int64_t disp,
unsigned link, bool is_n)
 {
+uint64_t dest = iaoq_dest(ctx, disp);
+
 if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
 if (link != 0) {
 copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
@@ -1815,10 +1817,7 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 
 /* Handle TRUE and NEVER as direct branches.  */
 if (c == TCG_COND_ALWAYS) {
-return do_dbranch(ctx, dest, 0, is_n && disp >= 0);
-}
-if (c == TCG_COND_NEVER) {
-return do_dbranch(ctx, ctx->iaoq_n, 0, is_n && disp < 0);
+return do_dbranch(ctx, disp, 0, is_n && disp >= 0);
 }
 
 taken = gen_new_label();
@@ -3914,22 +3913,6 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 {
 TCGv_i64 tmp;
 
-#ifdef CONFIG_USER_ONLY
-/* ??? It seems like there should be a good way of using
-   "be disp(sr2, r0)", the canonical gateway entry mechanism
-   to our advantage.  But that appears to be inconvenient to
-   manage along side branch delay slots.  Therefore we handle
-   entry into the gateway page via absolute address.  */
-/* Since we don't implement spaces, just branch.  Do notice the special
-   case of "be disp(*,r0)" using a direct branch to disp, so that we can
-   goto_tb to the TB containing the syscall.  */
-if (a->b == 0) {
-return do_dbranch(ctx, a->disp, a->l, a->n);
-}
-#else
-nullify_over(ctx);
-#endif
-
 tmp = tcg_temp_new_i64();
 tcg_gen_addi_i64(tmp, load_gpr(ctx, a->b), a->disp);
 tmp = do_ibranch_priv(ctx, tmp);
@@ -3939,6 +3922,8 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 #else
 TCGv_i64 new_spc = tcg_temp_new_i64();
 
+nullify_over(ctx);
+
 load_spr(ctx, new_spc, a->sp);
 if (a->l) {
 copy_iaoq_entry(ctx, cpu_gr[31], ctx->iaoq_n, ctx->iaoq_n_var);
@@ -3968,7 +3953,7 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 
 static bool trans_bl(DisasContext *ctx, arg_bl *a)
 {
-return do_dbranch(ctx, iaoq_dest(ctx, a->disp), a->l, a->n);
+return do_dbranch(ctx, a->disp, a->l, a->n);
 }
 
 static bool trans_b_gate(DisasContext *ctx, arg_b_gate *a)
@@ -4022,7 +4007,7 @@ static bool trans_b_gate(DisasContext *ctx, arg_b_gate *a)
 save_gpr(ctx, a->l, tmp);
 }
 
-return do_dbranch(ctx, dest, 0, a->n);
+return do_dbranch(ctx, dest - iaoq_dest(ctx, 0), 0, a->n);
 }
 
 static bool trans_blr(DisasContext *ctx, arg_blr *a)
@@ -4035,7 +4020,7 @@ static bool trans_blr(DisasContext *ctx, arg_blr *a)
 return do_ibranch(ctx, tmp, a->l, a->n);
 } else {
 /* BLR R0,RX is a good way to load PC+8 into RX.  */
-return do_dbranch(ctx, ctx->iaoq_f + 8, a->l, a->n);
+return do_dbranch(ctx, 0, a->l, a->n);
 }
 }
 
-- 
2.34.1




[PATCH v2 19/45] target/hppa: Rename cond_make_* helpers

2024-05-13 Thread Richard Henderson
Use 'v' for a variable that needs copying, 't' for a temp that
doesn't need copying, and 'i' for an immediate, and use this
naming for both arguments of the comparison.  So:

   cond_make_tmp -> cond_make_tt
   cond_make_0_tmp -> cond_make_ti
   cond_make_0 -> cond_make_vi
   cond_make -> cond_make_vv

Pass 0 explicitly, rather than implicitly in the function name.
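
For illustration, the same zero test in the old and new spellings (both
taken from the diff below):

    cond = cond_make_0(TCG_COND_EQ, res);       /* old: copy res, compare with 0 */
    cond = cond_make_vi(TCG_COND_EQ, res, 0);   /* new: 'v'ariable res, 'i'mmediate 0 */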

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 52 -
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 196297422b..5e32d985c9 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -345,32 +345,32 @@ static DisasCond cond_make_n(void)
 };
 }
 
-static DisasCond cond_make_tmp(TCGCond c, TCGv_i64 a0, TCGv_i64 a1)
+static DisasCond cond_make_tt(TCGCond c, TCGv_i64 a0, TCGv_i64 a1)
 {
 assert (c != TCG_COND_NEVER && c != TCG_COND_ALWAYS);
 return (DisasCond){ .c = c, .a0 = a0, .a1 = a1 };
 }
 
-static DisasCond cond_make_0_tmp(TCGCond c, TCGv_i64 a0)
+static DisasCond cond_make_ti(TCGCond c, TCGv_i64 a0, uint64_t imm)
 {
-return cond_make_tmp(c, a0, tcg_constant_i64(0));
+return cond_make_tt(c, a0, tcg_constant_i64(imm));
 }
 
-static DisasCond cond_make_0(TCGCond c, TCGv_i64 a0)
+static DisasCond cond_make_vi(TCGCond c, TCGv_i64 a0, uint64_t imm)
 {
 TCGv_i64 tmp = tcg_temp_new_i64();
 tcg_gen_mov_i64(tmp, a0);
-return cond_make_0_tmp(c, tmp);
+return cond_make_ti(c, tmp, imm);
 }
 
-static DisasCond cond_make(TCGCond c, TCGv_i64 a0, TCGv_i64 a1)
+static DisasCond cond_make_vv(TCGCond c, TCGv_i64 a0, TCGv_i64 a1)
 {
 TCGv_i64 t0 = tcg_temp_new_i64();
 TCGv_i64 t1 = tcg_temp_new_i64();
 
 tcg_gen_mov_i64(t0, a0);
 tcg_gen_mov_i64(t1, a1);
-return cond_make_tmp(c, t0, t1);
+return cond_make_tt(c, t0, t1);
 }
 
 static void cond_free(DisasCond *cond)
@@ -788,7 +788,7 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 tcg_gen_ext32u_i64(tmp, res);
 res = tmp;
 }
-cond = cond_make_0(TCG_COND_EQ, res);
+cond = cond_make_vi(TCG_COND_EQ, res, 0);
 break;
 case 2: /* < / >=(N ^ V / !(N ^ V) */
 tmp = tcg_temp_new_i64();
@@ -796,7 +796,7 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 if (!d) {
 tcg_gen_ext32s_i64(tmp, tmp);
 }
-cond = cond_make_0_tmp(TCG_COND_LT, tmp);
+cond = cond_make_ti(TCG_COND_LT, tmp, 0);
 break;
 case 3: /* <= / >(N ^ V) | Z / !((N ^ V) | Z) */
 /*
@@ -818,10 +818,10 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 tcg_gen_sari_i64(tmp, tmp, 63);
 tcg_gen_and_i64(tmp, tmp, res);
 }
-cond = cond_make_0_tmp(TCG_COND_EQ, tmp);
+cond = cond_make_ti(TCG_COND_EQ, tmp, 0);
 break;
 case 4: /* NUV / UV  (!UV / UV) */
-cond = cond_make_0(TCG_COND_EQ, uv);
+cond = cond_make_vi(TCG_COND_EQ, uv, 0);
 break;
 case 5: /* ZNV / VNZ (!UV | Z / UV & !Z) */
 tmp = tcg_temp_new_i64();
@@ -829,7 +829,7 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 if (!d) {
 tcg_gen_ext32u_i64(tmp, tmp);
 }
-cond = cond_make_0_tmp(TCG_COND_EQ, tmp);
+cond = cond_make_ti(TCG_COND_EQ, tmp, 0);
 break;
 case 6: /* SV / NSV  (V / !V) */
 if (!d) {
@@ -837,12 +837,12 @@ static DisasCond do_cond(DisasContext *ctx, unsigned cf, 
bool d,
 tcg_gen_ext32s_i64(tmp, sv);
 sv = tmp;
 }
-cond = cond_make_0(TCG_COND_LT, sv);
+cond = cond_make_ti(TCG_COND_LT, sv, 0);
 break;
 case 7: /* OD / EV */
 tmp = tcg_temp_new_i64();
 tcg_gen_andi_i64(tmp, res, 1);
-cond = cond_make_0_tmp(TCG_COND_NE, tmp);
+cond = cond_make_ti(TCG_COND_NE, tmp, 0);
 break;
 default:
 g_assert_not_reached();
@@ -904,9 +904,9 @@ static DisasCond do_sub_cond(DisasContext *ctx, unsigned 
cf, bool d,
 tcg_gen_ext32s_i64(t1, in1);
 tcg_gen_ext32s_i64(t2, in2);
 }
-return cond_make_tmp(tc, t1, t2);
+return cond_make_tt(tc, t1, t2);
 }
-return cond_make(tc, in1, in2);
+return cond_make_vv(tc, in1, in2);
 }
 
 /*
@@ -978,9 +978,9 @@ static DisasCond do_log_cond(DisasContext *ctx, unsigned 
cf, bool d,
 } else {
 tcg_gen_ext32s_i64(tmp, res);
 }
-return cond_make_0_tmp(tc, tmp);
+return cond_make_ti(tc, tmp, 0);
 }
-return cond_make_0(tc, res);
+return cond_make_vi(tc, res, 0);
 }
 
 /* Similar, but for shift/extract/deposit conditions.  */
@@ -1039,7 +1039,7 @@ static DisasCond do_unit_zero_cond(unsigned cf, bool d, 
TCGv_i64 res)
 tcg_gen_andc_i64(tmp, tmp, 

[PATCH v2 07/45] target/hppa: Add install_iaq_entries

2024-05-13 Thread Richard Henderson
Instead of two separate copy_iaoq_entry calls, use one call to update
both IAQ_Front and IAQ_Back.  Simplify with an argument combination
that automatically handles a simple increment from Front to Back.
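
For illustration, the two calling forms (both appear in the diff below):
an explicit front/back pair, and the trivial-advance form where ni == -1
and nv == NULL make IAQ_Back default to IAQ_Front plus 4:

    install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b,
                        ctx->iaoq_n, ctx->iaoq_n_var);   /* explicit pair */
    install_iaq_entries(ctx, -1, next, -1, NULL);        /* trivial advance */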

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 64 +
 1 file changed, 33 insertions(+), 31 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index d272be0e6e..08d5e2a4bc 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -617,6 +617,23 @@ static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 
dest,
 }
 }
 
+static void install_iaq_entries(DisasContext *ctx, uint64_t bi, TCGv_i64 bv,
+uint64_t ni, TCGv_i64 nv)
+{
+copy_iaoq_entry(ctx, cpu_iaoq_f, bi, bv);
+
+/* Allow ni variable, with nv null, to indicate a trivial advance. */
+if (ni != -1 || nv) {
+copy_iaoq_entry(ctx, cpu_iaoq_b, ni, nv);
+} else if (bi != -1) {
+copy_iaoq_entry(ctx, cpu_iaoq_b, bi + 4, NULL);
+} else {
+tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_f, 4);
+tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
+ gva_offset_mask(ctx->tb_flags));
+}
+}
+
 static inline uint64_t iaoq_dest(DisasContext *ctx, int64_t disp)
 {
 return ctx->iaoq_f + disp + 8;
@@ -629,8 +646,7 @@ static void gen_excp_1(int exception)
 
 static void gen_excp(DisasContext *ctx, int exception)
 {
-copy_iaoq_entry(ctx, cpu_iaoq_f, ctx->iaoq_f, cpu_iaoq_f);
-copy_iaoq_entry(ctx, cpu_iaoq_b, ctx->iaoq_b, cpu_iaoq_b);
+install_iaq_entries(ctx, ctx->iaoq_f, cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
 nullify_save(ctx);
 gen_excp_1(exception);
 ctx->base.is_jmp = DISAS_NORETURN;
@@ -684,12 +700,10 @@ static void gen_goto_tb(DisasContext *ctx, int which,
 {
 if (use_goto_tb(ctx, b, n)) {
 tcg_gen_goto_tb(which);
-copy_iaoq_entry(ctx, cpu_iaoq_f, b, NULL);
-copy_iaoq_entry(ctx, cpu_iaoq_b, n, NULL);
+install_iaq_entries(ctx, b, NULL, n, NULL);
 tcg_gen_exit_tb(ctx->base.tb, which);
 } else {
-copy_iaoq_entry(ctx, cpu_iaoq_f, b, cpu_iaoq_b);
-copy_iaoq_entry(ctx, cpu_iaoq_b, n, ctx->iaoq_n_var);
+install_iaq_entries(ctx, b, cpu_iaoq_b, n, ctx->iaoq_n_var);
 tcg_gen_lookup_and_goto_ptr();
 }
 }
@@ -1883,9 +1897,7 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 }
 if (is_n) {
 if (use_nullify_skip(ctx)) {
-copy_iaoq_entry(ctx, cpu_iaoq_f, -1, next);
-tcg_gen_addi_i64(next, next, 4);
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, next);
+install_iaq_entries(ctx, -1, next, -1, NULL);
 nullify_set(ctx, 0);
 ctx->base.is_jmp = DISAS_IAQ_N_UPDATED;
 return true;
@@ -1900,14 +1912,10 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 nullify_over(ctx);
 
 if (is_n && use_nullify_skip(ctx)) {
-copy_iaoq_entry(ctx, cpu_iaoq_f, -1, dest);
-next = tcg_temp_new_i64();
-tcg_gen_addi_i64(next, dest, 4);
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, next);
+install_iaq_entries(ctx, -1, dest, -1, NULL);
 nullify_set(ctx, 0);
 } else {
-copy_iaoq_entry(ctx, cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, dest);
+install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b, -1, dest);
 nullify_set(ctx, is_n);
 }
 if (link != 0) {
@@ -1998,9 +2006,7 @@ static void do_page_zero(DisasContext *ctx)
 tcg_gen_st_i64(cpu_gr[26], tcg_env, offsetof(CPUHPPAState, cr[27]));
 tmp = tcg_temp_new_i64();
 tcg_gen_ori_i64(tmp, cpu_gr[31], 3);
-copy_iaoq_entry(ctx, cpu_iaoq_f, -1, tmp);
-tcg_gen_addi_i64(tmp, tmp, 4);
-copy_iaoq_entry(ctx, cpu_iaoq_b, -1, tmp);
+install_iaq_entries(ctx, -1, tmp, -1, NULL);
 ctx->base.is_jmp = DISAS_IAQ_N_UPDATED;
 break;
 
@@ -2744,8 +2750,8 @@ static bool trans_or(DisasContext *ctx, arg_rrr_cf_d *a)
 nullify_over(ctx);
 
 /* Advance the instruction queue.  */
-copy_iaoq_entry(ctx, cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
-copy_iaoq_entry(ctx, cpu_iaoq_b, ctx->iaoq_n, ctx->iaoq_n_var);
+install_iaq_entries(ctx, ctx->iaoq_b, cpu_iaoq_b,
+ctx->iaoq_n, ctx->iaoq_n_var);
 nullify_set(ctx, 0);
 
 /* Tell the qemu main loop to halt until this cpu has work.  */
@@ -3898,18 +3904,15 @@ static bool trans_be(DisasContext *ctx, arg_be *a)
 tcg_gen_mov_i64(cpu_sr[0], cpu_iasq_b);
 }
 if (a->n && use_nullify_skip(ctx)) {
-copy_iaoq_entry(ctx, cpu_iaoq_f, -1, tmp);
-tcg_gen_addi_i64(tmp, tmp, 4);
-copy_iaoq_entry

[PATCH v2 02/45] target/hppa: Use hppa_form_gva_psw in hppa_cpu_get_pc

2024-05-13 Thread Richard Henderson
This function is for log_pc(), which needs to produce a
similar result to cpu_get_tb_cpu_state().

Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 582036b31e..be8c558014 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -38,9 +38,10 @@ static void hppa_cpu_set_pc(CPUState *cs, vaddr value)
 
 static vaddr hppa_cpu_get_pc(CPUState *cs)
 {
-HPPACPU *cpu = HPPA_CPU(cs);
+CPUHPPAState *env = cpu_env(cs);
 
-return cpu->env.iaoq_f;
+return hppa_form_gva_psw(env->psw, (env->psw & PSW_C ? env->iasq_f : 0),
+ env->iaoq_f & -4);
 }
 
 void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
@@ -61,8 +62,7 @@ void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
 flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
 flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
 
-*pc = hppa_form_gva_psw(env->psw, (env->psw & PSW_C ? env->iasq_f : 0),
-env->iaoq_f & -4);
+*pc = hppa_cpu_get_pc(env_cpu(env));
 *cs_base = env->iasq_f;
 
 /* Insert a difference between IAOQ_B and IAOQ_F within the otherwise zero
-- 
2.34.1




[PATCH v2 05/45] target/hppa: Allow prior nullification in do_ibranch

2024-05-13 Thread Richard Henderson
Simplify the function by not attempting a conditional move
on the branch destination -- just use nullify_over normally.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 73 +++--
 1 file changed, 20 insertions(+), 53 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 4c42b518c5..140dfb747a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1871,17 +1871,15 @@ static bool do_cbranch(DisasContext *ctx, int64_t disp, 
bool is_n,
 static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
unsigned link, bool is_n)
 {
-TCGv_i64 a0, a1, next, tmp;
-TCGCond c;
+TCGv_i64 next;
 
-assert(ctx->null_lab == NULL);
+if (ctx->null_cond.c == TCG_COND_NEVER && ctx->null_lab == NULL) {
+next = tcg_temp_new_i64();
+tcg_gen_mov_i64(next, dest);
 
-if (ctx->null_cond.c == TCG_COND_NEVER) {
 if (link != 0) {
 copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
 }
-next = tcg_temp_new_i64();
-tcg_gen_mov_i64(next, dest);
 if (is_n) {
 if (use_nullify_skip(ctx)) {
 copy_iaoq_entry(ctx, cpu_iaoq_f, -1, next);
@@ -1895,60 +1893,29 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 dest,
 }
 ctx->iaoq_n = -1;
 ctx->iaoq_n_var = next;
-} else if (is_n && use_nullify_skip(ctx)) {
-/* The (conditional) branch, B, nullifies the next insn, N,
-   and we're allowed to skip execution N (no single-step or
-   tracepoint in effect).  Since the goto_ptr that we must use
-   for the indirect branch consumes no special resources, we
-   can (conditionally) skip B and continue execution.  */
-/* The use_nullify_skip test implies we have a known control path.  */
-tcg_debug_assert(ctx->iaoq_b != -1);
-tcg_debug_assert(ctx->iaoq_n != -1);
+return true;
+}
 
-/* We do have to handle the non-local temporary, DEST, before
-   branching.  Since IOAQ_F is not really live at this point, we
-   can simply store DEST optimistically.  Similarly with IAOQ_B.  */
+nullify_over(ctx);
+
+if (is_n && use_nullify_skip(ctx)) {
 copy_iaoq_entry(ctx, cpu_iaoq_f, -1, dest);
 next = tcg_temp_new_i64();
 tcg_gen_addi_i64(next, dest, 4);
 copy_iaoq_entry(ctx, cpu_iaoq_b, -1, next);
-
-nullify_over(ctx);
-if (link != 0) {
-copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
-}
-tcg_gen_lookup_and_goto_ptr();
-return nullify_end(ctx);
+nullify_set(ctx, 0);
 } else {
-c = ctx->null_cond.c;
-a0 = ctx->null_cond.a0;
-a1 = ctx->null_cond.a1;
-
-tmp = tcg_temp_new_i64();
-next = tcg_temp_new_i64();
-
-copy_iaoq_entry(ctx, tmp, ctx->iaoq_n, ctx->iaoq_n_var);
-tcg_gen_movcond_i64(c, next, a0, a1, tmp, dest);
-ctx->iaoq_n = -1;
-ctx->iaoq_n_var = next;
-
-if (link != 0) {
-tcg_gen_movcond_i64(c, cpu_gr[link], a0, a1, cpu_gr[link], tmp);
-}
-
-if (is_n) {
-/* The branch nullifies the next insn, which means the state of N
-   after the branch is the inverse of the state of N that applied
-   to the branch.  */
-tcg_gen_setcond_i64(tcg_invert_cond(c), cpu_psw_n, a0, a1);
-cond_free(&ctx->null_cond);
-ctx->null_cond = cond_make_n();
-ctx->psw_n_nonzero = true;
-} else {
-cond_free(&ctx->null_cond);
-}
+copy_iaoq_entry(ctx, cpu_iaoq_f, ctx->iaoq_b, cpu_iaoq_b);
+copy_iaoq_entry(ctx, cpu_iaoq_b, -1, dest);
+nullify_set(ctx, is_n);
 }
-return true;
+if (link != 0) {
+copy_iaoq_entry(ctx, cpu_gr[link], ctx->iaoq_n, ctx->iaoq_n_var);
+}
+
+tcg_gen_lookup_and_goto_ptr();
+ctx->base.is_jmp = DISAS_NORETURN;
+return nullify_end(ctx);
 }
 
 /* Implement
-- 
2.34.1




[PATCH v2 01/45] target/hppa: Move cpu_get_tb_cpu_state out of line

2024-05-13 Thread Richard Henderson
Signed-off-by: Richard Henderson 
---
 target/hppa/cpu.h | 43 ++-
 target/hppa/cpu.c | 42 ++
 2 files changed, 44 insertions(+), 41 deletions(-)

diff --git a/target/hppa/cpu.h b/target/hppa/cpu.h
index fb2e4c4a98..61f1353133 100644
--- a/target/hppa/cpu.h
+++ b/target/hppa/cpu.h
@@ -314,47 +314,8 @@ hwaddr hppa_abs_to_phys_pa2_w1(vaddr addr);
 #define TB_FLAG_PRIV_SHIFT  8
 #define TB_FLAG_UNALIGN 0x400
 
-static inline void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
-uint64_t *cs_base, uint32_t *pflags)
-{
-uint32_t flags = env->psw_n * PSW_N;
-
-/* TB lookup assumes that PC contains the complete virtual address.
-   If we leave space+offset separate, we'll get ITLB misses to an
-   incomplete virtual address.  This also means that we must separate
-   out current cpu privilege from the low bits of IAOQ_F.  */
-#ifdef CONFIG_USER_ONLY
-*pc = env->iaoq_f & -4;
-*cs_base = env->iaoq_b & -4;
-flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
-#else
-/* ??? E, T, H, L, B bits need to be here, when implemented.  */
-flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
-flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
-
-*pc = hppa_form_gva_psw(env->psw, (env->psw & PSW_C ? env->iasq_f : 0),
-env->iaoq_f & -4);
-*cs_base = env->iasq_f;
-
-/* Insert a difference between IAOQ_B and IAOQ_F within the otherwise zero
-   low 32-bits of CS_BASE.  This will succeed for all direct branches,
-   which is the primary case we care about -- using goto_tb within a page.
-   Failure is indicated by a zero difference.  */
-if (env->iasq_f == env->iasq_b) {
-target_long diff = env->iaoq_b - env->iaoq_f;
-if (diff == (int32_t)diff) {
-*cs_base |= (uint32_t)diff;
-}
-}
-if ((env->sr[4] == env->sr[5])
-& (env->sr[4] == env->sr[6])
-& (env->sr[4] == env->sr[7])) {
-flags |= TB_FLAG_SR_SAME;
-}
-#endif
-
-*pflags = flags;
-}
+void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
+  uint64_t *cs_base, uint32_t *pflags);
 
 target_ulong cpu_hppa_get_psw(CPUHPPAState *env);
 void cpu_hppa_put_psw(CPUHPPAState *env, target_ulong);
diff --git a/target/hppa/cpu.c b/target/hppa/cpu.c
index 393a81988d..582036b31e 100644
--- a/target/hppa/cpu.c
+++ b/target/hppa/cpu.c
@@ -43,6 +43,48 @@ static vaddr hppa_cpu_get_pc(CPUState *cs)
 return cpu->env.iaoq_f;
 }
 
+void cpu_get_tb_cpu_state(CPUHPPAState *env, vaddr *pc,
+  uint64_t *cs_base, uint32_t *pflags)
+{
+uint32_t flags = env->psw_n * PSW_N;
+
+/* TB lookup assumes that PC contains the complete virtual address.
+   If we leave space+offset separate, we'll get ITLB misses to an
+   incomplete virtual address.  This also means that we must separate
+   out current cpu privilege from the low bits of IAOQ_F.  */
+#ifdef CONFIG_USER_ONLY
+*pc = env->iaoq_f & -4;
+*cs_base = env->iaoq_b & -4;
+flags |= TB_FLAG_UNALIGN * !env_cpu(env)->prctl_unalign_sigbus;
+#else
+/* ??? E, T, H, L, B bits need to be here, when implemented.  */
+flags |= env->psw & (PSW_W | PSW_C | PSW_D | PSW_P);
+flags |= (env->iaoq_f & 3) << TB_FLAG_PRIV_SHIFT;
+
+*pc = hppa_form_gva_psw(env->psw, (env->psw & PSW_C ? env->iasq_f : 0),
+env->iaoq_f & -4);
+*cs_base = env->iasq_f;
+
+/* Insert a difference between IAOQ_B and IAOQ_F within the otherwise zero
+   low 32-bits of CS_BASE.  This will succeed for all direct branches,
+   which is the primary case we care about -- using goto_tb within a page.
+   Failure is indicated by a zero difference.  */
+if (env->iasq_f == env->iasq_b) {
+target_long diff = env->iaoq_b - env->iaoq_f;
+if (diff == (int32_t)diff) {
+*cs_base |= (uint32_t)diff;
+}
+}
+if ((env->sr[4] == env->sr[5])
+& (env->sr[4] == env->sr[6])
+& (env->sr[4] == env->sr[7])) {
+flags |= TB_FLAG_SR_SAME;
+}
+#endif
+
+*pflags = flags;
+}
+
 static void hppa_cpu_synchronize_from_tb(CPUState *cs,
  const TranslationBlock *tb)
 {
-- 
2.34.1




[PATCH v2 17/45] target/hppa: Introduce and use DisasIAQE for branch management

2024-05-13 Thread Richard Henderson
Wrap offset and space together in one structure, ensuring
that they're copied together as required.
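
As a small illustration (iaqe_incr is defined in the diff below), advancing
the queue now moves space and displacement as one value, so the space half
cannot be dropped by accident:

    /* Same space as IAQ_Back, displacement advanced by 4. */
    DisasIAQE next = iaqe_incr(&ctx->iaq_b, 4);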

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 378 +---
 1 file changed, 198 insertions(+), 180 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index f267de14c6..4e2e35f9cc 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -42,21 +42,23 @@ typedef struct DisasCond {
 TCGv_i64 a0, a1;
 } DisasCond;
 
+typedef struct DisasIAQE {
+/* IASQ; may be null for no change from TB. */
+TCGv_i64 space;
+/* IAOQ base; may be null for immediate absolute address. */
+TCGv_i64 base;
+/* IAOQ addend; absolute immediate address if base is null. */
+int64_t disp;
+} DisasIAQE;
+
 typedef struct DisasContext {
 DisasContextBase base;
 CPUState *cs;
 
-uint64_t iaoq_f;
-uint64_t iaoq_b;
-uint64_t iaoq_n;
-TCGv_i64 iaoq_n_var;
-/*
- * Null when IASQ_Back unchanged from IASQ_Front,
- * or cpu_iasq_b, when IASQ_Back has been changed.
- */
-TCGv_i64 iasq_b;
-/* Null when IASQ_Next unchanged from IASQ_Back, or set by branch. */
-TCGv_i64 iasq_n;
+/* IAQ_Front, IAQ_Back. */
+DisasIAQE iaq_f, iaq_b;
+/* IAQ_Next, for jumps, otherwise null for simple advance. */
+DisasIAQE iaq_j, *iaq_n;
 
 DisasCond null_cond;
 TCGLabel *null_lab;
@@ -602,49 +604,67 @@ static bool nullify_end(DisasContext *ctx)
 return true;
 }
 
+static bool iaqe_variable(const DisasIAQE *e)
+{
+return e->base || e->space;
+}
+
+static DisasIAQE iaqe_incr(const DisasIAQE *e, int64_t disp)
+{
+return (DisasIAQE){
+.space = e->space,
+.base = e->base,
+.disp = e->disp + disp,
+};
+}
+
+static DisasIAQE iaqe_branchi(DisasContext *ctx, int64_t disp)
+{
+return (DisasIAQE){
+.space = ctx->iaq_b.space,
+.disp = ctx->iaq_f.disp + 8 + disp,
+};
+}
+
+static DisasIAQE iaqe_next_absv(DisasContext *ctx, TCGv_i64 var)
+{
+return (DisasIAQE){
+.space = ctx->iaq_b.space,
+.base = var,
+};
+}
+
 static void copy_iaoq_entry(DisasContext *ctx, TCGv_i64 dest,
-uint64_t ival, TCGv_i64 vval)
+const DisasIAQE *src)
 {
 uint64_t mask = gva_offset_mask(ctx->tb_flags);
 
-if (ival != -1) {
-tcg_gen_movi_i64(dest, ival & mask);
-return;
-}
-tcg_debug_assert(vval != NULL);
-
-/*
- * We know that the IAOQ is already properly masked.
- * This optimization is primarily for "iaoq_f = iaoq_b".
- */
-if (vval == cpu_iaoq_f || vval == cpu_iaoq_b) {
-tcg_gen_mov_i64(dest, vval);
+if (src->base == NULL) {
+tcg_gen_movi_i64(dest, src->disp & mask);
+} else if (src->disp == 0) {
+tcg_gen_andi_i64(dest, src->base, mask);
 } else {
-tcg_gen_andi_i64(dest, vval, mask);
+tcg_gen_addi_i64(dest, src->base, src->disp);
+tcg_gen_andi_i64(dest, dest, mask);
 }
 }
 
-static void install_iaq_entries(DisasContext *ctx,
-uint64_t bi, TCGv_i64 bv, TCGv_i64 bs,
-uint64_t ni, TCGv_i64 nv, TCGv_i64 ns)
+static void install_iaq_entries(DisasContext *ctx, const DisasIAQE *f,
+const DisasIAQE *b)
 {
-copy_iaoq_entry(ctx, cpu_iaoq_f, bi, bv);
+DisasIAQE b_next;
 
-/* Allow ni variable, with nv null, to indicate a trivial advance. */
-if (ni != -1 || nv) {
-copy_iaoq_entry(ctx, cpu_iaoq_b, ni, nv);
-} else if (bi != -1) {
-copy_iaoq_entry(ctx, cpu_iaoq_b, bi + 4, NULL);
-} else {
-tcg_gen_addi_i64(cpu_iaoq_b, cpu_iaoq_f, 4);
-tcg_gen_andi_i64(cpu_iaoq_b, cpu_iaoq_b,
- gva_offset_mask(ctx->tb_flags));
+if (b == NULL) {
+b_next = iaqe_incr(f, 4);
+b = _next;
 }
-if (bs) {
-tcg_gen_mov_i64(cpu_iasq_f, bs);
+copy_iaoq_entry(ctx, cpu_iaoq_f, f);
+copy_iaoq_entry(ctx, cpu_iaoq_b, b);
+if (f->space) {
+tcg_gen_mov_i64(cpu_iasq_f, f->space);
 }
-if (ns || bs) {
-tcg_gen_mov_i64(cpu_iasq_b, ns ? ns : bs);
+if (b->space || f->space) {
+tcg_gen_mov_i64(cpu_iasq_b, b->space ? : f->space);
 }
 }
 
@@ -652,10 +672,11 @@ static void install_link(DisasContext *ctx, unsigned 
link, bool with_sr0)
 {
 tcg_debug_assert(ctx->null_cond.c == TCG_COND_NEVER);
 if (link) {
-if (ctx->iaoq_b == -1) {
-tcg_gen_addi_i64(cpu_gr[link], cpu_iaoq_b, 4);
+if (ctx->iaq_b.base) {
+tcg_gen_addi_i64(cpu_gr[link], ctx->iaq_b.base,
+ ctx->iaq_b.disp + 4);
 } else {
-tcg_gen_movi_i64(cpu_gr[link], ctx->iaoq_b + 4);
+tcg_gen_movi_i64(cpu_gr

[PATCH v2 16/45] target/hppa: Always make a copy in do_ibranch_priv

2024-05-13 Thread Richard Henderson
This simplifies callers, which might otherwise have
to make another copy.

Signed-off-by: Richard Henderson 
---
 target/hppa/translate.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 22935f4645..f267de14c6 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1967,18 +1967,17 @@ static bool do_ibranch(DisasContext *ctx, TCGv_i64 
dest, TCGv_i64 dspc,
  */
 static TCGv_i64 do_ibranch_priv(DisasContext *ctx, TCGv_i64 offset)
 {
-TCGv_i64 dest;
+TCGv_i64 dest = tcg_temp_new_i64();
 switch (ctx->privilege) {
 case 0:
 /* Privilege 0 is maximum and is allowed to decrease.  */
-return offset;
+tcg_gen_mov_i64(dest, offset);
+break;
 case 3:
 /* Privilege 3 is minimum and is never allowed to increase.  */
-dest = tcg_temp_new_i64();
 tcg_gen_ori_i64(dest, offset, 3);
 break;
 default:
-dest = tcg_temp_new_i64();
 tcg_gen_andi_i64(dest, offset, -4);
 tcg_gen_ori_i64(dest, dest, ctx->privilege);
 tcg_gen_umax_i64(dest, dest, offset);
-- 
2.34.1




[PATCH v2 00/45] target/hppa: Misc improvements

2024-05-13 Thread Richard Henderson
Most of the patches lead up to implementing CF_PCREL.
Along the way there is a grab bag of code updates (TCG_COND_TST*),
bug fixes (space changes during branch-in-branch-delay-slot),
and implementation of features (PSW bits B, X, T, H, L).

Sven reported that PSW L tripped up HP/UX, so possibly there's
something wrong there, but that's right at the end of the patch set.
So I'd like some feedback on the rest leading up to that too.

Changes for v2:
  - Rebase and update for tcg_cflags_set.


r~


Richard Henderson (45):
  target/hppa: Move cpu_get_tb_cpu_state out of line
  target/hppa: Use hppa_form_gva_psw in hppa_cpu_get_pc
  target/hppa: Move constant destination check into use_goto_tb
  target/hppa: Pass displacement to do_dbranch
  target/hppa: Allow prior nullification in do_ibranch
  target/hppa: Use CF_BP_PAGE instead of cpu_breakpoint_test
  target/hppa: Add install_iaq_entries
  target/hppa: Add install_link
  target/hppa: Delay computation of IAQ_Next
  target/hppa: Skip nullified insns in unconditional dbranch path
  target/hppa: Simplify TB end
  target/hppa: Add IASQ entries to DisasContext
  target/hppa: Add space arguments to install_iaq_entries
  target/hppa: Add space argument to do_ibranch
  target/hppa: Use umax in do_ibranch_priv
  target/hppa: Always make a copy in do_ibranch_priv
  target/hppa: Introduce and use DisasIAQE for branch management
  target/hppa: Use displacements in DisasIAQE
  target/hppa: Rename cond_make_* helpers
  target/hppa: Use TCG_COND_TST* in do_cond
  target/hppa: Use TCG_COND_TST* in do_log_cond
  target/hppa: Use TCG_COND_TST* in do_unit_zero_cond
  target/hppa: Use TCG_COND_TST* in do_unit_addsub
  target/hppa: Use TCG_COND_TST* in trans_bb_imm
  target/hppa: Use registerfields.h for FPSR
  target/hppa: Use TCG_COND_TST* in trans_ftest
  target/hppa: Remove cond_free
  target/hppa: Introduce DisasDelayException
  target/hppa: Use delay_excp for conditional traps
  target/hppa: Use delay_excp for conditional trap on overflow
  linux-user/hppa: Force all code addresses to PRIV_USER
  target/hppa: Store full iaoq_f and page offset of iaoq_b in TB
  target/hppa: Do not mask in copy_iaoq_entry
  target/hppa: Improve hppa_cpu_dump_state
  target/hppa: Split PSW X and B into their own field
  target/hppa: Manage PSW_X and PSW_B in translator
  target/hppa: Implement PSW_B
  target/hppa: Implement PSW_X
  target/hppa: Drop tlb_entry return from hppa_get_physical_address
  target/hppa: Adjust priv for B,GATE at runtime
  target/hppa: Implement CF_PCREL
  target/hppa: Implement PSW_T
  target/hppa: Implement PSW_H, PSW_L
  target/hppa: Log cpu state at interrupt
  target/hppa: Log cpu state on return-from-interrupt

 linux-user/hppa/target_cpu.h |4 +-
 target/hppa/cpu.h|   80 +--
 target/hppa/helper.h |3 +-
 linux-user/elfload.c |4 +-
 linux-user/hppa/cpu_loop.c   |   14 +-
 linux-user/hppa/signal.c |6 +-
 target/hppa/cpu.c|   92 ++-
 target/hppa/fpu_helper.c |   26 +-
 target/hppa/gdbstub.c|6 +
 target/hppa/helper.c |   66 +-
 target/hppa/int_helper.c |   33 +-
 target/hppa/mem_helper.c |   99 +--
 target/hppa/op_helper.c  |   17 +-
 target/hppa/sys_helper.c |   12 +
 target/hppa/translate.c  | 1232 ++
 15 files changed, 947 insertions(+), 747 deletions(-)

-- 
2.34.1




Kubernetes runners down?

2024-05-13 Thread Richard Henderson

Our gitlab ci has fallen over this weekend, not connecting to k8s.

https://gitlab.com/qemu-project/qemu/-/pipelines/1287356573


r~



[VOTE] Apache StormCrawler (Incubating) 3.0 Release Candidate 2

2024-05-13 Thread Richard Zowalla
Hi folks,

I have posted a release candidate for the Apache StormCrawler
(Incubating) 3.0 release and it is ready for testing.

This is our first release after joining the ASF incubator as a
podling. It is a breaking change with renamings in the group ids and
the removal of the elasticsearch module.

Thank you to everyone who contributed to this release, including all of
our users and the people who submitted bug reports,
contributed code or documentation enhancements.

The release was made using the Apache StormCrawler (Incubating) release
process, documented here:
https://github.com/apache/incubator-stormcrawler/RELEASING.md

Here is the VOTE and VOTE Result on our dev@stormcrawler.a.o list

- https://lists.apache.org/thread/l66fyzf9ly7twmfz1vjw1x3sfl0jkzd3
- https://lists.apache.org/thread/19txfl2ykmjzot8nb1mx6krh8ogzjhkp


Source:

https://dist.apache.org/repos/dist/dev/incubator/stormcrawler/stormcrawler-3.0/


Tag:

https://github.com/apache/incubator-stormcrawler/releases/tag/stormcrawler-3.0

Maven Repo:

https://repository.apache.org/content/repositories/orgapachestormcrawler-1001/



<repository>
  <id>stormcrawler-3.0-rc2</id>
  <name>Testing StormCrawler 3.0 release candidate</name>
  <url>https://repository.apache.org/content/repositories/orgapachestormcrawler-1001/</url>
</repository>





Preview of website:

https://stormcrawler.staged.apache.org/download/index.html

Release notes:

https://github.com/apache/incubator-stormcrawler/releases/tag/stormcrawler-3.0

Reminder: The up-2-date KEYS file for signature verification can be
found here:
https://dist.apache.org/repos/dist/release/incubator/stormcrawler/KEYS

Please vote on releasing these packages as Apache StormCrawler
(Incubating) 3.0

The vote is open for at least the next 72 hours.

Only votes from IPMC members are binding, but everyone is
welcome to check the release candidate and vote.

The vote passes if at least three binding +1 votes are cast.

Please VOTE

[+1] go ship it
[+0] meh, don't care
[-1] stop, there is a ${showstopper}

Please include your checklist in your vote:
https://cwiki.apache.org/confluence/display/INCUBATOR/Incubator+Release+Checklist


Thanks!
Richard



signature.asc
Description: This is a digitally signed message part


Re: [PATCH] Don't reduce estimated unrolled size for innermost loop.

2024-05-13 Thread Richard Biener
dge edge_to_cancel,
>
>  static unsigned HOST_WIDE_INT
>  estimated_unrolled_size (struct loop_size *size,
> -unsigned HOST_WIDE_INT nunroll)
> +unsigned HOST_WIDE_INT nunroll,
> +enum unroll_level ul,
> +class loop* loop)
>  {
>HOST_WIDE_INT unr_insns = ((nunroll)
>  * (HOST_WIDE_INT) (size->overall
> @@ -453,7 +455,15 @@ estimated_unrolled_size (struct loop_size *size,
>  unr_insns = 0;
>unr_insns += size->last_iteration - 
> size->last_iteration_eliminated_by_peeling;
>
> -  unr_insns = unr_insns * 2 / 3;
> +  /* For innermost loop, loop body is not likely to be simplied as much as 
> 1/3.
> + and may increase a lot of register pressure.
> + UL != UL_ALL is need to unroll small loop at O2.  */
> +  class loop *loop_father = loop_outer (loop);
> +  if (loop->inner || !loop_father

Do we ever get here for !loop_father?  We shouldn't.

> +  || loop_father->latch == EXIT_BLOCK_PTR_FOR_FN (cfun)

This means you exempt all loops that are direct children of the loop
root tree.  That doesn't make much sense.

> +  || ul != UL_ALL)

This is also quite odd - we're being more optimistic for UL_NO_GROWTH
than for UL_ALL?  This doesn't make much sense.

Overall I think this means removal of being optimistic doesn't work so well?

If we need some extra leeway for UL_NO_GROWTH for what we expect
to unroll it might be better to add sth like --param
nogrowth-completely-peeled-insns
specifying a fixed surplus size?  Or we need to look at what's the problem
with the testcases regressing or the one you are trying to fix.
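
A minimal sketch of that idea -- purely illustrative, the param below does
not exist and the accessor name is made up -- giving UL_NO_GROWTH a fixed
surplus instead of the blanket 2/3 scaling:

  /* Hypothetical --param=nogrowth-completely-peeled-insns=N: assume up to
     N insns are removed by later cleanup when deciding UL_NO_GROWTH.  */
  if (ul == UL_NO_GROWTH)
    {
      unsigned HOST_WIDE_INT surplus = param_nogrowth_completely_peeled_insns;
      unr_insns = unr_insns > surplus ? unr_insns - surplus : 1;
    }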

I did experiment with better estimating cleanup done at some point
(see attached),
but didn't get to finishing that (and as said, as we're running VN on the result
we'd ideally do that as part of the estimation somehow).

Richard.

> +unr_insns = unr_insns * 2 / 3;
> +
>if (unr_insns <= 0)
>  unr_insns = 1;
>
> @@ -837,7 +847,7 @@ try_unroll_loop_completely (class loop *loop,
>
>   unsigned HOST_WIDE_INT ninsns = size.overall;
>   unsigned HOST_WIDE_INT unr_insns
> -   = estimated_unrolled_size (, n_unroll);
> -   = estimated_unrolled_size (&size, n_unroll);
> +   = estimated_unrolled_size (&size, n_unroll, ul, loop);
> {
>   fprintf (dump_file, "  Loop size: %d\n", (int) ninsns);
> --
> 2.31.1
>


p
Description: Binary data


[VOTE] [RESULT] Apache StormCrawler (Incubating) 3.0 Release Candidate 2

2024-05-13 Thread Richard Zowalla
Hi all,

this vote passes with the following +1 being cast:


+1 Tim Allison (binding, PPMC & IPMC)
+1 Josh Fischer 
+1 Dave Fisher (binding, mentor & IPMC)
+1 Julien Nioche (PPMC)
+1 Richard Zowalla (binding, PPMC & IPMC)
+1 Ayush Saxena (binding, mentor & IPMC)

I will move the vote to the general@ list now.

Thanks for your votes.
Richard


Re: [VOTE] Apache StormCrawler (Incubating) 3.0 Release Candidate 2

2024-05-13 Thread Richard Zowalla
Here is my own +1 (binding)

I did the following:

- Build from source
- Build from tag
- Used artifacts from the staging repository in our SC-based project
- Conducted a crawl (mostly with OpenSearch as a backend) as a local
cluster, a docker compose cluster setup and on a k8s cluster setup.
- Checked checksums + asc signatures
- Re-checked licenses.

Regards
Richard

Am Dienstag, dem 07.05.2024 um 11:12 +0200 schrieb Richard Zowalla:
> Hi folks,
> 
> I have posted a 2nd release candidate for the Apache StormCrawler
> (Incubating) 3.0 release and it is ready for testing. 
> 
> The previous VOTE was cancelled because building from source (without
> an initialized git repo) wasn't possible.
> 
> This is our first release after joining the ASF incubator as a
> podling. It is a breaking change with renamings in the group ids and
> the removal of the elasticsearch module.
> 
> Thank you to everyone who contributed to this release, including all
> of
> our users and the people who submitted bug reports,
> contributed code or documentation enhancements.
> 
> The release was made using the Apache StormCrawler (Incubating)
> release
> process, documented here:
> https://github.com/apache/incubator-stormcrawler/RELEASING.md
> 
> Maven Repo:
> https://repository.apache.org/content/repositories/orgapachestormcrawler-1001/
> 
> 
> 
> <repository>
>   <id>stormcrawler-3.0-rc1</id>
>   <name>Testing StormCrawler 3.0 release candidate</name>
>   <url>https://repository.apache.org/content/repositories/orgapachestormcrawler-1001/</url>
> </repository>
> 
> 
> 
> 
> Source:
> 
> https://dist.apache.org/repos/dist/dev/incubator/stormcrawler/stormcrawler-3.0/
> 
> 
> Tag:
> 
> 
> https://github.com/apache/incubator-stormcrawler/releases/tag/stormcrawler-3.0
> 
> Preview of website:
> 
> https://stormcrawler.staged.apache.org/download/index.html
> 
> Release notes:
> 
> https://github.com/apache/incubator-stormcrawler/releases/tag/stormcrawler-3.0
> 
> Reminder: The up-2-date KEYS file for signature verification can be
> found here:
> https://dist.apache.org/repos/dist/release/incubator/stormcrawler/KEYS
> 
> Please vote on releasing these packages as Apache StormCrawler
> (Incubating) 3.0
> 
> The vote is open for at least the next 72 hours.
> 
> Only votes from IPMC members are binding, but everyone on the PPMC is
> welcome to check the release candidate and vote.
> 
> The vote passes if at least three binding +1 votes are cast.
> 
> Please VOTE
> 
> [+1] go ship it
> [+0] meh, don't care
> [-1] stop, there is a ${showstopper}
> 
> Please include your checklist in your vote:
> https://cwiki.apache.org/confluence/display/INCUBATOR/Incubator+Release+Checklist
> 
> Note: After this VOTE passes on our dev@ list, the VOTE will be
> brought
> to general@ in order to get the necessary IPMC votes.
> 
> Thanks!
> Richard
> 



Re: [PATCH 1/4] rs6000: Make all 128 bit scalar FP modes have 128 bit precision [PR112993]

2024-05-13 Thread Richard Biener
On Mon, May 13, 2024 at 3:39 AM Kewen.Lin  wrote:
>
> Hi Joseph and Richi,
>
> Thanks for the suggestions and comments!
>
> on 2024/5/10 14:31, Richard Biener wrote:
> > On Thu, May 9, 2024 at 9:12 PM Joseph Myers  wrote:
> >>
> >> On Wed, 8 May 2024, Kewen.Lin wrote:
> >>
> >>> to widen IFmode to TFmode.  To make build_common_tree_nodes
> >>> be able to find the correct mode for long double type node,
> >>> it introduces one hook mode_for_longdouble to offer target
> >>> a way to specify the mode used for long double type node.
> >>
> >> I don't really like layering a hook on top of the old target macro as a
> >> way to address a deficiency in the design of that target macro (floating
> >> types should have their mode, not a poorly defined precision value,
> >> specified directly by the target).
>
> Good point!
>
> >
> > Seconded.
> >
> >> A better hook design might be something like mode_for_floating_type (enum
> >> tree_index), where the argument is TI_FLOAT_TYPE, TI_DOUBLE_TYPE or
> >> TI_LONG_DOUBLE_TYPE, replacing all definitions and uses of
> >> FLOAT_TYPE_SIZE, DOUBLE_TYPE_SIZE and LONG_DOUBLE_TYPE_SIZE with the
> >> single new hook and appropriate definitions for each target (with a
> >> default definition that uses SFmode for float and DFmode for double and
> >> long double, which would be suitable for many targets).
> >
>
> The originally proposed hook was meant to make the other ports unaffected,
> but I agree that introducing such hook would be more clear.
>
> > In fact replacing all of X_TYPE_SIZE with a single hook might be worthwhile
> > though this removes the "convenient" defaulting, requiring each target to
> > enumerate all standard C ABI type modes.  But that might be also a good 
> > thing.
> >
>
> I guess the main value by extending from floating point types to all is to
> unify them?  (Assuming that excepting for floating types the others would
> not have multiple possible representations like what we faces on 128bit fp).
>
> > The most pragmatic solution would be to do
> > s/LONG_DOUBLE_TYPE_SIZE/LONG_DOUBLE_TYPE_MODE/
>
> Yeah, this beats my proposed hook (assuming the default is VOIDmode too).
>
> So it seems we have three alternatives here:
>   1) s/LONG_DOUBLE_TYPE_SIZE/LONG_DOUBLE_TYPE_MODE/
>   2) mode_for_floating_type
>   3) mode_for_abi_type
>
> Since 1) would make long double type special (different from the other types
> having _TYPE_SIZE), personally I'm inclined to 3): implement 2) first, get
> this patch series landed, extend to all.
>
> Do you have any preference?

Maybe do 3) but have the default hook implementation look at
*_TYPE_SIZE when the target doesn't implement the hook?  That would
force you to transition rs6000 away from *_TYPE_SIZE completely
but this would also prove the design.
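
A rough sketch of such a default (the hook name and signature are only
assumptions here, following the mode_for_floating_type idea above), falling
back to the existing *_TYPE_SIZE macros when a target does not override it:

static machine_mode
default_mode_for_floating_type (enum tree_index ti)
{
  unsigned prec;
  switch (ti)
    {
    case TI_FLOAT_TYPE:       prec = FLOAT_TYPE_SIZE; break;
    case TI_DOUBLE_TYPE:      prec = DOUBLE_TYPE_SIZE; break;
    case TI_LONG_DOUBLE_TYPE: prec = LONG_DOUBLE_TYPE_SIZE; break;
    default: gcc_unreachable ();
    }
  return mode_for_size (prec, MODE_FLOAT, 0).require ();
}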

Btw, for .c.mode_for_abi_type I'd exclude ADA_LONG_TYPE_SIZE.

Joseph, do you agree with this?  I'd not touch the target macros like
PTRDIFF_TYPE (those evaluating to a string) at this point though
they could be handled with a common target hook as well (not sure
if we'd want to have a unified hook for both?).

Thanks,
Richard.

>
> BR,
> Kewen


[clang] [Clang][Sema] Fix malformed AST for anonymous class access in template. (PR #90842)

2024-05-13 Thread Richard Smith via cfe-commits

https://github.com/zygoloid approved this pull request.

Nothing further on my side, LGTM

https://github.com/llvm/llvm-project/pull/90842
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


Re: How to run automatically a script as soon root login

2024-05-12 Thread Richard
Should be as easy as executing the script from the .profile of root - that
means if "log in as root" actually means root, not just sudo'ing. .profile
will always be read as soon as the user logs in, no matter how. Through a
terminal, a GUI, doesn't matter. No idea if doing this through systemd is
even possible.

Best
Richard

On Mon, May 13, 2024, 04:10 Mario Marietto  wrote:

> Hello to everyone.
>
> I'm using Debian 12. I'm configuring a little Debian 12 vm with qemu that
> I will use to forward the cloudflare connection to FreeBD.
> What I want to do is to run the script below as soon as root has logged
> in.
>
> I've configured the automatic login of root adding to this service file :
>
> nano /etc/systemd/system/getty.target.wants/getty@tty1.service
>
> this line :
>
> ExecStart=/sbin/agetty -o '-p -f -- \\u" --noclear --autologin root %I
> $TERM
>
> Now,what I want to do is that the script below is ran as soon root is able
> to logged in automatically :
>
> /usr/bin/warp
>
> warp-cli disconnect
> echo 1 > /proc/sys/net/ipv4/ip_forward
> iptables -A POSTROUTING -t nat -s 192.168.1.5 -j MASQUERADE
> OLD_IP="$(curl -s api.ipify.org)"
> warp-cli connect
> NEW_IP="$(curl -s api.ipify.org)"
> echo Connected to Cloudflare Warp...
> echo OLD IP is : $OLD_IP,NEW IP is : $NEW_IP
>
> [Forgot to say that I switched boot target to text with this command :
>
> sudo systemctl set-default multi-user.target]
>
> What I tried right now has been to create a respawn service with systemd.
> I created a file in /etc/systemd/system/ i.e. warp.service
>
> [Unit]
> Desription=warp with systemd, respawn
> After=pre-network.target
>
> [Service]
> ExecStart=/usr/bin/warp
> Restart=always
>
> [Install]
> WantedBy=multi-user.target
>
>
> and I've activated it :
>
> systemctl enable warp.service
>
>
> rebooted and started it manually :
>
> systemctl daemon-reload
> systemctl start warp.service
>
> It does not work and anyway it does not seem to be what I want...
>
> [image: Istantanea_2024-05-12_23-46-37.png]
>
> I want that the warp script is run everytime root is logged in,not
> more,not less.
> I suspect that the solution is easier than what I'm trying to do...
>
> --
> Mario.
>


Re: Can't Add Bookshare Books to VoiceDream

2024-05-12 Thread Richard Turner
First Bookshare's response doesn't make a lot of sense to me. Once the book is downloaded, you do not need internet access. So, I'm not sure what they mean. I never had any issue switching to another app, and then coming back to Voice Dream and continuing reading a book. Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/ On May 12, 2024, at 2:27 PM, Jenifer Barr  wrote:When I contacted bookshare about this... they said sometimes books are taken offline due to copyright etc. The skeliton is still there but the book is taken down. (shrug) Sent from my iPhoneOn May 12, 2024, at 5:10 PM, Jody ianuzzi  wrote:Oh I can add books from Bookshare to my voic Voice Dream reader but I have another problem. If I go to a different app and then go back to the Voice Dream reader and hit play nothing happens. I have to remove it from the app switcher and start again and then it's OK.Has anyone else had this problem?JODYTo Boldly Go   thunderwalker...@gmail.com "What's within you is stronger than what's in your way."  NO BARRIERS  Erik WeihenmayerOn May 12, 2024, at 12:28 PM, CC  wrote:Thanx, Richard. A three finger swipe down, refreshed the, recently updated element. There were 12 updates waiting to be integrated. Voicedream version 4.34.3, still offers no download option in bookshare.  On Sunday, May 12, 2024 at 11:02:10 AM UTC-4 Richard Turner wrote:Now, it not showing up in the updates list is very, very strange.That would point to an Apple issue.When I check for updates, I do a triple tap on the app store icon, then swipe to updates and double tap.  Then, touch where it says previously Updated and do a three finger swipe down to refresh the screen.I have found that has been the only reliable way for me to get all updates available to show up consistently. If Voice Dream doesn’t show up then, I think there is a bigger issue with your phone.    Richard, USA"It's no great honor to be blind, but it's more than a nuisance and less than a disaster. Either you're going to fight like hell when your sight fails or you're going to stand on the sidelines for the rest of your life." -- Dr. Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009) My web site: https://www.turner42.com From: vip...@googlegroups.com <vip...@googlegroups.com> On Behalf Of CCSent: Sunday, May 12, 2024 7:47 AMTo: VIPhone <vip...@googlegroups.com>Subject: Re: Can't Add Bookshare Books to VoiceDream Two new problems. Version 4.34.2 still gives no download option and voicedream updates don't appear in new app store updates. It seems I must search for the voicedream app, open the voicedream result, then click on, update button. All other app updates appear as expected. On Sunday, May 12, 2024 at 12:31:21 AM UTC-4 Richard Turner wrote:Well, to test this, I went and turned audio ducking on, but on or off, all audio books play just fine for me. So that may have nothing to do with your issue. But for me, not only do audio books play just fine, but the speed adjustment sticks. I wish I had a good suggestion for what else to try.   Richard, USA“Grandma always told us, “Be careful when you pray for patience. 
God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/  On May 11, 2024, at 9:01 PM, Richard Turner <richard...@comcast.net> wrote:Based on someone else’s post, is audio ducking on?  If so, turn it off and see if it plays.   Richard, USA"It's no great honor to be blind, but it's more than a nuisance and less than a disaster. Either you're going to fight like hell when your sight fails or you're going to stand on the sidelines for the rest of your life." -- Dr. Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009) My web site: https://www.turner42.com From: vip...@googlegroups.com <vip...@googlegroups.com> On Behalf Of CCSent: Saturday, May 11, 2024 4:27 PMTo: VIPhone <vip...@googlegroups.com>Subject: Re: Can't Add Bookshare Books to VoiceDream Now, I am no longer able to play previously downloaded audio books from bookshare in the voicedream app. As soon as I press play, the app closes and I am returned to my homepage. Has anyone else encountered this new problem? On Friday, May 10, 2024 at 6:53:08 AM UTC-4 Richard Turner wrote:All my posts to that list go through. Have you checked your junk folder?  Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/   On May 10, 2024, at 3:18 AM, Kelby Carlson <kelbyc...@gmail.com> wrote:I have not run into the voice error, thankfully. I post to that list, but none of my posts ever see

[BVARC] Satellites -- 2m -- 70cm -- antennas

2024-05-12 Thread Richard Bonica via BVARC
To all...

As I go down the path of satellites and antennas, I have now realized how
much I really don't have a clue.
The things I need to learn about are:
1) Yagis -- how they work beyond the normal parts and basic point and
listen.
2) Satellites, orbits, solar wind and solar interference in general, and
what it takes to track beyond software.
3) Up and down links -- beyond the 2m and 70cm frequencies, and how Doppler
and angle affect them.

Basically -- it looks to me that the magic smoke we work to keep in
electronics is also in the antenna.

Does someone know where I can start to figure this stuff out? I am almost to
the point of calling Mr Daniel and CPT Jack.

Thank you for help in advance.

Richard Bonica
KG5YCU

Brazos Valley Amateur Radio Club

BVARC mailing list
BVARC@bvarc.org
http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
Publicly available archives are available here: 
https://www.mail-archive.com/bvarc@bvarc.org/ 


Re: [BVARC] test ing the ref lector

2024-05-12 Thread Richard Bonica via BVARC
Looks like it is reflecting here. Just not sure I should be reflecting --
some of my reflections could scare someone... LOL.

On Sun, May 12, 2024, 5:03 PM Bob Tomlinson via BVARC 
wrote:

> Looks like it works. 
>
> On Sun, May 12, 2024, 3:22 PM Eddie Runner via BVARC 
> wrote:
>
>> hamiam
>> 
>> Brazos Valley Amateur Radio Club
>>
>> BVARC mailing list
>> BVARC@bvarc.org
>> http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
>> Publicly available archives are available here:
>> https://www.mail-archive.com/bvarc@bvarc.org/
>>
> 
> Brazos Valley Amateur Radio Club
>
> BVARC mailing list
> BVARC@bvarc.org
> http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
> Publicly available archives are available here:
> https://www.mail-archive.com/bvarc@bvarc.org/
>

Brazos Valley Amateur Radio Club

BVARC mailing list
BVARC@bvarc.org
http://mail.bvarc.org/mailman/listinfo/bvarc_bvarc.org
Publicly available archives are available here: 
https://www.mail-archive.com/bvarc@bvarc.org/ 


Bug#625895: logcheck-database: /etc/logcheck/ignore.d.server/dovecot rule misses unusual Message-Id

2024-05-12 Thread Richard Lewis
On Fri, 06 May 2011 11:32:03 -0700 Gerald Turner  wrote:

> Hello, I've seen some legitimate mails with unusual Message-Id headers
> that cause logchecks dovecot delivery rule to be bypassed.
>
> Example: … sieve: msgid=<20110422T2108.GA.(stdi.s...@fsing.rootsland.net>: 
> stored mail into mailbox 'Mailing Lists/Debian/debian-devel'

It's a shame no-one replied since 2011.

That doesn't seem to be a valid msgid, so I'm not sure logcheck should be
ignoring it by default. Obviously you can edit or add your own local rules
to do so (a rough sketch is below).
So I'm not sure there is anything for Debian to do in this one. Perhaps we
should close the bug?
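
For reference, a loosened local rule -- kept in a separate file so it
survives upgrades -- might look something like this. Untested sketch: the
file name is made up, and the timestamp/host prefix has to match whatever
your log format actually is.

  # /etc/logcheck/ignore.d.server/local-dovecot  (illustrative only)
  ^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ dovecot: .* sieve: msgid=.*: stored mail into mailbox '.*'$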



[Libguestfs] Re: downloaded file does not have correct content

2024-05-12 Thread Richard W.M. Jones
On Fri, May 10, 2024 at 08:10:25PM -, tommy.henssc...@fau.de wrote:
> Hello everyone!
> I hope I'm in the right place and that it's okay to ask my question here.
> 
> I'm currently using GuestFS to download and list files from a QEMU image
> (qcow2). For this I use the Python bindings.
> The downloading works so far, but the files often don't have the right
> content.
> 
> An example:
> - VM starts
> - download files, e.g. "/Windows/System32/config/SOFTWARE"
> - do other things in the VM
> - shut down the VM
> - download the same files again, just into a different directory

If the VM is running, then changes made by the VM are not
always written back to the disk immediately.

Libguestfs only sees what's on the disk.

Usually this works fine, but you may see differences between
what the VM "thinks" is the content and what libguestfs
sees (by looking at what is on the disk).

> Often, however, the files downloaded at the different times are identical --
> mind you, even though the VM has been shut down.
> But if I then start a new, independent instance of my Python class
> afterwards, the files are downloaded with the correct content.
> Since I suspect some kind of caching problem, I also unmounted and
> re-mounted the partition before the final download.
> 
> Can anyone explain this to me, or am I doing something wrong?
> I'd be really grateful for any hints!
> 
> Version: 1.52.0 stable
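
For what it's worth, the pattern that avoids stale data is to open the
image read-only only after the guest has been shut down cleanly, e.g. with
guestfish.  A sketch only -- the image path is made up, so adjust it to
your layout:

  # run this only after the VM has been shut down cleanly
  guestfish --ro -a /path/to/image.qcow2 -i \
      download /Windows/System32/config/SOFTWARE ./SOFTWARE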

HTH,

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW
___
Libguestfs mailing list -- guestfs@lists.libguestfs.org
To unsubscribe send an email to guestfs-le...@lists.libguestfs.org


Bug#491127: logcheck: please consider an option which will always check the entire log file

2024-05-12 Thread Richard Lewis
On Sun, 12 May 2024 at 19:57, Marc Haber  wrote:
>
> On Sun, May 12, 2024 at 06:54:59PM +0100, R Lewis wrote:
> > On Wed, 16 Jul 2008 23:15:51 +0200 Marc Haber
> >  wrote:
> >
> > > It would help with debugging to have an option that causes logcheck to
> > > always look through the entire log file, ie not using logtail.
> >
> > Looking at this old bug from 2008: does the -t option meet this need?
>
> Not quite. Imagine: I have a system running logcheck hourly. At 11:00, I
> get a mail with a log message from 10:55 that is not yet covered by the
> rule. The stamp gets updated to 11:00.
>
> I now fix the rule that should have filtered the message. logcheck -t
> will still only start checking a 11:00, so the test run will not prove
> that the change has actually filtered the message from 10:55.
>
> Does this make the usecase clear?

Does logcheck-test help with that?



Re: Can't Add Bookshare Books to VoiceDream

2024-05-12 Thread Richard Turner
This is very odd. I have never lost the download button. Perhaps everyone needs to specify which phone and iOS is being used. That might help the developers figure out what is happening, or if it is an Apple issue. I'm not a programmer, so all this is a mystery to me. I am using an iPhone 13 pro with iOS 17.4.1Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/ On May 12, 2024, at 9:28 AM, CC  wrote:Thanx, Richard. A three finger swipe down, refreshed the, recently updated element. There were 12 updates waiting to be integrated. Voicedream version 4.34.3, still offers no download option in bookshare.  On Sunday, May 12, 2024 at 11:02:10 AM UTC-4 Richard Turner wrote:Now, it not showing up in the updates list is very, very strange.That would point to an Apple issue.When I check for updates, I do a triple tap on the app store icon, then swipe to updates and double tap.  Then, touch where it says previously Updated and do a three finger swipe down to refresh the screen.I have found that has been the only reliable way for me to get all updates available to show up consistently. If Voice Dream doesn’t show up then, I think there is a bigger issue with your phone.Richard, USA"It's no great honor to be blind, but it's more than a nuisance and less than a disaster. Either you're going to fight like hell when your sight fails or you're going to stand on the sidelines for the rest of your life." -- Dr. Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009) My web site: https://www.turner42.com From: vip...@googlegroups.com <vip...@googlegroups.com> On Behalf Of CCSent: Sunday, May 12, 2024 7:47 AMTo: VIPhone <vip...@googlegroups.com>Subject: Re: Can't Add Bookshare Books to VoiceDream Two new problems. Version 4.34.2 still gives no download option and voicedream updates don't appear in new app store updates. It seems I must search for the voicedream app, open the voicedream result, then click on, update button. All other app updates appear as expected. On Sunday, May 12, 2024 at 12:31:21 AM UTC-4 Richard Turner wrote:Well, to test this, I went and turned audio ducking on, but on or off, all audio books play just fine for me. So that may have nothing to do with your issue. But for me, not only do audio books play just fine, but the speed adjustment sticks. I wish I had a good suggestion for what else to try.   Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/  On May 11, 2024, at 9:01 PM, Richard Turner <richard...@comcast.net> wrote:Based on someone else’s post, is audio ducking on?  If so, turn it off and see if it plays.   Richard, USA"It's no great honor to be blind, but it's more than a nuisance and less than a disaster. Either you're going to fight like hell when your sight fails or you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009) My web site: https://www.turner42.com From: vip...@googlegroups.com <vip...@googlegroups.com> On Behalf Of CCSent: Saturday, May 11, 2024 4:27 PMTo: VIPhone <vip...@googlegroups.com>Subject: Re: Can't Add Bookshare Books to VoiceDream Now, I am no longer able to play previously downloaded audio books from bookshare in the voicedream app. As soon as I press play, the app closes and I am returned to my homepage. Has anyone else encountered this new problem? On Friday, May 10, 2024 at 6:53:08 AM UTC-4 Richard Turner wrote:All my posts to that list go through. Have you checked your junk folder?  Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridgeforth My web site: https://www.turner42.com/   On May 10, 2024, at 3:18 AM, Kelby Carlson <kelbyc...@gmail.com> wrote:I have not run into the voice error, thankfully. I post to that list, but none of my posts ever seem to go through; they must not check their moderation filters much. On May 10, 2024, at 5:59 AM, Richard Turner <richard...@comcast.net> wrote:While I still have the download buttons when searching for Bookshare books, the audio speed adjustment is broken on my 13 Pro running iOS 17.4.1. I have reported this through the app. You all really should join the Voice-Dream list the company has started so everything is seen by the company as well as reporting through the Voice Dream app. voice-drea...@groups.io  Richard, USA“Grandma always told us, “Be careful when you pray for patience. God stores it on the other side of Hell and you will have to go through Hell to get it.”-- Cedrick Bridg

Bug#975694: [logcheck-database] stop filtering smartd attribute change events

2024-05-12 Thread Richard Lewis
control: tags -1 + moreinfo
thanks

On Wed, 25 Nov 2020 13:13:14 +0500 Alex Volkov  wrote:

> IDK how it was in 2006 when this stupid decision was made, but nowadays
> `smartd` has all the needed filtering features in itself, in a case someone
> gets "annoyed" by attribute changes. Yeah, sure, it "can send warning mails",
> but by default only in a case of attribute FAILURE, and some Old-age related
> attributes NEVER fail (like Power-On Hours on WD, which counts down to 1 and
> just sits happily there, never getting to "failing" 0). In such cases at least
> seeing them changing (or not) is useful. Overall, it's not the place for a
> *maintainer* to decide (instead of a user) which events of a third-party
> package are annoying and which are not.

Following up on the above bug from 2020 -- we need more information
here: what is the issue that you would like fixed?
You can already (even in 2020) add or remove any logcheck rules
that you like. smartd has its own way to alert you to issues, but
that has nothing to do with logcheck.

In the absence of a clearer statement we would need to close this one.



Bug#862638: logcheck: Please add suricata rules to logcheck

2024-05-12 Thread Richard Lewis
control: tags -1 moreinfo
control: severity -1 wishlist
thanks

On Mon, 15 May 2017 10:42:03 +0200

> I am very happy with logcheck. It works great and is very useful. However,
> it would be nice if you could add a ruleset for suricata (a successor to the
> well-known snort IDS), so I get alerted when something fishy is going on.

It's a shame no-one replied to this bug from 2017 - let's change that now.

>In my case logcheck is run every 30 minutes, so I am very fast aware, when an 
>attack is going on. On the other hand, I found no realtime alert option with 
>suricata. Best way, IMO, would be a ruleset for suricata logs, which do alert 
>me by mail (as logcheck normally do).

Unfortunately more information is needed to act on this.

Is the request to use logcheck to scan the (non-syslog) files created by
suricata? You can definitely do that, but you would need to write your own
rules to ignore things that are not "fishy" (a rough sketch is below).
...but I don't think logcheck-database should ship such rules unless
there is clear demand. It looks like suricata can send its own alerts,
so I'm not sure this is even needed in 2024?

If there are messages produced by suricata in the journal that
logcheck should be filtering, then we need to know what those are.

(In the absence of more information we would likely close this bug as
unactionable.)
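
For anyone who does want to go that route, the rough shape is to point
logcheck at suricata's own text log and then add local ignore rules for the
noise.  Untested sketch -- the file names are illustrative (suricata
typically writes fast.log / eve.json), and the logfiles.d path must match
your logcheck setup:

  # tell logcheck to also scan suricata's alert log
  echo /var/log/suricata/fast.log \
      > /etc/logcheck/logcheck.logfiles.d/suricata.logfiles
  # then put ignore rules for the uninteresting entries under
  # /etc/logcheck/ignore.d.server/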



Bug#735287: logcheck: invent conditional logging

2024-05-12 Thread Richard Lewis
On Tue, 14 Jan 2014 13:33:25 +0100 Arne Wichmann  wrote:

> There is one thing I would like to have in logcheck for quite a long time
> already:
>
> Invent a mechanism by which a pattern is only mailed (or not mailed) if
> another pattern was seen a given time before it (or also possibly after
> it).
>
> For example I would like to make reboots invisible on some machines, but I
> do want to see it if the sshd terminates as long as the machine is not
> rebooting.

Hi - it's a shame no-one replied to this bug in 10 years: let's change that now.

The only realistic way I can see this working is to have some kind of
pre-processing of log entries. I'm thinking you would write a
pre-processor that takes each line in the log
and looks back in the journal (or syslog) for related lines -- I don't
think we'd want to implement that in logcheck, as it would be a whole
other project to write, but we could allow
the user to do it (a rough sketch of the idea is below). There are several
reasons to make logcheck configurable for pre-processing (work on this is
in progress - watch this space!).
You can maybe even today do this with post-processing by writing a
'syslog-summary' script - again this would need the user to write
their own code.

(I think the point in the last paragraph is basically solved by using
systemd, which makes it much easier to restart daemons when they
crash.)

In the absence of other suggestions, I suggest we implement
configurable pre-processing, leave syslog-summary support in place and
close this bug.
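
To make the idea concrete, the pre-processor could be something along these
lines. This is purely a sketch of the concept, not working code: the
"systemd-shutdown" marker and the 10-minute window are made-up examples.

  #!/bin/sh
  # sketch: suppress sshd-termination lines if the machine was rebooting
  while IFS= read -r line; do
      case "$line" in
      *sshd*Terminated*)
          # look back in the journal for a recent shutdown marker
          if journalctl --since "10 minutes ago" -q | grep -q systemd-shutdown; then
              continue    # rebooting: drop this line
          fi
          ;;
      esac
      printf '%s\n' "$line"
  done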



Re: [blind-gamers] questions for home quest players

2024-05-12 Thread Richard Sherman
Hi,
I am pretty good about finding stuff with a Google search. But I only tried
the fleet question and got nothing. I asked my sighted son to see what he
could find. He only found info on what kind of fleet to make, not how to
start one.

If you think the answer is there we will give another look.

Shermanator

-Original Message-
From: Jude DaShiell  
Sent: Sunday, May 12, 2024 5:53 AM

reddit has r.homequest where things like this get answered.


--
 Jude 
 "There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo.
 Please use in that order."
 Ed Howdershelt 1940.

On Sun, 12 May 2024, Richard Sherman wrote:

> HI folks,
> Got home quest a few days ago. Not a bad game for free. Am not a fan of the
new update but oh well. Am running version 4.09. the latest that came down a
few days ago.
>
> My first question for those of you playing this game is fairly simple. How
does one create a fleet of ships? I built a harbor and yet cannot find a way
to build ships.
>
> My 2nd question is how do I make mercury in this latest version? I play
around on the spirit tab and press a button that says poke I think. Then
swipe to the activate all and some times the number decreases, some times it
does not. But in the end its not making mercury.
>
> Thanks in advance.
>
> Shermanator



-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#127303): https://groups.io/g/blind-gamers/message/127303
Mute This Topic: https://groups.io/mt/106051444/21656
Group Owner: blind-gamers+ow...@groups.io
Unsubscribe: 
https://groups.io/g/blind-gamers/leave/607459/21656/1071380848/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Bug#919866: logcheck: Feature request: wildcards in .logfiles pathnames

2024-05-12 Thread Richard Lewis
On Sun, 20 Jan 2019 15:50:55 +0530 Charles Atkinson
 wrote:

> Please consider introducing wildcards into the paths in the .logfiles 
> configuration files.  Perhaps similar to the way they are used in logrotate's 
> paths.

> A use case is when using logcheck to check logs from multiple non-Debian 
> systems such as routers, each having a dedicated directory with the last 
> directory being their FQDN.  The router FQDNs follow a fixed pattern.  
> Currently this requires an /etc/logcheck-for-routers/logcheck.logfile.d file 
> for each router.  If .logfiles supported wildcards, a single 
> /etc/logcheck-for-routers/logcheck.logfile.d file could be used for all 
> routers and there would be one less step in the procedures to commission and 
> to decommission a router.

(It's a shame no-one replied since 2019: let's change that now)

You could instead implement this with a script that updates the
.logfiles file before logcheck runs: if you put this into
logcheck.conf it would work, e.g.:

  echo /var/log/whatever/*.log > /etc/logcheck-for-routers/logcheck.logfiles.d/routers.logfiles

If that would meet your needs we could close this bug - the code to
read logfiles.list is already over-complicated and adding more
features needs a good reason.
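
To spell out the logcheck.conf variant -- a sketch only, with made-up
paths; it assumes logcheck.conf is sourced as shell before the logfile
lists are read:

  # in /etc/logcheck-for-routers/logcheck.conf (sketch)
  echo /var/log/routers/*.example.net/syslog.log \
      > /etc/logcheck-for-routers/logcheck.logfiles.d/routers.logfiles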



[Sheflug] ShefLUG - June 2024 Meeting

2024-05-12 Thread Richard Ibbotson

Hi

https://www.sheflug.org.uk/indexpage/sheflug-meetings-page-2/
https://twitter.com/ShefLUG


The next meeting is on Saturday the 1st of June at Morrisons Store, 
Ecclesfield, Sheffield. Morrisons – 299 The Common, Ecclesfield, 
Sheffield, S35 9AE.  About 1 p.m. to 4 p.m.  See the above web page for 
more information.  When you walk into the store you need to ask where 
the community room is.  Ask a member of staff for that. There is a cafe 
in the store. You can get a full lunch or dinner if you want one.  Also 
coffee and tea and snacks. Parking is free.  The local bus stops just 
outside Morrisons.


--
Richard

___
Sheffield Linux User's Group
http://sheflug.org.uk/mailman/listinfo/sheflug_sheflug.org.uk
FAQ at: http://www.sheflug.org.uk/mailfaq.html

GNU - The Choice of a Complete Generation


Re: [9fans] one weird trick to break p9sk1 ?

2024-05-12 Thread Richard Miller
23h...@gmail.com:
> sorry for ignoring your ideas about a p9sk3, but is your mentioning of
> ocam's razor implying that dp9ik is too complicated?
> is there any other reason to stick with DES instead of AES in
> particular? i'm not a cryptographer by any means, but just curious.

My comments are about p9sk1; I'm not implying anything about other
algorithms.  When working with other people's software, whether
professionally or for my own purposes, I try to take a
minimum-intervention approach: because it's respectful, because of
Occam's Razor, because of Tony Hoare's observation that software can
be either so simple that it obviously has no bugs, or so complicated
that it has no obvious bugs.

I thought of 3DES in the first instance because of this desire to be
minimally disruptive.  Support for DES is already there and tested.
3DES only needs extra keys in /mnt/keys, and because 3DES encryption
with all three keys the same becomes single DES, there's a graceful
fallback when users have access only via an older client with
unmodified p9sk1. Obviously the server ticket would always be protected
by 3DES.
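
(For anyone following along, the graceful fallback is just the EDE
construction collapsing when the keys coincide:

  3DES_EDE(k1,k2,k3; m) = E_k3( D_k2( E_k1( m ) ) )
  with k1 = k2 = k3 = k:  E_k( D_k( E_k( m ) ) ) = E_k( m ),  i.e. single DES.)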

This is only the first scratching of an idea, not implemented yet.

I've got nothing against AES. I'm not a cryptographer either, but I did once
have to build a javacard implementation for a proprietary smartcard which
involved a lot of crypto infrastructure, and had to pass EMV certification.
Naturally that needed AES, elliptic curves, and plenty of other esoterica
to fit in with the existing environment and specifications.


--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/T56397eff6269af27-M2003e6b5eb34ea3270a33bec
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


[marxmail] Ukraine's fight for freedom: a socialist case for solidarity and self-determination

2024-05-12 Thread Richard Fidler
Historian and activist Paul Le Blanc offers an essential socialist
perspective on the Russia-Ukraine war, arguing for solidarity with Ukraine's
fight for self-determination while opposing the imperialist agendas of both
Russia and Western powers. Drawing on history and revolutionary principles,
Le Blanc makes the case that the democratic and socialist left must stand
with Ukraine's resistance by any means necessary. The text is based on a
talk that Le Blanc delivered on April 15, 2024. First published at
 Anti*Capitalist
Resistance.

* * *

It is necessary for those who support socialism and democracy to support
Ukrainian resistance to the Russian invasion of their country.  Here I want
to offer some historical and political background as to why I think this is
so. 

 

Full:
https://lifeonleft.blogspot.com/2024/05/ukraines-fight-for-freedom-socialist.html

 



-=-=-=-=-=-=-=-=-=-=-=-
Groups.io Links: You receive all messages sent to this group.
View/Reply Online (#30306): https://groups.io/g/marxmail/message/30306
Mute This Topic: https://groups.io/mt/106057597/21656
-=-=-
POSTING RULES & NOTES
#1 YOU MUST clip all extraneous text when replying to a message.
#2 This mail-list, like most, is publicly & permanently archived.
#3 Subscribe and post under an alias if #2 is a concern.
#4 Do not exceed five posts a day.
-=-=-
Group Owner: marxmail+ow...@groups.io
Unsubscribe: https://groups.io/g/marxmail/leave/8674936/21656/1316126222/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-




Re: Should we escape \url special characters in Beamer?

2024-05-12 Thread Richard Kimberly Heck

On 5/12/24 05:41, Jürgen Spitzmüller wrote:

Am Samstag, dem 25.03.2023 um 15:43 +0100 schrieb Jürgen Spitzmüller:

Should LyX escape '#' in the 'Frame' environment, or do we put the
responsibility on the user to know to use a Fragile Frame?

The former, but this requires some effort (probably a new layout flag
that advises to escape chars in embedded verbatim insets). Probably
not 2.4 stuff.

FWIW I have a patch ready for this for master. However, it requires a
file format change, so I'd better wait with it until 2.4.0 is settled?

In any case, please tell me before you plan to create
lyx2lyx/lyx_2_5.py, since I have this and all the other stuff needed to
get 2.5 conversion ready here.


Since 2.4.x has been branched, you can go ahead with that.

Riki


--
lyx-devel mailing list
lyx-devel@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-devel


Re: [LyX/master] Introduce NeedCProtect -1 layout option

2024-05-12 Thread Richard Kimberly Heck

On 5/12/24 02:11, Jürgen Spitzmüller wrote:

Am Sonntag, dem 12.05.2024 um 05:53 + schrieb Juergen Spitzmueller:

commit 207eaeee9071cb828a2ab7f4680f8ff92e379af8
Author: Juergen Spitzmueller 
Date:   Sun May 12 07:52:16 2024 +0200

     Introduce NeedCProtect -1 layout option
 
     It turns out beamer frame does not allow \cprotect and errors if

it is     used. Hence we need to prevent it in this context entirely.

Riki, I am tempted to propose this for 2.4.0, although we are very late
in the game and this introduces a slight documentation addition.

The problem this fixes is that beamer frame errors if \cprotect is used
within. This can happen if you have an URL with an underscore or some
other special character within a command (e.g., \alert or \structure)
within a frame (see attached MWE). This is not an exceptional scenario
and I am surprised that I only encountered it yesterday with a
presentation that works with 2.3.x. :-(


Since it's a regression, I think you should go ahead. I assume it is 
otherwise safe?


Note that you'll need to commit to 2.4.x and master separately.

Riki


--
lyx-devel mailing list
lyx-devel@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-devel


[clang] [Clang][CWG1815] Support lifetime extension of temporary created by aggregate initialization using a default member initializer (PR #87933)

2024-05-12 Thread Richard Smith via cfe-commits

zygoloid wrote:

> I'd like to proposal a separate PR for static analyzer. #91879 WDYT?

That sounds good to me.

https://github.com/llvm/llvm-project/pull/87933
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [Analyzer][CFG] Correctly handle rebuilt default arg and default init expression (PR #91879)

2024-05-12 Thread Richard Smith via cfe-commits


@@ -2433,6 +2429,30 @@ CFGBlock *CFGBuilder::VisitChildren(Stmt *S) {
   return B;
 }
 
+CFGBlock *CFGBuilder::VisitCXXDefaultArgExpr(CXXDefaultArgExpr *Arg,
+ AddStmtChoice asc) {
+  if (Arg->hasRewrittenInit()) {
+if (asc.alwaysAdd(*this, Arg)) {
+  autoCreateBlock();
+  appendStmt(Block, Arg);
+}
+return VisitStmt(Arg->getExpr(), asc);
+  }
+  return VisitStmt(Arg, asc);

zygoloid wrote:

Actually, are we safe from that even if the default argument is rewritten? Do 
we guarantee to recreate all subexpressions in that case?

https://github.com/llvm/llvm-project/pull/91879
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [Analyzer][CFG] Correctly handle rebuilt default arg and default init expression (PR #91879)

2024-05-12 Thread Richard Smith via cfe-commits


@@ -2433,6 +2429,30 @@ CFGBlock *CFGBuilder::VisitChildren(Stmt *S) {
   return B;
 }
 
+CFGBlock *CFGBuilder::VisitCXXDefaultArgExpr(CXXDefaultArgExpr *Arg,
+ AddStmtChoice asc) {
+  if (Arg->hasRewrittenInit()) {
+if (asc.alwaysAdd(*this, Arg)) {
+  autoCreateBlock();
+  appendStmt(Block, Arg);
+}
+return VisitStmt(Arg->getExpr(), asc);
+  }
+  return VisitStmt(Arg, asc);

zygoloid wrote:

I think it'd be useful to keep some of the old comment here: we can't add the 
default argument if it's not rewritten because we could end up with the same 
expression appearing multiple times.

https://github.com/llvm/llvm-project/pull/91879
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


Bug#241787: options to seperate hosts and for log compaction would both be nice

2024-05-12 Thread Richard Lewis
> This bug is nearly 20 years old. (It is a shame no-one replied - the links
> no longer work and there is not enough info recorded to act on.)
>
> Unless anyone is watching and can provide more info about what the issue
> is/was, then I suggest we close it.

A year later: closing.

logcheck can send reports as gzip'd attachments so perhaps it was even
fixed sometime in the last 20 years.



Bug#302379: dh_installlogcheck installs files as root:root 644, not root:logcheck 640

2024-05-12 Thread Richard Lewis
On Mon, 24 Aug 2009 08:36:21 -0400
Frédéric Brière  wrote:
> On Thu, Mar 31, 2005 at 09:54:34AM -0500, Marc Sherman wrote:
> > I reported a bug on a couple clamav packages (302253, 302254) which
> > noted that in Sarge, logcheck files are supposed to be root:logcheck
> > 640, not root:root 644.  The clamav maintainer replied that he's using
>
> I should note that while the /etc/logcheck/* directories are setgid to
> attempt to fix this discrepancy, this doesn't work, as dpkg will chown()
> the installed files anyway.
>
> I guess there should be a note in README.Maintainer to instruct people
> not to install those files as 640, as tempting as it may be.

Closing this bug because it no longer applies some 15 years later:
- The directory /etc/logcheck is no longer setgid (since bookworm).
- Rules can have any permissions, as long as logcheck can read them
  (the default is 644 and root:root, which also makes rules files harder to
  delete by accident).
- So there is no need for dh_installlogcheck to be patched here.

Therefore, closing as no longer a bug. (But feel free to reopen if there
is anything more to do.)



Bug#383289: RFE: logtail locking

2024-05-12 Thread Richard Lewis
On Wed, 16 Aug 2006 05:33:26 -0500 bingo  wrote:

> It would be good if logtail supports locking.

I think we need some more information if this bug is to be actioned.
logcheck uses logtail2 now (and syslog is not the default), so perhaps
it is not relevant after nearly 20 years (there were other replies
requesting patches in the intervening period, and no-one has provided
them -- so perhaps we should just close this as no longer relevant).

If the issue is that rsyslog might still be writing to the file when
logtail runs, surely all that might happen is that the same entries
get re-checked on the next run?
(And I don't think it would be sensible for logtail to try to stop
rsyslog writing while logtail runs.)
The journal has a better way to avoid races, using --cursor-file --
logcheck doesn't yet use that (watch this space!), but I think it might
also avoid the potential race conditions here (rough sketch below).
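
For the record, the journal mechanism I mean looks roughly like this -- a
sketch of the idea, not something logcheck does today, and the cursor path
is made up:

  # each run resumes exactly where the previous one stopped, so there is
  # no window in which a half-written line gets picked up twice
  journalctl -q --cursor-file=/var/lib/logcheck/journal.cursor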



Bug#750232: logtail2 should not print the final log entry if it does not end with "\n"

2024-05-12 Thread Richard Lewis
On Mon, 2 Jun 2014 10:25:40 -0700 (PDT) Chris Stromsoe  wrote:

> logtail2 does not do any sanity checking on the final line of input to
> make sure that it is complete and "\n" terminated.  If syslog is not set
> to flush on every write, it's possible for consecutive runs of logcheck to
> get a single log entry split in half for each run, resulting in false
> positives from logcheck.

Is this still an issue in 2024? If not, we could close this old bug.

If you ran logtail2 on a non-syslog file (which might actually be an
increasingly common usage in a systemd world?), then ignoring a line
without a trailing \n means that last line might never be checked,
which seems far worse than the occasional false positive. I don't think
I ever saw such false positives with rsyslog when I used it.



Bug#470997: logcheck: allow running w/o locking

2024-05-12 Thread Richard Lewis
On Fri, 14 Mar 2008 21:50:17 -0400 Frédéric Brière <

> When testing a checked-out copy of the rulefiles against an old log copy
> and sending the output to stdout, I still have to use sudo because
> logcheck insists on creating a lockfile.  It'd be nice to provide an
> option to skip that part.

This is easy to solve by changing LOCKDIR to point to something you can access.
You are presumably using a custom configuration file to run as
non-root (since you won't be able to read /etc/logcheck/logcheck.conf
without sudo), and that custom config file can change the location to
/tmp or wherever (a sketch is below).

I think this already works in the version in bookworm, so perhaps we
can close this bug.
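
Something along these lines, for instance -- a sketch only; the option
letters are as I remember them from logcheck(8), so double-check before
relying on it:

  # one-off, as root: make a copy of the config you can read
  cp /etc/logcheck/logcheck.conf ~/logcheck-test.conf
  echo 'LOCKDIR="/tmp"' >> ~/logcheck-test.conf
  # then, as your own user:
  logcheck -c ~/logcheck-test.conf -o -t   # -o: stdout, -t: don't update offsets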



Re: [PATCH] fortran: Assume there is no cyclic reference with submodule symbols [PR99798]

2024-05-12 Thread Paul Richard Thomas
Hi Mikael,

That is an ingenious solution. Given the complexity, I think that the
comments are well warranted.

OK for master and, I would suggest, 14-branch after a few weeks.

Thanks!

Paul

On Sun, 12 May 2024 at 14:16, Mikael Morin  wrote:

> Hello,
>
> Here is my final patch to fix the ICE of PR99798.
> It's maybe overly verbose with comments, but the memory management is
> hopefully clarified.
> I tested this with a full fortran regression test on x86_64-linux and a
> manual check with valgrind on the testcase.
> OK for master?
>
> -- 8< --
>
> This prevents a premature release of memory with procedure symbols from
> submodules, causing random compiler crashes.
>
> The problem is a fragile detection of cyclic references, which can match
> with procedures host-associated from a module in submodules, in cases
> where it
> shouldn't.  The formal namespace is released, and with it the dummy
> arguments
> symbols of the procedure.  But there is no cyclic reference, so the
> procedure
> symbol itself is not released and remains, with pointers to its dummy
> arguments
> now dangling.
>
> The fix adds a condition to avoid the case, and refactors to a new
> predicate
> by the way.  Part of the original condition is also removed, for lack of a
> reason to keep it.
>
> PR fortran/99798
>
> gcc/fortran/ChangeLog:
>
> * symbol.cc (gfc_release_symbol): Move the condition guarding
> the handling cyclic references...
> (cyclic_reference_break_needed): ... here as a new predicate.
> Remove superfluous parts.  Add a condition preventing any premature
> release with submodule symbols.
>
> gcc/testsuite/ChangeLog:
>
> * gfortran.dg/submodule_33.f08: New test.
> ---
>  gcc/fortran/symbol.cc  | 54 +-
>  gcc/testsuite/gfortran.dg/submodule_33.f08 | 20 
>  2 files changed, 72 insertions(+), 2 deletions(-)
>  create mode 100644 gcc/testsuite/gfortran.dg/submodule_33.f08
>
> diff --git a/gcc/fortran/symbol.cc b/gcc/fortran/symbol.cc
> index 8f7deac1d1e..0a1646def67 100644
> --- a/gcc/fortran/symbol.cc
> +++ b/gcc/fortran/symbol.cc
> @@ -3179,6 +3179,57 @@ gfc_free_symbol (gfc_symbol *)
>  }
>
>
> +/* Returns true if the symbol SYM has, through its FORMAL_NS field, a
> reference
> +   to itself which should be eliminated for the symbol memory to be
> released
> +   via normal reference counting.
> +
> +   The implementation is crucial as it controls the proper release of
> symbols,
> +   especially (contained) procedure symbols, which can represent a lot of
> memory
> +   through the namespace of their body.
> +
> +   We try to avoid freeing too much memory (causing dangling pointers),
> to not
> +   leak too much (wasting memory), and to avoid expensive walks of the
> symbol
> +   tree (which would be the correct way to check for a cycle).  */
> +
> +bool
> +cyclic_reference_break_needed (gfc_symbol *sym)
> +{
> +  /* Normal symbols don't reference themselves.  */
> +  if (sym->formal_ns == nullptr)
> +return false;
> +
> +  /* Procedures at the root of the file do have a self reference, but
> they don't
> + have a reference in a parent namespace preventing the release of the
> + procedure namespace, so they can use the normal reference counting.
> */
> +  if (sym->formal_ns == sym->ns)
> +return false;
> +
> +  /* If sym->refs == 1, we can use normal reference counting.  If
> sym->refs > 2,
> + the symbol won't be freed anyway, with or without cyclic reference.
> */
> +  if (sym->refs != 2)
> +return false;
> +
> +  /* Procedure symbols host-associated from a module in submodules are
> special,
> + because the namespace of the procedure block in the submodule is
> different
> + from the FORMAL_NS namespace generated by host-association.  So
> there are
> + two different namespaces representing the same procedure namespace.
> As
> + FORMAL_NS comes from host-association, which only imports symbols
> visible
> + from the outside (dummy arguments basically), we can assume there is
> no
> + self reference through FORMAL_NS in that case.  */
> +  if (sym->attr.host_assoc && sym->attr.used_in_submodule)
> +return false;
> +
> +  /* We can assume that contained procedures have cyclic references,
> because
> + the symbol of the procedure itself is accessible in the procedure
> body
> + namespace.  So we assume that symbols with a formal namespace
> different
> + from the declaration namespace and two references, one of which is
> about
> + to be removed, are procedures with just the self reference left.  At
> this
> + point, the symbol SYM matches that pattern, so we return true here to
> + permit the release of SYM.  */
> +  return true;
> +}
> +
> +
>  /* Decrease the reference counter and free memory when we reach zero.
> Returns true if the symbol has been freed, false otherwise.  */
>
> @@ -3188,8 +3239,7 @@ gfc_release_symbol (gfc_symbol *)
>


RE: Can't Add Bookshare Books to VoiceDream

2024-05-12 Thread Richard Turner
Now, it not showing up in the updates list is very, very strange.

That would point to an Apple issue.

When I check for updates, I do a triple tap on the app store icon, then swipe 
to updates and double tap.  Then, touch where it says previously Updated and do 
a three finger swipe down to refresh the screen.

I have found that has been the only reliable way for me to get all updates 
available to show up consistently.

 

If Voice Dream doesn’t show up then, I think there is a bigger issue with your 
phone.

 

 

 

 

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site: https://www.turner42.com

 

From: viphone@googlegroups.com  On Behalf Of CC
Sent: Sunday, May 12, 2024 7:47 AM
To: VIPhone 
Subject: Re: Can't Add Bookshare Books to VoiceDream

 

Two new problems. Version 4.34.2 still gives no download option and voicedream 
updates don't appear in new app store updates. It seems I must search for the 
voicedream app, open the voicedream result, then click on, update button. All 
other app updates appear as expected. 

On Sunday, May 12, 2024 at 12:31:21 AM UTC-4 Richard Turner wrote:

Well, to test this, I went and turned audio ducking on, but on or off, all 
audio books play just fine for me. 

So that may have nothing to do with your issue. But for me, not only do audio 
books play just fine, but the speed adjustment sticks. 

I wish I had a good suggestion for what else to try. 

 

 

Richard, USA

“Grandma always told us, “Be careful when you pray for patience. God stores it 
on the other side of Hell and you will have to go through Hell to get it.”

-- Cedrick Bridgeforth

 

My web site: https://www.turner42.com/

 

 





On May 11, 2024, at 9:01 PM, Richard Turner <richard...@comcast.net> wrote:



Based on someone else’s post, is audio ducking on?  If so, turn it off and see 
if it plays.

 

 

 

Richard, USA

"It's no great honor to be blind, but it's more than a nuisance and less than a 
disaster. Either you're going to fight like hell when your sight fails or 
you're going to stand on the sidelines for the rest of your life." -- Dr. 
Margaret Rockwell Phanstiehl Founder of Audio Description (1932-2009)

 

My web site: https://www.turner42.com

 

From: vip...@googlegroups.com <vip...@googlegroups.com> On Behalf Of CC
Sent: Saturday, May 11, 2024 4:27 PM
To: VIPhone <vip...@googlegroups.com>
Subject: Re: Can't Add Bookshare Books to VoiceDream

 

Now, I am no longer able to play previously downloaded audio books from 
bookshare in the voicedream app. As soon as I press play, the app closes and I 
am returned to my homepage. Has anyone else encountered this new problem? 

On Friday, May 10, 2024 at 6:53:08 AM UTC-4 Richard Turner wrote:

All my posts to that list go through. Have you checked your junk folder?

 

 

Richard, USA

“Grandma always told us, “Be careful when you pray for patience. God stores it 
on the other side of Hell and you will have to go through Hell to get it.”

-- Cedrick Bridgeforth

 

My web site: https://www.turner42.com/

 

 

 

On May 10, 2024, at 3:18 AM, Kelby Carlson <kelbyc...@gmail.com> wrote:



I have not run into the voice error, thankfully.

 

I post to that list, but none of my posts ever seem to go through; they must 
not check their moderation filters much.

 

On May 10, 2024, at 5:59 AM, Richard Turner <richard...@comcast.net> wrote:

While I still have the download buttons when searching for Bookshare books, 
the audio speed adjustment is broken on my 13 Pro running iOS 17.4.1. I have 
reported this through the app. You all really should join the Voice-Dream list 
the company has started so everything is seen by the company as well as 
reporting through the Voice Dream app. 

voice-drea...@groups.io 

 

 

Richard, USA

“Grandma always told us, “Be careful when you pray for patience. God stores it 
on the other side of Hell and you will have to go through Hell to get it.”

-- Cedrick Bridgeforth

 

My web site: https://www.turner42.com/

 

 

 

On May 9, 2024, at 11:42 PM, Arnold Schmidt <als...@gmail.com> wrote:

I just updated it on both my S E 2, and on this one, with no  changes. I still 
have download buttons, and the playback speed is still adjustable. I am running 
version 17.4.1 on both my devices.   Sadly to say, they are going to have  to 
have a device in hand that exhibits these specific problems before they will be 
able to address them. If it works on their devices as it should, they can't 

Bug#470608: work-around for logcheck email charset

2024-05-12 Thread Richard Lewis
On Sat, 16 May 2020 17:12:42 -0700 Wade Richards  wrote:
> This is regarding Debian bug #470608 "wrong charset in logcheck mail
> (charset=unknown-8bit)"
>
>
> The maintainer has closed this bug as 'wontfix', but if an end-user is
> looking for a work-around, you can add the following to your
> /etc/logcheck.conf file
>
>
> # Additional args to mime-construct for sending email
>
> MIMECONSTRUCTARGS="$MIMECONSTRUCTARGS --type 'text/plain; charset=UTF-8'"
>
>
> It's an undocumented option, so it may stop working with a future
> version of logcheck, but it works for now.

logcheck also has an option 'MIMEENCODING=' -- does that solve this issue?
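
If that variable does what its name suggests, the work-around above might
shrink to a one-liner in /etc/logcheck/logcheck.conf -- untested, so treat
this as a guess rather than a recommendation:

  MIMEENCODING="utf-8"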



Bug#1033059: logcheck: NEWS advice how to deal with timestamps in different formats

2024-05-12 Thread Richard Lewis
On Sat, 18 Mar 2023 18:55:25 + Richard Lewis
 wrote:
> On Sat, 18 Mar 2023, 15:12 Holger Levsen,  wrote:
>
> > On Thu, Mar 16, 2023 at 06:00:06PM +, Holger Levsen wrote:
> > > aaah, thanks! I only checked
> > /usr/share/doc/logcheck/NEWS.Debian.gz
> > > but not /usr/share/doc/logcheck-database/NEWS.Debian.gz
> >
> > now that I read it and followed the advice and the very nice
> > sed example there, I can they that it worked flawlessly and was
> > very easy to do. Thank you for that NEWS entry!
> >
> > > so maybe reassign this bug to src:release-notes?
> >
> > this question is still open... though maybe cloning the bug is even
> > better, I'd really appreciated a small pointer to logcheck-database's NEWS
> > file in the NEWS for logcheck...

I think we may as well close this bug, unless anyone objects; the bookworm
release notes cover the issue.
I'm planning to document the next big change in logcheck's NEWS.Debian, to
catch all users - watch this space!



Bug#583600: ignore individual entries but write summaries

2024-05-12 Thread Richard Lewis
On Fri, 28 May 2010 19:04:17 +0200 Holger Levsen  wrote:

> I often add logcheck ignore rules for security related events (like ssh login
> attemps. etc), cause they are too many and login is protected reasonably
> anyway.
>
> But then I would like to get summaries for some ignored patterns, probably one
> mail per host day.
>
> Do you think thats a reasonable feature request?

This is already possible - maybe that's why no-one replied for 15
years. Simply create a command called syslog-summary and tell logcheck
to use it (via the setting in logcheck.conf); a rough sketch is below.

I think we should close this bug on that basis.

It would be better to develop such a summarising programme outside
logcheck, as it's a whole other project to work out what to summarise
and how to present the results - you'd need a lot of flexibility to
please everyone, I suspect. There was a packaged version called
syslog-summary some releases ago, but no-one maintained it and it was
removed from Debian; the code to use it remains in logcheck, though.
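
As a starting point, even something this small would do as a drop-in. A
sketch only: it assumes the interface is plain stdin-to-stdout (or a single
file argument), and the prefix-stripping regex is a guess at a typical
syslog layout, so adjust it to your logs.

  #!/bin/sh
  # minimal stand-in for syslog-summary: count identical messages after
  # stripping the "Mon DD HH:MM:SS host " prefix
  sed -E 's/^[A-Z][a-z]{2} [ 0-9]{2} [0-9:]{8} [^ ]+ //' "${1:--}" \
      | sort | uniq -c | sort -rn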



[9fans] one weird trick to break p9sk1 ?

2024-05-12 Thread Richard Miller
I'm using a new subject [was: Interoperating between 9legacy and 9front]
in the hope of continuing discussion of the vulnerability of p9sk1 without
too many other distractions.

mo...@posixcafe.org said:
> If we agree that:
> 
> 1) p9sk1 allows the shared secret to be brute-forced offline.
> 2) The average consumer machine is fast enough to make a large amount of 
> attempts in a short time,
>in other words triple DES is not computationally hard to brute force these 
> days.
> 
> I don't know how you don't see how this is trivial to do.

I agree that 1) is true, but I don't think it's serious. The shared secret is
only valid for the current session, so by the time it's brute forced, it may
be too late to use. I think the bad vulnerability is that the ticket request
and response can be used offline to brute force the (more permanent) DES keys
of the client and server. Provided, of course, that the random teenager somehow
is able to listen in on the conversation between my p9sk1 clients and servers.

On the other hand, it's hard to know whether to agree or disagree with 2),
without knowing exactly what is meant by "large amount", "short time",
"computationally hard", and "trivial".

When Jacob told me at IWP9 in Waterloo that p9sk1 had been broken, not
just theoretically but in practice, I was looking forward to seeing publication
of the details. Ori's recent claim in 9fans seemed more specific:

> From: o...@eigenstate.org
> ...
> keep in mind that it can literally be brute forced in an
> afternoon by a teenager; even a gpu isn't needed to do
> this in a reasonable amount of time.

I was hoping for a citation to the experimental result Ori's claim was
based on. If the "it" which can be brute forced refers to p9sk1, it
would be very interesting to learn if there are flaws in the algorithm
which will allow it to be broken without breaking DES. My assumption
was that "it" was referring simply to brute forcing DES keys with a
known-plaintext attack. In that case, a back of the envelope calculation
can help us to judge whether the "in an afternoon" claim is plausible.

In an afternoon from noon to 6pm, there are 6*60*60 seconds. To crack
a single DES key by brute force, we'd expect to have to search on average
half the 56-bit key space, performing about 2^55 DES encryptions. So how
fast would the teenager's computer have to be?

cpu% hoc
2^55/(6*60*60)
1667999861989
1/_
5.995204332976e-13

1667 billion DES encryptions per second, or less than a picosecond
per encryption. I think just enumerating the keys at that speed would
be quite a challenge for "the average consumer machine" (even with a GPU).

A bit of googling for actual results on DES brute force brings up
https://www.sciencedirect.com/science/article/abs/pii/S138376212266
from March 2022, which says:
 "Our best optimizations provided 3.87 billion key searches per second for 
Des/3des
 ... on an RTX 3070 GPU."

So even with a GPU, the expected time to crack a random 56-bit key would be
something like:

cpu% hoc
2^55/3.87e9
9309766.671567
_/(60*60*24)
107.7519290691

More than three months. The same paper mentions someone else's purpose-built
machine called RIVYERA which "uses 128 Xilinx Spartan-6 LX150 FPGAs ... 
can try 691 billion Des keys in a second ... costs around 100,000 Euros".
Still not quite fast enough to break a key in an afternoon.

When Jacob says "triple DES is not computationally hard to brute force these 
days",
I assume this is just a slip of the keyboard, since p9sk1 uses only single DES.
But if we are worried about the shaky foundations of p9sk1 being based on
single DES, Occam's Razor indicates that we should look for the minimal and 
simplest
possible extension to p9sk1 to mitigate the brute force threat. The manual 
entry for
des(2) suggests that the Plan 9 authors were already thinking along these lines:

 BUGS
  Single DES can be realistically broken by brute-force; its
  56-bit key is just too short.  It should not be used in new
  code, which should probably use aes(2) instead, or at least
  triple DES.

Let's postulate a p9sk3 which is identical to p9sk1 except that it encrypts the
ticket responses using 3DES instead of DES. The effective keyspace of 3DES is
considered to be 112 bits because of the theoretical meet-in-the-middle attack.
So brute forcing a 3DES key with commodity hardware (including GPU) would be
expected to take something like:

cpu% hoc
2^111/3.87e9
6.708393874076e+23
_/(60*60*24*365.25)
2.125761741728e+16

That's quadrillions of years. Not what most people would call "trivial".
And that's generously assuming the implementation of meet-in-the-middle
is zero cost. Without meet-in-the-middle, we're looking at a 168-bit
keyspace and an even more preposterous number of years.

I was looking forward to the "proof of concept". Even if we can't see
the 

[Qemu-commits] [qemu/qemu] 9f07e4: target/i386: remove PCOMMIT from TCG, deprecate pr...

2024-05-12 Thread Richard Henderson via Qemu-commits
  Branch: refs/heads/staging
  Home:   https://github.com/qemu/qemu
  Commit: 9f07e47a5e96c88c1d2892fbdcbc8ff0437b7ac3
  
https://github.com/qemu/qemu/commit/9f07e47a5e96c88c1d2892fbdcbc8ff0437b7ac3
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M docs/about/deprecated.rst
M target/i386/cpu.c
M target/i386/cpu.h
M target/i386/tcg/translate.c

  Log Message:
  ---
  target/i386: remove PCOMMIT from TCG, deprecate property

The PCOMMIT instruction was never included in any physical processor.
TCG implements it as a no-op instruction, but its utility is debatable
to say the least.  Drop it from the decoder since it is only available
with "-cpu max", which does not guarantee migration compatibility
across versions, and deprecate the property just in case someone is
using it as "pcommit=off".

Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: 41c685dc59bb611096f3bb6a663cfa82e4cba97b
  
https://github.com/qemu/qemu/commit/41c685dc59bb611096f3bb6a663cfa82e4cba97b
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M target/i386/tcg/translate.c

  Log Message:
  ---
  target/i386: fix operand size for DATA16 REX.W POPCNT

According to the manual, 32-bit vs 64-bit is governed by REX.W
and REX ignores the 0x66 prefix.  This can be confirmed with this
program:

#include <stdio.h>
int main()
{
   int x = 0x1234;
   int y;
   asm("popcntl %1, %0" : "=r" (y) : "r" (x)); printf("%x\n", y);
   asm("mov $-1, %0; .byte 0x66; popcntl %1, %0" : "+r" (y) : "r" (x)); 
printf("%x\n", y);
   asm("mov $-1, %0; .byte 0x66; popcntq %q1, %q0" : "+r" (y) : "r" (x)); 
printf("%x\n", y);
}

which prints 5//5 on real hardware and 5//
on QEMU.

Cc: qemu-sta...@nongnu.org
Reviewed-by: Zhao Liu 
Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: 40a3ec7b5ffde500789d016660a171057d6b467c
  
https://github.com/qemu/qemu/commit/40a3ec7b5ffde500789d016660a171057d6b467c
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M target/i386/tcg/translate.c

  Log Message:
  ---
  target/i386: rdpkru/wrpkru are no-prefix instructions

Reject 0x66/0xf3/0xf2 in front of them.

Cc: qemu-sta...@nongnu.org
Reviewed-by: Richard Henderson 
Signed-off-by: Paolo Bonzini 


  Commit: 3fabbe0b7d458d6380f4b3246b8b32400f6bd1d9
  
https://github.com/qemu/qemu/commit/3fabbe0b7d458d6380f4b3246b8b32400f6bd1d9
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M target/i386/tcg/decode-new.c.inc
M target/i386/tcg/decode-new.h
M target/i386/tcg/emit.c.inc
M target/i386/tcg/translate.c

  Log Message:
  ---
  target/i386: move prefetch and multi-byte UD/NOP to new decoder

These are trivial to add, and moving them to the new decoder fixes some
corner cases: raising #UD instead of an instruction fetch page fault for
the undefined opcodes, and incorrectly rejecting 0F 18 prefetches with
register operands (which are treated as reserved NOPs).

Reviewed-by: Richard Henderson 
Reviewed-by: Zhao Liu 
Signed-off-by: Paolo Bonzini 


  Commit: fe01af5d47d4cf7fdf90c54d43f784e5068c8d72
  
https://github.com/qemu/qemu/commit/fe01af5d47d4cf7fdf90c54d43f784e5068c8d72
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M target/i386/cpu.c

  Log Message:
  ---
  target/i386: fix feature dependency for WAITPKG

The VMX feature bit depends on general availability of WAITPKG,
not the other way round.

Fixes: 33cc88261c3 ("target/i386: add support for 
VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE", 2023-08-28)
Cc: qemu-sta...@nongnu.org
Reviewed-by: Zhao Liu 
Signed-off-by: Paolo Bonzini 


  Commit: ff5b5739f97d08d9ca984ec8016b54487a76401b
  
https://github.com/qemu/qemu/commit/ff5b5739f97d08d9ca984ec8016b54487a76401b
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M tests/tcg/i386/test-i386.c

  Log Message:
  ---
  tests/tcg: cover lzcnt/tzcnt/popcnt

Reviewed-by: Zhao Liu 
Signed-off-by: Paolo Bonzini 


  Commit: 23b1f53c2c8990ed745acede171e49645af3d6d0
  
https://github.com/qemu/qemu/commit/23b1f53c2c8990ed745acede171e49645af3d6d0
  Author: Paolo Bonzini 
  Date:   2024-05-10 (Fri, 10 May 2024)

  Changed paths:
M configure

  Log Message:
  ---
  configure: quote -D options that are passed through to meson

Ensure that they go through unmodified, instead of removing one layer
of quoting.

-D is a pretty specialized option and most options that can have spaces
do not need it (for example, c_args is covered by --extra-cflags).
Therefore it's unlikely that this causes actual trouble.  However,
a somewhat realis

Re: [PATCH 1/1] target/ppc: Move VMX integer add/sub saturate insns to decodetree.

2024-05-12 Thread Richard Henderson

On 5/12/24 11:38, Chinmay Rath wrote:

@@ -2934,6 +2870,184 @@ static bool do_vx_vaddsubcuw(DisasContext *ctx, arg_VX *a, int add)
  return true;
  }
  
+static inline void do_vadd_vsub_sat
+(
+unsigned vece, TCGv_vec t, TCGv_vec sat, TCGv_vec a, TCGv_vec b,
+void (*norm_op)(unsigned, TCGv_vec, TCGv_vec, TCGv_vec),
+void (*sat_op)(unsigned, TCGv_vec, TCGv_vec, TCGv_vec))
+{
+TCGv_vec x = tcg_temp_new_vec_matching(t);
+norm_op(vece, x, a, b);
+sat_op(vece, t, a, b);
+tcg_gen_cmp_vec(TCG_COND_NE, vece, x, x, t);
+tcg_gen_or_vec(vece, sat, sat, x);
+}


As a separate change, before or after, the cmp_vec may be simplified to xor_vec.  Which 
means that INDEX_op_cmp_vec need not be probed in the vecop_lists.  See


https://lore.kernel.org/qemu-devel/20240506010403.6204-31-richard.hender...@linaro.org/

which is performing the same operation on AArch64.
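
Concretely, a sketch of the simplified helper under that suggestion (assuming,
as the existing code does, that the accumulated SAT vector only ever needs to
be tested for any non-zero bit):

static inline void do_vadd_vsub_sat(unsigned vece, TCGv_vec t, TCGv_vec sat,
                                    TCGv_vec a, TCGv_vec b,
                                    void (*norm_op)(unsigned, TCGv_vec, TCGv_vec, TCGv_vec),
                                    void (*sat_op)(unsigned, TCGv_vec, TCGv_vec, TCGv_vec))
{
    TCGv_vec x = tcg_temp_new_vec_matching(t);

    /* Modular and saturating results, computed side by side.  */
    norm_op(vece, x, a, b);
    sat_op(vece, t, a, b);

    /*
     * The two results differ exactly when saturation occurred, so the
     * XOR is non-zero in that case; OR it into the SAT accumulator.
     * No cmp_vec is emitted, so INDEX_op_cmp_vec can be dropped from
     * the vecop_lists.
     */
    tcg_gen_xor_vec(vece, x, x, t);
    tcg_gen_or_vec(vece, sat, sat, x);
}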



+static bool do_vx_vadd_vsub_sat(DisasContext *ctx, arg_VX *a,
+int sign, int vece, int add)
+{
+static const TCGOpcode vecop_list_sub_u[] = {
+INDEX_op_sub_vec, INDEX_op_ussub_vec, INDEX_op_cmp_vec, 0
+};
+static const TCGOpcode vecop_list_sub_s[] = {
+INDEX_op_sub_vec, INDEX_op_sssub_vec, INDEX_op_cmp_vec, 0
+};
+static const TCGOpcode vecop_list_add_u[] = {
+INDEX_op_add_vec, INDEX_op_usadd_vec, INDEX_op_cmp_vec, 0
+};
+static const TCGOpcode vecop_list_add_s[] = {
+INDEX_op_add_vec, INDEX_op_ssadd_vec, INDEX_op_cmp_vec, 0
+};
+
+static const GVecGen4 op[2][3][2] = {
+{
+{
+{
+.fniv = gen_vsub_sat_u,
+.fno = gen_helper_VSUBUBS,
+.opt_opc = vecop_list_sub_u,
+.write_aofs = true,
+.vece = MO_8
+},
+{
+.fniv = gen_vadd_sat_u,
+.fno = gen_helper_VADDUBS,
+.opt_opc = vecop_list_add_u,
+.write_aofs = true,
+.vece = MO_8
+},
+},
+{
+{
+.fniv = gen_vsub_sat_u,
+.fno = gen_helper_VSUBUHS,
+.opt_opc = vecop_list_sub_u,
+.write_aofs = true,
+.vece = MO_16
+},
+{
+.fniv = gen_vadd_sat_u,
+.fno = gen_helper_VADDUHS,
+.opt_opc = vecop_list_add_u,
+.write_aofs = true,
+.vece = MO_16
+},
+},
+{
+{
+.fniv = gen_vsub_sat_u,
+.fno = gen_helper_VSUBUWS,
+.opt_opc = vecop_list_sub_u,
+.write_aofs = true,
+.vece = MO_32
+},
+{
+.fniv = gen_vadd_sat_u,
+.fno = gen_helper_VADDUWS,
+.opt_opc = vecop_list_add_u,
+.write_aofs = true,
+.vece = MO_32
+},
+},
+},
+{
+{
+{
+.fniv = gen_vsub_sat_s,
+.fno = gen_helper_VSUBSBS,
+.opt_opc = vecop_list_sub_s,
+.write_aofs = true,
+.vece = MO_8
+},
+{
+.fniv = gen_vadd_sat_s,
+.fno = gen_helper_VADDSBS,
+.opt_opc = vecop_list_add_s,
+.write_aofs = true,
+.vece = MO_8
+},
+},
+{
+{
+.fniv = gen_vsub_sat_s,
+.fno = gen_helper_VSUBSHS,
+.opt_opc = vecop_list_sub_s,
+.write_aofs = true,
+.vece = MO_16
+},
+{
+.fniv = gen_vadd_sat_s,
+.fno = gen_helper_VADDSHS,
+.opt_opc = vecop_list_add_s,
+.write_aofs = true,
+.vece = MO_16
+},
+},
+{
+{
+.fniv = gen_vsub_sat_s,
+.fno = gen_helper_VSUBSWS,
+.opt_opc = vecop_list_sub_s,
+.write_aofs = true,
+.vece = MO_32
+},
+{
+.fniv = gen_vadd_sat_s,
+.fno = gen_helper_VADDSWS,
+.opt_opc = vecop_list_add_s,
+.write_aofs = true,
+.vece = MO_32
+},
+},
+},
+};


While this table is not wrong, I think it is clearer to have separate tables, one per 

Re: [Patch, fortran] PR113363 - ICE on ASSOCIATE and unlimited polymorphic function

2024-05-12 Thread Paul Richard Thomas
Hi Harald,

Please find attached my resubmission for pr113363. The changes are as
follows:
(i) The chunk in gfc_conv_procedure_call is new. This was the source of one
of the memory leaks;
(ii) The incorporation of the _len field in trans_class_assignment was done
for the pr84006 patch;
(iii) The source of all the invalid memory accesses and so on was down to
the use of realloc. I tried all sorts of workarounds such as testing the
vptrs and the sizes but only free followed by malloc worked. I have no idea
at all why this is the case; and
(iv) I took account of your remarks about the chunk in trans-array.cc by
removing it, and of the point that the chunk in trans-stmt.cc would leak
frontend memory.

OK for mainline (and -14 branch after a few weeks)?

Regards

Paul

Fortran: Fix wrong code in unlimited polymorphic assignment [PR113363]

2024-05-12  Paul Thomas  

gcc/fortran
PR fortran/113363
* trans-array.cc (gfc_array_init_size): Use the expr3 dtype so
that the correct element size is used.
* trans-expr.cc (gfc_conv_procedure_call): Remove restriction
that ss and ss->loop be present for the finalization of class
array function results.
(trans_class_assignment): Use free and malloc, rather than
realloc, for character expressions assigned to unlimited poly
entities.
* trans-stmt.cc (gfc_trans_allocate): Build a correct rhs for
the assignment of an unlimited polymorphic 'source'.

gcc/testsuite/
PR fortran/113363
* gfortran.dg/pr113363.f90: New test.
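
As an illustration (not the actual pr113363.f90 testcase), the realloc problem
described in point (iii) shows up in assignments like the following, where the
unlimited polymorphic lhs must be re-sized to hold a character value of a new
dynamic type and length:

program poly_char_assign
  implicit none
  class(*), allocatable :: x
  x = 42                  ! dynamic type INTEGER
  x = "unlimited poly"    ! dynamic type CHARACTER(len=14); lhs storage re-sized
  select type (x)
  type is (character(*))
    print *, len(x), x
  end select
end program poly_char_assign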


> > The first chunk in trans-array.cc ensures that the array dtype is set to
> > the source dtype. The second chunk ensures that the lhs _len field does
> not
> > default to zero and so is specific to dynamic types of character.
> >
>
> Why the two gfc_copy_ref?  valgrind pointed my to the tail
> of gfc_copy_ref which already has:
>
>dest->next = gfc_copy_ref (src->next);
>
> so this looks redundant and leaks frontend memory?
>
> ***
>
> Playing with the testcase, I find several invalid writes with
> valgrind, or a heap buffer overflow with -fsanitize=address .
>
>
>
diff --git a/gcc/fortran/trans-array.cc b/gcc/fortran/trans-array.cc
index 7ec33fb1598..c5b56f4e273 100644
--- a/gcc/fortran/trans-array.cc
+++ b/gcc/fortran/trans-array.cc
@@ -5957,6 +5957,11 @@ gfc_array_init_size (tree descriptor, int rank, int corank, tree * poffset,
   tmp = gfc_conv_descriptor_dtype (descriptor);
   gfc_add_modify (pblock, tmp, gfc_get_dtype_rank_type (rank, type));
 }
+  else if (expr3_desc && GFC_DESCRIPTOR_TYPE_P (TREE_TYPE (expr3_desc)))
+{
+  tmp = gfc_conv_descriptor_dtype (descriptor);
+  gfc_add_modify (pblock, tmp, gfc_conv_descriptor_dtype (expr3_desc));
+}
   else
 {
   tmp = gfc_conv_descriptor_dtype (descriptor);
diff --git a/gcc/fortran/trans-expr.cc b/gcc/fortran/trans-expr.cc
index 4590aa6edb4..e315e2d3370 100644
--- a/gcc/fortran/trans-expr.cc
+++ b/gcc/fortran/trans-expr.cc
@@ -8245,8 +8245,7 @@ gfc_conv_procedure_call (gfc_se * se, gfc_symbol * sym,
 	 call the finalization function of the temporary. Note that the
 	 nullification of allocatable components needed by the result
 	 is done in gfc_trans_assignment_1.  */
-  if (expr && ((gfc_is_class_array_function (expr)
-		&& se->ss && se->ss->loop)
+  if (expr && (gfc_is_class_array_function (expr)
 		   || gfc_is_alloc_class_scalar_function (expr))
 	  && se->expr && GFC_CLASS_TYPE_P (TREE_TYPE (se->expr))
 	  && expr->must_finalize)
@@ -12028,18 +12027,25 @@ trans_class_assignment (stmtblock_t *block, gfc_expr *lhs, gfc_expr *rhs,
 
   /* Reallocate if dynamic types are different. */
   gfc_init_block (_alloc);
-  tmp = fold_convert (pvoid_type_node, class_han);
-  re = build_call_expr_loc (input_location,
-builtin_decl_explicit (BUILT_IN_REALLOC), 2,
-tmp, size);
-  re = fold_build2_loc (input_location, MODIFY_EXPR, TREE_TYPE (tmp), tmp,
-			re);
-  tmp = fold_build2_loc (input_location, NE_EXPR,
-			 logical_type_node, rhs_vptr, old_vptr);
-  re = fold_build3_loc (input_location, COND_EXPR, void_type_node,
-			tmp, re, build_empty_stmt (input_location));
-  gfc_add_expr_to_block (_alloc, re);
-
+  if (UNLIMITED_POLY (lhs) && rhs->ts.type == BT_CHARACTER)
+	{
+	  gfc_add_expr_to_block (_alloc, gfc_call_free (class_han));
+	  gfc_allocate_using_malloc (_alloc, class_han, size, NULL_TREE);
+	}
+  else
+	{
+	  tmp = fold_convert (pvoid_type_node, class_han);
+	  re = build_call_expr_loc (input_location,
+builtin_decl_explicit (BUILT_IN_REALLOC),
+2, tmp, size);
+	  re = fold_build2_loc (input_location, MODIFY_EXPR, TREE_TYPE (tmp),
+tmp, re);
+	  tmp = fold_build2_loc (input_location, NE_EXPR,
+ logical_type_node, rhs_vptr, old_vptr);
+	  re = fold_build3_loc (input_location, COND_EXPR, void_type_node,
+tmp, re, build_empty_stmt (input_location));
+	  gfc_add_expr_to_block (_alloc, re);
+	}
   tree realloc_expr = lhs->ts.type == BT_CLASS ?
 	  gfc_finish_block (_alloc) :
 			


Re: target/ppc: Move VMX int add/sub saturate insns to decodetree.

2024-05-12 Thread Richard Henderson

On 5/12/24 11:38, Chinmay Rath wrote:

1. vsubsbs and bcdtrunc :

In this pair, bcdtrunc has the insn flag check PPC2_ISA300 in the
vmx-impl file, within the GEN_VXFORM_DUAL macro, which does this flag
check.
However it also has this flag check in the vmx-ops file.
Hence I have retained the same in the new entry in the vmx-ops file.
This is consistent with the behaviour done in the following commit:
https://github.com/qemu/qemu/commit/b132be53a4ba6a0a40d5643d791822f958a36e53
So even though the flag check is removed from the vmx-impl file, it is
retained in the vmx-ops file. All good here.

2. vadduhs and vmul10euq :

In this pair, vmul10euq has the insn flag check PPC2_ISA300 in the
vmx-impl file, check done within the GEN_VXFORM_DUAL macro.
However the same flag was NOT originally present in the
vmx-ops file, so I have NOT included it in its new entry in the vmx-ops
file. I have done this, following the behaviour done in the following
commit:
https://github.com/qemu/qemu/commit/c85929b2ddf6bbad737635c9b85213007ec043af
So this flag check for vmul10euq is excluded now. Is this not a problem?
I feel that this leads to the flag check being skipped now, although this
behaviour was followed in the above-mentioned commit.


This second link is for VAVG* and VABSD*.

Yes you are correct that this second case was done incorrectly. Thankfully the mistake was 
fixed in the very next commit, when VABSD* was converted to decodetree as well.



r~


