Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 5:15 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> On 10/14/22 11:36, Koning, Paul wrote:
>> 
>>> On Oct 14, 2022, at 1:10 PM, Jeff Law  wrote:
>>> 
>>> On 10/14/22 10:37, Koning, Paul wrote:
> ...
> But that approach falls down with reload/lra doing substitutions without 
> validating the result.  I guess it might be possible to cobble together 
> something with secondary reloads, but it's way way way down on my todo 
> list.
 Aren't the constraints enforced?  My experience is that I was getting 
 these bad addressing modes in some test programs, and that the constraints 
 I created to make the requirement explicit cured that.  Maybe I'm 
 expecting too much from constraints, but my (admittedly inexperienced) 
 understanding of them is that they inform reload what sort of things it 
 can construct, and what it cannot.
>>> It's not really a constraint issue -- the pattern's condition would cause 
>>> this not to recognize, but LRA doesn't re-recognize the insn.  We might be 
>>> able to hack something in the constraints to force a reload of the source 
>>> operand in this case.   Ugly, but a possibility.
>> I find it hard to cope with constraints that don't constrain.  Minimally it 
>> should be clearly documented exactly what cases fail to obey the constraints 
>> and what a target writer can do to deal with those failures.
> 
> Constraints have a purpose, but as I've noted, they really don't come into 
> play here.   Had LRA tried to see if what it created was a valid move insn, 
> the backend would have said "nope, that's not valid".  That's a stronger test 
> than checking the constraints.  If the insn is not valid according to its 
> condition, then the constraints simply don't matter.
> 
> I'm not aware of a case where constraints are failing to be obeyed and 
> constraints simply aren't a viable solution here other than to paper over the 
> problem and hope it doesn't show up elsewhere.
> 
> Right now operand 0's constraint is "<" meaning pre-inc operand, operand 1 is 
> "r".  How would you define a new constraint for operand 1 that disallows 
> overlap with operand 0 given that the H8 allows autoinc on any register 
> operand?   You can't look at operand 0 while processing the constraint for 
> operand 1. Similarly if you try to define a new constraint for operand0 
> without looking at operand1.

Easy but cumbersome: define constraints for "register N" (for each N) and 
another set for "autoinc on any register other than N".  In pdp11, I called 
these Z0, Z1... and Za, Zb... respectively.  Then the insn gets constraints 
that look like "Z0,Z1,Z2..." and "Za, Zb, Zc..." for the two operands.  As I 
said, see pdp11.md, the mov insn.

paul



Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 4:12 PM, Segher Boessenkool  
> wrote:
> 
> On Fri, Oct 14, 2022 at 07:58:39PM +, Koning, Paul wrote:
>>> On Oct 14, 2022, at 2:03 PM, Jeff Law via Gcc-patches 
>>>  wrote:
>>> On 10/14/22 11:35, Segher Boessenkool wrote:
 On Fri, Oct 14, 2022 at 11:07:43AM -0600, Jeff Law wrote:
>> LRA only ever generates insns that pass recog.  The backend allows this
>> define_insn, requiring it to be split (it returns template "#"), but
>> then somehow it doesn't match in any split pass?
> Nope.  The elimination code will just change one register without
> re-recognizing.  That's precisely what happens here.
 That is a big oversight then.  Please file a PR?
>>> 
>>> Sure.  But just recognizing (for this particular case) will just move the 
>>> fault from a failure to split to a failure to recognize. From my wanderings 
>>> in the elimination code, I don't see that it has a path that would allow it 
>>> to reasonably handle this case -- ie, if the insn does not recognize, what 
>>> then?   Conceptually we need to generate an input-reload but I don't see a 
>>> way to do that in the elimination code.  Maybe Vlad knows how it ought to 
>>> be handled.
>> 
>> I probably have too simplistic a view of this, but the way I think of it is 
>> that LRA (and reload) make decisions subject to constraints, and among those 
>> constraints are the ones specified in the MD file patterns.  That to me 
>> means that a substitution proposed to be made by the LRA code is subject to 
>> those invariants: it can't do that if the constraints say "no" and must then 
>> consider some other alternative.
> 
> I think that is exactly right for LRA.
> 
> Old reload conceptually changed the whole function all at once, starting
> with valid RTL, and ending with strictly valid RTL.  LRA works locally,
> one instruction at a time essentially, and makes the changes
> immediately.  If when it has finished work on the function offsets have
> changed, it walks over the whole function again, repeat until done.
> 
> "Strictly valid" means that the constraints are considered, and the insn
> is only valid if some enabled alternative satisfies all constraints.
> 
> I hope I got that all right, I'm not an expert!  :-)

Thanks Segher.

As I said earlier, if for some reason this straightforward understanding is not 
completely accurate, that can be handled provided it is documented when and why 
the exceptions arise, and what methods the target author should use to deal 
with these things when they happen.

As a target maintainer not deeply skilled in the GCC common internals, I tend 
to trip over these things.  With the old reload, and secondary reload in 
particular, it always felt to me like the answer was "keep tweaking the 
target definition files until the test cases stop breaking".  That isn't how it 
should be.  

Perhaps some of these issues come from out-of-the-ordinary target restrictions.
The autoinc/autodec case we're discussing may be an example of that.  The one 
I remember in particular was the pdp11 float instructions, where I have 6 
registers but only 4 of these can be loaded from or stored to memory.  Putting 
the other two to work while having spill to memory work right took quite a lot 
of iteration.

It may be LRA is better in these areas.  I haven't spent much time with that, 
other than to create a way to enable its use and observing that (a) I got about 
the same test suite numbers either way and (b) the LRA code was not as good in 
some of the cases.

paul



Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 2:03 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> On 10/14/22 11:35, Segher Boessenkool wrote:
>> On Fri, Oct 14, 2022 at 11:07:43AM -0600, Jeff Law wrote:
 LRA only ever generates insns that pass recog.  The backend allows this
 define_insn, requiring it to be split (it returns template "#"), but
 then somehow it doesn't match in any split pass?
>>> Nope.  The elimination code will just change one register without
>>> re-recognizing.  That's precisely what happens here.
>> That is a big oversight then.  Please file a PR?
> 
> Sure.  But just recognizing (for this particular case) will just move the 
> fault from a failure to split to a failure to recognize. From my wanderings 
> in the elimination code, I don't see that it has a path that would allow it 
> to reasonably handle this case -- ie, if the insn does not recognize, what 
> then?   Conceptually we need to generate an input-reload but I don't see a 
> way to do that in the elimination code.  Maybe Vlad knows how it ought to be 
> handled.

I probably have too simplistic a view of this, but the way I think of it is 
that LRA (and reload) make decisions subject to constraints, and among those 
constraints are the ones specified in the MD file patterns.  That to me means 
that a substitution proposed to be made by the LRA code is subject to those 
invariants: it can't do that if the constraints say "no" and must then consider 
some other alternative.

paul




Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 1:10 PM, Jeff Law  wrote:
> 
> On 10/14/22 10:37, Koning, Paul wrote:
>> 
>>> ...
>>> But that approach falls down with reload/lra doing substitutions without 
>>> validating the result.  I guess it might be possible to cobble together 
>>> something with secondary reloads, but it's way way way down on my todo list.
>> Aren't the constraints enforced?  My experience is that I was getting these 
>> bad addressing modes in some test programs, and that the constraints I 
>> created to make the requirement explicit cured that.  Maybe I'm expecting 
>> too much from constraints, but my (admittedly inexperienced) understanding 
>> of them is that they inform reload what sort of things it can construct, and 
>> what it cannot.
> 
> It's not really a constraint issue -- the pattern's condition would cause 
> this not to recognize, but LRA doesn't re-recognize the insn.  We might be 
> able to hack something in the constraints to force a reload of the source 
> operand in this case.   Ugly, but a possibility.

I find it hard to cope with constraints that don't constrain.  Minimally it 
should be clearly documented exactly what cases fail to obey the constraints 
and what a target writer can do to deal with those failures.

As it stands, I find myself working hard to write MD code that accurately 
describes the rules of the machine, and for the core machinery to disregard 
those instructions is painful.

Is there a compelling argument for every case where LRA fails to obey the 
constraints?  If not, can they just be called bugs and added to the to-be-fixed 
queue?

paul



Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 12:18 PM, Segher Boessenkool  
> wrote:
> 
> On Fri, Oct 14, 2022 at 12:36:47AM +, Koning, Paul wrote:
>> I guess I'll have to look harder to see if it's possible to make LRA handle 
>> CISC addressing modes like memory indirect, autoincrement, autodecrement, 
>> and others that the old reload handles at least somewhat.  Ideally LRA 
>> should do a better job; right now I believe it doesn't really do these 
>> things at all.  Targets like pdp11 and vax would like these.
> 
> So what does it do now?  Break every more complex addressing mode apart
> again?  Or ICE?  Or something in between?

The former.  LRA does handle some cases but not all that the target permits and 
not as many as the old reload.

Example:

unsigned int cksum (unsigned int *buf, unsigned int len)
{
    unsigned int ret = 0;
    do {
        ret += *buf++;
    } while (--len != 0);
    return ret;
}

The loop looks like this:

L_2:
add (r2)+,r0
sob r1,L_2

which is what I would expect.  Now throw in an indirection:

Old reload produces this loop:

L_2:
add @(r2)+,r0
sob r1,L_2

while LRA doesn't understand it can use the autoincrement indirect mode:

L_2:
mov (r2)+,r3
add (r3),r0
sob r1,L_2

This is from a GCC 13.0 test build of last June, with -O2 -m45, with and 
without -mlra.

paul



Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 14, 2022, at 10:38 AM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> On 10/14/22 06:37, Koning, Paul wrote:
>> 
>>> On Oct 13, 2022, at 9:07 PM, Jeff Law via Gcc-patches 
>>>  wrote:
>>> 
>>> 
>>> On 10/13/22 17:56, Segher Boessenkool wrote:
 h8300 fails during GCC build:
 /home/segher/src/gcc/libgcc/unwind.inc: In function 
 '_Unwind_SjLj_RaiseException':
 /home/segher/src/gcc/libgcc/unwind.inc:141:1: error: could not split insn
   141 | }
   | ^
 (insn 69 256 327 (set (mem/f:SI (pre_dec:SI (reg/f:SI 7 sp)) [12  S4 A32])
 (reg/f:SI 7 sp)) "/home/segher/src/gcc/libgcc/unwind.inc":118:12 
 19 {*movsi}
  (expr_list:REG_ARGS_SIZE (const_int 4 [0x4])
 (nil)))
 during RTL pass: final
 which looks like a backend bug, I don't see a pattern that could split
 this (without needing an extra clobber)?
>>> I'm aware of this -- it's invalid RTL:
>>> 
>>> Uses of the register outside of an address are not permitted within the
>>> same insn as a use in an embedded side effect expression because such
>>> insns behave differently on different machines and hence must be treated
>>> as ambiguous and disallowed.
>> I had a bit of a fight with this sort of thing in pdp11, where in fact such 
>> operations are executed differently on different machine models.  The 
>> solution I picked is to create two sets of machine-specific constraint 
>> codes, one for "register N" and the other for "autoinc/dec of any register 
>> other than N" and pairing those.  (You can see this in pdp11.md, the 
>> mov definition.)
> 
> I've long suspected the pdp11 was the inspiration for this restriction (I 
> have memories of noting it before I relocated to Utah, so circa 1992).  The 
> key problem is the generic parts of the compiler don't know what the 
> semantics ought to be -- so it's not obvious when they do a substitution 
> whether or not the substitution of one reg for another is actually valid.  
> It's important to remember that sometimes when we substitute one register for 
> another, we don't have any contextual information about source vs dest -- 
> it's a long standing wart that causes problems in other cases as well.
> 
> That punts the problem to the backends and the H8 actually tries to deal with 
> this restriction.  Basically in the movxx pattern conditions, when the 
> destination uses an autoinc addressing mode, the pattern's condition will 
> check that the source register is different.  I would expect other ports 
> likely to do something similar.
> 
> But that approach falls down with reload/lra doing substitutions without 
> validating the result.  I guess it might be possible to cobble together 
> something with secondary reloads, but it's way way way down on my todo list.

Aren't the constraints enforced?  My experience is that I was getting these bad 
addressing modes in some test programs, and that the constraints I created to 
make the requirement explicit cured that.  Maybe I'm expecting too much from 
constraints, but my (admittedly inexperienced) understanding of them is that 
they inform reload what sort of things it can construct, and what it cannot.

If reload obeys the constraints in the patterns then the back end machine 
definition can be written to avoid the problematic cases, and it is no longer 
necessary to have a general (and as I pointed out, overly broad) rule in 
generic code.

paul



Re: [PATCH] Always enable LRA

2022-10-14 Thread Koning, Paul via Gcc-patches



> On Oct 13, 2022, at 9:07 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> On 10/13/22 17:56, Segher Boessenkool wrote:
>> 
>> h8300 fails during GCC build:
>> /home/segher/src/gcc/libgcc/unwind.inc: In function 
>> '_Unwind_SjLj_RaiseException':
>> /home/segher/src/gcc/libgcc/unwind.inc:141:1: error: could not split insn
>>   141 | }
>>   | ^
>> (insn 69 256 327 (set (mem/f:SI (pre_dec:SI (reg/f:SI 7 sp)) [12  S4 A32])
>> (reg/f:SI 7 sp)) "/home/segher/src/gcc/libgcc/unwind.inc":118:12 19 
>> {*movsi}
>>  (expr_list:REG_ARGS_SIZE (const_int 4 [0x4])
>> (nil)))
>> during RTL pass: final
>> which looks like a backend bug, I don't see a pattern that could split
>> this (without needing an extra clobber)?
> 
> I'm aware of this -- it's invalid RTL:
> 
> Uses of the register outside of an address are not permitted within the
> same insn as a use in an embedded side effect expression because such
> insns behave differently on different machines and hence must be treated
> as ambiguous and disallowed.

I had a bit of a fight with this sort of thing in pdp11, where in fact such 
operations are executed differently on different machine models.  The solution 
I picked is to create two sets of machine-specific constraint codes, one for 
"register N" and the other for "autoinc/dec of any register other than N" and 
pairing those.  (You can see this in pdp11.md, the mov definition.)

But the pdp11 case is actually not as restrictive as the rule you mentioned.  
The problem case is register N source, autoinc/dec rN destination.  The 
opposite case, which we see here -- autoinc/dec Rn source, Rn destination -- is 
just fine.  Perhaps not all that important, but the ISA definition does not 
object to it.  So I'm not sure why there would be a general rule that says it's 
considered ambiguous when the target machine architecture says it is not.

paul




Re: [PATCH] Always enable LRA

2022-10-13 Thread Koning, Paul via Gcc-patches



> On Oct 13, 2022, at 7:56 PM, Segher Boessenkool  
> wrote:
> 
> This small patch changes everything that checks targetm.lra_p behave as
> if it returned true.
> 
> It has no effect on any primary or secondary target.  It also is fine
> for nds32 and for nios2, and it works fine for microblaze (which used
> old reload before), resulting in smaller code.
> 
> I have patches to completely rip out old reload, and more stuff after
> that, but of course not everything is nice yet:

I guess I'll have to look harder to see if it's possible to make LRA handle 
CISC addressing modes like memory indirect, autoincrement, autodecrement, and 
others that the old reload handles at least somewhat.  Ideally LRA should do a 
better job; right now I believe it doesn't really do these things at all.  
Targets like pdp11 and vax would like these.

paul




Re: [PATCH V2] place `const volatile' objects in read-only sections

2022-09-28 Thread Koning, Paul via Gcc-patches



> On Sep 27, 2022, at 8:51 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> On 8/5/22 05:41, Jose E. Marchesi via Gcc-patches wrote:
>> [Changes from V1:
>> - Added a test.]
>> 
>> It is common for C BPF programs to use variables that are implicitly
>> set by the BPF loader and run-time.  It is also necessary for these
>> variables to be stored in read-only storage so the BPF verifier
>> recognizes them as such.  This leads to declarations using both
>> `const' and `volatile' qualifiers, like this:
>> 
>>   const volatile unsigned char is_allow_list = 0;
>> 
>> Where `volatile' is used to keep the compiler from optimizing out the
>> variable, or turning it into a constant, and `const' to make sure it is
>> placed in .rodata.
>> 
>> Now, it happens that:
>> 
>> - GCC places `const volatile' objects in the .data section, under the
>>   assumption that `volatile' somehow voids the `const'.
>> 
>> - LLVM places `const volatile' objects in .rodata, under the
>>   assumption that `volatile' is orthogonal to `const'.
>> ...
> 
> The best use I've heard for const volatile is stuff like hardware status 
> registers which are readonly from the standpoint of the compiler, but which 
> are changed by the hardware.   But for those, we're looking for the const to 
> trigger compiler diagnostics if we try to write the value.  The volatile (of 
> course) indicates the value changes behind our back.

I'd go a bit further and say that this is the only use of const volatile that 
makes any sense.

> What you're trying to do seems to parallel that case reasonably well for the 
> volatile aspect.  You want to force the compiler to read the data for every 
> access.
> 
> Your need for the const is a bit different.  Instead of looking to get a 
> diagnostic out of the compiler if its modified, you need the data to live in 
> .rodata so the BPF verifier knows the compiler/code won't change the value.  
> Presumably the BPF verifier can't read debug info to determine the const-ness.
> 
> I'm not keen on the behavior change, but nobody else is stepping in to review 
> and I don't have a strong case to reject.  So OK for the trunk.

A const volatile that sits in memory feels like a programmer error.  Instead of 
worrying about how it's handled, would it not make more sense to tag it with a 
warning?

paul



Re: [PATCH] Handle > INF and < INF correctly in range-op-float.cc

2022-09-06 Thread Koning, Paul via Gcc-patches



> On Sep 6, 2022, at 8:06 AM, Jakub Jelinek via Gcc-patches 
>  wrote:
> 
> On Tue, Sep 06, 2022 at 01:47:43PM +0200, Aldy Hernandez wrote:
>> Question...for !HONOR_NANS or !HONOR_INFINITIES or whatever, say the
>> range for the domain is [-MIN, +MAX] for the min and max representable
>> numbers.  What happens for MAX+1?  Is that undefined?  I wonder what
>> real.cc does for that.
> 
> I'm afraid I have no idea.
> 
> The formats without Inf/NaN are:
> spu_single_format
> vax_{f,d,g}_format
> arm_half_format

Actually, DEC (VAX and PDP-11) float does have NaN; signaling NaN to be precise.

paul



Re: C++: add -std={c,gnu}++{current,future}

2022-08-30 Thread Koning, Paul via Gcc-patches



> On Aug 30, 2022, at 9:22 AM, Jason Merrill via Gcc-patches 
>  wrote:
> 
> On 7/13/22 15:29, Nathan Sidwell wrote:
>> Inspired by a user question.  Jason, thoughts?
>> Since C++ is such a moving target, Microsoft have /std:c++latest
>> (AFAICT clang does not), to select the currently implemented version
>> of the working paper.  But the use of 'std:latest' is somewhat
>> ambiguous -- the current std is C++20 -- that's the latest std, the
>> next std will more than likely but not necessarily be C++23.  So this
>> adds:
>>   -std=c++current -- the current std (c++20)
>>   -std=c++future -- the working paper (c++2b)
>> also adds gnu++current and gnu++future to select the gnu-extended
>> variants.
> 
> I like this direction.
> 
> I imagine people using these to mean roughly beta and alpha, respectively.
> 
> Perhaps we also want -std=c++stable, which would currently be equivalent to 
> the default (c++17) but might not always be.
> 
> Jason

I'm not so sure.

In general, switches have a fixed meaning.  These do not.  "Current" has an 
intuitive meaning, but the actual meaning is "whatever was current when the 
version you happen to be invoking was released".  "Future" is like that only 
stranger.

If I create a software package that uses one of these switches in its Makefile, 
what will happen?  In a few years, perhaps sooner, the outcome will change 
without any changes to my code.  The answer would be "don't use those 
switches", and that is a good answer, but if so why add these switches?

paul



Re: [PATCH] [ranger] x == -0.0 does not mean we can replace x with -0.0

2022-08-29 Thread Koning, Paul via Gcc-patches



> On Aug 29, 2022, at 1:07 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> ...
> I guess we could do specialization based on the input range.  So rather than 
> calling "sin" we could call a special one that didn't have the reduction step 
> when we know the input value is in a sensible range.

There's some precedent for that, though for a somewhat different reason: 
functions like "log1p".  And in fact, it would make sense for the optimizer to 
transform log calls into log1p calls when the range is known to be right for 
doing so.

paul



Re: [PATCH, libgomp] Fix chunk_size<1 for dynamic schedule

2022-08-04 Thread Koning, Paul via Gcc-patches



> On Aug 4, 2022, at 9:17 AM, Chung-Lin Tang  wrote:
> 
> On 2022/6/28 10:06 PM, Jakub Jelinek wrote:
>> On Thu, Jun 23, 2022 at 11:47:59PM +0800, Chung-Lin Tang wrote:
>>> with the way that chunk_size < 1 is handled for gomp_iter_dynamic_next:
>>> 
>>> (1) chunk_size <= -1: wraps into large unsigned value, seems to work though.
>>> (2) chunk_size == 0:  infinite loop
>>> 
>>> The (2) behavior is obviously not desired. This patch fixes this by changing
>> Why?  It is a user error, undefined behavior, we shouldn't slow down valid
>> code for users who don't bother reading the standard.
> 
> This is loop init code, not per-iteration. The overhead really isn't that 
> much.
> 
> The question should be, if GCC having infinite loop behavior is reasonable,
> even if it is undefined in the spec.

I wouldn't think so.  The way I see "undefined code" is that you can't complain 
about "wrong code" produced by the compiler.  But for the compiler to 
malfunction on wrong input is an entirely different matter.  For one thing, 
it's hard to fix your code if the compiler fails.  How would you locate the 
offending source line?

paul




Re: [PATCH] Canonicalize X&-Y as X*Y in match.pd when Y is [0,1].

2022-05-25 Thread Koning, Paul via Gcc-patches



> On May 25, 2022, at 10:39 AM, Roger Sayle  wrote:
> 
>> On May 25, 2022, at 7:34 AM, Richard Biener via Gcc-patches 
>>  wrote:
>> 
>>> On Tue, May 24, 2022 at 3:55 PM Roger Sayle 
>>>  wrote:
>>> 
>>>> "For every pessimization, there's an equal and opposite optimization".
>>>> 
>>>> In the review of my original patch for PR middle-end/98865, Richard
>>>> Biener pointed out that match.pd shouldn't be transforming X*Y into
>>>> X&-Y as the former is considered cheaper by tree-ssa's cost model
>>>> (operator count).  A corollary of this is that we should instead be
>>>> transforming X&-Y into the cheaper X*Y as a preferred canonical form
>>>> (especially as RTL expansion now intelligently selects the
>>>> appropriate implementation based on the target's costs).
>>>> 
>>>> With this patch we now generate identical code for:
>>>> int foo(int x, int y) { return -(x&1) & y; }
>>>> int bar(int x, int y) { return (x&1) * y; }
>> 
>> What, if anything, does the target description have to do for "the 
>> appropriate implementation" to be selected?  For example, if the target 
>> has an "AND with complement" operation, it's probably cheaper than 
>> multiply and would be the preferred generated code.
> 
> RTL expansion will use an AND and NEG instruction pair if that's cheaper
> than the cost of a MULT or a synth_mult sequence.  Even without the
> backend providing an rtx_costs function, GCC will default to AND and NEG
> having COSTS_N_INSNS(1), and MULT having COSTS_N_INSNS(4).
> But consider the case where y is cloned/inlined/CSE'd to have the
> value 2, in which case (on many targets) the LSHIFT is cheaper than
> an AND and a NEG.
> 
> Alas, I don't believe the existence of ANDN, such as with BMI or SSE, has
> any impact on the decision, as this is NEG;AND not NOT;AND.  If you
> know of any target that has an "AND with negation" instruction, I'll
> probably need to tweak RTL expansion to check for that explicitly.

I don't know of one either (in the two's complement world); I misread the minus 
as tilde in the "before".  Sorry about the mixup.

paul




Re: [PATCH] Canonicalize X&-Y as X*Y in match.pd when Y is [0,1].

2022-05-25 Thread Koning, Paul via Gcc-patches



> On May 25, 2022, at 7:34 AM, Richard Biener via Gcc-patches 
>  wrote:
> 
> On Tue, May 24, 2022 at 3:55 PM Roger Sayle  
> wrote:
>> 
>> 
>> "For every pessimization, there's an equal and opposite optimization".
>> 
>> In the review of my original patch for PR middle-end/98865, Richard
>> Biener pointed out that match.pd shouldn't be transforming X*Y into
>> X&-Y as the former is considered cheaper by tree-ssa's cost model
>> (operator count).  A corollary of this is that we should instead be
>> transforming X&-Y into the cheaper X*Y as a preferred canonical form
>> (especially as RTL expansion now intelligently selects the appropriate
>> implementation based on the target's costs).
>> 
>> With this patch we now generate identical code for:
>> int foo(int x, int y) { return -(x&1) & y; }
>> int bar(int x, int y) { return (x&1) * y; }

What, if anything, does the target description have to do for "the appropriate 
implementation" to be selected?  For example, if the target has an "AND with 
complement" operation, it's probably cheaper than multiply and would be the 
preferred generated code.

paul




Re: [PATCH] libstdc++: Update documentation about copyright and GPL notices in tests

2022-04-28 Thread Koning, Paul via Gcc-patches



> On Apr 28, 2022, at 8:37 AM, Jonathan Wakely via Gcc-patches 
>  wrote:
> 
> I intend to commit this patch soon. This isn't changing the policy, just
> adjusting the docs to match the current policy.
> 
> I'm open to suggestions for better ways to phrase the second sentence,
> clarifying that our tests generally have nothing novel or "authored".
> 
> -- >8 --
> 
> There is no need to require FSF copyright for tests that are just
> "self-evident" ways to check the API and behaviour of the library.
> This is consistent with tests for the compiler, which do not have
> copyright and licence notices either.

So is the theory that "self-evident" documents are in the public domain for 
that reason?  Or is the policy that for such a file it is fine for the copyright 
to be held by the author (which is the default when no assignment is made)?  
And a similar question applies to the license aspect also.

I think I understand the intent, and that seems to make sense, but I'm 
wondering if it has been verified by the appropriate FSF IP lawyers.

paul



Re: [RFC] Remove default option -fpie for projects that use -T linker options

2022-04-04 Thread Koning, Paul via Gcc-patches
I'm not sure if it is valid to assume that a linker script "usually" specifies 
a fixed memory location.  

paul


> On Apr 4, 2022, at 11:06 AM, Carlos Bilbao via Gcc-patches 
>  wrote:
> 
> Projects that rely on a linker script usually specify a memory location 
> where the executable should be placed. This directly contradicts the 
> default option -fpie for position independent executables. In fact, using
> PIE generates a GOT, which might be undesirable for developers that need
> control over the generated sections.
> 
> Would it be positive to assume -fno-pie on these situations?
> 
> Signed-off-by: Carlos Bilbao 



Re: [committed] libstdc++: Support VAX floats in std::strong_order

2022-03-10 Thread Koning, Paul via Gcc-patches



> On Mar 10, 2022, at 9:27 AM, Jonathan Wakely via Gcc-patches 
>  wrote:
> 
> On Thu, 10 Mar 2022 at 12:16, Jonathan Wakely wrote:
>> 
>> On Thu, 10 Mar 2022 at 11:53, Jonathan Wakely via Libstdc++
>>  wrote:
>>> 
>>> Tested x86_64-linux, and basic soundness check on vax-dec-netbsdelf.
>> 
>> But apparently not enough of a soundness check, because
>> isnan(__builtin_nan("")) is true for VAX, so GCC seems to have a NaN
>> pattern, despite what I read online about the format.

VAX float has signalling NaN, but not a non-signalling NaN nor an Inf.  See the 
VAX architecture manual.  Signalling NaN (called "reserved operand") is encoded 
as sign=1 and exponent=0.

paul



Re: [PATCH] ira: Fix old-reload targets [PR103974]

2022-01-12 Thread Koning, Paul via Gcc-patches



> On Jan 12, 2022, at 1:13 PM, Hans-Peter Nilsson via Gcc-patches 
>  wrote:
> 
>> ...
> I recall comments about code quality regressions.  Are there
> actual numbers?  (Preferably from around the transition
> time, because I bet targets still supporting "-mlra" have
> regressed on the reload side since then.)

I haven't looked in a while, but it is certainly the case that the -mlra code 
out of pdp11 is not as good as that coming out of the old reload.  My 
understanding is that LRA isn't as friendly to memory-centric targets like 
pdp11 (and vax?).  In particular, from what I understood there is no support, 
or at least no significant support, for the pre-decrement and post-increment 
register indirect references that those targets like so much.

There was some suggestion along the lines of "please feel free to add it to 
LRA" but that's a seriously hairy undertaking for a programmer with no current 
knowledge of LRA at all.

 people who did porting to LRA.
>>> So in theory it might be just pulling the switch for some?  That is,
>>> removing their definition of TARGET_LRA_P which then defaults
>>> to true?
>>> 
>>> Jeff might be able to test this for (all) targets on his harness.
>> Given a patch, it's trivial do throw it in and see what the fallout is.
> 
> Again there's talk about LRA and comparing it to CC0, so
> again I'll remind of the lack of documentation for LRA (in
> contrast to CC0).  I'm not just referring to guides to use
> for switching over a target to LRA, but sure, that'll help
> too.
> 
> For starters, for each constraint and register-class macro
> and hook, what's the difference between reload and LRA;
> which ones are unused and which ones are new?


What I found interesting is that apparently, to first approximation, supporting 
LRA amounted to "just turn it on".  I think there were some issues to fix in 
register classes or constraints, more along the lines of "these are things you 
should not do for either system but the old reload usually lets you get away 
with it".  So those were handled by cleaning up the issue in question 
generally, not as an LRA-specific change.

Compared to the CC0 work, the effort was vastly smaller: that was a major 
rewrite of the back end vs. a few small changes in a few spots.  But it's quite possible that 
the true picture is different and that there should be more changes.  And yes, 
there really should be documentation saying so.  GCCint has traditionally been 
quite excellent; it would be distressing if the creation of new technology like 
LRA causes it to regress.

paul



Re: [PATCH] Fix spelling of ones' complement.

2021-11-16 Thread Koning, Paul via Gcc-patches



> On Nov 16, 2021, at 4:19 PM, Marek Polacek via Gcc-patches 
>  wrote:
> 
> On Tue, Nov 16, 2021 at 01:09:15PM -0800, Mike Stump via Gcc-patches wrote:
>> On Nov 15, 2021, at 5:48 PM, Marek Polacek via Gcc-patches 
>>  wrote:
>>> 
>>> Nitpicking time.  It's spelled "ones' complement" rather than "one's
>>> complement".  I didn't go into config/.
>>> 
>>> Ok for trunk?
>> 
>> So, is it two's complement or twos' complement then?  Seems like it should 
>> be the same, but  wikipedia suggests it is two's complement, as does google. 
>>  If that is wrong, you should go edit it as well.  :-)
> 
> It is "two's complement":
> https://gcc.gnu.org/pipermail/gcc-patches/2021-November/584543.html
> but Knuth also continues to say that there's "twos' complement notation",
> which "has radix 3 and complementation with respect to (2...22)_3."
> 
> 
> It's not lost on me how inconsequential this patch is; I'm happy to just
> drop it and let the copy editor in me sleep.
> 
> Marek

To me it isn't so much a question of copy editing, but rather the fact that 
there clearly are two spellings; if anything, the one in the current text is 
the common one, and the Knuth one is found less often (perhaps only in Knuth).  
My answer is to go fix Wikipedia, if possible.

paul




Re: [PATCH] Fix spelling of ones' complement.

2021-11-16 Thread Koning, Paul via Gcc-patches



> On Nov 16, 2021, at 2:03 AM, Aldy Hernandez via Gcc-patches 
>  wrote:
> 
> On Tue, Nov 16, 2021, 03:20 Marek Polacek via Gcc-patches <
> gcc-patches@gcc.gnu.org> wrote:
> 
>> On Tue, Nov 16, 2021 at 02:01:47AM +, Koning, Paul via Gcc-patches
>> wrote:
>>> 
>>> 
>>>> On Nov 15, 2021, at 8:48 PM, Marek Polacek via Gcc-patches <
>> gcc-patches@gcc.gnu.org> wrote:
>>>> 
>>>> Nitpicking time.  It's spelled "ones' complement" rather than "one's
>>>> complement".
>>> 
>>> Is that so?  I see Wikipedia claims it is, but there are no sources for
>> that claim.  (There is an assertion that it is "discussed at length on the
>> talk page" of an article about number representation, but in fact there is
>> no discussion there at all.)
>>> 
>>> I have never seen this spelling before, and I very much doubt its
>> validity.  For one thing, why then have "two's complement"?  For another,
>> to pick one random authority, J.E. Thornton in "Design of a computer -- the
>> Control Data 6600" refers to "one's complement" to describe the well known
>> mode used by that machine and its relatives.
>> 
>> Knuth, The Art of Computer Programming Volume 2, page 203-4:
>> 
>> "A two's complement number is complemented with respect to a single
>> power of 2, while a ones' complement number is complemented with respect
>> to a long sequence of 1s."
>> 
> 
> I think you get to do a drop mike when you pull out Knuth.
> 
> :-)

If that were the only source, sure.  But with authoritative sources for both 
terms (with the ones I quoted being the earlier ones) at the very least there 
is an argument that both terms are used.  

Some more: DEC PDP-1 handbook (April 1960), page 9: "Negative numbers are 
represented as the 1's complement of the positive numbers."

Univac 1107 CPU manual, page 2-6: "Next, the adder subtracts the one's 
complement..."

CDC 160 programming manual (1963), page 2-1: "All arithmetic is binary, one's 
complement notation".

Incidentally, these are four of the five machines cited by the Wikipedia 
article.

Re: [PATCH] Fix spelling of ones' complement.

2021-11-15 Thread Koning, Paul via Gcc-patches



> On Nov 15, 2021, at 8:48 PM, Marek Polacek via Gcc-patches 
>  wrote:
> 
> Nitpicking time.  It's spelled "ones' complement" rather than "one's
> complement". 

Is that so?  I see Wikipedia claims it is, but there are no sources for that 
claim.  (There is an assertion that it is "discussed at length on the talk 
page" of an article about number representation, but in fact there is no 
discussion there at all.)

I have never seen this spelling before, and I very much doubt its validity.  
For one thing, why then have "two's complement"?  For another, to pick one 
random authority, J.E. Thornton in "Design of a computer -- the Control Data 
6600" refers to "one's complement" to describe the well known mode used by that 
machine and its relatives.

paul




Re: [PATCH] Always default to DWARF2_DEBUG if not specified, warn about deprecated STABS

2021-09-28 Thread Koning, Paul via Gcc-patches



> On Sep 28, 2021, at 2:14 AM, Richard Biener via Gcc-patches 
>  wrote:
> 
> On Tue, Sep 21, 2021 at 4:26 PM Richard Biener via Gcc-patches
>  wrote:
>> 
>> This makes defaults.h choose DWARF2_DEBUG if PREFERRED_DEBUGGING_TYPE
>> is not specified by the target and errors out if DWARF is not supported.
>> 
>> ...
>> 
>> This completes the series of deprecating STABS for GCC 12.
>> 
>> Bootstrapped and tested on x86_64-unknown-linux-gnu.
>> 
>> OK for trunk?
> 
> Ping.

pdp11 is fine.

paul



Re: [PATCH][v2] Always default to DWARF2_DEBUG if not specified, warn about deprecated STABS

2021-09-16 Thread Koning, Paul via Gcc-patches



> On Sep 16, 2021, at 11:05 AM, Jeff Law  wrote:
> 
> 
> On 9/16/2021 1:41 AM, Richard Biener wrote:
>> ...
>> That said - yes, I'd consider a.out purely legacy and not fit
>> for the future.  But it never came up on the radar of standing
>> in the way of modernizing GCC in any area.
> I'd definitely consider a.out & SOM as purely legacy.  As long as they 
> continue to work, great, but I wouldn't make any significant investment in 
> either.  And yes, there are mechanisms in collect2 to support things like 
> global initializers/finalizers on a.out systems.

"Legacy" sounds fine.  My main concern was whether it was, or is likely to 
become soon, "deprecated" or "unsupported".  For an old platform to use legacy 
formats is perfectly sensible, for it to use deprecated mechanisms is not.

For this to work, if there are no supported debug formats for the object format 
in question -- which will be the case for a.out with STABS going away -- that 
would mean you'd get output without debug symbols.  There was a suggestion that 
this wouldn't be allowed and that it would be grounds for removing such 
platforms.  I'd rather not see things tied like that.

paul



Re: [PATCH][v2] Always default to DWARF2_DEBUG if not specified, warn about deprecated STABS

2021-09-15 Thread Koning, Paul via Gcc-patches



> On Sep 13, 2021, at 3:31 AM, Richard Biener  wrote:
> 
> This makes defaults.h choose DWARF2_DEBUG if PREFERRED_DEBUGGING_TYPE
> is not specified by the target and NO_DEBUG if DWARF is not supported.

As I'm looking at questions about old debug formats, it brings up the question 
of old object formats.  I don't remember what the status of a.out is.  Is that 
considered deprecated?  Still current?  Of course most targets use elf, but is 
there an expectation to move away from a.out the way there is an expectation to 
move away from STABS?

Is this actually a binutils rather than a gcc question?

paul



Re: [PATCH][v2] Always default to DWARF2_DEBUG if not specified, warn about deprecated STABS

2021-09-15 Thread Koning, Paul via Gcc-patches



> On Sep 15, 2021, at 11:55 AM, John David Anglin  wrote:
> 
> On 2021-09-15 10:06 a.m., Richard Biener wrote:
>>> Is there a simple way to enable -gstabs in build?
>> Currently not.  If we're retaining more than pdp11 with a non-DWARF
>> config I'm considering to allow STABS by default for those without
>> diagnostics for GCC 12.
>> 
>> With GCC 13 we'll definitely either remove the configurations or
>> leave the target without any support for debug info.
> I tend to think targets without any support for debug information should be 
> removed.  There is
> some time before GCC 13.  This provides a chance for the target to implement 
> DWARF support.

I suppose.  But for pdp11 at least, DWARF and ELF are both somewhat unnatural 
and anachronistic.  PDP11 unixes use much older debug formats, and DEC 
operating systems are more primitive still (no debug symbols at all, of any 
kind).  So for that case at least, supporting the target but without debug 
symbols would not be a crazy option.

Of course, it would be neat to be able to debug PDP-11 code with GDB...

paul




Re: [PATCH][v2] Always default to DWARF2_DEBUG if not specified, warn about deprecated STABS

2021-09-13 Thread Koning, Paul via Gcc-patches



> On Sep 13, 2021, at 3:31 AM, Richard Biener  wrote:
> 
> This makes defaults.h choose DWARF2_DEBUG if PREFERRED_DEBUGGING_TYPE
> is not specified by the target and NO_DEBUG if DWARF is not supported.
> 
> It also makes us warn when STABS is enabled and removes the corresponding
> diagnostic from the Ada frontend.  The warnings are pruned from the
> testsuite output via prune_gcc_output.
> 
> This leaves the following targets without debug support:
> 
> pdp11-*-*   pdp11 is a.out, dwarf support is difficult

I'll admit that I don't know much about debug formats.  It is definitely the 
case that pdp11 output is a.out (it may be BSD 2.x style a.out -- which I think 
is somewhat different though it's been many years since I looked at that, and 
then only briefly).  I guess that constrains which debug formats can be used, 
but I don't know any details.

pdp11-elf was done as an experiment by someone else, in binutils.  I'll ask 
about the status of that.  If it's possible to deliver that, it would 
presumably enable DWARF support.  Is that all common code so it's a matter of 
enabling it, or would "DWARF machine details for pdp11" have to be defined?

paul




Re: [PATCH] warn for more impossible null pointer tests

2021-09-01 Thread Koning, Paul via Gcc-patches



> On Sep 1, 2021, at 3:35 PM, Iain Sandoe  wrote:
> 
> 
> Hi Paul,
> 
>> ...
>> If so, then I would think that ignoring it for this patch as well is 
>> reasonable.  If in a given target a pointer that C thinks of as NULL is in 
>> fact a valid object pointer, then all sorts of optimizations are incorrect.  
>> If the target really cares, it can use a different representation for the 
>> null pointer.  (Does GCC give us a way to do that?)  For example, pdp11 
>> could use the all-ones bit pattern to represent an invalid pointer.
> 
> regardless of whether GCC supports it or not - trying to use a non-0 NULL 
> pointer is likely to break massive amounts of code in the wild.

It depends on what you mean by "non-0 NULL pointer".  The constant written as 0 
in pointer context doesn't represent the all-zeroes bit pattern but rather 
whatever is a null pointer on that target.  Most code would not notice that.  
The two places I can think of where this would break is (a) if you cast a 
pointer to int or look at it via a pointer/int union and expect to see integer 
zero, and (b) if you initialize pointers by using bzero.  The former seems 
rather unlikely, the latter is somewhat common.  Can GCC detect the bzero case?  
It would make a good check for -Wpedantic on the usual platforms that use all 
zero bits as NULL.

> It might, OTOH, be possible to use a non-0 special value to represent the 
> valid 0 address-use (providing that there is somewhere in the address space 
> you can steal that from).

That would be really ugly, because every pointer reference would have to do the 
address translation at run time.

paul



Re: [PATCH] warn for more impossible null pointer tests

2021-09-01 Thread Koning, Paul via Gcc-patches



> On Sep 1, 2021, at 3:08 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> 
> 
> On 9/1/2021 12:57 PM, Koning, Paul wrote:
>> 
>>> On Sep 1, 2021, at 1:35 PM, Jeff Law via Gcc-patches 
>>>  wrote:
>>> 
>>> Generally OK.  There's some C++ front-end bits that Jason ought to take a 
>>> quick looksie at.   Second, how does this interact with targets that allow 
>>> objects at address 0?   We have a few targets like that and that makes me 
>>> wonder if we should be suppressing some, if not all, of these warnings for 
>>> targets that turn on -fno-delete-null-pointer-checks?
>> But in C, the pointer constant 0 represents the null (invalid) pointer, not 
>> the actual address zero necessarily.
>> 
>> If a target supports objects at address zero, how does it represent the 
>> pointer value 0 (which we usually refer to as NULL)?  Is the issue simply 
>> ignored?  It seems to me it is in pdp11, which I would guess is one of the 
>> targets for which objects at address 0 make sense.
> The issue is ignored to the best of my knowledge.

If so, then I would think that ignoring it for this patch as well is 
reasonable.  If in a given target a pointer that C thinks of as NULL is in fact 
a valid object pointer, then all sorts of optimizations are incorrect.  If the 
target really cares, it can use a different representation for the null 
pointer.  (Does GCC give us a way to do that?)  For example, pdp11 could use 
the all-ones bit pattern to represent an invalid pointer.

paul



Re: [PATCH] warn for more impossible null pointer tests

2021-09-01 Thread Koning, Paul via Gcc-patches



> On Sep 1, 2021, at 1:35 PM, Jeff Law via Gcc-patches 
>  wrote:
> 
> Generally OK.  There's some C++ front-end bits that Jason ought to take a 
> quick looksie at.   Second, how does this interact with targets that allow 
> objects at address 0?   We have a few targets like that and that makes me 
> wonder if we should be suppressing some, if not all, of these warnings for 
> targets that turn on -fno-delete-null-pointer-checks?

But in C, the pointer constant 0 represents the null (invalid) pointer, not the 
actual address zero necessarily.

If a target supports objects at address zero, how does it represent the pointer 
value 0 (which we usually refer to as NULL)?  Is the issue simply ignored?  It 
seems to me it is in pdp11, which I would guess is one of the targets for which 
objects at address 0 make sense.

paul



Re: Benefits of using Sphinx documentation format

2021-07-12 Thread Koning, Paul via Gcc-patches


> On Jul 12, 2021, at 12:36 PM, David Malcolm via Gcc-patches 
>  wrote:
> 
> On Mon, 2021-07-12 at 15:25 +0200, Martin Liška wrote:
>> ...
> 
> I think the output formats we need to support are:
> - HTML
> - PDF
> - man page (hardly "modern", but still used)

Also info format (for the Emacs info reader).  And ebook formats (epub and/or 
mobi).  Having good quality ebook output is a major benefit in my view; it 
would be very good for the standard makefiles to offer make targets for these 
formats.

paul



Re: pdp11: Fix warnings to allow compilation with a recent GCC and --enable-werror-always

2021-06-28 Thread Koning, Paul via Gcc-patches



> On Jun 28, 2021, at 11:33 AM, Jan-Benedict Glaw  wrote:
> 
> Hi Paul!
> 
> I'd like to install this patch to let the pdp11-aout configuration
> build again with eg.
> 
> ../gcc/configure --target=pdp11-aout --enable-werror-always \
>   --enable-languages=all --disable-gcov --disable-shared \
>   --disable-threads --without-headers \
>   --prefix=/var/lib/laminar/run/gcc-pdp11-aout/5/toolchain-install
> 
> No testsuite (yet? Maybe I'd add a bit), but re-checked some Hello
> World'ish code for no changes and it still runs on a SIMH pdp11.
> ...
> Okay for master?
> 
> MfG, JBG

Yes, thanks!

The test suite "compile" section isn't completely clean yet but it mostly 
works.  Execution is more problematic at this point.

paul



Re: GCC documentation: porting to Sphinx

2021-06-11 Thread Koning, Paul via Gcc-patches



> On Jun 11, 2021, at 11:50 AM, Joseph Myers  wrote:
> 
> ...
> 
> "make" at top level should build all the info manuals and man pages, as at 
> present (if a suitable Sphinx version is installed), and "make install" 
> should install them, in the same directories as at present.
> 
> "make html" at top level should build all the HTML manuals, and "make 
> install-html" should install them.
> 
> "make pdf" and "make install-pdf" at top level should work likewise.
> 
> "make install-html" and "make install-pdf" should put things under 
> $(DESTDIR)$(htmldir) and $(DESTDIR)$(pdfdir) as at present.

And in addition, it would be nice to have additional make <format> and make 
install-<format> targets for other output formats that Sphinx can generate for 
us, at least some of them.  "epub" comes to mind as an example I would like to have.

paul



Re: RFC: Sphinx for GCC documentation

2021-06-04 Thread Koning, Paul via Gcc-patches


> On Jun 4, 2021, at 3:55 AM, Tobias Burnus  wrote:
> 
> Hello,
> 
> On 13.05.21 13:45, Martin Liška wrote:
>> On 4/1/21 3:30 PM, Martin Liška wrote:
>>> That said, I'm asking the GCC community for a green light before I
>>> invest
>>> more time on it?
>> So far, I've received just a small feedback about the transition. In
>> most cases positive.
>> 
>> [1] https://splichal.eu/scripts/sphinx/
> 
> The HTML output looks quite nice.
> 
> What I observed:
> 
> * Looking at
>  
> https://splichal.eu/scripts/sphinx/gfortran/_build/html/intrinsic-procedures/access-checks-file-access-modes.html
> why is the first argument description in bold?
> It is also not very readable to have a scollbar there – linebreaks would be 
> better.
> → I think that's because the assumption is that the first line contains a 
> header
>  and the rest the data

Explicit line breaks are likely to be wrong depending on the reader's window 
size.  I would suggest setting the table to have cells with line-wrapped 
contents.  That would typically be the default in HTML, I'm curious why that is 
not happening here.

paul




Re: [PATCH] MAINTAINERS: create DCO section; add myself to it

2021-06-02 Thread Koning, Paul via Gcc-patches



> On Jun 2, 2021, at 11:03 AM, Jason Merrill via Gcc-patches 
>  wrote:
> 
> On 6/1/21 3:22 PM, Richard Biener via Gcc wrote:
>> On June 1, 2021 7:30:54 PM GMT+02:00, David Malcolm via Gcc 
>>  wrote:
 ...
>>> 
>>> The MAINTAINERS file doesn't seem to have such a "DCO list"
>>> yet; does the following patch look like what you had in mind?
>>> 
>>> ChangeLog
>>> 
>>> * MAINTAINERS: Create DCO section; add myself to it.
>>> ---
>>> MAINTAINERS | 12 
>>> 1 file changed, 12 insertions(+)
>>> 
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index db25583b37b..1148e0915cf 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -685,3 +685,15 @@ Josef Zlomek   
>>> 
>>> James Dennett   
>>> Christian Ehrhardt  
>>> Dara Hazeghi
>>> +
>>> +
>>> +DCO
>>> +===
>>> +
>>> +Developers with commit access may add their name to the following list
>>> +to certify the DCO (https://developercertificate.org/) for all
>> There should be a verbatim copy of the DCO in this file or the repository.
> 
> It's on the website now, at gcc.gnu.org/dco.html , and I've added the section 
> to MAINTAINERS.  It's not clear to me that it needs to be in the source tree 
> as well, since it's project contribution policy rather than license.

I'm wondering about change control of this document.  The GPL has a version 
number and references to use the version number.  The DCO seems to have a 
version number, but the DCO section in the MAINTAINERS file does not give it.  
I would think that a certification should call out which DCO it uses, whether 
in a one-off (in a patch) or in the MAINTAINERS DCO list.

paul



Re: [PATCH] libstdc++: More efficient std::chrono::year::leap.

2021-05-21 Thread Koning, Paul via Gcc-patches



> On May 21, 2021, at 1:46 PM, Cassio Neri via Gcc-patches 
>  wrote:
> 
> Simple change to std::chrono::year::is_leap. If a year is multiple of 100,
> then it's divisible by 400 if and only if it's divisible by 16. The latter
> allows for better code generation.

I wonder if the optimizer could be taught to do that.

The change seems correct but it is very confusing; at the very least the 
reasoning you gave should be stated in a comment on that check.

paul




Re: [GOVERNANCE] Where to file complaints re project-maintainers?

2021-05-09 Thread Koning, Paul via Gcc-patches



> On May 9, 2021, at 11:33 AM, abebeos via Gcc-patches 
>  wrote:
> 
> Thank you for your quick response.
> 
> ...
> The Issue:
> 
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92729
> 
> The Bounty (a bit higher than $7K)
> 
> https://www.bountysource.com/issues/84630749-avr-convert-the-backend-to-mode_cc-so-it-can-be-kept-in-future-releases
> 
> The Complaint re Voting Process:
> 
> https://github.com/bountysource/core/issues/1532

I don't understand why you're coming to the GCC lists to debate this.  The 
issue you're talking about isn't a GCC issue at all.  You need to take it up 
with the group with which you have the dispute.

paul



Re: [PATCH] Remove CC0

2021-05-05 Thread Koning, Paul via Gcc-patches



> On May 5, 2021, at 8:45 AM, Segher Boessenkool  
> wrote:
> 
> Hi~
> 
> On Tue, May 04, 2021 at 04:08:22PM +0100, Richard Earnshaw wrote:
>> On 03/05/2021 23:55, Segher Boessenkool wrote:
>>> CC_STATUS_INIT is suggested in final.c to also be useful for ports that
>>> are not CC0, and at least arm seems to use it for something.  So I am
>>> leaving that alone, but most targets that have it could remove it.
>> 
>> A quick look through the code suggests it's being used for thumb1 code 
>> gen to try to reproduce the traditional CC0 type behaviour of 
>> eliminating redundant compare operations when you have sequences such as
>> 
>> cmp a, b
>> b d1
>> cmp a, b
>> b d2
>> 
>> The second compare operation can be eliminated.
>> 
>> It might be possible to eliminate this another way by reworking the 
>> thumb1 codegen to expose the condition codes after register allocation 
>> has completed (much like x86 does these days), but that would be quite a 
>> lot of work right now.  I don't know if such splitting would directly 
>> lead to the ability to remove the redundant compares - it might need a 
>> new pass to spot them.
> 
> At least on rs6000 on a simple example this is handled by fwprop1
> already.  Does that work for thumb1?  Or maybe that uses hard regs for
> the condition codes and that does not work here?
> 
> Example code:
> 
> ===
> void g(void);
> void h(void);
> void i(void);
> void f(long a, long b)
> {
>if (a < b)
>g();
>if (a == b)
>h();
>if (a > b)
>i();
> }

FWIW, that also works on pdp11, so it seems the general mechanism is in place 
and working.  Well, with one oddity, an unnecessary third conditional branch:

_f:
mov 02(sp),r1
mov 04(sp),r0
cmp r1,r0
blt L_7
beq L_4
bgt L_5
rts pc
L_5:
jsr pc,_i
rts pc
L_4:
jsr pc,_h
rts pc
L_7:
jsr pc,_g
rts pc



Re: [PATCH 0/3] VAX backend preparatory updates for switching to LRA

2021-04-22 Thread Koning, Paul via Gcc-patches



> On Apr 21, 2021, at 5:32 PM, Maciej W. Rozycki  wrote:
> 
> ...
> OTOH switching to LRA regresses code generation seriously, by making the 
> indexed and indirect VAX address modes severely underutilised, so while 
> with these changes in place the backend can be switched to LRA with just a 
> trivial to remove the redefinition of TARGET_LRA_P, I think it is not yet 
> the right time to do it.

I noticed similar issues with pdp11, which at the moment allows LRA via a -mlra 
switch but doesn't make it the default.  Another mode that isn't handled well 
(or at all) by LRA is autoincrement/autodecrement.  It would be great if all 
these things could be done better, that would help several targets.  (I wonder 
if m68k would be another; doesn't it have similar addressing modes at least on 
the 68040?)

paul




Re: move selftests into their own files?

2021-04-19 Thread Koning, Paul via Gcc-patches



> On Apr 19, 2021, at 7:26 PM, Martin Sebor via Gcc-patches 
>  wrote:
> 
> On 4/19/21 3:13 PM, Koning, Paul wrote:
>>> On Apr 19, 2021, at 4:50 PM, Martin Sebor via Gcc-patches 
>>>  wrote:
>>> 
>>> ...
>>> I was actually thinking of just #including each foo-tests.c file
>>> to bring in the code right where it is now, so this shouldn't be
>>> a problem.  Would that work for you?
>>> 
>>> Martin
>> How does that help the problem you said need to be solved?  If having self 
>> test code be part of the compilation unit makes modifying things more 
>> difficult, it doesn't matter whether that code is in the compilation unit 
>> due to being in the main source file, or due to being a #include.
> 
> The self tests make the sources bigger and so harder to move around
> in and difficult to find just the calls to tested functions made
> from elsewhere in the file or from other parts of the compiler (i.e.,
> not tests).  They are only rarely relevant when reading or changing
> the file.
> 
> Keeping them separate from the code they exercise will be helpful
> to me and I assumed to others as well.  But I wouldn't want to make
> some common tasks difficult, so if you or someone else has one that
> would be made so by it, I won't pursue it.  Do you?

No, I don't have objections.  For one thing, I don't work on files that have 
selftest in them at the moment.  I was just trying to understand better why 
using #include would help.  I understand your point; I'm not sure I'd feel the 
same way but I don't see any reason to object to your proposed approach.

paul




Re: move selftests into their own files?

2021-04-19 Thread Koning, Paul via Gcc-patches



> On Apr 19, 2021, at 4:50 PM, Martin Sebor via Gcc-patches 
>  wrote:
> 
> On 4/19/21 2:03 PM, David Malcolm wrote:
>> On Mon, 2021-04-19 at 13:47 -0600, Martin Sebor via Gcc-patches wrote:
>>> The selftests at the end of many source files are only rarely read
>>> or modified, but they contribute to the size/complexity of the files
>>> and make moving within the rest of the code more difficult.
>>> 
>> FWIW I prefer having the tests in the same file as the code they test.
>>> Would anyone be opposed to moving any of them into new files of their
>>> own? E.g., those in tree.c to tree-tests.c, etc.?  I would be happy
>>> to do this for a subset of these, with the goal of eventually moving
>>> all of them and adding new ones accordingly.
>> Having the selftests in the same source file as the thing they test
>> allows for the selftest to use "static" declarations and anonymous
>> namespaces from that file.  For example, the selftests in diagnostic-
>> show-locus.c make use of various things declared in an anonymous
>> namespace in that file.  If I had to move the selftests to a different
>> file, I'd have to expose these interfaces, which I don't want to do.
> 
> I was actually thinking of just #including each foo-tests.c file
> to bring in the code right where it is now, so this shouldn't be
> a problem.  Would that work for you?
> 
> Martin

How does that help the problem you said need to be solved?  If having self test 
code be part of the compilation unit makes modifying things more difficult, it 
doesn't matter whether that code is in the compilation unit due to being in the 
main source file, or due to being a #include.

paul



Re: A suggestion for going forward from the RMS/FSF debate

2021-04-16 Thread Koning, Paul via Gcc-patches



> On Apr 16, 2021, at 6:13 AM, Ville Voutilainen via Gcc-patches 
>  wrote:
> 
> The actual suggestion is at the end; skip straight to it if you wish.

Could you shift this discussion to the gcc list where it fits better?  
gcc-patches is for discussion patches to the code.

paul



Re: RFC: Sphinx for GCC documentation

2021-04-02 Thread Koning, Paul via Gcc-patches



> On Apr 2, 2021, at 11:40 AM, Martin Sebor via Gcc-patches 
>  wrote:
> 
> ...
> I'm not excited about changing tools.  I like that Texinfo is a GNU
> project; AFACT, Sphinx is not. 

Why is that important?  It's an open source tool, and if it is better in 
interesting ways I don't see why its affiliation should matter.

I view its support for ebook output as a major benefit, which as far as I know 
Texinfo does not offer.

paul




Re: RFC: Sphinx for GCC documentation

2021-04-01 Thread Koning, Paul via Gcc-patches


> On Apr 1, 2021, at 9:30 AM, Martin Liška  wrote:
> 
> Hey.
> 
> I've returned to the David's project and I'm willing to finish his transition 
> effort.
> I believe using Sphinx documentation can rapidly improve readability, both 
> HTML and PDF version,
> of various GCC manuals ([1]). I've spent some time working on the David's 
> texi2rsf conversion tool ([2])
> and I'm presenting a result that is not perfect, but can be acceptable after 
> a reasonable
> amount of manual modifications.
> ...
> That said, I'm asking the GCC community for a green light before I invest
> more time on it?

Looks VERY good to me.  Given what I've seen about Sphinx (now that I've 
refreshed my memory) this would be a major improvement.

paul



Re: RFC: Sphinx for GCC documentation

2021-04-01 Thread Koning, Paul via Gcc-patches


> On Apr 1, 2021, at 9:51 AM, Martin Liška  wrote:
> 
> On 4/1/21 3:42 PM, Koning, Paul wrote:
>> Can it provide EPUB or MOBI output?
> 
> Yes, [1] lists 'epub' as one of the possible "buildername" options.
> Btw. what Python project do you speak about?
> 
> Cheers,
> Martin
> 
> [1] https://www.sphinx-doc.org/en/master/man/sphinx-build.html#cmdoption-sphinx-build-b
Good to know.  I mean the project that delivers Python 
(python.org).  It delivers documentation in several forms, 
EPUB among them: https://docs.python.org/3/download.html

paul



Re: RFC: Sphinx for GCC documentation

2021-04-01 Thread Koning, Paul via Gcc-patches
Can  it provide EPUB or MOBI output?  Some of the documentation systems used in 
various open source products have that capability, and that is very nice to 
have.  I have seen this from the one used by the Python project, for example.  
Converting other formats to EPUB sometimes works tolerably well, but often not 
-- in particular, PDF to anything else is generally utterly unusable, which is 
actually a design goal of PDF.

paul

> On Apr 1, 2021, at 9:30 AM, Martin Liška  wrote:
> 
> Hey.
> 
> I've returned to the David's project and I'm willing to finish his transition 
> effort.
> I believe using Sphinx documentation can rapidly improve readability, both 
> HTML and PDF version,
> of various GCC manuals ([1]). I've spent some time working on the David's 
> texi2rsf conversion tool ([2])
> and I'm presenting a result that is not perfect, but can be acceptable after 
> a reasonable
> amount of manual modifications.
> 
> So far I focused on the 2 biggest manuals we have:
> 
> (1) User documentation
> https://splichal.eu/sphinx/
> https://splichal.eu/sphinx/gcc.pdf
> 
> (2) GCC Internals Manual
> https://splichal.eu/sphinx-internal
> https://splichal.eu/sphinx-internal/gccint.pdf
> 
> I'm aware of missing pieces (thanks Joseph) and I'm willing to finish it.
> 
> That said, I'm asking the GCC community for a green light before I invest
> more time on it?
> 
> Cheers,
> Martin
> 
> [1] https://gcc.gnu.org/onlinedocs/
> [2] https://github.com/davidmalcolm/texi2rst



Re: require et random_device for cons token test

2021-03-24 Thread Koning, Paul via Gcc-patches



> On Mar 24, 2021, at 4:59 AM, Jonathan Wakely via Gcc-patches 
>  wrote:
> 
> On 24/03/21 03:53 -0300, Alexandre Oliva wrote:
>> 
>> On target systems that don't support any random_device, not even the
>> default one,
> 
> It should be impossible to have no random_device.

Not true; deeply embedded systems might not have one.  Among GCC platforms, 
pdp11 doesn't have one, and at least some vax platforms probably don't either.

> As a fallback a
> pseudo random number generator should be used.

Presumably yes -- it seems unlikely that GCC tests depend on cryptographic 
strength of the random number generator.  If a PRNG is used then the classic 
FORTRAN "random" function would serve.

paul