Re: Signing your git commits

2024-09-16 Thread Paul Koning via Gcc



> On Sep 16, 2024, at 8:13 AM, Richard Biener via Gcc  wrote:
> 
> On Mon, Sep 16, 2024 at 1:37 PM Jonathan Wakely via Gcc  
> wrote:
>> 
>> Git supports signing commits with a GPG key, and more recently (since
>> Git 2.34) also started supporting signing with an SSH key. The latter
>> is IMHO much easier to set up, because anybody who can push to the GCC
>> repo already has an SSH key configured.
>> 
>> To start signing your git commits, just enable commit.gpgsign (which
>> also enables signing with SSH, despite the name) and tell Git where to
>> find your public key. To use SSH keys instead of GPG, set
>> gpg.format=ssh. I suggest the ssh key you sign with should be the same
>> key that you use to push to gcc.gnu.org / sourceware.org
>> 
>> i.e.
>> 
>> git config --global gpg.format ssh
>> git config user.signingKey ~/.ssh/id_your_gcc_key.pub
>> git config commit.gpgsign true
>> 
>> More info online e.g.
>> https://docs.gitlab.com/ee/user/project/repository/signed_commits/ssh.html
> 
> What is the benefit of having an SSH signature in addition to
> sourceware verifying
> the SSH key upon commit?
> 
> Richard.

I was wondering the same.  PGP/GPG has a public key infrastructure (the Web of 
Trust and the PGP key servers).  PGP signatures are valuable because anyone can 
check them, given a published public key -- which is the intended use of PGP.

SSH key pairs are authentication keys, but they aren't interesting as signing 
keys unless they are backed up by a public key 
publication/distribution/authentication scheme similar to the PGP web of trust 
(or, similar but different, the CA infrastructure of X.509 keys).

paul

Re: How to target a processor with very primitive addressing modes?

2024-06-10 Thread Paul Koning via Gcc



> On Jun 10, 2024, at 11:48 AM, Georg-Johann Lay  wrote:
> 
> 
> 
> Am 08.06.24 um 11:32 schrieb Mikael Pettersson via Gcc:
>> On Thu, Jun 6, 2024 at 8:59 PM Dimitar Dimitrov  wrote:
>>> Have you tried defining TARGET_LEGITIMIZE_ADDRESS for your target? From
>>> a quick search I see that the iq2000 and rx backends are rewriting some
>>> PLUS expression addresses with insn sequence to calculate the address.
>> I have partial success.
>> The key was to define both TARGET_LEGITIMATE_ADDRESS_P and an
>> addptr3 insn.
>> I had tried TARGET_LEGITIMATE_ADDRESS_P before, together with various
>> combinations of TARGET_LEGITIMIZE_ADDRESS and
>> LEGITIMIZE_RELOAD_ADDRESS, but they all threw gcc into reload loops.
>> My add3 insn clobbers the CC register. The docs say to define
>> addptr3 in this case, and that eliminated the reload loops.
>> The issue now is that the machine cannot perform an add without
>> clobbering the CC register, so I'll have to hide that somehow. When
>> emitting the asm code, can one check if the CC register is LIVE-OUT
>> from the insn? If it isn't I shouldn't have to generate code to
>> preserve it.
>> /Mikael
> 
> There is a different approach, like the one taken by AVR (and maybe some
> more targets):
> 
> Don't introduce CC until after reload, i.e. keep cbranch insns
> and only split them to compare + branch after reload in the
> first post reload split pass.
> 
> It's some effort because the number of insns is effectively
> doubled: One pre-reload version of the insn without CC,
> and a postreload version with CC.  On AVR, most insns don't
> set CC in a usable way, so that works, though not as
> well as the killed cc0 representation.

Yes, PDP11 does this also.  And it uses define_subst to create two post-reload 
flavors, one that clobbers CC, one that sets it.  (The CC set from most 
instructions is pretty usable in the sense that it's generally what's needed 
for a compare against zero.)

> Then I am not sure whether TARGET_LEGITIMIZE_ADDRESS works in
> all situations, in particular when it comes to accessing frame
> locations. It might be required to fake-supply reg+offset
> addressing, and then split that post-reload.
> 
> An example is the Reduced Tiny cores (-mavrtiny) of the AVR
> port that only support POST_INC or byte addressing.  Splitting
> locations after reload improved code quality a lot.

Did you find a good way to handle POST_INC or similar modes in LRA?  PDP11 
would like to use that (and PRE_DEC) but it seems that LRA is even less willing 
than recent versions of old reload to generate such modes.

paul


Re: How to target a processor with very primitive addressing modes?

2024-06-10 Thread Paul Koning via Gcc



> On Jun 10, 2024, at 4:47 AM, Florian Weimer via Gcc  wrote:
> 
> * Jeff Law via Gcc:
> 
>> If he's got a CC register exposed prior to LRA and LRA needs to insert
>> any code, that inserted code may clobber the CC state.  This is
>> discussed in the reload-to-LRA transition wiki page.
> 
> Do you mean the CC0 conversion page?
> 
>  
> 
> Thanks,
> Florian

There is also a very short page on LRA: https://gcc.gnu.org/wiki/LRAIsDefault .
It says that the change may be trivial, but a few points may need attention.

paul



Re: How to target a processor with very primitive addressing modes?

2024-06-08 Thread Paul Koning via Gcc



> On Jun 8, 2024, at 1:12 PM, Jeff Law  wrote:
> 
> 
> 
> On 6/8/24 10:45 AM, Paul Koning via Gcc wrote:
>>> On Jun 8, 2024, at 5:32 AM, Mikael Pettersson via Gcc  
>>> wrote:
>>> 
>>> On Thu, Jun 6, 2024 at 8:59 PM Dimitar Dimitrov  wrote:
>>>> Have you tried defining TARGET_LEGITIMIZE_ADDRESS for your target? From
>>>> a quick search I see that the iq2000 and rx backends are rewriting some
>>>> PLUS expression addresses with insn sequence to calculate the address.
>>> 
>>> I have partial success.
>>> 
>>> The key was to define both TARGET_LEGITIMATE_ADDRESS_P and an
>>> addptr3 insn.
>>> 
>>> I had tried TARGET_LEGITIMATE_ADDRESS_P before, together with various
>>> combinations of TARGET_LEGITIMIZE_ADDRESS and
>>> LEGITIMIZE_RELOAD_ADDRESS, but they all threw gcc into reload loops.
>>> 
>>> My add3 insn clobbers the CC register. The docs say to define
>>> addptr3 in this case, and that eliminated the reload loops.
>>> 
>>> The issue now is that the machine cannot perform an add without
>>> clobbering the CC register, so I'll have to hide that somehow. When
>>> emitting the asm code, can one check if the CC register is LIVE-OUT
>>> from the insn? If it isn't I shouldn't have to generate code to
>>> preserve it.
>>> 
>>> /Mikael
>> I'm not sure why an add that clobbers CC requires anything special (other than 
>> of course showing the CC register as clobbered in the definition).  pdp11 is 
>> another target that only has a CC-clobbering add.  Admittedly, it does have 
>> register+offset addressing modes, but still the reload machinery deals just 
>> fine with add operations like that.
> If he's got a CC register exposed prior to LRA and LRA needs to insert any 
> code, that inserted code may clobber the CC state.  This is discussed in the 
> reload-to-LRA transition wiki page.
> 
> jeff

I remember working through all that in the CC0 to CCmode transition, though I 
didn't notice any new issues with LRA (in pdp11, LRA conversion simply amounted 
to turning it on).  The CC0 conversion documentation, as I recall, speaks of 
two flavors of targets, and two different ways of managing CC in the MD file.  
PDP11 is the type of target where all ALU operations update CC, so in 
particular there is no way to do address arithmetic beyond what the addressing 
modes do, without affecting CC.  The style of CC description I used given that 
guidance works just fine.

paul



Re: How to target a processor with very primitive addressing modes?

2024-06-08 Thread Paul Koning via Gcc



> On Jun 8, 2024, at 5:32 AM, Mikael Pettersson via Gcc  wrote:
> 
> On Thu, Jun 6, 2024 at 8:59 PM Dimitar Dimitrov  wrote:
>> Have you tried defining TARGET_LEGITIMIZE_ADDRESS for your target? From
>> a quick search I see that the iq2000 and rx backends are rewriting some
>> PLUS expression addresses with insn sequence to calculate the address.
> 
> I have partial success.
> 
> The key was to define both TARGET_LEGITIMATE_ADDRESS_P and an
> addptr3 insn.
> 
> I had tried TARGET_LEGITIMATE_ADDRESS_P before, together with various
> combinations of TARGET_LEGITIMIZE_ADDRESS and
> LEGITIMIZE_RELOAD_ADDRESS, but they all threw gcc into reload loops.
> 
> My add3 insn clobbers the CC register. The docs say to define
> addptr3 in this case, and that eliminated the reload loops.
> 
> The issue now is that the machine cannot perform an add without
> clobbering the CC register, so I'll have to hide that somehow. When
> emitting the asm code, can one check if the CC register is LIVE-OUT
> from the insn? If it isn't I shouldn't have to generate code to
> preserve it.
> 
> /Mikael

I'm not sure why an add that clobbers CC requires anything special (other than of 
course showing the CC register as clobbered in the definition).  pdp11 is 
another target that only has a CC-clobbering add.  Admittedly, it does have 
register+offset addressing modes, but still the reload machinery deals just 
fine with add operations like that.

paul



Re: How to avoid some built-in expansions in gcc?

2024-05-31 Thread Paul Koning via Gcc



> On May 31, 2024, at 11:06 AM, Georg-Johann Lay  wrote:
> 
> 
> 
> Am 31.05.24 um 17:00 schrieb Paul Koning:
>>> On May 31, 2024, at 9:52 AM, Georg-Johann Lay  wrote:
>>> 
>>> What's the recommended way to stop built-in expansions in gcc?
>>> 
>>> For example, avr-gcc expands isinff() to a bloated version of an isinff() 
>>> implementation that's written in asm (PR115307).
>>> 
>>> Johann
>> Isn't that up to the target back end?
>>  paul
> 
> 
> Yes, that's the reason why it's a target PR.
> 
> My question is where/how to do it.
> 
> It's clear that twiddling the options works and is a simple and 
> comprehensible solution, but it seems a bit of a hack to me.
> 
> Johann

I haven't dug deep into this, but I would think at least part of the answer is 
in the target cost functions.  If those assign RTX costs according to size, then 
the optimizer would favor smaller code.  Right?

Does inline assembly expansion of builtins depend on target code supplying that 
expansion?  If so, the answer would be not to supply it, or at least not unless 
asked for by an option.  If it comes from common code, that's a different 
matter, then perhaps there should be target hooks to let the target disallow or 
discourage such expansion.  I might want such a thing for pdp11 as well.

paul



Re: How to avoid some built-in expansions in gcc?

2024-05-31 Thread Paul Koning via Gcc



> On May 31, 2024, at 9:52 AM, Georg-Johann Lay  wrote:
> 
> What's the recommended way to stop built-in expansions in gcc?
> 
> For example, avr-gcc expands isinff() to a bloated version of an isinff() 
> implementation that's written in asm (PR115307).
> 
> Johann

Isn't that up to the target back end?  It should define the optimization rules, 
and those should allow it to bias towards small code rather than fast big code.

paul



Re: [committed] PATCH for Re: Stepping down as maintainer for ARC and Epiphany

2024-05-21 Thread Paul Koning via Gcc



> On May 21, 2024, at 9:57 AM, Jeff Law  wrote:
> 
> 
> 
> On 5/21/24 12:05 AM, Richard Biener via Gcc wrote:
>> On Mon, May 20, 2024 at 4:45 PM Gerald Pfeifer  wrote:
>>> 
>>> On Wed, 5 Jul 2023, Joern Rennecke wrote:
 I haven't worked with these targets in years and can't really do
 sensible maintenance or reviews of patches for them. I am currently
 working on optimizations for other ports like RISC-V.
>>> 
>>> I noticed MAINTAINERS was not updated, so pushed the patch below.
>> That leaves the epiphany port unmaintained.  Should we automatically add such
>> ports to the list of obsoleted ports?
> Given that epiphany has randomly failed tests for the last 3+ years due to 
> bugs in its patterns, yes, it really needs to be deprecated.
> 
> I tried to fix the worst of the offenders in epiphany.md a few years back and 
> gave up.  Essentially seemingly innocent changes in the RTL will cause reload 
> to occasionally not see a path to get constraints satisfied.  So a test which 
> passes today will flip to failing tomorrow while some other test or tests 
> will go the other way.

Does LRA make that issue go away, or does it not help?

paul



Re: [RFC] Linux system call builtins

2024-04-10 Thread Paul Koning via Gcc



> On Apr 9, 2024, at 9:48 PM, Matheus Afonso Martins Moreira via Gcc 
>  wrote:
> 
> ...
> MIPS calling conventions work like this:
> 
>> mips/n32,64  a0 a1 a2 a3 a4 a5
>> mips/o32     a0 a1 a2 a3 ...
>> mips/o32     args 5-8 are passed on the stack

Yes, for regular function calls, but at least in the case of NetBSD, not for 
syscalls.  They have a somewhat odd calling convention that doesn't match any 
of the normal function call ABIs, though it's similar.

paul



Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-09 Thread Paul Koning via Gcc



> On Apr 9, 2024, at 3:59 PM, Jonathon Anderson via Gcc  wrote:
> 
> On Tue, Apr 9, 2024, 10:57 Andreas Schwab  wrote:
> 
>> On Apr 09 2024, anderson.jonath...@gmail.com wrote:
>> 
>>> - This xz backdoor injection unpacked attacker-controlled files and ran
>> them during `configure`. Newer build systems implement a build abstraction
>> (aka DSL) that acts similar to a sandbox and enforces rules (e.g. the only
>> code run during `meson setup` is from `meson.build` files and CMake).
>> Generally speaking the only way to disobey those rules is via an "escape"
>> command (e.g. `run_command()`) of which there are few. This reduces the
>> task of auditing the build scripts for sandbox-breaking malicious intent
>> significantly: only the "escapes" need investigation, and they
>> should(tm) be rare for well-behaved projects.
>> 
>> Just like you can put your backdoor in *.m4 files, you can put them in
>> *.cmake files.
> 
> 
> CMake has its own sandbox and rules and escapes (granted, much more of
> them). But regardless, the injection code would be committed to the
> repository (point 2) and would not hold up to a source directory mounted
> read-only (point 3).

Why would the injection code necessarily be committed to the repository?  It 
wasn't in the xz attack -- one hole in the procedures is that the kits didn't 
match the repository and no checks caught this.  I don't see how a different 
build system would cure that issue.  Instead, there needs to be some sort of 
audit that verifies there aren't rogue or modified elements in the kit.

paul




Re: [RFC] Linux system call builtins

2024-04-08 Thread Paul Koning via Gcc



> On Apr 8, 2024, at 4:01 PM, Paul Iannetta via Gcc  wrote:
> 
> On Mon, Apr 08, 2024 at 11:26:40AM -0700, Andrew Pinski wrote:
>> On Mon, Apr 8, 2024 at 11:20 AM Paul Iannetta via Gcc  
>> wrote:
>>> ...
>> Also do you sign or zero extend a 32bit argument for LP64 targets?
>> Right now it is not obvious nor documented in your examples.
>> 
> 
> Another case would be targets allowing an immediate argument for their
> syscall instruction.  Sign extend is probably always an error, zero
> extend may give the expected results. 

It depends on the ABI.  For example, on MIPS, pointers are treated as signed 
when extending from 32 to 64 bits.

paul




Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-03 Thread Paul Koning via Gcc



> On Apr 3, 2024, at 2:04 PM, Toon Moene  wrote:
> 
> On 4/1/24 17:06, Mark Wielaard wrote:
> 
>> A big thanks to everybody working this long Easter weekend who helped
>> analyze the xz-backdoor and making sure the impact on Sourceware and
>> the hosted projects was minimal.
> 
> Thanks for those efforts !
> 
> Now, I have seen two more days of thinking about this vulnerability ... but 
> no one seems to address the following issues:
> 
> A hack was made in liblzma, which, when the code was executed by a daemon 
> that by virtue of its function, *has* to be run as root, was effective.
> 
> Two questions arise (as far as I am concerned):
> 
> 1. Do daemons like sshd *have* to be linked with shared libraries ?
>   Or could it be left to the security minded of the downstream
>   (binary) distributions to link it statically with known & proven
>   correct libraries ?

I would add: should IFUNC be deleted?  Or alternatively, should it be strictly 
limited only to non-security-sensitive applications when not running as root?

> 2. Is it a limitation of the Unix / Linux daemon concept that, once
>   such a process needs root access, it has to have root access
>   *always* - even when performing trivial tasks like compressing
>   data ?

Clearly not, given the existence of the "seteuid" syscall.

> I recall quite well (vis-a-vis question 2) that the VMS equivalent would drop 
> all privileges at the start of the code, and request only those relevant when 
> actually needed (e.g., to open a file for reading that was owned by [the 
> equivalent on VMS] of root - or perform other functions that only root could 
> do), and then drop them immediately afterwards again.

Yes, and with additional effort all "root" type applications could be written 
that way.

paul



Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-03 Thread Paul Koning via Gcc



> On Apr 3, 2024, at 10:00 AM, Michael Matz  wrote:
> 
> Hello,
> 
> On Wed, 3 Apr 2024, Martin Uecker via Gcc wrote:
> 
 Seems reasonable, but note that it wouldn't make any difference to
 this attack.  The liblzma library was modified to corrupt the sshd
 binary, when sshd was linked against liblzma.  The actual attack
 occurred via a connection to a corrupt sshd.  If sshd was running as
 root, as is normal, the attacker had root access to the machine.  None
 of the attacking steps had anything to do with having root access
 while building or installing the program.
>> 
>> There does not seem a single good solution against something like this.
>> 
>> My takeaway is that software needs to become less complex. Do 
>> we really still need complex build systems such as autoconf?
> 
> Do we really need complex languages like C++ to write our software in?  
> SCNR :)  Complexity lies in the eye of the beholder, but to be honest in 
> the software that we're dealing with here, the build system or autoconf 
> does _not_ come to mind first when thinking about complexity.
> 
> (And, FWIW, testing for features isn't "complex".  And have you looked at 
> other build systems?  I have, and none of them are less complex, just 
> opaque in different ways from make+autotools).
> 
> Ciao,
> Michael.

I would tend to agree with that even given my limited exposure to alternatives.

One aspect of the present attack that needs to be cured is that -- as I 
understand it -- the open source repository was fine but the kit as distributed 
had been subverted.  In other words, the standard assumption that the 
repository actually corresponds to the released code was not valid.  And 
furthermore, that it wasn't unusual for the kit to contain different or 
additional elements, just that it wasn't supposed to differ in malicious ways.

One possible answer is for all elements of kits to be made explicitly visible, 
though generated files probably don't want to be held in a normal source 
control system.  Another possible answer is for consumers of kits to treat kits 
as suspect, and have them unpacked and examined -- including any elements not 
source controlled -- before acceptance.  I think the first option is better 
because it exposes these additional elements to ongoing scrutiny from the 
entire community, rather than only one-time inspection by release managers who 
are probably quite pressed for time.

Either way, the reasons for these extra files to exist and the manner in which 
they are supposed to be generated would need to be both well documented and 
readily reproducible by outside parties.

paul



Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-02 Thread Paul Koning via Gcc



> On Apr 2, 2024, at 6:08 PM, Guinevere Larsen  wrote:
> 
> On 4/2/24 16:54, Sandra Loosemore wrote:
>> On 4/1/24 09:06, Mark Wielaard wrote:
>>> A big thanks to everybody working this long Easter weekend who helped
>>> analyze the xz-backdoor and making sure the impact on Sourceware and
>>> the hosted projects was minimal.
>>> 
>>> This email isn't about the xz-backdoor itself. Do see Sam James FAQ
>>> https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
>>> (Sorry for the github link, but this one does seem viewable without
>>> proprietary javascript)
>>> 
>>> We should discuss what we have been doing and should do more to
>>> mitigate and prevent the next xz-backdoor. There are a couple of
>>> Sourceware services that can help with that.
>>> 
>>> TLDR;
>>> - Replicatable isolated container/VMs are nice, we want more.
>>> - autoregen buildbots, it should be transparent (and automated) how to
>>>regenerate build/source files.
>>> - Automate (snapshot) releases tarballs.
>>> - Reproducible releases (from git).
>>> 
>>> [snip]
>> 
>> While I appreciate the effort to harden the Sourceware infrastructure 
>> against malicious attacks and want to join in on thanking everyone who 
>> helped analyze this issue, to me it seems like the much bigger problem is 
>> that XZ had a maintainer who appears to have acted in bad faith.  Are the 
>> development processes used by the GNU toolchain components robust enough to 
>> cope with deliberate sabotage of the code base?  Do we have enough eyes 
>> available to ensure that every commit, even those by designated maintainers, 
>> is vetted by someone else?  Do we need to harden our process, too, to require all 
>> patches to be signed off by someone else before committing?
>> 
>> -Sandra
>> 
>> 
> What likely happened for the maintainer who acted in bad faith was that they 
> entered the project with bad faith intent from the start - seeing as they 
> were only involved with the project for 2 years, and there was much social 
> pressure from fake email accounts for the single maintainer of XZ to accept 
> help.
> 
> While we would obviously like to have more area maintainers and possibly 
> global maintainers to help spread the load, I don't think any of the projects 
> listed here are all that susceptible to the same type of social engineering. 
> For one, getting the same type of blanket approval would be a much more 
> involved process because we already have a reasonable amount of people with 
> those privileges, no one is dealing with burnout and sassy customers saying 
> we aren't doing enough.
> 
> Beyond that, we (GDB) are already experimenting with approved-by, and I think 
> glibc was doing the same. That guarantees at least a second set of eyes that 
> analyzed and agreed with the patch, I don't think signed-off would add more 
> than that tag (even if security was not the reason why we implemented them).
> 
> -- 
> Cheers,
> Guinevere Larsen
> She/Her/Hers

I agree that GDB, and for that matter other projects with significant numbers 
of contributors, are not nearly as likely to be vulnerable to this sort of 
attack.  But I worry that xz may not be the only project that's small enough to 
be vulnerable, and be security-relevant in not so obvious ways.

One question that comes to mind is whether there has been an effort across the 
open source community to identify possible other targets of such attacks.  
Contributions elsewhere by the suspect in this case are an obvious concern, but 
similar scenarios with different names could also be.  That probably should be 
an ongoing activity: whenever some external component is used, it would be 
worth knowing how it is maintained, and how many eyeballs are involved.  Even 
if this isn't done by everyone, it seems like a proper precaution for security 
sensitive projects.

Another question that comes to mind: I would guess that relevant law 
enforcement agencies are already looking into this, but it would seem 
appropriate for those closest to the attacked software to reach out explicitly 
and assist in any criminal investigations.

paul



Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-02 Thread Paul Koning via Gcc



> On Apr 2, 2024, at 4:03 PM, Paul Eggert  wrote:
> 
> On 4/2/24 12:54, Sandra Loosemore wrote:
>> Do we need to harden our process, too, to require all patches to be signed off by 
>> someone else before committing?
> 
> It's easy for an attacker to arrange to have "someone else" in cahoots.
> 
> Although signoffs can indeed help catch inadvertent mistakes, they're 
> relatively useless against determined attacks of this form, and we must 
> assume that nation-state attackers will be determined.

Another consideration is the size of the project.  "Many eyeballs" helps if 
there are plenty of people watching.  For smaller tools that have only a small 
body of contributors, it's easier for one or two malicious ones to subvert 
things.

Would it help to require (rather than just recommend) "don't use root except 
for the actual 'install' step" ?

paul



Re: Odd Python errors in the G++ testsuite

2023-10-09 Thread Paul Koning via Gcc



> On Oct 9, 2023, at 7:42 PM, Ben Boeckel via Gcc  wrote:
> 
> On Mon, Oct 09, 2023 at 20:12:01 +0100, Maciej W. Rozycki wrote:
>> I'm seeing these tracebacks for several cases across the G++ testsuite:
>> 
>> Executing on host: python3 -c "import sys; assert sys.version_info >= (3, 
>> 6)"(timeout = 300)
>> spawn -ignore SIGHUP python3 -c import sys; assert sys.version_info >= (3, 6)
> 
> What version of Python3 do you have? The test suite might not actually
> properly handle not having 3.7 (i.e., skip the tests that require it).

But the rule that you can't put a dict in a set is as old as set support (2.x 
for some small x).

paul



Re: Complex numbers in compilers - upcoming GNU Tools Cauldron.

2023-09-12 Thread Paul Koning via Gcc



> On Sep 12, 2023, at 7:12 AM, Martin Uecker via Gcc  wrote:
> 
> Am Dienstag, dem 12.09.2023 um 11:25 +0200 schrieb Richard Biener via Gcc:
>>> ...
>> 
>> Lack of applications / benchmarks using complex numbers is also a
>> problem for any work on this.
> 
> I could probably provide some examples such as a FFT, 
> complex Gaussian random number generation, mandelbrot
> set computation, etc.

Another nice one might be NEC-2, an electromagnetic field analyzer used to 
model antennas and similar problems.

paul



Re: Unexpected behavior of gcc on pointer dereference & increment

2023-09-01 Thread Paul Koning via Gcc



> On Sep 1, 2023, at 12:35 PM, Tomas Bortoli via Gcc  wrote:
> 
> Hi,
> 
> I recently discovered that the following C statement:
> 
> pointer++;
> 
> is semantically equivalent to the following:
> 
> *pointer++;
> 
> Is this due to operators' priority? To me, that looks weird.

Yes, https://en.cppreference.com/w/c/language/operator_precedence shows that.  
Liberal use of parentheses is a very good practice. 

paul



Re: Porting to a custom ISA

2023-08-15 Thread Paul Koning via Gcc



> On Aug 15, 2023, at 8:49 AM, MegaIng  wrote:
> 
> 
> On Aug 15, 2023, at 7:37 AM, Paul Koning wrote:
> 
>> 
>>> On Aug 15, 2023, at 7:37 AM, MegaIng via Gcc  wrote:
>>> 
>>> ...
>>> Also, on another backend I saw comments relating to libgcc (or newlib?) not 
>>> working that well on systems where int is 16bit. Is that still true, and 
>>> what is the best workaround?
>> I haven't seen that comment and it doesn't make sense to me.  Libgcc 
>> certainly is fine for 16 bit targets -- after all, GCC supports pdp11 which 
>> is such a target.  And while newlib doesn't have pdp11 support I have done 
>> some fragments of pdp11 support for it, to give me a way to run the 
>> execution part of the GCC test suite.  One of these days I should do a 
>> better job.  As far as I know it's entirely doable.
> The comment is in msp430.h and rl78.h, line 160. And it appears quite common 
> to artificially set `UNITS_PER_WORD` to 4 instead of the correct 2 or 1 when 
> compiling libgcc accross other backends as well, including avr, gcn. Is this 
> out-of-date and no longer required for libgcc?

It simply seems wrong.  In pdp11.h UNITS_PER_WORD is 2, which is what the 
machine requires.  And that works fine.

Perhaps the issue is that the libgcc routines implement operations not handled 
by the target hardware, or by expansion in the machine description.  So for 16 
bit machines you're going to need a bunch of 32 bit support routines, but 
probably not 16 bit support routines.  There are some examples, though.  Not 
all pdp11s have divide instructions so there is a udivhi3 function in libgcc, 
and the pdp11 config files call for that to be included.  

paul




Re: Porting to a custom ISA

2023-08-15 Thread Paul Koning via Gcc



> On Aug 15, 2023, at 8:06 AM, Richard Biener via Gcc  wrote:
> 
> On Tue, Aug 15, 2023 at 1:38 PM MegaIng via Gcc  wrote:
>> ...
>> And a bit more concrete with something I am having a hard time
>> debugging. I am getting errors `invalid_void`, seemingly triggered by an
>> absence of `gen_return` when compiling with anything higher than -O0. How
>> do I correctly provide an implementation for that? Or disable its
>> usage? Our epilogue is non-trivial, and it doesn't look like the other
>> backends use something like `(define_insn "return" ...)`.
> 
> Somebody else has to answer this.

Again using pdp11 as an example -- it doesn't define any of the *return 
patterns either.  Instead it has a define_expand "epilogue" which calls a 
function to generate all the necessary elements, the last of which is a 
machine-specific pattern that will produce the final return instruction.  

This is using the RTL flavor of prologue/epilogue.  At one point in the past 
that target directly emitted the assembly code for those, which isn't the 
recommended approach.  I tried the RTL flavor and found it to be a better 
answer, and it certainly works for all the optimization levels.

paul



Re: Porting to a custom ISA

2023-08-15 Thread Paul Koning via Gcc



> On Aug 15, 2023, at 7:37 AM, MegaIng via Gcc  wrote:
> 
> ...
> Also, on another backend I saw comments relating to libgcc (or newlib?) not 
> working that well on systems where int is 16bit. Is that still true, and what 
> is the best workaround?

I haven't seen that comment and it doesn't make sense to me.  Libgcc certainly 
is fine for 16 bit targets -- after all, GCC supports pdp11 which is such a 
target.  And while newlib doesn't have pdp11 support I have done some fragments 
of pdp11 support for it, to give me a way to run the execution part of the GCC 
test suite.  One of these days I should do a better job.  As far as I know it's 
entirely doable.

pdp11, in GCC, can have either 16 or 32 bit int, it's a compiler option.  
Pointers are 16 bits, of course.  And it does support larger types (even 64 
bits), expanding into multiple instructions or libgcc calls.  A lot of that is 
handled by common code; the pdp11.md machine description handles a few of them 
to optimize those cases beyond what the common code would produce.  If you're 
doing a 16 bit target you might look at pdp11 for ideas.  One limitation is 
that right now it does only C, mainly because the object format is a.out rather 
than ELF.  That could be fixed.

paul



Re: Where to place warning about non-optimized tail and sibling calls

2023-08-01 Thread Paul Koning via Gcc
I'm puzzled.

The fundamental rule of optimization is that it doesn't change the (defined) 
semantics of the program.  How is it possible to write valid C that is correct 
only if some optimization is done?

In other words, if it matters whether an optimization is done or not, that 
suggests to me you're writing code with undefined semantics, and the answer is 
not to do so. 

paul

> On Aug 1, 2023, at 12:43 PM, Bradley Lucier via Gcc  wrote:
> 
> The Gambit Scheme->C compiler has an option to generate more efficient code 
> if it knows that all tail and sibling calls in the generated C code will be 
> optimized.  If gcc does not, however, optimize a tail or sibling call, the 
> generated C code may be incorrect (depending on circumstances).
> 
> So I would like to add a warning enabled by -Wdisabled-optimization so that 
> if -foptimize-sibling-calls is given and a tail or sibling call is not 
> optimized, then a warning is triggered.
> 
> I don't quite know where to place the warning.  It would be good if there 
> were one piece of code to identify all tail and sibling calls, and then 
> another piece that decides whether the optimization can be performed.
> 
> I see code in gcc/tree-tailcall.cc
> 
> suitable_for_tail_opt_p
> suitable_for_tail_call_opt_p
> 
> which are called by
> 
> tree_optimize_tail_calls_1
> 
> which takes an argument
> 
> opt_tailcalls
> 
> and it's called in one place with opt_tailcalls true and in another place 
> with opt_tailcalls false.
> 
> So I'm losing the plot here.
> 
> There is other code dealing with tail calls in gcc/calls.cc I don't seem to 
> understand at all.
> 
> Any advice?
> 
> Brad



Re: LRA for avr: help with FP and elimination

2023-07-27 Thread Paul Koning via Gcc



> On Jul 27, 2023, at 7:50 AM, Maciej W. Rozycki  wrote:
> 
> On Fri, 14 Jul 2023, Vladimir Makarov via Gcc wrote:
> 
>>>  On the avr, the stack pointer (SP)
>>>   is not used to access stack slots
>> It is very uncommon target then.
> 
> Same with the VAX target.  SP is used for outgoing function arguments, 
> function calls, alloca only.  AP is used for incoming function arguments 
> and is set automatically by hardware at function entry.  FP is used for 
> local variables and is likewise set by hardware at function entry.

While most other targets maintain FP in software, doesn't the same description 
apply to any target that can have a frame pointer?  The frame pointer may be 
used only some of the time (PDP-11) or always (VAX) but when it's used local 
variable references and argument references would go through FP, not SP, right?

paul




Re: abi

2023-07-09 Thread Paul Koning via Gcc
Because implementing an ABI, or dealing with an incompatible change, is hard 
work.  Also, ABI stability means that old binaries work.  So ABI stability 
isn't so much a requirement for the compiler as it is a requirement for any 
sane operating system.  An OS that changes ABI without an extremely good reason 
is an OS that doesn't care about compatibility, which means it doesn't care 
about its customers.

The MIPS examples I pointed to are a good illustration of this.  The original 
("O32") ABI is for MIPS with 32 bit registers and 32 bit addressing.  N32 and 
N64 were introduced by SGI to support 64 bit registers, and (for N64) 64 bit 
pointers.  That's a very compelling benefit.  64 bit addressing is obvious, 
and the performance benefit from using 64 bit registers on machines that have 
them is very large.  So there, the quite large cost of doing this was totally 
justified.

paul

> On Jul 9, 2023, at 4:55 PM, André Albergaria Coelho via Gcc  
> wrote:
> 
> If we can select the ABi for our program (using gcc), why is there a need for 
> ABI stability?!
> 
> why not put it on a define
> 
> 
> #define abi v3
> 
> int main() {
> 
> }
> 
> 
> Each user would just have to compile the code, to follow the abi...no need to 
> worry changing it
> 
> 
> thanks
> 
> 
> andre
> 



Re: abi

2023-07-06 Thread Paul Koning via Gcc
It does, for machine architectures that have multiple ABIs.  MIPS is an example 
where GCC has supported this for at least 20 years.

paul

> On Jul 6, 2023, at 5:19 PM, André Albergaria Coelho via Gcc  
> wrote:
> 
> Could gcc have an option to specify ABI?
> 
> say
> 
> 
> gcc something.c -g -abi 1 -o something
> 
> 
> thanks
> 
> 
> andre
> 



Re: Will GCC eventually learn to use BSR or even TZCNT on AMD/Intel processors?

2023-06-05 Thread Paul Koning via Gcc



> On Jun 5, 2023, at 8:09 PM, Dave Blanchard  wrote:
> 
> On Tue, 6 Jun 2023 01:59:42 +0200
> Gabriel Ravier  wrote:
> 
>> [nothing of value]
> 
> If this guy's threads are such a terrible waste of your time, how about 
> employing your email client's filters to ignore his posts (and mine too) and 
> fuck off? 

Done.  Since you have not shown any ability to have a civilized conversation, I 
will now use my email filter to be spared any further exposure.

paul




Re: End of subscription

2023-05-24 Thread Paul Koning via Gcc



> On May 23, 2023, at 10:08 PM, LIU Hao via Gcc  wrote:
> 
> 在 2023/5/19 20:59, Florian Weimer via Gcc 写道:
>> * Jonathan Wakely via Gcc:
>>> Unfortunately even the Gmail web UI doesn't offer an unsubscribe
>>> option, despite knowing the mails come from a list and showing:
>>> 
>>> mailing list: gcc@gcc.gnu.org Filter messages from this mailing list
>> It does for me, under the ⏷ menu at the end of the recipient list in the
>> message pane.  Be sure that you select a message copy that you actually
>> received through the mailing list.
> 
> The iOS official mail app does not say 'this message is from a mailing list' 
> either. All the other mails from GitHub, SourceForge, StackOverflow, ncurses 
> etc. are marked as from mailing lists. I suspect there is kinda 
> misconfiguration.

Curious, because Mac OS mail does show it as a mailing list message, offering 
up an "unsubscribe" button.  So it looks like an iOS mail bug.

paul



Re: More C type errors by default for GCC 14

2023-05-10 Thread Paul Koning via Gcc



> On May 10, 2023, at 10:39 AM, Eli Zaretskii via Gcc  wrote:
> 
>> ...
>> Sweeping problems under the carpet and hoping no one trips over the 
>> bumps is, at best, pushing problems down the road for future developers.
> 
> I'm not sweeping anything.  This is not GCC's problem to solve, that's
> all.  If the developer avoids dealing with this problem, then he or
> she might be sweeping the problem under the carpet.  But this is not
> GCC's problem.

Agreed.  -Wall -Werror exists for a reason, and choosing to use it is helpful 
but not necessarily feasible for everyone if confronted with old mouldy code.

I remember a wonderful article (out of MIT?) explaining a whole bunch of 
somewhat-surprising C standard rules and why they allowed the compiler to do 
things that many people don't expect.  As I recall, a lot of those were things 
that Linux didn't want and therefore would suppress with suitable -f 
flags.  "Strict aliasing" may have been one of those -- I still remember my 
somewhat-surprised reaction when I first learned what that is and why my 
"obvious" C code was not valid.

I also agree with Eli that using C to write highly reliable code is, shall we 
say, quite a challenge.  The language just isn't well suited for that.  But GCC 
also supports Ada :-)  and now Modula-2.

paul



Re: I have questions regarding the 4.3 codebase...

2023-03-23 Thread Paul Koning via Gcc



> On Mar 23, 2023, at 10:13 AM, Sid Maxwell via Gcc  wrote:
> 
> Thanks for reaching out, Julian, I greatly appreciate your help.  Please
> forgive and over- or under-sharing.  If I've left something out, please let
> me know.
> 
> From my pdp10.md:
> 
> ;; JIRA sw_gcc-68.  gcc recognizes the "movmemhi" 'instruction' for
> ;; doing block moves, as in struct assignment.  This pattern wasn't
> ;; present, however.  I've added it, and it's companion function
> ;; pdp10_expand_movmemhi().
> 
> (define_expand "movmemhi"
>   [(match_operand:BLK 0 "general_operand" "=r")
>(match_operand:BLK 1 "general_operand" "=r")
>(match_operand:SI 2 "general_operand" "=r")
>(match_operand:SI 3 "const_int_operand" "")
>   ]...

I don't remember that far back, but looking at current examples (like vax.md) 
that seems like an odd pattern.  vax.md has a three operand pattern with the 
first two marked as "memory_operand" and only the first one has the = modifier 
on it showing it's an output operand.

What does the 4.3 version of gccint say about it?  Or the 4.3 version of vax.md?

paul



Re: LRA produces RTL not meeting constraint

2023-01-11 Thread Paul Koning via Gcc



> On Jan 11, 2023, at 7:38 PM, Paul Koning via Gcc  wrote:
> 
> 
> 
>> On Jan 11, 2023, at 2:52 PM, Segher Boessenkool  
>> wrote:
>> 
>> Hi Paul,
>> 
>> On Tue, Jan 10, 2023 at 02:39:34PM -0500, Paul Koning via Gcc wrote:
>>> In pdp11.md I have:
>>> 
>>> (define_insn_and_split "addhi3"
>>> [(set (match_operand:HI 0 "nonimmediate_operand" "=rR,rR,Q,Q")
>>> (plus:HI (match_operand:HI 1 "general_operand" "%0,0,0,0")
>>>  (match_operand:HI 2 "general_operand" "rRLM,Qi,rRLM,Qi")))]
>>> ""
>>> "#"
>>> "reload_completed"
>>> [(parallel [(set (match_dup 0)
>>>(plus:HI (match_dup 1) (match_dup 2)))
>>>   (clobber (reg:CC CC_REGNUM))])]
>>> ""
>>> [(set_attr "length" "2,4,4,6")])
>>> 
>>> While compiling libgcc2.c I see this RTL in the .ira dump file:
>>> 
>>> (insn 49 48 53 5 (set (reg/f:HI 136)
>>>   (plus:HI (reg/f:HI 5 r5)
>>>   (const_int -8 [0xfff8]))) 
>>> "../../../../../gcc/libgcc/libgcc2.c":276:4 68 {addhi3}
>>>(expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
>>>   (const_int -8 [0xfff8]))
>>>   (nil)))
>> 
>> What hard register was assigned by IRA to r136?  It shows this in the
>> .ira dump file, search for "Disposition:".
> 
> Disposition:
>3:r25  l0   mem   40:r26  l0 4   51:r31  l0   mem   47:r35  l0   mem
>   42:r45  l0 2   18:r47  l0   mem   38:r52  l0   mem   34:r54  l0   mem
>   29:r64  l0 2   21:r80  l0 0   15:r88  l0 02:r99  l0 0
>   19:r102 l0   mem5:r103 l0   mem   31:r110 l0 0   44:r114 l0 0
>   41:r115 l0   mem   46:r116 l0 0   28:r117 l0   mem   33:r118 l0 0
>   20:r119 l0 2   14:r120 l0 21:r121 l0 20:r122 l0   mem
>9:r123 l0   mem8:r124 l0   mem   55:r125 l0 0   53:r126 l0   mem
>   54:r129 l0 0   52:r135 l0 0   49:r136 l0 5   48:r137 l0 4
>   50:r139 l0 0   45:r145 l0   mem   43:r146 l0 0   39:r147 l0 0
>   36:r148 l0 5   35:r149 l0 4   37:r151 l0 0   32:r157 l0   mem
>   30:r158 l0 0   27:r159 l0 0   25:r160 l0   mem   26:r161 l0 0
>   24:r164 l0 0   22:r165 l0   mem   23:r166 l0 0   16:r170 l0   mem
>   17:r171 l0 0   11:r175 l0 0   13:r176 l0 2   12:r177 l0 2
>   10:r178 l0 06:r179 l0   mem7:r180 l0 04:r184 l0 0
> 
> so R5, if I read that correctly.  Which makes sense given that the input 
> operand is R5.  
>> 
>>> Then in the .reload dump it appears this way:
>>> 
>>> (insn 49 48 53 5 (set (reg/f:HI 5 r5 [136])
>>>   (plus:HI (reg/f:HI 6 sp)
>>>   (const_int 40 [0x28]))) 
>>> "../../../../../gcc/libgcc/libgcc2.c":276:4 68 {addhi3}
>>>(expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
>>>   (const_int -8 [0xfff8]))
>>>   (nil)))
>>> 
>>> which obviously causes an ICE because that RTL doesn't meet the constraints.
>> 
>> Before reload it did not have operands[0] and operands[1] the same,
>> already?
> 
> No, and given that it's an addhi3 pattern that is fine before reload.  It's 
> reload that has to make them match because the machine instruction is two 
> operand.

It occurs to me there's a strange transformation LRA made that I don't 
understand, which is the cause of the trouble.

Input:

(insn 49 48 53 5 (set (reg/f:HI 136)
(plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))) "_mulvdi3.i":38:4 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))
(nil)))
(insn 53 49 50 5 (set (reg/f:HI 137)
(symbol_ref:HI ("__muldi3") [flags 0x41]  )) "_mulvdi3.i":38:4 25 {movhi}
 (expr_list:REG_EQUIV (symbol_ref:HI ("__muldi3") [flags 0x41]  
)
(nil)))

and the IRA "disposition" says it assigned R5 for R136, which is what should 
happen given that operand 1 (in the plus:HI) is R5 and the constraint says that 
operands 0 and 1 should match.

However, Reload shows that it is given:

(insn 49 48 53 5 (set (reg/f:HI 5 r5 [136])
(plus:HI (reg/f:HI 6 sp)
(const_int 40 [0x28]))) "_mulvdi3.i":38:4 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))
(nil)))

In other words, R136 was replaced by R5 as expected -- but at the same time, 
the source operands were replaced by something entirely different.  Is it a 
misguided attempt to eliminate the frame pointer?  R5 can be the frame pointer, 
if it is needed for that purpose.  And replacing an FP reference by an SP 
reference is reasonable enough, but such a substitution has to verify that the 
constraints are still satisfied.  Is that something the target code has to 
provide?  I'm not aware of it.

paul



Re: LRA produces RTL not meeting constraint

2023-01-11 Thread Paul Koning via Gcc


> On Jan 11, 2023, at 2:52 PM, Segher Boessenkool  
> wrote:
> 
> Hi Paul,
> 
> On Tue, Jan 10, 2023 at 02:39:34PM -0500, Paul Koning via Gcc wrote:
>> In pdp11.md I have:
>> 
>> (define_insn_and_split "addhi3"
>>  [(set (match_operand:HI 0 "nonimmediate_operand" "=rR,rR,Q,Q")
>>  (plus:HI (match_operand:HI 1 "general_operand" "%0,0,0,0")
>>   (match_operand:HI 2 "general_operand" "rRLM,Qi,rRLM,Qi")))]
>>  ""
>>  "#"
>>  "reload_completed"
>>  [(parallel [(set (match_dup 0)
>> (plus:HI (match_dup 1) (match_dup 2)))
>>(clobber (reg:CC CC_REGNUM))])]
>>  ""
>>  [(set_attr "length" "2,4,4,6")])
>> 
>> While compiling libgcc2.c I see this RTL in the .ira dump file:
>> 
>> (insn 49 48 53 5 (set (reg/f:HI 136)
>>(plus:HI (reg/f:HI 5 r5)
>>(const_int -8 [0xfff8]))) 
>> "../../../../../gcc/libgcc/libgcc2.c":276:4 68 {addhi3}
>> (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
>>(const_int -8 [0xfff8]))
>>(nil)))
> 
> What hard register was assigned by IRA to r136?  It shows this in the
> .ira dump file, search for "Disposition:".

Disposition:
3:r25  l0   mem   40:r26  l0 4   51:r31  l0   mem   47:r35  l0   mem
   42:r45  l0 2   18:r47  l0   mem   38:r52  l0   mem   34:r54  l0   mem
   29:r64  l0 2   21:r80  l0 0   15:r88  l0 02:r99  l0 0
   19:r102 l0   mem5:r103 l0   mem   31:r110 l0 0   44:r114 l0 0
   41:r115 l0   mem   46:r116 l0 0   28:r117 l0   mem   33:r118 l0 0
   20:r119 l0 2   14:r120 l0 21:r121 l0 20:r122 l0   mem
9:r123 l0   mem8:r124 l0   mem   55:r125 l0 0   53:r126 l0   mem
   54:r129 l0 0   52:r135 l0 0   49:r136 l0 5   48:r137 l0 4
   50:r139 l0 0   45:r145 l0   mem   43:r146 l0 0   39:r147 l0 0
   36:r148 l0 5   35:r149 l0 4   37:r151 l0 0   32:r157 l0   mem
   30:r158 l0 0   27:r159 l0 0   25:r160 l0   mem   26:r161 l0 0
   24:r164 l0 0   22:r165 l0   mem   23:r166 l0 0   16:r170 l0   mem
   17:r171 l0 0   11:r175 l0 0   13:r176 l0 2   12:r177 l0 2
   10:r178 l0 06:r179 l0   mem7:r180 l0 04:r184 l0 0

so R5, if I read that correctly.  Which makes sense given that the input 
operand is R5.  
> 
>> Then in the .reload dump it appears this way:
>> 
>> (insn 49 48 53 5 (set (reg/f:HI 5 r5 [136])
>>(plus:HI (reg/f:HI 6 sp)
>>(const_int 40 [0x28]))) 
>> "../../../../../gcc/libgcc/libgcc2.c":276:4 68 {addhi3}
>> (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
>>(const_int -8 [0xfff8]))
>>(nil)))
>> 
>> which obviously causes an ICE because that RTL doesn't meet the constraints.
> 
> Before reload it did not have operands[0] and operands[1] the same,
> already?

No, and given that it's an addhi3 pattern that is fine before reload.  It's 
reload that has to make them match because the machine instruction is two 
operand.

>> This happens only when LRA is used.
>> 
>> I also see this in the .reload file, but I don't know what it means:
>> 
>>   Choosing alt 1 in insn 49:  (0) rR  (1) 0  (2) Qi {addhi3}
> 
> It chose alternative 1 in this instruction, which uses constraints rR
> for operands[0], a tie to that for operands[1], and Qi for operands[2].
> 
>>1 Non-pseudo reload: reject+=2
>>1 Non input pseudo reload: reject++
>>Cycle danger: overall += LRA_MAX_REJECT
>>  alt=0,overall=609,losers=1,rld_nregs=2
>>alt=1: Bad operand -- refuse
>>alt=2: Bad operand -- refuse
>>  alt=3,overall=0,losers=0,rld_nregs=0
> 
> This is for the *next* instruction.  There is a "Choosing alt" thing
> for that right after this.  That will be alt 3: 1 and 2 are refused,
> 0 costs 609, 3 costs 0.
> 
> Reading the LRA dumps needs some getting used to ;-)

Indeed.  So does that mean the discussion about insn 48 is the interesting one? 
 That goes on for a while:

 Choosing alt 0 in insn 48:  (0) =rR  (1) RN {movhi}
1 Spill Non-pseudo into memory: reject+=3
Using memory insn operand 1: reject+=3
1 Non input pseudo reload: reject++
  alt=0,overall=13,losers=1,rld_nregs=0
0 Spill pseudo into memory: reject+=3
Using memory insn operand 0: reject+=3
0 Non input pseudo reload: reject++
  

LRA produces RTL not meeting constraint

2023-01-10 Thread Paul Koning via Gcc
In pdp11.md I have:

(define_insn_and_split "addhi3"
  [(set (match_operand:HI 0 "nonimmediate_operand" "=rR,rR,Q,Q")
(plus:HI (match_operand:HI 1 "general_operand" "%0,0,0,0")
 (match_operand:HI 2 "general_operand" "rRLM,Qi,rRLM,Qi")))]
  ""
  "#"
  "reload_completed"
  [(parallel [(set (match_dup 0)
   (plus:HI (match_dup 1) (match_dup 2)))
  (clobber (reg:CC CC_REGNUM))])]
  ""
  [(set_attr "length" "2,4,4,6")])

While compiling libgcc2.c I see this RTL in the .ira dump file:

(insn 49 48 53 5 (set (reg/f:HI 136)
(plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))) 
"../../../../../gcc/libgcc/libgcc2.c":276:4 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))
(nil)))

Then in the .reload dump it appears this way:

(insn 49 48 53 5 (set (reg/f:HI 5 r5 [136])
(plus:HI (reg/f:HI 6 sp)
(const_int 40 [0x28]))) "../../../../../gcc/libgcc/libgcc2.c":276:4 
68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -8 [0xfff8]))
(nil)))

which obviously causes an ICE because that RTL doesn't meet the constraints.  
This happens only when LRA is used.

I also see this in the .reload file, but I don't know what it means:

 Choosing alt 1 in insn 49:  (0) rR  (1) 0  (2) Qi {addhi3}
1 Non-pseudo reload: reject+=2
1 Non input pseudo reload: reject++
Cycle danger: overall += LRA_MAX_REJECT
  alt=0,overall=609,losers=1,rld_nregs=2
alt=1: Bad operand -- refuse
alt=2: Bad operand -- refuse
  alt=3,overall=0,losers=0,rld_nregs=0

Any ideas?  I ran into this when trying to make LRA the default for this target.

paul



Re: Widening multiplication, but no narrowing division [i386/AMD64]

2023-01-10 Thread Paul Koning via Gcc



> On Jan 9, 2023, at 11:27 AM, Stefan Kanthak  wrote:
> 
> "Paul Koning"  wrote:
> 
>>> ...
> 
>> Yes, I was thinking the same.  But I spent a while on that pattern -- I
>> wanted to support div/mod as a single operation because the machine has
>> that primitive.  And I'm pretty sure I saw it work before I committed
>> that change.  That's why I'm wondering if something changed.
> 
> I can't tell from the past how GCC once worked, but today it can't
> (or doesn't) use such patterns, at least not on i386/AMD64 processors.

It turns out I was confused by the RTL generated by my pattern.  That pattern 
is for divmodhi, so it works as desired given same-size inputs.  

I'm wondering if the case of longer dividend -- which is a common thing for 
several machines -- could be handled by a define_peephole2 that matches the 
sign-extend of the divisor followed by the (longer) divide.  I made a stab at 
that but what I wrote wasn't valid.

So, question to the list:  suppose I want to write RTL that matches what Stefan 
is talking about, with a div or mod or divmod that has si results and a di 
dividend (or hi results and an si dividend), how would you do that?  Can a 
define_peephole2 do it, and if so, what would it look like?

paul




struct vs. class in GCC source

2023-01-10 Thread Paul Koning via Gcc
Building on Mac with Clang I get warnings like this:

../../../gcc/gcc/cgraph.h:2629:28: warning: struct 'cgraph_edge' was previously 
declared as a class; this is valid, but may result in linker errors under the 
Microsoft C++ ABI [-Wmismatched-tags]

It seems to be talking about a MS bug (since C++ says struct and class mean the 
same thing other than the default access).  Still, I wonder if it would be 
worth changing the code to use just one of "struct" or "class" for any given 
type.  (And then the convention would presumably be that a POD type is called 
"struct" and other types are "class".)

paul



Re: Widening multiplication, but no narrowing division [i386/AMD64]

2023-01-09 Thread Paul Koning via Gcc



> On Jan 9, 2023, at 10:20 AM, Stefan Kanthak  wrote:
> 
> "Paul Koning"  wrote:
> 
>>> On Jan 9, 2023, at 7:20 AM, Stefan Kanthak  wrote:
>>> 
>>> Hi,
>>> 
>>> GCC (and other C compilers too) support the widening multiplication
>>> of i386/AMD64 processors, but DON'T support their narrowing division:
>> 
>> I wonder if this changed in the recent past.
>> I have a pattern for this type of thing in pdp11.md:
> [...]
>> and I'm pretty sure this worked at some point in the past.  
> 
> Unfortunately the C standard defines that the smaller operand (of lesser
> conversion rank), here divisor, has to undergo a conversion to the "real
> common type", i.e. the broader operand (of higher conversion rank), here
> dividend. Unless the information about promotion/conversion is handed over
> to the code generator it can't apply such patterns -- as demonstrated by
> the demo code.
> 
> regards
> Stefan

Yes, I was thinking the same.  But I spent a while on that pattern -- I wanted 
to support div/mod as a single operation because the machine has that 
primitive.  And I'm pretty sure I saw it work before I committed that change.  
That's why I'm wondering if something changed.

paul



Re: Widening multiplication, but no narrowing division [i386/AMD64]

2023-01-09 Thread Paul Koning via Gcc



> On Jan 9, 2023, at 7:20 AM, Stefan Kanthak  wrote:
> 
> Hi,
> 
> GCC (and other C compilers too) support the widening multiplication
> of i386/AMD64 processors, but DON'T support their narrowing division:

I wonder if this changed in the recent past.  I have a pattern for this type of 
thing in pdp11.md:

(define_expand "divmodhi4"
  [(parallel
[(set (subreg:HI (match_dup 1) 0)
(div:HI (match_operand:SI 1 "register_operand" "0")
(match_operand:HI 2 "general_operand" "g")))
 (set (subreg:HI (match_dup 1) 2)
(mod:HI (match_dup 1) (match_dup 2)))])
   (set (match_operand:HI 0 "register_operand" "=r")
(subreg:HI (match_dup 1) 0))
   (set (match_operand:HI 3 "register_operand" "=r")
(subreg:HI (match_dup 1) 2))]
  "TARGET_40_PLUS"
  "")

and I'm pretty sure this worked at some point in the past.  

paul



Re: [BUG] missing warning for pointer arithmetic out of bounds

2022-12-13 Thread Paul Koning via Gcc



> On Dec 13, 2022, at 2:08 PM, Alejandro Colomar via Gcc  
> wrote:
> 
> Hi!
> 
> For the following program:
> 
> 
>$ cat buf.c
>#include <stdio.h>
> 
>int main(void)
>{
>char *p, buf[5];
> 
>p = buf + 6;
>printf("%p\n", p);
>}
> 
> 
> There are no warnings in gcc, as I would expect:
> 
>$ gcc -Wall -Wextra buf.c -O0
> 
> Clang does warn, however:
> 
>$ clang -Weverything -Wall -Wextra buf.c -O0
>buf.c:8:17: warning: format specifies type 'void *' but the argument has 
> type 'char *' [-Wformat-pedantic]
>printf("%p\n", p);
>~~ ^
>%s
>buf.c:7:6: warning: the pointer incremented by 6 refers past the end of 
> the array (that contains 5 elements) [-Warray-bounds-pointer-arithmetic]
>p = buf + 6;
>^ ~

I thought void * is a generic pointer that accepts any pointer argument.  So a 
warning about char* being passed in seems to be flat out wrong.

>buf.c:5:2: note: array 'buf' declared here
>char *p, buf[5];
>^
>2 warnings generated.

That was discussed just days ago: C says that a pointer one past the end of the 
array is legal.  So here too it looks like Clang is wrong and GCC is right.

paul



Re: Good grief Charlie Brown

2022-12-13 Thread Paul Koning via Gcc



> On Dec 13, 2022, at 2:09 PM, Dave Blanchard  wrote:
> 
> Since my response did not get posted (maybe one of the words wasn't allowed? 
> or because I attached binaries?) here it is again:
> ...

I'm puzzled.  What is your purpose?  What result do you expect from your 
messages?  What action would you like to see?

I see some vague hints that there are some issues.  If you want those to be 
addressed, you should report them as defects through the bug tracker.  To do 
so, you should be specific as to what's failing so others can reproduce what 
you did and see what goes wrong.

Apart from that, generic "I don't like what you guys did" is obviously just 
going to get ignored as meaningless ravings.

paul



Re: Can't build Ada

2022-11-26 Thread Paul Koning via Gcc



> On Nov 26, 2022, at 11:42 AM, Arnaud Charlet via Gcc  wrote:
> 
> 
>>> The current statement  (https://gcc.gnu.org/install/prerequisites.html) is:
>>> 
>>> GNAT
>>> In order to build GNAT, the Ada compiler, you need a working GNAT compiler 
>>> (GCC version 5.1 or later).
>>> 
>>> so, if 5.1 is not working, then perhaps a PR is in order.
>> 
>> I will do that, if the "shell in Rosetta" thing doesn't cure the problem.
> 
> You won’t need to, the version of gnat you are using is recent enough, you 
> need to follow Ian’s instructions to the letter. The Ada 2022 code is a red 
> herring and is only problematic when you build a cross with a non matching 
> native, not when building a native compiler.
> 
> Arno

All I can tell you is that I'm pretty sure I'm doing what Iain said, using his 
branch (up to date), and using the compilers from the Adacore open source 
release (20200818) which is GCC 8.4.1.  And once again I got that same 
complaint about Ada2020 constructs:

/usr/local/gnat/bin/gcc -c -g -O2  -gnatpg -gnata -W -Wall -nostdinc -I- 
-I. -Iada/generated -Iada -I../../../gcc-darwin/gcc/ada -Iada/libgnat 
-I../../../gcc-darwin/gcc/ada/libgnat -Iada/gcc-interface 
-I../../../gcc-darwin/gcc/ada/gcc-interface 
../../../gcc-darwin/gcc/ada/contracts.adb -o ada/contracts.o
s-imagei.ads:95:11: declare_expression is an Ada 2020 feature
s-valueu.ads:152:09: declare_expression is an Ada 2020 feature
s-valueu.ads:160:09: declare_expression is an Ada 2020 feature
s-valueu.ads:184:06: "Subprogram_Variant" is not a valid aspect identifier
s-valuei.ads:80:11: declare_expression is an Ada 2020 feature
s-valuei.ads:95:08: declare_expression is an Ada 2020 feature
s-valuei.ads:141:06: "Subprogram_Variant" is not a valid aspect identifier
s-widthu.ads:84:09: declare_expression is an Ada 2020 feature
s-widthu.ads:93:11: run-time library configuration error
s-widthu.ads:93:11: file s-imgint.ads had parser errors
s-widthu.ads:93:11: entity "System.Img_Int.Image_Integer" not available
compilation abandoned
make[2]: *** [ada/contracts.o] Error 1

paul



Re: Can't build Ada

2022-11-26 Thread Paul Koning via Gcc



> On Nov 26, 2022, at 11:52 AM, Iain Sandoe  wrote:
> 
> 
> 
>> On 26 Nov 2022, at 16:42, Arnaud Charlet  wrote:
>> 
>> 
>>>> The current statement  (https://gcc.gnu.org/install/prerequisites.html) is:
>>>> 
>>>> GNAT
>>>> In order to build GNAT, the Ada compiler, you need a working GNAT compiler 
>>>> (GCC version 5.1 or later).
>>>> 
>>>> so, if 5.1 is not working, then perhaps a PR is in order.
>>> 
>>> I will do that, if the "shell in Rosetta" thing doesn't cure the problem.
>> 
>> You won’t need to, the version of gnat you are using is recent enough, you 
>> need to follow Ian’s instructions to the letter. The Ada 2022 code is a red 
>> herring and is only problematic when you build a cross with a non matching 
>> native, not when building a native compiler.
> 
> One additional question/point - which branch are you trying to build the 
> cross from?
> 
> I am sure it will not work from upstream master.
> 
> Unfortunately, owing to lack of free time… aarch64-darwin is not yet 
> completely ready to upstream, so folks are using the development branch here: 
> https://github.com/iains/gcc-darwin-arm64 (which I will update later, based 
> on the master version mentioned earlier; if testing goes OK).
> 
> Iain.

That's the branch I'm using.

paul



Re: Can't build Ada

2022-11-26 Thread Paul Koning via Gcc



> On Nov 26, 2022, at 10:58 AM, Iain Sandoe  wrote:
> 
> Hi Paul,
> 
> I am part way through the exercise on both macOS 11 (X86) and 12 (Arm64).
> 
> ** However, I am using gcc-7.5 as the bootstrap compiler, not gcc-5.1.

I'm not using 5.1 -- I only quoted that version number because the install 
documentation mentions it.  The actual bootstrap compiler is 8.4.1:

pkoning:gcc-darwin-x86 pkoning$ /usr/local/gnat/bin/gcc --version
gcc (GCC) 8.4.1 20200430 (for GNAT Community 2020 20200818)

> You might find problems unless you actually start a Rosetta 2 shell - so 
> “ arch -x86_64 bash “ 
> and then go from there (this seems to ensure that sub-processes are started 
> as x86_64)
> 
> (with this, bootstrap succeeded for both x86_64 Rosetta 2  and rebased Arm64 
> branch native - r13-4309-g309e2d95e3b9)
> 
> I will push the rebased arm64 branch when testing is done.
> 
>> So I'm guessing I'll have to do this in two parts, first build a newer but 
>> not-latest Gnat from a release that doesn't include the problematic 
>> constructs, then follow that by using the intermediate to build the current 
>> sources.
>> 
>> I wonder if this incompatibility was intentional.  If not it would be good 
>> for the Ada maintainers to fix these and ensure that the current code can 
>> still be built with the most recent public release of Gnat.  Conversely, if 
>> it is intentional, the documentation should be updated to explain how to 
>> build the current code.
> 
> The current statement  (https://gcc.gnu.org/install/prerequisites.html) is:
> 
> GNAT
> In order to build GNAT, the Ada compiler, you need a working GNAT compiler 
> (GCC version 5.1 or later).
> 
> so, if 5.1 is not working, then perhaps a PR is in order.

I will do that, if the "shell in Rosetta" thing doesn't cure the problem.

paul




Re: GNU = Junkware

2022-11-26 Thread Paul Koning via Gcc



> On Nov 26, 2022, at 4:20 AM, Dave Blanchard  wrote:
> 
> No, I'm not trolling, just venting here for a moment. So sick of garbage ass, 
> crusty junkware that's always a battle to the death to accomplish anything.

I don't know who you are or why you feel a need to spew obscenities on the GCC 
technical mailing list.  Especially since you have never contributed anything 
to GCC.

Please stop.  And to the list overseers: if it doesn't stop, please apply a 
suitable blacklist.

paul




Re: Can't build Ada

2022-11-26 Thread Paul Koning via Gcc



> On Nov 25, 2022, at 3:46 PM, Iain Sandoe  wrote:
> 
> Hi Paul,
> 
>> On 25 Nov 2022, at 20:13, Andrew Pinski via Gcc  wrote:
>> 
>> On Fri, Nov 25, 2022 at 12:08 PM Paul Koning  wrote:
>>> 
>>>> On Nov 25, 2022, at 3:03 PM, Andrew Pinski  wrote:
>>>> 
>>>> On Fri, Nov 25, 2022 at 11:59 AM Paul Koning via Gcc  
>>>> wrote:
>>>>> 
>>>>> I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be 
>>>>> precise) to build Ada, starting with the latest (2020) release of Gnat 
>>>>> from Adacore.
>>>> 
>>>> Are you building a cross compiler or a native compiler?
>>>> If you are building a cross, you need to bootstrap a native compiler first.
>>> 
>>> I'm not sure.  The installed Gnat is x86_64-darwin; I want to build 
>>> aarch64-darwin.
>> 
>> You have to build a x86_64-darwin compiler first with the same sources
>> as you are building for aarch64-darwin.
> 
> So .. 
> 1/ if you are on arm64 Darwin, 
>  - the first step is to bootstrap the compiler using Rosetta 2 and the 
> available x86_64 gnat.
> 
> 2/ if you are on x86_64 Darwin…
>  - the first step is to bootstrap the compiler using the available x86-64 
> gnat.

Thanks all.

I tried that (#1) and got the same failure.  The trouble seems to be that the 
current sources have Ada2020 constructs in them and the available Gnat doesn't 
support that version.  The commit that introduces these (or some of them at 
least) is 91d68769419b from Feb 4, 2022.

So I'm guessing I'll have to do this in two parts, first build a newer but 
not-latest Gnat from a release that doesn't include the problematic constructs, 
then follow that by using the intermediate to build the current sources.

I wonder if this incompatibility was intentional.  If not it would be good for 
the Ada maintainers to fix these and ensure that the current code can still be 
built with the most recent public release of Gnat.  Conversely, if it is 
intentional, the documentation should be updated to explain how to build the 
current code.

paul




Re: Can't build Ada

2022-11-25 Thread Paul Koning via Gcc



> On Nov 25, 2022, at 3:03 PM, Andrew Pinski  wrote:
> 
> On Fri, Nov 25, 2022 at 11:59 AM Paul Koning via Gcc  wrote:
>> 
>> I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be 
>> precise) to build Ada, starting with the latest (2020) release of Gnat from 
>> Adacore.
> 
> Are you building a cross compiler or a native compiler?
> If you are building a cross, you need to bootstrap a native compiler first.

I'm not sure.  The installed Gnat is x86_64-darwin; I want to build 
aarch64-darwin.

But in any case, how does that relate to the error messages I got?  They don't 
seem to have anything to do with missing compilers, but rather with the use of 
language features too new for the available (downloadable) Gnat.

paul




Can't build Ada

2022-11-25 Thread Paul Koning via Gcc
I'm trying to use fairly recent GCC sources (the gcc-darwin branch to be 
precise) to build Ada, starting with the latest (2020) release of Gnat from 
Adacore.

It fails for several reasons.  One is that two source files use [ ] for array 
initializer brackets when ( ) is apparently supposed to be used instead.  Once 
I fix that, I get a pile of messages I don't know what to do about:

s-imagei.ads:95:11: declare_expression is an Ada 2020 feature
s-valueu.ads:152:09: declare_expression is an Ada 2020 feature
s-valueu.ads:160:09: declare_expression is an Ada 2020 feature
s-valueu.ads:184:06: "Subprogram_Variant" is not a valid aspect identifier
s-valuei.ads:80:11: declare_expression is an Ada 2020 feature
s-valuei.ads:95:08: declare_expression is an Ada 2020 feature
s-valuei.ads:141:06: "Subprogram_Variant" is not a valid aspect identifier
s-widthu.ads:84:09: declare_expression is an Ada 2020 feature
s-widthu.ads:93:11: run-time library configuration error
s-widthu.ads:93:11: file s-imgint.ads had parser errors
s-widthu.ads:93:11: entity "System.Img_Int.Image_Integer" not available
compilation abandoned
make[2]: *** [ada/contracts.o] Error 1

Given that the current open source Gnat is from 2020, so (apparently) it 
doesn't support Ada 2020 features, how is someone supposed to build the current 
GCC?  I looked in the prerequisites listing on the webpage, but it says that 
a Gnat built on GCC 5.1 is sufficient.  That seems to be wrong; the 
GCC in Gnat 2020 is 8.4.1 and it is apparently too old to work.

paul



Re: clarification question

2022-10-23 Thread Paul Koning via Gcc



> On Oct 22, 2022, at 2:38 PM, Marc Glisse via Gcc  wrote:
> 
> On Sat, 22 Oct 2022, Péntek Imre via Gcc wrote:
> 
>> https://gcc.gnu.org/backends.html
>> 
>> by "Architecture does not have a single condition code register" do you mean 
>> it has none or do you mean it has multiple?
> 
> Either.
> 
> If you look at the examples below, there is a C for riscv, which has 0, and 
> one for sparc, which has several.

Also pdp11, which has two: one for floating point, one for integers, and 
conditional branches act only on the integer CC register.  So the MD has to 
describe a "move float CC to integer CC" operation.
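
As a rough sketch, such a "move float CC to integer CC" pattern might be
written along these lines in the machine description (the register names,
the CCFP mode, and the "cfcc" mnemonic below are illustrative, not the
actual pdp11.md contents):

```
;; Sketch only: copy the floating-point condition codes into the
;; integer CC register so the ordinary conditional branches, which
;; only look at the integer CC, can test a float comparison result.
(define_insn "*copy_fcc_to_cc"
  [(set (reg:CC CC_REGNUM)
        (reg:CCFP FCC_REGNUM))]
  ""
  "cfcc")
```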

GCC supports all these strange things quite nicely -- this is one of several 
things that the newer CCmode machinery does well and the old "cc0" stuff 
doesn't.

paul




Re: Possible C++ method signature warning feature?

2022-08-11 Thread Paul Koning via Gcc



> On Aug 10, 2022, at 9:25 PM, Andrew Pinski  wrote:
> 
> On Wed, Aug 10, 2022 at 6:20 PM Paul Koning via Gcc  wrote:
>> 
>> There's a C++ problem I keep running into, in a very large body of software 
>> with lots of subclassing.
>> 
>> There's a base class that defines a set of interface methods, not all pure 
>> virtual (some define the default behavior).  A number of subclasses override 
>> some but not all of these.
>> 
>> Now I find myself changing the argument list of some of these methods, so I 
>> have to change the base class definitions and also track down all the 
>> subclass redefinitions.  If I miss one of the latter, that subclass method 
>> is no longer called (it now just looks like an unrelated method with a 
>> different argument list that isn't used anywhere).  Finding these things can 
>> be hard and time consuming.
>> 
>> It would be helpful to have some way to mark a method as "this is supposed 
>> to be an override of a base class method", in other words "warn me if this 
>> method doesn't override some method in a base class".
> 
> C++11's override keyword sounds exactly like what you want.
> https://en.cppreference.com/w/cpp/language/override

Perfect!  Thanks much.

paul



Possible C++ method signature warning feature?

2022-08-10 Thread Paul Koning via Gcc
There's a C++ problem I keep running into, in a very large body of software 
with lots of subclassing.

There's a base class that defines a set of interface methods, not all pure 
virtual (some define the default behavior).  A number of subclasses override 
some but not all of these.

Now I find myself changing the argument list of some of these methods, so I 
have to change the base class definitions and also track down all the subclass 
redefinitions.  If I miss one of the latter, that subclass method is no longer 
called (it now just looks like an unrelated method with a different argument 
list that isn't used anywhere).  Finding these things can be hard and time 
consuming.

It would be helpful to have some way to mark a method as "this is supposed to 
be an override of a base class method", in other words "warn me if this method 
doesn't override some method in a base class".

Does that sound like a possible thing to do, perhaps with some __attribute__ 
magic?  Would it be interesting?

paul



Re: Signed vs. unsigned compares

2022-06-17 Thread Paul Koning via Gcc



> On Jun 17, 2022, at 11:51 AM, Andreas Schwab  wrote:
> 
> On Jun 17 2022, Paul Koning via Gcc wrote:
> 
>> In expanding a longer-than-word compare, I need to do things differently 
>> depending on whether a signed vs. unsigned compare is needed.  But it seems 
>> the compare operation applies to either.  How can I do this in the target 
>> code?
> 
> There are both signed and unsigned comparison operations, eg. GE
> vs. GEU.
> 
> -- 
> Andreas Schwab, sch...@linux-m68k.org

It looks like what I need is to define additional CCmodes, to describe "cc mode 
valid for unsigned comparison" vs. "cc mode valid for signed comparison".  
Right now I don't make that distinction and that reportedly can produce bad 
code errors.

paul




Signed vs. unsigned compares

2022-06-17 Thread Paul Koning via Gcc
Question for target code:

In expanding a longer-than-word compare, I need to do things differently 
depending on whether a signed vs. unsigned compare is needed.  But it seems the 
compare operation applies to either.  How can I do this in the target code?

paul



Switch statement optimization

2022-04-18 Thread Paul Koning via Gcc
In switch statements with dense case values, the typical result is a jump 
table, which is fast.  If the values are sparse, a tree of compares is 
generated instead.

What if nearly all cases are dense but there are a few outliers?  An example 
appears in the NFS protocol parser, where you get a switch statement with cases 
for each of the opcode values.  All but one are small integers assigned in 
sequence, but one is 10044.  So the "sparse" case kicks in and a compare tree 
is generated for everything.

I can avoid this by putting in special-case code for the 10044 case; then all 
the rest ends up being a jump table.  That raises the question of whether GCC should 
recognize such scenarios and break up the switch statement into "dense parts" 
handled by a jump table, leaving the sorting between those as a compare tree.

paul



Re: Benchmark recommendations needed

2022-02-22 Thread Paul Koning via Gcc



> On Feb 22, 2022, at 4:26 PM, Gary Oblock via Gcc  wrote:
> 
> Andras,
> 
> The whole point of benchmarks is to judge a processor's performance.
> That being said, just crippling GCC is not reasonable because
> processors must be judged in the appropriate context and that
> includes the current state of the art compiler technology. If you have
> a new processor I'd benchmark it using the applications you built it
> for.

Exactly.  Part of what you want to see is that GCC optimizes well for the new 
machine, i.e., that there aren't artifacts of the machine description that get 
in the way of optimization.

So you'd want to use applications that are good exercises not just of the code 
generator but also the optimizer.  Dhrystone isn't really that, because it has 
evolved into mostly an optimizer test, not a machine or code generator test.

paul



Re: reordering of trapping operations and volatile

2022-01-15 Thread Paul Koning via Gcc



> On Jan 15, 2022, at 4:28 PM, Martin Sebor  wrote:
> 
> On 1/14/22 07:58, Paul Koning via Gcc wrote:
>> On Jan 14, 2022, at 9:15 AM, Michael Matz via Gcc  wrote:
>> 
>>>> ...
>>> But right now that's equivalent to making it observable,
>>> because we don't have any other terms than observable or
>>> undefined.  As alluded to later you would have to
>>> introduce a new concept, something pseudo-observable,
>>> which you then started doing.  So, see below.
>> I find it really hard to view this notion of doing work for UB with any 
>> favor.  The way I see it is that a program having UB is synonymous with 
>> "defective program" and for the compiler to do extra work for these doesn't 
>> make much sense to me.
> 
> This is also the official position of the C committee on record,
> but it's one that's now being challenged.
> 
>> If the issue is specifically the handling of overflow traps, perhaps a 
>> better answer would be to argue for language changes that manage such events 
>> explicitly rather than having them be undefined behavior.  Another (similar) 
>> option might be to choose a language in which this is done.  (Is Ada such a 
>> language?  I don't remember.)
> 
> A change to the language standard is only feasible if it doesn't
> overly constrain existing implementations. 

I was thinking that if a new feature is involved, rather than a new definition 
of behavior for existing code, it wouldn't be a constraint on existing 
implementations (in the sense of "what the compiler does for existing code 
written to the current rules").  In other words, suppose there was a concept of 
"trapping operations" that could be enabled by some new mechanism in the 
program text.  If you use that, then you're asking the compiler to do more work 
and your code may get slower or bigger.  But if you don't, the existing rules 
apply and nothing bad happens (other than that the compiler is somewhat larger 
and more complex due to the support for both cases).

paul




Re: reordering of trapping operations and volatile

2022-01-14 Thread Paul Koning via Gcc



> On Jan 14, 2022, at 9:15 AM, Michael Matz via Gcc  wrote:
> 
> Hello,
> 
> On Thu, 13 Jan 2022, Martin Uecker wrote:
> 
>>>>> Handling all volatile accesses in the very same way would be 
>>>>> possible but quite some work I don't see much value in.
>>>> 
>>>> I see some value. 
>>>> 
>>>> But an alternative could be to remove volatile
>>>> from the observable behavior in the standard
>>>> or make it implementation-defined whether it
>>>> is observable or not.
>>> 
>>> But you are actually arguing for making UB be observable
>> 
>> No, I am arguing for UB not to have the power
>> to go back in time and change previous defined
>> observable behavior.
> 
> But right now that's equivalent to making it observable,
> because we don't have any other terms than observable or
> undefined.  As alluded to later you would have to
> introduce a new concept, something pseudo-observable,
> which you then started doing.  So, see below.

I find it really hard to view this notion of doing work for UB with any favor.  
The way I see it is that a program having UB is synonymous with "defective 
program" and for the compiler to do extra work for these doesn't make much 
sense to me.

If the issue is specifically the handling of overflow traps, perhaps a better 
answer would be to argue for language changes that manage such events 
explicitly rather than having them be undefined behavior.  Another (similar) 
option might be to choose a language in which this is done.  (Is Ada such a 
language?  I don't remember.)

paul




Re: Help with an ABI peculiarity

2022-01-07 Thread Paul Koning via Gcc



> On Jan 7, 2022, at 4:06 PM, Iain Sandoe  wrote:
> 
> Hi Folks,
> 
> In the aarch64 Darwin ABI we have an unusual (OK, several unusual) feature of 
> the calling convention.
> 
> When an argument is passed *in a register* and it is integral and less than 
> SI it is promoted (with appropriate signedness) to SI.  This applies when the 
> function parm is named only.
> 
> When the same argument would be placed on the stack (i.e. we ran out of 
> registers) - it occupies its natural size, and is naturally aligned (so, for 
> instance, 3 QI values could be passed as 3 registers - promoted to SI .. or 
> packed into three adjacent bytes on the stack)..
> 
> The key is that we need to know that the argument will be placed in a 
> register before we decide whether to promote it.
> (similarly, the promotion is not done in the callee for the in-register case).
> 
> I am trying to figure out where to implement this.

I don't remember the MIPS machinery well enough, but is that a similar case?  
It too has register arguments (4 or 8 of them) along with stack arguments (for 
the rest).

paul




Re: __builtin_addc support??

2021-10-27 Thread Paul Koning via Gcc



> On Oct 27, 2021, at 12:12 PM, sotrdg sotrdg via Gcc  wrote:
> 
> 79173 – add-with-carry and subtract-with-borrow support (x86_64 and others) 
> (gnu.org)
> 
> What I find quite interesting is things like this.
> 
> Since llvm clang provides __builtin_addc __builtin_subc for all targets. Can 
> we provide something similar? Since currently there is no way to access the 
> carry flag except on x86

Certainly some other targets could do this.  The LLVM builtins explicitly 
expose carry, which isn't actually what you want (you'd want the carry flag in 
the condition code to be propagated).  Presumably optimization would eliminate 
those explicit arguments and reduce them to CC references.

paul




Re: Developer branches

2021-09-15 Thread Paul Koning via Gcc



> On Sep 15, 2021, at 5:21 PM, Joseph Myers  wrote:
> 
> On Wed, 15 Sep 2021, Paul Koning via Gcc wrote:
> 
>> Some questions about developer branches:
>> 
>> 1. Who may create one?  Who may write to them?
>> 2. Are they required to be listed in https://gcc.gnu.org/git.html ?  I 
>> notice it mentioned a whole pile of them, most of which don't seem to 
>> exist.
> 
> A devel/ branch (one in refs/heads/devel/) is a shared development branch, 
> which may be created by anyone with write access (who can decide how it 
> will work in terms of patch approvals etc.), should be documented in 
> git.html, and will not accept non-fast-forward pushes or branch deletion.
> 
> A user branch (in refs/users/<username>/heads/) is a personal development 
> branch, which may be created by that user (sourceware username), may not 
> necessarily be documented in git.html, and can have non-fast-forward 
> pushes or branch deletion (it's up to that user to decide the rules for 
> that branch, including for non-fast-forward pushes).  Likewise a vendor 
> branch (in refs/vendors/<vendor>/heads/).
> 
> All branches are subject to the same legal requirements (copyright 
> assignment or DCO for code committed there).
> ...

Thanks, that's useful.  Suppose I want to collaborate with one other person 
(for now) on pdp11 target work, does it make sense to keep that in a user 
branch since the community is so small and isolated?  I assume the other person 
would need (as a minimum) write-after-approval privs.  

paul



Re: Developer branches

2021-09-15 Thread Paul Koning via Gcc



> On Sep 15, 2021, at 4:34 PM, Jonathan Wakely  wrote:
> 
> On Wed, 15 Sept 2021 at 21:12, Paul Koning via Gcc  wrote:
>> 
>> Some questions about developer branches:
>> 
>> 1. Who may create one?  Who may write to them?
>> 2. Are they required to be listed in https://gcc.gnu.org/git.html ?  I 
>> notice it mentioned a whole pile of them, most of which don't seem to exist.
> 
> Which ones? All the ones I looked for exist.

Perhaps I did the procedures wrong.  I did a git pull, then git branch -a 
|fgrep devel.  I see 24 entries.  Looking at the git.html page, that mentions a 
lot more.  Some examples that do exist: modula-2, ira-select.  Some that don't: 
the ave* branches, x86, var-template.

paul




Developer branches

2021-09-15 Thread Paul Koning via Gcc
Some questions about developer branches:

1. Who may create one?  Who may write to them?
2. Are they required to be listed in https://gcc.gnu.org/git.html ?  I notice 
it mentioned a whole pile of them, most of which don't seem to exist.

It's a bit confusing since this seems to be a concept that is used, but not 
clearly documented on the web pages.

paul



Re: Optional machine prefix for programs in for -B dirs, matching Clang

2021-08-04 Thread Paul Koning via Gcc



> On Aug 4, 2021, at 3:32 AM, Jonathan Wakely via Gcc  wrote:
> 
> On Wed, 4 Aug 2021, 08:26 John Ericson wrote:
> 
>> Problem:
>> 
>> It's somewhat annoying to have to tell GCC --with-as=... --with-ld=...
>> just to prefix those commands the same way GCC is prefixed.
>> 
> 
> Doesn't GCC automatically look for those commands in the --prefix directory
> that you configure GCC with? Or is that only for native compilers?

It does.  That's how I configure my cross-builds.

paul



Re: GCC 10.2: undefined reference to vtable: missing its key function

2021-06-07 Thread Paul Koning via Gcc



> On Jun 6, 2021, at 5:41 PM, Paul Smith  wrote:
> 
> I have a class which is NOT, as far as I can see, polymorphic.
> 
> It doesn't inherit from any other class and none of its methods are
> declared virtual.  The class implementation and all its callers all
> compile just fine.
> 
> Is there some other way that a class can be thought to be virtual,
> stealthily (say by its usage in some other class or something)?

I may be remembering wrong, but -- doesn't dynamic_cast look for a vtable?  So 
if you do a dynamic cast mentioning that class you'd get such a reference.  Or 
is that an error when applied to a non-polymorphic class?

paul




Re: RFC: New mechanism for hard reg operands to inline asm

2021-06-04 Thread Paul Koning via Gcc



> On Jun 4, 2021, at 2:02 PM, Andreas Krebbel via Gcc  wrote:
> 
> Hi,
> 
> I wonder if we could replace the register asm construct for
> inline assemblies with something a bit nicer and more obvious.
> E.g. turning this (real world example from IBM Z kernel code):
> 
> int diag8_response(int cmdlen, char *response, int *rlen)
> {
>register unsigned long reg2 asm ("2") = (addr_t) cpcmd_buf;
>register unsigned long reg3 asm ("3") = (addr_t) response;
>register unsigned long reg4 asm ("4") = cmdlen | 0x4000L;
>register unsigned long reg5 asm ("5") = *rlen; /* <-- */
>asm volatile(
>"   diag%2,%0,0x8\n"
>"   brc 8,1f\n"
>"   agr %1,%4\n"
>"1:\n"
>: "+d" (reg4), "+d" (reg5)
>: "d" (reg2), "d" (reg3), "d" (*rlen): "cc");
>*rlen = reg5;
>return reg4;
> }
> 
> into this:
> 
> int diag8_response(int cmdlen, char *response, int *rlen)
> {
>unsigned long len = cmdlen | 0x4000L;
> 
>asm volatile(
>"   diag%2,%0,0x8\n"
>"   brc 8,1f\n"
>"   agr %1,%4\n"
>"1:\n"
>: "+{r4}" (len), "+{r5}" (*rlen)
>: "{r2}" ((addr_t)cpcmd_buf), "{r3}" ((addr_t)response), "d" 
> (*rlen): "cc");
>return len;
> }
> 
> Apart from being much easier to read because the hard regs become part
> of the inline assembly it solves also a couple of other issues:
> 
> - function calls might clobber register asm variables see BZ100908
> - the constraints for the register asm operands are superfluous
> - one register asm variable cannot be used for 2 different inline
>  assemblies if the value is expected in different hard regs
> 
> I've started with a hackish implementation for IBM Z using the
> TARGET_MD_ASM_ADJUST hook and let all the places parsing constraints
> skip over the {} parts.  But perhaps it would be useful to make this a
> generic mechanism for all targets?!
> 
> Andreas

Yes, I would think this should be made a general mechanism that any target 
could use.

I wonder if instead of creating a new mechanism you could do this simply by 
creating new constraint names, where each name matches exactly one hard 
register.  That's roughly what this amounts to, isn't it? 
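
In current terms that might be a one-register constraint in the backend's
constraints.md (a sketch; the constraint name and register class are
invented):

```
;; Sketch only: a register class containing just one hard register,
;; exposed as a constraint so an asm operand can be pinned to it.
(define_register_constraint "ZR4" "R4_REG_CLASS"
  "The single hard register @code{r4}.")
```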

paul



Re: Update to GCC copyright assignment policy

2021-06-01 Thread Paul Koning via Gcc



> On Jun 1, 2021, at 12:44 PM, Joseph Myers  wrote:
> 
> On Tue, 1 Jun 2021, David Edelsohn via Gcc wrote:
> 
>> The copyright author will be listed as "Free Software Foundation,
>> Inc." and/or "The GNU Toolchain Authors", as appropriate.
> 
> And copyright notices naming "The GNU Toolchain Authors" should not 
> include a date - that's following the recommendations at 
> https://www.linuxfoundation.org/en/blog/copyright-notices-in-open-source-software-projects/
>  
> for the form of copyright notices in projects with many copyright holders.
> 
> -- 
> Joseph S. Myers
> jos...@codesourcery.com

That's a nice document, but it makes clear that a collective designation of a 
group of authors in a copyright notice is just a convenient shorthand.  It 
mentions that the copyright notice is just a notice that doesn't actually 
affect the copyright (which remains with the individual authors or their 
employers, unless assigned).  So "GNU Toolchain Authors" is a description 
referring to a set of individual owners, one that changes over time.  It 
doesn't describe a legal body, and it isn't an owner of anything.

paul



Re: Update to GCC copyright assignment policy

2021-06-01 Thread Paul Koning via Gcc



> On Jun 1, 2021, at 12:09 PM, David Edelsohn via Gcc  wrote:
> 
> On Tue, Jun 1, 2021 at 10:40 AM Paul Koning  wrote:
>> 
>>> On Jun 1, 2021, at 10:31 AM, David Edelsohn via Gcc  wrote:
>>> 
>>> The copyright author will be listed as "Free Software Foundation,
>>> Inc." and/or "The GNU Toolchain Authors", as appropriate.
>> 
>> What does that mean?  FSF is a well defined organization.  "The GNU 
>> Toolchain Authors" sounds like one, but is it?  Or is it just a label for 
>> "the set of contributors who have contributed without assigning to FSF"?  In 
>> other words, who is the owner of such a work, the GTA, or the submitter?  
>> I'm guessing the latter.
>> 
>> That seems to create a possible future complication.  Prior to this change, 
>> the FSF (as owner of the copyright) could make changes such as replacing the 
>> GPL 2 license by GPL 3.  With the policy change, that would no longer be 
>> possible, unless you get the approval of all the copyright holders.  This 
>> may not be considered a problem, but it does seem like a change.
>> 
>> I looked at gcc.gnu.org to find the updated policy.  I don't think it's 
>> there; the "contribute" page wording still feels like the old policy.  Given 
>> the change, it would seem rather important to have the implications spelled 
>> out in full, and the new rules clearly expressed.
> 
> The GNU Toolchain Authors are all of the authors, including those with
> FSF Copyright.  All of the authors agree to the existing license,
> which is "...either version 3, or (at your option) any later version."
> If the project chooses to adopt a future update to the GPL, all of
> the authors have given permission through the existing copyright
> assignment or through certification of the DCO to utilize the newer
> license.
> 
> Thanks, David

By DCO do you mean the document you linked in your original announcement?  If 
yes, could you point out which of the words in that document give the GCC 
project permission from the copyright holder to relicense the contributed work? 
 I do not see those words in the document you linked.

I get the feeling that the current change was rushed and not well considered.  
It certainly has that feel.  I do not remember discussion of it, I do not see 
updated policy documents on the gcc.gnu.org website.  The discussion just now 
is raising a pile of questions which are being answered with a whole bunch of 
different answers, not all consistent with each other.  If the change had been 
carefully made this would not be happening; there would instead be a known 
answer (the outcome of prior discussion) and there would be published policies 
that could be pointed to where those answers are explicitly stated.

It's not that I object to the spirit of the change, and I have contributed to a 
number of open source projects where there is no copyright assignment so that 
isn't a strange thing to me.  What concerns me is a disruptive change made with 
what seems to me to be inadequate care.

paul



Re: Update to GCC copyright assignment policy

2021-06-01 Thread Paul Koning via Gcc



> On Jun 1, 2021, at 11:08 AM, Jason Merrill via Gcc  wrote:
> 
> On Tue, Jun 1, 2021 at 10:52 AM D. Hugh Redelmeier  wrote:
> 
>> | From: Mark Wielaard 
>> 
>> | This seems a pretty bad policy to be honest.
>> | Why was there no public discussion on this?
>> 
>> Agreed.  I also agree with the rest of Mark's message.
>> 
>> (Note: I haven't contributed to GCC but I have contributed to other
>> copylefted code bases.)
>> 
>> It is important that the pool be trustable.  A tall order, but
>> solvable, I think.
>> 
>> Two pools (FSF for old stuff, something else, for new stuff if the
>> contributor prefers) should be quite managable.
>> 
>> This would allow, for example, moving to an updated copyleft if the
>> two pools agreed.  It is important that the governance of the pool be
>> trustable.
>> 
>> We've trusted the FSF and now some have qualms.  A second pool would
>> be a check on the power of the first pool.
>> 
>> Individual unassigned copyright pretty much guarantees that the
>> copyright terms can never be changed.  I don't think that that is
>> optimal.
>> 
> 
> GCC's license is "GPL version 3 or later", so if there ever needed to be a
> GPL v4, we could move to it without needing permission from anyone.

I don't think that is what the license says.  It says:

GCC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3, or (at your option)
any later version.

To me that means the recipient of the software can relicense it under a later 
license.  It doesn't say to me that the original distribution can do so.

paul



Re: Update to GCC copyright assignment policy

2021-06-01 Thread Paul Koning via Gcc



> On Jun 1, 2021, at 10:31 AM, David Edelsohn via Gcc  wrote:
> 
> The copyright author will be listed as "Free Software Foundation,
> Inc." and/or "The GNU Toolchain Authors", as appropriate.

What does that mean?  FSF is a well defined organization.  "The GNU Toolchain 
Authors" sounds like one, but is it?  Or is it just a label for "the set of 
contributors who have contributed without assigning to FSF"?  In other words, 
who is the owner of such a work, the GTA, or the submitter?  I'm guessing the 
latter.

That seems to create a possible future complication.  Prior to this change, the 
FSF (as owner of the copyright) could make changes such as replacing the GPL 2 
license by GPL 3.  With the policy change, that would no longer be possible, 
unless you get the approval of all the copyright holders.  This may not be 
considered a problem, but it does seem like a change.

I looked at gcc.gnu.org to find the updated policy.  I don't think it's there; 
the "contribute" page wording still feels like the old policy.  Given the 
change, it would seem rather important to have the implications spelled out in 
full, and the new rules clearly expressed.

paul



Re: On US corporate influence over Free Software and the GCC Steering Committee

2021-04-20 Thread Paul Koning via Gcc



> On Apr 20, 2021, at 9:22 AM, David Starner via Gcc  wrote:
> 
> Giacomo Tesio wrote:
>> ...
>> Please, do not create a hostile environment for independent contributors.
> 
> What do you mean by independent? If you're independently wealthy and
> don't need to work, you can avoid this. If you're a cashier or field
> laborer or in some other poorly paid job, then your employer probably
> doesn't care. Otherwise, if you work for a company that produces
> software, even just internally, or for a university, or a company
> where your name might be associated with the company, then your
> employer may demand that you cease publicly working on Free Software.

Not necessarily.

I'll offer you my own example.  I'm the target maintainer for pdp11.  It should 
be obvious that I'm doing this as an "independent contributor", not as part of 
my job, and indeed that work is for that reason spare time work done outside of 
business hours.

I have a company copyright disclaimer in place, dating back to some years ago 
when the work I was doing on gcc did at times touch on (then) company business. 
 As a legal matter it is doubtful that my current GCC involvement even needs 
that copyright disclaimer since the work is not on company time and not in the 
company's field of enterprise.

In other words, we have jobs (unless we're retired or unemployed) as well as 
private time, and there are a number of GCC contributors who contribute their 
private time.  I take Giacomo's comment to refer to that case.

paul




Re: A suggestion for going forward from the RMS/FSF debate

2021-04-16 Thread Paul Koning via Gcc



> On Apr 16, 2021, at 2:41 PM, NightStrike via Gcc  wrote:
> 
>> ...
> 
> I was under the (likely incorrect, please enlighten me) impression
> that the meteoric rise of LLVM had more to do with the license
> allowing corporate contributors to ship derived works in binary form
> without sharing proprietary code. 

My impression is a variation on this: that LLVM is in substantial part 
motivated by a desire to avoid GPL V3.  And that there wasn't such a push when 
GPL V2 was the version in general use.

paul




Re: removing toxic emailers

2021-04-15 Thread Paul Koning via Gcc



> On Apr 15, 2021, at 7:44 PM, Frosku  wrote:
> 
> On Fri Apr 16, 2021 at 12:36 AM BST, Christopher Dimech wrote:
>> 
>> The commercial use of free software is our hope, not our fear. When
>> people
>> at IBM began to come to free software, wanting to recommend it and use
>> it,
>> and maybe distribute it themselves or encourage other people to
>> distribute
>> it for them, we did not criticise them for not being non-profit virtuous
>> enough, or said "we are suspicious of you", let alone threatening them.
>> 
> 
> There is a colossal difference between commercial use and commercial
> entities buying control of projects currently governed by entities
> which are answerable to the grassroots (GNU) and then toppling that
> governance structure in favor of one which is only answerable to
> boardrooms in Silicon Valley and Seattle WA.

There are, or would be if that were a real issue.  It's not something that is 
feasible with GPL licensed code, whether the copyright is held by the FSF as it 
is for GCC, or by all the authors as for Linux.

paul




Re: removing toxic emailers

2021-04-15 Thread Paul Koning via Gcc



> On Apr 15, 2021, at 11:17 AM, Iain Sandoe  wrote:
> 
> ...
> responding in general to this part of the thread.
> 
> * The GCC environment is not hostile, and has not been for the 15 or so
> years I’ve been part of the community.

Glad to see you feel that way; my view matches yours.

> * We would notice if it became so, I’m not sure about the idea that the wool
>  can be so easily pulled over our eyes.
> 
> I confess to being concerned with the equation “code” > “conduct”; it is not
> so in my professional or personal experience.   I have seen an engineering
> team suffer great losses of performance from the excesses of one (near genius,
> but very antisocial) member - the balance was not met.  Likewise, it has been
> seen to be a poor balance when there are three gifted individuals in a 
> household
> but one persecutes the other two (for diagnosed reasons).. again balance is not
> met
> 
> One could see the equation becoming a self-fulfilling prophecy viz.
> 
> *  let us say compilers are complex, and  any significant input over length 
> of time
>   will require a reasonably competent engineer.
> 
> * reasonably competent engineers with a good social habit are welcome 
> everywhere
> 
> * reasonably competent engineers with poor social habit are welcome in few 
> places.

All true.

> - those few places will easily be able to demonstrate that their progress is 
> made
>  despite the poor atmosphere, with no way to know that something better was 
> possible.
> 
> responding to the thread in general..
> 
> * Please could we try to seek consensus?
> 
> - it is disappointing to see people treating this as some kind of 
> point-scoring game
>  when to those working on the compiler day to day it is far from a game.

I'm not sure what the consensus is you're looking for.  Consensus on the 
principle that people should behave in a civil fashion? Yes, I agree with that. 
 

The difficulty, as I mentioned, is in deciding in concrete situations whether 
that principle was violated and what should be done about it.  So I think the 
easy part is the principle; the hard part is the process that will enforce the 
principle in those cases where it needs to be -- and ONLY in those cases.  
Again, if the question had come up 10 years ago I wouldn't be so worried; but 
in 2021 after years of watching people being blacklisted for daring to speak 
the wrong politics of the day, I can no longer do so.

paul



Re: removing toxic emailers

2021-04-14 Thread Paul Koning via Gcc



> On Apr 14, 2021, at 5:38 PM, Ian Lance Taylor  wrote:
> 
> On Wed, Apr 14, 2021 at 1:49 PM Paul Koning  wrote:
>> 
>>> ...
>> 
>> This is why I asked the question "who decides?"  Given a disagreement in 
>> which the proposed remedy is to ostracise a participant, it is necessary to 
>> inquire for what reason this should be done (and, perhaps, who is pushing 
>> for it to be done).  My suggestion is that this judgment can be made by the 
>> community (via secret ballot), unless it is decided to delegate that power 
>> to a smaller body, considered as trustees, or whatever you choose to call 
>> them.
> 
> Personally, I think that voting is unworkable in practice.  I think
> decisions can be reasonably delegated to a small group of trusted
> people.  A fairly common name for that group is "moderators".  It
> might be appropriate to use voting of some sort when selecting
> moderators.

Yes, that seems reasonable.  I think the NetBSD project is an example of this, 
where the membership votes for the trustees, and the trustees are responsible 
for a number of project aspects including correcting bad behavior such as we're 
discussing here.

The SC was mentioned earlier in this thread, though that's not quite so natural 
given how that is appointed.

paul



Re: removing toxic emailers

2021-04-14 Thread Paul Koning via Gcc



> On Apr 14, 2021, at 4:39 PM, Ian Lance Taylor via Gcc  wrote:
> 
> On Wed, Apr 14, 2021 at 9:08 AM Jeff Law via Gcc  wrote:
>> 
>> once or twice when physical violence with threatened, but that's about
>> it (aside from spammers).  I don't think we want to get too deep into
>> moderation and the like -- IMHO it should be an extremely rare event.
>> As much as I disagree with some of the comments that have been made I
>> don't think they've risen to the level of wanting/needing to ban those
>> individuals from posting.
> 
> I think it's useful to observe that there are a reasonable number of
> people who will refuse to participate in a project in which the
> mailing list has regular personal attacks and other kinds of abusive
> behavior.  I know this because I've spoken with such people myself.
> They simply say "that project is not for me" and move on.
> 
> So we don't get the choice between "everyone is welcome" and "some
> people are kicked off the list."  We get the choice between "some
> people decline to participate because it is unpleasant" and "some
> people are kicked off the list."
> 
> Given the choice of which group of people are going to participate and
> which group are not, which group do we want?

My answer is "it depends".  More precisely, in the past I would have favored 
those who decline because the environment is unpleasant -- with the implied 
assumption being that their objections are reasonable.  Given the emergence of 
cancel culture, that assumption is no longer automatically valid.

This is why I asked the question "who decides?"  Given a disagreement in which 
the proposed remedy is to ostracise a participant, it is necessary to inquire 
for what reason this should be done (and, perhaps, who is pushing for it to be 
done).  My suggestion is that this judgment can be made by the community (via 
secret ballot), unless it is decided to delegate that power to a smaller body, 
considered as trustees, or whatever you choose to call them.

paul




Re: removing toxic emailers

2021-04-14 Thread Paul Koning via Gcc



> On Apr 14, 2021, at 2:19 PM, Nathan Sidwell  wrote:
> 
> On 4/14/21 12:52 PM, Martin Jambor wrote:
>> Hi Nathan,
>> On Wed, Apr 14 2021, Nathan Sidwell wrote:
>>> Do we have a policy about removing list subscribers that send abusive or
>>> other toxic emails?  do we have a code of conduct?  Searching the wiki
>>> or website finds nothing.  The mission statement mentions nothing.
>> I think that (most?) people have already figured out that messages from
>> unfamiliar senders on certain topics have to be ignored.  It is much
>> easier than any moderation, which would be ugly work (someone would have
>> to read the often horrible stuff).
>> I think that you only "associate" with trolls if you feed them.  I have
>> recently made that mistake on this list once and will not repeat it.
> 
> I disagree.  Their emails pollute the list.  Just as I wouldn't like to go to 
> a bar where there are noisy jerks in a corner, I don't like to frequent an ML 
> where there are.  Bouncers exist in physical space, is it so hard to 
> electronically bounce jerks?  Is it so hard to explicitly say 'be a jerk and 
> be thrown out'?

Who decides?

Bouncers enforce the policy of the owner of the joint.  In any meetingplace 
that has an owner who has authority over who enters, it's possible to establish 
rules controlling ejection and bouncers to do the ejecting.

Our place does not have a single owner who has the authority to decide 
unilaterally "you're not wanted, leave".  What mechanism would you use instead? 
 Ostracism, in the classic Greek sense of a secret ballot to decide for or 
against banishment?

paul



Re: GCC association with the FSF

2021-04-09 Thread Paul Koning via Gcc



> On Apr 9, 2021, at 2:27 AM, Alfred M. Szmidt via Gcc  wrote:
> 
> These discussions are slightly off topic for gcc@, I'd suggest they
> are moved to gnu-misc-discuss@ or some other more suitable list.

More than "slightly", in my view.  I'm close to putting this thread into my 
"send straight to trash" mail rule.  The alternative would be to unsubscribe 
gcc, which it would be nice to avoid.

paul




Re: Remove RMS from the GCC Steering Committee

2021-03-31 Thread Paul Koning via Gcc
I may have lost it in the enormous flood of text, but I want to ask these 
general questions.

1. Is there a published code of conduct for GCC community members, possibly 
different ones depending on which level of the organization you're in?

2. Is there a formal process for receiving claims of infraction of this code, 
and for adjudicating such claims?

paul



Re: 4.2 source

2020-06-09 Thread Paul Koning via Gcc



> On Jun 9, 2020, at 10:02 AM, James Dugan  
> wrote:
> 
> Hello,
> This is a long shot, but is there any archive of the 4.2 source code? I need 
> a build for a rhel5.4 server to support a p2v migration. I checked the 
> successful builds page and see that this version of rhel was not done for 
> 32bit.
> 
> Thanks,
> Jim Dugan

GCC 4.2 sources?  Sure, in the releases area you can find that release and many 
others going back much further into ancient history.

https://gcc.gnu.org/releases.html

paul



Re: AVR CC0 transition

2020-04-23 Thread Paul Koning via Gcc



> On Apr 22, 2020, at 10:11 PM, Senthil Kumar via Gcc  wrote:
> 
> On Wed, Apr 22, 2020 at 10:08 PM Jeff Law  wrote:
>> 
>> On Wed, 2020-04-22 at 22:01 +0530, Senthil Kumar via Gcc wrote:
>>> Hi,
>>> 
>>> I'm thinking about attempting to do the CC0 transition for the
>>> AVR port in my spare time.  I've read the CC0Transition gcc wiki
>>> page, and as the AVR ISA does not have non-condition-code
>>> clobbering arithmetic instructions, concluded that I'd have to
>>> follow the steps outlined for case #2.
>>> 
>>> Are there any other things I need to watch out for?
>> Not offhand.  You might want to look at how Bernd did the m68k though.  I 
>> think a
>> conceptually similar transition would be reasonable for AVR as well as the
>> alternative would be dropping the AVR port.

Last year I did the CC0 conversion for pdp11, which is another case #2 
architecture, and a pretty simple one for the most part.  I followed the Wiki 
fairly closely, with help from Eric.  One thing that may not be relevant is 
that pdp11 has two CCREGS, one for float, one for integers.

paul




Re: [RFC] Characters per line: from punch card (80) to line printer (132) (was: [Patch][OpenMP/OpenACC/Fortran] Fix mapping of optional (present|absent) arguments)

2019-12-05 Thread Paul Koning



> On Dec 5, 2019, at 11:17 AM, Joseph Myers  wrote:
> 
> On Thu, 5 Dec 2019, Thomas Schwinge wrote:
> 
>> In the relevant session at the GNU Tools Cauldron 2019, Michael Meissner
>> stated that even he is not using a 80 x 24 terminal anymore, and that
>> should tell us something.  ;-)
>> 
>> So, I formally propose that we lift this characters per line restriction
>> from IBM punch card (80) to mainframe line printer (132).
> 
> I thought these line lengths were based on readability studies suggesting 
> that lines shorter than 80 columns were more readable?

That's certainly a general rule.  There is a reason why books aren't wide, and 
why newspapers have columns.  The eye can't deal well with long lines.  So 
while 132 column lines are certainly possible with modern computers, it doesn't 
mean they are desirable.

paul



Re: syncing the GCC vax port, atomic issue

2019-09-21 Thread Paul Koning



> On Sep 20, 2019, at 9:18 PM, co...@sdf.org wrote:
> 
> On Fri, Sep 20, 2019 at 10:07:59PM +, co...@sdf.org wrote:
>> Introducing the reversed jbb* patterns doesn't seem to help with the
>> original issue. It crashes building libatomic.
> 
> My loose understanding of what is going on:
> - GCC emits this atomic in expand.
> - When cleaning up, it looks for optimizations.
> - It decides this is a branch to another branch situation, so maybe
>  can be improved.
> - This fails to output an instruction for unrelated reasons.
> - Hit an assert.
> 
> I don't think that we should be trying to combine regular branch +
> atomic branch in very generic code.
> My guess is that, if it didn't crash now, it might emit a different kind
> of branch which loses the atomic qualities, and result in wrong code.

Or it might leave the atomic branch, in a place where it isn't really wanted.

I wonder if this could be avoided by representing the atomic branch by an 
UNSPEC rather than by a branch, since it isn't a "normal branch" that GCC knows 
about.

paul



Re: Proposal for the transition timetable for the move to GIT

2019-09-19 Thread Paul Koning



> On Sep 17, 2019, at 8:02 AM, Richard Earnshaw (lists) 
>  wrote:
> 
> ...
> So in summary my proposed timetable would be:
> 
> Monday 16th December 2019 - cut off date for picking which git conversion to 
> use
> 
> Tuesday 31st December 2019 - SVN repo becomes read-only at end of stage 3.
> 
> Thursday 2nd January 2020 - (ie read-only + 2 days) new git repo comes on 
> line for live commits.

That sounds ok but it feels incomplete; there are additional steps and dates 
needed leading up to the 16th December decision point.

I would suggest: 1 December 2019: final version of each proposed conversion 
tool is available, trial conversion repository of the full GCC SVN repository 
is posted for public examination.

That allows 2 weeks for the different tools and their output to get the 
scrutiny needed for the picking decision to be made.  2 weeks may be more than 
needed (or possibly, less), but in any case I think this piece needs to be 
called out.

paul



Re: asking for __attribute__((aligned()) clarification

2019-08-21 Thread Paul Koning



> On Aug 21, 2019, at 10:57 AM, Alexander Monakov  wrote:
> 
> On Wed, 21 Aug 2019, Paul Koning wrote:
> 
>> I agree, but if the new approach generates a warning for code that was 
>> written
>> to the old rules, that would be unfortunate.
> 
> FWIW I don't know which GCC versions accepted 'packed' on a scalar type.

That wasn't what I meant; I was talking about the packed and aligned attributes 
on struct members.  I thought you were saying that ((packed,aligned(2))) is now 
a warning.  That doesn't appear to be the case, though; it's accepted without 
complaint as it always was.

paul



Re: asking for __attribute__((aligned()) clarification

2019-08-21 Thread Paul Koning



> On Aug 21, 2019, at 10:28 AM, Alexander Monakov  wrote:
> 
> On Tue, 20 Aug 2019, "Markus Fröschle" wrote:
> 
>> Thank you (and others) for your answers. Now I'm just as smart as before, 
>> however.
>> 
>> Is it a supported, documented, 'long term' feature we can rely on or not?
>> 
>> If yes, I would expect it to be properly documented. If not, never mind.
> 
> I think it's properly documented in gcc-9:
> 
>  https://gcc.gnu.org/onlinedocs/gcc-9.2.0/gcc/Common-Type-Attributes.html
> 
> (the "old" behavior where the compiler would neither honor reduced alignment
> nor issue a warning seems questionable, the new documentation promises a more
> sensible approach)

I agree, but if the new approach generates a warning for code that was written 
to the old rules, that would be unfortunate.

> In portable code one can also use memcpy to move unaligned data, the compiler
> should translate it like an unaligned load/store when size is a suitable
> constant:
> 
>  int val;
>  memcpy(&val, ptr, sizeof val);
> 
> (or __builtin_memcpy when -ffreestanding is in effect)

Yes.  But last I tried, optimizing that for > 1 alignment is problematic 
because that information often doesn't make it down to the target code even 
though it is documented to do so.

paul



Re: asking for __attribute__((aligned()) clarification

2019-08-19 Thread Paul Koning



> On Aug 19, 2019, at 10:08 AM, Alexander Monakov  wrote:
> 
> On Mon, 19 Aug 2019, Richard Earnshaw (lists) wrote:
> 
>> Correct, but note that you can only pack structs and unions, not basic types.
>> there is no way of under-aligning a basic type except by wrapping it in a
>> struct.
> 
> I don't think that's true. In GCC-9 the doc for 'aligned' attribute has been
> significantly revised, and now ends with
> 
>  When used as part of a typedef, the aligned attribute can both increase and
>  decrease alignment, and specifying the packed attribute generates a warning. 
> 
> (but I'm sure defacto behavior of accepting and honoring reduced alignment on
> a typedef'ed scalar type goes way earlier than gcc-9)

Interesting.  It certainly wasn't that way a decade ago.  And for the old code 
pattern to generate a warning seems like a bad incompatible change.  Honoring 
reduced alignments is one thing; complaining about packed is not good.

paul



Re: asking for __attribute__((aligned()) clarification

2019-08-19 Thread Paul Koning



> On Aug 19, 2019, at 8:46 AM, Markus Fröschle  wrote:
> 
> All,
> 
> this is my first post on these lists, so please bear with me.
> 
> My question is about gcc's __attribute__((aligned()). Please consider the 
> following code:
> 
> #include 
> 
> typedef uint32_t uuint32_t __attribute__((aligned(1)));
> 
> uint32_t getuuint32(uint8_t p[]) {
>return *(uuint32_t*)p;
> }
> 
> This is meant to prevent gcc to produce hard faults/address errors on 
> architectures that do not support unaligned access to shorts/ints (e.g some 
> ARMs, some m68k). On these architectures, gcc is supposed to replace the 32 
> bit access with a series of 8 or 16 bit accesses.
> 
> I originally came from gcc 4.6.4 (yes, pretty old) where this did not work 
> and gcc does not respect the aligned(1) attribute for its code generation 
> (i.e. it generates a 'normal' pointer dereference, consequently crashing when 
> the code executes). To be fair, it is my understanding that the gcc manuals 
> never promised this *would* work.

That has never been my understanding.  I've always read the manual to say that 
"aligned" only INCREASES the alignment.  The normal alignment is that specified 
by the ABI for the given data type (often, but not always, the size of the 
primitive type) -- or it is 1 for "packed".

So I use __attribute__ ((packed)) to request byte alignment, and, say, 
__attribute__ ((packed, aligned(2))) to specify alignment to 2 byte multiples.

paul




Re: Indirect memory addresses vs. lra

2019-08-09 Thread Paul Koning



> On Aug 9, 2019, at 10:16 AM, Segher Boessenkool  
> wrote:
> 
> Hi!
> 
> On Fri, Aug 09, 2019 at 10:14:39AM +0200, John Darrington wrote:
>> On Thu, Aug 08, 2019 at 01:57:41PM -0600, Jeff Law wrote:
>> 
>>  ...  However I wonder if this issue is
>> related to the other major outstanding problem I have, viz: the large 
>> number of test failures which report "Unable to find a register to
>> spill" - So far, nobody has been able to explain how to solve that
>> issue and even the people who appear to be more knowlegeable have
>> expressed suprise that it is even happening at all.
> 
> No one is surprised.  It is just the funny way that LRA says "whoops I
> am going in circles, there is no progress and there will never be, I'd
> better stop that".  Everyone doing new ports / new conversions to LRA
> sees that error all the time.
> 
> The error could be pretty much *anywhere* in your port.  You have to
> look at what LRA did, and why, and why that is wrong, and fix that.

I've run into this a number of times.  The difficulty is that, for someone who 
understands the back end and the documented rules but not the internals of LRA, 
it tends to be hard to figure out what the problem is.  And since the causes 
tend to be obscure and undocumented, I find myself having to relearn the 
analysis from time to time. 

It has been stated that LRA is more dependent on correct back end definitions 
than Reload is, but unfortunately the precise definition of "correct" can be 
less than obvious to a back end maintainer.

paul




Re: Indirect memory addresses vs. lra

2019-08-08 Thread Paul Koning



> On Aug 8, 2019, at 1:21 PM, Segher Boessenkool  
> wrote:
> 
> On Thu, Aug 08, 2019 at 12:43:52PM -0400, Paul Koning wrote:
>>> On Aug 8, 2019, at 12:25 PM, Vladimir Makarov  wrote:
>>> The old reload (reload[1].c) supports such addressing.  As modern 
>>> mainstream architectures do not have this kind of addressing, it was not 
>>> implemented in LRA.
>> 
>> Is LRA only intended for "modern mainstream architectures"?
> 
> I sure hope not!  But it has only been *used* and *tested* much on such,
> so far. 

That's not entirely accurate.  At the prodding of people pushing for the 
removal of CC0 and reload, I've added LRA support to pdp11 in the V9 cycle.  
And it works pretty well, in the sense of passing the compile tests.  But I 
haven't yet examined the code quality vs. the old one in any detail.

paul



Re: Indirect memory addresses vs. lra

2019-08-08 Thread Paul Koning



> On Aug 8, 2019, at 1:21 PM, Segher Boessenkool  
> wrote:
> 
> On Thu, Aug 08, 2019 at 12:43:52PM -0400, Paul Koning wrote:
>>> On Aug 8, 2019, at 12:25 PM, Vladimir Makarov  wrote:
>>> The old reload (reload[1].c) supports such addressing.  As modern 
>>> mainstream architectures do not have this kind of addressing, it was not 
>>> implemented in LRA.
>> 
>> Is LRA only intended for "modern mainstream architectures"?
> 
> I sure hope not!  But it has only been *used* and *tested* much on such,
> so far.  Things are designed to work well for modern archs.
> 
>> If yes, why is the old reload being deprecated?  You can't have it both 
>> ways.  Unless you want to obsolete all "not modern mainstream architectures" 
>> in GCC, it doesn't make sense to get rid of core functionality used by those 
>> architectures.
>> 
>> Indirect addressing is a key feature in size-optimized code.
> 
> That doesn't mean that LRA has to support it, btw, not necessarily; it
> may well be possible to do a good job of this in the later passes?
> Maybe postreload, maybe some peepholes, etc.?

Possibly.  But as Vladimir points out, indirect addressing affects register 
allocation (reducing register pressure).  In older architectures that implement 
indirect addressing, that is one of the key ways in which the feature reduces 
code size.  While I can see how peephole optimization can convert an address 
load plus a register indirect into a memory indirect instruction, does that 
help the register become available for other uses or is post-LRA too late for 
that?  My impression is that it is too late, since at this point we're dealing 
with hard registers and making one free via peephole helps no one else.

paul




Re: Indirect memory addresses vs. lra

2019-08-08 Thread Paul Koning



> On Aug 8, 2019, at 12:25 PM, Vladimir Makarov  wrote:
> 
> 
> On 2019-08-04 3:18 p.m., John Darrington wrote:
>> I'm trying to write a back-end for an architecture (s12z - the ISA you can
>> download from [1]).  This arch accepts indirect memory addresses.   That is 
>> to
>> say, those of the form (mem (mem (...)))  and although my 
>> TARGET_LEGITIMATE_ADDRESS
>> function returns true for such addresses, LRA insists on reloading them out 
>> of
>> existence.
>> ...
> The old reload (reload[1].c) supports such addressing.  As modern mainstream 
> architectures do not have this kind of addressing, it was not implemented in LRA.

Is LRA only intended for "modern mainstream architectures"?

If yes, why is the old reload being deprecated?  You can't have it both ways.  
Unless you want to obsolete all "not modern mainstream architectures" in GCC, 
it doesn't make sense to get rid of core functionality used by those 
architectures.

Indirect addressing is a key feature in size-optimized code.

paul



Re: syncing the GCC vax port

2019-03-31 Thread Paul Koning



> On Mar 30, 2019, at 5:03 AM, co...@sdf.org wrote:
> 
> hi folks,
> 
> i was interesting in tackling some problems gcc netbsd/vax has.
> it has some ICEs which are in reload phase. searching around, the answer
> to that is "switch to LRA first". Now, I don't quite know what that is
> yet, but I know I need to try to do it.

That's not quite the whole story.

The answer is (1) switch from CC0 to CCmode condition code handling, which 
enables (2) switch from Reload to LRA.

(1) requires actual work, not terribly hard but not entirely trivial.  (2) may 
take as little as switching the "use LRA" flag to "yes".

I did (1) as well as a tentative (2) for pdp11 last year.  It was reasonably 
straightforward thanks to a pile of help from Eric Botcazou and his gcc wiki 
articles on the subject.  You might find the pdp11 deltas for CCmode helpful as 
a source of ideas, since the two machines have a fair amount in common as far 
as condition codes goes.  At least for the integer ops (pdp11 has separate 
floating point conditions, vax doesn't).

paul



Re: Annoying silly warning emitted by gcc?

2019-01-23 Thread Paul Koning



> On Jan 23, 2019, at 7:15 PM, Warren D Smith  wrote:
> 
> x = x^x;
> 
> The purpose of the above is to load "x" with zero.
> For very wide types, say 256 bits wide, explicitly loading 0
> is deprecated by Intel since it takes too much memory.
> XORing x with itself always yields 0 and is allegedly
> a better thing to do.
> 
> But the problem is, gcc complains:
> variable 'x' is uninitialized when used here [-Wuninitialized]
> note: initialize the variable 'x' to silence this warning
> 
> Well, the thing is, it DOES NOT MATTER that x is not initialized,
> or initialized with wrong data.  No matter what was in x, it becomes 0.
> 
> So, how to get Gcc to shut up and quit whining about this?
> I do not want to actually load 0.

The way to tell gcc that you don't want to hear about x being uninitialized is 
to write the declaration as an initialization to itself:

int x = x;
/* then you can do what you wanted: */
x = x ^ x;

But it seems that GCC should be able to tell expressions that do not depend on 
x and suppress the complaint about uninitialized x if so.

But furthermore: the "too much memory" argument makes no sense.  You should 
write what you mean in the plainest terms, in other words: "x = 0".  It is then 
up to the optimizer to generate the best code for it.  If xor is the better 
code that's what should come out.  Does it already?  If so, Intel's advice is 
simply wrong, ignore it and write the simple code.  If the compiler actually 
generates a transfer from a (potentially big) zero, that might be a missed 
optimization bug.  In that case, Intel's advice may possibly be useful as a 
temporary workaround for this bug.  But only if the small savings is so 
important that obfuscating your source code is justified.

   paul



Re: [RFC] Update Stage 4 description

2019-01-09 Thread Paul Koning



> On Jan 9, 2019, at 3:42 AM, Tom de Vries  wrote:
> 
> [ To revisit https://gcc.gnu.org/ml/gcc-patches/2018-04/msg00385.html ]
> 
> The current formulation for the description of Stage 4 here (
> https://gcc.gnu.org/develop.html ) is:
> ...
> During this period, the only (non-documentation) changes that may be
> made are changes that fix regressions.
> 
> Other changes may not be done during this period.
> 
> Note that the same constraints apply to release branches.
> 
> This period lasts until stage 1 opens for the next release.
> ...
> 
> This updated formulation was proposed by Richi (with a request for
> review of wording):
> ...
> During this period, the only (non-documentation) changes that may
> be made are changes that fix regressions.
> 
> -Other changes may not be done during this period.
> +Other important bugs like wrong-code, rejects-valid or build issues may
> +be fixed as well.  All changes during this period should be done with
> +extra care on not introducing new regressions - fixing bugs at all cost
> +is not wanted.
...

Is there, or should there be, a distinction between primary and non-primary 
platforms?  While platform bugs typically require fixes in platform-specific 
code, I would think we would want to stay away from bugfixes in minor platforms 
during stage 4.  The wording seems to say that I could fix wrong-code bugs in 
pdp11 during stage 4; I have been assuming I should not do that.  Is this 
something that should be explicitly stated?

paul



Re: Bug in divmodhi4(), plus poor inperformant code

2018-12-05 Thread Paul Koning



> On Dec 5, 2018, at 11:23 AM, Segher Boessenkool  
> wrote:
> 
> On Wed, Dec 05, 2018 at 02:19:14AM +0100, Stefan Kanthak wrote:
>> "Paul Koning"  wrote:
>> 
>>> Yes, that's a rather nasty cut & paste error I made.
>> 
>> I suspected that.
>> Replacing
>>!(den & (1L<<31))
>> with
>>(signed short) den >= 0
>> avoids this type of error: there's no need for a constant here!
>> 
>> JFTR: of course the 1L should be just a 1, without suffix.
> 
> "int" can be 16 bits only, I think?  If you change to 15 you can remove
> the L, sure.

"int" on pdp11 is either 16 or 32 bits depending on switches, but "short int" 
is always 16.  I changed it to be 1U << 15 which seems more appropriate if int 
is 16 bits.

paul




Re: Bug in divmodhi4(), plus poor inperformant code

2018-12-05 Thread Paul Koning



> On Dec 4, 2018, at 8:19 PM, Stefan Kanthak  wrote:
> 
> "Paul Koning"  wrote:
> 
>> Yes, that's a rather nasty cut & paste error I made.
> 
> I suspected that.
> Replacing
>!(den & (1L<<31))
> with
>(signed short) den >= 0
> avoids this type of error: there's no need for a constant here!
> 
> JFTR: of course the 1L should be just a 1, without suffix.
> 
>> But if the 31 is changed to a 15, is the code correct?
>> I would think so.
> 
> Almost. It's the standard algorithm, and it's correct except
> for den == 0, where the current implementation returns 0 as
> quotient or the numerator as remainder, while my fix yields an
> endless loop (as could be expected for "undefined behaviour").

I submitted a patch that just changes that one line.  This file is a copy of 
udivmodsi4.c so I figured I'd aim for the same logic except for the word length 
changes, and the 31 instead of 15 was a missed edit for that.

The other changes could be left for later, or a handwritten assembly routine 
used instead as some other targets do.

Thanks!

paul



Re: Bug in divmodhi4(), plus poor inperformant code

2018-12-04 Thread Paul Koning
Yes, that's a rather nasty cut & paste error I made.

But if the 31 is changed to a 15, is the code correct?  I would think so.  For 
optimization I'd think that an assembly language version would make more sense, 
and a few targets do that.

paul


> On Dec 4, 2018, at 5:51 PM, Stefan Kanthak  wrote:
> 
> Hi @ll,
> 
> libgcc's divmodhi4() function has an obvious bug; additionally
> it shows rather poor inperformant code: two of the three conditions
> tested in the first loop should clearly moved outside the loop!
> 
> divmodsi4() shows this inperformant code too!
> 
> regards
> Stefan Kanthak
> 
> --- divmodhi4.c ---
> 
> unsigned short
> __udivmodhi4(unsigned short num, unsigned short den, int modwanted)
> {
>  unsigned short bit = 1;
>  unsigned short res = 0;
> 
> #ifdef BUGFIX
>  if (den > num)
> return modwanted ? num : 0;
>  if (den == num)
> return modwanted ? 0 : 1;
>  while ((signed short) den >= 0)
> #else // original, buggy and inperformant code
>  while (den < num && bit && !(den & (1L<<31)))  // unsigned shorts are 16 bit!
> #endif
>{
>  den <<=1;
>  bit <<=1;
>}
>  while (bit)
>{
>  if (num >= den)
>{
>  num -= den;
>  res |= bit;
>}
>  bit >>=1;
>  den >>=1;
>}
>  if (modwanted) return num;
>  return res;
> }



Re: LRA reload produces invalid insn

2018-11-02 Thread Paul Koning



> On Nov 2, 2018, at 9:34 AM, Peter Bergner  wrote:
> 
> On 11/1/18 10:37 PM, Vladimir Makarov wrote:
>> On 11/01/2018 08:25 PM, Paul Koning wrote:
>>> Is this an LRA bug, or is there something I need to do in the target to 
>>> prevent this happening?
>> It is hard to say whose code is responsible for this.  It might be a wrong 
>> machine-dependent code or a LRA bug.
>> 
>> Paul, could you send me full LRA dump file (.reload).  It might help me to 
>> say more specific reason for the bug.  LRA has iterated sub-passes and the 
>> full dump can say where LRA started to behave wrongly.
>> 
> 
> I'll note that when we ported the rs6000 (ie, ppc*) port over to LRA
> from reload, we hit many target problems.  It seems LRA is much less
> forgiving to bad constraints, predicates, etc. than reload was.
> I think that's actually a good thing.
> 
> Peter

Yes, I ran into that, and Segher (I think) helped me with a bad predicate case. 
 It doesn't help that the documentation isn't all that explicit about what the 
requirements are.  

paul



Re: LRA reload produces invalid insn

2018-11-02 Thread Paul Koning



> On Nov 1, 2018, at 8:49 PM, Peter Bergner  wrote:
> 
> On 11/1/18 7:25 PM, Paul Koning wrote:
>> I'm running the testsuite on the pdp11 target, and I get a failure when 
>> using LRA that works correctly with the old allocator.  The issue is that 
>> LRA is producing an insn that is invalid (it violates the constraints stated 
>> in the insn definition).
> [snip]
>> which is the correct sequence given the matching operand constraint in the 
>> define_insn.
>> 
>> Is this an LRA bug, or is there something I need to do in the target to 
>> prevent this happening?
> 
> What do you mean by "old allocator"?  Just an older revision?  Does it work 
> before my
> revision 264897 commit and broken after?  If so, could you try the following 
> to see
> whether that fixes things for you?
> 
>https://gcc.gnu.org/ml/gcc-patches/2018-10/msg01757.html
> 
> My commit above exposed some latent LRA bugs and my patch above tries
> to fix issues similar to what you're seeing.
> 
> Peter

That doesn't cure this particular problem; the ICE is still there with the same 
error message (identical failing insn) as before.

paul



LRA reload produces invalid insn

2018-11-01 Thread Paul Koning
I'm running the testsuite on the pdp11 target, and I get a failure when using 
LRA that works correctly with the old allocator.  The issue is that LRA is 
producing an insn that is invalid (it violates the constraints stated in the 
insn definition).

The insn in the IRA dump looks like this:

(insn 240 238 241 13 (set (reg/f:HI 155)
(plus:HI (reg/f:HI 5 r5)
(const_int -58 [0xffc6]))) 
"/Users/pkoning/Documents/svn/combined/gcc/testsuite/gcc.c-torture/execute/builtins/strcat-chk.c":68:7
 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -58 [0xffc6]))
(nil)))

(note that R5 is FRAME_POINTER_REGNUM.)

The reload dump from LRA shows this:

(insn 240 238 241 13 (set (reg/f:HI 5 r5 [155])
(plus:HI (reg/f:HI 6 sp)
(const_int 12 [0xc]))) 
"/Users/pkoning/Documents/svn/combined/gcc/testsuite/gcc.c-torture/execute/builtins/strcat-chk.c":68:7
 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 5 r5)
(const_int -58 [0xffc6]))
(nil)))

But that's not valid because ADD is a two-operand instruction:

(define_insn_and_split "addhi3"
  [(set (match_operand:HI 0 "nonimmediate_operand" "=rR,rR,Q,Q")
(plus:HI (match_operand:HI 1 "general_operand" "%0,0,0,0")
 (match_operand:HI 2 "general_operand" "rRLM,Qi,rRLM,Qi")))]

The old allocator produces two insns for this:

(insn 443 238 240 13 (set (reg/f:HI 5 r5 [155])
(const_int 12 [0xc])) 
"/Users/pkoning/Documents/svn/combined/gcc/testsuite/gcc.c-torture/execute/builtins/strcat-chk.c":68:7
 25 {movhi}
 (nil))
(insn 240 443 241 13 (set (reg/f:HI 5 r5 [155])
(plus:HI (reg/f:HI 5 r5 [155])
(reg/f:HI 6 sp))) 
"/Users/pkoning/Documents/svn/combined/gcc/testsuite/gcc.c-torture/execute/builtins/strcat-chk.c":68:7
 68 {addhi3}
 (expr_list:REG_EQUIV (plus:HI (reg/f:HI 6 sp)
(const_int 12 [0xc]))
(nil)))

which is the correct sequence given the matching operand constraint in the 
define_insn.
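As a rough C analogy of the matching "%0" constraint (the register and operand choices below are illustrative only): a two-operand machine like the PDP-11 overwrites the first source of an add, so a three-address `dst = a + b` has to be rendered as a move followed by an add, which is the pair the old allocator emitted above.

```c
#include <assert.h>
#include <stdint.h>

/* dst = a + b legalized for a two-operand ISA: the destination must
   also be the first source, matching the "%0" constraint in addhi3.  */
static int16_t add_two_operand(int16_t a, int16_t b)
{
    int16_t dst;
    dst = a;   /* mov #12, r5   (insn 443 above) */
    dst += b;  /* add sp,  r5   (insn 240 above) */
    return dst;
}
```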

Is this an LRA bug, or is there something I need to do in the target to prevent 
this happening?

paul



Re: dg-add-options ieee ?

2018-10-31 Thread Paul Koning



> On Oct 31, 2018, at 5:47 PM, Joseph Myers  wrote:
> 
> On Wed, 31 Oct 2018, Paul Koning wrote:
> 
>> So you mean, add a new keyword (say, "ieee") to dg-effective-target that 
>> means "run this test only on ieee targets"?
> 
> Note that different tests may need different IEEE features, though some 
> such cases are already handled specially anyway - so be clear on what 
> target properties such a keyword actually relates to.  (E.g. 
> fenv_exceptions already exists for tests needing floating-point 
> exceptions; tests that don't work on SPU because of the way its 
> single-precision format deviates from IEEE are skipped specifically for 
> spu-*-*; tests handling IBM long double specially as needed.)
> 
> -- 
> Joseph S. Myers
> jos...@codesourcery.com

I can add a general "ieee" test.  I've also found some test cases that depend 
on things like the width of exponent and mantissa, and are written to match 
what IEEE does.  They may well be valid for some other float formats, so saying 
they require IEEE might not be the best answer.  For example 
gcc.dg/tree-ssa/sprintf-warn-10.c which has checks like this:
  T1 ("%.*a", 0); /* { dg-warning "between 3 and 10 bytes" } */

Many of those numbers come out different on pdp11, partly because it's a 16 bit 
platform (so the upper bound is never > 32767) and partly because the float 
format has different field widths than IEEE.
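For context, the byte counts in that warning can be measured directly; this sketch assumes a hosted C99 printf and IEEE double, where "%.0a" output ranges from "inf"/"nan" (3 bytes) up to strings like "-0x1p-1074" (10 bytes):

```c
#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Length of x formatted with "%.*a", precision 0, on the host.
   Assumes C99 snprintf semantics and IEEE binary64 doubles.  */
static int len_a0(double x)
{
    char buf[64];
    return snprintf(buf, sizeof buf, "%.0a", x);
}
```

On a non-IEEE 16-bit target like pdp11 both the exponent field width and the `int` upper bound change, so the expected ranges in the test no longer hold.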

Another case is tests that refer to infinity, which some float formats don't 
have.  Or "not a number"; pdp11 has some flavors of that but not all that IEEE 
has.  Should I try to add various other float features, like "infinite", "nan", 
"quiet-nan", "signaling-nan"?

Given Rainer's comment that dg-effective-target ieee and dg-add-options ieee 
should be separate, should I add dg-effective-target ieee to every test that 
currently says dg-add-options ieee?  Or leave that for others to do 
selectively?  It feels like it can be applied everywhere, but I may be missing 
something.

paul



Re: dg-add-options ieee ?

2018-10-31 Thread Paul Koning



> On Oct 31, 2018, at 4:11 PM, Rainer Orth  
> wrote:
> 
> Hi Paul,
> 
>> Ok, thanks.  So adding a dg-skip-if for my target is indeed correct.  Will 
>> do so.
> 
> please don't: since this is going to be common, please add a
> corresponding effective-target keyword instead, together with
> sourcebuild.texi documentation.  That's far more expressive than
> explicit target lists.
> 
> Thanks.
>Rainer

So you mean, add a new keyword (say, "ieee") to dg-effective-target that means 
"run this test only on ieee targets"?

Another approach might be to have dg-add-options ieee mean what it does today, 
but also have it skip the test for non-ieee capable targets.  Or is that 
undesirable because it muddles the meaning of the dg-add-options keyword?  I 
figure it would make sense because any test that has dg-add-options ieee by 
definition should be skipped by any target that can't do ieee at all.
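A testsuite fragment combining the two directives might look like this (the "ieee" effective-target keyword is the proposal under discussion, not yet an existing name):

```c
/* { dg-do run } */
/* { dg-require-effective-target ieee } */
/* { dg-add-options ieee } */
```

With that split, dg-require-effective-target skips the test outright on targets that cannot do IEEE at all, while dg-add-options only adjusts flags on targets that can.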

paul




Re: Builtin mismatch warning

2018-10-31 Thread Paul Koning



> On Oct 31, 2018, at 4:21 PM, Martin Sebor  wrote:
> 
> On 10/31/2018 12:15 PM, Paul Koning wrote:
>> I noticed a curious inconsistency.
>> 
>> Some testcases (like gcc.dg/Wrestrict-4.c) have declarations like this:
>> 
>> void *alloca();
>> void* memcpy ();
>> 
>> Those don't generate warnings in a just built V9.0 gcc for x86.  And the 
>> testcase clearly doesn't expect warnings.
> 
> Wrestrict-4.c is about verifying there is no ICE.  It doesn't check
> for warnings (or expect them) only because GCC doesn't issue them,
> even though it should.  I submitted a patch to enable warnings for
> built-in declarations without a prototype back in June but it wasn't
> been approved:
> 
>  https://gcc.gnu.org/ml/gcc-patches/2018-06/msg01645.html
> 
> I'm still hoping to resuscitate it before the end of stage 1.
> 
>> But I do get a warning (warning: conflicting types for built-in function 
>> ‘memcpy’) when I compile that same code on GCC built for pdp11.  I don't 
>> know why changing the target should affect whether that message appears.
> 
> Apparently because the warning depends on whether arguments to
> such functions are affected by default promotions (determined
> by self_promoting_args_p()) and on pdp11 that isn't the case
> for size_t.  So on pdp11, the type of void* memcpy() isn't
> considered compatible with the type of the built-in function.
> 
> IMO, the warning should be issued regardless of compatibility.
> Function declarations without a prototype have been deprecated
> since C99, and are being considered for removal from C2X.  There
> is no reason for maintained software not to adopt the much safer
> declaration style in general, and certainly not for built-in
> functions.  Even if safety weren't reason enough, declaring
> built-ins with correct prototypes improves the quality of
> generated code: GCC will expand a call like memcpy(d, s, 32)
> inline when the function is declared with a prototype but
> it declines to do so when it isn't declared with one, because
> the type of 32 doesn't match size_t.
> 
> Martin

Thanks.  So I'm wondering if I should add 
  { dg-warning "conflicting types" "" { target pdp11*-*-* } }
for now, and then if your patch goes in that simply changes to apply to all 
targets.

paul


