Re: [RFC / musing] Scoped exception handling in Linux userspace?
On 07/18/2013 08:29 PM, Andy Lutomirski wrote:

On Thu, Jul 18, 2013 at 6:17 PM, David Daney ddaney.c...@gmail.com wrote:

On 07/18/2013 05:50 PM, Andy Lutomirski wrote:

On Thu, Jul 18, 2013 at 5:40 PM, David Daney ddaney.c...@gmail.com wrote:

On 07/18/2013 05:26 PM, Andy Lutomirski wrote:

How is this different than throwing exceptions from a signal handler?

Two ways. First, exceptions thrown from a signal handler can't be retries.

??

s/retries/retried/ -- by which I mean that you can't do things like implementing virtual memory in userspace by catching SIGSEGV, calling mmap, and resuming. Second, and more importantly, installing a signal handler in a library is a terrible idea.

The signal handler would be installed by main() before calling into the library. You have to have a small amount of boilerplate code to set it up, but the libraries wouldn't have to be modified if they were already exception safe. FWIW the libgcj Java runtime environment uses this strategy for handling NullPointerExceptions and DivideByZeroError(sp?). Since all that code for the most part follows the standard C++ ABIs, it is an example of this technique that has been deployed in many environments.

Other way around: a *library* that wants to use exception handling can't do so safely without the cooperation, or at least understanding, of the main program and every other library that wants to do something similar. Suppose my library installs a SIGFPE handler and throws my_sigfpe_exception, and your library installs a SIGFPE handler and throws your_sigfpe_exception. The result: one wins and the other crashes due to an unhandled exception.

In my particular use case, I have code (known to the main program) that catches all kinds of fatal signals to log nice error messages before dying. That means that I can't use a library that handles signals for any other purpose. Right now I want to have a small snippet of code handle SIGBUS, but now I need to coordinate it with everything else. If this stuff were unified, then everything would just work.

That's right. But I think the Linux kernel already supplies all the needed functionality to do this. It is really a matter of choosing a userspace implementation and standardizing your entire system around it. In the realm of GNU/glibc/Linux, it is really more of a social/political exercise than a technical problem.

David Daney
Re: [RFC / musing] Scoped exception handling in Linux userspace?
On 07/18/2013 05:26 PM, Andy Lutomirski wrote:

Windows has a feature that I've wanted on Linux forever: stack-based (i.e. scoped) exception handling. The upshot is that you can do, roughly, this (pseudocode):

    int callback(...) {
        /* Called if code_that_may_fault faults.  May return unwind to
           landing pad, propagate the fault, or fixup and retry */
    }

    void my_function() {
        __hideous_try_thing(callback) {
            code_that_may_fault();
        } blahblahblah {
            landing_pad_code();
        }
    }

How is this different than throwing exceptions from a signal handler? GCC already supports this on many architectures running on the Linux kernel. You can do it from C using incantations like those found in the GCC testsuite's gcc/testsuite/gcc.dg/cleanup-9.c file. From C++ it is even easier; it is just a normal exception.

David Daney

Windows calls it SEH (structured exception handling), and the implementation on 32-bit Windows is rather gnarly. I don't really know how it works on 64-bit Windows, but I think it's saner. This has two really nice properties:

1. It works in libraries!
2. It's localized. So you can mmap something, read from it *and handle SIGBUS*, and unmap.

Could Linux support such a thing? Here's a sketch of a way:

- The kernel would need to have a fairly well-defined concept of synchronous faults that can be handled with this mechanism. Calls to force_sig_info are probably the right thing to hook in to.
- The userspace runtime optionally registers (via a new syscall or prctl, say) a handler for synchronous faults.
- When a synchronous fault happens, if the process (struct sighand_struct) has a synchronous fault handler registered, the signal is delivered to that handler, on the thread that faulted, instead of via the normal signal handling mechanism.
- The userspace runtime walks the chain of personality handlers and gives them a chance to respond.
- If no handler claims the fault, then the user code somehow* causes ordinary signal delivery to happen.

* This may need kernel help, too -- if the process is going to die, it should die for the right reason, so perhaps there should be a syscall to redeliver the signal. If the runtime wants to be fancy and a signal handler is installed, then there could be a fast path. Maybe if we got really fancy, it could live in the vdso.

Now everyone wins! After someone writes the libgcc support for this (ugh!), then you can write CFI-based exception handlers in assembly! Presumably you could write them in C++, too, if you don't care about restarting, like this:

    try {
        code_that_may_fault();
    } catch (cxxabi::synchronous_kernel_fault) {
        amazingly_dont_crash();
    }

Is this worth pursuing? I'm not touching the gcc part with a ten-foot pole, but I could probably do some of the kernel work. I'm a bit scared of libgcc, too.

It's worth noting that SIGBUS isn't the only interesting signal here. SIGFPE could work, too. I'm not sure whether SIGPIPE would make sense. SIGSEGV would clearly work, but anyone using this mechanism for SIGSEGV is probably asking for trouble.

--Andy

P.S. Just because you can probably get away with throwing a C++ exception from a signal handler right now does not mean it's a good idea. Especially in a library.
Re: [RFC / musing] Scoped exception handling in Linux userspace?
On 07/18/2013 05:50 PM, Andy Lutomirski wrote:

On Thu, Jul 18, 2013 at 5:40 PM, David Daney ddaney.c...@gmail.com wrote:

On 07/18/2013 05:26 PM, Andy Lutomirski wrote:

Windows has a feature that I've wanted on Linux forever: stack-based (i.e. scoped) exception handling. The upshot is that you can do, roughly, this (pseudocode):

    int callback(...) {
        /* Called if code_that_may_fault faults.  May return unwind to
           landing pad, propagate the fault, or fixup and retry */
    }

    void my_function() {
        __hideous_try_thing(callback) {
            code_that_may_fault();
        } blahblahblah {
            landing_pad_code();
        }
    }

How is this different than throwing exceptions from a signal handler?

Two ways. First, exceptions thrown from a signal handler can't be retries.

??

Second, and more importantly, installing a signal handler in a library is a terrible idea.

The signal handler would be installed by main() before calling into the library. You have to have a small amount of boilerplate code to set it up, but the libraries wouldn't have to be modified if they were already exception safe. FWIW the libgcj Java runtime environment uses this strategy for handling NullPointerExceptions and DivideByZeroError(sp?). Since all that code for the most part follows the standard C++ ABIs, it is an example of this technique that has been deployed in many environments.

David Daney
Re: [Patch] [MIPS] Fix Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)
On 04/14/2013 01:27 PM, Moore, Catherine wrote:

-----Original Message-----
From: David Daney [mailto:ddaney.c...@gmail.com]
Sent: Friday, April 12, 2013 7:29 PM
To: Moore, Catherine
Cc: Rozycki, Maciej; gcc-patches@gcc.gnu.org; Richard Sandiford
Subject: Re: [Patch] [MIPS] Fix Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)

On 04/12/2013 03:07 PM, Moore, Catherine wrote:

Hi David, Please try the attached patch. Is this OK to checkin?

I don't think it is correct...

And you would be right. I attached the wrong patch. Try this one instead:

Index: configure
===================================================================
--- configure	(revision 197950)
+++ configure	(working copy)
@@ -17830,7 +17830,7 @@
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<_LT_EOF
-#line 17831 "configure"
+#line 17833 "configure"
 #include "confdefs.h"

 #if HAVE_DLFCN_H
@@ -17936,7 +17936,7 @@
   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
   lt_status=$lt_dlunknown
   cat > conftest.$ac_ext <<_LT_EOF
-#line 17937 "configure"
+#line 17939 "configure"
 #include "confdefs.h"

 #if HAVE_DLFCN_H
@@ -25766,7 +25766,7 @@
   gcc_cv_as_micromips_support=no
   if test x$gcc_cv_as != x; then
     $as_echo '.set micromips' > conftest.s
-    if { ac_try='$gcc_cv_as $gcc_cv_as_flags -o conftest.o conftest.s >&5'
+    if { ac_try='$gcc_cv_as $gcc_cv_as_flags --fatal-warnings -o conftest.o conftest.s >&5'
   { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
   (eval $ac_try) 2>&5
   ac_status=$?
Index: configure.ac
===================================================================
--- configure.ac	(revision 197950)
+++ configure.ac	(working copy)
@@ -4058,7 +4058,7 @@
 	 [Define if your assembler supports .gnu_attribute.])])

 gcc_GAS_CHECK_FEATURE([.micromips support],
-  gcc_cv_as_micromips_support,,,
+  gcc_cv_as_micromips_support,,[--fatal-warnings],
   [.set micromips],,
   [AC_DEFINE(HAVE_GAS_MICROMIPS, 1,
   [Define if your assembler supports the .set micromips directive])])

Sorry for the confusion.
Catherine

Your e-mail client mangled it as Content-Transfer-Encoding: quoted-printable, so I had to manually apply it. But after doing that, it does seem to have the desired effect. So I would say: Please apply it (after getting somebody to approve it).

Thanks,
David Daney
Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)
On 03/23/2013 02:07 AM, Richard Sandiford wrote:

Moore, Catherine <catherine_mo...@mentor.com> writes:

2013-03-21  Catherine Moore  <c...@codesourcery.com>

	* config/mips/constraints.md (u, Udb7, Uead, Uean, Uesp, Uib3,
	Uuw6, Usb4, ZS, ZT, ZU, ZV, ZW): New constraints.
	* config/mips/predicates.md (lwsp_swsp_operand, lw16_sw16_operand,
	lhu16_sh16_operand, lbu16_operand, sb16_operand, db4_operand,
	db7_operand, ib3_operand, sb4_operand, ub4_operand, uh4_operand,
	uw4_operand, uw5_operand, uw6_operand, addiur2_operand,
	addiusp_operand, andi16_operand): New predicates.
	* config/mips/mips.md (compression): New attribute.
	(enabled): New attribute.
	(length): Consider compression in computing length.
	(shift_compression): New code attribute.
	(*add<mode>3): New operands.  Record compression.
	(sub<mode>3): Likewise.
	(one_cmpl<mode>2): Likewise.
	(*and<mode>3): Likewise.
	(*ior<mode>3): Likewise.
	(unnamed pattern for xor): Likewise.
	(*zero_extend<SHORT:mode><GPR:mode>2): Likewise.
	(*<optab><mode>3): Likewise.
	(*mov<mode>_internal): Likewise.
	* config/mips/mips-protos.h (mips_signed_immediate_p): New.
	(mips_unsigned_immediate_p): New.
	(umips_lwsp_swsp_address_p): New.
	(m16_based_address_p): New.
	* config/mips/mips-protos.h (mips_signed_immediate_p): New prototype.
	(mips_unsigned_immediate_p): New prototype.
	(lwsp_swsp_address_p): New prototype.
	(m16_based_address_p): New prototype.
	* config/mips/mips.c (mips_unsigned_immediate_p): New function.
	(mips_signed_immediate_p): New function.
	(m16_based_address_p): New function.
	(lwsp_swsp_address_p): New function.
	(mips_print_operand_punctuation): Recognize short delay slot insns
	for microMIPS.

OK. Thanks for your patience through all this. Now the framework's been sorted out, the review process for future encoding patches should be much less painful.

Richard

I just tried to bootstrap on o32 Debian. This system has binutils 2.20.1. Here is a sample of the resulting failure when building the libjava target libs:

. . .
/home/daney/gccsvn/build/./gcc/xgcc -B/home/daney/gccsvn/build/./gcc/ -B/usr/local/mips-unknown-linux-gnu/bin/ -B/usr/local/mips-unknown-linux-gnu/lib/ -isystem /usr/local/mips-unknown-linux-gnu/include -isystem /usr/local/mips-unknown-linux-gnu/sys-include -DHAVE_CONFIG_H -I. -I../../../../trunk/libjava/libltdl -g -O2 -minterlink-mips16 -c ../../../../trunk/libjava/libltdl/ltdl.c -fPIC -DPIC -o .libs/ltdl.o
/tmp/cckECtVQ.s: Assembler messages:
/tmp/cckECtVQ.s:12: Warning: Tried to set unrecognized symbol: nomicromips
/tmp/cckECtVQ.s:115: Warning: Tried to set unrecognized symbol: nomicromips
/tmp/cckECtVQ.s:161: Warning: Tried to set unrecognized symbol: nomicromips
. . .

There are literally thousands and thousands of these warnings.

David Daney
Re: Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)
On 04/12/2013 10:55 AM, Moore, Catherine wrote:

-----Original Message-----
From: Maciej W. Rozycki [mailto:ma...@codesourcery.com]
Sent: Friday, April 12, 2013 1:03 PM
To: David Daney
Cc: Moore, Catherine; gcc-patches@gcc.gnu.org; Richard Sandiford
Subject: Re: Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)

On Fri, 12 Apr 2013, David Daney wrote:

I just tried to bootstrap on o32 Debian. This system has binutils 2.20.1. Here is a sample of the resulting failure when building the libjava target libs:

. . .
/home/daney/gccsvn/build/./gcc/xgcc -B/home/daney/gccsvn/build/./gcc/ -B/usr/local/mips-unknown-linux-gnu/bin/ -B/usr/local/mips-unknown-linux-gnu/lib/ -isystem /usr/local/mips-unknown-linux-gnu/include -isystem /usr/local/mips-unknown-linux-gnu/sys-include -DHAVE_CONFIG_H -I. -I../../../../trunk/libjava/libltdl -g -O2 -minterlink-mips16 -c ../../../../trunk/libjava/libltdl/ltdl.c -fPIC -DPIC -o .libs/ltdl.o
/tmp/cckECtVQ.s: Assembler messages:
/tmp/cckECtVQ.s:12: Warning: Tried to set unrecognized symbol: nomicromips
/tmp/cckECtVQ.s:115: Warning: Tried to set unrecognized symbol: nomicromips
/tmp/cckECtVQ.s:161: Warning: Tried to set unrecognized symbol: nomicromips
. . .

There are literally thousands and thousands of these warnings.

Thanks for the report. I guess GCC should:

1. Detect in its `configure' script if GAS supports the pseudo-op and refrain from producing it if it does not (or actually perhaps it may never produce it by default, as GAS defaults to the nomicromips mode anyway); we have precedents for that already.

Configure was modified as part of the micromips patch to detect support for the .set nomicromips pseudo-op. Do you have a configure log?

Here is the relevant fragment:

. . .
configure:25761: checking assembler for .micromips support
configure:25770: /usr/bin/as -o conftest.o conftest.s >&5
conftest.s: Assembler messages:
conftest.s:1: Warning: Tried to set unrecognized symbol: micromips
configure:25773: $? = 0
configure:25784: result: yes
. . .

Since it is a warning, it succeeds. I think you need to adjust the test so that it fails if there is a warning.
Re: [Patch] [MIPS] Fix Many warnings in MIPS port (Was: [PATCH] [MIPS] microMIPS gcc support)
On 04/12/2013 03:07 PM, Moore, Catherine wrote:

Hi David, Please try the attached patch. Is this OK to checkin?

I don't think it is correct...

Thanks,
Catherine

2013-04-12  Catherine Moore  <c...@codesourcery.com>

	* configure.ac (.micromips support): Add --fatal-warnings option.
	* configure: Regenerate.

[...]

Index: configure.ac
===================================================================
--- configure.ac	(revision 197936)
+++ configure.ac	(working copy)
@@ -4051,6 +4051,12 @@ LCF0:
 	 [AC_DEFINE(HAVE_AS_NO_SHARED, 1,
 	 [Define if the assembler understands -mno-shared.])])

+gcc_GAS_CHECK_FEATURE([.micromips support],
+  gcc_cv_as_micromips_support,,[--fatal-warnings],
+  [.set micromips],,
+  [AC_DEFINE(HAVE_GAS_MICROMIPS, 1,
+  [Define if your assembler supports the .set micromips directive])])
+

There is already an existing check. Just modify that one instead of adding a duplicate. Something like the attached (untested):

  gcc_GAS_CHECK_FEATURE([.gnu_attribute support],
    gcc_cv_as_mips_gnu_attribute, [2,18,0],,
    [.gnu_attribute 4,1],,

I didn't have time to test this yet. I may try Monday.

David Daney

Index: gcc/configure.ac
===================================================================
--- gcc/configure.ac	(revision 197836)
+++ gcc/configure.ac	(working copy)
@@ -4058,7 +4058,7 @@
 	 [Define if your assembler supports .gnu_attribute.])])

 gcc_GAS_CHECK_FEATURE([.micromips support],
-  gcc_cv_as_micromips_support,,,
+  gcc_cv_as_micromips_support,,[--fatal-warnings],
   [.set micromips],,
   [AC_DEFINE(HAVE_GAS_MICROMIPS, 1,
   [Define if your assembler supports the .set micromips directive])])
Re: Help for my Master thesis
On 03/29/2013 01:35 PM, Kiefmann Bernhard wrote:

Dear Ladies and Gentlemen!

My name is Bernhard Kiefmann and I'm writing my Master's thesis on the suitability of the GNU C compiler for use in safety-related areas. The first problem with this is that I have to check whether the compiler meets the requirements of the international standard IEC 61508:2010. Here I would like to ask you the following questions:

1) What are the rules of the compiler development process? Are there any UML diagrams? They are a requirement of the standard.
2) Are there activities for functional verification?
3) What procedures and measures exist for:
   - design and programming guidelines
   - dynamic analysis and testing
   - functional testing and black-box testing
   - failure analysis ("Ausfall-/Versagensanalyse")
   - modeling
   - performance tests
   - semi-formal methods
   - static analysis
   - a modular approach

There is a web site that has (at least superficial) answers to most of these questions. Have you looked at http://gcc.gnu.org/ ? I would recommend doing that, then asking more specific questions if you find the web site lacking.

David Daney

If you have information for me here, it would help in assessing whether the compiler is suitable for use in safety-relevant areas. The second point of my work concerns the treatment of releases. Do you put any kind of evidence in your source code, and what does it look like? The evidence should be read and analyzed, and the investigation should demonstrate whether the changes in the release code affect the safety-relevant area.

I would like to thank you in advance for your help, stand ready for any questions you may have in the meantime, and remain,

Yours sincerely,
Kiefmann Bernhard
bernhard.kiefm...@stud.fh-campuswien.ac.at
Re: mips16 and nomips16
On 01/14/2013 04:32 PM, reed kotler wrote:

I'm not understanding why mips16 and nomips16 are not simple inheritable attributes. The mips16ness of a function must be known by the caller so that the appropriate version of the JAL/JALX instruction can be emitted, i.e. you should be able to say:

    void foo();
    void __attribute((nomips16)) foo();

or

    void goo();

Any call here would assume nomips16.

    void __attribute((mips16)) goo();

A call here would assume mips16.

Which is it? If you allow it to change, one case will always be incorrect. Or perhaps I misunderstand the question.

David Daney
Re: How to tell that a compiler test result is from a branch?
On 12/20/2012 03:36 PM, H.J. Lu wrote:

On Thu, Dec 20, 2012 at 3:25 PM, Steven Bosscher stevenb@gmail.com wrote:

Hello,

I've bootstrapped and tested the LRA branch on ia64 and posted the results to gcc-testresults (http://gcc.gnu.org/ml/gcc-testresults/2012-12/msg01782.html). Unfortunately there's nothing in the message that shows that this wasn't a trunk checkout but the LRA branch. Is it possible to identify the branch in the compiler version somehow? I see the REVISION file mentioned in configure.ac and Makefile.in. Should that file be used for this?

Yes, see http://gcc.gnu.org/ml/gcc-testresults/2012-12/msg01861.html

[hjl@gnu-4 src-4.7]$ cat gcc/REVISION
[gcc-4_7-branch revision 194514]
[hjl@gnu-4 src-4.7]$

The last time I checked, gcc/REVISION is only set to the proper value by running contrib/gcc_update.

David Daney
Re: [patch, mips, testsuite] Fix test to handle optimizations
On 10/08/2012 11:15 AM, Mike Stump wrote:

On Oct 8, 2012, at 9:16 AM, Steve Ellcey sell...@mips.com wrote:

The gcc.target/mips/ext_ins.c test was failing in little-endian mode on MIPS because the compiler is smart enough now to see that 'c' is uninitialized, so it can insert the field 'a' into 'c' with a shift and a full store instead of an insert, because the store just overwrites uninitialized data. I changed the code to force the compiler to preserve the other fields of 'c', and that makes it use the insert instruction in both big- and little-endian modes. Tested on mips-mti-elf. OK to checkin?

Ok.

I don't think this is the proper fix for this. Use of BBIT{0,1} instructions will always be smaller than the alternative. So disabling the test for -Os doesn't fix the problem the test is designed to find. The real problem is that some optimizer is broken. Instead of disabling the tests, can we fix the problem instead? The goal of the testsuite should be to detect problems, not yield clean results.

If Richard disagrees with me, then I would defer to him.

David Daney
Re: [patch, mips, testsuite] Fix test to handle optimizations
Really I meant this in reply to the 'Fix gcc.target/mips/octeon-bbit-2.c for -Os' thread. Sorry for confusing the issue here. I don't really have an objection to this one.

David Daney

On 10/08/2012 11:28 AM, David Daney wrote:

On 10/08/2012 11:15 AM, Mike Stump wrote:

On Oct 8, 2012, at 9:16 AM, Steve Ellcey sell...@mips.com wrote:

The gcc.target/mips/ext_ins.c test was failing in little-endian mode on MIPS because the compiler is smart enough now to see that 'c' is uninitialized, so it can insert the field 'a' into 'c' with a shift and a full store instead of an insert, because the store just overwrites uninitialized data. I changed the code to force the compiler to preserve the other fields of 'c', and that makes it use the insert instruction in both big- and little-endian modes. Tested on mips-mti-elf. OK to checkin?

Ok.

I don't think this is the proper fix for this. Use of BBIT{0,1} instructions will always be smaller than the alternative. So disabling the test for -Os doesn't fix the problem the test is designed to find. The real problem is that some optimizer is broken. Instead of disabling the tests, can we fix the problem instead? The goal of the testsuite should be to detect problems, not yield clean results.

If Richard disagrees with me, then I would defer to him.

David Daney
Re: print operand modifiers in the manual
On 09/06/2012 01:00 PM, Ian Lance Taylor wrote:

On Thu, Sep 6, 2012 at 11:56 AM, Mike Stump mikest...@comcast.net wrote:

Where in the manual are the machine-specific print operand modifiers documented? I've looked around and just can't seem to find them; surely I can't be the first to document such a modifier.

To the best of my knowledge they are not documented in the manual. The machine-specific asm constraint characters are documented in the manual, but I don't think the print operand modifiers are. Perhaps they should be added to the internals manual.

The pattern would seem to be to take '16.5 Output Templates and Operand Substitution' and add two subsections, one for generic 'Operand Modifiers' and another for machine-specific 'Machine Operand Modifiers'. Then someone would have to look in all the target PRINT_OPERAND implementations and document the state of the art. That could be a bit of work.

David Daney
Re: print operand modifiers in the manual
On 09/06/2012 01:48 PM, Mike Stump wrote:

On Sep 6, 2012, at 1:09 PM, David Daney ddaney.c...@gmail.com wrote:

On 09/06/2012 01:00 PM, Ian Lance Taylor wrote:

On Thu, Sep 6, 2012 at 11:56 AM, Mike Stump mikest...@comcast.net wrote:

Where in the manual are the machine-specific print operand modifiers documented? I've looked around and just can't seem to find them; surely I can't be the first to document such a modifier.

To the best of my knowledge they are not documented in the manual. The machine-specific asm constraint characters are documented in the manual, but I don't think the print operand modifiers are. Perhaps they should be added to the internals manual.

Only if you move extended asms to the internals manual. :-(

I got the idea from the Constraints documentation, and for those I always looked in the Internals Manual. But now I see that there is similar, but not identical, Constraints documentation in both the GCC manual and the Internals Manual. I am not so fluent in texinfo that I would attempt it, but it seems that these sections should be factored into separate files so that the same information can appear in both manuals and not have the current divergence of content.

David Daney
Re: GCC stack backtraces
On 08/29/2012 12:43 AM, Janne Blomqvist wrote:

On Wed, Aug 29, 2012 at 10:22 AM, Ian Lance Taylor i...@google.com wrote:

I've spent the last couple of days working on a stack backtrace library. It uses the GCC unwind interface to collect a stack trace, and parses DWARF debug info to get file/line/function information.

[snip]

I expect to use this code not just for GCC proper, but also for libgo (currently libgo uses Go code to parse DWARF, but that is not very satisfactory as that code is only available if it has been imported into the program). So I put it under a BSD license, although that is open for discussion. Also, in case it finds more uses elsewhere, I wrote it in reasonably portable C rather than C++. Does this seem like something we could usefully add to GCC? Does anybody see any big problems with it?

I haven't looked at the code, but if it is async-signal-safe it could be interesting for gfortran. Currently in libgfortran we have a backtracing routine, originally written by FX Coudert IIRC and since rewritten by yours truly a few times, that uses _Unwind_Backtrace() from libgcc and then pipes the output via addr2line, if found. Since it's invoked from a signal handler when the program (the user program, not the compiler!) crashes, it needs to be async-signal-safe. AFAIK the current implementation *should* fulfill that requirement. But something that would be async-signal-safe and wouldn't need addr2line to get symbolic info would be a nice improvement.

libgcj also uses this technique. If this were merged, it would be really nice to retrofit libgcj to use it as well.

Having this capability available from C and C++ code would also be really nice. Several times in the past I have hacked together an unwinder by calling _Unwind_Backtrace(), and then decoded the traces off-line using addr2line. An easy, low-overhead way to add function/line number information to a trace would be quite welcome.

I would almost say to put it in libgcc alongside _Unwind_Backtrace(), but that doesn't seem the proper place for it. It would be very convenient though.

Thanks,
David Daney
Re: MIPS testsuite patch for --with-synci configurations
On 06/11/2012 12:30 PM, Steve Ellcey wrote:

This patch fixes MIPS failures that occur when configuring GCC with the --with-synci option. In this case GCC will generate the synci instruction by default on a MIPS architecture that supports it. On an architecture that does not support synci, GCC will not generate it by default, but it does emit a warning saying that it is not doing so, and that warning causes tests that explicitly specify a MIPS architecture that does not support synci, but do not explicitly turn synci off, to fail with an extra, unexpected warning. I initially looked at changing GCC to remove the warning, but that did not look workable; see http://gcc.gnu.org/ml/gcc/2012-06/msg00100.html for more details. This patch adds the -mno-synci flag to MIPS tests that specify an architecture that does not support synci, thus getting rid of the warning message and making the tests pass. Tested with the mips-linux-gnu and mips-sde-elf targets, both with and without --with-synci in the GCC configuration. OK to checkin?

I wonder if it would make more sense to modify the testsuite driver to take care of this. It seems like the set of files with the -mno-synci annotation could easily become different from the set that requires it.

David Daney

Steve Ellcey  <sell...@mips.com>

2012-06-11  Steve Ellcey  <sell...@mips.com>

	* gcc.target/mips/call-saved-1.c: Add -mno-synci flag.
	* gcc.target/mips/call-saved-2.c: Ditto.
	* gcc.target/mips/call-saved-3.c: Ditto.
	* gcc.target/mips/clear-cache-2.c: Ditto.
	* gcc.target/mips/ext-8.c: Ditto.
	* gcc.target/mips/extend-2.c: Ditto.
	* gcc.target/mips/fix-r4000-1.c: Ditto.
	* gcc.target/mips/fix-r4000-10.c: Ditto.
	* gcc.target/mips/fix-r4000-11.c: Ditto.
	* gcc.target/mips/fix-r4000-12.c: Ditto.
	* gcc.target/mips/fix-r4000-2.c: Ditto.
	* gcc.target/mips/fix-r4000-3.c: Ditto.
	* gcc.target/mips/fix-r4000-4.c: Ditto.
	* gcc.target/mips/fix-r4000-5.c: Ditto.
	* gcc.target/mips/fix-r4000-6.c: Ditto.
	* gcc.target/mips/fix-r4000-7.c: Ditto.
	* gcc.target/mips/fix-r4000-8.c: Ditto.
	* gcc.target/mips/fix-r4000-9.c: Ditto.
	* gcc.target/mips/fix-vr4130-1.c: Ditto.
	* gcc.target/mips/fix-vr4130-2.c: Ditto.
	* gcc.target/mips/fix-vr4130-3.c: Ditto.
	* gcc.target/mips/fix-vr4130-4.c: Ditto.
	* gcc.target/mips/fpr-moves-1.c: Ditto.
	* gcc.target/mips/fpr-moves-2.c: Ditto.
	* gcc.target/mips/loongson-muldiv-1.c: Ditto.
	* gcc.target/mips/loongson-muldiv-2.c: Ditto.
	* gcc.target/mips/loongson-shift-count-truncated-1.c: Ditto.
	* gcc.target/mips/loongson-simd.c: Ditto.
	* gcc.target/mips/loongson3a-muldiv-1.c: Ditto.
	* gcc.target/mips/loongson3a-muldiv-2.c: Ditto.
	* gcc.target/mips/madd-1.c: Ditto.
	* gcc.target/mips/madd-2.c: Ditto.
	* gcc.target/mips/madd-5.c: Ditto.
	* gcc.target/mips/madd-6.c: Ditto.
	* gcc.target/mips/madd-7.c: Ditto.
	* gcc.target/mips/madd-8.c: Ditto.
	* gcc.target/mips/maddu-1.c: Ditto.
	* gcc.target/mips/maddu-2.c: Ditto.
	* gcc.target/mips/msub-1.c: Ditto.
	* gcc.target/mips/msub-2.c: Ditto.
	* gcc.target/mips/msub-5.c: Ditto.
	* gcc.target/mips/msub-6.c: Ditto.
	* gcc.target/mips/msub-7.c: Ditto.
	* gcc.target/mips/msub-8.c: Ditto.
	* gcc.target/mips/msubu-1.c: Ditto.
	* gcc.target/mips/msubu-2.c: Ditto.
	* gcc.target/mips/nmadd-1.c: Ditto.
	* gcc.target/mips/nmadd-2.c: Ditto.
	* gcc.target/mips/nmadd-3.c: Ditto.
	* gcc.target/mips/no-smartmips-ror-1.c: Ditto.
	* gcc.target/mips/pr34831.c: Ditto.
	* gcc.target/mips/r10k-cache-barrier-10.c: Ditto.
	* gcc.target/mips/r3900-mult.c: Ditto.
	* gcc.target/mips/rsqrt-1.c: Ditto.
	* gcc.target/mips/rsqrt-2.c: Ditto.
	* gcc.target/mips/rsqrt-3.c: Ditto.
	* gcc.target/mips/rsqrt-4.c: Ditto.
	* gcc.target/mips/sb1-1.c: Ditto.
	* gcc.target/mips/vr-mult-1.c: Ditto.
	* gcc.target/mips/vr-mult-2.c: Ditto.
[...]
Re: Remove obsolete Tru64 UNIX V5.1B support
On 03/06/2012 05:14 AM, Rainer Orth wrote:

Joseph S. Myers <jos...@codesourcery.com> writes:

There's one particular issue: the change to java/io/File.java required me to regenerate the .class file in classpath. I've used Sun javac -target 1.5 for that and hope I got it right.

I'd have expected regeneration to use GCJ built to use ECJ, though I don't know.

I've never tried this. Given that the .class file lives below libjava/classpath and has to be synced with upstream Classpath anyway, I hope the Java maintainers will take care of that.

This is documented (although perhaps badly) in install/configure.html. You should use --enable-java-maintainer-mode; this will cause the build to use ecj and gjavah to regenerate all the generated files in the 'standard' manner.

At least with the javac-built File.class I had no libjava testsuite failures.

It probably results in a usable .class file, but is error prone and not very reproducible.

David Daney
Re: [RFC 4.8] use ip+cfa to identify unwind frames, if possible
On 02/16/2012 03:32 PM, Richard Sandiford wrote:

David Daney <david.da...@cavium.com> writes:

On 02/16/2012 02:12 PM, Richard Henderson wrote:
[...]

Thanks for the patch.

index 1c19f8b..59d4560 100644
--- a/gcc/config/mips/mips.h
+++ b/gcc/config/mips/mips.h
@@ -2920,3 +2920,15 @@ extern GTY(()) struct target_globals *mips16_globals;
    with arguments ARGS.  */
 #define PMODE_INSN(NAME, ARGS) \
   (Pmode == SImode ? NAME ## _si ARGS : NAME ## _di ARGS)
+
+/* For mips32 mode we have bits 0 and 1 zero free, but for mips16 mode,
+   bit 0 indicates mips16 mode, and bit 1 is thence meaningful.  Thus
+   the only free bits would be at the top of the address space.
+   Can we trust that we'll never try to unwind in kernel mode? */

That's too bad. I guess if we ever want to unwind in kernel mode, we can say no mips16 and switch it to a low-order bit for that application. Or write our own unwinder.

I suppose the problem is that baremetal often runs in kernel mode. (Normal mips*-elf gdbsim testing works that way.) But I think we'd be fine if we restrict the IP matching to non-MIPS16 mode. GCC doesn't have any other special RA save registers, and now's a good time to say that such a thing won't be allowed in MIPS16 or microMIPS code (or in anything, probably). So maybe we could set private_1 to (IP | 2) when (IP & 3) == 0, and leave it at 0 otherwise. It's then a forced unwind unless (IP & 3) == 2. Not as elegant as the single bit though.

Just off the top of my head (without actually looking at the code): Is there anything that could be done with the register zero save slot (if it even exists)?

Richard
Re: [RFC 4.8] use ip+cfa to identify unwind frames, if possible
On 02/16/2012 02:12 PM, Richard Henderson wrote: [...] index 1c19f8b..59d4560 100644 --- a/gcc/config/mips/mips.h +++ b/gcc/config/mips/mips.h @@ -2920,3 +2920,15 @@ extern GTY(()) struct target_globals *mips16_globals; with arguments ARGS. */ #define PMODE_INSN(NAME, ARGS) \ (Pmode == SImode ? NAME ## _si ARGS : NAME ## _di ARGS) + +/* For mips32 mode we have bits 0 and 1 zero free, but for mips16 mode, + bit 0 indicates mips16 mode, and bit 1 is thence meaningful. Thus + the only free bits would be at the top of the address space. + Can we trust that we'll never try to unwind in kernel mode? */ That's too bad. I guess if we ever want to unwind in kernel mode, we can say no mips16 and switch it to a low-order bit for that application. Or write our own unwinder. David Daney
[Patch libgo]: Move Iopl and Ioperm to 386/amd64 specific libcall_linux_*.go files.
Ian, As discussed several months ago, libgo will not run on mips because it references the x86-specific system calls iopl() and ioperm(). These system calls do not exist in mips*-linux, so we move them to new 386/amd64 specific libcall_linux_*.go files. The attached patch was tested on x86_64-linux-gnu with no libgo failures. There are still some other problems with mips*-linux, but this makes forward progress. It is unclear what kind of change log is required, so I do not supply one. Cavium, Inc. should now have a corporate contributor license agreement on file, so I think you can commit this upstream if acceptable. Thanks, David Daney

Index: go/syscall/libcall_linux_amd64.go
===
--- go/syscall/libcall_linux_amd64.go (revision 0)
+++ go/syscall/libcall_linux_amd64.go (revision 0)
@@ -0,0 +1,13 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// GNU/Linux library calls amd64 specific.
+
+package syscall
+
+//sys Ioperm(from int, num int, on int) (errno int)
+//ioperm(from _C_long, num _C_long, on int) int
+
+//sys Iopl(level int) (errno int)
+//iopl(level int) int

Index: go/syscall/libcall_linux_386.go
===
--- go/syscall/libcall_linux_386.go (revision 0)
+++ go/syscall/libcall_linux_386.go (revision 0)
@@ -0,0 +1,13 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// GNU/Linux library calls 386 specific.
+
+package syscall
+
+//sys Ioperm(from int, num int, on int) (errno int)
+//ioperm(from _C_long, num _C_long, on int) int
+
+//sys Iopl(level int) (errno int)
+//iopl(level int) int

Index: go/syscall/libcall_linux.go
===
--- go/syscall/libcall_linux.go (revision 182179)
+++ go/syscall/libcall_linux.go (working copy)
@@ -207,12 +207,6 @@ func PtraceDetach(pid int) (errno int) {
 // //sysnb Gettid() (tid int)
 // //gettid() Pid_t
 
-//sys Ioperm(from int, num int, on int) (errno int)
-//ioperm(from _C_long, num _C_long, on int) int
-
-//sys Iopl(level int) (errno int)
-//iopl(level int) int
-
 // FIXME: mksysinfo linux_dirent
 //Or just abandon this function.
 // //sys Getdents(fd int, buf []byte) (n int, errno int)
Re: [PATCH] Fix crtstuff.c with init_array support
On 12/05/2011 07:38 PM, H.J. Lu wrote: On Mon, Dec 5, 2011 at 5:42 PM, Andrew Pinski andrew.pin...@caviumnetworks.com wrote: Hi, Like the .ctors array, the __do_global_dtors_aux_fini_array_entry and __frame_dummy_init_array_entry arrays need a specific alignment. This patch fixes those two arrays. This patch fixes the bootstrap on mips64-linux-gnu. Bootstrapped on mips64-linux-gnu. Bootstrapped and tested on x86-linux-gnu with no regressions. Thanks, Andrew Pinski libgcc/ChangeLog: * crtstuff.c (__do_global_dtors_aux_fini_array_entry): Align to the size of func_ptr. (__frame_dummy_init_array_entry): Likewise. The .eh_frame section has a similar problem: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27576 But you have known about that for over five years and x86_64 still seems to work, so it must not really be hurting anything. Andrew's patch fixes bad code in libgcc which causes bootstrap breakage on MIPS when used with recent binutils releases. So I would propose that it be considered as-is, without changing the .eh_frame bits. If someone wants to change the .eh_frame things in crtstuff.c, they are free to do that, but it shouldn't really be a gating factor for this patch. David Daney
Re: Go patch committed: Update Go library
On 11/02/2011 10:54 AM, Ian Lance Taylor wrote: Rainer Orthr...@cebitec.uni-bielefeld.de writes: All go and libgo execution tests are failing for me with this patch on x86_64-unknown-linux-gnu (CentOS 5.5, I think). The output is:

/var/gcc/regression/trunk/2.6.18-gcc-gas-gld/build/x86_64-unknown-linux-gnu/./libgo/.libs/libgo.so: undefined reference to `inotify_init1'
/var/gcc/regression/trunk/2.6.18-gcc-gas-gld/build/x86_64-unknown-linux-gnu/./libgo/.libs/libgo.so: undefined reference to `fallocate'
/var/gcc/regression/trunk/2.6.18-gcc-gas-gld/build/x86_64-unknown-linux-gnu/./libgo/.libs/libgo.so: undefined reference to `sync_file_range'
collect2: error: ld returned 1 exit status
FAIL: go.go-torture/execute/array-1.go compilation, -O0

I assume that CentOS 5.5 uses some version of glibc before version 2.6. The three functions you mention are not supported in older versions of glibc. Fortunately, they are not called anywhere else in the library, so this patch takes the easy way out and simply removes them. Bootstrapped and ran the Go testsuite on x86_64-unknown-linux-gnu. Committed to mainline. On Linux you also have iopl and ioperm, which are x86 only. On MIPS we fail because of those. I was going to create a patch to move those two to libcall_linux_{386,amd64}.go. An alternative would be to remove them too. David Daney
Re: [PLUGIN] dlopen and RTLD_NOW
On 09/05/2011 12:50 AM, Romain Geissler wrote: Hi, Is there any particular reason to load plugins with the RTLD_NOW option? This option forces .so symbol resolution to be completely done at load time, but it could instead be done only when a symbol is needed (RTLD_LAZY). Here is the dlopen line in plugin.c: dl_handle = dlopen (plugin->full_name, RTLD_NOW | RTLD_GLOBAL); My issue is, I want to load the same plugin.so in both cc1 and cc1plus, but in the C++ case, I may need to reference some cc1plus specific symbols. I can check whether cc1 or cc1plus loaded the plugin and thus use custom C++ symbols only when present. With RTLD_NOW, the plugin fails to load in cc1 as symbol resolution is forced at load time. Can you supply weak binding implementations for the missing functions? That might allow the linking to succeed. David Daney
Re: [PLUGIN] dlopen and RTLD_NOW
On 09/06/2011 10:55 AM, David Daney wrote: On 09/05/2011 12:50 AM, Romain Geissler wrote: Hi, Is there any particular reason to load plugins with the RTLD_NOW option? This option forces .so symbol resolution to be completely done at load time, but it could instead be done only when a symbol is needed (RTLD_LAZY). Here is the dlopen line in plugin.c: dl_handle = dlopen (plugin->full_name, RTLD_NOW | RTLD_GLOBAL); My issue is, I want to load the same plugin.so in both cc1 and cc1plus, but in the C++ case, I may need to reference some cc1plus specific symbols. I can check whether cc1 or cc1plus loaded the plugin and thus use custom C++ symbols only when present. With RTLD_NOW, the plugin fails to load in cc1 as symbol resolution is forced at load time. Can you supply weak binding implementations for the missing functions? That might allow the linking to succeed. ... And if I had read the entire thread before responding, I would have seen that others had already suggested the same thing. Sorry for the noise. David Daney
Re: ARM Linux EABI: unwinding through a segfault handler
On 08/25/2011 05:26 AM, Andrew Haley wrote: Throwing an exception through a segfault handler doesn't always work on ARM: the attached example fails on current gcc trunk. panda-9:~ $ g++ segv.cc -fnon-call-exceptions -g panda-9:~ $ ./a.out terminate called after throwing an instance of 'FoobarException*' Aborted The bug is that _Unwind_GetIPInfo doesn't correctly set ip_before_insn. Instead, it always sets it to zero; it should be set to 1 if this is a frame created by a signal handler: #define _Unwind_GetIPInfo(context, ip_before_insn) \ (*ip_before_insn = 0, _Unwind_GetGR (context, 15) & ~(_Unwind_Word)1) Fixing this on ARM is hard because signal frames aren't specially marked as they are on systems that use DWARF unwinder data. I have a patch that works on systems where the signal restorer is exactly: mov r7, $SYS_rt_sigreturn ; swi 0x0 It works as a proof of concept, but it's fugly. For what it's worth, I did the equivalent on MIPS. Once you do this, it is a de facto ABI. Probably the ARM Linux maintainers should be consulted to see if they are willing to consider the possibility of never changing it. I think all Linux ABIs should support unwinding through signal handlers, so adding this makes sense to me. David Daney So, suggestions welcome. Is there a nice way to detect a signal frame?
Re: [patch] Disable static build for libjava
On 07/07/2011 09:57 AM, Matthias Klose wrote: On 07/07/2011 06:51 PM, David Daney wrote: On 07/07/2011 09:27 AM, Matthias Klose wrote: As discussed at the Google GCC gathering, disable the build of static libraries in libjava, which should cut the build time of libjava by 50%. The static libjava build isn't useful out of the box, and I don't see it packaged by Linux distributions either. The AC_PROG_LIBTOOL check is needed to get access to the enable_shared macro. I'm unsure about the check in the switch construct. Taken from libtool.m4, and determining the value of enable_shared_with_static_runtimes. Ok for the trunk? 2011-07-07 Matthias Klosed...@ubuntu.com * Makefile.def (target_modules/libjava): Pass $(libjava_disable_static). * configure.ac: Check for libtool, pass --disable-static in libjava_disable_static. * Makefile.in: Regenerate. * configure: Likewise. My autoconf fu is not what it used to be. It is fine if static libraries are disabled by default, but it should be possible to enable them from the configure command line. It is unclear to me if this patch does that. no. I assume an extra option --enable-static-libjava would be needed. Not being a libjava maintainer, I cannot force you to add something like that as part of the patch, but I think it would be a good idea. Also I would like to go on record as disagreeing with the statement that 'static libjava build isn't useful out of the box' I remember that there were some restrictions with the static library. but maybe I'm wrong. There are restrictions, but it is still useful for some embedded environments. David Daney
Re: RFC: A new MIPS64 ABI
On 05/09/2011 07:28 AM, Ralf Baechle wrote: On Mon, Feb 21, 2011 at 07:45:41PM +, Richard Sandiford wrote: David Daneydda...@caviumnetworks.com writes: Background: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. Because there are not already enough MIPS ABIs, I present the ... Proposal: A new ABI to support 4GB of address space with 32-bit pointers. FWIW, I'd be happy to see this go into GCC. So am I for the kernel, primarily because it's not really a new ABI but an enhancement of the existing N32 ABI. The patches are resting peacefully on my laptop. Now with endorsements like these, I may be forced to actually finish them... David Daney
Re: RFC: A new MIPS64 ABI
On 05/09/2011 07:32 AM, Andrew Haley wrote: On 05/09/2011 03:28 PM, Ralf Baechle wrote: On Mon, Feb 21, 2011 at 07:45:41PM +, Richard Sandiford wrote: David Daneydda...@caviumnetworks.com writes: Background: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. Because there are not already enough MIPS ABIs, I present the ... Proposal: A new ABI to support 4GB of address space with 32-bit pointers. FWIW, I'd be happy to see this go into GCC. So am I for the kernel primarily because it's not really a new ABI but an enhancement of the existing N32 ABI. Would it work with no kernel changes? It depends on your definition of 'work'. Programs compiled with the new ABI variant would work just as well as when compiled for Genuine n32. However, currently the kernel will only give out 2GB worth of address space to n32 programs. To get any benefit from the new ABI variant, the kernel would have to be modified to use the entire 4GB of usable address space. David Daney.
Re: RFC: A new MIPS64 ABI
On 05/06/2011 01:29 AM, Alexandre Oliva wrote: On Feb 15, 2011, David Daneydda...@caviumnetworks.com wrote: On 02/15/2011 09:56 AM, Alexandre Oliva wrote: On Feb 14, 2011, David Daneydda...@caviumnetworks.com wrote: So, sorry if this is a dumb question, but wouldn't it be much easier to keep on using sign-extended addresses, and just make sure the kernel never allocates a virtual memory range that crosses a sign-bit change, No, it is not possible. The MIPS (and MIPS64) hardware architecture does not allow userspace access to addresses with the high bit (two bits for mips64) set. Interesting. I guess this makes it far easier to transition to the u32 ABI: n32 addresses all have the 32-bit MSB bit clear, so n32 binaries can be used within u32 environments, as long as the environment refrains from using addresses that have the MSB bit set. Correct. So we could switch lib32 to u32, have a machine-specific bit set for u32 binaries, and if the kernel starts an executable or interpreter that has that bit clear, it will refrain from allocating any n32-invalid address for that process. Furthermore, libc, upon loading a library, should be able to notify the kernel when an n32 library is to be loaded, to which the kernel would respond either with failure (if that process already uses u32-valid but n32-invalid addresses) or success (switching to n32 mode if not in it already). Am I missing any other issues? No, this is pretty much what Ralf and I came up with on IRC. We tag u32 objects (in a similar manner to how non-executable stack is done). The linker will propagate the u32 tag as it links things together. u32 shared libraries are compatible with legacy n32 binaries as long as the OS doesn't map any memory where the address has bit 31 set. When the OS loads an n32 executable it would check the u32 tag (both of the executable and ld.so) and adjust its memory allocation strategy. The OS will continue to map the VDSO at the 2GB point. 
This will cause the maximum size of any object to be compatible with the 32-bit n32 ptrdiff_t. I think once the OS puts a process into u32 mode, there is no going back. We would just have ld.so refuse to load any shared objects that were not compatible with the current mode. We would continue to place libraries in /lib32, /usr/lib32, /usr/local/lib32, etc. David Daney
Unwinding through exception handlers when PC is NULL.
Consider this program under GNU/Linux (x86_64):

---- np.c ----
#include <cstdio>
#include <csignal>
#include <cstring>
#include <cstdlib>

static void handler(int sig)
{
  printf("got signal %d\n", sig);
  throw 1;
}

int (*my_vector)(int);
int *bar;

int main(int argc, char *argv[])
{
  struct sigaction sa;
  memset(&sa, 0, sizeof(sa));
  sa.sa_handler = handler;
  sa.sa_flags = SA_RESTART;
  sigemptyset(&sa.sa_mask);
  int rv = sigaction(SIGSEGV, &sa, NULL);
  if (rv) {
    perror("sigaction failed");
    exit(1);
  }
  try {
    // *bar = 6;
    rv = my_vector(0);
  } catch (int c) {
    printf("I caught %d\n", c);
    exit(0);
  }
  printf("No exception\n");
  return 0;
}
----8<----

$ g++ -fnon-call-exceptions -o np np.cc
$ ./np
got signal 11
terminate called after throwing an instance of 'int'
Aborted (core dumped)

However if we uncomment the '// *bar = 6;' line we get:

$ ./np
got signal 11
I caught 1

This happens because the libgcc unwinder cannot find unwinding information for the PC at the point of the SIGSEGV. However, we know that usually when we end up with a PC of zero, it is because we called through a NULL function pointer. In this case, we could use the return address (perhaps with a slight adjustment to compensate for the call instruction) to do the unwinding. Would it make any sense to build something like this into libgcc? Or if we want to do this do we need to patch up the register state before executing the throw? David Daney
Re: Internal compiler error in targhooks.c: default_secondary_reload (ARM/Thumb)
On 04/04/2011 02:34 PM, Matt Fischer wrote: I'm getting an internal compiler error on the following test program:

void func(int a, int b, int c, int d, int e, int f, int g, short int h)
{
  assert(a < 100);
  assert(b < 100);
  assert(c < 100);
  assert(d < 100);
  assert(e < 100);
  assert(f < 100);
  assert(g < 100);
  assert((-1000 < h) && (h < 0));
}

Command line and output:

$ arm-none-eabi-gcc -mthumb -O2 -c -o test.o test.c
test.c: In function 'func':
test.c:11:1: internal compiler error: in default_secondary_reload, at targhooks.c:769
Please submit a full bug report, with preprocessed source if appropriate.
See https://support.codesourcery.com/GNUToolchain/ for instructions.

Look, it tells you exactly what to do. Go visit that web site. Thanks, David Daney
Re: Same cross-gcc toolchain on different hosts produces different target code?
On 03/17/2011 09:59 AM, McCall, Ronald SIK wrote: Hi, I am attempting to move an old gcc powerpc-eabi cross toolchain from an old Solaris SPARC server to a new Linux x86_64 server. I have built and installed the exact same cross-binutils (2.13.2.1), cross-gcc (3.2.3) and cross-newlib (1.11.0) from source using the exact same target (powerpc-eabi) as I originally built and installed on the Solaris server. I then built the exact same target source code using the two toolchains and got different target object code (i.e. different instruction sequences). Should that be possible? The obvious difference is that the two toolchains are running on different hosts but I wouldn't have expected that to matter (the cross-compiler should have identical functionality). Another difference is that the two toolchains were compiled with different versions of native gcc. Native gcc 3.2.3 was originally used on the Solaris server (with Solaris binutils) while native gcc 3.4.6 was used on the Linux server (with GNU binutils). I wouldn't have expected that to matter either. Any theories on why the object code is different between the two toolchains? If you let us in on what exactly the secret differences were, it would be easier to opine on this topic. David Daney
Re: Same cross-gcc toolchain on different hosts produces different target code?
On 03/17/2011 11:20 AM, McCall, Ronald SIK wrote: If you let us in on what exactly the secret differences were, it would be easier to opine on this topic. Sure thing! Here is an instruction sequence from the original Solaris toolchain: Resending to gcc@. I didn't really want a private message about it.

fe000230: 54 6a 87 be  rlwinm r10,r3,16,30,31
fe000234: 65 49 ff ff  oris r9,r10,65535
fe000238: 61 28 ff fc  ori r8,r9,65532
fe00023c: 6d 07 80 00  xoris r7,r8,32768
fe000240: 3c 00 43 30  lis r0,17200
fe000244: 90 e1 00 0c  stw r7,12(r1)
fe000248: 90 01 00 08  stw r0,8(r1)
fe00024c: 3c 80 fe 14  lis r4,-492
fe000250: c8 01 00 08  lfd f0,8(r1)
fe000254: 39 24 82 38  addi r9,r4,-32200     Instruction sequence #1
fe000258: c8 89 00 00  lfd f4,0(r9)          (continued)
fe00025c: 74 60 00 08  andis. r0,r3,8
fe000260: 3c c0 00 01  lis r6,1
fe000264: fc 60 20 28  fsub f3,f0,f4         Instruction sequence #2
fe000268: fc 40 18 18  frsp f2,f3
fe00026c: fc 20 10 90  fmr f1,f2
fe000270: fc 00 08 1e  fctiwz f0,f1
fe000274: d8 01 00 10  stfd f0,16(r1)
fe000278: 80 81 00 14  lwz r4,20(r1)
fe00027c: 98 86 8f 0d  stb r4,-28915(r6)
fe000280: 41 82 00 cc  beq- fe00034c

Here is the same instruction sequence from the newly built Linux toolchain:

fe000230: 54 6a 87 be  rlwinm r10,r3,16,30,31
fe000234: 65 49 ff ff  oris r9,r10,65535
fe000238: 61 28 ff fc  ori r8,r9,65532
fe00023c: 6d 07 80 00  xoris r7,r8,32768
fe000240: 3c 00 43 30  lis r0,17200
fe000244: 90 e1 00 0c  stw r7,12(r1)
fe000248: 90 01 00 08  stw r0,8(r1)
fe00024c: 3c 80 fe 14  lis r4,-492
fe000250: c8 01 00 08  lfd f0,8(r1)
fe000254: c9 a4 87 b0  lfd f13,-30800(r4)    Instruction sequence #1
fe000258: fc 60 68 28  fsub f3,f0,f13        Instruction sequence #2
fe00025c: 74 60 00 08  andis. r0,r3,8
fe000260: 3c 80 00 01  lis r4,1
fe000264: fc 40 18 18  frsp f2,f3
fe000268: fc 20 10 90  fmr f1,f2
fe00026c: fc 00 08 1e  fctiwz f0,f1
fe000270: d8 01 00 10  stfd f0,16(r1)
fe000274: 80 e1 00 14  lwz r7,20(r1)
fe000278: 98 e4 8f 0d  stb r7,-28915(r4)
fe00027c: 41 82 00 c8  beq- fe000344

Instruction sequence #1 has been combined into a single equivalent instruction. Instruction sequence #2 moved. Register usage is also different but equivalent. Ron McCall
Re: RFC: A new MIPS64 ABI
On 02/14/2011 12:29 PM, David Daney wrote: Background: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. Because there are not already enough MIPS ABIs, I present the ... Proposal: A new ABI to support 4GB of address space with 32-bit pointers. The proposed new ABI would only be available on MIPS64 platforms. It would be identical to the current MIPS n32 ABI *except* that pointers would be zero-extended rather than sign-extended when resident in registers. In the remainder of this document I will call it 'n32-big'. As a result, applications would have access to a full 4GB of virtual address space. The operating environment would be configured such that the entire lower 4GB of the virtual address space was available to the program. At a low level here is how it would work:

1) Load a pointer to a register from memory:

   n32:     LW  $reg, offset($reg)
   n32-big: LWU $reg, offset($reg)

2) Load an address constant into a register:

   n32:     LUI $reg, high_part
            ORI $reg, low_part

That is not reality. Really it is:

   n32:     LUI   $reg, R_MIPS_HI16
            ADDIU $reg, R_MIPS_LO16

   n32-big: ORI  $reg, high_part
            DSLL $reg, $reg, 16
            ORI  $reg, low_part

This one would really be:

   n32-big: ORI   $reg, R_MIPS_HI16
            DSLL  $reg, $reg, 16
            ADDIU $reg, R_MIPS_LO16

Q: What would have to change to make this work?

o A new ELF header flag to denote the ABI.
o Linker support to use proper library search paths, and linker scripts to set the INTERP program header, etc.
o GCC has to emit code for the new ABI.
o Could all existing n32 relocation types be used? I think so.
o Runtime libraries would have to be placed in a new location (/lib32big, /usr/lib32big ...)
o The C library's ld.so would have to use a distinct LD_LIBRARY_PATH for n32-big code.
o What would the Linux system call interface be?
I would propose using the existing Linux n32 system call interface. Most system calls would just work. Some, that pass pointers in in-memory structures, might require kernel modifications (sigaction() for example).
Re: MIPS: Trouble with address calculation near the useg/kseg boundary
On 02/16/2011 01:44 PM, Paul Koning wrote: I'm running into a crash caused by mishandling of the address calculation of an array element address when that array is near the bottom of kseg0 (0x80000000). The code essentially does this: foo = v[i - 2].elem; where i is currently zero. Assume for now the negative array offset is valid -- data structure elements in question exist to both sides of the label v. The generated code looks like this:

/* i is in v0 */
addiu v0, -2
sll   v0, 3
lui   v1, 0x8000
addu  v0, v1
lbu   a1, 7110(v0)

What's going on here is that &v[0].elem is 0xffffffff80007110. The reference is valid -- array elements are 8 bytes, so element -2 is still in kseg0. However, the addu produces the value 0x7ffffff0 in v0 -- the result of adding -16 to the 32-bit value 0x80000000. Given that I have an ABI with 64-bit registers -- even though it has 32-bit pointers -- I would say the address adjustment should have been done with daddu; if that had been done I would have gotten the correct address. GCC is 4.5.1, NetBSD target. This is why it is a bad idea to place anything in the 2^16 byte region centered on the split. The Linux kernel works around this by not using the lower 32kb of ckseg0. It also never uses the top 32kb of useg when in 32-bit mode. David Daney.
Re: MIPS: Trouble with address calculation near the useg/kseg boundary
On 02/16/2011 02:10 PM, Paul Koning wrote: On Feb 16, 2011, at 5:08 PM, David Daney wrote: On 02/16/2011 01:44 PM, Paul Koning wrote: I'm running into a crash caused by mishandling of the address calculation of an array element address when that array is near the bottom of kseg0 (0x80000000). The code essentially does this: foo = v[i - 2].elem; where i is currently zero. Assume for now the negative array offset is valid -- data structure elements in question exist to both sides of the label v. The generated code looks like this:

/* i is in v0 */
addiu v0, -2
sll   v0, 3
lui   v1, 0x8000
addu  v0, v1
lbu   a1, 7110(v0)

What's going on here is that &v[0].elem is 0xffffffff80007110. The reference is valid -- array elements are 8 bytes, so element -2 is still in kseg0. However, the addu produces the value 0x7ffffff0 in v0 -- the result of adding -16 to the 32-bit value 0x80000000. Given that I have an ABI with 64-bit registers -- even though it has 32-bit pointers -- I would say the address adjustment should have been done with daddu; if that had been done I would have gotten the correct address. GCC is 4.5.1, NetBSD target. This is why it is a bad idea to place anything in the 2^16 byte region centered on the split. The Linux kernel works around this by not using the lower 32kb of ckseg0. It also never uses the top 32kb of useg when in 32-bit mode. Ok, so are you suggesting I have to modify my kernel in order to work around this compiler bug? What is the state of your C0_Status[{KX,SX,UX}] bits? It is not really a compiler bug, but rather a defect in the n32 ABI. When using 32-bit pointers you can only do 32-bit operations on them. To do otherwise raises the possibility of generating addresses that fall outside of the allowed range. But as you have found, there is this 64kb region centered on the split where behavior can become undefined.
The only real way to avoid it would be to prohibit GCC from generating non-zero offsets for load/store instructions; the resulting code would be slower and bigger than the current behavior of using the offsets in address calculations. So to answer your question: I think your 'compiler bug' is a false predicate. And yes, I do suggest that you modify your kernel. David Daney.
Re: MIPS: Trouble with address calculation near the useg/kseg boundary
On 02/16/2011 02:32 PM, Paul Koning wrote: On Feb 16, 2011, at 5:25 PM, David Daney wrote: What is the state of your C0_Status[{KX,SX,UX}] bits? 0, 0, 0 It is not really a compiler bug, but rather a defect in the n32 ABI. When using 32-bit pointers you can only do 32-bit operations on them. To do otherwise raises the possibility of generating addresses that fall outside of the allowed range. Sure, I understand that. Actually, I'm using O64, but the issue is the same as with N32. ?? The problem is that the machine is doing 64-bit arithmetic when applying an offset to a base register. So what the compiler has to do is use valid 64-bit operations on 32-bit sign-extended memory and constant values. But as you have found, there is this 64kb region centered on the split where behavior can become undefined. The only real way to avoid it would be to prohibit GCC from generating non-zero offsets for load/store instructions; the resulting code would be slower and bigger than the current behavior of using the offsets in address calculations. I don't think it needs to do anything slower. As far as I can tell, the only change needed is that arithmetic on pointer values held in registers must be done with 64-bit operations. That just changes the opcodes, but it doesn't make them any longer. So to answer your question: I think your 'compiler bug' is a false predicate. And yes, I do suggest that you modify your kernel. I don't have the option of modifying the kernel, since I'm dealing with data structures that are at hardwired addresses and moving them isn't an available option. I guess I'll go beat on the code generator... It might be easier to do something to force the base address into a register so that the offsetting doesn't happen. Something like:

struct foo *v = (struct foo *)0x80007110UL;
v -= 2;
/* Clobber v so GCC forces the value into a register. */
asm ("" : "+r" (v));
int bar = v->element;
. . .
Re: RFC: A new MIPS64 ABI
On 02/14/2011 07:00 PM, Matt Thomas wrote: On Feb 14, 2011, at 6:50 PM, David Daney wrote: On 02/14/2011 06:33 PM, Matt Thomas wrote: On Feb 14, 2011, at 6:22 PM, David Daney wrote: On 02/14/2011 04:15 PM, Matt Thomas wrote: I have to wonder if it's worth the effort. The primary problem I see is that this new ABI requires a 64bit kernel since faults through the upper 2G will go through the XTLB miss exception vector. Yes, that is correct. It is a 64-bit ABI, and like the existing n32 ABI requires a 64-bit kernel. N32 doesn't require a LP64 kernel, just a 64-bit register aware kernel. Your N32-big does require a LP64 kernel. But using 'official' kernel sources the only way to get a 64-bit register aware kernel is for it to also be LP64. So effectively, you do in fact need a 64-bit kernel to run n32 userspace code. Not all the world is Linux. :) NetBSD supports N32 kernels. Use of LP32 in the kernel is only really appropiate in systems with a small amount of memory. The proposed n32-big would run on such systems, but would probably *not* have an advantage over standard n32. My proposed ABI would need trivial kernel changes: o Fix a couple of places where pointers are sign extended instead of zero extended. I think you'll find there are more of these than you'd expect. You could be right, but to date in auditing the Linux kernel, sigaction() is the only place I have found. o Change the stack address and address ranges returned by mmap(). My biggest concern is that many many mips opcodes expect properly sign-extended value for registers. Thusly N32-big will require using daddu/dadd/dsub/dsubu for addresses. So that's yet another departure from N32 which can use addu/add/sub/subu. That's right. Which is why I said... The main work would be in the compiler toolchain and runtime libraries. You'd also need to update gas for la and dla expansion. I am counting gas, ld and libc as part of the 'compiler toolchain' David Daney
Re: RFC: A new MIPS64 ABI
On 02/15/2011 09:56 AM, Alexandre Oliva wrote: On Feb 14, 2011, David Daneydda...@caviumnetworks.com wrote: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. The proposed new ABI would only be available on MIPS64 platforms. It would be identical to the current MIPS n32 ABI *except* that pointers would be zero-extended rather than sign-extended when resident in registers. FTR, I don't really know why my Yeeloong is limited to 31-bit addresses, and I kind of hoped an n32 userland would improve that WRT o32, without wasting memory with longer pointers like n64 would. So, sorry if this is a dumb question, but wouldn't it be much easier to keep on using sign-extended addresses, and just make sure the kernel never allocates a virtual memory range that crosses a sign-bit change, or whatever other reason there is for addresses to be limited to the positive 2GB range in n32? No, it is not possible. The MIPS (and MIPS64) hardware architecture does not allow userspace access to addresses with the high bit (two bits for mips64) set. Your complaint is a good summary of why I am thinking about n32-big. David Daney
Re: RFC: A new MIPS64 ABI
On 02/15/2011 09:32 AM, Joseph S. Myers wrote: On Mon, 14 Feb 2011, Joe Buck wrote: On Mon, Feb 14, 2011 at 05:57:13PM -0800, Paul Koning wrote: It seems that this proposal would benefit programs that need more than 2 GB but less than 4 GB, and for some reason really don't want 64 bit pointers. This seems like a microscopically small market segment. I can't see any sense in such an effort. I remember the RHEL hugemem patch being a big deal for lots of their customers, so a process could address the full 4GB instead of only 3GB on a 32-bit machine. If I recall correctly, upstream didn't want it (get a 64-bit machine!) but lots of paying customers clamored for it. (I personally don't have an opinion on whether it's worth bothering with). As I've been warning recently in the context of the operator new[] overflow checks discussion, even if your process is addressing 4GB in such circumstances it can't safely use single objects of 2GB or more, and it's a security problem when malloc/calloc/etc. allow such objects to be created. See PR 45779. (There could well be issues with pointer comparisons as well as pointer differences, although there at least it's possible to be consistent if you don't allow objects to wrap around both in the middle and at the end of the address space.) Thanks Joseph, My idea for n32-big is that there would never be wraparound issues in the middle. The address space is contiguous from 0 to 4GB. Typically the area around the 4GB limit would be occupied by the stack and perhaps several other regions reserved by the OS (vdso, etc.). At the ends there could be wraparound/truncation issues, but this is no different than with the ABIs of most 32-bit targets. I don't know how hard it would be to make ptrdiff_t a signed 64-bit type. That would certainly complicate things somewhat. David Daney
RFC: A new MIPS64 ABI
Background: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. Because there are not already enough MIPS ABIs, I present the ...

Proposal: A new ABI to support 4GB of address space with 32-bit pointers.

The proposed new ABI would only be available on MIPS64 platforms. It would be identical to the current MIPS n32 ABI *except* that pointers would be zero-extended rather than sign-extended when resident in registers. In the remainder of this document I will call it 'n32-big'. As a result, applications would have access to a full 4GB of virtual address space. The operating environment would be configured such that the entire lower 4GB of the virtual address space was available to the program.

At a low level, here is how it would work:

1) Load a pointer to a register from memory:

   n32:      LW   $reg, offset($reg)
   n32-big:  LWU  $reg, offset($reg)

2) Load an address constant into a register:

   n32:      LUI  $reg, high_part
             ORI  $reg, low_part
   n32-big:  ORI  $reg, high_part
             DSLL $reg, $reg, 16
             ORI  $reg, low_part

Q: What would have to change to make this work?

o A new ELF header flag to denote the ABI.
o Linker support to use proper library search paths, and linker scripts to set the INTERP program header, etc.
o GCC has to emit code for the new ABI.
o Could all existing n32 relocation types be used? I think so.
o Runtime libraries would have to be placed in a new location (/lib32big, /usr/lib32big ...)
o The C library's ld.so would have to use a distinct LD_LIBRARY_PATH for n32-big code.
o What would the Linux system call interface be? I would propose using the existing Linux n32 system call interface. Most system calls would just work. Some, that pass pointers in in-memory structures, might require kernel modifications (sigaction() for example).
Re: RFC: A new MIPS64 ABI
On 02/14/2011 04:15 PM, Matt Thomas wrote: On Feb 14, 2011, at 12:29 PM, David Daney wrote: Background: Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of user virtual memory space. This is due to the way MIPS32 memory space is segmented. Only the range from 0..2^31-1 is available. Pointer values are always sign extended. Because there are not already enough MIPS ABIs, I present the ... Proposal: A new ABI to support 4GB of address space with 32-bit pointers. The proposed new ABI would only be available on MIPS64 platforms. It would be identical to the current MIPS n32 ABI *except* that pointers would be zero-extended rather than sign-extended when resident in registers. In the remainder of this document I will call it 'n32-big'. As a result, applications would have access to a full 4GB of virtual address space. The operating environment would be configured such that the entire lower 4GB of the virtual address space was available to the program. I have to wonder if it's worth the effort. The primary problem I see is that this new ABI requires a 64-bit kernel since faults through the upper 2GB will go through the XTLB miss exception vector. Yes, that is correct. It is a 64-bit ABI, and like the existing n32 ABI requires a 64-bit kernel. At a low level here is how it would work: 1) Load a pointer to a register from memory: n32: LW $reg, offset($reg) n32-big: LWU $reg, offset($reg) That might be sufficient for userland, but the kernel will need to do similar things (even if a 64-bit kernel) when accessing structures supplied by 32-bit syscalls. It is a userspace ABI. The MIPS64 kernel already uses something similar (the -msym32 option). There would be no change to the kernel. It seems to be workable, but if you need the additional address space why not use n64? In n64 pointers are 64 bits wide. Programs that use many pointer-laden data structures have a much larger cache/memory footprint than their n32 versions.
Also the number of instructions required to load a 64-bit constant is much larger than that needed to load a 32-bit constant. David Daney
Re: RFC: A new MIPS64 ABI
On 02/14/2011 06:14 PM, Joe Buck wrote: On Mon, Feb 14, 2011 at 05:57:13PM -0800, Paul Koning wrote: It seems that this proposal would benefit programs that need more than 2 GB but less than 4 GB, and for some reason really don't want 64 bit pointers. This seems like a microscopically small market segment. I can't see any sense in such an effort. I remember the RHEL hugemem patch being a big deal for lots of their customers, so a process could address the full 4GB instead of only 3GB on a 32-bit machine. If I recall correctly, upstream didn't want it (get a 64-bit machine!) but lots of paying customers clamored for it. (I personally don't have an opinion on whether it's worth bothering with). Also look at the new x86_64 ABI (See all those X32 psABI messages) that the Intel folks are actively working on. This proposal is very similar to what they are doing. David Daney
Re: RFC: A new MIPS64 ABI
On 02/14/2011 06:34 PM, Matt Thomas wrote: On Feb 14, 2011, at 6:26 PM, David Daney wrote: On 02/14/2011 06:14 PM, Joe Buck wrote: On Mon, Feb 14, 2011 at 05:57:13PM -0800, Paul Koning wrote: It seems that this proposal would benefit programs that need more than 2 GB but less than 4 GB, and for some reason really don't want 64 bit pointers. This seems like a microscopically small market segment. I can't see any sense in such an effort. I remember the RHEL hugemem patch being a big deal for lots of their customers, so a process could address the full 4GB instead of only 3GB on a 32-bit machine. If I recall correctly, upstream didn't want it (get a 64-bit machine!) but lots of paying customers clamored for it. (I personally don't have an opinion on whether it's worth bothering with). Also look at the new x86_64 ABI (See all those X32 psABI messages) that the Intel folks are actively working on. This proposal is very similar to what they are doing. untrue. N32 is closer to the X32 ABI since it is limited to 2GB. It would only be 'untrue' if I had said it was *exactly like* the X32 thing. Really n32 is, as you note, already quite similar to what X32 is trying to do. My proposal is really for a small improvement to n32 to allow doubling the size of the virtual address space to 4GB. David Daney
Re: RFC: A new MIPS64 ABI
On 02/14/2011 06:33 PM, Matt Thomas wrote: On Feb 14, 2011, at 6:22 PM, David Daney wrote: On 02/14/2011 04:15 PM, Matt Thomas wrote: I have to wonder if it's worth the effort. The primary problem I see is that this new ABI requires a 64bit kernel since faults through the upper 2G will go through the XTLB miss exception vector. Yes, that is correct. It is a 64-bit ABI, and like the existing n32 ABI requires a 64-bit kernel. N32 doesn't require a LP64 kernel, just a 64-bit register aware kernel. Your N32-big does require a LP64 kernel. But using 'official' kernel sources the only way to get a 64-bit register aware kernel is for it to also be LP64. So effectively, you do in fact need a 64-bit kernel to run n32 userspace code. My proposed ABI would need trivial kernel changes: o Fix a couple of places where pointers are sign extended instead of zero extended. o Change the stack address and address ranges returned by mmap(). The main work would be in the compiler toolchain and runtime libraries. David Daney
libgo multilib issues.
Ian, In trying to build libgo on mips64-linux we try to build all three multilibs (o32, n32 and n64 ABIs). For the n32 ABI, the configure script generates syscall_arch.go:

---
package syscall

const ARCH = "mips64"
const OS = "linux"
---

The Makefile has GOARCH = mips64, so it is trying to compile my new syscalls/syscall_linux_mips64.go. So far so good. But now what will happen for the n64 ABI? It has the exact same GOARCH. This is not good because n64 will need a different syscalls/syscall_linux_${GOARCH}.go Actually I think my syscall_linux_mips.go can be shared between both the o32 and n32 libraries. How to sort this out? David Daney
Re: libgo multilib issues.
On 01/27/2011 11:49 AM, Rainer Orth wrote: Ian Lance Taylori...@google.com writes: I guess ARCH == mips64 is going to be appropriate for any 64-bit MIPS target. If you need a different syscall_linux_${GOARCH} file for different mips64 targets, then I think we're going to need to test some conditional in libgo/Makefile.am to add the file to build. E.g., look at syscall_filesize_file. This is the same difference as between sparc and sparc64/sparcv9: while all recent SPARC CPUs are capable of executing 64-bit insns, there's both a 32-bit ABI (sparc) and a 64-bit one (sparcv9/sparc64). On MIPS (at least IRIX and obviously Linux/MIPS as well), you have two 32-bit ABIs (O32 and N32) and one 64-bit one (N64); on other systems there's also O64. That's right, but for the sake of argument I would say that o64 is unimportant as it is not supported by the Linux kernel. It again comes down to what GOARCH is supposed to mean: an ABI, or what else? I would say it is only useful to distinguish the various ABIs. That is the only thing any program really sees. The output of 'uname -m' is an almost completely useless piece of information, so I don't think assigning the value of GOARCH to it makes much sense. Since libgo doesn't even currently build under linux-mips*, we could change the values of GOARCH generated for mips without causing regressions. I would suggest:

GOARCH=mips      # o32
GOARCH=mips64n32 # Would you believe n32?
GOARCH=mips64n64 # ...n64

David Daney
Re: libgo multilib issues.
On 01/27/2011 01:02 PM, Paul Koning wrote: On Jan 27, 2011, at 4:00 PM, Ian Lance Taylor wrote: Rainer Orthr...@cebitec.uni-bielefeld.de writes: Ian Lance Taylori...@google.com writes: I guess ARCH == mips64 is going to be appropriate for any 64-bit MIPS target. If you need a different syscall_linux_${GOARCH} file for different mips64 targets, then I think we're going to need to test some conditional in libgo/Makefile.am to add the file to build. E.g., look at syscall_filesize_file. This is the same difference as between sparc and sparc64/sparcv9: while all recent SPARC CPUs are capable of executing 64-bit insns, there's both a 32-bit ABI (sparc) and a 64-bit one (sparcv9/sparc64). On MIPS (at least IRIX and obviously Linux/MIPS as well), you have two 32-bit ABIs (O32 and N32) and one 64-bit one (N64); on other systems there's also O64. It again comes down to what GOARCH is supposed to mean: an ABI, or what else? That's a good point. I guess it has to mean an ABI. So we should be using different values for the different MIPS ABIs. What about all the other things you can do to MIPS with multilib? Different ISAs? Soft float vs. hard float? The current default GCC behavior under linux-mips64* is to ignore all those details. Based on my almost complete lack of libgo knowledge, I think the selection of a specific syscall_linux_${GOARCH}.go file would not care about soft/hard float issues. David Daney
Re: RFC: Add 32bit x86-64 support to binutils
On 12/30/2010 10:59 AM, H.J. Lu wrote: On Thu, Dec 30, 2010 at 10:42 AM, Joseph S. Myers jos...@codesourcery.com wrote: On Thu, 30 Dec 2010, H.J. Lu wrote: Hi, This patch adds 32bit x86-64 support to binutils. Support in compiler, library and OS is required to use it. It can be used to implement the new 32bit OS for x86-64. Any comments? Do you have a public psABI document? I think the psABI at the ELF level needs to come before the binutils bits, at the function call level needs to come before the GCC bits, etc. The psABI is the same as x86-64 psABI, except for 32bit ELF instead of 64bit. You appear (judging by the support for Linux targets in the binutils patch) to envisage Linux support for this ABI. How do you plan to avoid I enabled it for Linux so that I can run ILP32 binutils tests on Linux/x86-64. the problems that have plagued the MIPS n32 syscall ABI, which seems like a similar case? Can you describe MIPS n32 problems? I can. As Joseph indicated, any syscall that passes data in memory (ioctl, {set,get}sockopt, etc) potentially must have a translation done between kernel and user ABIs. Currently this is done in kernel/compat.c fs/compat_binfmt_elf.c and fs/compat_ioctl.c as well as a bunch of architecture specific ad hoc code. Look at the change history for those files to see that there is an unending flow of bugs being fixed due to this ABI mismatch. Even today there are many obscure ioctls that don't work on MIPS n32. Most of the code works most of the time, but then someone tries something new, and BAM! ABI mismatch hits anew. My suggestion: Since people already spend a great deal of effort maintaining the existing i386 compatible Linux syscall infrastructure, make your new 32-bit x86-64 Linux syscall ABI identical to the existing i386 syscall ABI. This means that the psABI must use the same size and alignment rules for in-memory structures as the i386 does. David Daney
Re: RFC: Add 32bit x86-64 support to binutils
On 12/30/2010 12:12 PM, H. Peter Anvin wrote: On 12/30/2010 11:34 AM, David Daney wrote: My suggestion: Since people already spend a great deal of effort maintaining the existing i386 compatible Linux syscall infrastructure, make your new 32-bit x86-64 Linux syscall ABI identical to the existing i386 syscall ABI. This means that the psABI must use the same size and alignment rules for in-memory structures as the i386 does. No, it doesn't. It just means it needs to do so *for the types used by the kernel*. The kernel uses types like __u64, which would indeed have to be declared aligned(4). Some legacy interfaces don't use fixed width types. There almost certainly are some ioctls that don't use your fancy __u64. Then there are things like ppoll() that take a pointer to:

struct timespec {
	long	tv_sec;		/* seconds */
	long	tv_nsec;	/* nanoseconds */
};

There are no fields in there that are controlled by __u64 either. Admittedly this case might not differ between the two 32-bit ABIs, but it shows that __u64/__u32 are not universally used in the Linux syscall ABIs. If you are happy with potential memory layout differences between the two 32-bit ABIs, then don't specify that they are the same. But don't claim that use of __u64/__u32 covers all cases. David Daney
Re: RFC: Add 32bit x86-64 support to binutils
On 12/30/2010 12:28 PM, H.J. Lu wrote: On Thu, Dec 30, 2010 at 12:27 PM, David Daneydda...@caviumnetworks.com wrote: On 12/30/2010 12:12 PM, H. Peter Anvin wrote: On 12/30/2010 11:34 AM, David Daney wrote: My suggestion: Since people already spend a great deal of effort maintaining the existing i386 compatible Linux syscall infrastructure, make your new 32-bit x86-64 Linux syscall ABI identical to the existing i386 syscall ABI. This means that the psABI must use the same size and alignment rules for in-memory structures as the i386 does. No, it doesn't. It just means it needs to do so *for the types used by the kernel*. The kernel uses types like __u64, which would indeed have to be declared aligned(4). Some legacy interfaces don't use fixed width types. There almost certainly are some ioctls that don't use your fancy __u64. Then there are things like ppoll() that take a pointer to:

struct timespec {
	long	tv_sec;		/* seconds */
	long	tv_nsec;	/* nanoseconds */
};

There are no fields in there that are controlled by __u64 either. Admittedly this case might not differ between the two 32-bit ABIs, but it shows that __u64/__u32 are not universally used in the Linux syscall ABIs. If you are happy with potential memory layout differences between the two 32-bit ABIs, then don't specify that they are the same. But don't claim that use of __u64/__u32 covers all cases. We can put a syscall wrapper to translate it. Of course you can. But you are starting with a blank slate; you should be asking yourself why you would want to. What is your objective here? Is it: 1) Fastest time to a relatively bug-free useful system? or 2) Purity of ABI design? What would the performance penalty be for identical structure layout between the two 32-bit ABIs? Really I don't care one way or the other. The necessity of syscall wrappers is actually probably beneficial to me. It will create a greater future employment demand for people with the necessary skills to write them. David Daney
Re: PATCH RFA: Do not build java by default
On 11/02/2010 03:48 AM, Paolo Bonzini wrote: On 11/01/2010 11:47 AM, Joern Rennecke wrote: Quoting Geert Bosch bo...@adacore.com: On Nov 1, 2010, at 00:30, Joern Rennecke wrote: But to get that coverage, testers will need to have gnat installed. Will that become a requirement for middle-end patch regression testing? No, the language will only be built if a suitable bootstrap compiler is present. I know that. My question was aimed at soliciting opinions on patch submission policy in the case that libjava build testing is dropped from standard bootstrap tests. You already need to have ecj installed, so it's one dependency more and one less. I may be mistaken, but I don't think that is true. Building and testing of libgcj *does not* require ecj. David Daney
MIPS64 GCC not building at r165246
Hi Richard, I was just trying to build the trunk GCC at r165246 Configured thusly: $ ../trunk/configure --target=mips64-linux --with-sysroot=/home/daney/mips64-linux --prefix=/home/daney/mips64-linux --with-arch=mips64r2 --enable-languages=c --disable-libmudflap Back on: r162086 | rsandifo | 2010-07-12 11:53:01 -0700 (Mon, 12 Jul 2010) | 36 lines gcc/ * doc/tm.texi.in (SWITCHABLE_TARGET): Document. [...] You added SWITCHABLE_TARGET, but it seems to break building now in expr.c gcc -c -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -Wold-style-definition -Wc++-compat -fno-common -DHAVE_CONFIG_H -I. -I. -I../../trunk/gcc -I../../trunk/gcc/. -I../../trunk/gcc/../include -I../../trunk/gcc/../libcpp/include -I../../trunk/gcc/../libdecnumber -I../../trunk/gcc/../libdecnumber/dpd -I../libdecnumber -I/usr/include/libelf ../../trunk/gcc/expr.c -o expr.o

In file included from ../../trunk/gcc/expr.c:57:
../../trunk/gcc/target-globals.h:24: error: expected identifier or '(' before '&' token
../../trunk/gcc/target-globals.h: In function 'restore_target_globals':
../../trunk/gcc/target-globals.h:63: error: lvalue required as left operand of assignment

This seems to be caused by:

flags.h:243
#define this_target_flag_state (&default_target_flag_state)

target-globals.h:24
extern struct target_flag_state *this_target_flag_state;

Which when preprocessed we get:

expr.i:???
extern struct target_flag_state *(&default_target_flag_state);

Which evidently is not valid C. I am not sure how to go about fixing this. Do you have any ideas? David Daney
Re: Bugzilla outage Friday, September 17, 18:00GMT-21:00GMT
On 09/15/2010 01:44 PM, Ian Lance Taylor wrote: Thanks to sterling work by Frédéric Buclin, the gcc.gnu.org overseers group is preparing to upgrade gcc.gnu.org bugzilla to a current version. We will be taking bugzilla offline on Friday, September 17, for three hours starting at 18:00GMT, 11:00PDT to do a final database upgrade and conversion to the new system. Please let us know if this is an intolerable inconvenience. A demonstration version of the new system may be found at http://gcc.gnu.org/bugzilla-test/ . Ian A quick question: What will happen to svn commits tagged with bug numbers during this outage? Will bugzilla eventually end up with the commit comments we have all come to know and love? David Daney
Re: RFH: optabs code in the java front end
I don't know the answers to your specific questions, but I do know that java questions might get a faster response if cross-posted to java@ (now CCed). David Daney On 09/10/2010 03:50 PM, Steven Bosscher wrote: Hello, There is just one front-end file left that still has to #undef IN_GCC_FRONTEND, allowing the front end to include RTL headers. The one remaining file is java/builtins.c. In java/builtins.c there are (what appear to be) functions that generate code for Java builtins, and these functions look at optabs to decide what to emit. For example:

static tree
compareAndSwapInt_builtin (tree method_return_type ATTRIBUTE_UNUSED,
                           tree orig_call)
{
  enum machine_mode mode = TYPE_MODE (int_type_node);
  if (direct_optab_handler (sync_compare_and_swap_optab, mode)
      != CODE_FOR_nothing
      || flag_use_atomic_builtins)
    {
      tree addr, stmt;

As a result, java/builtins.c has to include most RTL-specific headers:

/* FIXME: All these headers are necessary for sync_compare_and_swap.
   Front ends should never have to look at that. */
#include "rtl.h"
#include "insn-codes.h"
#include "expr.h"
#include "optabs.h"

I would really like to see this go away, and I would work on it if I had any idea what to do. I thought that the builtins java/builtins.c adds here are generic GCC builtins. For example there is a definition of BUILT_IN_BOOL_COMPARE_AND_SWAP_4 in sync-builtins.def, so what is the effect of the define_builtin(BUILT_IN_BOOL_COMPARE_AND_SWAP_4,...) code in java/builtins.c:initialize_builtins? Does this re-define the builtin? I don't understand how the front-end definition of the builtin and the one from sync-builtins.def work together. I could use a little help here... Thoughts? Ciao! Steven
Re: How is the definition of stack canary on MIPS arch?
On 08/30/2010 08:36 PM, Adam Jiang wrote: On Mon, Aug 30, 2010 at 10:43:44AM -0700, David Daney wrote: On 08/30/2010 09:46 AM, Richard Henderson wrote: On 08/30/2010 03:45 AM, Adam Jiang wrote: When I read the source of the Linux kernel, it was said that the stack canary for implementing the stack protector is defined as an offset to %gs on the x86 architecture. How is the stack canary defined on MIPS? It's not implemented for MIPS. For the Linux kernel, the MIPS stack canary would be a constant offset (that depends on PAGE_SIZE) from register $28. David Daney Thanks, David and Richard. Is there code, doc or anything on this topic I can refer to? Is it defined in gcc internally or in the kernel source itself? Would you please redirect me to the right place? I am unaware of any documents. The MIPS Linux kernel ABI is not really documented anywhere; one learns it by studying and hacking on the source code. 32-bit kernels use a variant of the o32 ABI, 64-bit kernels use a variant of n64. Both dedicate register $28 as a pointer to the thread area of which the stack is a part. The form of any stack canary for the MIPS Linux kernel will be determined by whoever implements it. I have done some research by googling. Here is what I've gotten. http://www.trl.ibm.com/projects/security/ssp/main.html http://www.trl.ibm.com/projects/security/ssp/ http://lxr.linux.no/linux+v2.6.35/arch/x86/include/asm/stackprotector.h However, it seems there are no documents about how this is done on MIPS. Do I miss something? As RTH said, "It's not implemented for MIPS.", so there was really nothing to miss. David Daney
Re: How is the definition of stack canary on MIPS arch?
On 08/30/2010 09:46 AM, Richard Henderson wrote: On 08/30/2010 03:45 AM, Adam Jiang wrote: When I read the source of the Linux kernel, it was said that the stack canary for implementing the stack protector is defined as an offset to %gs on the x86 architecture. How is the stack canary defined on MIPS? It's not implemented for MIPS. For the Linux kernel, the MIPS stack canary would be a constant offset (that depends on PAGE_SIZE) from register $28. David Daney
Re: Bizarre GCC problem - how do I debug it?
On 08/06/2010 10:19 AM, Bruce Korb wrote: The problem seems to be that GDB thinks all the code belongs to a single line of text. At first, it was a file of mine, so I presumed I had done something strange and passed it off. I needed to do some more debugging again and my -g -O0 output still said all code belonged to that one line. So, I made a .i file and compiled that. Different file, but the same problem. The .i file contains the correct preprocessor directives: # 309 "wrapup.c" static void done_check(void) { but under gdb: (gdb) b done_check Breakpoint 5 at 0x40af44: file /usr/include/gmp.h, line 1661. The break point *is* on the entry to done_check, but the source code displayed is line 1661 of gmp.h. Not helpful. Further, I cannot set break points on line numbers because all code belongs to the one line in gmp.h. Yes, for now I can debug in assembly code, but it isn't very easy. $ gcc --version gcc (SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292] Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. I've googled for: gcc|gdb wrong source file which only yields how to examine source files in gdb. Which version of GDB? IIRC with GCC-4.5 you need a very new version of GDB. This page: http://gcc.gnu.org/gcc-4.5/changes.html indicates that GDB 7.0 or later would be a good candidate. David Daney.
Re: Bizarre GCC problem - how do I debug it?
On 08/06/2010 10:51 AM, Bruce Korb wrote: On 08/06/10 10:24, David Daney wrote: On 08/06/2010 10:19 AM, Bruce Korb wrote: The problem seems to be that GDB thinks all the code belongs to a single line of text. At first, it was a file of mine, so I presumed I had done something strange and passed it off. I needed to do some more debugging again and my -g -O0 output still said all code belonged to that one line. So, I made a .i file and compiled that. Different file, but the same problem. The .i file contains the correct preprocessor directives: # 309 "wrapup.c" static void done_check(void) { but under gdb: (gdb) b done_check Breakpoint 5 at 0x40af44: file /usr/include/gmp.h, line 1661. The break point *is* on the entry to done_check, but the source code displayed is line 1661 of gmp.h. Not helpful. Further, I cannot set break points on line numbers because all code belongs to the one line in gmp.h. Yes, for now I can debug in assembly code, but it isn't very easy. $ gcc --version gcc (SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292] Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. I've googled for: gcc|gdb wrong source file which only yields how to examine source files in gdb. Which version of GDB? IIRC with GCC-4.5 you need a very new version of GDB. This page: http://gcc.gnu.org/gcc-4.5/changes.html indicates that GDB 7.0 or later would be good candidates. That seems to work. There are one or two or three bugs then. Either gdb needs to recognize an out of sync object code It cannot do this as it was released before GCC-4.5. , or else gcc needs to produce object code that forces gdb to object in a way more obvious than just deciding upon the wrong file and line -- or both. I simply installed the latest openSuSE and got whatever was supplied.
It isn't reasonable to expect folks to go traipsing through upstream web sites looking for changes.html files. And, of course, the insight stuff needs to incorporate the latest and greatest gdb. (I don't use ddd because it is _completely_ non-intuitive.) My understanding is that whoever packages GCC and GDB for a particular distribution is responsible for making sure that they work together. In your case it looks like that didn't happen. David Daney
Re: Source for current ECJ not available on sourceware.org
On 06/28/2010 01:11 PM, Brett Neumeier wrote: The GCC build process uses ecj, which is obtained from sourceware.org using contrib/download_ecj. The current latest version of ecj, used for the GCC build, is ecj 4.5. The previous version of ecj was 4.3, the source for which can be found at the same location on sourceware.org. But the FTP site doesn't contain the source for ecj 4.5. Are there any plans to publish the source code along with the binary jar file? In the meantime, where can I find the source code for the current ecj, as needed by gcc? Is there a source repository I can get to? The source is available somewhere, I have seen it. j...@gcc.gnu.org is the best place to ask java questions. Let's see what they say over there. David Daney
Re: Passing options down to assembler and linker
On 04/23/2010 10:55 AM, Jean Christophe Beyler wrote: Dear all, I've been working on a side port for an architectural variant, and therefore there are a few differences in the assembler and linker to be handled. I know we can pass -Wl,option, -Wa,option from gcc down to as and ld; however, if I have to write: gcc -mArch2 -Wl,--arch2 -Wa,--arch2 hello.c it gets a bit redundant. I must be blind because I can't seem to find how to do it internally. You could try adjusting your ASM_SPEC and LINK_SPEC; the spec language should allow you to automate passing your options to the assembler and linker. David Daney
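A sketch of what that might look like in the target's header file. ASM_SPEC and LINK_SPEC are real GCC target macros and %{opt:...} is real spec syntax, but the -mArch2 option and --arch2 flag are taken from the question and purely illustrative:

```c
/* In the port's target .h file: whenever -mArch2 is given to the
   driver, pass --arch2 along to the assembler and linker so the user
   no longer has to spell out -Wa,--arch2 -Wl,--arch2 by hand.  */
#undef  ASM_SPEC
#define ASM_SPEC  "%{mArch2:--arch2}"

#undef  LINK_SPEC
#define LINK_SPEC "%{mArch2:--arch2}"
```

With that in place, `gcc -mArch2 hello.c` alone produces the same assembler and linker invocations as the redundant command line in the question.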
Re: Why not contribute? (to GCC)
On 04/23/2010 11:39 AM, Manuel López-Ibáñez wrote: This seems to be the question running around the blogosphere for several projects. And I would like to ask all people that read this list but hardly say or do anything. What reasons keep you from contributing to GCC? I am going to answer why I think it is, even though I like to think that I do do something. GCC has high standards, so anybody attempting to make a contribution for the first time will likely be requested to go through several revisions of a patch before it can be accepted. After having spent considerable effort developing a patch, there can be a sense that the merit of a patch is somehow related to the amount of effort expended creating it. Some people don't have a personality well suited to accepting criticism of something into which they have put a lot of effort. The result is that in a small number of cases, people Bad Mouth GCC saying things like: The GCC maintainers are a clique of elitist idiots that refuse to accept good work from outsiders. Personally I don't agree with such a view, and I don't think there is much that can be done about it. There will always be Vocal Discontents, and trying to accommodate all of them would surely be detrimental to GCC. I think that some potential contributors are discouraged from contributing because they have been frightened away (by the Vocal Discontents mentioned above) before they can get started. David Daney
Re: Change x86 default arch for 4.5?
On 02/18/2010 03:30 PM, Joe Buck wrote: On Thu, Feb 18, 2010 at 02:09:14PM -0800, Jason Merrill wrote: I periodically get bitten by bug 34115: a compiler configured without --with-arch on i686-pc-linux-gnu doesn't support atomics. I think we would only need to bump the default to i486 to get atomic support. Can we reconsider the default for 4.5? Is anyone still manufacturing x86 CPUs that don't support the atomic instructions? Should it just be a question of 'manufacturing'? Or should 'using' be a criterion for any decision? Not that I disagree with Jason's suggestion, it is probably the right choice. David Daney
Re: GCC-How does the coding style affect the insv pattern recognization?
fanqifei wrote: 2010/1/13 Bingfeng Mei b...@broadcom.com: Your instruction is likely too specific to be picked up by GCC. You may use an intrinsic for it. Bingfeng. But insv is a standard pattern name, and the semantics of the expression x= (x0xFF00) | ((i16)0x00FF); is exactly what insv can do. I also tried a MIPS gcc cross compiler, and ins is also not generated. You must be doing something wrong:

$ cat fanqifei.c
struct test_foo {
	unsigned int a:18;
	unsigned int b:2;
	unsigned int c:12;
};
struct test_foo x;
unsigned int foo()
{
	unsigned int a = x.b;
	x.b = 2;
	return a;
}
$ mips64-linux-gcc -O3 -march=mips32r2 -mabi=32 -mno-abicalls -S fanqifei.c
$ cat fanqifei.s
	.file	1 "fanqifei.c"
	.section .mdebug.abi32
	.previous
	.gnu_attribute 4, 1
	.text
	.align	2
	.globl	foo
	.set	nomips16
	.ent	foo
	.type	foo, @function
foo:
	.frame	$sp,0,$31		# vars= 0, regs= 0/0, args= 0, gp= 0
	.mask	0x00000000,0
	.fmask	0x00000000,0
	.set	noreorder
	.set	nomacro
	lui	$3,%hi(x)
	lw	$2,%lo(x)($3)
	li	$5,2			# 0x2
	move	$4,$2
	ins	$4,$5,12,2		# Here it is.
	sw	$4,%lo(x)($3)
	j	$31
	ext	$2,$2,12,2
	.set	macro
	.set	reorder
	.end	foo
	.size	foo, .-foo
	.comm	x,4,4
	.ident	"GCC: (GNU) 4.5.0 20091223 (experimental) [trunk revision 155414]"
Re: [PATCH] ARM: Convert BUG() to use unreachable()
Jamie Lokier wrote: Uwe Kleine-König wrote: Use the new unreachable() macro instead of for(;;); *(int *)0 = 0; /* Avoid "noreturn function does return" */ - for (;;); + unreachable(); Will GCC-4.5 remove (optimise away) the *(int *)0 = 0 because it knows the branch of the code leading to unreachable can never be reached? I don't know the definitive answer, so I am sending to g...@... FYI: #define unreachable() __builtin_unreachable() If GCC-4.5 does not, are you sure a future version of GCC will never remove it? In other words, is __builtin_unreachable() _defined_ in such a way that it cannot remove the previous assignment? We have seen problems with GCC optimising away important tests for NULL pointers in the kernel, due to similar propagation of impossible-to-occur conditions, so it's worth checking with GCC people what the effect of this one would be. In C, there is a general theoretical problem with back-propagation of optimisations from code with undefined behaviour. In the case of __builtin_unreachable(), it would depend on all sorts of unclearly defined semantics whether it can remove a preceding *(int *)0 = 0. I'd strongly suggest asking on the GCC list. (I'd have mentioned this earlier, if I'd known about the patch for other architectures). The documentation for __builtin_unreachable() only says the program is undefined if control flow reaches it. In other words, it does not say what effect it can have on previous instructions, and I think it's quite likely that it has not been analysed in a case like this. One thing that would give me a lot more confidence, because the GCC documentation does mention asm(), is this:

	*(int *)0 = 0;
	/* Ensure unreachableness optimisations cannot propagate back. */
	__asm__ volatile("");
	/* Avoid "noreturn function does return" */
	unreachable();

-- Jamie
Re: [PATCH] ARM: Convert BUG() to use unreachable()
Joe Buck wrote: On Thu, Dec 17, 2009 at 11:06:13AM -0800, Russell King - ARM Linux wrote: On Thu, Dec 17, 2009 at 10:35:17AM -0800, Joe Buck wrote: Besides, didn't I see a whole bunch of kernel security patches related to null pointer dereferences lately? If page 0 can be mapped, you suddenly won't get your trap. Page 0 can not be mapped on ARM kernels since the late 1990s, and this protection is independent of the generic kernel. Mileage may vary on other architectures, but that's not a concern here. I don't understand, though, why you would want to implement a generally useful facility (make the kernel trap so you can do a post-mortem analysis) in a way that's only safe for the ARM port. Each Linux kernel architecture has in its architecture-specific bug.h an implementation that is deemed by the architecture maintainers to work. As far as I know, few if any of these use __builtin_trap(). Some could be converted to __builtin_trap(), others cannot (x86 for example). If we enhanced __builtin_trap() to take an argument for the trap code, MIPS could be converted. But as it stands now __builtin_trap() is not very useful. As more architectures start adding funky tables that get generated by the inline asm (as in x86), __builtin_trap() becomes less useful. David Daney
Re: Bad mailing list index?
H.J. Lu wrote: Hi, When I visit: http://gcc.gnu.org/ml/gcc-bugs/ http://gcc.gnu.org/ml/gcc-cvs/ at Wed Dec 9 10:41:43 PST 2009, I didn't see December, 2009. It was there yesterday. Has anyone else seen it? You may need to clear browser cache first. You just said how to work around the problem. Just reloading the page works as well. I don't know what it would take to put an expiration date on those pages that are updated monthly so that the reload wouldn't be necessary. David Daney
Re: BUG: GCC-4.4.x changes the function frame on some functions
Linus Torvalds wrote: On Thu, 19 Nov 2009, Linus Torvalds wrote: I bet other people than just the kernel use the mcount hook for subtler things than just doing profiles. And even if they don't, the quoted code generation is just crazy _crap_. For the kernel, if the only case is that timer_stat.c thing that Thomas pointed at, I guess we can at least work around it with something like the appended. The kernel code is certainly ugly too, no question about that. It's just that we'd like to be able to depend on mcount code generation not being insane even in the presence of ugly code.. The alternative would be to have some warning when this happens, so that we can at least see when mcount won't work reliably. For the MIPS port of GCC and Linux I recently added the -mmcount-ra-address switch. It causes the location of the return address (on the stack) to be passed to mcount in a scratch register. Perhaps something similar could be done for x86. It would make this patching of the return location more reliable at the expense of more code at the mcount invocation site. For the MIPS case the code size doesn't increase, as it is done in the delay slot of the call instruction, which would otherwise be a nop. David Daney
Re: [JAVA,libtool] Big libjava is biiiig.
Tom Tromey wrote: Dave == Dave Korn dave.korn.cyg...@googlemail.com writes: Dave There are a couple of regressions to solve first, but it appears Dave that I've more-or-less cracked it. Full details are written up Dave here: Dave http://gcc.gnu.org/wiki/Internal_dependencies_of_libgcj One thing worth considering is that you may be able to shrink things even more by splitting up some existing objects. I didn't see AWT in the cluster 48 list, which seems weird. I would expect it to be in the core due to AWTPermission. I'm curious why sun.awt and swing ended up in there. I would expect that with minor tweaks you could probably get AWT, the peers, and Swing to drop out. That was true for AWT, at least, last time I looked at this (years ago) -- but I needed a special case to keep AWTPermission in. I have patches that do a lot of the things Tom mentions. They are only lightly tested, but they could be a starting point. I will dig them out and post them this weekend. David Daney.
Re: GCC and boehm-gc
NightStrike wrote: Given the recent issues with libffi being so drastically out of synch with upstream, I was curious about boehm-gc and how that is handled. In getting gcj to work on Win64, the next step is boehm-gc now that libffi works just fine. However, the garbage collector is in terrible shape and will need a bit of work. Do we send those fixes here to GCC, or to some other project? Who handles it? How is the synching done compared to other external projects? Your analysis of the situation is essentially correct. Hans (now CCed) is good about merging changes to the upstream sources, but we haven't updated GCC/libgcj's copy in quite some time. A properly motivated person would have to import a newer version of the GC checking that all GCC local changes were either already merged, or if not port them to the new GC (those that are not upstream should then be evaluated to see if they should be). David Daney
Re: GCC and boehm-gc
Andrew Haley wrote: NightStrike wrote: On Thu, Jun 18, 2009 at 12:27 PM, David Daneydda...@caviumnetworks.com wrote: NightStrike wrote: Given the recent issues with libffi being so drastically out of synch with upstream, I was curious about boehm-gc and how that is handled. In getting gcj to work on Win64, the next step is boehm-gc now that libffi works just fine. However, the garbage collector is in terrible shape and will need a bit of work. Do we send those fixes here to GCC, or to some other project? Who handles it? How is the synching done compared to other external projects? Your analysis of the situation is essentially correct. Hans (now CCed) is good about merging changes to the upstream sources, but we haven't updated GCC/libgcj's copy in quite some time. A properly motivated person would have to import a newer version of the GC checking that all GCC local changes were either already merged, or if not port them to the new GC (those that are not upstream should then be evaluated to see if they should be). So it seems that boehm-gc is in the exact state as libffi. No, it's not. The problem with libffi is that it was updated in gcc and upstream; that is much less of a problem with boehm-gc. It may be less of a problem, but running svn log boehm-gc shows several non-configure changes since Bryce imported version 6.6 in r110222. David Daney
Re: GCC and boehm-gc
NightStrike wrote: On Thu, Jun 18, 2009 at 12:58 PM, Andrew Haleya...@redhat.com wrote: NightStrike wrote: On Thu, Jun 18, 2009 at 12:27 PM, David Daneydda...@caviumnetworks.com wrote: NightStrike wrote: Given the recent issues with libffi being so drastically out of synch with upstream, I was curious about boehm-gc and how that is handled. In getting gcj to work on Win64, the next step is boehm-gc now that libffi works just fine. However, the garbage collector is in terrible shape and will need a bit of work. Do we send those fixes here to GCC, or to some other project? Who handles it? How is the synching done compared to other external projects? Your analysis of the situation is essentially correct. Hans (now CCed) is good about merging changes to the upstream sources, but we haven't updated GCC/libgcj's copy in quite some time. A properly motivated person would have to import a newer version of the GC checking that all GCC local changes were either already merged, or if not port them to the new GC (those that are not upstream should then be evaluated to see if they should be). So it seems that boehm-gc is in the exact state as libffi. No, it's not. The problem with libffi is that it was updated in gcc and upstream; that is much less of a problem with boehm-gc. That's what David just described -- that there are both GCC local changes and upstream changes. Regardless, someone with the knowledge and background needs to do this merge. Or someone willing to acquire such knowledge and background by attempting to do the merge and presenting the results of their efforts for review. David Daney
Re: [JAVA,libtool] Big libjava is biiiig.
Ralf Wildenhues wrote: Hello Dave, * Dave Korn wrote on Wed, May 06, 2009 at 06:09:05PM CEST: [...] 1) Would this be a reasonable approach, specifically i) in adding a configure option to cause sublibraries to be built, and ii) in using gmake's $(filter) construct to crudely subdivide the libraries like this? You are aware of the fact that it is part of the ABI in which of the linked DLLs a given symbol was found, and any shuffling of that later will break that ABI? You also have to ensure that the sub libraries are self-contained, or at least their interdependencies form a directed non-cyclic graph (or you will need very ugly hacks on w32). Unfortunately it may not be a simple task to find a suitably large set of packages that satisfy this 'directed non-cyclic graph' criterion. I might suggest looking at grouping a bunch of various protocol handlers together that are all accessed via mechanisms like the URLConnection, and the various crypto implementations. David Daney
Re: RFC: case insensitive for #include
H.J. Lu wrote: Hi, I got a request to try "FOO.H" if "foo.h" doesn't exist when dealing with #include "foo.h". Any comments? How about "Foo.h", "FOo.H", etc.? I have found as many errors with mixed-case screw-ups as with the 'single case' variety you mention. Would you want to make it fully case-insensitive, or only try the lower-upper case? David Daney
Fixing the __sync_nand mess for MIPS.
Richard and others, I have a (still broken) patch that tries to fix the fallout from the change in semantics of the __sync_nand family of builtins that occurred recently on the trunk. If someone else is working on this and is ready to commit, I would abandon my patch; otherwise we will press on with it. The main point of this message is to try to avoid duplication of effort. Thanks, David Daney
Re: Fixing the __sync_nand mess for MIPS.
Richard Sandiford wrote: Hi David, David Daney [EMAIL PROTECTED] writes: Richard and others, I have a (still broken) patch that tries to fix the fallout from the change in semantics of the __sync_nand family of builtins that occurred recently on the trunk. If someone else is working on this and is ready to commit, I would abandon my patch; otherwise we will press on with it. The main point of this message is to try to avoid duplication of effort. Thanks for asking. TBH, I was getting to the same point: I'd done a quick local hack, realised it wasn't enough, and was gearing up to ask whether it was worth continuing or not. I then got sidetracked by the other testsuite stuff I'm doing. So if you're happy to press ahead with your patch, that'd be great from my POV, thanks. OK, I will work on mine more. Really, how hard could it be? I am surprised my first attempt failed, as usually I never have bugs :-) David Daney
Re: MIPS -mplt option in N32 abi system
Zhang Le wrote: Hi there, I have just tried gcc 4.4 svn trunk on a MIPS N32 system. But I found -mplt is practically not usable, because -mno-shared is not used when generating non-PIC code. I dug into the code and found the cause is in gcc/config/mips/linux64.h. Unlike linux.h under the same directory, DRIVER_SELF_SPECS in linux64.h has no LINUX_DRIVER_SELF_SPECS. Is it left out intentionally? However it seems to me that -mplt works on N32 system. So what about the patch attached? ok to apply? [...] BASE_DRIVER_SELF_SPECS \ +LINUX_DRIVER_SELF_SPECS \ %{!EB:%{!EL:%(endian_spec)}} \ %{!mabi=*: -mabi=n32} You are missing a comma there between BASE_DRIVER_SELF_SPECS and LINUX_DRIVER_SELF_SPECS. Without the comma, I am told that gcc.target/mips/pr35802.c FAILs. Other than that (and some formatting) this patch is equivalent to: http://gcc.gnu.org/ml/gcc-patches/2008-12/msg00033.html David Daney
Re: MIPS -mplt option in N32 abi system
Zhang Le wrote: On 10:33 Mon 01 Dec , David Daney wrote: Zhang Le wrote: BASE_DRIVER_SELF_SPECS \ +LINUX_DRIVER_SELF_SPECS \ %{!EB:%{!EL:%(endian_spec)}} \ %{!mabi=*: -mabi=n32} You are missing a comma there between BASE_DRIVER_SELF_SPECS and LINUX_DRIVER_SELF_SPECS. Without the comma, I am told that gcc.target/mips/pr35802.c FAILs. Other than that (and some formatting) this patch is equivalent to: http://gcc.gnu.org/ml/gcc-patches/2008-12/msg00033.html Thanks! I will pay much closer attention to the gcc-patches list in the future. In this case, it may not have helped much. I only beat you by about an hour. David Daney
Re: [PATCH] MIPS: Make BUG() __noreturn.
Geert Uytterhoeven wrote: On Fri, 21 Nov 2008, Alan Cox wrote: On Thu, 20 Nov 2008 17:26:36 -0800 David Daney [EMAIL PROTECTED] wrote: MIPS: Make BUG() __noreturn. Often we do things like put BUG() in the default clause of a case statement. Since it was not declared __noreturn, this could sometimes lead to bogus compiler warnings that variables were used uninitialized. There is a small problem in that we have to put a magic while(1); loop to fool GCC into really thinking it is noreturn. That sounds like your __noreturn macro is wrong. Try using __attribute__((__noreturn__)); if that works, then fix up the __noreturn definitions for the MIPS and gcc you have. Nope, gcc is too smart:

$ cat a.c
int f(void) __attribute__((__noreturn__));
int f(void)
{
}
$ gcc -c -Wall a.c
a.c: In function 'f':
a.c:6: warning: `noreturn' function does return
$

That's right. I was discussing this issue with my colleague Adam Nemet, and we came up with a couple of options:

1) Enhance the __builtin_trap() function so that we can specify the break code that is emitted. This would allow us to do something like:

static inline void __attribute__((noreturn)) BUG()
{
	__builtin_trap(0x200);
}

2) Create a new builtin '__builtin_noreturn()' that expands to nothing but has no CFG edges leaving it, which would allow:

static inline void __attribute__((noreturn)) BUG()
{
	__asm__ __volatile__("break %0" : : "i" (0x200));
	__builtin_noreturn();
}

David Daney
Re: [PATCH] MIPS: Make BUG() __noreturn.
Andrew Morton wrote: Yup, this change will fix some compile warnings which will never be fixed in any other way for mips.

+static inline void __noreturn BUG(void)
+{
+	__asm__ __volatile__("break %0" : : "i" (BRK_BUG));
+	/* Fool GCC into thinking the function doesn't return. */
+	while (1)
+		;
+}

This kind of sucks, doesn't it? It adds instructions into the kernel text, very frequently on fast paths. Those instructions are never executed, and we're blowing away i-cache just to quash compiler warnings. For example, this:

--- a/arch/x86/include/asm/bug.h~a
+++ a/arch/x86/include/asm/bug.h
@@ -22,14 +22,12 @@
 do {								\
 	".popsection"						\
 	: : "i" (__FILE__), "i" (__LINE__),			\
 	    "i" (sizeof(struct bug_entry)));			\
-	for (;;) ;						\
 } while (0)
 #else
 #define BUG()							\
 do {								\
 	asm volatile("ud2");					\
-	for (;;) ;						\
 } while (0)
 #endif
_

reduces the size of i386 mm/vmalloc.o text by 56 bytes. I wonder if there is any clever way in which we can do this without introducing additional runtime cost. As I said in the other part of the thread, we are working on a GCC patch that adds a new built-in function '__builtin_noreturn()', which you could substitute for 'for(;;);' and which emits no instructions in this case. David Daney
Re: Copyright notices during assignment limbo
Joern Rennecke wrote: For code that we have written, what should the copyright notices read during the period where we have given the FSF a copyright assignment, but they haven't yet acknowledged that it is on file? Obviously when we wrote the code, we've put our own copyright notices on it, because we need to have a copyright in it before we can assign it to someone else. Eventually, for the code that we have assigned, they should be changed to say Copyright FSF. But at what point in time should the copyright notices actually be changed? IANAL, but... It cannot be committed to GCC until it contains an FSF copyright notice. Before it is committed it is your code; do whatever you want. David Daney
Re: java announce mailing list
On Mon, Sep 29, 2008 at 10:30 PM, NightStrike [EMAIL PROTECTED] wrote: The java-announce mailing list hasn't had a single message (according to the archive on gcc.gnu.org) since 2001. Is there something wrong with the archive, or is the list dead? We are very humble folk. Much has happened, but we don't like to make a big spectacle of it. I would recommend following [EMAIL PROTECTED] instead. David Daney
FAIL: gcc.target/mips/octeon-exts-2.c scan-assembler-times
Adam, As shown here: http://gcc.gnu.org/ml/gcc-testresults/2008-09/msg01775.html gcc.target/mips/octeon-exts-2.c is failing when configured --with-arch=sb1 Do you know if it is failing universally or only on non-octeon targets? David Daney
Re: improving testsuite runtime
Ben Elliston wrote: On Thu, 2008-09-18 at 10:44 -0600, Tom Tromey wrote: Ben So, I guess my question is: what now? What do people feel would be Ben required to make this usable? I assume that the most pressing thing Ben would be to have the build system fold the various .log and .sum files Ben together so that they look like they were run as a whole. Yeah, this seems necessary. Ideally the order ought to be stable, too. Do you think that the current order of .exps should be preserved in the resultant .sum and .logs? I guess some people and/or build farms actually use diff rather than compare_tests? That would be nice, but you could sort your FAILs if it changed and be able to compare the sorted lists. But stability within a given revision of the testsuite I think would be almost essential. David Daney
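The sort-the-FAILs idea above can be sketched with ordinary shell tools; the file names and test names here are made up for illustration:

```shell
# Two hypothetical .sum fragments whose .exp files ran in different orders.
printf 'FAIL: gcc.dg/b.c test2\nFAIL: gcc.dg/a.c test1\n' > old.sum
printf 'FAIL: gcc.dg/a.c test1\nFAIL: gcc.dg/c.c test3\n' > new.sum

# Sorting the FAIL lines makes plain diff usable even when the
# execution order of the .exps changed between runs.
grep '^FAIL' old.sum | sort > old-fails.txt
grep '^FAIL' new.sum | sort > new-fails.txt
diff old-fails.txt new-fails.txt || true   # differences are expected here
```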
New ICE on MIPS in haifa-sched.c when compiling __popcountsi2 from libgcc
Within the last two days my MIPS bootstraps are failing. http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37360 It worked back on: http://gcc.gnu.org/ml/gcc-testresults/2008-09/msg00118.html David Daney
Re: [PATCH] Use lwsync in PowerPC sync_* builtins
On Wed, Sep 3, 2008 at 6:09 PM, David Edelsohn [EMAIL PROTECTED] wrote: On Wed, Sep 3, 2008 at 6:53 PM, Anton Blanchard [EMAIL PROTECTED] wrote: The only thing lwsync won't order is a store followed by a load. Since the lwsync will always be paired with a store (the stwcx), we will order all accesses before it and provide a release barrier. Anton, My one other concern is developers using the builtins for applications on embedded PowerPC processors. lwsync will not order accesses to device memory space, AFAICT. I do not know if developers would rely on GCC builtins in that context and assume it implements the correct semantics. Otherwise, I agree that the memory barrier operations probably can use lwsync. Would it be possible to have a conservative default and use a more optimal form based on a specific CPU specified by -mcpu=? I was thinking of doing something similar on MIPS where there are similar issues. David Daney
Re: broken svn commit logs and how to fix them
Paul Koning wrote: I'm seeing messages on this list repeating over and over (several minutes apart, maybe as much as 15 minutes or so). I'm not sure if the are just messages from Manuel or also from others. Is it just me? It seems to be specific to this list... It seems that all the repeats are generated by gmail.com. You will note that NightStrike's messages are repeated as well. David Daney
Re: [PATCH]: GCC Scheduler support for R10000 on MIPS
Kumba wrote: Richard Sandiford wrote: OK otherwise. Do you have a copyright assignment on file? Nope. Is there something I need to fill out and e-mail to someone? Yes there is. I'm not sure if Richard can cause them to be sent to you, but certainly requesting copyright assignment documents on gcc@gcc.gnu.org would work. It can often take many weeks to get them processed, so starting as soon as possible would be a good idea. Do I need to put my name and the name of the author of the very original gcc-3.0 patch in this file as well? It would depend on if any of the original patch code remains. If so, probably a copyright assignment for the original author would be required as well (at least that is my understanding). David Daney
Re: Exception handling tables for function generated on the fly
Questions like this should probably go to [EMAIL PROTECTED], but... Tom Quarendon wrote: I'm porting some code that does a kind of JIT to translate a user script into a dynamically created function for execution, but am having trouble porting this to GCC and the way it implements exceptions. Let's say I've got:

int doPUT() { throw IOException; }
int doGET() { throw IOException; }

and I want to magic up a function by writing (Intel x86) instructions into memory that does the same as if I'd done:

int magic() { doPUT(); doGET(); return 0; }

You don't say how you get them into memory. Are you building a shared library and then loading it with dlopen()? I then want to call my magic function as in:

int main() {
	// magic up my function in memory containing calls to doGET and doPUT.
	try {
		// call my magic'd function
	} catch (IOException) {
		// Report the exception
	}
}

If I do this I get std::terminate called from __cxa_throw. Researching this, it seems that I somehow need to register some exception handling tables corresponding to the magic function to allow the exception to propagate through. I'd welcome any pointers to where I might be able to get some information on this. I've looked at the C++ ABI documentation, which helps a bit, and I've found some information on the format that the tables need to be in (and indeed I've looked at the assembler generated by the gcc compiler if I code up magic and compile it directly), but I don't yet see quite how to put it all together. If you pass -funwind-tables to gcc it will generate the necessary unwinding information. If you put the code in a shared library and dlopen() it, it should just work. If you are loading the code some other way, then you may have to call some of the __register_frame* family of functions (in libgcc), passing pointers to the appropriate .eh_frame sections of the generated code. I imagine that GCJ has to do this kind of thing? g++ as well. David Daney
Re: Exception handling tables for function generated on the fly
Dave Korn wrote: David Daney wrote on 12 August 2008 18:19: Questions like this should probably go to [EMAIL PROTECTED] Questions about deep compiler internals and EH ABIs? Seems a bit intense for the where's-the-any-key list to me... gcc@ is for questions about development of GCC. gcc-help@ is for everything else. . . . If you pass -funwind-tables to gcc it will generate the necessary unwinding information. . . . Yes. The OP's question is: "How do I generate .eh_frame data at runtime for an arbitrary function that has no throws and no catches but may call functions that throw." Which is why I recommended passing -funwind-tables, which does exactly that. David Daney
Re: Java Development with GCJ
On Sun, Aug 10, 2008 at 6:28 PM, Daniel B. Davis [EMAIL PROTECTED] wrote: Hello -- I am acting for a friend with a java application who needs to run it on a microcontroller. Though aware of GC, I only recently learned of GCJ. Accordingly, I am looking for developer resources using GCJ for microcontrollers. Anyone, manufacturer, developer, or hobbyist who has put forward the effort to define the microcontroller hardware for GCJ so that effective compilation may be done. It depends on the microcontroller. If it is well supported by GCC and runs Linux and glibc, then if GCJ/libgcj are not already working it would be fairly simple to get it working. If it does not meet these criteria, then it may be more difficult. Additionally, since one of the main show stoppers is the Java class libraries, I would also be interested in any efforts to convert the class libraries, on some basis, for microcontroller usage. Clearly this cannot be a direct compilation in all cases, especially for graphics, but any such activity may beat starting from square one. GCC ships with a fairly complete java runtime library (libgcj). David Daney
Re: GCC/GCJ, SWT, and license lock-in
Steve Perkins wrote: I have a question about using GCC/GCJ to compile a Java application which uses the SWT framework for its GUI, and whether this locks you in or out of any licensing options. I apologize in advance if this question is somewhat off-topic... I searched gnu.org for a mailing list specifically directed toward licensing discussion and came up empty. The SWT is covered by the Eclipse Public License (EPL), which does not bind you to use the EPL for programs which merely link to SWT without modifying it (Question #27 at http://www.eclipse.org/legal/eplfaq.php#DERIV). However, the FSF considers the EPL to be incompatible with the GPL. I'm not sure what impact (if any) this would have on my desire to write a GPL'ed application from scratch, which links to SWT for its GUI. I know that writing a *non*-GPL'ed application, and linking it to GPL'ed code, creates problems (e.g. the Cygwin DLL dependency issue on Windows). This is because the GPL requires you to use the GPL for works linking to GPL'ed code, even if you aren't modifying that GPL'ed code. However, it seems that this issue would not arise when going in the other direction... applying the GPL to new code, which links to libraries using non-restrictive licenses. Now, if I were also making modifications to the SWT as part of this work, then the EPL would be imposed upon the whole... and then I would clearly run into a clash between the GPL and EPL. However, since I am not modifying SWT, the EPL does not require me to impose its terms on the whole... so it appears that I would be free to apply the GPL on my non-derivative new code. On its face, there doesn't appear to be a licensing problem with applying the GPL to a new application which uses non-modified SWT. This seems fairly intuitive to me, but I wanted to bounce it off the GCC community to see if I'm overlooking any potential issues. 
For example, I'm not sure if GCC's recent migration from GPL 2 to GPL 3 has had any effect of imposing license terms on executables compiled with GCC/GCJ. I cannot really opine on SWT, but I would encourage you to look at the linking exception clauses in GCC's various runtime libraries. Specifically classpath/libgcj allow you to link to them with very few restrictions. It might also be a good idea to get a real lawyer to help you evaluate your exact situation. David Daney
Re: Resend: [PATCH] [MIPS] Fix asm constraints for 'ins' instructions.
Richard Sandiford wrote: David Daney [EMAIL PROTECTED] writes: Ralf Baechle wrote: On Wed, Jun 11, 2008 at 10:04:25AM -0700, David Daney wrote: The third operand to 'ins' must be a constant int, not a register. Signed-off-by: David Daney [EMAIL PROTECTED] --- include/asm-mips/bitops.h |6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/asm-mips/bitops.h b/include/asm-mips/bitops.h index 6427247..9a7274b 100644 --- a/include/asm-mips/bitops.h +++ b/include/asm-mips/bitops.h @@ -82,7 +82,7 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr) 2: b 1b \n .previous \n : =r (temp), =m (*m) - : ir (bit), m (*m), r (~0)); + : i (bit), m (*m), r (~0)); #endif /* CONFIG_CPU_MIPSR2 */ } else if (cpu_has_llsc) { __asm__ __volatile__( An old trick to get gcc to do the right thing. Basically at the stage when gcc is verifying the constraints it may not yet know that it can optimize things into an i argument, so compilation may fail if r isn't in the constraints. However we happen to know that due to the way the code is written gcc will always be able to make use of the i constraint so no code using r should ever be created. The trick is a bit ugly; I think it was used first in asm-i386/io.h ages ago and I would be happy if we could get rid of it without creating new problems. Maybe a gcc hacker here can tell more? It is not nice to lie to GCC. CCing GCC and Richard in hopes that a wider audience may shed some light on the issue. You _might_ be able to use i#r instead of ri, but I wouldn't really recommend it. Even if it works now, I don't think there's any guarantee it will in future. There are tricks you could pull to detect the problem at compile time rather than assembly time, but that's probably not a big win. And again, I wouldn't recommend them. 
I'm not saying anything you don't know here, but if the argument is always a syntactic constant, the safest bet would be to apply David's patch and also convert the function into a macro. I notice some other ports use macros rather than inline functions here. I assume you've deliberately rejected macros as being too ugly though. I am still a little unclear on this. To restate the question: static inline void f(unsigned nr, unsigned *p) { unsigned short bit = nr & 5; if (__builtin_constant_p(bit)) { __asm__ __volatile__ ("foo %0, %1" : "=m" (*p) : "i" (bit)); } else { // Do something else. } } . . . f(3, some_pointer); . . . Among the versions of GCC that can build the current kernel, will any fail on this code because the "i" constraint cannot be matched when expanded to RTL? David Daney
Re: Resend: [PATCH] [MIPS] Fix asm constraints for 'ins' instructions.
Richard Sandiford wrote: David Daney [EMAIL PROTECTED] writes: Richard Sandiford wrote: David Daney [EMAIL PROTECTED] writes: Ralf Baechle wrote: On Wed, Jun 11, 2008 at 10:04:25AM -0700, David Daney wrote: The third operand to 'ins' must be a constant int, not a register. Signed-off-by: David Daney [EMAIL PROTECTED] --- include/asm-mips/bitops.h | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/asm-mips/bitops.h b/include/asm-mips/bitops.h index 6427247..9a7274b 100644 --- a/include/asm-mips/bitops.h +++ b/include/asm-mips/bitops.h @@ -82,7 +82,7 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr) "2: b 1b \n" " .previous \n" : "=&r" (temp), "=m" (*m) - : "ir" (bit), "m" (*m), "r" (~0)); + : "i" (bit), "m" (*m), "r" (~0)); #endif /* CONFIG_CPU_MIPSR2 */ } else if (cpu_has_llsc) { __asm__ __volatile__( An old trick to get gcc to do the right thing. Basically at the stage when gcc is verifying the constraints it may not yet know that it can optimize things into an "i" argument, so compilation may fail if "r" isn't in the constraints. However we happen to know that due to the way the code is written gcc will always be able to make use of the "i" constraint, so no code using "r" should ever be created. The trick is a bit ugly; I think it was used first in asm-i386/io.h ages ago and I would be happy if we could get rid of it without creating new problems. Maybe a gcc hacker here can tell more? It is not nice to lie to GCC. CCing GCC and Richard in hopes that a wider audience may shed some light on the issue. You _might_ be able to use "i#r" instead of "ir", but I wouldn't really recommend it. Even if it works now, I don't think there's any guarantee it will in future. There are tricks you could pull to detect the problem at compile time rather than assembly time, but that's probably not a big win. And again, I wouldn't recommend them.
I'm not saying anything you don't know here, but if the argument is always a syntactic constant, the safest bet would be to apply David's patch and also convert the function into a macro. I notice some other ports use macros rather than inline functions here. I assume you've deliberately rejected macros as being too ugly though. I am still a little unclear on this. To restate the question: static inline void f(unsigned nr, unsigned *p) { unsigned short bit = nr & 5; if (__builtin_constant_p(bit)) { __asm__ __volatile__ ("foo %0, %1" : "=m" (*p) : "i" (bit)); } else { // Do something else. } } . . . f(3, some_pointer); . . . Among the versions of GCC that can build the current kernel, will any fail on this code because the "i" constraint cannot be matched when expanded to RTL? Someone will point this out if I don't, so for avoidance of doubt: this needs to be always_inline. It also isn't guaranteed to work with bit being a separate statement. I'm not truly sure it's guaranteed to work even with: __asm__ __volatile__ ("foo %0, %1" : "=m" (*p) : "i" (nr & 5)); but I think we'd try hard to make sure it does. I think Maciej said that 3.2 was the minimum current version. Even with those two issues sorted out, I don't think you can rely on this sort of thing with compilers that used RTL inlining. (always_inline does go back to 3.2, in case you're wondering.) Well I withdraw the patch. With the current kernel code we seem to always get good code generation. In the event that the compiler tries to put the shift amount (nr) in a register, the assembler will complain. I don't think it is possible to generate bad object code, so best to leave it alone. FYI, the reason that I stumbled on this several weeks ago is that if (__builtin_constant_p(nr)) in the trunk compiler was generating code for the asm even though nr was not constant. David Daney
Re: Resend: [PATCH] [MIPS] Fix asm constraints for 'ins' instructions.
Ralf Baechle wrote: On Wed, Jun 11, 2008 at 10:04:25AM -0700, David Daney wrote: The third operand to 'ins' must be a constant int, not a register. Signed-off-by: David Daney [EMAIL PROTECTED] --- include/asm-mips/bitops.h | 6 +++--- 1 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/asm-mips/bitops.h b/include/asm-mips/bitops.h index 6427247..9a7274b 100644 --- a/include/asm-mips/bitops.h +++ b/include/asm-mips/bitops.h @@ -82,7 +82,7 @@ static inline void set_bit(unsigned long nr, volatile unsigned long *addr) "2: b 1b \n" " .previous \n" : "=&r" (temp), "=m" (*m) - : "ir" (bit), "m" (*m), "r" (~0)); + : "i" (bit), "m" (*m), "r" (~0)); #endif /* CONFIG_CPU_MIPSR2 */ } else if (cpu_has_llsc) { __asm__ __volatile__( An old trick to get gcc to do the right thing. Basically at the stage when gcc is verifying the constraints it may not yet know that it can optimize things into an "i" argument, so compilation may fail if "r" isn't in the constraints. However we happen to know that due to the way the code is written gcc will always be able to make use of the "i" constraint, so no code using "r" should ever be created. The trick is a bit ugly; I think it was used first in asm-i386/io.h ages ago and I would be happy if we could get rid of it without creating new problems. Maybe a gcc hacker here can tell more? It is not nice to lie to GCC. CCing GCC and Richard in hopes that a wider audience may shed some light on the issue. David Daney
Re: Where is setup for goto in nested function created?
[EMAIL PROTECTED] wrote: During the process of fixing setjmp for the AVR target, I needed to define targetm.builtin_setjmp_frame_value() to be used in expand_builtin_setjmp_setup(). This sets the value of the frame pointer stored in the jump buffer. I set this value to virtual_stack_vars_rtx+1 (== frame_pointer). The receiver defined in the target later restores the frame pointer using virtual_stack_vars_rtx = value - 1. This produces correct code as expected and avoids run-time add/sub of offsets. (setjmp works!) However, for a normal goto used inside a nested function, a different part of gcc creates the code to store the frame pointer (not expand_builtin_setjmp_setup), and I can't find this code. The issue I have is that this goto setup code does NOT use targetm.builtin_setjmp_frame_value - it seems to use value=virtual_stack_vars_rtx, which is incompatible with my target receiver. So where is the goto setup code created? And is there a bug here? Perhaps you need to implement one or more of: save_stack_nonlocal, restore_stack_nonlocal, nonlocal_goto, and/or nonlocal_goto_receiver. David Daney
Re: RFH: Building and testing gimple-tuples-branch
Diego Novillo wrote: The tuples branch is at the point now that it should bootstrap all primary languages and targets. There are things that are still broken and being worked on (http://gcc.gnu.org/wiki/tuples), but by and large things should Just Work. I expect things like code generation to be sub-par because some optimizations are still not converted (notably, loop passes, PRE, and TER). So, for folks with free cycles to spare, could you build the branch on your favourite target and report bugs? Bugzilla and/or email reports are OK. If you are creating a bugzilla report, please add my address to the CC field. Other than obvious brokenness, we are interested in compile time slow downs and increased memory utilization, both of which are possible because we have spent no effort tuning the data structures yet. To build the branch: $ svn co svn://gcc.gnu.org/svn/gcc/branches/gimple-tuples-branch $ mkdir bld && cd bld $ ../gimple-tuples-branch/configure --disable-libgomp --disable-libmudflap $ make && make -k check For mipsel-linux: http://gcc.gnu.org/ml/gcc-testresults/2008-05/msg01055.html The complete bootstrap/test cycle seems to be about 10% faster than on the trunk. I didn't try to figure out why, although I can speculate that it is due to the fact that some optimization passes are still disabled and that tests that ICE prevent the corresponding execution tests from being run. Comparing the test results to a recent trunk build shows many FAILures that are only on the branch. Although I didn't investigate, the FAILure in libjava/Array_3 usually indicates that exception handling is broken in some way. David Daney
Re: [RFC] Modeling the behavior of function calls
Diego Novillo wrote: [ Apologies if this comes out twice. I posted this message last week, but I think it was rejected because of a .pdf attachment. ] We have been bouncing ideas for a new mechanism to describe the behavior of function calls so that optimizers can be more aggressive at call sites. Currently, GCC supports the notion of pure/impure, const/non-const, but that is not enough for various cases. The main application for this would be stable library code like libc, that the compiler generally doesn't get to process. David sketched up the initial idea and we have been adding to it for the last few weeks. At this point, we have the initial design ideas and some thoughts on how we would implement it, but we have not started any actual implementation work. The main idea is to add a variety of attributes to describe contracts for function calls. When the optimizers read in the function declaration, they can take advantage of the attributes and adjust the clobbering effects of call sites. We are interested in feedback on the main idea and possible implementation effort. We would like to discuss this further at the Summit, perhaps we can organize a BoF or just get folks together for a chat (this came up after the Summit deadline). Diego, For the (all important :-)) java front end, it could be useful to have an attribute indicating that a function returns a non-null value. This is the case for the new operator which throws on allocation failures. Having this would allow VRP to eliminate a good bit of dead code for common java constructs. See also: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=24825 Thanks, David Daney
Re: US-CERT Vulnerability Note VU#162289
Tom Truscott wrote: Here is an unintended bug I encountered recently, hopefully the cert warning will catch this one too. int okay_to_increment (int i) { if (i + 1 < i) return 0; /* adding 1 would cause overflow */ return 1; /* adding 1 is safe */ } Any sort of bug can cause a security vulnerability, so I recommend that gcc developers work harder on warning messages. Do you want warnings on all logic errors in your code, or only those that could cause a security vulnerability? The first case is easy, but I don't know of a simple algorithm that can distinguish the second :-). David Daney