[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-04 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #24 from R. Diez  ---
In case somebody else wants to patch their GCC 12.2, here is the
slightly-modified patch for convenience:

https://github.com/rdiez/DebugDue/blob/master/Toolchain/Patches/Gcc12EhGlobalsAtexit.patch

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-04 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #23 from R. Diez  ---
Many thanks for the fix.

If you backport it to GCC 12.x, I won't be able to complain so much. ;-)

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-04 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #20 from R. Diez  ---
I had to modify the patch slightly. I guess that union member "unsigned char
unused;" was removed after GCC 12.2 was released.

But otherwise, the patch does work, at least in my bare-metal scenario. The
atexit entry is no longer being generated, and I haven't seen any other
side-effects yet.

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-03 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #16 from R. Diez  ---
I am slowly arriving at a different conclusion.

"struct __cxa_eh_globals" has neither a constructor nor a destructor. Its
members are pointers or integers, so GCC will not have automatically generated
any constructor or destructor for this structure.

Therefore, the variable "static __cxa_eh_globals eh_globals" was in the past
already initialised before any users (probably because static memory is zeroed
on start-up). That has not changed.

This static variable already outlived anything else, as there was no destructor
changing anything on termination.

Your patch introduced the wrapper "struct constant_init" around it, which makes
GCC generate a constructor for the wrapper solely to register an empty
destructor with atexit(). Beyond that, the wrapper does nothing useful.

Your patch also changes '__eh_globals_init::_M_init' (a different global
object) to a static member. Is that not enough to fix the original problem?
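
For reference, the wrapper pattern being discussed looks roughly like this
(reconstructed from the comments in this report, so details may differ from
the actual eh_globals.cc):

namespace
{
  struct constant_init
  {
    union {
      unsigned char unused;
      __cxa_eh_globals obj;
    };
    constexpr constant_init() : obj() { }

    // Registering this empty destructor is what creates the atexit entry.
    ~constant_init() { /* do nothing, union member is not destroyed */ }
  };

  constant_init eh_globals;
}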

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

R. Diez  changed:

   What|Removed |Added

 Resolution|DUPLICATE   |FIXED

--- Comment #13 from R. Diez  ---
From your comments about "constexpr constructor" and "constinit", I gather that
the "eh_globals" singleton is then guaranteed to be initialised very early,
earlier than anything else that might use it, right? But that does not
guarantee that it will be destroyed later than anything that wants to use it,
is that correct? That is why we need the hack, to make it outlive all potential
users.

I am now trying to understand the implications of not destroying
"__cxa_eh_globals obj" inside the "eh_globals" singleton, at least in the case
of single-threaded (bare metal) embedded software. Hopefully, I can learn a
little more along the way about how C++ exception handling works.

As far as I can see, "struct __cxa_eh_globals" in unwind-cxx.h has 1 or 2
pointers and no destructor:

struct __cxa_eh_globals
{
  __cxa_exception *caughtExceptions;
  unsigned int uncaughtExceptions;
#ifdef __ARM_EABI_UNWINDER__
  __cxa_exception* propagatingExceptions;
#endif
};

Therefore, destroying this object should have no real effect. I wonder why
there was a problem to fix in 'eh_globals' in the first place.

I am guessing that some static analyser, or checker instrumentation, may now
complain that static object 'eh_globals', or at least its member 'obj', has not
been properly destroyed upon termination. Normally, that would mean a risk of
leaking some memory, but I am guessing that the last thread can never have any
such 'caughtExceptions' or 'propagatingExceptions' left upon termination,
right?

So, theoretically, instead of leaving the destructor for the singleton empty,
we could add asserts that those 2 pointers are nullptr. Or am I missing
something?
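
As a sketch of that idea, the destructor could look like this (using plain
assert() just for illustration; <cassert> or an internal libstdc++ assertion
macro would be needed):

~constant_init()
{
  // Member names as in __cxa_eh_globals shown above.
  assert( obj.caughtExceptions == nullptr );
#ifdef __ARM_EABI_UNWINDER__
  assert( obj.propagatingExceptions == nullptr );
#endif
}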

This all feels iffy. If I understand this correctly, it is impossible for GCC
to guarantee the correct construction and destruction order of such global
objects, and that is why we are hacking our way out. The reason is mainly that
not all targets support "__attribute__ constructor", so there is no way to
implement a deterministic initialisation and destruction order for everybody.
Is that right?

[Bug libstdc++/105880] eh_globals_init destructor not setting _M_init to false

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105880

R. Diez  changed:

   What|Removed |Added

 CC||rdiezmail-gcc at yahoo dot de

--- Comment #17 from R. Diez  ---
For the record, this fix introduces a call to atexit() for a static destructor,
see https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #9 from R. Diez  ---
> [...]
> not just "turn on -Os and all the code gets removed".

I am sure that the solution is not as trivial as "turn on -Os". But, as an
outsider, I find it hard to believe that it "takes non-trivial analysis of the
destructor body". The destructor is empty!

I am not talking about the GCC optimiser realising afterwards that the code is
generating an atexit() entry that does nothing. I am saying that GCC should not
generate so much code for an empty function for starters, and that GCC should
not generate the destructor registration at all if the destructor is empty. I
would imagine that those steps come before the optimiser gets to see the
generated code.

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #8 from R. Diez  ---
Why does this 'eh_globals' object have to use a constexpr constructor?

How does the current code avoid the "static initialization order fiasco"? If
the user defines his/her own static C++ objects, how is it guaranteed now that
'eh_globals' is initialised before all other user code?

Isn't using the "__attribute__ constructor" trick safer anyway? With it, you
can document what priority levels libstdc++ uses. The user may even want to run
a few routines before libstdc++ initialises. Flexibility in the initialisation
order is often important in embedded environments.

Portability is not really an issue. You can just "#ifdef GCC" around the
"better" hack. Is GCC not using "__attribute__ constructor" internally anyway
to implement such static constructors? So anybody using C++ with GCC must
support that mechanism already.

And about saving a few bytes: 400 bytes is no small amount in tightly constrained embedded
environments. But it is not just the amount of memory. As I mentioned, my code
is checking that nothing unexpected registers an atexit() destructor. If
libstdc++ does that on start-up, it becomes hard to tell whether something
unexpected has been added recently.

I can surely put up with yet another little annoyance with this new GCC
version. But bear in mind that flexibility and attention to detail in the
embedded world are among GCC's few remaining bastions. If GCC starts dropping
the ball here too, even more people will consider moving to clang.

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #5 from R. Diez  ---
I know very little about GCC, but it is a very smart compiler, so I am having a
hard time understanding how GCC could miss so many optimisations. After all,
even when compiling with little optimisation, GCC seems able to discard unused
code rather well.

In my project, I am building my own toolchain with this makefile:

https://github.com/rdiez/DebugDue/blob/master/Toolchain/Makefile

I haven't verified it this time around, but I believe that Newlib is being
built with '-Os' optimisation. Search for COMMON_CFLAGS_FOR_TARGET in that
makefile, which eventually gets passed to Newlib in CFLAGS_FOR_TARGET.

First of all, GCC seems unable to generate a truly empty routine or destructor,
or at least to flag it as being effectively empty. The caller could then
realise that it is empty and skip generating an atexit() call for it.

Secondly, I am not an ARM Thumb assembly expert either, but shouldn't "add r7, sp,
#0" be optimised away? After all, nobody is really using R7 in that routine.

And finally, what is the point of generating a function prologue and epilogue
that save and restore context on the stack? If the routine is pretty much
empty, but cannot be truly empty, shouldn't some kind of RET instruction
suffice?

[Bug c++/107500] Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-02 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

--- Comment #4 from R. Diez  ---
The 'constant_init' wrapper with the 'union' inside is a contrived hack, isn't
it? We may as well use a different hack then.

How about a combination of '__attribute__ constructor' and 'placement new' like
this?

uint8_t some_buffer[ sizeof( __cxa_eh_globals ) ];

// All objects with an init_priority attribute are constructed before any
// object with no init_priority attribute.
#define SOME_INIT_PRIORITY  200  // Priority range [101, 65535].

static __attribute__ ((constructor (SOME_INIT_PRIORITY))) void
MyHackForInitWithoutAtExitDestructor ( void ) throw()
{
  // Placement new.
  new ( some_buffer ) __cxa_eh_globals();
}

You would then need a 'get_eh_globals()' wrapper to return a pointer or a
reference to a '__cxa_eh_globals' object from 'some_buffer', by doing a type
cast. Everybody should then use the wrapper to access that singleton object.
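
A minimal sketch of such an accessor could be (illustrative only; note that
'some_buffer' would also need proper alignment, e.g. with alignas, and C++17
formally wants std::launder for this kind of cast):

static __cxa_eh_globals * get_eh_globals ( void ) throw()
{
  return reinterpret_cast< __cxa_eh_globals * >( some_buffer );
}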

[Bug c++/107500] New: Useless atexit entry for ~constant_init in eh_globals.cc

2022-11-01 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107500

Bug ID: 107500
   Summary: Useless atexit entry for ~constant_init in
eh_globals.cc
   Product: gcc
   Version: 12.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

I have this embedded firmware project of mine, which uses Newlib:

https://github.com/rdiez/DebugDue

It is the template for other similar bare-metal projects I have. Even though
some targets have very little SRAM (like 16 KiB), I am still using C++ and
exceptions. The project above documents how I configure GCC to that effect.

Up until GCC 11.2, I have been doing this check:

  if ( _GLOBAL_ATEXIT != nullptr )
  {
Panic( "Unexpected entries in atexit table." );
  }

On devices with very little RAM and without an operating system, initialisation
and destruction can be tricky. With the check above, I am making sure that
nothing unexpected comes up in that respect. I am initialising all static
objects manually anyway, to prevent any ordering surprises (the initialisation
order of C++ static objects can be problematic too).

The check above fails with GCC 12.2. Apparently, a destructor called
constant_init::~constant_init() gets added to the atexit table on start-up.

Because of the way that Newlib works, that wastes 400 bytes of SRAM, which
corresponds to sizeof( _global_atexit0 ). The structure has room for 32 atexit
calls (because of some ANSI conformance), but we are only using 1 entry.

The interesting thing is that the destructor is supposed to be empty, see:

https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/libsupc%2B%2B/eh_globals.cc

~constant_init() { /* do nothing, union member is not destroyed */ }

GCC generates the following code for that empty destructor:

0008da68 <(anonymous namespace)::constant_init::~constant_init()>:
  8da68:  b580  push  {r7, lr}
  8da6a:  af00  add   r7, sp, #0
  8da6c:  bd80  pop   {r7, pc}

That does not make any sense. Is there a way to prevent GCC from registering
such an empty atexit function? Failing that, is there a way to prevent GCC from
registering a particular atexit function, even if it is not empty?

I find it surprising that GCC emits such code. My project is building its own
GCC/Newlib toolchain with optimisation level "-Os", so I would expect at least
the "add r7, sp, #0" to be optimised away.

[Bug libstdc++/68606] Reduce or disable the static emergency pool for C++ exceptions

2022-09-28 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68606

--- Comment #13 from R. Diez  ---
It is hard to automatically tell whether nobody else is using such a
statically-allocated emergency buffer. In my case, I am using C++ exceptions,
so the linker will probably always include the buffer.

My patch makes sure that no emergency buffer is allocated. As long as your
firmware does not run out of malloc RAM, C++ exceptions continue to work fine.

About implementing a proper solution (my patch is just a workaround): There are
probably guys who want to control the size of the emergency buffer, but for
really constrained environments, I would like an option to disable it
completely.

As a bonus, the code that allocates and uses the emergency buffer could be
optimised away too, but that is not critical for me. RAM / SRAM is often tight,
but Flash/program memory (where the code resides) tends to be much bigger. So
optimising the buffer away from RAM would be enough in most scenarios.

[Bug libstdc++/68606] Reduce or disable the static emergency pool for C++ exceptions

2022-09-28 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68606

--- Comment #11 from R. Diez  ---

> Has a solution been found for embedded systems with very limited resources?
> In this case for example, C++ exceptions can be disabled and this
> emergency pool not needed.

Contrary to popular belief, C++ exception handling does not need many
resources. I have been generating dynamic error messages in readable English
using C++ exceptions on microcontrollers with as little as 16 KiB SRAM for
years, with 'plenty' of memory to spare.

To that effect, I have been using the patch that I mentioned above. Here is an
updated URL for it:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/Patches/GccDisableCppExceptionEmergencyBuffer-GCC-5.3.0.patch

[Bug c++/98992] attribute malloc error associating a member deallocator with an allocator

2022-06-24 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98992

R. Diez  changed:

   What|Removed |Added

 CC||rdiezmail-gcc at yahoo dot de

--- Comment #1 from R. Diez  ---
I have been using the following up to GCC 11.3:

struct MyClass
{
  static void FreeMemory ( const void * pMem ) throw();

  #if __GNUC_PREREQ(11, 0)
__attribute__ (( malloc, malloc( MyClass::FreeMemory, 1 ) ))
  #else
__attribute__ (( malloc ))
  #endif

  static void * AllocMemory ( size_t Size ) throw();

  [...]
};

However, GCC 12.1 does not want to accept it anymore:

error: 'malloc' attribute argument 1 does not name a function

I tried placing the attribute outside the class, like this:

__attribute__ (( malloc, malloc( MyClass::FreeMemory, 1 ) ))
void * MyClass::AllocMemory ( size_t Size ) throw()
{
  return malloc( Size );
}

But then I got 2 errors:

error: 'static void MyClass::FreeMemory(const void*)' is protected within this
context
  710 | __attribute__ (( malloc, malloc( MyClass::FreeMemory, 1 ) ))
  |   ^~

error: 'malloc' attribute argument 1 does not name a function

That cannot be right. GCC should not insist that the deallocator is public.
After all, the allocator AllocMemory is not public.

[Bug target/68605] Add -mno-crt0 to disable automatic crt0 injection

2022-04-08 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68605

--- Comment #4 from R. Diez  ---
That is certainly a way to fix the crt0 nuisance. But it requires some specs
file black magic, so yet another thing to learn. And then you have to keep up
with GCC in case something changes around the specs files. It is not a
user-friendly solution.

[Bug bootstrap/60160] Building with -flto in CFLAGS_FOR_TARGET / CXXFLAGS_FOR_TARGET

2022-04-08 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60160

R. Diez  changed:

   What|Removed |Added

 CC||rdiezmail-gcc at yahoo dot de

--- Comment #6 from R. Diez  ---
I am experimenting with a GCC 11.2 cross-compiler for bare-metal embedded
software.

There is no operating system, so no shared libraries or anything fancy. But
there is a static libc (Newlib or Picolibc).

I wanted to build everything with LTO, including libc, libstdc++ and libgcc.
This is the makefile I am using:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/Makefile

Search for "-ffat-lto-objects" in that makefile.

As soon as I enable the LTO flags, I get linker errors. They are documented in
the makefile next to the LTO options, and look similar to those reported in
this bug.

I tried -fno-builtin with varying degrees of success. I also tried building
only the application and libc with LTO, but not libstdc++ etc., to no avail.

LTO only works for the user application. As soon as libc or the other GCC
libraries are compiled with LTO, it fails.

It is unfortunate, because I believe that a full LTO build for a bare-metal
environment would be rather beneficial.

The patch and information referenced in this bug report look dated. Is there a
way to make LTO work now, at least for my configuration?

[Bug bootstrap/104301] New: --enable-cstdio=stdio_pure not passed down to libstdc++-v3

2022-01-31 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104301

Bug ID: 104301
   Summary: --enable-cstdio=stdio_pure not passed down to
libstdc++-v3
   Product: gcc
   Version: 11.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: bootstrap
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

I have been using the following makefile for years to build a cross-compiler
GCC toolchain with Newlib:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/Makefile

Now I would like to optionally replace Newlib with Picolibc.

Picolibc needs this option for libstdc++-v3:

--enable-cstdio=stdio_pure

So I tried to pass it to GCC's top-level 'configure' script, but
libstdc++-v3's 'configure' script is not run at that time; it runs
later on, when you invoke 'make' at GCC's top level. Then I see this
line in the 'make' output:

checking for underlying I/O to use... stdio

So it seems that the "--enable-cstdio=stdio_pure" option is not passed down
from the top-level 'configure' to libstdc++-v3's 'configure'.

I guess this is a bug in the top-level 'configure' logic. It has probably gone
unnoticed for a long time because it is only recently that you can specify
options other than 'stdio' in --enable-cstdio=xxx.

Or is the GCC user expected to delve down and configure GCC's components
separately?

[Bug libstdc++/104299] New: Doc: stdio is not the only option in --enable-cstdio=XXX

2022-01-31 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104299

Bug ID: 104299
   Summary: Doc: stdio is not the only option in
--enable-cstdio=XXX
   Product: gcc
   Version: 11.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: libstdc++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

File libstdc++-v3/doc/html/manual/configure.html states for
"--enable-cstdio=OPTION":

"At the moment, the only choice is to use 'stdio', a generic "C" abstraction."

That is no longer true. According to this snippet from
libstdc++-v3/acinclude.m4:

AC_DEFUN([GLIBCXX_ENABLE_CSTDIO], [
  AC_MSG_CHECKING([for underlying I/O to use])
  GLIBCXX_ENABLE(cstdio,stdio,[[[=PACKAGE]]],
[use target-specific I/O package], [permit stdio|stdio_posix|stdio_pure]) 

Options "stdio_posix" and "stdio_pure" are also available.

[Bug bootstrap/98324] [11 Regression] bootstrap broken with a LTO build configured with --enable-default-pie

2021-12-15 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98324

R. Diez  changed:

   What|Removed |Added

 CC||rdiezmail-gcc at yahoo dot de

--- Comment #7 from R. Diez  ---
I have been building cross-compiler toolchains for years with makefiles similar
to this one:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/Makefile

I am trying to upgrade that makefile from GCC 10.2 to GCC 11.2, and I am
getting exactly the same problem that this bug describes.

I checked, and the fix linked from this bug is included in version 11.2 .

However, I have never used option "--enable-default-pie" in the past, and I was
able build cross-toolchains with many GCC versions without it for years.

The target is actually an embedded ARM Cortex-M3 microcontroller with fixed
memory addresses (a "bare metal" firmware without OS), so I guess that I do not
really need PIE. It may even cost some performance, if I understand what PIE
does.

I am guessing that bootstrapping a cross-compiler GCC is still broken with its
default PIE setting.

[Bug c/42579] [PATCH] support for obtaining file basename

2021-05-31 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42579

--- Comment #12 from R. Diez  ---
*** Bug 77488 has been marked as a duplicate of this bug. ***

[Bug preprocessor/77488] Proposal for __FILENAME_ONLY__

2021-05-31 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77488

R. Diez  changed:

   What|Removed |Added

 Resolution|--- |DUPLICATE
 Status|UNCONFIRMED |RESOLVED

--- Comment #10 from R. Diez  ---
This issue has been fixed in bug 42579.

*** This bug has been marked as a duplicate of bug 42579 ***

[Bug c/42579] [PATCH] support for obtaining file basename

2021-05-31 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42579

R. Diez  changed:

   What|Removed |Added

 CC||rdiezmail-gcc at yahoo dot de

--- Comment #11 from R. Diez  ---
What is the target GCC version for __FILE_NAME__? GCC 12.1?

[Bug debug/100446] GDB has problems reading GCC's debugging info level -g3

2021-05-06 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100446

--- Comment #5 from R. Diez  ---
In a nutshell: "objdump --syms" does not show that symbol, probably because the
routine was inlined, but "readelf --debug-dump" does show it.

Thanks for your help.

[Bug debug/100446] GDB has problems reading GCC's debugging info level -g3

2021-05-06 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100446

--- Comment #3 from R. Diez  ---
Regarding "shifting the blame", no worries, I am grateful for any help.

I suspect that there is more than 1 issue here. Could you take a look at the
following aspect mentioned in the GDB bug?

8<8<

In fact, I do not understand why StartOfUserCode is not defined in the release
build, because it is the same source code after all. The same routine is used
in the same way.

I dumped all symbols like this and I compared them:

arm-none-eabi-objdump  --syms  firmware-debug-non-lto.elf
arm-none-eabi-objdump  --syms  firmware-release-lto.elf

8<8<

That particular symbol, StartOfUserCode, among many others, should be in the
release build too. And there is no GDB involved there at all.

I have no experience with clang or lldb at all, and I have read that lldb
is not ready yet for debugging bare metal firmware (at least off the shelf).

[Bug other/100446] New: GDB has problems reading GCC's debugging info level -g3

2021-05-06 Thread rdiezmail-gcc at yahoo dot de via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100446

Bug ID: 100446
   Summary: GDB has problems reading GCC's debugging info level
-g3
   Product: gcc
   Version: 10.3.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

GDB shows excessive CPU load and memory usage with -g3 debug info. Sometimes
this makes it unusable.

There are problems with debug (non-LTO) builds, but most issues come when
building with LTO, like for example missing or weird C++ symbols. The release
ELF seems to have lost most C++ symbols, and there are many entries like this:

00010d6b l   .debug_info 
00010d6b l   .debug_info 
00010d6b l   .debug_info 
00010d6b l   .debug_info 

More details about this problem are here:

https://sourceware.org/bugzilla/show_bug.cgi?id=27754

It looks like a GCC issue, and not a GDB issue.

I asked in the mailing list but got no answer at all:

https://gcc.gnu.org/pipermail/gcc-help/2021-April/140221.html

[Bug other/94330] New: No warning if jobserver not available

2020-03-25 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94330

Bug ID: 94330
   Summary: No warning if jobserver not available
   Product: gcc
   Version: 8.3.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

If you pass -flto=jobserver , and the jobserver file descriptors are not
actually available, you get no warning whatsoever from GCC.

GNU Make does try to help in this scenario with the following warning:

make[1]: warning: jobserver unavailable: using -j1.  Add `+' to parent make
rule.

Without such a warning, it is really easy to miss the opportunity to
parallelise the build. In fact, GCC seems to use 2 threads in this scenario,
which misleads you into thinking that the option is working correctly, when in
fact it is not using all the other CPU cores.

More context on this issue is here:

https://lists.gnu.org/archive/html/help-make/2020-02/msg0.html

https://gcc.gnu.org/legacy-ml/gcc-help/2020-02/msg00069.html

[Bug preprocessor/77488] Proposal for __FILENAME_ONLY__

2018-08-17 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77488

--- Comment #7 from R. Diez  ---
(In reply to Piotr Henryk Dabrowski from comment #6)
> You can use:
> 
> #line 2 "FileName.cpp"
> 
> at the very top (!) of all your files
> to change the content of __FILE__.
> This also affects compiler messages.

I do not want to override __FILE__. Its original content may be needed for
something else. I just want an alternative to generate smaller asserts.

Besides, maintaining such a "#line" hack in all files is uncomfortable. I
already mentioned that I am including other libraries with their own source
code and build systems. I need a way to tweak the assert definition for all of
them in Newlib. Otherwise, I have to patch all components everywhere.

[Bug c++/83211] Warning: ignoring incorrect section type for .init_array.00200

2018-01-30 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83211

--- Comment #2 from R. Diez  ---
I am upgrading my embedded ARM Cortex-M4 toolchain from GCC 6.4 to GCC 7.3, and
Binutils from 2.29.1 to 2.30 (among other minor component upgrades), and I am
not seeing this warning anymore.

I do not know what fixed it yet, or whether the warning is now just silenced by
default.

[Bug c++/83211] New: Warning: ignoring incorrect section type for .init_array.00200

2017-11-29 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83211

Bug ID: 83211
   Summary: Warning: ignoring incorrect section type for
.init_array.00200
   Product: gcc
   Version: 6.4.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

I recently upgraded my embedded ARM Cortex-M4 toolchain from GCC 6.3 to GCC
6.4, and Binutils from 2.28 to 2.29.1 (among other minor component upgrades).

My C++ code is using the following for a static routine:

  __attribute__ ((constructor (200)))
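
For context, a minimal routine using that attribute looks like this (the name
is just illustrative):

__attribute__ ((constructor (200)))
static void RunEarlyInitialisation ( void ) throw()
{
  // Runs before main(), ordered according to the priority value (200).
}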

After the upgrade I am getting this warning now:

  /tmp/cc79zoMV.s: Assembler messages:
  /tmp/cc79zoMV.s:5079: Warning: ignoring incorrect section type for
.init_array.00200
  /tmp/cc79zoMV.s:5112: Warning: ignoring incorrect section type for
.fini_array.00200


Normally, GCC emits this kind of section statement during compilation:

.section .init_array,"aw",%init_array

But for the "__attribute__ ((constructor (200)))" code, it emits this instead:

.section .init_array.00200,"aw",%progbits

Binutils saw this change recently:

https://sourceware.org/bugzilla/show_bug.cgi?id=21287

So I am guessing that is what is causing the new warning.

I suppose that GCC needs an update to match that Binutils change.

[Bug libstdc++/68606] Reduce or disable the static emergency pool for C++ exceptions

2017-02-08 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68606

--- Comment #6 from R. Diez  ---
The proposed patch looks OK. But I guess I will not be able to completely
disable the emergency pool by setting STATIC_EH_ALLOC_POOL_BYTES to 0, right?
But at least I hope that setting a very low value will have that effect in
practice.

[Bug preprocessor/77488] Proposal for __FILENAME_ONLY__

2016-09-06 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77488

--- Comment #2 from R. Diez  ---

> __FILE__ expands to whatever you pass on the command line as the base file
> (and for headers whatever the directory is where they are found + the header
> filename.  So, if you compile with gcc -c something.c -I subdir/ , then
> __FILE__ will be "something.c" or e.g. "subdir/something.h".  If you use
> absolute filenames, that is what you get.  So, if you care about the lengths
> of these, just tweak your Makefiles so that you do the right thing (compile
> in the directory with the sources, with -o to the directory of object files
> if different), rather than compiling in the object directory, with absolute
> filenames as sources.  Adding yet another __FILE__-like builtin preprocessor
> constant is IMHO undesirable,

I already wrote that I cannot easily control the path depth, because I am
using the autotools in my project in order to generate the makefiles. Some
libraries come with their own (complex) makefiles or build systems, so your
advice is not practicable in real life. Moreover, the GCC toolchain itself
decides where some of the include files lie and how their paths look.

I also said that using absolute paths in the makefiles may be desirable for
other reasons. I am concerned that the binary then ends up depending on the
source code and toolchain location, which makes it hard to get a reproducible
build. With
__FILE__, there is no practicable solution.


> especially in the spelling you're proposing.

I do not actually care about the spelling. I am just looking for some workable
way (in real life) to include filename (source code position) information in
assert messages that does not depend on full paths (and enables easily
reproducible builds).


> As for strrchr folding, I see it folded properly, e.g. __builtin_strrchr
> (__FILE__, '/') is optimized into "foo/bar/baz.c" + 7.  Optimizing it into
> "baz.c" would be incorrect, you can do later on with the returned pointer
> e.g. ptr -= 4; etc., which would be undefined behavior if everything before
> the last / is removed from the string literal.

OK, now I understand the issue with strrchr(), thanks.

[Bug preprocessor/77488] New: Proposal for __FILENAME_ONLY__

2016-09-05 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77488

Bug ID: 77488
   Summary: Proposal for __FILENAME_ONLY__
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: preprocessor
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

Hi all:

I am writing embedded software for a memory-constrained target.
However, I would still like to use assert() in debug builds to help debug the
software.

The assert() macro embeds a lot of text in the final binary, like the source
code filename, the function name and the assert expression. So much text is
blowing my memory budget.

This is newlib's definition of assert:

# define assert(__e) ((__e) ? (void)0 : __assert_func (__FILE__, __LINE__, \
   __ASSERT_FUNC, #__e))

Most of the time, I only need a filename and a line number, so I wrote a small
patch that builds my toolchain without the rest. The patch is here:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/NewlibSmallerAssert.patch
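
A reduced assert() along those lines could look like this (hypothetical sketch,
not necessarily the exact contents of the patch above):

/* Filename and line number only; the function name and the expression text
   are omitted to save program memory. */
# define assert(__e) ((__e) ? (void)0 : __assert_func (__FILE__, __LINE__, \
       NULL, NULL))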

The trouble is, the only built-in symbol that yields the filename of a source
file is __FILE__, which includes the full path. But the full path is often too
long, and its length varies depending on where the software is built, so the
generated binary may or may not fit in the target's program memory depending on
the source code's path length at compilation time.

This is not obvious and rather annoying. For example, when the overnight build
suddenly blows the flash memory (program memory) budget, it is not obvious that
the reason is a source path on the build server that is longer than the one on
the developer's PC.

A related question has been asked before:

  __FILE__ macro shows full path
  https://stackoverflow.com/questions/8487986/file-macro-shows-full-path

I cannot easily control the path depth, because I am using the autotools in my
project in order to generate the makefiles.

In any case, the advice regarding filenames in makefiles and so on is often
that you should be using absolute paths in order to avoid surprises.

The following suggested solution does not work for me either:

  #define __FILENAME__ (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 :
__FILE__)

Function strrchr() is recognised as a GCC intrinsic, but it is not optimised
away at compilation time like the strlen() case. That alone could be an
improvement to GCC.
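
One workaround along those lines would be to strip the directory part at
compile time (hypothetical sketch, needs C++14 for the loop in a constexpr
function; note that the full __FILE__ string literal may still end up in the
binary unless the compiler can prove it unused):

constexpr const char * BaseName ( const char * path )
{
  const char * base = path;

  for ( const char * p = path; *p != '\0'; ++p )
    if ( *p == '/' )
      base = p + 1;

  return base;
}

#define FILENAME_ONLY  BaseName( __FILE__ )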

A new predefined symbol like __FILENAME_ONLY__, which should yield __FILE__
without the full path, would be the most comfortable solution for me.
Sure, some source files are going to have the same name, but it is fairly easy to
tell which assert failed from a filename and a line number even if 2 or 3
source files happen to have the same name.

A more advanced solution would be to have a predefined symbol like
__FILENAME_WITHOUT_PREFIX__( filename, prefix_to_remove ), but I do not think
that this is worth the trouble.

I have seen source code bases where each .cpp file got assigned a textual ID or
a number manually, so that the strings passed to assert() are always of a
determined length, but this is a pain to maintain. Every time you add a new
source file, you need to update the ID table, which tends to trigger a
recompilation of all files. But maybe the compiler could automatically assign
an ID per file and then generate a map file with those IDs for later look-up.

Or maybe I could write an __assert_func() that prints the
__builtin_return_address() instead of a source filename, so that I can manually
look-up the function's name in the generated linker .map file.
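
Such a replacement could be sketched like this (PrintPanicAddress is a made-up
routine; the signature follows newlib's declaration of __assert_func):

extern void PrintPanicAddress ( void * addr );

void __assert_func ( const char * file, int line, const char * func,
                     const char * failedexpr )
{
  (void) file; (void) line; (void) func; (void) failedexpr;  // Unused here.

  PrintPanicAddress( __builtin_return_address( 0 ) );

  for ( ; ; )
    ;  // __assert_func must not return.
}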

However, I have to enable some optimisation even on debug builds for my
memory-constrained targets, and I wonder if even minor optimisations could
render those addresses meaningless for the purposes of correlating to a source
code line.

Any other ideas?

Thanks in advance,
  rdiez

[Bug libstdc++/68606] Reduce or disable the static emergency pool for C++ exceptions

2015-11-30 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68606

--- Comment #2 from R. Diez  ---
A setting like LIBSTDCXX_EMERGENCY_EH_POOL_SIZE sounds good. However, an
environment variable will not help me; it has to be a configuration option when
building the toolchain. I am writing embedded firmware with newlib, and, as far
as I know, there are no environment variables on start-up. You can set them in
main() with setenv(), but that's too late: the emergency pool has already been
initialised by then.

I would also like to specify a size of 0 to disable it completely. I need to
save as much memory as I can. After all, even if the emergency pool is huge, it
can still happen that it is not enough, so the exception-handling code must
already deal with the case where the pool has no room left, maybe with panic().
In my case, if there is no malloc() space available to throw an exception, the
system is not working properly anyway.

By the way, setting the size of some global emergency pool on start-up seems
like a hack. You do not really know how big it should be; that depends, for
example, on the number of threads you have, which is not known upfront.
Reserving space per thread can help, but that's then a big waste if you have
many threads. I suggest a special case in the unwind logic that can throw a
particular out-of-memory exception like std::bad_alloc without allocating
memory with malloc().

Whatever happened to the sensible way of allocating the exception on the stack
and copying it up the stack every time the stack unwinds? I don't think that
optimising the performance of the error-handling case is a high priority, at
least for most applications. At least for embedded applications, allocating
with malloc() when you throw is far worse. That prevents, for example, throwing
in interrupt context, where malloc() is often not available.

[Bug other/68605] Add -mno-crt0 to disable automatic crt0 injection

2015-11-29 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68605

--- Comment #2 from R. Diez  ---
Option -nostartfiles breaks other start-up things like _init and
__libc_init_array:

/home/rdiez/rdiez/arduino/toolchain-bin-08/lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/lib/libstdc++.a(system_error.o):
In function `__static_initialization_and_destruction_0':
/home/rdiez/rdiez/arduino/JtagDue/Toolchain/Tmp/gcc-4.9.3/libstdc++-v3/src/c++11/system_error.cc:66:
undefined reference to `__dso_handle'
/home/rdiez/rdiez/arduino/toolchain-bin-08/lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/lib/libg.a(lib_a-init.o):
In function `__libc_init_array':
/home/rdiez/rdiez/arduino/JtagDue/Toolchain/Tmp/newlib-2.2.0.20150423/newlib/libc/misc/init.c:37:
undefined reference to `_init'
/home/rdiez/rdiez/arduino/toolchain-bin-08/lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/bin/ld:
jtagdue.elf: hidden symbol `__dso_handle' isn't defined
/home/rdiez/rdiez/arduino/toolchain-bin-08/lib/gcc/arm-none-eabi/4.9.3/../../../../arm-none-eabi/bin/ld:
final link failed: Bad value

[Bug libstdc++/68606] New: Reduce or disable the static emergency pool for C++ exceptions

2015-11-29 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68606

Bug ID: 68606
   Summary: Reduce or disable the static emergency pool for C++
exceptions
   Product: gcc
   Version: 4.9.3
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: libstdc++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

I am writing embedded firmware in C++ for small ARM Cortex-M3 microcontrollers
running on a 'bare metal' environment. One of the CPUs has just 16 KiB of SRAM.
Nevertheless, I have found that the only way to report sensible error messages
is by using C++ exceptions. And this is working fine, even with such tight
resources.

When you have 16 KiB SRAM, you do look carefully where your memory is going. It
took me a while to realise that libstdc++ was reserving a full 2 KiB for some
undocumented emergency memory pool in connection with C++ exceptions.

Acceptance of C++ exceptions has been slow over the years, not least due to
implementation deficiencies. In order to fix this one, I would like to see that
memory pool documented, and an option to reduce or disable it.

Embedded systems tend to be designed to never use all available memory. In many
such systems, a failing malloc() can only be caused by a bug, and if it does
happen, the system will panic and automatically reboot.

In my project, I have disabled the pool with this patch:

https://github.com/rdiez/JtagDue/blob/master/Toolchain/GccDisableCppExceptionEmergencyBuffer.patch

[Bug other/68605] New: Add -mno-crt0 to disable automatic crt0 injection

2015-11-29 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68605

Bug ID: 68605
   Summary: Add -mno-crt0 to disable automatic crt0 injection
   Product: gcc
   Version: 4.9.3
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de
  Target Milestone: ---

I am developing embedded software on an ARM Cortex-M3 CPU without an OS ('bare
metal') using newlib. I am not using libgloss, sometimes because the boards
need custom start-up code, partly because they have 2 firmware versions (normal
and emergency). In other scenarios, libgloss is just not worth the trouble.

GCC insists on adding crt0.o to the list of objects to link, which libgloss
tends to provide. I haven't got such a file, and I do not need one, as
everything is provided somewhere else. Due to limitations in the autotools, I
have to provide an empty crt0 in every makefile. An example is here; just
search for "crt0" on the following page:

https://github.com/rdiez/JtagDue/blob/master/Project/JtagFirmware/Makefile.am

I am not the only one hitting this problem; search the Internet for [gcc crt0]
and you'll find many hits. Working around this issue has already made enough
people waste enough time.

I guess quietly and forcibly adding an object file to the list of things to
link is still there for historical reasons, but that does not make it any less of
a bad practice.

Some GCC targets have a -mno-crt0 flag to alleviate the problem; others allow you
to change the name from crt0 to something else. I would welcome a flag like
-mno-crt0 for all architectures, or at least for ARM, which is what is bugging
me at the moment.

I have seen that some ARM toolchain patches GCC to remove the crt0 injection,
but I like building the GCC toolchain myself, and patching this every time is
unnecessary work.

[Bug other/63440] -Og does enable -fmerge-constants too

2014-10-06 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63440

--- Comment #2 from R. Diez rdiezmail-gcc at yahoo dot de ---
Yes, I would enable -fmerge-constants with -Og.

I would do it even for -O0. Merging constants should be safe, and it saves
precious program space when generating debug builds for small embedded targets.

Besides, in my opinion, it does not make sense that the addresses of some
literal strings suddenly change when you enable optimisations, because they get
collapsed. Any bugs because of such shared addresses should be apparent in
debug builds too.
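
To illustrate the kind of collapsing meant here (hypothetical example):

const char * TextA = "shared text";
const char * TextB = "shared text";

// With -fmerge-constants, identical string constants can be merged across
// compilation units, so TextA and TextB (and copies of the same literal in
// other files) may end up pointing to the same address.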

Note that GCC already seems to apply some optimisations when building with -O0.
Specifically, I believe that at least some dead code elimination does occur.
That would make sense, as you do not want to bloat your debug builds
unnecessarily.


[Bug other/63440] New: -Og does enable -fmerge-constants too

2014-10-02 Thread rdiezmail-gcc at yahoo dot de
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63440

Bug ID: 63440
   Summary: -Og does enable -fmerge-constants too
   Product: gcc
   Version: 4.9.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: rdiezmail-gcc at yahoo dot de

The documentation for -fmerge-constants does not mention that the new
optimization level -Og enables -fmerge-constants too. Or at least it seems to,
judging by the generated code size in my tests.


[Bug target/55514] New: PowerPC EABI: Warning: setting incorrect section attributes for .sdata2

2012-11-28 Thread rdiezmail-gcc at yahoo dot de
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55514

 Bug #: 55514
   Summary: PowerPC EABI: Warning: setting incorrect section
attributes for .sdata2
Classification: Unclassified
   Product: gcc
   Version: 4.7.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: rdiezmail-...@yahoo.de

I am compiling with -meabi -msdata=eabi, and I am getting this compilation
warning every now and then:

  Warning: setting incorrect section attributes for .sdata2

This line of code triggers it:

  const uint8_t utf8TestStringKanji[] = { 0xE6, 0xBC, 0xA2, 0xE5, 0xAD, 0x97,
0x00 };

This is the assembler output for those 7 bytes of data:

 11768          .section .sdata2,"aw",@progbits
 11769          .align 2
 11770  .LC1:
 11771 0000 E6       .byte -26
 11772 0001 BC       .byte -68
 11773 0002 A2       .byte -94
 11774 0003 E5       .byte -27
 11775 0004 AD       .byte -83
 11776 0005 97       .byte -105
 11777 0006 00       .byte 0

The 'w' in the "aw" means writable, which is wrong in this case.

I replaced that line of code with the following, which is more or less the same
for my purposes:

  const char * const utf8TestStringKanji = "\xE6\xBC\xA2\xE5\xAD\x97\x00";

And that compiles fine (no warnings).


[Bug target/55515] New: PowerPC EABI: Create a predefined symbol for -msdata=xxx

2012-11-28 Thread rdiezmail-gcc at yahoo dot de
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55515

 Bug #: 55515
   Summary: PowerPC EABI: Create a predefined symbol for
-msdata=xxx
Classification: Unclassified
   Product: gcc
   Version: 4.7.2
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: target
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: rdiezmail-...@yahoo.de

I am investigating this issue:

  http://sourceware.org/ml/newlib/2011/msg00295.html

If that link does not work, search the mailing list for "PowerPC EABI issues
with newlib".

During the investigation, I have missed a way to tell which -msdata=xxx option
is being used at the moment. That is, I need a predefined symbol like
_PPC_MSDATA=EABI, _PPC_MSDATA_EABI or similar. With such a symbol, it would be
easy to #error if the user has specified the wrong compiler flags.
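
To illustrate the proposal, usage of such a symbol could look like this (the
symbol name is only the one suggested above; it does not exist in GCC):

#ifndef _PPC_MSDATA_EABI
#error "This translation unit must be compiled with -msdata=eabi."
#endif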



Info about where to look in GCC's source code is here:

  http://gcc.gnu.org/ml/gcc-help/2011-07/msg00079.html

If that link does not work, search the mailing list for "How to check at
compilation time whether -msdata is set to eabi for PowerPC embedded targets".


[Bug c/49674] New: Improve documentation for __attribute__ __section__

2011-07-08 Thread rdiezmail-gcc at yahoo dot de
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49674

   Summary: Improve documentation for __attribute__ __section__
   Product: gcc
   Version: 4.6.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: rdiezmail-...@yahoo.de


Can someone please improve the documentation for __attribute__ __section__ ?
Only functions are mentioned, but I know it works for variables too. This is
the documentation page on the Web:

  http://gcc.gnu.org/onlinedocs/gcc-4.6.1/gcc/Function-Attributes.html

Apparently, there are checks in GCC against the section properties. For
example, you cannot place a non-const variable in a const section; you get a
"section type conflict" error message. Documentation about this would be an
extra bonus.
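
For example, usage on a variable looks like this (illustrative only):

// Place this constant table in a specific linker section.
static const int MyTable[ 4 ]
  __attribute__ (( __section__ ( ".my_const_section" ) )) = { 1, 2, 3, 4 };

// Placing a non-const (writable) variable in the same section within the same
// translation unit would then trigger the "section type conflict" error
// mentioned above.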

I wonder how documentation bugs are handled in this project; I couldn't find
any documentation category in Bugzilla.

Many thanks in advance.