[Bug lto/45375] [meta-bug] Issues with building Mozilla with LTO

2011-04-04 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45375

--- Comment #69 from Mark Mitchell mark at codesourcery dot com 2011-04-05 
00:16:02 UTC ---
On 4/4/2011 3:19 AM, froydnj at codesourcery dot com wrote:

 Do folks think it would be useful to include a breakdown by individual
 TREE_CODE, similar to what's done for RTXes?

Sure, it couldn't hurt, and I can definitely think of situations where
I wanted exactly that.

Thank you,


[Bug lto/45375] [meta-bug] Issues with building Mozilla with LTO

2011-01-05 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45375

--- Comment #22 from Mark Mitchell mark at codesourcery dot com 2011-01-06 
03:55:40 UTC ---
On 1/5/2011 5:36 AM, hubicka at gcc dot gnu.org wrote:

 40259 5.6000  cc1plus  cc1plus 
 lookup_field_1

I've looked at this, in the distant past.  I don't think the routine
itself is *very* low-hanging fruit; it's already using an inline
O(log n) algorithm to find a field in most cases, and I bet that's as
good as a hash table since n is generally relatively small.  But, maybe
"in most cases" is wrong; there is a slow path, and we should confirm
that most of the time is spent in the fast-path code.

We could also try a bit of memoization; I wouldn't be surprised if we
often lookup x.y several times in a row.

More often, though, when I've looked at this kind of thing, I've
concluded that the problem was that we were calling the routine too
often, rather than that the routine itself was too slow.  Quite
possibly we could improve the algorithms that use lookup_field_1 so
that they don't call it as often, by building caches or otherwise.  For
that, we'd need to look at the callers of lookup_field_1.

So, in summary, I'd recommend three things:

* Split lookup_field_1 into its fast-path and slow-path code so that we
can profile it and figure out which code is taking up most of the time.

* Assuming it's fast-path code, look at the frequent callers and think
about how to optimize them.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-14 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #62 from Mark Mitchell mark at codesourcery dot com 2010-12-14 
15:17:25 UTC ---
 Having everyone with knowledge of static construction alerted, can't we use 
 the
 GNU constructor priorities to solve PR44952?

The two constraints are:

(a) priorities aren't supported on all systems, so we need to have a
fall-back mechanism

(b) we need to document which range of priorities we're reserving for
libstdc++

On RTOS platforms, high priorities are also used for things like C
library initialization and even for device initialization.

Thank you,


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-12 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #46 from Mark Mitchell mark at codesourcery dot com 2010-12-12 
18:40:35 UTC ---
On 12/11/2010 4:32 PM, hjl.tools at gmail dot com wrote:

 Mark, I may have misunderstood you. Correct me if I am wrong.
 Currently, it may be possible to interleave constructors
 between different object files by examining .ctors section names
 and passing object files in specific order to linker.

It is possible.  The linker sorts the section names, so a higher
priority constructor always runs before a lower priority constructor,
independent of object file order.  You may also be able to play games
with object file order to control the order of constructors with the
same priority, but we don't document that anywhere, as far as I know.

 But we can't do it between .init_array and .ctors sections.

Correct, we do not at present do that.  That's the problem I'm raising
with switching to .init_array.  If you do that, and someone links in old
object code using .ctors, we may run a high-priority .ctors constructor
after a low-priority .init_array constructor, or we may run a
high-priority .init_array constructor after a low-priority .ctors
constructor.  Either outcome would be a bug; we would break semantics.

My opinion is that we can't switch to .init_array unless we either (a)
make the linker detect the problem and fix it, or (b) at least make the
linker detect the problem and issue an *error*.  I do not think a
warning is sufficient.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #16 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
18:50:11 UTC ---
On 12/11/2010 10:47 AM, hjl.tools at gmail dot com wrote:

 Linker supports sorting .ctors.N and .init_array.N.
 Within .ctors.N and .init_array.N, the order is defined.
 And .ctors.N will be called before .init_array.

Really?  I thought all of .ctors.* got sorted into a single big block.

If the GNU linker (and GOLD) know how to interleave .ctors.N with
.init_array.N so that constructor priority is honored even when
mixing .ctors and .init_array, then I think we're OK.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #18 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
19:33:17 UTC ---
On 12/11/2010 11:03 AM, hjl.tools at gmail dot com wrote:

 I am not sure about GOLD.  But it usually follows the GNU linker.
 For the GNU linker, the constructor priority is honored within
 .ctors.N and .init_array.N.  .ctors.N will be called
 before .init_array.

From the linker script fragment you're showing, we're not going to get
the right behavior.  In particular, all .ctors.* are going to get called
before any .init_array.*, or vice versa; we won't interleave the two
appropriately.

So, if I understand correctly, we have a critical problem with switching
to .init_array; we'll fail to conform to the specification for GNU
constructor priorities.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #24 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
19:56:43 UTC ---
On 12/11/2010 11:53 AM, hjl.tools at gmail dot com wrote:

 You have to be more specific about what you meant by interleaving.

Constructor priorities are a GNU C extension:

  __attribute__((constructor(priority)))

 I have said: "If you have constructor priorities in .o files and .c
 files, you may get different behaviors if .o files are compiled with
 a different compiler, different versions of GCC, or not GCC at all."

Well, it sounds to me, then, as though we would be introducing a binary
compatibility problem with this change.  If we're going to do it, I
think that means adding linker smarts that detect that there are both
.ctors.* and .init_array.* sections and issuing an error -- not a
warning -- together with a hint as to how to recompile so as to get
either the new or old behavior.  (Some people will have binary libraries
they can't recompile, so we need to explain how to compile new code so
that it still uses .ctors.*.)


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #27 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
20:19:23 UTC ---
On 12/11/2010 12:17 PM, hjl.tools at gmail dot com wrote:

 I don't think GCC really supports interleaving constructor priority
 at the binary level.  Unless GCC can guarantee one can interleave
 constructor priority in object files

I don't understand this comment at all.  GCC honors constructor
priorities across object files and has for ages.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #29 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
21:06:41 UTC ---
On 12/11/2010 1:01 PM, hubicka at ucw dot cz wrote:

 So I take it that the .ctors order is to support priorities, since the
 .ctors.priority sections get merged into a single section and ordered
 in increasing rather than decreasing order, while .init_array gets
 around the problem.

I don't think "gets around the problem" is true.  In both cases, you
need to honor the order of constructor priorities.  That's a GNU C
extension, so not part of most standard ABIs, but it's one people use.
Whether you use .ctors.* or .init_array, you have a bunch of stuff that
has to run in a particular order, and the linker has to make sure that
happens.

 Can't the linker be told to translate the .ctors section into
 .init_array upon seeing that both are used?  (Or just do it by default?)

Maybe...

Certainly, linker magic seems like the obvious way to solve a binary
compatibility problem.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #32 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
23:19:05 UTC ---
On 12/11/2010 2:56 PM, hjl.tools at gmail dot com wrote:

 It works at the source code level.  I don't believe we ever supported
 interleaving constructor priorities between object files, with
 .ctors or .init_array.

You can definitely use different priorities in different object files
and be guaranteed that the constructors will be run in numerical
priority order across object files.  That's the whole point of the feature.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #34 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
23:30:19 UTC ---
On 12/11/2010 3:28 PM, hjl.tools at gmail dot com wrote:

 1. How do you find out what priority foo's constructor has?

If you're looking at source code, read the source.  If you're looking
at object code, look at what section the constructor is in; the
numerical value N in .ctors.N indicates the priority.

 2. How do you run your constructor before foo?

Give it a higher priority.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #36 from Mark Mitchell mark at codesourcery dot com 2010-12-11 
23:54:44 UTC ---
On 12/11/2010 3:48 PM, hjl.tools at gmail dot com wrote:

 1.  __attribute__((init_priority(1005))) doesn't map to
 .ctors.1005 section.

It probably maps to .ctors.(65535-1005).  There is most definitely a
direct relationship.

 2. You need to check .init_array.* sections on some
 platforms.

Not now -- because on most platforms those sections aren't used.  The
whole point of this PR is to consider switching to .init_array.  If we
do that, then, yes, you need to use those sections *and* interleave
correctly with .ctors sections.

 2. How do you run your constructor before foo?

 Give it a higher priority.
 
 The highest priority is 65535. What if foo's
 constructor already has 65535 priority?

There is a maximum priority; you can't have a higher priority than
that.  But, so what?  Your question is like asking "how do you make an
unsigned int bigger than UINT_MAX?"

In any case, this is totally irrelevant to the issue of mixing .ctors
and .init_array.

 That is the constructor order between A and B. We don't support
 interleaving constructor priorities between object files.

Yes, we do.  We have for a very long time.  This is why the linker sorts
the .ctors sections.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #38 from Mark Mitchell mark at codesourcery dot com 2010-12-12 
00:03:22 UTC ---
On 12/11/2010 4:00 PM, hjl.tools at gmail dot com wrote:

 Really? Here is a testcase.  Do you think goo's constructor
 will be called before another constructor in another file
 with priority 1005?

Yes.

(Or after, I don't remember if smaller numbers indicate higher priority.
 In either case, there is a deterministic order based on the priority
number.)

This is the point of the feature.  If that were not the case, there
would be no need to have .ctors.N sections; everything would just go
in .ctors.


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #40 from Mark Mitchell mark at codesourcery dot com 2010-12-12 
00:11:56 UTC ---
On 12/11/2010 4:08 PM, hjl.tools at gmail dot com wrote:

 We only support constructor priority in a single source file:

H.J., this is false.

Please try writing three constructors, with priorities 1, 2, and 3.
Put the constructors with priorities 1 and 3 in one file and 2 in
another file.  See what happens when the program runs.

Thank you,


[Bug target/46770] Replace .ctors/.dtors with .init_array/.fini_array on targets supporting them

2010-12-11 Thread mark at codesourcery dot com
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46770

--- Comment #43 from Mark Mitchell mark at codesourcery dot com 2010-12-12 
00:24:30 UTC ---
On 12/11/2010 4:20 PM, hjl.tools at gmail dot com wrote:

 That means we only guarantee constructor priorities in one TU and
 my testcase confirms it.

HJ, this isn't true.

The experiment I suggested in my last email is pretty straightforward.
If you're unwilling to do the experiment, it seems that you're not
really very interested in figuring out the answer.

Perhaps you should go ask some other people and see what they think.


[Bug c++/43680] [DR 1022] G++ is too aggressive in optimizing away bounds checking with enums

2010-04-20 Thread mark at codesourcery dot com


--- Comment #15 from mark at codesourcery dot com  2010-04-20 22:18 ---
Subject: Re:  [DR 1022] G++ is too aggressive in optimizing
 away bounds checking with enums

jason at gcc dot gnu dot org wrote:

 Certainly optimizing away bounds checking is good when it is provably
 redundant, but that clearly doesn't apply to this case.

Do you think this is different from signed integer overflow in loops?
To me, it seems quite similar.  That's a situation where the compiler
will now optimize away the check in something like
for (int i = 0; i >= 0; ++i), leaving us with an infinite loop.

And, of course, that can hit you in a security context too.

  /* Here we know that i is positive.  */
  ...
  if (i + 100 < 0)
    abort();
  /* The check above will make sure this never overflows ...
     <scaryvoice>or will it?</scaryvoice> */
  i += 100;

 That said, I'll go ahead and add the option.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43680



[Bug c++/42748] warnings about 'mangling of 'va_list' has changed in GCC 4.4' not suppressed in system headers

2010-02-18 Thread mark at codesourcery dot com


--- Comment #21 from mark at codesourcery dot com  2010-02-18 19:47 ---
Subject: Re:  warnings about 'mangling of 'va_list' has changed
 in GCC 4.4' not suppressed in system headers

manu at gcc dot gnu dot org wrote:

 In any case, using diagnostic_report_warnings_p (location) should fix it.

AFAICT, this is not the case; at the point of mangling, input_location
does not necessarily reflect the location at which the function was
declared.  Julian Brown and I are looking into this.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42748



[Bug c++/42748] warnings about 'mangling of 'va_list' has changed in GCC 4.4' not suppressed in system headers

2010-01-29 Thread mark at codesourcery dot com


--- Comment #15 from mark at codesourcery dot com  2010-01-29 15:12 ---
Subject: Re:  warnings about 'mangling of 'va_list' has changed
 in GCC 4.4' not suppressed in system headers

manu at gcc dot gnu dot org wrote:

 Why is this a note and not simply a warning?

Because, as noted earlier, it's not reflective of any likely problem in
the user's code.  I think a warning is appropriate if the compiler
detects something which might indicate a bug in the application, but
this is just the compiler telling you that something might go wrong if
you link with code from a different version of G++.  Which is unlikely,
and might go wrong for other reasons too.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42748



[Bug c++/42748] warnings about 'mangling of 'va_list' has changed in GCC 4.4' not suppressed in system headers

2010-01-27 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2010-01-27 20:04 ---
Subject: Re:  warnings about 'mangling of 'va_list' has changed
 in GCC 4.4' not suppressed in system headers

paolo dot carlini at oracle dot com wrote:

 If you say 'consider' and are talking to a GWP and release manager, it
 seems impolite to re-open at once.

I certainly took no offense.

I do think the patch would apply easily to GCC 4.4, and I think it's
appropriate to apply it there.  However, I have so little bandwidth for
GCC development these days that I will not be able to quickly do the
appropriate patch/test cycle.

Matthias, would you like to do that?  If so, the patch is certainly
pre-approved.  If not, I'll put it in my queue, but please be patient.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42748



[Bug tree-optimization/39251] FAIL: g++.dg/tree-ssa/new1.C scan-tree-dump-not forwprop1 = .* \+ -

2010-01-15 Thread mark at codesourcery dot com


--- Comment #10 from mark at codesourcery dot com  2010-01-15 15:05 ---
Subject: Re:  FAIL: g++.dg/tree-ssa/new1.C scan-tree-dump-not
 forwprop1 = .* \+ -

ramana at gcc dot gnu dot org wrote:

 So, yes, it does look ARM-specific.  Also, peeking at results on
 gcc-testresults doesn't show this failure on x86.

Thanks for looking at that.  I will investigate this bug, but it might
not be until week after next, as I will be out of the office this coming
week.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39251



[Bug c++/14777] [4.3/4.4/4.5 Regression] typedef doesn't fully expose base class type

2009-11-13 Thread mark at codesourcery dot com


--- Comment #15 from mark at codesourcery dot com  2009-11-13 15:07 ---
Subject: Re:  [4.3/4.4/4.5 Regression] typedef doesn't fully
 expose base class type

jason at gcc dot gnu dot org wrote:
 I'm assuming Mark isn't actually working on this bug.

Sad, but true.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14777



[Bug c++/26266] [4.2/4.3/4.4 regression] Trouble with static const data members in template classes

2009-03-20 Thread mark at codesourcery dot com


--- Comment #26 from mark at codesourcery dot com  2009-03-20 20:16 ---
Subject: Re:  [4.2/4.3/4.4 regression] Trouble with static
 const data members in template classes

jason at gcc dot gnu dot org wrote:

 I don't think the testcase in comment #7 indicates a bug at all.

FWIW, I concur with your analysis.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26266



[Bug c++/14179] [4.2/4.3/4.4 Regression] out of memory while parsing array with many initializers

2009-02-23 Thread mark at codesourcery dot com


--- Comment #53 from mark at codesourcery dot com  2009-02-23 16:11 ---
Subject: Re:  [4.2/4.3/4.4 Regression] out of memory while
 parsing array with many initializers

hubicka at gcc dot gnu dot org wrote:

 Perhaps explicitly freeing would be a good idea?

I certainly have no objection to explicitly freeing storage if we know
we don't need it anymore.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14179



[Bug c++/39242] [4.4 Regression] Inconsistent reject / accept of code

2009-02-19 Thread mark at codesourcery dot com


--- Comment #10 from mark at codesourcery dot com  2009-02-19 16:41 ---
Subject: Re:  [4.4 Regression] Inconsistent reject / accept
 of code

rguenth at gcc dot gnu dot org wrote:

 The ultimate question is of course if the standard allows (or even requires)
 an error here.

The (somewhat old) C++ WP I have is pretty clear:

An explicit instantiation declaration that names a class template
specialization has no effect on the class template specialization
itself (except for perhaps resulting in its implicit instantiation).
Except for inline functions, other explicit
instantiation declarations have the effect of suppressing the implicit
instantiation of the entity to which they refer. [ Note:
The intent is that an inline function that is the subject of an explicit
instantiation declaration will still be implicitly instantiated
when used so that the body can be considered for inlining, but that no
out-of-line copy of the inline function
would be generated in the translation unit. —end note ]

Here, "inline function" is of course the C++ definition thereof, i.e.,
functions declared inline or defined in the body of a class definition,
rather than outside the class.

What that means is that we *must not* implicitly instantiate things
declared extern template unless they are DECL_DECLARED_INLINE_P.  As a
consequence, at -O3, we cannot implicitly instantiate non-inline extern
template functions.

So, I think the first hunk in the patch is correct.  It needs a
comment, though, right above the DECL_DECLARED_INLINE_P check, to point
out that this is a restriction coming from the standard:

/* An explicit instantiation declaration prohibits implicit
instantiation of non-inline functions.  With high levels of
optimization, we would normally inline non-inline functions -- but we're
not allowed to do that for extern template functions.  Therefore, we
check DECL_DECLARED_INLINE_P, rather than possibly_inlined_p.  */

OK with that change.

I don't yet understand why the second hunk is required.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39242



[Bug c++/34397] [4.2/4.3/4.4 regression] ICE on invalid default template parameter

2009-02-09 Thread mark at codesourcery dot com


--- Comment #24 from mark at codesourcery dot com  2009-02-10 06:20 ---
Subject: Re:  [4.2/4.3/4.4 regression] ICE on invalid default
 template parameter

paolo dot carlini at oracle dot com wrote:

 Mark, can you have a closer look at the draft patch?  I'm still
 looking, but I don't think we can extract and commonize much code from
 grok_array_decl, unless we accept passing an in_parser flag from the
 callers, or using a function pointer; I can see only such rather ugly
 solutions...

You may well be right.  Your draft patch looks plausible.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34397



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with installed gcc

2009-02-06 Thread mark at codesourcery dot com


--- Comment #48 from mark at codesourcery dot com  2009-02-06 18:35 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with installed gcc

rob1weld at aol dot com wrote:

 One example is inherently derived from where we see it being set
 (wrongly), during "make -i check" _PRIOR_ to running "make install".
 We (some of us) can see that the Testsuite Results vary WILDLY
 (sometimes) depending on the Installed Gcc versus the Tested Gcc.

There is no perfect answer here.  Not setting GCC_EXEC_PREFIX is wrong
for some usage models.  Setting it may be wrong for your usage model.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with installed gcc

2009-02-06 Thread mark at codesourcery dot com


--- Comment #50 from mark at codesourcery dot com  2009-02-06 19:22 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with installed gcc

hjl dot tools at gmail dot com wrote:

 For most people, GCC_EXEC_PREFIX points to either a directory which
 doesn't exist or a different version of gcc.  Since GCC_EXEC_PREFIX
 may point to a directory which doesn't exist, it isn't really needed
 by "make check" for most people.  Only a very small percentage of gcc
 developers need GCC_EXEC_PREFIX for "make check".

I don't know how that's been measured.

Improving the heuristic by avoiding the doesn't-exist case might be a
fine option, as I stated previously.  If you really want to clean this
up, add some variable you can set on the make command-line, or at
configure-time, that allows the user to say which use case applies.

I don't see a lot of value in arguing over the default.  There's no
right answer; ergo, we need a switch.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug libstdc++/25191] exception_defines.h #defines try/catch

2009-02-02 Thread mark at codesourcery dot com


--- Comment #75 from mark at codesourcery dot com  2009-02-02 20:29 ---
Subject: Re:  exception_defines.h #defines try/catch

jason at gcc dot gnu dot org wrote:

 Since my suggested patch proved somewhat controversial, for 4.4 I'd like to
 fall back on the simpler solution that Howard proposed in the initial bug
 report; it is inappropriate for library headers to redefine keywords.

Makes sense to me.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25191



[Bug c++/38908] [4.4 regression] Unexplained 'anonymous' is used uninitialized in this function warning in cc1plus -m64

2009-02-01 Thread mark at codesourcery dot com


--- Comment #16 from mark at codesourcery dot com  2009-02-02 07:15 ---
Subject: Re:  [4.4 regression] Unexplained 'anonymous' is
 used uninitialized in this function warning in cc1plus -m64

rguenther at suse dot de wrote:

 Ok.  But, as opposed to inheritance, inserting empty members seems to
 make a class non-empty:
 
 struct A {};
 struct B { A x; };

I'm surprised by that too, but the ABI definition is:

empty class

A class with no non-static data members other than zero-width
bitfields, no virtual functions, no virtual base classes, and no
non-empty non-virtual proper base classes.

Here, we do have a non-static data member that is not a zero-width
bitfield, so I guess this isn't an empty class.

So, CLASSTYPE_EMPTY_P would be a conservative approximation at present,
but we need a new bit to capture the broader thing that is desired here.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=38908



[Bug middle-end/38851] [4.4 regression] Compiler warns about uninitialized variable that is an object with a constructor

2009-01-25 Thread mark at codesourcery dot com


--- Comment #16 from mark at codesourcery dot com  2009-01-25 20:03 ---
Subject: Re:  [4.4 regression] Compiler warns about
 uninitialized variable that is an object with a constructor

rguenther at suse dot de wrote:

 Therefore, I don't think that the key here is zero-size.  Instead, it's
 the fact that the structure cannot be initialized.  That's useful both
 for warnings and for optimization; it can't be initialized, so there's
 no point in warning about uninitialized uses, and there's no reason to
 actually generate code for the copies.
 
 Ok, I think mapping "cannot be initialized" to zero-size is ok, as
 that is the only thing we can currently query (and we even specialize
 this for C++ to deal with the 1-byte vs. empty case).

Yes, I think it's OK to approximate "logically empty" by zero-size at
present.  It might be worth either changing the zero-size
documentation/name to reflect that it means "logically empty" (if we
think these are the same concept) or else defining a separate
LOGICALLY_EMPTY_P predicate (implemented by checking for zero size) as a
hedge against separating them (if we think they are usefully distinct
concepts).

 It's a P1 defect as we didn't warn for uninitialized structure
 uses in any previous release.  While we can argue that it is safe
 to downgrade this to P2, I think we should at least try to fix this
 issue for 4.4.0.

I don't mind fixing it, of course, and it would certainly be better to
do so.  But, at the end of the day, if everything else is ready, I'd be
opposed to holding up the release for this.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=38851



[Bug inline-asm/33932] miscalculation of asm labels with -g3

2008-12-29 Thread mark at codesourcery dot com


--- Comment #22 from mark at codesourcery dot com  2008-12-29 23:48 ---
Subject: Re:  miscalculation of asm labels with -g3

stsp at users dot sourceforge dot net wrote:

 Can this possibly be solved by emitting
 a warning if the asm in global scope is
 used with -ffunction-sections?

I think the generalization of Steven's point is that we can't really
know what section the user's assembly code should go in: text, data, or
something else, and therefore we'd better depend on the user to tell us.
 I still think it would be an OK idea to try to reduce the chances of
something bad happening -- and the inconsistency between -g levels -- by
popping back from the debug section, but the fundamental point is that
if the user wants full robustness they need to say what section in which
to put the assembly code.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33932



[Bug c++/34269] [4.2/4.3/4.4 regression] Incomplete __decltype/__typeof expressions accepted

2008-11-11 Thread mark at codesourcery dot com


--- Comment #6 from mark at codesourcery dot com  2008-11-11 20:09 ---
Subject: Re:  [4.2/4.3/4.4 regression] Incomplete __decltype/__typeof
 expressions accepted

jason at redhat dot com wrote:

 This seems right to me.  It's even what the comment at the top of the
 file says we do:
 
   Then, while we attempt to parse the construct, the parser queues up
   error messages, rather than issuing them immediately, and saves the
   tokens it consumes.  If the construct is parsed successfully, the
   parser commits, i.e., it issues any queued error messages and
   the tokens that were being preserved are permanently discarded.
 
 The simulate_error business only works for parse errors that indicate
 that this line of parsing won't work; it doesn't work for code that
 parses fine, but violates semantic rules and therefore needs an error.

I forgot that comment was still there.  I think it's a lie, reflecting
an earlier implementation state.  I found queuing up the messages to be
really difficult.

For a syntactically broken construct, we can just issue the error and
commit to the tentative parse at that point.  I believe we do that in
some other places.  It doesn't matter what top-level construct
(declaration or expression-statement) we might be looking at; something
like "__decltype( ;" is always invalid.  Once you see "__decltype (",
if the parsing of the operand to decltype fails, we can commit to the
current tentative parse, issue the error, and move on.

However, I think the core bug here may be that the code you mention in
cp_parser_simple_declaration doesn't check to see if the parse has
already failed.  Committing to the tentative parse is reasonable in that
situation if the parsing has succeeded thus far -- but if we've actually
hit a *parse* error, rather than a *semantic* error, we could safely
give up.

That will result in trying to parse the decltype again (now as an
expression statement), and we'll get an error that time.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34269



[Bug java/37068] [4.4 Regression] libgcj linkage failure: Incorrect library ABI version detected

2008-11-03 Thread mark at codesourcery dot com


--- Comment #23 from mark at codesourcery dot com  2008-11-04 05:51 ---
Subject: Re:  [4.4 Regression] libgcj linkage failure: Incorrect
 library ABI version detected

aph at gcc dot gnu dot org wrote:

 It's quite likely that the Java FE should not be calling
 cgraph_build_static_cdtor(), but when that call is removed some test
 cases fail.  Rather than arguing what priority this should be, all we
 need is someone who actually understands cgraph_build_static_cdtor(),
 and can tell me when it should be called.

You shouldn't call that function.  Instead, you should set
DECL_STATIC_{CONSTRUCTOR,DESTRUCTOR}.  Then, cgraph will do the right
thing.  If necessary, you can also call decl_init_priority_insert.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37068



[Bug debug/33429] debug info for class2 in g++.dg/other/unused1.C requires -femit-class-debug-always

2008-10-16 Thread mark at codesourcery dot com


--- Comment #11 from mark at codesourcery dot com  2008-10-16 20:37 ---
Subject: Re:  debug info for class2 in g++.dg/other/unused1.C
  requires -femit-class-debug-always

jason at redhat dot com wrote:

 It seems to me that you're arguing that -femit-class-debug-always should 
 go back to being on by default; its only effect is to control this exact 
 optimization.

If that's the only effect, then, yes, I guess that's what I'm arguing.

 Does anyone have some recent numbers?

That would certainly be helpful.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33429



[Bug debug/33429] debug info for class2 in g++.dg/other/unused1.C requires -femit-class-debug-always

2008-10-15 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2008-10-15 22:51 ---
Subject: Re:  debug info for class2 in g++.dg/other/unused1.C
  requires -femit-class-debug-always

jason at redhat dot com wrote:

 But, I think it's odd if I'm in the debugger, looking at code that says:

   return (X*)y;

 if I can't say print (X*)y.

 If the type is coming from a library, we may not ever create objects of this
 type.
 
 If the Xes are created in the library, the library should have the debug 
 info we need.

That assumes a friendly library distributor. :-)

The library is provided to us in binary form and stripped, and if it
does have debug info it might not have come from GCC.  But, if it's
declared in a header, we can still provide debug info.

 Finally, we use vast amounts of space in object files for debug info, since 
 we
 emit the same debug info in multiple object files.  Trying to optimize by not
 emitting debug info in this case doesn't seem likely to be a big win given 
 our
 overall strategy.  I don't have any data to support that claim, though.
 
 I'm not sure what overall strategy you mean.  We try to avoid emitting 
 the same info in multiple places when possible: we try to treat the 
 debug info for classes as another vague linkage entity and put it with 
 the vtable.

OK, my statement was overly strong.  I was thinking particularly of C++
templates, where the vague linkage strategy makes for lots of copies,
both in the object files, and, because we don't use COMDAT, in the final
binaries.  In that kind of C++ code, this optimization doesn't save a
significant percentage of space.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33429



[Bug libstdc++/25191] exception_defines.h #defines try/catch

2008-09-24 Thread mark at codesourcery dot com


--- Comment #57 from mark at codesourcery dot com  2008-09-24 13:03 ---
Subject: Re:  exception_defines.h #defines try/catch

jason at gcc dot gnu dot org wrote:
 --- Comment #55 from jason at gcc dot gnu dot org  2008-09-23 20:43 
 ---
 It seems reasonable to me for try { X } catch... to mean X when
 -fno-exceptions.  We don't need to error except on throw.

We have to be careful, in some cases.  For example:

  extern int f();

  template <typename T>
  struct S {
    static int i;
  };
  template <typename T>
  int S<T>::i = f();

  int main() {
    try {
      return 0;
    } catch (...) {
      return S<int>::i;
    }
  }

This program, IIRC, is guaranteed to call f, as a side-effect of the
presence of the catch-clause?  Of course, the C++ FE could still process
the catch clause; my only point is that we cannot literally just throw
away the catch clause.

I don't object to -fno-exceptions silently discarding catch clauses,
as long as we avoid the kind of problem above.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25191



[Bug testsuite/36087] [4.4 Regression] test failures between revs. 134696 and 134717

2008-08-08 Thread mark at codesourcery dot com


--- Comment #8 from mark at codesourcery dot com  2008-08-08 23:40 ---
Subject: Re:  [4.4 Regression] test failures between
 revs. 134696 and 134717

janis at gcc dot gnu dot org wrote:
 --- Comment #7 from janis at gcc dot gnu dot org  2008-08-08 23:34 ---
 Mark, the tests started failing because -fdump-rtl-loop2 used to produce dump
 files for all loop2_* passes.  The compiler could be fixed to do that again, 
 or
 the tests mentioned here could be changed to use -fdump-rtl-loop2_unroll and
 -fdump-rtl-loop2_invariant instead of -fdump-rtl-loop2.  I was expecting
 someone to fix the compiler after Andrew complained about the change, but I
 wouldn't mind modifying the tests and closing this PR.

OK, I understand much better now, thanks!  Do we have a separate PR open 
for the problem as well, or shall I retitle this one to more accurately 
reflect the problem?

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36087



[Bug c++/36797] ICE on SFINAE and __is_empty

2008-07-14 Thread mark at codesourcery dot com


--- Comment #7 from mark at codesourcery dot com  2008-07-14 15:28 ---
Subject: Re:  ICE on SFINAE and __is_empty

sebor at roguewave dot com wrote:

 My preference would be for gcc to avoid imposing restrictions on the use
 of these helpers to facilitate portability to other compilers such as EDG
 eccp (the latest 3.10.1 compiles the test case correctly).

How does it mangle it?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36797



[Bug c++/36797] ICE on SFINAE and __is_empty

2008-07-14 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2008-07-14 16:53 ---
Subject: Re:  ICE on SFINAE and __is_empty

sebor at roguewave dot com wrote:

 int foo<A<0> >(B<A<0>, __is_empty (A<0>)>::X*):
 _Z3fooI1AILi0EEEiPN1BIT_Xv19builtin16TOS3_EE1XE
 
 int foo<int>(B<int, !__is_empty (int)>::X*):
 _Z3fooIiEiPN1BIT_Xntv19builtin16TOS1_EE1XE

OK.  I don't see anything inherently wrong with that mangling, though of 
course if we're going to make this standard, we need EDG's table of 
builtins (so we known which ones are which), and we need to specify 
semantics for each of the builtins so that we know that we can mix 
object files between different compilers.  (No good if G++'s __is_empty 
is somehow subtly different than EDG's __is_empty.)

So, I think the high-order issues here are still:

(1) Do we need a mangling?  (I know you think we do.)
(2) If so, do we want to specify it at the ABI level, or use something 
G++-specific?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36797



[Bug c++/36633] [4.4 regression] warning array subscript is below array bounds on delete [] with -O2, -Wall

2008-07-10 Thread mark at codesourcery dot com


--- Comment #14 from mark at codesourcery dot com  2008-07-10 14:58 ---
Subject: Re:  [4.4 regression] warning array subscript is
 below array bounds on delete [] with -O2, -Wall

rguenther at suse dot de wrote:

 Can the FE mark this array-access with TREE_NO_WARNING?  Or is it not
 in array_ref form?

In general, the FE cannot do that; the array might have (say) 128-byte 
elements, but there will still only be (say) 8 bytes for the cookie.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36633



[Bug c++/36760] Simple std::bind use causes warnings with -Wextra

2008-07-09 Thread mark at codesourcery dot com


--- Comment #13 from mark at codesourcery dot com  2008-07-09 19:08 ---
Subject: Re:  Simple std::bind use causes warnings with -Wextra

bangerth at dealii dot org wrote:
 --- Comment #10 from bangerth at dealii dot org  2008-07-09 17:04 ---
 (In reply to comment #8)
 I was also trying to raise the issue of whether we think the warning is 
 useful.
  If it's not practical to avoid the warning in the library, then I wonder if
 it's practical to avoid it other generic-programming code.
 
 I agree with this. As I mentioned in PR 30601, code like this
 
   template <typename T> class ArrayView {
 T operator();
 T operator() const;
   };
 
 is quite common and I don't see a need to make it more complicated than
 necessary just for a warning.

Me neither.  I think writing:

   const int f();

or:

   template <typename T>
   const int f(T);

is probably worth warning about, but maybe we ought to just skip this 
warning when instantiating a template function.  In other words, warn at 
the point of original declaration of the template if it is already 
obviously meaningless at that point to add the cv-qualifier, but not 
warn at instantiation.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36760



[Bug c++/36633] [4.4 regression] warning array subscript is below array bounds on delete [] with -O2, -Wall

2008-07-09 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2008-07-10 03:42 ---
Subject: Re:  [4.4 regression] warning array subscript is
 below array bounds on delete [] with -O2, -Wall

paolo dot carlini at oracle dot com wrote:

 Mark, could you possibly comment on this PR? With some good hints I could even
 try to work on it...

I don't see that the C++ front-end is doing anything obviously wrong 
here.  The cast to (long unsigned int *) is coming from the presence of 
the array cookie.  When we allocate an array, we allocate a few extra 
bytes and scribble the length of the array into that extra space.  Then, 
when the user does delete[] we know how many array elements there are, 
so we can run all the destructors.  See:

  http://www.codesourcery.com/public/cxx-abi/abi.html#array-cookies

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36633



[Bug c++/36760] Simple std::bind use causes warnings with -Wextra

2008-07-08 Thread mark at codesourcery dot com


--- Comment #6 from mark at codesourcery dot com  2008-07-08 16:32 ---
Subject: Re:  Simple std::bind use causes warnings with -Wextra

paolo dot carlini at oracle dot com wrote:

 Thanks Tom. In fact, yesterday I was writing without remembering my past
 analyses of this type of issue, with system header warnings not suppressed:
 TREE_NO_WARNING is *not* generically used for that. Everything boils down to
 DECL_IN_SYSTEM_HEADER on the decl instead.

Why is it reasonable for a libstdc++ header to return a cv-qualified 
type, but not for user code to do so?

In general, the system-header hack is to work around things we don't 
control; we need to accept weird code in stdio.h because the OS 
distributor controls that, not us.  But, if it's not practical for 
libstdc++ to avoid this warning, then it's probably not practical for 
users to avoid it either, so then I wonder how beneficial the warning 
is.  On the other hand, I'd expect that libstdc++ could just avoid the 
warning, by using a traits class to strip the cv-qualifier?
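The traits-class idea mentioned at the end could look roughly like this. It is written in the C++98 style libstdc++ targeted at the time; the names are illustrative, and modern code would simply use `std::remove_cv`:

```cpp
// A minimal cv-stripping trait, similar in spirit to what the standard
// later provided as std::remove_cv.
template <typename T> struct strip_cv                   { typedef T type; };
template <typename T> struct strip_cv<const T>          { typedef T type; };
template <typename T> struct strip_cv<volatile T>       { typedef T type; };
template <typename T> struct strip_cv<const volatile T> { typedef T type; };

// A call wrapper can declare its result as the stripped type, so a
// cv-qualified return type never appears in the instantiated signature.
template <typename T>
typename strip_cv<T>::type unwrap(const T& v) { return v; }
```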

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36760



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with installed gcc

2008-06-09 Thread mark at codesourcery dot com


--- Comment #17 from mark at codesourcery dot com  2008-06-09 21:16 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with installed gcc

hjl dot tools at gmail dot com wrote:
 --- Comment #16 from hjl dot tools at gmail dot com  2008-06-09 14:16 
 ---
 (In reply to comment #9)
 I suspect that if you remove the setting in site.exp you will break the 
 following scenario:

 1. User puts libraries/headers in $prefix/{lib,include}
 
 I am not convinced it is the right thing to do. What
 are those libraries/headers? Are they from gcc? If yes,
 you don't need to do it. If not, can you use --sysroot
 to handle non-gcc libraries/headers?

In general, no, these are not from GCC.  They're probably from your C 
library -- which might not be GLIBC or Newlib, of course.  And, they 
probably include your installed assembler and linker -- which might not
be from GNU binutils, of course.

I don't know if --sysroot might be a solution.  Historically, I believe 
the scenario I put forth has worked, so you are going to break people's 
test methodology.  Maybe there is some solution that involves changing 
the compiler flags used in site.exp (like by adding --sysroot, or -B 
options, or something) so that you don't need to set GCC_EXEC_PREFIX.

But, I think that's going to be complicated.  That's why I think the 
right thing to do is to set up HOSTCC to be robust.  Like having the 
command to run default to:

   unset GCC_EXEC_PREFIX && gcc

rather than just:

   gcc


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with installed gcc

2008-06-09 Thread mark at codesourcery dot com


--- Comment #19 from mark at codesourcery dot com  2008-06-10 00:38 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with installed gcc

hjl dot tools at gmail dot com wrote:

 They sound to me the ideal usage for --sysroot. They aren't from
 gcc and they don't change from one gcc version to another one.
 You can use one --sysroot for gcc 4.1, 4.2, 4.3, 4.4, ...
 --sysroot supports libraries and headers.  Does it support
 assembler and linker?

Not as far as I know; --sysroot is about the target, not the host.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with installed gcc

2008-06-09 Thread mark at codesourcery dot com


--- Comment #21 from mark at codesourcery dot com  2008-06-10 05:02 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with installed gcc

hjl dot tools at gmail dot com wrote:

  --sysroot supports libraries and headers.  Does it support
 assembler and linker?
 Not as far as I know; --sysroot is about the target, not the host.
 
 So setting GCC_EXEC_PREFIX is to support make check using
 non-system assembler and linker with gcc for a target which
 probably isn't a GNU target/OS. Am I correct?

Or which *is* a GNU target/OS, but isn't using an in-tree build of all 
the components -- like, for example, if you already have good versions 
of the cross tools around.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with uninstalled gcc

2008-06-08 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2008-06-08 20:23 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with uninstalled gcc

hjl dot tools at gmail dot com wrote:

 How does gcc search the right paths when GCC_EXEC_PREFIX points
 to non-existent directory because gcc isn't installed? Even if
 there is a GCC_EXEC_PREFIX directory, it could be a very old
 gcc installation and you may search very old files, instead of
 the current ones, which are just built, but not installed yet.

I don't remember all of the details of these changes.

However, the compiler historically searched the configured libdir no 
matter what.  This problem with having random old stuff in the place 
where you're going to be installing the new compiler is not new.  People 
wanted that behavior for in-tree testing so that if you've already put a 
new libc in libdir the compiler you're testing can find it.

I suspect that if you remove the setting in site.exp you will break the 
following scenario:

1. User puts libraries/headers in $prefix/{lib,include}
2. User builds GCC with corresponding --prefix option
3. User runs make check


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with uninstalled gcc

2008-06-08 Thread mark at codesourcery dot com


--- Comment #11 from mark at codesourcery dot com  2008-06-08 21:12 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with uninstalled gcc

hjl dot tools at gmail dot com wrote:

 I suspect that if you remove the setting in site.exp you will break the 
 following scenario:

 1. User puts libraries/headers in $prefix/{lib,include}
 2. User builds GCC with corresponding --prefix option
 3. User runs make check
 
 Can't we at least test if $(libdir)/gcc/ exists before setting it
 blindly?

That seems like it might work.

What is the effective default value of GCC_EXEC_PREFIX for the compiler 
being tested if we don't set the variable?

Also, have you tested my suggested change to HOSTCC?  If HOSTCC is 
another GCC then any environment variables that affect the compiler 
we're trying to test will also affect HOSTCC.  It seems to me that the 
best way to avoid that causing problems is to make sure that HOSTCC 
sets/unsets the environment variables it needs.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug testsuite/36443] [4.3/4.4 Regression]: HOSTCC doesn't work with uninstalled gcc

2008-06-08 Thread mark at codesourcery dot com


--- Comment #13 from mark at codesourcery dot com  2008-06-09 00:05 ---
Subject: Re:  [4.3/4.4 Regression]: HOSTCC doesn't work
 with uninstalled gcc

hjl dot tools at gmail dot com wrote:

 1. User puts libraries/headers in $prefix/{lib,include}
 2. User builds GCC with corresponding --prefix option
 3. User runs make check
 
 Do you have an example to show it doesn't work if
 GCC_EXEC_PREFIX isn't set.

Yes -- the scenario you quote above.  If you want to remove the setting 
of GCC_EXEC_PREFIX, you need to explain how that is going to work.

 That means we have to do it whenever HOSTCC is used, including new
 and old tests. I don't think it is the right fix, given that no one
 has shown GCC_EXEC_PREFIX really has to be set here.

In order to properly control the test environment for the compiler just 
built, all environment variables used by the compiler being tested 
should be explicitly set or cleared.  Otherwise, the behavior of the 
tests will depend on things set in the user's environment, possibly for 
their /usr/bin/gcc, which clearly makes no sense.

Unless you can find a way to localize those environment changes only to 
the tested compiler (by setting/restoring them around every call to the 
compiler being tested for example), HOSTCC must set/clear all the 
environment variables that it uses.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36443



[Bug c++/35368] [4.1/4.2/4.3/4.4 Regression] With #pragma visibility, `vtable for __cxxabiv1::__class_type_info' is emitted as a hidden-visibility relocation

2008-02-26 Thread mark at codesourcery dot com


--- Comment #10 from mark at codesourcery dot com  2008-02-26 17:57 ---
Subject: Re:  [4.1/4.2/4.3/4.4 Regression] With #pragma visibility,
 `vtable for __cxxabiv1::__class_type_info' is emitted as a hidden-visibility
 relocation

benjamin at smedbergs dot us wrote:
 --- Comment #8 from benjamin at smedbergs dot us  2008-02-26 17:25 ---
 Yes, to make it clear: the class typeinfo object may have hidden visibility...
 it's the __cxxabiv1::__class_type_info class that should have default
 visibility always.

Oh, I see!  Yes, __cxxabiv1::* should definitely have default visibility.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35368



[Bug c++/34950] [4.2/4.3 Regression] ICE in svn boost math toolkit

2008-02-18 Thread mark at codesourcery dot com


--- Comment #15 from mark at codesourcery dot com  2008-02-19 06:15 ---
Subject: Re:  [4.2/4.3 Regression] ICE in svn boost math toolkit

rguenth at gcc dot gnu dot org wrote:
 --- Comment #14 from rguenth at gcc dot gnu dot org  2008-02-12 23:19 
 ---
 It looks like simply deleting from dependent_type_p:
 
    /* If there are no template parameters in scope, then there can't be
       any dependent types.  */
    if (!processing_template_decl)
      {
        /* If we are not processing a template, then nobody should be
           providing us with a dependent type.  */
        gcc_assert (type);
        gcc_assert (TREE_CODE (type) != TEMPLATE_TYPE_PARM);
        return false;
      }
 
 fixes the testcase - so we are probably not setting processing_template_decl
 correctly(?).  Or is it even correct and the check in the context of
 the caller make_typename_type is simply bogus?

We definitely don't want to delete that from dependent_type_p; it's a 
vital optimization.  I think the usage in make_typename_type is correct; 
when CONTEXT is a dependent type, we have to make a real TYPENAME_TYPE; 
when it's not, we can figure out what's being referenced immediately.

What does the stack trace look like at the point we're crashing?  What 
typename type are trying to simplify?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34950



[Bug c++/28879] [4.0/4.1/4.2/4.3 regression] ICE with VLA in template function

2008-02-13 Thread mark at codesourcery dot com


--- Comment #8 from mark at codesourcery dot com  2008-02-13 18:18 ---
Subject: Re:  [4.0/4.1/4.2/4.3 regression] ICE with VLA in
 template function

jason at gcc dot gnu dot org wrote:

 Either value_dependent_expression_p needs to handle arbitrary VLA bounds, or 
 we
 need to avoid calling it if the array bound is not a constant-expression.

Exactly so.  My intent when writing v_d_e_p was that it only be called 
with constant-expressions.  (The whole idea of value-dependent 
expressions in the standard is predicated on them being 
constant-expressions; the goal is to partition constant-expressions 
which whose value is known before template substitution from those whose 
values is not known until later.)

So, I suggest that we take the second approach.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28879



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-22 Thread mark at codesourcery dot com


--- Comment #38 from mark at codesourcery dot com  2008-01-22 17:47 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

jason at gcc dot gnu dot org wrote:

 However, this runs into problems with libstdc++.  In particular,
 std::complex<double> has a constructor from double and also a constructor
 from __complex__ double.  Making the change in this patch makes that
 conversion ambiguous because now std::complex<double>(1) can go via either the
 __complex__ double constructor or the plain double constructor.
 
 It seems clear to me that conversion to complex should be worse than 
 conversion
 to another scalar arithmetic type.  I would implement this in hypothetical
 standardese by defining complex conversions for the conversion from scalar 
 to
 complex, and the term scalar arithmetic conversions for integer, float and
 integer-float conversions, then adding to 13.3.3.2p3 an additional rule that 
 S1
 is better than S2 if S1 is a scalar arithmetic conversion and S2 is a complex
 conversion.

Yes, that would probably work.  I would prefer to avoid a whole new
class of conversions, and it doesn't seem necessary to me, since I still
don't understand what Gaby is worried about.  But, it does seem like a
technically feasible solution if absolutely necessary.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/33984] [4.2/4.3 Regression] bit-fields, references and overloads

2008-01-20 Thread mark at codesourcery dot com


--- Comment #6 from mark at codesourcery dot com  2008-01-20 20:28 ---
Subject: Re:  [4.2/4.3 Regression] bit-fields, references and
 overloads

aoliva at gcc dot gnu dot org wrote:
 --- Comment #5 from aoliva at gcc dot gnu dot org  2008-01-17 18:01 
 ---
 Created an attachment (id=14959)
 -- (http://gcc.gnu.org/bugzilla/attachment.cgi?id=14959&action=view)
 Slight revision of Jakub's patch that fixes the regression
 
 Getting built-in candidate functions not to use the bit-field types fixed the
 regression Jakub noticed in his patch, and keeps the progression in place.  
 I'm
 almost done testing this, and I'll post it to gcc-patches then.

Thank you for working on this.  I think this is the right idea, but I
have two comments:

* In add_builtin_candidates, I think you can just do:

  argtypes[i] = unlowered_expr_type (lvalue_type (args[i]));

* In layout_class_type, I understand that you're trying to preserve
cv-qualification.  I don't see a test case for that, though.  If there's
a bug you're fixing here, let's have a test case for it.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33984



[Bug libstdc++/33831] [4.3 Regression] Revision 129442 breaks libstc++ API

2008-01-16 Thread mark at codesourcery dot com


--- Comment #25 from mark at codesourcery dot com  2008-01-16 22:27 ---
Subject: Re:  [4.3 Regression] Revision 129442 breaks
 libstc++ API

bkoz at gcc dot gnu dot org wrote:

 I believe there is a bit of a bias here, in that it's OK to make FE changes,
 but even well-documented and warned lib changes are not ok? What's up with
 that? I assert the right to make API changes, including removal of deprecated
 items.

No, as I've said before, I think the C++ maintainers -- mostly me! --
were just plain wrong about some of the changes made.

I think that some of the changes that were made were necessary because
they were the only way to increase our ability to accept correct,
conformant code.  In other words, sometimes we had to choose between
backwards-compatibility and correctness.  In those cases, I think we
were right to choose correctness.

In other cases, we could have kept compatibility, but didn't.  In some
of those cases, I think we -- and again, by we I mostly mean I! --
didn't try hard enough to keep backwards compatibility.  We're being
punished for that.  (Note that there's a recent discussion about making
things that are errors by default into warnings by default -- thereby
making the compiler more lenient.)

My -- possibly incorrect -- understanding is that in this case the
problem with the old headers is not that it prevents implementation of
an ISO-conformant C++ library,  but just that they're a pain to keep
around.  My feeling is that we should be very reluctant to break
backwards compatibility for maintenance reasons.

Fedora (or any other GNU/Linux distribution) packages are not a good
measure of the problem.  They're a good sample of free and/or open-source
codebases compiled with G++, but they're a poor sample of all code
compiled with G++.  We see a lot of dusty-deck C++ code.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33831



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-07 Thread mark at codesourcery dot com


--- Comment #34 from mark at codesourcery dot com  2008-01-07 16:17 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 | What's the likely change?
 
 Ban implicit narrowing conversions, in the sense that a round trip will not
 give the same value back. 

Which direction is narrowing, between int and float?  (Both have
values unrepresentable in the other, of course.)  Would you please give
an example of how this change, together with the new constructors, would
make some program behave differently than the standard says it should?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-07 Thread mark at codesourcery dot com


--- Comment #36 from mark at codesourcery dot com  2008-01-08 03:39 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 |  (Both have
 | values unrepresentable in the other, of course.)  Would you please give
 | an example of how this change, together with the new constructors, would
 | make some program behave differently than the standard says it should?
 
 Please see the details in the proposal put forward by BS, me, and JSA titled
 `initializer list' (post Toronto meeting), and the recent `rationale'
 paper by BS in the mid-term mailing.  Look for the section or word 
 `narrowing'.

I don't know where to find those things, unfortunately.  Do you have a URL?

Would you please provide an example of how:

  complex<float> {
    Complex (int i) : real_ (i) {};
    Complex (float f) : real_ (f) {};

    float real_;
    float imag_;
  };

would be different than just:

  complex<float> {
    Complex (float f) : real_ (f) {};

    float real_;
    float imag_;
  };

when passed an int?  I'm having a hard time seeing how making the
conversion in the constructor would be different than making it at the
call site, whether or not the argument is a constant.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-06 Thread mark at codesourcery dot com


--- Comment #24 from mark at codesourcery dot com  2008-01-06 21:06 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 | I'm not sure what you mean by that.  It's a public constructor;
 
 I mean that it is not a standard constructor, and it is not a
 constructor I documented as a GNU extension.  The fact that it is a
 public constructor is not, by itself, a documentation that it is a
 standard constructor or a constructor that users should use.

But, it's also not documentation that users should *not* use it.  And,
now it's been out there for a long time, so it's quite likely that some
users somewhere *are* using it.  The run-time library has various
extensions to the standard, and the way people use a run-time library is
partly to open its header files and use what they see.  I think we have
to accept that this is indeed an incompatible change and likely to
affect users.

That said, I do think it's reasonable to break backwards compatibility
here if we have no other choice.  Right now, we have this odd wart in
the language with our handling of __complex__ (treating it as a
non-arithmetic type) which causes other problems.  So, it's possible
that we have to choose between making an incompatible change to the
library and leaving the language wart -- and I think we're all agreed
that in that case we'd rather add the dummy parameter you suggest.

But, you've not shown that my suggestion of adding additional
constructors is detectable by users.  If it's not, then that would be a
better solution: it would allow us to avoid the incompatible change to
the library.  Of course, if adding constructors itself breaks
compatibility, then that's a powerful argument against my suggestion.
So far, all you've said is that it makes you nervous.  Does it actually
break something?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-06 Thread mark at codesourcery dot com


--- Comment #26 from mark at codesourcery dot com  2008-01-07 01:16 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 I would not bet money that nobody is using it.  However, that
 somebody is using something specifically non-standard and NOT a
 documented GNU extension.
 
 This situation is radically different from the one where the
 constructor would have been documented as a GNU extension.

It isn't different to the user.  This isn't quite the same situation as
fixing an accepts-invalid bug in the front end.  There, a user had no
reason at all to expect the code to be valid, and the only way to make
the compiler conform to the requirement to emit a diagnostic is to
reject the code -- or at least give a warning about it.  And I'd prefer
to warn rather than error where practical.

Imagine that you're a user.  You read about GNU __complex__ types in the
manual.  You write some code with them.  Then, you want to call some C++
functions that expect std::complex.  You look at the libstdc++ source
code, notice a constructor there that does what you want, and use it.
You upgrade to a new version of G++ and your code breaks.  I'm sure you
agree that this doesn't make you happy.

So, let's not try to argue that changing the constructor signature is
painless.  Instead, let's decide whether that's a better or worse
solution than adding more constructors.  As I said previously, if adding
more constructors is going to break something, then I agree that it's bad.

 We have no plan of how those new constructors will interact with
 future new additions.  Consequently, I'm very reluctant adding those
 constructors -- after all, these new single-parameter constructors are
 being suggested because of an ambiguity caused by adding a single-parameter
 constructor that did not exist (in the Standard) in the first place.

I don't understand this argument.  Do you mean a future addition to the
ISO C++ standard or to the GNU C++ library?  We control the latter, so
that doesn't seem like a problem.

Is it conceivable that ISO C++ will ever add a
complex<double>::complex(int) constructor that doesn't set the real part
to the value of the argument (converted to double), and the imaginary
part to zero?  I'm not involved in the standards process at this point,
but that would be amazing to me, both since that would change the
meaning of:

  complex<double>(3)

and since it would not conform to the usual mathematical notions of
projections of integers onto the complex plane.
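As a concrete illustration (a sketch, not from the thread) of the conversion semantics described above: constructing a std::complex<double> from an int converts the argument to double for the real part and zeroes the imaginary part.

```cpp
#include <cassert>
#include <complex>

// Sketch: the "usual mathematical projection" of an integer onto the
// complex plane, as std::complex implements it.
std::complex<double> from_int(int n) {
    return std::complex<double>(n);  // real part = n, imaginary part = 0
}
```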

What is the concern that you have?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-06 Thread mark at codesourcery dot com


--- Comment #30 from mark at codesourcery dot com  2008-01-07 07:44 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 | Is it conceivable that ISO C++ will ever add a
 | complex<double>::complex(int) constructor that doesn't set the real part
 | to the value of the argument (converted to double), and the imaginary
 | part to zero? 
 
 That isn't the issue.  My concern is whether ISO C++ will ever
 change conversion rules, say from integers to floats or doubles.  The
 answer is likely. 

What's the likely change?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-06 Thread mark at codesourcery dot com


--- Comment #31 from mark at codesourcery dot com  2008-01-07 07:48 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 But, as that hypothetical user, I would not have any ground to be unhappy.
 After all, it was code based on unfounded extrapolations.

I think this is a mistake.   Our documentation has never been good
enough for people to rely on the absence of documentation as meaningful.
 One of the most frequent complaints I get about GCC is that we break
existing code with every release.  Apparently, we do this much more
often than other compilers.

You're clearly not going to agree with me.  So be it.

Please ask your fellow libstdc++ maintainers what they think.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2008-01-04 Thread mark at codesourcery dot com


--- Comment #22 from mark at codesourcery dot com  2008-01-05 07:55 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at cs dot tamu dot edu wrote:

 |  I'd rather distinguish the constructor taking __complex__ by adding
 |  a dummy parameter:
 |  
 | enum _DummyArg { };
 | complex(__complex__ double __z, _DummyArg);
 | 
 | That will, however, break backwards compatibility for user programs (if
 | any) relying on the constructor.
 
 That isn't a concern because I never published that constructor as a
 contract in the interface of std::complex<double>.

I'm not sure what you mean by that.  It's a public constructor; how do
we know that there aren't users out there using it?  How would they have
known that they weren't supposed to use it?
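A toy model of the dummy-parameter idea under discussion; the names Tag and Value are hypothetical stand-ins for libstdc++'s _DummyArg and complex<double>. The point is only that the extra parameter selects an overload a user would otherwise reach by accident.

```cpp
#include <cassert>

// Hypothetical tag type: its sole purpose is overload selection.
struct Tag { };

struct Value {
    double re;
    explicit Value(double d) : re(d) { }    // ordinary constructor
    Value(double d, Tag) : re(d * 2.0) { }  // tag-selected overload,
                                            // unreachable by implicit conversion
};
```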


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug middle-end/32044] [4.3 regression] udivdi3 counterproductive, unwarranted use

2008-01-03 Thread mark at codesourcery dot com


--- Comment #39 from mark at codesourcery dot com  2008-01-04 04:43 ---
Subject: Re:  [4.3 regression] udivdi3 counterproductive,
 unwarranted use

fche at redhat dot com wrote:

 Downgrading to P4.  We seem to have consensus that this is [not] a GCC
 wrong-code bug.
 
 Yeah, it seems to be a mistaken expectation of -ffreestanding not to
 call libgcc.  Maybe a new option to that effect would help?

I don't think there's a practical, platform-independent way for GCC to
avoid calling libgcc.  On some platforms, it has to do that for pretty
basic operations.  I think we just need to accept that the libgcc API is
part of what's required by the compiler; you could imagine that it is
as-if weak definitions of these functions were emitted in every
assembly file by the compiler itself.
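A hedged sketch of the "as-if weak definitions" view: a freestanding project that must not link libgcc can provide its own definition of a helper the compiler may call. The name __udivdi3_sketch and the naive subtraction loop are illustrative only; libgcc's real __udivdi3 is an optimized 64-bit divide.

```cpp
// Fallback 64-bit unsigned division, standing in for libgcc's helper.
extern "C" unsigned long long
__udivdi3_sketch(unsigned long long n, unsigned long long d) {
    unsigned long long q = 0;
    while (n >= d) {  // assumes d != 0, as the real helper does
        n -= d;
        ++q;
    }
    return q;
}
```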


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32044



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2007-12-26 Thread mark at codesourcery dot com


--- Comment #18 from mark at codesourcery dot com  2007-12-26 21:19 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

gdr at gcc dot gnu dot org wrote:

 I'm very nervous about adding more constructors.
 I'd rather distinguish the constructor taking __complex__ by adding
 a dummy parameter:
 
enum _DummyArg { };
complex(__complex__ double __z, _DummyArg);

That will, however, break backwards compatibility for user programs (if
any) relying on the constructor.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug libstdc++/33831] [4.3 Regression] Revision 129442 breaks libstc++ API

2007-12-16 Thread mark at codesourcery dot com


--- Comment #15 from mark at codesourcery dot com  2007-12-17 04:34 ---
Subject: Re:  [4.3 Regression] Revision 129442 breaks
 libstc++ API

rguenther at suse dot de wrote:

 Now that we have ext/hash_map and ext/hash_set back (yes, SPEC2000
 eon still is broken, as it uses the removed iostream.h and other *.h
 headers - and it's impossible to fix without touching all of it)
 the issue isn't as pressing anymore.  Though still the question
 remains if we should break the libstdc++ API at all.

There seemed to be pretty good consensus that we shouldn't.  So far,
other than maintenance pain, I've not seen an argument for removing
things like iostream.h.  And, I think user pain should trump maintenance
pain in this case.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33831



[Bug middle-end/32044] [4.3 regression] udivdi3 counterproductive, unwarranted use

2007-11-27 Thread mark at codesourcery dot com


--- Comment #30 from mark at codesourcery dot com  2007-11-27 18:58 ---
Subject: Re:  [4.3 regression] udivdi3 counterproductive,
 unwarranted use

rguenth at gcc dot gnu dot org wrote:
 --- Comment #29 from rguenth at gcc dot gnu dot org  2007-11-27 09:43 
 ---
 This is IMHO at most a QOI issue - at Novell we mark timespec_add_ns's u64
 parameter as volatile to work around this issue.  I expect upstream to adopt
 a workaround as well.  Note that some targets in the kernel have parts of
 libgcc implemented, but i?86 misses at least __udivdi3.

I am not a kernel developer, but my feeling as a GCC developer is that
you must provide the entry points in libgcc whenever you are linking
code compiled with GCC.  In other words, that GCC should be free to use
functions from libgcc as it pleases.

Of course, it might be a GCC optimization bug to call __udivdi3; perhaps
it could generate more efficient code that doesn't call that function.

Do others agree?  That this is at most an optimization issue, but not a
correctness issue?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32044



[Bug middle-end/32044] [4.3 regression] udivdi3 counterproductive, unwarranted use

2007-11-27 Thread mark at codesourcery dot com


--- Comment #35 from mark at codesourcery dot com  2007-11-27 19:45 ---
Subject: Re:  [4.3 regression] udivdi3 counterproductive,
 unwarranted use

bunk at stusta dot de wrote:

 Even if this specific issue in the kernel would turn out as a misoptimization,
 the general problem would still remain waiting to pop up at some later time at
 a  different place.

Indeed.  However, I think that the kernel developers should be aware
that GCC is not designed to avoid libgcc functions.  GCC fundamentally
assumes that it may call various functions from libgcc as it pleases.
(Sometimes it may do so for good reasons, sometimes it may be that it
does it suboptimally.)

Because there's nothing in GCC to keep it from randomly calling libgcc
functions, if the kernel wants to be robust against different versions
of GCC, it should provide definitions of these functions.  There's no
easy way to make GCC avoid these functions.

That's not meant to defend GCC calling this particular function in this
particular circumstance, of course.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32044



[Bug target/33579] INIT_PRIORITY is broken

2007-11-01 Thread mark at codesourcery dot com


--- Comment #12 from mark at codesourcery dot com  2007-11-01 16:50 ---
Subject: Re:  INIT_PRIORITY is broken

danglin at gcc dot gnu dot org wrote:
 --- Comment #11 from danglin at gcc dot gnu dot org  2007-11-01 03:05 
 ---
 Mark,
 
 This is major progress.  All the priority tests pass and there are no
 regressions on hppa2.0w-hp-hpux11.11 and hppa-unknown-linux-gnu.
 
 However, I don't think the patch is quite right.  For example, in the
 gcc.dg/initpri1.c test, two identical routines for c1 are emitted:

I don't think that's actually a bug -- except maybe it's a
misoptimization.  The compiler's just inlining the calls to c1 from the
_GLOBAL_... functions due to code in record_cdtor_fn:

  node->local.disregard_inline_limits = 1;

I'm not sure that's a great idea -- especially with -Os! -- but I also
don't think it's related to the bug I introduced.

Do you agree?

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33579



[Bug target/33579] INIT_PRIORITY is broken

2007-10-29 Thread mark at codesourcery dot com


--- Comment #8 from mark at codesourcery dot com  2007-10-30 02:50 ---
Subject: Re:  INIT_PRIORITY is broken

dave at hiauly1 dot hia dot nrc dot ca wrote:

 I don't think this will be too hard to implement.  In
 cgraph_build_cdtor_fns, we need to partition/sort the static_[cd]tors by
 priority, and then pass each batch off to build_cdtor separately.  Do
 you want to work on this, or do you want me to do it?
 
 At the moment, I'm finding it more and more difficult to keep up with
 GCC issues.

No problem; it's my bug.  I'll work on a patch, and send it to you to
for testing -- I no longer have an HP-UX machine to work on.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33579



[Bug target/33579] INIT_PRIORITY is broken

2007-10-28 Thread mark at codesourcery dot com


--- Comment #6 from mark at codesourcery dot com  2007-10-28 22:46 ---
Subject: Re:  INIT_PRIORITY is broken

danglin at gcc dot gnu dot org wrote:

 With respect to initpr1.c, it can be seen that only one GLOBAL constructor,
 _GLOBAL__I_0_c1, and one GLOBAL destructor, _GLOBAL__D_1_c1, are created.
 These respectively call all the constructors and destructors.  The order of
 the calls is not sorted based on constructor priority, so the test fails.

I'm sorry to hear of this breakage.

 A global constructor visible to collect2 was output for each constructor/
 destructor priority (e.g., _GLOBAL__I$01000_foo).  These would call
 a static function, _Z41__static_initialization_and_destruction_0ii, with
 two arguments, construct/destruct and priority.  It would arrange to
 call the constructor/destructor for a given priority.  Collect2 sorts
 the GLOBAL cdtors in terms of priority.  The overall running of constructors
 and destructors is done using HP ld's +init and +fini arguments.

Right now, there are two primary cases for back ends: either they
support constructors and destructors (targetm.have_ctors_dtors is true,
and collect2's special handling is not required), or they don't
(targetm.have_ctors_dtors is false, and collect2 threads the _GLOBAL_*
functions together).

I believe you're correct that my changes broke the handling of
prioritized constructors in the case where we use collect2.  I didn't
realize that there were targets that did that before.

In order to fix this, I think the correct change would be to have
cgraphunit.c:cgraph_build_cdtor be smarter.  In particular, it should
build one function for each priority, rather than building one function
for everything.  Then, collect2 will work as before.

I don't think this will be too hard to implement.  In
cgraph_build_cdtor_fns, we need to partition/sort the static_[cd]tors by
priority, and then pass each batch off to build_cdtor separately.  Do
you want to work on this, or do you want me to do it?
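Not the GCC-internal fix itself, but the user-visible contract that the per-priority functions must preserve can be sketched with GCC's constructor(priority) attribute (lower priorities run first; values up to 100 are reserved for the implementation):

```cpp
// Records the order in which prioritized constructors fire before main().
int ctor_order[2];
int ctor_slot;

__attribute__((constructor(101))) void runs_first()  { ctor_order[ctor_slot++] = 1; }
__attribute__((constructor(102))) void runs_second() { ctor_order[ctor_slot++] = 2; }
```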

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33579



[Bug c++/19163] __attribute__((aligned)) not working in template

2007-09-06 Thread mark at codesourcery dot com


--- Comment #11 from mark at codesourcery dot com  2007-09-06 06:16 ---
Subject: Re:  __attribute__((aligned)) not working in template

jason at gcc dot gnu dot org wrote:
 --- Comment #10 from jason at gcc dot gnu dot org  2007-09-06 05:50 
 ---
 Vague references:
 
 http://gcc.gnu.org/ml/gcc-patches/2005-10/msg00247.html
 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=17743
 
 Seems like Mark and/or Nathan have/had state on this that they haven't shared
 with the lists.

I don't think so, but I'm not quite sure I understand.  Anyhow, I'm sure
I don't have any uncontributed patches for this, and I don't think
Nathan has either.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19163



[Bug libstdc++/31906] -Xcompiler is inserted after -Xlinker when building libstdc++

2007-07-15 Thread mark at codesourcery dot com


--- Comment #12 from mark at codesourcery dot com  2007-07-15 19:27 ---
Subject: Re:  -Xcompiler is inserted after -Xlinker
 when building libstdc++

pcarlini at suse dot de wrote:
 --- Comment #11 from pcarlini at suse dot de  2007-07-14 23:51 ---
 I advise against committing anything to 4_2-branch, at this time. In any case,
 we don't have a regression wrt 4.2.0 - only wrt 4.1.x if confirmed - and we
 have a workaround.

I agree.  I'd certainly like to see this fixed in 4.2.2 (or whatever the
version number ends up being for the release following 4.2.1) but I
don't think we should make this change for 4.2.1, at this very late date.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31906



[Bug c++/32232] [4.1 Regression] ICE in resolve_overloaded_unification

2007-07-12 Thread mark at codesourcery dot com


--- Comment #7 from mark at codesourcery dot com  2007-07-12 06:20 ---
Subject: Re:  [4.1 Regression] ICE in resolve_overloaded_unification

reichelt at gcc dot gnu dot org wrote:
 template<typename> struct A
 {
   A operator<<(void (*)(A));
 };
 
 template<typename T> A<T> operator<<(A<T>, const A<T>);
 
 template<typename T> void foo(A<T>);
 
 void bar()
 {
   A<int>() << (1, foo<int>);
 }

As I (thought I?  meant to?) said in the patch submission mail, I don't
think this is valid.  The syntactic form of the expression matters for
deduction purposes, and my reading of the standard is that only a
(generalized) identifier is allowed.  (It's not very clear, but I didn't
see anything to suggest that your code was valid.)  The version of the
EDG front end I had on hand agreed with me. :-)  You could certainly ask
on the core reflector, etc.

The issue about printing (0, ...) instead of (1, ...) is certainly a
bug, but a different one from this.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32232



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2007-07-08 Thread mark at codesourcery dot com


--- Comment #11 from mark at codesourcery dot com  2007-07-08 18:12 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

pcarlini at suse dot de wrote:
 --- Comment #10 from pcarlini at suse dot de  2007-07-07 22:57 ---
 (In reply to comment #9)
 Ah, thanks for finding the old PR.  In looking at the mail threads, I
 fail to find my magic solution. :-(  Do you have a pointer to it?
 
 Well, that PR is *closed as fixed*. Maybe at the time I didn't follow all the
 details and your eventual fix was only partial, in some sense? Certainly 21210
 is closed as fixed and we didn't add any constructor, contrary to some ideas
 temporarily envisaged in the discussion linked in Comment #3 therein.

I was confused by your crediting me with magic because it was Roger
Sayle who fixed the bug.  In any case, his fix was a specific hack for
converting zero to a complex type, not for the more general problem,
which has always remained unfixed.

I still think adding a few constructors is the best fix.  The only
situation where we have a problem is a class with constructors taking
both a type like double and a GNU __complex__ type.  GNU
__complex__ types are very rare in C++ programs; people use std::complex
in C++, and there is no problem in that situation. :-)

So, libstdc++ is the rare case.  Changing the library will give us very
natural semantics in the front end; we just declare GNU __complex__ to
be an arithmetic type, and everything else follows.  Absent direction
from the ISO C++ committee regarding integration of C99 complex into
C++, that seems like the best we can do.
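For reference, a small sketch of the GNU __complex__ extension being discussed (GCC/Clang-specific; __real__ and __imag__ read the components). Initializing from a real number sets the real part and zeroes the imaginary part, matching the arithmetic-type treatment proposed above.

```cpp
// GNU extension type, usable directly in C++ with GCC or Clang.
__complex__ double make_z() {
    __complex__ double z = 3.0;  // real part 3.0, imaginary part 0.0
    return z;
}
double real_part(__complex__ double z) { return __real__ z; }
double imag_part(__complex__ double z) { return __imag__ z; }
```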


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug c++/31743] [4.1/4.2/4.3 regression] ICE with invalid use of new

2007-07-07 Thread mark at codesourcery dot com


--- Comment #11 from mark at codesourcery dot com  2007-07-07 19:18 ---
Subject: Re:  [4.1/4.2/4.3 regression] ICE with invalid use
 of new

reichelt at gcc dot gnu dot org wrote:
 --- Comment #10 from reichelt at gcc dot gnu dot org  2007-07-07 10:59 
 ---
 Mark, is there any reason, you added the exectuable flag?
 If not, would you mind removing it?
 
 Propchange: trunk/gcc/testsuite/g++.dg/init/new20.C
('svn:executable' added)

I suspect that this is an accident of the file having been on a Cygwin
system at some point.  I've noticed that when I scp from a Cygwin
system, text files tend to get executable permissions.

If you will tell me how to remove the flag, I will take care of it.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31743



[Bug c++/31780] [4.2/4.3 regression] ICE with incompatible types for ?: with complex type conversion

2007-07-07 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2007-07-07 22:51 ---
Subject: Re:  [4.2/4.3 regression] ICE with incompatible types
 for ?: with complex type conversion

pcarlini at suse dot de wrote:
 --- Comment #8 from pcarlini at suse dot de  2007-07-07 22:44 ---
 Hi Mark. First, I can point you to C++/21210. In that occasion (see in
 particular Comment #3) we struggled with the issue quite a bit (if I remember
 correctly we tried to avoid adding constructors...) then you came up with a
 magic very simple solution! While I study a bit more the present issue maybe
 you can re-focus that old one... (thanks for involving libstdc++ this time 
 too)

Ah, thanks for finding the old PR.  In looking at the mail threads, I
fail to find my magic solution. :-(  Do you have a pointer to it?

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31780



[Bug tree-optimization/32527] [4.3 Regression] ICE in build2_stat, at tree.c:3074

2007-06-29 Thread mark at codesourcery dot com


--- Comment #5 from mark at codesourcery dot com  2007-06-29 19:29 ---
Subject: Re:  [4.3 Regression] ICE in build2_stat,
 at tree.c:3074

pinskia at gcc dot gnu dot org wrote:
 --- Comment #4 from pinskia at gcc dot gnu dot org  2007-06-29 19:27 
 ---
 Mark,
   Even though this has currently only showed up in Fortran code, I could make
 a C testcase where it fails.  The Fortran code looks simpler because
 arrays in Fortran are way simpler than in C.
 Do you want a C example to be able to mark this as a P1?

Yes.  Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32527



[Bug c++/32492] [4.3 Regression] attribute always_inline - sorry, unimplemented: recursive inlining

2007-06-26 Thread mark at codesourcery dot com


--- Comment #2 from mark at codesourcery dot com  2007-06-26 12:14 ---
Subject: Re:  [4.3 Regression] attribute always_inline  -
  sorry, unimplemented: recursive inlining

rguenth at gcc dot gnu dot org wrote:

 TYPE_ARG_TYPES says we want a char, but the call expression has an int.  I
 would say this is a C++ frontend bug?  Or is this somehow expected and we
 need to deal with this mismatch?

This is probably something that used to be considered OK, but is now
considered a C++ front-end bug.  For avoidance of doubt, I think that
the C++ front end should be changed so that the argument has type char.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32492



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-06-09 Thread mark at codesourcery dot com


--- Comment #176 from mark at codesourcery dot com  2007-06-09 19:29 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenth at gcc dot gnu dot org wrote:

 So, from my point of view the patch is ready to be exposed to more eyes.

The C++ bits are fine.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug c++/31809] [4.1/4.2/4.3 Regression] sometimes TREE_READONLY is still set for non read only variables causing wrong code

2007-05-31 Thread mark at codesourcery dot com


--- Comment #9 from mark at codesourcery dot com  2007-05-31 16:32 ---
Subject: Re:  [4.1/4.2/4.3 Regression] sometimes TREE_READONLY
 is still set for non read only variables causing wrong code

jakub at gcc dot gnu dot org wrote:

 2007-05-31  Jakub Jelinek  [EMAIL PROTECTED]
 
 PR c++/31806
 * decl.c (cp_finish_decl): Also clear was_readonly if a static var
 needs runtime initialization.

This patch is OK.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31809



[Bug c++/32158] uninitialized_fill compile failure if no default assignment operator

2007-05-30 Thread mark at codesourcery dot com


--- Comment #2 from mark at codesourcery dot com  2007-05-30 21:08 ---
Subject: Re:  uninitialized_fill compile failure if no default
 assignment operator

pcarlini at suse dot de wrote:
 --- Comment #1 from pcarlini at suse dot de  2007-05-30 21:00 ---
 Curious, this is actually a C++ front-end issue, a bug in my implementation of
 __is_pod: currently it just forwards to pod_type_p, in cp/tree.c, and
 apparently I was wrong to assume it exactly implements the Standard concept of
 POD-ness: it returns true for std::pair, which is *not* a POD. The problem is
 that std::pair isn't an aggregate type, thus cannot be a POD. I think I should
 just also check CP_AGGREGATE_TYPE_P, in order to fix that. Mark, can you
 confirm that? Thanks in advance.

pod_type_p is indeed intended to exactly implement the standard definition
of POD.  If it's giving the wrong answer, then we need to fix it.  There is
code to set CLASSTYPE_NON_POD_P for non-aggregates.  So, I'm not sure
what's going wrong, but we need to track it down.  Let me know if you
need help.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32158



[Bug c++/32158] uninitialized_fill compile failure if no default assignment operator

2007-05-30 Thread mark at codesourcery dot com


--- Comment #5 from mark at codesourcery dot com  2007-05-30 21:38 ---
Subject: Re:  uninitialized_fill compile failure if no default
 assignment operator

pcarlini at suse dot de wrote:
 --- Comment #3 from pcarlini at suse dot de  2007-05-30 21:25 ---
 Thanks Mark. In fact, we have already a test for that, in ext/is_pod.cc. But 
 we
 have a problem with templates. This:
 
   template<typename T>
 struct A
 {
   A() { }
 };
 
 has __is_pod(A<int>) true. Actually, the problem affects also other front-end
 traits, probably most of them :( :( They are not working correctly with
 templates. First blush, any hint where I should fix my implementation?

Perhaps you need to call complete_type in the traits implementation to
ensure that A<int> is completed, if it can be.  (Or, if __is_pod
requires, by the language standards, a complete type, you can use
complete_type_or_else.)
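A sketch of the trait question in the thread, using the __is_pod builtin (a GCC/Clang extension): an instantiation with a user-provided default constructor is not a POD, so the trait must complete (instantiate) A<int> to answer correctly.

```cpp
// Class template whose instantiations are never PODs.
template <typename T>
struct A {
    A() { }  // user-provided constructor: A<int> cannot be a POD
};

bool a_int_is_pod()     { return __is_pod(A<int>); }  // instantiates A<int>
bool plain_int_is_pod() { return __is_pod(int); }
```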


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=32158



[Bug c++/31809] [4.1/4.2/4.3 Regression] sometimes TREE_READONLY is still set for non read only variables causing wrong code

2007-05-30 Thread mark at codesourcery dot com


--- Comment #6 from mark at codesourcery dot com  2007-05-30 23:02 ---
Subject: Re:  [4.1/4.2/4.3 Regression] sometimes TREE_READONLY
 is still set for non read only variables causing wrong code

mueller at gcc dot gnu dot org wrote:
 --- Comment #5 from mueller at gcc dot gnu dot org  2007-05-30 22:46 
 ---
 is it okay that was_readonly will eventually turn on TREE_READONLY()
 afterwards?

I wondered about this too, but was_readonly is only set for
REFERENCE_TYPEs.  I'm not sure what happens with something like:

  int f() { return *new int; }
  int i = f();

Jakub, does the reference end up TREE_READONLY in that case?  If so, do
we have a problem there too?  I think we might, for something like:

  int f() { static int x; return x; }
  int i = f();
  int j = i;

The compiler must not reorder this to:

  j = i;
  i = x;

which I guess it might, since it might think that i never changes, and
therefore j can access its value at any point?

Of course, it's sad that we lose TREE_READONLY on references or const
variables, as we know they cannot change after initialization.  There's
no question that Jakub's change is going to make us generate inferior
code in some cases.

In fact, for the reference case, we know that i does not change in the
scope of any function except the static initialization function that
assigns the return value of f to i.

One possible way to get some of this performance back would be to avoid
marking references as clobbered by calls -- except to the static
initialization function, which is only called from .ctors, so we can
probably ignore that.  In other words, even though i is not
TREE_READONLY, treat it as such when computing what can clobber it.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31809



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #133 from mark at codesourcery dot com  2007-05-23 19:43 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

ian at airs dot com wrote:

 The case where TBAA is most useful is on a deeply pipelined in-order processor
 with multiple function units and a high latency memory cache.  One example
 I've worked on is an embedded VLIW processor with vector instructions.
 TBAA is of relatively little interest on an out-of-order processor.

The original motivating case for me was stuff like:

  void f (int *a, double *d) {
for (int i = 1; i < N; ++i) {
  a[i] += i;
  d[i] = d[i-1] * a[i];
}
  }

That's not the right code, but the point is that TBAA can allow us to
avoid reloading d[i-1] from one iteration to the next, despite the store
to a[i].  That reduces memory access and instruction count.  Ordinary
PTA does not allow this.

Of course, Gaby's memory model doesn't allow this optimization either;
we have to worry that a and d are both in some union somewhere.
That's why Gaby's model is so bad from an optimization point of view; it
makes the compiler assume a worst-case situation, even though that
worst-case situation almost never actually happens.

I'm not an expert on CPU models, so I'm not sure how out-of-order vs.
in-order might matter here.

 And I think we see the outlines of a
 successful patch: make placement new return a pointer which effectively 
 aliases
 everything.  That will permit us to reorder loads and eliminate dead stores. 
 It won't permit us to arbitrarily re-order loads and stores, but I'm skeptical
 that that will count as a severe penalty.

That's exactly what I think.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #136 from mark at codesourcery dot com  2007-05-23 20:10 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenth at gcc dot gnu dot org wrote:
 --- Comment #134 from rguenth at gcc dot gnu dot org  2007-05-23 19:54 
 ---
 But using a union for type-punning is a gcc extension (and of course the
 extension is only for access through the union), so with strict C99/C++
 semantics we can
 avoid reloading d[i-1] even if a and d were in the same union because the code
 would then be invalid.  

Gaby's claim, as I understand it, is that writing to a union member,
even through a pointer, rather than directly through the union, is
valid, and activates that part of the union.  So, it is not a GCC
extension.  For code like:

  a[i] = i;
  d[i] = d[i-1] + a[i];

I guess you can argue that a[i] does not alias d[i-1], even in Gaby's
model, because a[i] is written to right before the access to d[i-1].
But, you don't know that a[m] doesn't alias d[n] for arbitrary m and n.
 So, it's easy to create variations on the case I posted that can't be
optimized, if you agree to Gaby's model.
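A sketch of the union scenario being debated, assuming 32-bit int/float with IEEE 754 representation: reading a member other than the one last written is the documented GNU extension, and it works through direct member access. DR 236 concerns the separate question of whether arbitrary pointers must be assumed to alias via some union like this one.

```cpp
// int and float sharing storage; access goes *through the union*.
union U {
    int i;
    float f;
};

int punned_bits(float x) {
    U u;
    u.f = x;      // activate the float member
    return u.i;   // GNU extension: reinterpret the same bytes as int
}
```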

 So the union case is a non-issue here (it was only used to
 make available enough properly aligned storage for the particular testcase).

I agree that union case *should* be a non-issue in this context, where
we were discussing how to fix placement new, but Gaby has made it one
because he is claiming that placement new is not the only way to change
the dynamic type of memory.  Gaby's claim is that given an arbitrary
pointer p, saying:

 *(int *)p = 3;

is the same as saying:

 *(new (p) int) = 3;

That makes life for the optimizers much, much harder.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #140 from mark at codesourcery dot com  2007-05-23 21:07 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenth at gcc dot gnu dot org wrote:

 <quote>
 Gaby's claim is that given an arbitrary
 pointer p, saying:
 
  *(int *)p = 3;
 
 is the same as saying:
 
  *(new (p) int) = 3;
 
 That makes life for the optimizers much, much harder.
 </quote>
 
 I say so as well (that those are the same), but I don't agree that this
 makes life for optimizers much harder.

Placement new is rare; assignments to pointers are everywhere.

Note that the first case does not need to use an explicit cast.  In a
function:

  void f(int *p) {
*p = 3;
  }

under Gaby's interpretation, we cannot be sure that p points to an
int before this function, so we can't be sure the write to *p
doesn't clobber memory of other types.  TBAA is one of the few ways to
disambiguate pointers in the absence of whole-program optimization, and
this model really undermines TBAA.

Frankly, I'm surprised that you are taking this position.  This is a
change to the compiler that can only hurt high-performance C++
applications, which is an area I know you care about.  I know that
you're unhappy about how Ian's patches might hurt programs that use
placement-new in an inner loop, but this model will impose the same
penalties on programs that write to pointers in an inner loop.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #141 from mark at codesourcery dot com  2007-05-23 21:13 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

joseph at codesourcery dot com wrote:

 DR#236 http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_236.htm was 
 what eventually said for C that you don't need to worry about that; I'd 
 think the aim should be to get C++ to agree with that ruling.

Thank you for the pointer.  That seems directly on point, and makes C99
match the existing GCC practice: we don't need to worry that pointers
might point to unions.

Gaby, would you please forward that to the C++ reflector, so that the
reflector has that information as well?  They should be aware that the
model you're proposing is at odds with C99.

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #143 from mark at codesourcery dot com  2007-05-23 21:27 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenther at suse dot de wrote:

   void f(int *p) {
 *p = 3;
   }

 under Gaby's interpretation, we cannot be sure that p points to an
 int before this function, so we can't be sure the write to *p
 doesn't clobber memory of other types.  TBAA is one of the few ways to
 disambiguate pointers in the absence of whole-program optimization, and
 this model really undermines TBAA.
 
 In C inside the function f we can indeed be sure *p points to an int.

Not before the assignment to p.  In:

  double f(int *p, double *q) {
    double d = *q;
    *p = 3;
    return d;
  }

your interpretation does not allow moving the load of *q after the
store to *p.  That's clearly limiting the freedom of the optimizer.

Now, we can argue about how much that matters -- but it's inarguable
that it's a limitation.

 If you discount scheduling on in-order machines, what would be an
 optimization that can be no longer done with Gabys and my scheme?
 I believe there are none.  Also other compilers manage to not
 miscompile in the face of placement new but still optimize loops
 with them.

I'm lost.

What does Gaby's model have to do with placement new?

We're all agreed that (a) placement new can change the dynamic type of
memory, (b) therefore GCC currently has a bug, (c) we want the fix to
have as little optimization impact as possible.

Gaby's model says that we know less about dynamic types than we
presently think we do, because there might be a union out there
somewhere.  (Fortunately, as Joseph points out, C99 has already answered
this question.  Surely we can agree that making C99 and C++ different in
this respect is a bad idea.)

If *p = 3 changes the dynamic type of *p, that just means we know
even less.  The less we know, the less optimization we can do.  Making
*p = 3 change the dynamic type of *p can't possibly help us
implement placement new more efficiently.  Whatever conservative
assumptions we want to make about *p = 3 we could make about new (p)
int instead.

If you have a patch that fixes the placement new problem, making us
generate correct code, and with minimal/no impact on optimization,
that's great!  But, that can't possibly, in and of itself, be a reason
to change the rules we're using for TBAA.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-23 Thread mark at codesourcery dot com


--- Comment #146 from mark at codesourcery dot com  2007-05-23 22:13 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenther at suse dot de wrote:

 Only so much that we seem to agree on the semantics of placement new.
 Gaby extends this semantics to any store, so
 
   *p = X;
 
 is equivalent to
 
    *(new (p) __typeof__(*p)) = X;
 
 to which semantics we thus can agree (not to whether those two should
 be the same, mandated by the standard or liked by some of us or not).

I think I understand.  Let me just restate this, to make sure:

(a) Gaby's model makes the first assignment above equivalent to the second
(b) Thus, in Gaby's model, if we solve either case, we solve both.

I agree with that statement.  (I don't like the model -- but I agree
with the logic.)

 Making *p = 3 change the dynamic type of *p can't possibly help us
 implement placement new more efficiently.
 
 I disagree here.  Making *p = 3 change the dynamic type of *p will
 make the placement new issue moot - the current library implementation
 is fine then and we don't need any new explicit or implicit side-effects
 of it.
 
 Whatever conservative
 assumptions we want to make about *p = 3 we could make about new (p)
 int instead.
 
 True.  I say making them about *p = 3 is way easier as we are changing
 semantics of memory operations and *p = 3 is one, but placement new is 
 not.

I think I understand what you're saying here too; again, I'll restate to
make sure:

(a) In the model where *p = 3 changes the dynamic type of memory, we
don't need to do anything special to handle placement new.
(b) It's relatively easy to implement support for *p = 3 changing the
dynamic type of memory.
(c) Therefore, it's relatively easy to fix our placement new problem.

I agree with those statements too.

However, I don't like this approach because I believe it will result in
inferior code.  I think that you're looking at the proposed placement
new patches, then looking at what they do to a particular codebase,
which happens to use placement-new in an inner loop, and becoming
unhappy with the patches.  I suspect that the changes required to
support the *p = 3 model, while perhaps better for that case, will be
worse for many others.

I can't prove that.  But, I did implement TBAA after looking at what
other compilers did, specifically to improve performance of (ironically)
POOMA.  So, I'm afraid that you're going to find that if we allow memory
writes to change the type of memory, we will get worse performance.

That's why I'm much more comfortable with a change that only affects
placement new.  At least, if placement new is slow, we can tell users
not to use it in inner loops.  If using pointers is slow, there's
nothing we can do.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #106 from mark at codesourcery dot com  2007-05-22 16:04 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenth at gcc dot gnu dot org wrote:

  - we _cannot_ sink loads across stores.
 
  x = *int;
  *double = 1.0;
 
the store to double may change the dynamic type of what *int
points to.

To be clear, you mean something like this, right:

  int i;
  int *ip = &i;
  double *dp = (double *)&i;
  int x;
  x = *ip;
  *dp = 1.0;

?

I think that considering this code valid, and, therefore, forbidding the
interchange of the last two statements, requires a perverse reading of
the standard.  Placement new allows you to change the dynamic type of
storage; I don't think that just writing through a pointer does.  A key
goal of C++ relative to C was better type-safety.  The placement new
operator provides a facility for explicitly controlling object lifetime,
for programmers that need this.

Before we do anything to support the case above, we should have a
crystal-clear ruling from the committee that says this is valid.
Otherwise, this is exactly the kind of optimization that TBAA was
designed to perform.

For history, the reason I implemented TBAA in GCC was that the SGI
MIPSPro C/C++ compiler did these kinds of optimizations ten years ago,
and I was trying to catch us up when looking at POOMA performance on
IRIX.  G++ has had the freedom to interchange those stores for a long
time, and I believe it should continue to have that choice.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #109 from mark at codesourcery dot com  2007-05-22 17:19 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

gdr at cs dot tamu dot edu wrote:

 Consider the following instead
 
// tu-1.C
void f(int* p) {
   *p = 90;
   // ...
   *(double *) p = 8.3748;
};
 
 Is the above code invalid, independent of context?   I don't think
 you can find a wording in the standard that says it is invalid.

IMO, the standard is just not clear with respect to aliasing.  We cannot
rely upon it, except as a guide.  As I've said throughout this thread,
it doesn't make sense to try to do close reading of the standard for
aliasing issues because it just wasn't written with those issues in
mind, just as there are all of the famous memory model issues in C.

In any case, I consider the code above invalid.

 Indeed, consider this:
 
// tu-2.C
void f(int*);
void g() {
   union {
 int i;
 double d;
   } t;
 
  t.i = 42;
  f(&t.i);
  cout << t.d << endl;
}
 
 I believe we can all agree the definition of g is valid.

No, I do not.  And GCC historically has not; you are only allowed to use
the union for type-punning if the accesses are through the union
directly.  That was the decision we made a long time ago regarding TBAA,
and it even appears in the manual; the -fstrict-aliasing documentation says:

The practice of reading from a different union member than the one most
recently written to (called ``type-punning'') is common.  Even with
@option{-fstrict-aliasing}, type-punning is allowed, provided the memory
is accessed through the union type.  So, the code above will work as
expected.  However, this code might not:
@smallexample
int f() @{
  a_union t;
  int* ip;
  t.d = 3.0;
  ip = &t.i;
  return *ip;
@}
@end smallexample

The point here is that the compiler is allowed to decide that t.d does
not alias *ip because the latter is not a direct access through the union.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #111 from mark at codesourcery dot com  2007-05-22 17:37 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

gdr at cs dot tamu dot edu wrote:

 | No, I do not.  And GCC historically has not; you are only allowed to use
 | the union for type-punning if the accesses are through the union
 | directly. 
 
 I am not talking of the GCC's historical behaviour here, but what the
 standard actually says.  For the object t, above the last write was
 to the double field, therefore the read is well-defined.

Suffice it to say that I disagree.  I'm not debating that you can read
the standard that way.  But, I don't think the standard contemplated
these issues in sufficient detail to make it useful in this respect.

Pragmatically, I don't think that we should change GCC, after years of
people using it with the current rules, to make it generate inferior
code -- without clear guidance from the standards committee.  IMO, that
needs to go beyond a reading of the current standard; there needs to be
a clear expression from the committee that, indeed, the compiler cannot
use TBAA in the way that GCC has historically used it.

I'm all for bringing G++ into better conformance with the standard, and
agree that correctness is more important than optimization, but I don't
believe that the standard was written with these considerations in mind,
so I don't think it can be relied upon in this respect.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #113 from mark at codesourcery dot com  2007-05-22 17:54 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

gdr at cs dot tamu dot edu wrote:
 --- Comment #112 from gdr at cs dot tamu dot edu  2007-05-22 17:46 ---
 Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement new does not change the
 dynamic type as it should
 
 mark at codesourcery dot com [EMAIL PROTECTED] writes:
 
 | But, I don't think the standard contemplated
 | these issues in sufficient detail to make it useful in this respect.
 
 The issues has been raised on the -core reflector.

So that I understand, do you mean that they have been raised in the
past, and settled, or that you've just raised them now?

Thanks,


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #118 from mark at codesourcery dot com  2007-05-22 18:34 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

gdr at cs dot tamu dot edu wrote:

 Thanks for reminding all those points.  I'll ensure that I make those
 points stand in subsequence messages.

I think it's also worth pointing out to the committee that the more
aggressive aliasing rules (in which only access directly through a union
is allowed) have been GNU C/C++ practice for a long time.  I would guess
that we made this change around the year 2000.  So, there's a large body
of code that conforms to the requirements of the aggressive
interpretation.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-22 Thread mark at codesourcery dot com


--- Comment #120 from mark at codesourcery dot com  2007-05-22 18:55 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

gdr at cs dot tamu dot edu wrote:

 |  I would guess
 | that we made this change around the year 2000.  So, there's a large body
 | of code that conforms to the requirements of the aggressive
 | interpretation.
 
 Yes; those programs will continue to be conformant.

Indeed -- but that's not the point I was trying to make.  The point is
that your changes would force G++ to generate inferior code -- for a
codebase that already works with the more aggressive interpretation.

In other words, from a GNU product marketing point of view, rather
than from a C++ language standards point of view, the change you want to
make is just going to hurt our users, who have made their code work with
the aggressive interpretation.  It's only going to help people moving to
G++ from other compilers that do not use the aggressive interpretation.
 And, we already have -fno-strict-aliasing for those folks.

I'm not trying to tell you not to make the argument that you think is
best for C++ as a language; obviously, that's your right.  I'm just
pointing out that the change you want to make probably isn't going to
help users who are already working with G++.

And, although I don't have the time/energy that you seem to have to work
on these standards issues, I do plan to oppose your interpretation on
the reflector.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-18 Thread mark at codesourcery dot com


--- Comment #80 from mark at codesourcery dot com  2007-05-18 07:26 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

ian at airs dot com wrote:
 --- Comment #78 from ian at airs dot com  2007-05-18 07:14 ---
 The test case in comment #73 is just a standard aliasing violation.  You are
 casting a double* to an int* and writing to it both ways.

I'm confused.  The double-ness looks irrelevant to me; it could just as
well be void *.  The only actual accesses to the memory are through an
int * pointer and a long * pointer, and there's a placement new
between the two.  I thought the whole point of these patches was to
allow placement new to change the type in exactly this way?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-18 Thread mark at codesourcery dot com


--- Comment #89 from mark at codesourcery dot com  2007-05-18 17:44 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

ian at airs dot com wrote:
 --- Comment #86 from ian at airs dot com  2007-05-18 17:24 ---
 Re comment #80, comment #81, comment #82.  My patch handles the placement new
 in comment #73 to indicate an alias between double and long.  The mis-ordered
 code is actually aliasing int and long.  That aliasing is due to a cast to
 (int*).  There is no placement new in that cast.  There is no reason for the
 compiler to avoid exchanging the store via int* and the store via long*.  Why
 should it? 

I don't think the fact that p is a double * is relevant; it could
just as well be void *.  This kind of code is unambiguously valid:

  void f(double *p) { *(int*)p = 3; }
  void g() {
    int i;
    f((double *)&i);
  }

Pedantically, the alignment of double has to be no stricter than int
for this to be valid, but since we define pointer conversion as a no-op,
it's always valid in GNU C.

This is why I liked your earlier patch that made placement new a memory
barrier.  I think the right handling of placement new is basically to
say "everything we know about types is forgotten here."  One can limit
that to things to which the operand might point, but points-to analysis
cannot (and should not) tell you that p and l never point at the same
place, since we have no idea what p points at.

I don't think this is equivalent to totally disabling TBAA in C++.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-18 Thread mark at codesourcery dot com


--- Comment #93 from mark at codesourcery dot com  2007-05-18 19:01 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

ian at airs dot com wrote:

 void f(double* p) { *(int*)p = 3; long *l = new (p) long; *l = 4; }
 void g() { int i; f((double *)&i); }
 
 And the specific question is whether we are permitted to interchange the
 assignments to *p and *l.

I do not think we are.

 void f(double* p) { *(int*)p = 3; long *l = (long*)p; *l = 4; }
 
 Is that valid?  Is the compiler permitted to interchange the assignments to *p
 and *l?  Consider that, as in comment #73, p might actually point to a union 
 of
 int and long.  Does that fact that that union might exist somewhere else make
 this test case valid?  Presumably it does not.  Presumably this is invalid.

Agreed; this case is invalid.

 So if that is not valid, and the placement new case is valid, then what is the
 essential difference between the cases?  The variable is being accessed via 
 two
 different types.  Why is that OK?

Because placement new changes the type of storage, in the same way that
using (ordinary) delete and then (ordinary) new (but getting
back the same memory pointer) does.  The placement new operator is
special.

 You're right that don't have to abandon TBAA to make this work, that we can
 make it work by turning placement new into a memory barrier.  But then you 
 have
 to address comment #42.  That approach will cause a performance regression for
 natural valid code.  The question then becomes whether we are willing to pay
 that performance regression for normal code in order to support this sort of
 weird code.

I am willing to accept that performance regression.  I don't consider
that code normal; many C++ performance libraries now provide a way to
produce an uninitialized container, precisely to avoid default
construction.  POOMA could use that technique.

It would of course be better (though, in my opinion, not essential) to
have a more gentle barrier.  If we could tell the compiler to forget the
type of anything that the argument to placement-new might point to, but
not to assume that arbitrary weirdness has occurred, then the compiler
could still eliminate the redundant stores.

In other words, in Comment #42, the problem is that the volatile asm
tells the compiler that not only must the stores/loads not be reordered
across the barrier, but that stores before the barrier must actually
occur because there may be some arbitrary action at the barrier that
depends upon the values written.  If we had a barrier that says just
that the operations may not be reordered across the barrier -- but does
not say that the operations before the barrier are side-effecting --
then we could still eliminate them as redundant.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-18 Thread mark at codesourcery dot com


--- Comment #97 from mark at codesourcery dot com  2007-05-18 21:17 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

rguenth at gcc dot gnu dot org wrote:

 But construction/initialization of uninitialized memory happens
 with
 placement new!  So we're back to square one.  What this PR initially was about
 is a fixed type memory allocator in C++ which needs to change memory from
 allocated type T to free-space-managing-structure S at deallocation time and
 the other way around at allocation time.  We absolutely _have_ to handle
 this case correct.  And we need to optimize the memory routines that use
 placement new, because they resemble patterns used in libraries like POOMA
 or Boost.

First and foremost, we have to generate correct code.  If that means
the memory barrier solution, for now, then so be it.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286



[Bug libstdc++/29286] [4.0/4.1/4.2/4.3 Regression] placement new does not change the dynamic type as it should

2007-05-16 Thread mark at codesourcery dot com


--- Comment #77 from mark at codesourcery dot com  2007-05-17 00:41 ---
Subject: Re:  [4.0/4.1/4.2/4.3 Regression] placement
 new does not change the dynamic type as it should

ian at airs dot com wrote:

 I don't believe that the C++ standards writers really meant to eliminate 
 TBAA. 
 And that is the inevitable consequence of the dynamic memory type approach if
 you are allowed to change the dynamic type in a function.

I agree with Ian.

I think there is good evidence that the authors of the standard intended
C++ to be *more* typesafe than C, and we should read the standard in
that way.  It's unfortunate that neither the C nor the C++ standard is very
precise about various aspects of the memory model, but that is what it
is.  I think trying to read the standard to divine the answers to these
questions is essentially futile; this is a situation where we should
accept that the standard doesn't say, and do our best to balance
performance with existing practice and expectations.

I don't fully understand the point of Comment #73.  I thought the whole
point of this series of patches was to make the compiler understand that
memory returned by placement new could alias other memory, or to
otherwise introduce a barrier that would prevent the compiler from
reordering accesses across the call to operator new.  If that's the
case, why does the post-patch compiler still think that the writes to
f and l don't alias?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29286


