RE: Annoying silly warning emitted by gcc?

2019-01-23 Thread Joe Buck
 
On Wed, Jan 23, 2019 at 4:16 PM Warren D Smith  wrote:
>
> x = x^x;
>
> The purpose of the above is to load "x" with zero.

Don't waste your time.  Intel was offering that advice to writers of assembly 
language and compilers.  Gcc already does the right thing.

Try the following on an Intel/AMD machine:

% cat z.c
long long zero() {
    long long tmp = 0LL;
    return tmp;
}
% gcc -O2 -S z.c
% cat z.s
.file   "z.c"
.text
.p2align 4,,15
.globl  zero
.type   zero, @function
zero:
.LFB0:
.cfi_startproc
xorl    %eax, %eax
ret
.cfi_endproc
.LFE0:
.size   zero, .-zero
.ident  "GCC: (GNU) 4.8.3"
.section        .note.GNU-stack,"",@progbits


RE: RFC: Allow moved-from strings to be non-empty

2018-10-26 Thread Joe Buck
The reason move constructors were introduced was to speed up code in cases
where an object is copied and the copy is no longer needed.  It is
unfortunate that there may now be code out there that relies on accidental
properties of library implementations.  It would be best if the
implementation is not constrained.  Unless the standard mandates that,
after a string is moved, the string is empty, the user should only be able
to assume that it is in some consistent but unspecified state.  Otherwise
we pay a performance penalty forever.

If the standard explicitly states that the argument to the move constructor is 
defined to be
empty after the call, we're stuck.
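
To make the distinction concrete, here is a small hypothetical example
(not from the thread): only the "valid but unspecified state" guarantee
is portable.

#include <string>
#include <utility>

int main()
{
    std::string a = "a long string, probably heap-allocated";
    std::string b = std::move(a);  // move construction may steal a's buffer

    // NOT portable under the "unspecified state" reading; it merely
    // happens to hold on implementations that empty the source:
    // assert(a.empty());

    a = "reuse";                   // portable: a is valid and assignable
    return 0;
}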



RE: error printing in reversed order ?

2016-10-07 Thread Joe Buck
You can already do this today.  Run the output of the compiler through 'tac'.  
No need for a new feature.  

https://linux.die.net/man/1/tac

-Original Message-
From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of nicolas 
bouillot
Sent: Friday, October 07, 2016 12:09 PM
To: gcc@gcc.gnu.org
Subject: error printing in reversed order ?

Hi,

Was wondering if this could be a feature request? Basically, this could be a GCC
option to print compilation errors in reverse order, i.e.
the first being printed last. This is because when compiling from the terminal,
it would avoid mouse scrolling all day in order to get to the first error.

I'll be happy to write a feature request somewhere if this deserves it, but I
do not know where, or whether this can be considered a feature request.

Nicolas


RE: [PATCH] libstdc++/77645 fix deque and vector xmethods for Python 3

2016-09-19 Thread Joe Buck
Python has a distinct integer division operator, "//".  7 // 3 returns the 
integer 2.

-Original Message-
From: libstdc++-ow...@gcc.gnu.org [mailto:libstdc++-ow...@gcc.gnu.org] On 
Behalf Of Jonathan Wakely
Sent: Monday, September 19, 2016 10:11 AM
To: libstd...@gcc.gnu.org; gcc-patches@gcc.gnu.org
Subject: [PATCH] libstdc++/77645 fix deque and vector xmethods for Python 3

The problem for these xmethods is that in Python 3 division of two integers 
produces a float, and GDB then doesn't allow that value to be used for pointer 
arithmetic (https://sourceware.org/PR20622).

PR libstdc++/77645
* python/libstdcxx/v6/xmethods.py (DequeWorkerBase.__init__)
(DequeWorkerBase.index, VectorWorkerBase.get): Cast results of
division to int to work with Python 3.

Tested x86_64-linux, committed to trunk.

I'll fix it for gcc-5 and gcc-6 too.




RE: Fwd: Building gcc-4.9 on OpenBSD

2014-09-18 Thread Joe Buck
(delurking)

Ian Grant writes:

 In case it isn't obvious, what I am interested in is how easily we can know 
 the problem of infeasibly large binaries isn't an instance of this one:


 http://livelogic.blogspot.com/2014/08/beware-insiduous-penetrator-my-son.html

Ah, this is commonly called the Thompson hack, since Ken Thompson actually 
produced a successful demo:

http://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html

The only way that the Thompson hack can survive a three-stage bootstrap is if
the compiler used for the stage 1 build has the bad code.  The comparison
between stages 2 and 3 requires an exact match, and any imperfection in the
object code injection would reveal itself.

So, you can build GCC with LLVM or Intel's compiler or Microsoft's or IBM's or 
Sun's, doing cross-compilation where necessary.  The basic idea is:

1: build gcc with 3-stage bootstrap, starting with a compiler that you suspect 
might be infected.  call the result A.
2: do it again, starting with a different compiler that you think is 
independent of the compiler you used in step 1.  call it B.
3: compare A to B.  If they differ, you've found something that should be
investigated.  If they don't, then either A and B are both clean, or A and B
both have the identical inserted object code. Maybe they have a common ancestor?

Note that if you build gcc with a cross-compiler the object code will be 
different.  You have to use the cross-compiler to build one more time to 
normalize: GCC 4.9.0 built with GCC 4.9.0 on operating system X should always 
be the same.

As far as I know no one has been paranoid enough to put in the time to do the 
experiment on a large scale, and it's harder because you can't build a modern 
GCC (or LLVM for that matter) with an ancient compiler.  But you can create a 
chain: grab an ancient gcc version off a 15-year-old CD, and build newer 
versions with it until you get up to the present.  The result should be
byte-for-byte identical with what you get when building the current compiler 
with a recent version.  If it is, then either the infection is 15 years old or 
does not exist.  Try it again by building cross-compilers from a Microsoft 
system.  Don't trust Apple, they used to use GCC so maybe all their LLVM 
binaries caught the bug.


BTW, if size is reporting a much smaller size than the executable file itself,
and that motivates this concern, most of the difference is likely to be debug
info, which is bigger since gcc switched to C++.  You might want to try strip.



RE: Remove spam in GCC mailing list

2013-12-28 Thread Joe Buck
Some background on the below: Google has recently changed its algorithms, and 
the presence of obvious spam mails pointing to a site now *lower* that site's 
Google rank.  So the same search engine optimization people who created the 
spams for pay in the first place are now frantically trying to get the spams 
removed, to keep their clients from suing them.  I think gcc is best served by 
just leaving the spams in the archive, as permanent punishment for the people 
who paid to wreck the Internet for their own gain.

-Original Message-
From: gcc-ow...@gcc.gnu.org [mailto:gcc-ow...@gcc.gnu.org] On Behalf Of Tae Wong
Sent: Saturday, December 28, 2013 3:40 AM
To: gcc@gcc.gnu.org
Subject: Re: Remove spam in GCC mailing list

You want to send a mail to python-dev at python dot org.

The spam still exists in gcc-bugs mailing list:
http://gcc.gnu.org/ml/gcc-bugs/2013-08/msg00689.html
http://gcc.gnu.org/ml/gcc-bugs/2013-08/msg00759.html
http://gcc.gnu.org/ml/gcc-bugs/2013-08/msg00776.html
http://gcc.gnu.org/ml/gcc-bugs/2013-08/msg01181.html
http://gcc.gnu.org/ml/gcc-bugs/2013-08/msg01586.html
http://gcc.gnu.org/ml/gcc-bugs/2013-09/msg01513.html
http://gcc.gnu.org/ml/gcc-bugs/2013-09/msg01946.html
http://gcc.gnu.org/ml/gcc-bugs/2013-09/msg01947.html
http://gcc.gnu.org/ml/gcc-bugs/2013-09/msg02011.html

There's no reason that the gcc-bugs mailing list can post bug reports directly.

Please delete spam messages from gcc-bugs.

-- 
Tae-Wong Seo
Korea, Republic of


Re: i386 __atomic_compare_exchange_n not found

2013-08-09 Thread Joe Buck
On Fri, Aug 09, 2013 at 11:23:51AM -0500, Joel Sherrill wrote:
 On 8/9/2013 11:05 AM, Deng Hengyi wrote:
  Hi Joel,
 
  I have done a test, it seems that '-march=i386' does not provide 
  __atomic_compare_exchange_n libs. And '-march=i486' or '-march=pentium' 
  can find the '__atomic_compare_exchange_n' function.
 Look in the source for those methods on x86 and see what instruction
 it used. If it only got added in i486, then we have to figure out
 something for i386. If it was an oversight and the instruction is
 on an i386, we fix the code.

The i386 architecture lacks atomic compare-and-exchange instructions, to the
point where libstdc++ can't be built for that architecture (correct and
efficient atomic operations are vitally important for libstdc++, and on i386
they can't be done).

The worry is that if you add atomic operations that don't lock for the
i386 architecture, you've screwed anyone who decides to build their
application for i386 hoping for maximum portability, but winds up with
locks that don't lock.

You could perhaps handle that for RTEMS by providing these functions in a
library, but users need to understand this issue, because improper locks
are tough to debug.
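
For reference, here is a minimal use of the builtin in question (a sketch
using GCC's documented __atomic_compare_exchange_n signature):

bool compare_swap(int *p, int expected, int desired)
{
    // With -march=i486 or later this expands to an inline lock cmpxchg;
    // with -march=i386 GCC instead emits a call to an out-of-line helper
    // that no default library provides, hence the link failure.
    return __atomic_compare_exchange_n(p, &expected, desired,
                                       /*weak=*/false,
                                       __ATOMIC_SEQ_CST,
                                       __ATOMIC_SEQ_CST);
}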



RE: Stale C++ ABI link

2012-12-14 Thread Joe Buck
Richard Henderson writes:
 On
  http://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
 we have a stale link to
  http://www.codesourcery.com/public/cxx-abi/abi.html

What's the new canonical location for this document?

Looks like CodeSourcery is being assimilated into Mentor.  The parent directory 
points to

http://mentorembedded.github.com/cxx-abi/abi.html

as the new location.


Re: inlined memcpy/memset degradation in gcc 4.6 or later

2012-10-05 Thread Joe Buck

On Thu, Oct 4, 2012 at 7:44 PM, Joe Buck joe.b...@synopsys.com wrote:
  Perhaps I'm missing something.  While memcpy is not permitted to assume
  alignment of its arguments, copy is.  Otherwise, if I wrote
 
  void copy(struct foo* f0, struct foo* f1)
  {
  *f0 = *f1;
  }
 
  the compiler would also be forbidden from assuming any alignment.  So,
  when compiling copy, do we lack the information that the memcpy call is
  to the standard ISO memcpy function?  If we know that it is the standard
  function we should be able to do the copy assuming everything is properly
  aligned.
 
On Fri, Oct 05, 2012 at 10:32:55AM +0200, Richard Guenther wrote:
 If we see the above aggregate copy then we should be able to compile
 the function assuming that f0 and f1 are properly aligned for type struct foo.
 If we see C source code using memcpy (f0, f1, sizeof (struct foo)) then
 we cannot assume anything about the alignment of f0 or f1 based on the
 fact that the code uses the ISO memcpy function.

Sorry, that makes no sense at all.  Let's say that I'm a user of the
function copy, and I don't know if the implementer of copy chose to
write

*f0 = *f1;

or if she used memcpy.  What you are telling me is that I need to know
that information, because in one case I am permitted to pass in nonsense
pointers that only claim to point to a (struct foo), and in the other
case I have to use proper struct foo's, aligned according to the rules of
that platform.  In fact, if I pass invalid pointers I deserve to lose, and
GCC should not be required to implement pessimistic code to deal with a
possibility that cannot occur.  A (struct foo*) has to point either to a
proper structure or be null.

What I am assuming is that memcpy(f0, f1, sizeof(struct foo)) is
equivalent to *f0 = *f1, because that is what it does: it copies the
structure.  The types of the pointers tell me the required alignment.
If there is language in the standard indicating otherwise then the
standard is defective, because it prevents an obvious optimization.




Re: inlined memcpy/memset degradation in gcc 4.6 or later

2012-10-04 Thread Joe Buck

On Tue, Oct 2, 2012 at 4:19 PM, Walter Lee w...@tilera.com wrote:
  On TILE-Gx, I'm observing a degradation in inlined memcpy/memset in
  gcc 4.6 and later versus gcc 4.4.  Though I find the problem on
  TILE-Gx, I think this is a problem for any architectures with
  SLOW_UNALIGNED_ACCESS set to 1.
 
  Consider the following program:
 
  struct foo {
int x;
  };
 
  void copy(struct foo* f0, struct foo* f1)
  {
memcpy (f0, f1, sizeof(struct foo));
  }
 
  In gcc 4.4, I get the desired inline memcpy: ...
  In gcc 4.7, however, I get inlined byte-by-byte copies: ...

On Thu, Oct 04, 2012 at 01:58:54PM +0200, Richard Guenther wrote:
 There is no way to fix it.  memcpy does not require aligned arguments
 and the merely presence of a typed pointer contains zero information
 of alignment for the middle-end.  If you want to exercise C's notion
 of alignment requirements then do not use memcpy but
 
  *f0 = *f1;
 
 which works equally well.

Perhaps I'm missing something.  While memcpy is not permitted to assume
alignment of its arguments, copy is.  Otherwise, if I wrote

void copy(struct foo* f0, struct foo* f1)
{
*f0 = *f1;  
}

the compiler would also be forbidden from assuming any alignment.  So,
when compiling copy, do we lack the information that the memcpy call is
to the standard ISO memcpy function?  If we know that it is the standard
function we should be able to do the copy assuming everything is properly
aligned.


 Btw, the new behavior even fixed bugs.

Could you point to a PR that was fixed by the change?  There must be some
way to distinguish this case from those cases.


Re: [v3] improve exception text when threads not enabled

2012-08-13 Thread Joe Buck
On Sun, Aug 12, 2012 at 08:02:30PM +0100, Jonathan Wakely wrote:
 This improves the fairly uninformative "Operation not supported"
 message given when std::thread is used without linking to libpthread.
 
 Now you get:
 
 terminate called after throwing an instance of 'std::system_error'
   what():  Enable multithreading to use std::thread: Operation not permitted
 Aborted

The new message still seems deficient.  The issue is that the executable
does not contain any thread support; "not permitted" usually suggests a
permission violation (like trying to write a read-only file).  Perhaps "no
thread support found" should be used instead of "Operation not permitted".
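
A minimal program that triggers the message under discussion, assuming it
is built without -pthread:

#include <thread>

// Sketch: without libpthread the std::thread constructor throws
// std::system_error; the default terminate handler then prints the
// what() string quoted above.
int main()
{
    std::thread t([]{});
    t.join();
}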


Re: Add corollary extension

2012-06-29 Thread Joe Buck
On Thu, Jun 28, 2012 at 12:39:16PM -0700, Rick Hodgin wrote:
 I've thought more about the syntax, and I see this making more sense:
 bool isSystemOpen[!isSystemClosed];

You've just declared an array of bool, whose size is the expression 
!isSystemClosed.

As developers have already shown you how to achieve what you want in the
existing language, you should define an inv_bool class, then write

inv_bool isSystemOpen(isSystemClosed);

and use the feature to your heart's content.
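
A minimal sketch of such an inv_bool class (the class body here is
hypothetical; only the idea comes from the thread):

class inv_bool {
    bool &flag;  // the underlying "closed" flag being inverted
public:
    explicit inv_bool(bool &f) : flag(f) {}
    operator bool() const { return !flag; }                   // reads as the negation
    inv_bool &operator=(bool v) { flag = !v; return *this; }  // writes the negation
};

// Usage, as in the message:
//   bool isSystemClosed = false;
//   inv_bool isSystemOpen(isSystemClosed);
//   if (isSystemOpen) { /* ... */ }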

There's a very high bar to accepting a language extension, because
developers need to know, to draft standard level of detail, how that
feature interacts with existing language features, and you can't change
the meaning of valid code.  Furthermore, the vast engineering effort isn't
worth doing if users can achieve the same thing in the standard language,
perhaps with slightly different syntax.

The previous proposal was for a self keyword.  But

#define self (*this)

and you're done.



Re: GCC and Clang produce undefined references to functions with vague linkage

2012-06-28 Thread Joe Buck
On Thu, Jun 28, 2012 at 02:13:47PM -0400, Rafael Espíndola wrote:
[ problem with visibility for bar::~bar for testcase ]
 $ cat test.h
 struct foo {
   virtual ~foo();
 };
 struct bar : public foo {
   virtual void zed();
 };
 $ cat def.cpp
 #include "test.h"
 void bar::zed() {
 }
 $ cat undef.cpp
 #include "test.h"
 void f() {
   foo *x(new bar);
   delete x;
 }
 
...
 
 I can see two ways of solving this and would like for both clang and
 gcc to implement the same:
 
 [1] * Make sure the destructor is emitted everywhere. That is, clang and
 gcc have a bug in producing an undefined reference to _ZN3barD0Ev.
 [2] * Make it clear that the file exporting the vtable has to export the
 symbols used in it. That is, the Itanium c++ abi needs clarification
 and so does gcc's lto plugin (and the llvm patch I am working on).

I think that the second solution wins because it allows for the production
of less object code, and it is consistent with the rationale for the
vtable optimization rule (the vtable is emitted by the file that has the
definition for the first non-inline virtual function; simply do the same
for the auto-generated virtual destructor).  The first solution requires
making one copy per compilation unit and eliminating the duplicates at
link time.
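
Restating the rule with the thread's own testcase (a sketch; the comments
reflect the key-function rule as described above):

// bar's first non-inline virtual function is bar::zed, so the
// translation unit defining bar::zed (def.cpp) is bar's "key" unit:
// it emits the vtable, and under solution [2] it must also emit the
// symbols the vtable references, including the implicitly generated
// destructor (_ZN3barD0Ev).
struct foo { virtual ~foo(); };
struct bar : public foo { virtual void zed(); };

void bar::zed() { }  // def.cpp: this file should own bar's vtable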


RE: self keyword

2012-06-14 Thread Joe Buck
It only saves one character in any case: your self is just *this.


From: gcc-ow...@gcc.gnu.org [gcc-ow...@gcc.gnu.org] on behalf of Ian Lance 
Taylor [i...@google.com]
Sent: Thursday, June 14, 2012 10:19 AM
To: Rick C. Hodgin
Cc: gcc@gcc.gnu.org
Subject: Re: self keyword

Rick C. Hodgin foxmuldrs...@yahoo.com writes:

 I was thinking C and C++.

 int myclass::foo(int a)
 {
 // recursion
 self(a + 1);
 }

 Just out of curiosity, why wouldn't it be accepted back into mainline?

In general these days GCC discourages language extensions.  They would
have to have a compelling advantage.  I don't see that here.  Even if I
did, I would recommend running it through a language standards body
first.

Ian


Re: Updated GCC vs Clang diagnostics

2012-04-12 Thread Joe Buck
On Fri, Apr 13, 2012 at 12:42:19AM +0200, Manuel López-Ibáñez wrote:
 I would like to have color output. And since nobody is paying me to do
 this work, I'd rather work on what I would like to have. The question
 is whether this is something that GCC wants to have.

 If the answer is NO, that is fine, I will have more free time.

I'm not interested in color output, and would turn it off if it were
implemented (the escape sequences would just mess things up when capturing
compiler output in log files).

Clang is much smarter about identifying what the user probably meant when
the issue is a typo, or "." instead of "->" or vice versa.  Getting GCC to
do at least as well in this area is a much better use of developers' time
than presenting a cascade of unintelligible messages resulting from
template expansion in full color.

That said, you're free to work on what interests you.


Re: contributing to gcc

2012-01-09 Thread Joe Buck
On Mon, Jan 09, 2012 at 04:33:54PM -0800, Aayush Upadhyay wrote:

 I'm a sophomore in college, and I'm a solid C programmer. I'd like to
  work on an open source project, and the gcc compiler seems like a great
  one. However, I'm not sure if work is still done on the compiler itself,
  or just porting it to other systems? I'm interested in the former, but I
  don't know much about compilers. Would it be possible for me to make
  meaningful contributions, and if so, how should I start?

Quite a bit of work is being done on the compiler itself, with lots of
brainpower devoted to making it better and to implement the latest
language standards.

There is a great deal of information on gcc development on http://gcc.gnu.org/ .
As this is a core piece of infrastructure for GNU/Linux and lots of other
folks, standards are extremely high.  However, you could start out by
reading

http://gcc.gnu.org/contribute.html

and

http://gcc.gnu.org/projects/beginner.html

The latter lists some projects that non-experts could contribute to,
though it is wise to check for duplication and relevance before starting
any major efforts.


RE: Long-term plan for C++98/C++11 incompatibility

2012-01-05 Thread Joe Buck
On 10/10/2011 08:07 PM, Gabriel Dos Reis wrote:
 PODness has changed from C++98.

Jason Merrill wrote:

 Class layout in the ABI still uses the C++98 definition of POD.

But does this actually matter?  If I understand correctly, more classes are POD 
under the C++11
rules than the C++98 rules, but are there any classes that are legal C++98 that 
require a different
layout under the new rules?  Can anyone produce an example of a real (and not a 
theoretical)
binary incompatibility?


Re: wish: generation of type annotation for C++11 code.

2011-11-10 Thread Joe Buck
On Thu, Nov 10, 2011 at 10:04:34PM -0800, Gabriel Dos Reis wrote:
 On Thu, Nov 10, 2011 at 10:12 AM, Jonathan Wakely jwakely@gmail.com 
 wrote:
 
  Adding this to GCC seems like a total waste of time, write a dwarf
  processor that dumps the info you want.
 
 
 Agreed.
 
 I suspect there is a misunderstanding of what 'auto' means in C++.
 Furthermore, I think the step is completely backward.

Yes, the reason I'm delighted with auto is that there are cases where
I do not want to know the type (or I want to write generic code that
will work with different kinds of containers).  For

std::multimap<Foo,Bar> amap;

when I write

auto ipair = amap.equal_range(key);
for (auto iter = ipair.first; iter != ipair.second; ++iter)
  do_something_with(iter->first, iter->second);

I explicitly do not want to know the details of the ridiculously hairy
type of ipair.  If you want to know, it is

std::pair<std::multimap<Foo,Bar>::iterator, std::multimap<Foo,Bar>::iterator>

and that's with the defaulted template parameters omitted.



Re: Long-term plan for C++98/C++11 incompatibility

2011-10-10 Thread Joe Buck
On Fri, Oct 07, 2011 at 07:35:17PM -0700, Gabriel Dos Reis wrote:
 C++11 is essentially binary incompatible with C++98.

Only partially.  The layout for user-defined classes is the same, and
code sequences for calls that don't include new features like rvalue
references is the same.  Some very important classes from the standard
library are different, and that creates an incompatibility.

 The best thing people should do is to take it seriously that they should
 not attempt to mix or play loose.

Unfortunately, distros aren't going to start shipping two versions of
every C++ library, so people are going to have to solve the problem that
you are saying should not be solved, or at least determine which cases
are problems and which aren't.  If common libraries are all c++98-only
this will slow the adoption of c++11: forcing a flag day when this is not
necessary is probably not the right thing to do.

I wrote:
 Eventually there would need to be one libstdc++ that programs link against and
 run whether they use c++98 or c++11. I would expect there to be restrictions,
 but it's a problem that eventually needs to be tackled.

Gaby writes:

 My opinion is that it would be an exercise in futility, frustration, and
 possibly deception to try to make people believe that there are sets of
 simple rules they can follow to mix their C++98 binaries with a fully
 compatible C++11 library.  They would have to recompile the source code.

They will need to build their code with the same compiler, yes, and it
won't be binary-compatible with older versions.  But as of today, the
templates in libstdc++ expand differently depending on language mode, so
the library is already providing both sets of interfaces.

A bump to libstdc++ major version number could be made that ensures, for
example, that std::string satisfies the requirements of both languages.
It's also possible that the code in the standard library uses c++11 features
internally even when the user has specified c++98 (to get move semantics,
for example).



RE: Long-term plan for C++98/C++11 incompatibility

2011-10-07 Thread Joe Buck


On Fri, Oct 7, 2011 at 5:24 PM, James Y Knight f...@fuhm.net wrote:

 I guess to start, it would have been nice if there was a big warning on
 http://gcc.gnu.org/projects/cxx0x.html telling me not to use c++0x mode
 unless there are no objects compiled with c++98 linked into the same
 executable.

Gabriel Dos Reis [g...@integrable-solutions.net] wrote:
 I was under the impression c++0x was explicitly documented as experimental.

Yes. But I hope that some thought is devoted to figuring out how this problem
can be dealt with when c++11 support is transitioned to a fully supported 
feature.
Eventually there would need to be one libstdc++ that programs link against and
run whether they use c++98 or c++11. I would expect there to be restrictions,
but it's a problem that eventually needs to be tackled.


Re: C++11 no longer experimental

2011-09-21 Thread Joe Buck
On Wed, Sep 21, 2011 at 11:07:07AM -0700, Jonathan Wakely wrote:
 On 21 September 2011 19:00, Jonathan Wakely wrote:
  On 21 September 2011 18:51, Nathan Ridge wrote:
 
  Now that the C++11 standard has been officially voted in, there is nothing
   experimental about it any more.
 
  I thought the "experimental" refers to GCC's support, not the standard's
  status.
 
 The page you linked to even makes that clear:
 
 Important: because the ISO C++0x draft is still evolving, GCC's
 support for C++0x is *experimental*. No attempt will be made to
 maintain backward compatibility with implementations of C++0x features
 that do not reflect the final C++0x standard.

No, the page now claims something that is incorrect.  The C++0x draft is
no longer evolving.  C++11 is an official standard now.

It is still the case that the *GCC support for the standard* has to be
considered experimental, which means that it's not yet possible to freeze
the ABI and provide the same level of backward compatibility as is
provided for C++98.

Still, the page needs an update.




Re: [HELP] Fwd: Mail delivery failed: returning message to sender

2011-09-07 Thread Joe Buck
On Wed, Sep 07, 2011 at 08:08:01PM -0700, Xiangfu Liu wrote:
 Hi
 
 I got the pdf file. and I also sent out the papers by postal mail.
 where is the pdf file I should send to?
 
 I have tried:
copyright-cl...@fsf.org ass...@gnu.org
 
 and I don't know Donald R. Robertson's email address

copyright-cl...@fsf.org should be correct.  Maybe it was bounced
because of a file size limit or some configuration issue?  I suggest
seeing if you can send a shorter message.


Re: Bootstrap with -Wmissing-prototypes doesn't work for C++

2011-08-21 Thread Joe Buck
On Sat, Aug 20, 2011 at 07:20:41AM -0700, Ian Lance Taylor wrote:
 Hmmm, you're right, -Wmissing-declarations seems to be equivalent to
 -Wmissing-prototypes when using C++.  Sorry I missed that.

Then it would seem that HJ's issue could be fixed by treating
-Wmissing-prototypes as a synonym for -Wmissing-declarations when building
C++.
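
A tiny illustration of what the C++-mode option catches (hypothetical
example):

// Compiled with g++ -Wmissing-declarations, this definition warns
// because no prior declaration is visible -- the C++ counterpart of
// what -Wmissing-prototypes reports for C.
int f(int x) { return x + 1; }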



RE: Bootstrap with -Wmissing-prototypes doesn't work for C++

2011-08-19 Thread Joe Buck
I'm confused. Since C++ treats the lack of a prototype as a hard error, what 
does it mean to make -Wmissing-prototypes useless?


From: gcc-ow...@gcc.gnu.org [gcc-ow...@gcc.gnu.org] On Behalf Of H.J. Lu 
[hjl.to...@gmail.com]
Sent: Friday, August 19, 2011 9:36 AM
To: GCC Development
Subject: Bootstrap with -Wmissing-prototypes doesn't work for C++

Since -Wmissing-prototypes doesn't work for C++, using
C++ to bootstrap GCC makes -Wmissing-prototypes useless.
You will see the -Wmissing-prototypes message in stage 1,
but you won't see it in stages 2/3.

--
H.J.


Re: [LLVMdev] Handling of pointer difference in llvm-gcc and clang

2011-08-11 Thread Joe Buck
On Thu, Aug 11, 2011 at 09:05:19AM -0700, Florian Merz wrote:
 If I remember the standard correctly, pointer subtraction is valid if both 
 pointers point to elements of the same array or to one past the last element 
 of the array. According to this 0x8000 - 0x7FFF should be a valid 
 pointer subtraction with the result 0x0001.
 
 But if the subtraction is treated as signed, this would be a signed integer
 overflow, as we subtract INT_MAX from INT_MIN, which surely must overflow,
 and the result therefore would be undefined.

It is true that the C and C++ languages make signed integer overflow
undefined, but that's for actual integer types as declared by the user.
For pointers, though the subtraction has to be signed (because, for two
pointers, either can come later in the address space), this signed
subtraction has to be defined to work in a two's complement fashion (so
the wraparound in your example case works reliably).




RE: [LLVMdev] Handling of pointer difference in llvm-gcc and clang

2011-08-11 Thread Joe Buck
On Thu, Aug 11, 2011 at 1:58 PM, Joseph S. Myers
jos...@codesourcery.com wrote:
  -ftrapv and -fwrapv should have no effect on pointer subtraction.

Gaby writes:

 Yes!

Wouldn't it suffice to convert the pointers to unsigned, do an unsigned 
subtraction, and then convert the result to signed? This would then guarantee 
that gcc uses two's complement semantics, independent of -ftrapv.
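
A sketch of that lowering as source code (hypothetical helper; the
uintptr_t arithmetic is what guarantees the wraparound):

#include <cstddef>
#include <cstdint>

std::ptrdiff_t byte_diff(const char *p, const char *q)
{
    // Unsigned subtraction wraps modulo 2^N regardless of -ftrapv;
    // converting back to a signed type recovers a possibly negative
    // difference with two's complement semantics.
    return static_cast<std::ptrdiff_t>(
        reinterpret_cast<std::uintptr_t>(p) -
        reinterpret_cast<std::uintptr_t>(q));
}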


Re: C99 Status - inttypes.h

2011-07-21 Thread Joe Buck
On Thu, Jul 21, 2011 at 07:30:16AM -0700, Joseph S. Myers wrote:
 On Thu, 21 Jul 2011, Diogo Sousa wrote:
 
  Hi,
  
  I checked the "library functions in inttypes.h" item in c99status
  (marked as "Library Issue") [http://gcc.gnu.org/c99status.html], and it
  seems that glibc implements everything the standard demands.
  
  Am I missing something or is this outdated? If so, where can I find more
  information about it?
 
 "Library Issue" simply means it's not GCC's responsibility; it says nothing
 about the state in any particular library that may be used with GCC.

But readers will focus on the word "Issue" here and think that there is
something missing.  Perhaps there should be a footnote explaining that
glibc/eglibc has the needed support, but that other libraries might not.



Re: RFA (libstdc++): C++/v3 PATCH for c++/24163 (lookup in dependent bases) and c++/29131

2011-05-20 Thread Joe Buck
On Fri, May 20, 2011 at 09:32:16AM -0700, Jason Merrill wrote:
 G++ has had a long-standing bug with unqualified name resolution in 
 templates: if we didn't find any declaration when looking up a name in 
 the template definition, we would do an additional unqualified lookup at 
 the point of instantiation.  This led to incorrectly finding 
 namespace-scope functions declared later (29131) and member functions of 
 dependent bases (24163).  This patch fixes that bug.

I get the impression that most competing C++ compilers (other than the
old HP compiler) were (or are) very loose about that rule.

 To be friendly to users, the patch also allows affected code to compile 
 with -fpermissive and provides suggestions about how to fix the code: 
 either declaring the desired function earlier (29131) or explicitly 
 qualifying the name with this-> or Class:: (24163).

I think that it's quite likely that there is a lot of C++ code out there
that depends on this bug to compile.  So I'm glad that you've included
user guidance in the error messages, and it would be interesting to see
how much code is affected when, say, compiling a distro.
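
For readers unfamiliar with the bug, a small sketch of the dependent-base
case (hypothetical names):

template <typename T> struct Base {
    void helper();
};

template <typename T> struct Derived : Base<T> {
    void f()
    {
        // helper();      // formerly accepted; now an error
        //                // (or a warning with -fpermissive)
        this->helper();   // OK: dependent, looked up at instantiation
    }
};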



Re: 'The GNU Compiler for the JavaTM Programming Language' translation

2011-05-05 Thread Joe Buck
On Thu, May 05, 2011 at 11:33:44AM -0700, Paul Koning wrote:

 It sounds to me like the question "are you allowed to translate this" remains
 valid and open, even if this particular translator is not real.

Yes, the SC's discussing it with RMS now and I'm hopeful that there will
be some positive changes made (too early to say more than that).





Re: Use --format=pax for release?

2011-03-31 Thread Joe Buck
On Wed, Mar 30, 2011 at 11:38:02PM -0700, Ian Lance Taylor wrote:
 Our releases are normally built with GNU tar, which seems to default to
 --format=tar.  I wonder if we should switch to --format=pax.  The pax
 format was defined by POSIX.1 10 years ago, and should be widely
 supported at this point.  GNU tar can generate it and recognize it.

I've never seen anyone distribute free software with the pax format.
Yes, Posix created it to settle a fight between promoters of the tar
and the cpio format: create yet a third format and declare peace
("pax" is Latin for "peace").  But no one uses pax.

 That might permit us to remove this paragraph from install.texi:
 
 @item GNU tar version 1.14 (or later)
 
 Necessary (only on some platforms) to untar the source code.  Many
 systems' @command{tar} programs will also work, only try GNU
 @command{tar} if you have problems.

But you'd need a program that could deal with the pax format, which, for
most people, would be GNU tar.



Re: GCC 4.6.0 Released

2011-03-28 Thread Joe Buck
On Mon, Mar 28, 2011 at 11:52:56AM -0700, FX wrote:
  this is a known issue and strictly cygwin related. Please update your
  cygwin environment to newest version, or disable decimal-floating
  point by option.
 
 Well, maybe this is known, but it is not noted on the GCC 4.6.0 release 
 notes, nor on the target-specific installation information page at 
 http://gcc.gnu.org/install/specific.html#x-x-cygwin
 Possibly one of the target maintainers might want to update that?

I think that the right place for the note is at

http://gcc.gnu.org/install/specific.html#x-x-cygwin

It should say something like:

Versions of Cygwin older than x.y.z fail to build the decimal floating
point library, libbid.  You will either need to upgrade Cygwin to a newer
version or disable decimal floating point by specifying --disable-decimal-float
at configure time.




Re: Second GCC 4.6.0 release candidate is now available

2011-03-25 Thread Joe Buck
On Mon, Mar 21, 2011 at 03:12:14PM -0700, Jakub Jelinek wrote:
 A second GCC 4.6.0 release candidate is available at:
 
 ftp://gcc.gnu.org/pub/gcc/snapshots/4.6.0-RC-20110321/
 
 Please test the tarballs and report any problems to Bugzilla.
 CC me on the bugs if you believe they are regressions from
 previous releases severe enough to block the 4.6.0 release.

See http://gcc.gnu.org/ml/gcc-testresults/2011-03/msg02463.html .

There's an ICE for gcc.c-torture/compile/limits-exprparen.c
which might be an issue.  I think that the others may be due
to the ancient version of glibc on RHEL 4 systems, though I
haven't confirmed this.




Re: GIMPLE Question

2011-02-25 Thread Joe Buck
On Fri, Feb 25, 2011 at 11:33:58AM -0800, Andrew Pinski wrote:
 On Fri, Feb 25, 2011 at 11:21 AM, Kyle Girard k...@kdmanalytics.com wrote:
 
    That *is* the content of the bar method.  What exactly do you expect
  to see happening when you assign a class with no members?  There's
  nothing to do!
 
 
  I was hoping to see the assignment.  My example might have been a little
  too simple.  Here's a slightly more complex example:
 
  foo.hh
 
  class A
  {
  public:
    void yay(){};
  };
 
 A is still an empty class, try adding int a; and you will see some code 
 there.

The fact that passing an empty class has zero cost is used quite a bit in
the design of the standard library to generate efficient specialized
templates.  The empty objects are just used to select the correct
overloaded form.
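
A sketch of the tag-dispatch idiom alluded to (a simplified std::advance,
hypothetical names):

#include <iterator>

// The empty tag objects cost nothing to pass; they exist only to
// select the right overload at compile time.
template <typename It>
void advance_impl(It &it, int n, std::random_access_iterator_tag)
{ it += n; }                        // O(1) for random-access iterators

template <typename It>
void advance_impl(It &it, int n, std::forward_iterator_tag)
{ while (n-- > 0) ++it; }           // O(n) fallback

template <typename It>
void my_advance(It &it, int n)
{
    advance_impl(it, n,
        typename std::iterator_traits<It>::iterator_category());
}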




Re: AspectG++ ?

2011-02-18 Thread Joe Buck
On Fri, Feb 18, 2011 at 01:56:47AM -0800, David Lanzendörfer wrote:
 Hello Folks
 You certainly know about aspect orientated programming.
 http://www.aspectc.org/
 Is there any chance that this will ever be integrated into official gcc?
 Would be cool to define aspect because it would make your code much smaller
 and more readable. Additionally it comes in very handy if you wanna debug 
 something.

The following is just my opinion and others may disagree, but I don't
think it's a good idea because I think that the costs would greatly
outweigh the benefits.

OK, let's assume that the Aspect C++ group contributes a beautifully
engineered set of extensions to g++, meeting all the coding standards,
done with great style and with a large regression test suite, properly
legally assigned to the FSF.  Great code, slap the GNU label on it and
ship it, right?

But then there's the issue that there is no rigorous specification
(at the level of a draft ISO standard) for how the Aspect C++ features
interact with all other C++ language features (templates, exceptions,
rvalue references, the standard library, etc), and the fact that the
user community is microscopic as compared to existing GCC-supported
languages.  Then there's the question of how long we would have a
skilled maintainer.  It's an academic project.  It's a decade old and
still around, but all the publications are from one research group,
which might eventually move on to some other project.  In the meantime,
g++ developers who don't know Aspect C++ will face the problem that
their checkins break a language that they don't understand.

If, on the other hand, the language really catches on, to the point
where part or all of Aspect C++ gets into future standards for ISO
C++, this of course changes everything.



Re: RFC: A new MIPS64 ABI

2011-02-14 Thread Joe Buck
On Mon, Feb 14, 2011 at 05:57:13PM -0800, Paul Koning wrote:
 It seems that this proposal would benefit programs that need more than 2 GB 
 but less than 4 GB, and for some reason really don't want 64 bit pointers.
 
 This seems like a microscopically small market segment.  I can't see any 
 sense in such an effort.

I remember the RHEL hugemem patch being a big deal for lots of their
customers, so a process could address the full 4GB instead of only 3GB
on a 32-bit machine.  If I recall correctly, upstream didn't want it
(get a 64-bit machine!) but lots of paying customers clamored for it.

(I personally don't have an opinion on whether it's worth bothering with).



Re: C/C++ extensions for array notations

2010-12-13 Thread Joe Buck
On Mon, Dec 13, 2010 at 09:08:39AM -0800, Sebastian Pop wrote:
 Hi,
 
 I would like to ask the opinion of C/C++ maintainers about the extension
 that the Intel compiler proposes for array notations:
 http://software.intel.com/sites/products/documentation/studio/composer/en-us/2011/compiler_c/index.htm#optaps/common/optaps_par_cean_prog.htm
 
 Are there strong opinions against this extension?

It's an interesting concept, looks especially useful for parallel
programming.  It looks like a very complex set of features; I don't
know whether Intel has a document elsewhere that specifies the details.
Because of the high complexity, it could be quite a maintenance burden,
especially if there isn't a rigorous spec.

Interaction with other C++ language features doesn't seem to be described.
I can't think of a standard C++ program that would change meaning with
this extension.  But it's not stated whether there can be a reference to
an array section, what the ABI looks like when an array section is
passed as a C++ function argument, whether there are any issues when the
base type of the array we're taking a section of has constructors and
destructors, etc.

If someone is interested in producing an experimental gcc extension,
great.  But there would be a lot of questions to be answered before
it would be appropriate to accept as a supported feature of GCC.







Re: operator new[] overflow (PR 19351)

2010-12-03 Thread Joe Buck
On Thu, Dec 02, 2010 at 02:47:30PM -0800, Gabriel Dos Reis wrote:
 On Thu, Dec 2, 2010 at 2:20 PM, Joe Buck joe.b...@synopsys.com wrote:
  On Wed, Dec 01, 2010 at 10:26:58PM -0800, Florian Weimer wrote:
  * Chris Lattner:
 
   On overflow it just forces the size passed in to operator new to
   -1ULL, which throws bad_alloc.
 
  This is also what my patch tries to implement.
 
  Yes, but Chris's code just checks the overflow of the multiply.  Your
  patch achieves the same result in a more complex way, by
  computing the largest non-overflowing value of n in
 
  new T[n];
 
  and comparing n against that.  Even though max_size_t/sizeof T is a
  compile-time constant, this is still more expensive.
 
 I would expect max_size_t/sizeof(T) to be actually an integer
 constant that n is compared against.  I would be surprised
 if that one-time comparison is noticeable in real applications that
 new an array of objects.

It's wasted code if the multiply instruction detects the overflow.
It's true that the cost is small (maybe just one extra instruction
and the same number of tests, maybe one more on architectures where you
have to load a large constant), but it is slightly worse code than what
Chris Lattner showed.  Still, it's certainly an improvement on the current
situation and the cost is negligible compared to the call to the
allocator.  Since it's a security issue, some form of the patch should
go in.
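
A sketch of the check being discussed, written with the later-added
__builtin_mul_overflow for brevity (Chris's version reads the multiply's
overflow flag directly):

#include <cstddef>
#include <new>

void *checked_array_alloc(std::size_t n, std::size_t elem_size)
{
    std::size_t bytes;
    // On overflow, force the request to SIZE_MAX; operator new then
    // fails and throws std::bad_alloc, as described above.
    if (__builtin_mul_overflow(n, elem_size, &bytes))
        bytes = static_cast<std::size_t>(-1);
    return ::operator new(bytes);
}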






Re: operator new[] overflow (PR 19351)

2010-12-02 Thread Joe Buck
On Wed, Dec 01, 2010 at 10:26:58PM -0800, Florian Weimer wrote:
 * Chris Lattner:
 
  On overflow it just forces the size passed in to operator new to
  -1ULL, which throws bad_alloc.
 
 This is also what my patch tries to implement.

Yes, but Chris's code just checks the overflow of the multiply.  Your
patch achieves the same result in a more complex way, by
computing the largest non-overflowing value of n in

new T[n];

and comparing n against that.  Even though max_size_t/sizeof T is a
compile-time constant, this is still more expensive.



Re: operator new[] overflow (PR 19351)

2010-11-30 Thread Joe Buck
On Tue, Nov 30, 2010 at 01:49:23PM -0800, Gabriel Dos Reis wrote:
 The existing GCC behaviour is a bit more perverse than the
 C malloc() case as in
 
new T[n]
 
 there is no multiplication that could be credited to careless programmer.
 The multiplication is introduced by GCC.

... which suggests strongly that GCC should fix it.  Too bad the ABI is
frozen; if the internal ABI kept the two values (the size of the type, and
the number of values) separate and passed two arguments to the allocation
function, it would be easy to do the right thing (through bad_alloc if the
multiplication overflows).



Re: Merging gdc (Gnu D Compiler) into gcc

2010-11-09 Thread Joe Buck
On Tue, Nov 09, 2010 at 05:08:44AM -0800, Jakub Jelinek wrote:
 On Tue, Nov 09, 2010 at 09:36:08AM +, Andrew Haley wrote:
   The D specific part of gdc is already GPL, it's just copyrighted by
   Digital Mars. I understand the copyright must be reassigned to the FSF.
   Is it possible to fork the code, and assign copyright of one fork to the
   FSF and leave the other copyrighted by Digital Mars?
  
  The FSF generally allows a grant-back: that is, you assign your code
  to the FSF, which immediately grants you an unlimited licence to do
  whatever you want with it.
 
 Just note that if you'll want to merge back from the FSF tree back to your
 forked tree any changes made there by others, those changes will already be
 copyrighted by the FSF and not Digital Mars.

You might be able to get RMS to agree to an alternative arrangement, but
no one but him could approve it.


Re: Trouble doing bootstrap

2010-10-14 Thread Joe Buck
On Thu, Oct 14, 2010 at 12:47:34PM -0700, Ian Lance Taylor wrote:
  It is not so unlikely that multiple instances of cc1, cc1plus, and f951
  are running simultaneously.  Granted, I haven't done any measurements.
 
 Most projects are written in only one language.  Sure, there may be
 cases where cc1 and cc1plus are running simultaneously.  But I suspect
 those cases are relatively unlikely.  In particular, I suspect that the
 gain when that happens, which is really quite small, is less than the
 cost of using a shared library.  Needless to say, I also have not done
 any measurements.

Projects that use C in some places and C++ in others are common, so a
simultaneous cc1 and cc1plus run will often occur with parallel builds.
However, the mp math libraries are relatively small compared to the size
of cc1 or cc1plus so the memory savings from having one copy instead of
two are minimal.



Re: show size of stack needed by functions

2010-10-13 Thread Joe Buck
On Wed, Oct 13, 2010 at 02:43:18PM -0700, Sebastian wrote:
 On Wed, Oct 13, 2010 H.J. Lu wrote:
  gcc can not dump a callgraph.  Both GNU ld and gold can dump a
  cross-reference table, which is not a call graph but could perhaps be
  used to produce a call graph.  See the --cref option.
 --cref isn't much use. It doesn't tell me which functions call other
 functions, only which modules refer to them.
 
 Static analysis which work on source code are not ideal, either. They
 don't know which functions will be inlined by the compiler.
 
 So it would be nice if gcc could provide a call graph.

gcc compiles only one object file at a time; to produce a call graph you'd
need data produced by the linker, so gcc alone cannot provide one.

To get a call graph, you'd need to combine data produced by the compiler
(with stack-usage) and data produced by the linker (--cref).  It probably
wouldn't be too hard to produce such a tool using a scripting language
(perl or python, say) to parse the outputs from the compile and link
steps.



Re: Where are the new GCC releases?

2010-09-22 Thread Joe Buck
On Wed, Sep 22, 2010 at 10:49:58AM -0700, Artem S. Tashkinov wrote:
 Hello,
 
 Something tells me that GCC 4.4.5 and 4.5.2 should have been
 released a long time ago, but I don't even see regular GCC
 status updates. Are all release managers on leave?

Who or what is this "something" that tells you that?  4.5.1 was released
August 8th.  It would be very unusual to see a 4.5.2 this quickly, and
no schedule was announced.

For 4.4.5, see http://gcc.gnu.org/ml/gcc/2010-09/msg00146.html though
this is not a promise (it would be good to hear about whether the testing
Jakub referred to shows performance degradations or not).

(note that I'm only an experienced observer here and have nothing to do
with decisions about when things are ripe for release).




Re: Merging Apple's Objective-C 2.0 compiler changes

2010-09-09 Thread Joe Buck
On Thu, Sep 09, 2010 at 02:11:43PM -0700, Chris Lattner wrote:
 On Sep 9, 2010, at 12:19 PM, Jack Howarth wrote:
   Perhaps a rational approach would be to contact whoever at Apple
  currently is charged with maintaining their objc languages about the issue.
 
 Apple does not have an internal process to assign code to the FSF anymore.  I 
 would focus on the code that is already assigned to the FSF.

To clarify, anything not checked in on gcc.gnu.org somewhere must be
assumed to be copyright Apple, not copyright FSF, and has not been
contributed, and Apple has no plans to contribute more code.  However,
anything on gcc.gnu.org has been contributed.

I understand that the main issue is that Apple objects to GPLv3, but
the exact reason doesn't necessarily matter that much.




Re: GFDL/GPL issues

2010-08-04 Thread Joe Buck
On Wed, Aug 04, 2010 at 12:21:05AM -0700, Benjamin Kosnik wrote:
 
  So one way to move forward is to effectively have two manuals, one
  containing traditional user-written text (GFDL), the other containing
  generated text (GPL).  If you print it out as a book, the generated
  part would just appear as an appendix to the manual, it's mere
  aggregation.
 
 This is not acceptable to me. 
 
 You have just described the status quo,
 what we are already doing. It is very difficult to link api
 references to manual references in two separate documents. What I want
 to do is full integration, and not be forced into these aggregations.
 
 And I am being denied. 

You are being denied by RMS.  He controls the copyright, the SC has no
legal say, and he's stubborn as hell.


Re: GFDL/GPL issues

2010-08-04 Thread Joe Buck
On Wed, Aug 04, 2010 at 10:34:51AM -0700, Alfred M. Szmidt wrote:
You are being denied by RMS.  He controls the copyright, the SC has
no legal say, and he's stubborn as hell.
 
 When presented with weak arguments, then yes he will be stubborn but
 rightly so.  
 
 I don't see what the problem is with two manuals, from a users
 perspective I actually prefer that and doing cross referencing between
 manuals in texinfo is easy.

OK, let's say Don Knuth decides he wants to spend his retirement
contributing to GNU.  RMS is effectively saying that literate
programming is banned from the GNU project and Knuth can just go away if
he doesn't like it (and yes, requiring GFDL for documentation and GPL for
code is equivalent to banning literate programming).  This is an
anti-software-freedom argument, an attempt by one man to impose his
personal taste.

For a class library, documentation generators are really the only
reasonable way to provide a maintainable manual.  You need to make
sure that every inheritance relationship is described correctly, and
you need to make sure that, as interfaces change, they are described
consistently and accurately.  The best way to achieve that is to
auto-generate the information.  Sure, as a *user* it works equally
well for you if the maintainers have worked three times as hard to
do by hand what could be done by computer, but there's a high cost.


Re: GFDL/GPL issues

2010-08-04 Thread Joe Buck
On Wed, Aug 04, 2010 at 02:12:18PM -0700, Paolo Bonzini wrote:
 However, until there is a possibility to relicense anything GPL->GFDL I
 cannot disagree.  In fact, since the GFDL is more restrictive, it is the
 same thing as the Affero GPL.

No, because there is explicit language in the Affero GPL and GPL3 to
prevent license incompatibility.




Re: GFDL/GPL issues

2010-08-03 Thread Joe Buck
On Mon, Aug 02, 2010 at 05:51:13PM -0700, Paul Koning wrote:
 gcc and gccint docs are actually pretty reasonable.  (Certainly gccint is 
 vastly better than some of its siblings, like gdbint.)  But very little of it 
 is generated and very little of what comes to mind as possible subject matter 
 is suitable for being generated.

RMS explicitly blessed generated cross-references and the like under the
GPL.

So one way to move forward is to effectively have two manuals, one
containing traditional user-written text (GFDL), the other containing
generated text (GPL).  If you print it out as a book, the generated
part would just appear as an appendix to the manual, it's mere
aggregation.



Re: GFDL/GPL issues

2010-07-29 Thread Joe Buck
On Thu, Jul 29, 2010 at 01:20:45PM -0700, Brian Makin wrote:
 Or to move to a better foundation?  It seems to me that gcc has had various 
 issues for various reasons for quite a while now.  RMS is all for tightly 
 controlled yet freely distributable software.
 Maybe it's time to throw more effort behind something like LLVM?

This is the gcc development list.  If you want to contribute to LLVM,
that's fine, but if so you're on the wrong list.



Re: GFDL/GPL issues

2010-07-27 Thread Joe Buck
On Tue, Jul 27, 2010 at 08:53:48AM -0700, Mark Mitchell wrote:
 I believe that the right fix (short of simply abandoning the GFDL, which
 would be fine with me, but is presumably not going to pass muster with
 RMS) is a revision to the GPL that explicitly permits relicensing GPL'd
 content under the GFDL, by anyone.  Movement in that direction should
 not be of concern to the FSF; the point of the GFDL was to prevent
 people removing the FSF's philosophical statements in its manuals, not
 to prevent GPL'd content from being used in manuals.

RMS already rejected the idea of dual-licensing just GCC (GPL/GFDL) to
deal with this problem, now you're asking to effectively dual-license all
GCC (v3.1?) code that way.  Even if he would be willing to consider it
(which I doubt), he'd want to have attorneys examine all the legal
consequences so another year will go by.

We might need to go in the other direction (less radical, but enough to
solve the immediate problem).  What if only constraints files are
dual-licensed (GPL3+ or GFDL) for now?  Then documentation can be
generated from them and we've at least solved that problem.  If RMS agrees
to that and sees that the world doesn't end, maybe he'll be open later on
to opening this door wider.





Re: GFDL/GPL issues

2010-07-22 Thread Joe Buck
On Thu, Jul 22, 2010 at 04:36:46PM -0700, Mark Mitchell wrote:
 Steven Bosscher wrote:
 
  2. Can we move GPL'd code into GFDL'd manuals, or copy text from GFDL's
  manuals into GPL'd code, or auto-generated GFDL's manuals from GPL'd code?
 
  This got complicated; see previous postings.  But, it's not relevant to
  your question, since you're not trying to do that.
  
  I would like to do this for the constraints.md files, but it's not
  clear to me right now whether this is allowed or not. What do you
  think?
 
 I think it's allowed, but not a good idea, due to the fact that I think
 it creates a trap for people.
 
 The FSF has said that it's OK for *us* to do it, in the FSF repository,
 because the FSF can itself relicense code.  But, it's said that it's not
 OK for third parties to do it, because they can't.  And, the natural way
 for us to do it is via generator programs.  This creates a situation
 where a third party could rerun the generator program and end up with
 something they couldn't distribute.  That seems very tricky to me.
 
 I believe that the only real fix here is (a) for the FSF to abandon the
 GFDL, and relicense manuals under the GPL, or (b) for the FSF to add an
 exception to the GFDL, making it compatible with the GPL in some way.
 However, I have no evidence that the FSF is considering either of these
 ideas; RMS didn't provide encouraging feedback when I made such suggestions.

RMS is unlikely to abandon the GFDL because the features that many object
to as non-free are intentionally chosen, in part to make sure that he can
get his message out even in situations where a distributor would not agree
with that message.  I think he hasn't gotten over ESR's attempts in the
late 90s to write him out of history, so he thinks he has to force people
to carry his message along with the GNU tools.

However, if we have text that is entirely generated from a GPL program
by some kind of generator program, that text can be distributed under
the GPL.  It just can't be combined with GFDL text, except by mere
aggregation (you can print the two manuals one after the other as
chapters, or publish them both from the same web site).

RMS didn't object to what he called a cross reference or an index,
generated this way, to be distributed under the GPL.

Not a great solution, but perhaps it can be made to work for a while.


Re: Massive performance regression from switching to gcc 4.5

2010-06-25 Thread Joe Buck
On Fri, Jun 25, 2010 at 06:10:56AM -0700, Jan Hubicka wrote:
 When you compile with -Os, the inlining happens only when code size reduces.
 Thus we pretty much care about the code size metrics only.  I suspect the
 problem here might be that normal C++ code needs some inlining to make
 abstraction penalty go away. GCC -Os implementation is generally tuned for
 CSiBE and it is somewhat C centric (that makes sense for embedded world). As a
 result we might get quite noticeable slowdowns on C++ apps compiled with -Os
 (and code size growth too since abstraction is never eliminated). It can be
 seen also at tramp3d (Pooma testcase) where -Os produces a lot bigger and
 a lot slower code.

One would think that in most of the abstraction-penalty cases, the inlined
code (often the direct reading or setting of a class data member) should
be both smaller and faster than the call, so -Os should inline.  Perhaps
there are cases where the inlined version is, say, one or two instructions 
larger
than the version with a call, and this causes the degradation?  If so,
maybe some heuristic could be produced that would inline anyway for
a small function?
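
A sketch of the kind of accessor meant here (hypothetical example):

struct Point {
    int x_;
    int x() const { return x_; }  // trivial accessor
};

int get_x(const Point &p)
{
    // Inlining p.x() replaces a call with a single load, so even at
    // -Os it should be both smaller and faster than the call.
    return p.x();
}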



Re: possible license issue (documentation generated from source) in MELT branch of GCC

2010-05-29 Thread Joe Buck
On Sat, May 29, 2010 at 01:39:44AM -0700, Basile Starynkevitch wrote:
 ... I was told that
 generating a *texi file from (GPLv3+ licensed, FSF copyrighted) source
 code could be incompatible with the GFDL license of gccint.texi.

The SC is trying to work something out with RMS on this (more generally,
it's also an issue for libstdc++ and doxygen).  While I can't make
promises, it seems he's open to coming up with some kind of solution that
would allow this use, ideally without needing to involve lawyers.

Unfortunately these things always take longer than you'd think that they
should.





Re: Does `-fwhole-program' make sense when compiling shared libraries?

2010-05-17 Thread Joe Buck
On Mon, May 17, 2010 at 10:57:31AM -0700, Toon Moene wrote:
 On 05/17/2010 08:08 PM, Dave Korn wrote:
 
   Hi!
 
 PR42904 is a bug where, when compiling a windows DLL using 
  -fwhole-program,
  the compiler optimises away the entire library body, because there's no
  dependency chain related to 'main' to anchor it.

Not a bug, but perhaps the beginning of a reasonable enhancement project.

 Aren't shared library and whole program mutually exclusive concepts ?
 
 The mere fact that you are building a library means that it cannot be 
 the whole program, and because a shared library cannot be determined to 
 have being used by any fixed program, by definition cannot be the whole 
 program.
 
 Or so I'd think.

The concept would need to be extended so that the compiler would be told
exactly what interfaces of the shared library are considered free, and
which are considered internal calls.  Then a -fwhole-library could make
sense.



Re: [sysad...@gnu.org: [gnu.org #572859] [gcc-bugs-h...@gcc.gnu.org: ezmlm warning]]

2010-05-11 Thread Joe Buck
On Tue, May 11, 2010 at 01:12:45PM -0700, Alfred M. Szmidt wrote:
 Not sure where to send this, who is responsible for the mail server
 for gcc.gnu.org?

The admins can be reached at overse...@gcc.gnu.org .



Re: memcpy(p,p,len)

2010-04-30 Thread Joe Buck
On Fri, Apr 30, 2010 at 07:30:33AM -0700, Mark Mielke wrote:
 Just a quick comment that Jan-Benedict's opinion is widely shared by the
 specification and by the Linux glibc manpage:
 
 DESCRIPTION
 The memcpy() function copies n bytes from memory area src to memory
 area dest.  The memory areas should not overlap.  Use memmove(3) if the
 memory areas do overlap.
 
 It doesn't matter if it sometimes works. "Sometimes works" programs are
 "sometimes doesn't work" programs. :-)

The typical memcpy function will fail for overlapping but unequal memory
ranges, but will work for src == dst.  Switching to memmove would degrade
performance, and that should only be done if there is an actual, rather
than a theoretical bug.  Note that for this use, it's not possible (if
the program is valid) for the ranges to overlap but be unequal.

Another alternative is that instead of using memcpy, a specialized
function could be used that has the required property (the glibc
memcpy does).
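
As a sketch of that last alternative (hypothetical function name, and
assuming, as above, that the only possible overlap is exact equality):

#include <string.h>

/* Safe when dst == src: skip the copy entirely in that case and use
   plain memcpy otherwise.  This avoids memmove's direction check in
   the common non-overlapping case. */
static void copy_maybe_same(void *dst, const void *src, size_t n)
{
    if (dst != src)
        memcpy(dst, src, n);
}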


Re: memcpy(p,p,len)

2010-04-30 Thread Joe Buck
On Fri, Apr 30, 2010 at 08:29:19AM -0700, Richard Guenther wrote:
 On Fri, Apr 30, 2010 at 5:05 PM, Joe Buck joe.b...@synopsys.com wrote:
  On Fri, Apr 30, 2010 at 07:30:33AM -0700, Mark Mielke wrote:
  Just a quick comment than Jan-Benedict's opinion is widely shared by the
  specification and by the Linux glibc manpage:
 
  DESCRIPTION
          The memcpy() function copies n bytes from memory area src to memory
          area dest.  The memory areas should not overlap.  Use memmove(3) if
          the memory areas do overlap.
 
  It doesn't matter if it sometimes works. "Sometimes works" programs are
  "sometimes doesn't work" programs. :-)
 
  The typical memcpy function will fail for overlapping but unequal memory
  ranges, but will work for src == dst.  Switching to memmove would degrade
  performance, and that should only be done if there is an actual, rather
  than a theoretical bug.  Note that for this use, it's not possible (if
  the program is valid) for the ranges to overlap but be unequal.
 
  Another alternative is that instead of using memcpy, a specialized
  function could be used that has the required property (the glibc
  memcpy does).
 
 Note that language semantics come in here as well.  The middle-end
 assumes that when an assignment is not BLKmode that the RHS
 will be read before the lhs will be written.  It does not assume so
 otherwise and the behavior is undefined for overlapping *p and *q
 if you do *p = *q.  Thus it is up to the frontend to emit a call to
 memmove in this case (the C++ frontend got bitten by this and
 was fixed).

If the only possibilities are that p == q, or *p and *q do not overlap,
then
  if (p != q)
 memcpy(p, q, n);
would be cheaper than memmove, which has to choose between forward and
backward copying to handle overlap.  However, some memcpy implementations
(including the one in glibc) will do the right thing even without the
test.

If structure copying suddenly produces memmove calls, that would not
be good.



Re: Why not contribute? (to GCC)

2010-04-23 Thread Joe Buck
On Fri, Apr 23, 2010 at 03:35:26PM -0700, Manuel López-Ibáñez wrote:
 On 24 April 2010 00:18, Alfred M. Szmidt a...@gnu.org wrote:
 
  The disclaimers are legally necessary though, the FSF needs a paper
  trail in the case your employer comes back and claims that they have
  copyright over a change.
 
 BTW, in this aspect there is no difference between GCC and LLVM. The
 latter also requires to assign copyright to the University of
 Illinois. If you don't have a copyright disclaimer before contributing
 to LLVM, you are exposing yourself to some future legal troubles.

The main difficulties I've experienced haven't been with the copyright
assignment itself, but the issues surrounding patents in the default
disclaimer language (though I understand that the FSF has negotiated other
language with a number of corporate contributors).


Re: Why not contribute? (to GCC)

2010-04-23 Thread Joe Buck
On Fri, Apr 23, 2010 at 05:05:47PM -0700, Basile Starynkevitch wrote:
 The real issue is not the copyright disclaimer, it is the legal terms 
 inside. Maybe U.Illinois don't use words like "unlimited liability".

Where are you getting this term "unlimited liability" from?
I think that your legal people made a mistake, or no company
would ever agree to contribute code to the FSF.




Re: Why not contribute? (to GCC)

2010-04-23 Thread Joe Buck
On Fri, Apr 23, 2010 at 05:08:02PM -0700, Basile Starynkevitch wrote:
 Joe Buck wrote:
  On Fri, Apr 23, 2010 at 03:35:26PM -0700, Manuel López-Ibáñez wrote:
  On 24 April 2010 00:18, Alfred M. Szmidt a...@gnu.org wrote:
  The disclaimers are legally necessary though, [...]
  
  The main difficulties I've experienced haven't been with the copyright
  assignment itself, but the issues surrounding patents in the default
  disclaimer language (though I understand that the FSF has negotiated other
  language with a number of corporate contributors).
 
 Yes yes yes!
 
 Thanks for the clarification!

But we aren't saying the same thing: you seem to think that people are
opening themselves up to unlimited liability by contributing, and the
issue I encountered was over the generality of the promise not to sue over
any future GCC version even if new capabilities are added.





Re: GCC 4.5.0 Released

2010-04-21 Thread Joe Buck
On Tue, Apr 20, 2010 at 01:22:32AM -0700, Manuel López-Ibáñez wrote:
 Is there anyone against advertising GCC to the fullest extent? The
 problem, as always, is who will do this job. But I don't think nobody
 will be against if you create a GCC blog/tweeter/youtube channel and
 start writing nice articles for various magazines/journals/newspapers.
 People may criticize what you say if it is untrue or inaccurate, but
 per-se more (positive) visibility of GCC is good.

If someone wants to volunteer to write an article about all the delicious
goodness of 4.5.0, that would be cool, and lwn.net and others would
be interested in publishing such a thing.  But the RMs have enough work
to do as is, so it shouldn't be up to Mark to produce a beautifully
written white paper.

As you say, the problem is who will do this job.



Re: RFC: c++ diagnostics

2010-04-06 Thread Joe Buck
On Tue, Apr 06, 2010 at 09:00:16AM -0700, Chris Lattner wrote:
 I wrote a little blog post that shows off some of the things that Clang can 
 do.  It would be great to improve some of GCC/G++'s diagnostics in a similar 
 way:
 
 http://blog.llvm.org/2010/04/amazing-feats-of-clang-error-recovery.html

Some of Chris's examples appear to be regressions, in that gcc's old
bison-based parser did a better job than the current parser.  In
particular, around the time of the 2-3 transition, a rule was added to
catch the fact that

foo bar;

where neither is defined as a type, almost certainly is because of a
missing definition of the type foo, or a mis-spelling (this is what is
going on in several of Chris's examples).  I pushed to get that rule
added because I tried a lot of C++ code that had only been compiled
with g++ 2.9x, and it was filled with uses of STL without std:: because
back then, the standard library wasn't in a namespace and std:: was
magic that was ignored.






Re: RFC: c++ diagnostics

2010-04-06 Thread Joe Buck

  http://blog.llvm.org/2010/04/amazing-feats-of-clang-error-recovery.html
  
  ...As it happens, some C++ diagnostics are better than the
  same diagnostic for C and vice versa.

On Tue, Apr 06, 2010 at 09:45:11AM -0700, Chris Lattner wrote:
 I think all the C examples are also valid C++ code, they should apply equally 
 well, but I admit that I didn't try those on g++ to see how it does.  I 
 figured it also didn't matter much because there has surely been significant 
 progress since gcc 4.2.

Yes, g++ does a better job for some of Chris's examples than gcc does.

For the second example we get

t.c:1: error: 'pid_t' has not been declared

For the third example:
t.c:2: error: 'int64' does not name a type

However, most of the criticisms do apply, and the spell checker is a
very good idea.







Re: BB reorder forced off for -Os

2010-03-23 Thread Joe Buck
 From: Ian Bolton [mailto:bol...@icerasemi.com]
  Is there any reason why BB reorder has been disabled
  in bb-reorder.c for -Os, such that you can't even
  turn it on with -freorder-blocks?

On Tue, Mar 23, 2010 at 12:21:05PM -0700, Paul Koning wrote:
 Does -Os mean optimize even if it makes things a bit bigger or does it
 mean optimize only to make it smaller?  If the latter then the current
 behavior would appear to be the correct one.

The intent of -Os is to say that speed matters less than size.  This
would argue against using any optimization that can increase code size
*by default*.

However, if the user explicitly says -freorder-blocks on the command line,
then he/she is overriding part of -Os, saying that desired behavior is
to do the specified optimization, but otherwise optimize for space.

Also, while some combinations of options might not be possible, if a user
asks for some pass to run with an -f switch and the pass isn't run, there
should at least be a warning to that effect (IMHO).
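
That is, one would expect

% gcc -Os -freorder-blocks -c foo.c

to run the pass, or at least to say that it will not.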



Re: The scope of a for statement

2010-03-05 Thread Joe Buck
On Fri, Mar 05, 2010 at 11:38:23AM -0800, Magnus Fromreide wrote:
 Hello.
 
 I tried to do
 
 for (;; ({ break; }))
   printf("Hello\n");
 
 and got an error message:
 
 error: break statement not within loop or switch

But it only got through the parser, so that this error message
could be generated, because you're using a GNU extension: statements
and declarations in expressions.  That is, ({ break;}) is a GNU
extension.

 when compiling it as C. Given that 9899:1999 §6.8.6.3 says that a break
 statement only shall appear in or as a switch or loop body that is expected.
 
 The problem is that when I compile it as C++ i get the same error message and
 14882:1998 (and n3035) §6.5.3 says that
 
 The for statement
   for ( for-init-statement conditionopt ; expressionopt ) statement
 is equivalent to
   {
 for-init-statement
 while ( condition ) {
statement
expression ;
 }
   }
 
 and then goes on to list some exceptions to this, none of which are of
 importance here.

But in standard ISO C++, ({ break;}) is not a valid expression.

Ideally a GNU extension should be specified as well as the rest of the
standard is specified, but I'm not surprised that this doesn't work.


Re: The scope of a for statement

2010-03-05 Thread Joe Buck
On Fri, Mar 05, 2010 at 02:40:44PM -0800, Magnus Fromreide wrote:
 On Fri, Mar 05, 2010 at 12:06:01PM -0800, Joe Buck wrote:
  On Fri, Mar 05, 2010 at 11:38:23AM -0800, Magnus Fromreide wrote:
   Hello.
   
   I tried to do
   
   for (;; ({ break; }))
  printf("Hello\n");
   
   and got an error message:

...

  Ideally a GNU extension should be specified as well as the rest of the
  standard is specified, but I'm not surprised that this doesn't work.
 
 So you would say this points to a buglet in the specification of statement
 expressions?
 
 Or is it a bug in the C++ implementation, but one that is unimportant as it
 is impossible to detect using standard C++?

Either way, it's low priority, but if you care, I think that the fix
might just be to document that certain uses don't work, and to warn the
user that he/she isn't going to get a very good diagnostic if such uses
are tried.

If I were required to come up with a fix, I would specify that it's not
valid to break out of the statement expression (with a break, continue, or
goto) and thus forbid ({ break;}), not just here but everywhere.  Throwing
exceptions would be OK because ordinary expression evaluation can throw
exceptions.
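
For reference, the original example can be written in strictly conforming
C by moving the exit decision into the controlling expression instead of
breaking out of a statement expression; a sketch:

#include <stdio.h>

int main(void)
{
    /* Prints "Hello" once, like the ({ break; }) version. */
    int done;
    for (done = 0; !done; done = 1)
        printf("Hello\n");
    return 0;
}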


Re: Change x86 default arch for 4.5?

2010-02-21 Thread Joe Buck
On Sun, Feb 21, 2010 at 11:45:49AM -0800, Martin Guy wrote:
  You want to cater for a minority with old hardware. I
   actually expect you'll find that those users are less naive than the
   average gcc user.
 I want to cater for everyone, especially youngsters, learners and the
 poor struggling with whatever they can get their hands on.
 It's not even a rich country/poor country thing: I live in a run down
 industrial area of England where the local kids are gagging for
 anything that works.

Let's step back a bit and look at the tradeoffs.  I see two main
inflection points where the choice of default has a real impact.

First, you have to assume i486 if you want a libstdc++ that supports
locking correctly.  If you default to i386, you screw every user of an
i486-or-later processor by giving them locking that doesn't work, for the
benefit of a processor that hasn't been manufactured in several years, and
was only used in embedded markets for a number of years before that.  To
me, that's a no-brainer; we should assume 486 and ask the tiny minority
trying to make an ancient system work to use the appropriate switches.
Right now, GNU/Linux distros are already doing that.

The second inflection point is being able to assume enough SSE to
generate improved floating point.  Here, it seems less clear that
it's better to change the default, as it's more possible that the
users of older systems that you champion could be impacted.



Re: Change x86 default arch for 4.5?

2010-02-19 Thread Joe Buck
On Thu, Feb 18, 2010 at 06:00:07PM -0800, Tim Prince wrote:
 On 2/18/2010 4:54 PM, Joe Buck wrote:
 
  But maybe I didn't ask the right question: can any x86 experts comment on
  recently made x86 CPUs that would not function correctly with code
  produced by --with-arch=i486?  Are there any?
 
 
 All CPUs still in production are at least SSE3 capable, unless someone 
 can come up with one of which I'm not aware.  Intel compilers made the 
 switch last year to requiring SSE2 capability for the host, as well as 
 in the default target options, even for 32-bit.  All x86_64 or X64 CPUs 
 for which any compiler was produced had SSE2 capability, so it is 
 required for those 64-bit targets.

I'm sure that Intel and AMD haven't made any in ages, I just wanted to
make sure that there are no low-end third-party cores made recently (say,
by Cyrix, VIA, or someone else) that lack atomics.  I guess that the
answer is no.



Re: Change x86 default arch for 4.5?

2010-02-18 Thread Joe Buck
On Thu, Feb 18, 2010 at 02:09:14PM -0800, Jason Merrill wrote:
 I periodically get bitten by bug 34115: a compiler configured without 
 --with-arch on i686-pc-linux-gnu doesn't support atomics.  I think we 
 would only need to bump the default to i486 to get atomic support.  Can 
 we reconsider the default for 4.5?

Is anyone still manufacturing x86 CPUs that don't support the atomic
instructions?



Re: Change x86 default arch for 4.5?

2010-02-18 Thread Joe Buck
On Thu, Feb 18, 2010 at 04:31:37PM -0800, David Daney wrote:
 On 02/18/2010 03:30 PM, Joe Buck wrote:
  On Thu, Feb 18, 2010 at 02:09:14PM -0800, Jason Merrill wrote:
  I periodically get bitten by bug 34115: a compiler configured without
  --with-arch on i686-pc-linux-gnu doesn't support atomics.  I think we
  would only need to bump the default to i486 to get atomic support.  Can
  we reconsider the default for 4.5?
 
  Is anyone still manufacturing x86 CPUs that don't support the atomic
  instructions?
 
 Should it just be a question of 'manufacturing'?  Or should 'using' be a 
 criterion for any decision?

 Not that I disagree with Jason's suggestion, it is probably the right 
 choice.

"using" would be the right criterion if Jason were advocating removing
support for i386, but he only proposed changing the default.

But maybe I didn't ask the right question: can any x86 experts comment on
recently made x86 CPUs that would not function correctly with code
produced by --with-arch=i486?  Are there any?



Re: Support for export keyword to use with C++ templates ?

2010-01-29 Thread Joe Buck
On Fri, Jan 29, 2010 at 06:23:45PM -0800, Michael Witten wrote:
 On Fri, Jan 29, 2010 at 8:05 PM, Paolo Carlini paolo.carl...@oracle.com 
 wrote:
  Even for implementors knowing *very* well both the details of the C++
  standard and the internals of a specific front-end, implementing export
  is an *highly* non-trivial task.
 
 However, I have a gut feeling that at least a restricted version of
 'export' (or a cousin of 'export') could be both useful and trivial to
 implement: Perhaps a limited form could simply automate the
 not-infrequently employed practice of creating a translation unit
 specifically for explicit template instantiations.

The typical implementations have all sorts of problems: where do you store
these extra explicit template expansions?  If you make a database for the
template expansions, you have a locking problem, or experience frequent
corruption or the like; I've had such problems making such systems work
that we've generally resorted to turning it off (e.g. tell Sun's compiler
to expand templates into static functions).

These problems might be overcome, but it hasn't been done because it is
a hard problem.  The main difficulty is building a robust flow that
supports parallel builds and an efficient link pass.
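
For reference, the "translation unit for explicit instantiations" practice
alluded to above looks roughly like this (file and class names hypothetical):

// stack.h: declarations only; the definitions live in stack.tcc, which
// most translation units never see.
template <typename T> class Stack {
  T data[64];
  int top;
public:
  Stack();
  void push(const T &v);
};

// stack_inst.cc: the one translation unit that includes the definitions
// and instantiates them explicitly; every other object file just links
// against the resulting symbols:
//
//   #include "stack.tcc"
//   template class Stack<int>;
//   template class Stack<double>;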




Re: Question about code licensing

2010-01-24 Thread Joe Buck
On Sun, Jan 24, 2010 at 07:00:44AM -0800, Paolo Bonzini wrote:
  I think the main reason is that DMD front end sources are dual licensed
  with GPL and Artistic License.  The DMD backend is not under an open
  source license (personal use only), so the Artistic License is how the
  two are integrated.  The fork is required to allow DMD to continue under
  its current license scheme.
 
  It also means that fixes to the GCC front end would not be copyable to
  the DMD front end going forward.
 
 Strictly speaking, that's not true.  Even if the submitter would still 
 be required to have copyright assignment for the FSF, they could be 
 copyable to the DMD front-end _as long as the submitter himself sends 
 them for inclusion there too_.  This is the practical significance of 
 the license grantback from the FSF to the author.

This is getting off-topic for this list.  Still, if this were the plan
(and I don't know whether it is or not), I think that the FSF would reject
it, because it would implicitly ask all GCC developers to help out with
a proprietary product.

There would also be a huge conflict-of-interest issue if the official
maintainer of the D front end were in a position to accept or reject
patches based not on their technical merit, but on whether the contributor
agrees to separately contribute them under the dual-license scheme, and
his/her employer had an interest in this issue.




Re: Question about code licensing

2010-01-22 Thread Joe Buck
On Fri, Jan 22, 2010 at 05:31:03PM -0800, Jerry Quinn wrote:
 There is renewed interest in getting a D compiler into the GCC sources.
 The most direct route for this to happen is to use the existing Digital
 Mars DMD front end.
 
 The current DMD front end code is GPL licensed, and copyright is owned
 by Digital Mars.  If they were to fork the source, and contribute that
 fork under the current license of GCC, do they still possess the freedom
 to continue to do what they wish with the original code?

The standard FSF contribution paperwork assigns copyright to the FSF and
then grants back a license to the contributor to do whatever they want
with the original code (continue to develop it, distribute under other
terms, embed in proprietary products -- not sure about any restrictions).
I'm not sure whether that would work for what they want to do or not.  If
it would, it's easy.  Otherwise they might be able to make some other
arrangement with the FSF.

Ideally, something could be worked out so they wouldn't feel the need
to continue to maintain a fork.  It's not very efficient.


Re: [PATCH] ARM: Convert BUG() to use unreachable()

2009-12-17 Thread Joe Buck
On Thu, Dec 17, 2009 at 11:06:13AM -0800, Russell King - ARM Linux wrote:
 On Thu, Dec 17, 2009 at 10:35:17AM -0800, Joe Buck wrote:
  Besides, didn't I see a whole bunch of kernel security patches related
  to null pointer dereferences lately?  If page 0 can be mapped, you
  suddenly won't get your trap.
 
 Page 0 can not be mapped on ARM kernels since the late 1990s, and this
 protection is independent of the generic kernel.
 
 Milage may vary on other architectures, but that's not a concern here.

I don't understand, though, why you would want to implement a generally
useful facility (make the kernel trap so you can do a post-mortem
analysis) in a way that's only safe for the ARM port.


Re: detailed comparison of generated code size for GCC and other compilers

2009-12-14 Thread Joe Buck
On Mon, Dec 14, 2009 at 12:36:00PM -0800, John Regehr wrote:
 My opinion is that code containing undefined behaviors is definitely 
 interesting, but probably it is interesting in a different way than 
 functions that are more meaningful.

Optimizations based on uninitialized variables make me very nervous.
If uninitialized memory reads are transformed into don't-cares, then
checking tools like valgrind will no longer see the UMR (assuming that
the lack of initialization is a bug).

Did I understand that icc does this?  It seems like a dangerous practice.


Re: detailed comparison of generated code size for GCC and other compilers

2009-12-14 Thread Joe Buck
On Mon, Dec 14, 2009 at 01:53:30PM -0800, John Regehr wrote:
  Optimizations based on uninitialized variables make me very nervous.
  If uninitialized memory reads are transformed into don't-cares, then
  checking tools like valgrind will no longer see the UMR (assuming that
  the lack of initialization is a bug).
 
  Did I understand that icc does this?  It seems like a dangerous practice.
 
 Yes, it looks like icc does this.  But so does gcc, see below.  There is 
 no add in the generated code.
 
 John Regehr
 
 
 [reg...@babel ~]$ cat undef.c
 int foo (int x)
 {
int y;
return x+y;
 }

I'm less concerned about cases like this, because the compiler will
issue a warning for the uninitialized variable (if -Wall is included).

I would only be worried for cases where no warning is issued *and*
uninitialized accesses are eliminated.
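
A hedged example of that worrying kind of case: taking a variable's address
usually defeats the early uninitialized-use warning, yet the optimizers may
still treat the uninitialized load as a don't-care.

/* Usually no -Wall diagnostic here, since y's address is taken; the
   function may nonetheless be folded to an arbitrary constant, leaving
   nothing for valgrind to flag at run time. */
int foo(void)
{
    int y;
    int *p = &y;
    return *p + 1;
}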



Re: RFC: PR 25137: moving -Wmissing-braces to -Wextra?

2009-11-17 Thread Joe Buck
On Tue, Nov 17, 2009 at 04:07:28PM -0800, Ian Lance Taylor wrote:
 Paolo Carlini paolo.carl...@oracle.com writes:
 
  Ian Lance Taylor wrote:
  OK, to me that seems like an excellent reason to implement a special
  case for the warning here.  For example, perhaps if a struct has only
  one field, and that field is an aggregate, then we don't warn if there
  is only one set of braces.

  Sure, we considered that, you can find traces of that reasoning in the
  audit trail, then lately I noticed the ICC and SunStudio behaviors, and
  that idea appeared to me a tad too sophisticated... but if people agree,
  I can return to it. Do you think that version of the warning should be
  simply the default, right, simply the new behavior of -Wmissing-braces?
 
 I suppose so but I'd be happy to hear other opinions.

I think that the cleanest way is to suppress the warning for structs
with one member, rather than treating tr1::array as a special case, as
Jonathan Wakely suggested.

The point of warnings should be to help people write correct, non-buggy,
portable code, and omitting the outer braces in this case is allowed by
the standard and isn't going to result in unexpected behavior.
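
The single-member-struct case, sketched minimally (this mirrors the layout
of tr1::array):

struct arr3 { int elems[3]; };

arr3 a = { 1, 2, 3 };       // outer braces elided: valid, but warns today
arr3 b = { { 1, 2, 3 } };   // fully braced: no warning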



Re: gccgo: A gcc frontend for Go, a new programming language

2009-11-11 Thread Joe Buck
On Wed, Nov 11, 2009 at 11:26:36AM -0800, Basile STARYNKEVITCH wrote:
 My feeling is that Google's Go (quite a nice language from the slides I just 
 have read) is almost canonically the case 
 for a front-end plugin.

I have some major concerns about this suggestion.  Isn't this a recipe for
getting people to stop contributing changes to gcc?

You seem to want people to use plugins for everything.  I would prefer to
see more limited uses.  Plugins are appropriate for small, specialized
additions to gcc that aren't generally useful enough, or stable enough, to
include in the main gcc distribution.  For example, a specialized static
checker, or a pass to add an unusual kind of instrumentation, or something
to gather statistics on a body of source code.

They weren't intended as a way of attaching complete new front ends
or complete new back ends.  That was the thing that RMS feared the most,
and he had at least some justification: would we have a C++ compiler or
an Objective-C compiler if the companies who employed the original authors
had the alternative of hooking into GCC without contributing their code?
There's some evidence that they would not have.

We currently lack enough plugin hooks to give a complete front end a
stable interface, and I would argue that this is a feature.


Re: Prague GCC folks meeting summary report

2009-10-01 Thread Joe Buck
On Thu, Oct 01, 2009 at 05:00:10PM -0700, Andi Kleen wrote:
 Richard Guenther rguent...@suse.de writes:
 
  The wish for more granular and thus smaller debug information (things like
  -gfunction-arguments which would properly show parameter values
  for backtraces) was brought up.  We agree that this should be addressed at a
  tools level, like in strip, not in the compiler.
 
 Is that really the right level? In my experience (very roughly) -g can turn
 gcc from CPU bound to IO bound (especially considering distributed compiling
 approaches), and dropping unnecessary information in external tools would
 make the IO penalty even worse.

Certainly life can suck when building large C++ apps with -g in an NFS
environment.  Assuming we can generate tons of stuff and strip it later
might not be best.



Re: Compiling the GNU ada compiler on a new platform

2009-08-21 Thread Joe Buck
On Fri, Aug 21, 2009 at 03:40:57PM -0700, Paul Smedley wrote:
 I'm wanting to update the GNU ADA compiler for OS/2... I'm currently
 building GCC 4.3.x and 4.4.x on OS/2 (C/C++/fortran) but for ADA
 configure complains about not finding gnat.  The problem is that the
 only gnat compiled for OS/2 was years ago using a different toolchain
 so it's not suitable.
 
 I assume that at some point in time, ada didn't depend on an existing
 gnat, so if I could find that version, I could compile a version of
 gnat to get me started?? Otherwise it's a bit chicken and egg :(

The alternative solution is to build gnat as a cross-compiler, so it
runs on (say) GNU/Linux and produces gnat code for OS/2.  I haven't
done that for gnat, only for other languages, but perhaps someone can
advise you on how to set that up.  Then you can use the cross-compiler
to build a native compiler.
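
A rough sketch of that flow; the OS/2 target triplet below is only a guess,
so substitute whatever your existing 4.3.x/4.4.x builds identify as:

% # 1. On the GNU/Linux box, build an OS/2-targeted cross-gnat:
% ../gcc-4.4.x/configure --target=i386-pc-os2-emx --enable-languages=ada
% make && make install
% # 2. With that cross-gnat in PATH, configure a compiler that runs on
% #    OS/2 (host = target), then carry the result over:
% ../gcc-4.4.x/configure --host=i386-pc-os2-emx --target=i386-pc-os2-emx \
%       --enable-languages=ada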





Re: order of -D and -U is significant

2009-08-05 Thread Joe Buck
On Tue, Aug 04, 2009 at 05:58:05PM -0700, Vincent Lefevre wrote:
 On 2009-08-04 15:44:05 -0700, Joe Buck wrote:
  But AFAIK neither Posix nor the C89 standard nor the C99 standard
  say anything about -D and -U flags.  It's the Single UNIX specification
  that is the issue, and it refers to a command that is spelled c89,
  or (in later versions) c99, not gcc.
 
 c99 with the mentioned -D and -U flags is specified by:
 
 The Open Group Base Specifications Issue 7
 IEEE Std 1003.1-2008
 
 That's POSIX.1-2008.

OK, clearly I am out of date; the unix specification used to be distinct
from Posix.


Re: order of -D and -U is significant

2009-08-04 Thread Joe Buck
On Tue, Aug 04, 2009 at 08:03:56AM -0700, Tom Tromey wrote:
  Erwin == Unruh, Erwin erwin.un...@ts.fujitsu.com writes:
 
 Erwin In current gcc the order of options -D and -U is significant. The
 Erwin Single Unix(r) Specification explicitly specifies that the order
 Erwin should not matter for the c89 command. It reads (cited from
 Erwin version 2, which is ten years old):
 
 Erwin I did not find a justification for the current gcc
 Erwin behavior.
 
 GCC's behavior is more useful.  And, given its age, I think it would be
 a mistake to change it.
 
 I think if you want the c89 behavior, a wrapper program of some kind
 would be the way to go.

Another alternative would be an extra flag that would turn on conformance
to the spec.  But gcc can't change its default behavior; this would cause
massive breakage.
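
The order-sensitivity is easy to see; gcc processes -D and -U left to
right, so these two commands differ:

% echo FOO | gcc -DFOO=1 -UFOO -E -P -xc -
FOO
% echo FOO | gcc -UFOO -DFOO=1 -E -P -xc -
1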


Re: order of -D and -U is significant

2009-08-04 Thread Joe Buck
On Tue, Aug 04, 2009 at 11:42:51AM -0700, Ross Smith wrote:
 
 On 2009-08-05, at 04:03, Joe Buck wrote:
 
  Another alternative would be an extra flag that would turn on
  conformance
  to the spec.
 
 Traditionally spelled -posixly-correct in other GNU software. This would
 presumably also affect other options, such as making the default -std=c99
 instead of gnu89.

But AFAIK neither Posix nor the C89 standard nor the C99 standard
say anything about -D and -U flags.  It's the Single UNIX specification
that is the issue, and it refers to a command that is spelled c89,
or (in later versions) c99, not gcc.









Re: Compiling programs licensed under the GPL version 2 with GCC 4.4

2009-07-27 Thread Joe Buck
On Mon, Jul 27, 2009 at 05:34:34PM -0700, Russ Allbery wrote:
 f...@redhat.com (Frank Ch. Eigler) writes:
  Robert Dewar de...@adacore.com writes:
 
  Discussion of FSF policy on licensing issues is also off-topic for
  this mailing list.
 
  Perhaps, yet the libgcc exception licensing issues were quite
  prominently discussed right here, and not too many months ago.
  Florian's concern sounds linearly connected to that.  If this is as
  trivial a matter as some people seem to hint, perhaps someone can supply
  a link to a prior discussion for it.
 
 Furthermore, the people Robert is telling him to go ask are not replying
 to their e-mail.  Given that, on-topic or not, I think it's hardly
 surprising for the issue to come up here.  The most effective way to keep
 it from coming up here would seem to be for them to start answering their
 e-mail.

I would suggest that affected distributors contact the SFLC for an opinion
and advice.  I think that they are in the best position to work both with
the FSF and with other free software distributors to determine if there
is a genuine problem, what the scope of it is, and make suggestions
for a resolution.  They've done work for many other free software projects
outside of the FSF.

Continued use of this list to discuss the matter isn't going to produce
any reliable conclusions or good results.


Re: Compiling programs licensed under the GPL version 2 with GCC 4.4

2009-07-26 Thread Joe Buck

 * Joe Buck:
 
On Sat, Jul 25, 2009 at 01:53:40PM -0700, Florian Weimer wrote:
  Kalle Olavi Niemitalo discovered that as an operating system vendor,
  you are not allowed to distribute GPL version 2 programs if they are
  compiled with GCC 4.4.  The run-time library is GPL version 3 or
  later, which is incompatible with GPL version 2, so it is not
  permitted to link this with the GPLv2-only program and distribute the
  result.

I wrote:
  That's incorrect.  The runtime library is GPLv3 or later, but with an
  *exception* that permits linking not only with GPLv2 programs, but
  also with proprietary programs.

On Sat, Jul 25, 2009 at 11:46:51PM -0700, Florian Weimer wrote:
 Eh, this exception doesn't change that the GPLv2 program perceives the
 GPLv3 as incompatible.  Why would it?

Doesn't matter, because the runtime library is not under GPLv3.  It's
under GPLv3 plus the runtime restriction.  That combination is more
permissive than GPLv2 (because of the exceptions it makes).  Therefore,
as far as I can tell, there is no conflict; the combined program has
no restrictions beyond the GPLv2 restrictions.

In particular, the DRM rules don't apply; the more restrictive rules
on patents don't apply.  Unless you can identify a specific restriction
that isn't waived by the runtime exception license, then I don't see
the problem.



Re: Compiling programs licensed under the GPL version 2 with GCC 4.4

2009-07-25 Thread Joe Buck
On Sat, Jul 25, 2009 at 01:53:40PM -0700, Florian Weimer wrote:
 Kalle Olavi Niemitalo discovered that as an operating system vendor,
 you are not allowed to distribute GPL version 2 programs if they are
 compiled with GCC 4.4.  The run-time library is GPL version 3 or
 later, which is incompatible with GPL version 2, so it is not
 permitted to link this with the GPLv2-only program and distribute the
 result. 

That's incorrect.  The runtime library is GPLv3 or later, but with an
*exception* that permits linking not only with GPLv2 programs, but
also with proprietary programs.

The relevant document is the GCC Runtime Library Exception.


Re: -print-* command-line switches misbehave or are misdocumented

2009-07-06 Thread Joe Buck
On Mon, Jul 06, 2009 at 02:35:13PM -0700, Brian O'Mahoney wrote:
 Re: -print-* command-line switches misbehave or are misdocumented
 
 Why not just fix it, or at least document the way it works? "Cutsie, it's a
 developer feature" fools no one and just hands ammunition to the
 anti-Linux and GNU camp; they read these lists too!

Patches are welcome.



Re: Problem on Front-End List Page

2009-06-26 Thread Joe Buck
On Fri, Jun 26, 2009 at 08:59:32AM -0700, Bryce wrote:
   Many IDEs other than the ones that you list on your page of
 front-ends to GCC compiler exist.  One such IDE is XCode 3.1.3, which
 is developed by Apple, Inc.

That's not an oversight.  The intention is to only include free software,
and XCode is proprietary.




Re: (known?) Issue with bitmap iterators

2009-06-26 Thread Joe Buck
On Fri, Jun 26, 2009 at 03:38:31AM -0700, Alexander Monakov wrote:
 1. Add bool field `modified_p' in bitmap structure.
 2. Make iterator setup functions (e.g. bmp_iter_set_init) reset it to
 false.
 3. Make functions that modify the bitmap set it to true.
 4. Make iterator increment function (e.g. bmp_iter_next) assert
 !modified_p.

Sorry, it doesn't work.  Function foo has a loop that iterates
over a bitmap.  During the iteration, it calls a function bar.  bar
modifies the bitmap, then iterates over the bitmap.  It then returns
to foo, which is in the middle of an iteration, which it continues.
The bitmap has been modified (by bar), but modified_p was reset to
false by the iteration that happened at the end of bar.
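
A sketch of a variant that survives this nesting problem: give the bitmap a
modification counter and let each iterator snapshot it (field and function
names hypothetical):

struct bitmap_head {
    /* ... existing fields ... */
    unsigned gen;        /* bumped by every modifying operation */
};

struct bitmap_iterator {
    /* ... existing fields ... */
    unsigned start_gen;  /* the bitmap's gen at iterator-setup time */
};

/* bmp_iter_set_init:    bi->start_gen = head->gen;
   bitmap_set_bit, etc.: head->gen++;
   bmp_iter_next:        gcc_assert (bi->start_gen == head->gen);

   bar's nested iteration records its own start_gen, so it can no longer
   hide bar's modification from foo's outer iteration. */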




Re: Phase 1 of gcc-in-cxx now complete

2009-06-25 Thread Joe Buck
On Thu, Jun 25, 2009 at 03:19:19PM -0700, Joseph S. Myers wrote:
 On Thu, 25 Jun 2009, Ian Lance Taylor wrote:
 
  * Test starting the bootstrap with earlier versions of the compiler to
see which C++ compiler version is required, and document that.
 
 I think the right approach is not documenting observations like that, but
 investigating the causes of failures with older compilers and making it
 build with as wide a range of versions of GCC (and ideally at least one
 non-GCC C++ compiler, probably an EDG-based one such as the Intel
 compiler) as is reasonable.

Microsoft's and Sun's compilers would be more likely to run into issues,
particularly Sun's; Sun has had a policy of preferring solid backward
compatibility to standards compliance, so I've tended to have more
problems getting correct, standard C++ to run on their compiler than on
others.  This is particularly true of template-based code and nested
classes.



Re: Should -Wjump-misses-init be in -Wall?

2009-06-23 Thread Joe Buck

 On Tue, Jun 23, 2009 at 12:43 AM, Alan Modra amo...@bigpond.net.au wrote:
  ..., but I think this warning should be in -Wc++-compat, not -Wall
  or even -Wextra.  Why?  I'd argue the warning is useless for C code,
  unless you care about C++ style.

On Tue, Jun 23, 2009 at 12:35:48AM -0700, Gabriel Dos Reis wrote:
 I do not think it is useless for C99 codes because C99 allows
 C++ style declarations/initialization in the middle of a block.

But if the initialization is skipped and the variable is then used,
won't we get an uninitialized-variable warning?


Re: Should -Wjump-misses-init be in -Wall?

2009-06-23 Thread Joe Buck

 On Tue, Jun 23, 2009 at 11:12 AM, Joe Buck joe.b...@synopsys.com wrote:
  But if the initialization is skipped and the variable is then used,
  won't we get an uninitialized-variable warning?

On Tue, Jun 23, 2009 at 09:32:51AM -0700, Gabriel Dos Reis wrote:
 Did we get any in the cases Ian reported?

Note the second condition I gave: and the variable is then used.
The new warning just tests the first part: the initialization is
skipped.
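
A minimal C99 sketch of the distinction: the goto skips the initialization,
so -Wjump-misses-init fires, but x is never read on that path, so no
uninitialized-variable warning backs it up.

int f(int c)
{
    if (c)
        goto out;
    int x = 42;        /* -Wjump-misses-init: skipped when c is nonzero */
out:
    return c ? 0 : x;  /* x is only read when it was initialized */
}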
 


Re: (known?) Issue with bitmap iterators

2009-06-22 Thread Joe Buck

Richard Guenther wrote:
  It is known (but maybe not appropriately documented) that deleting
  bits in the bitmap you iterate over is not safe.  If it were me, I would
  see if I could make it safe though.

On Mon, Jun 22, 2009 at 10:06:38AM -0700, Jeff Law wrote:
 It's not a huge deal -- what bothers me is that it's not documented.
 Someone thought enough to document that the loop index shouldn't be
 modified in the loop, but didn't bother to mention that the bitmap
 itself shouldn't be modified in the loop.

As a general rule there is a performance cost for making iterators
on a data structure safe with respect to modifications of that data
structure.  I'm not in a position to say what the right solution is
in this case, but passes that iterate over bitmaps without modifying
those bitmaps shouldn't be penalized.  One solution sometimes used is
two sets of iterators, with a slower version that's safe under
modification.





Re: Should -Wjump-misses-init be in -Wall?

2009-06-22 Thread Joe Buck
On Mon, Jun 22, 2009 at 04:51:17PM -0700, Kaveh R. GHAZI wrote:
 I also agree with Robert's comments that all warnings are about valid C,
 with -Wall we diagnose what we subjectively feel is dubious coding
 practice.  Not everyone will agree with what -Wall contains, that's not a
 reason to freeze it.

Right, but it's a cost-benefit tradeoff.

 Now if someone does a test and shows that building the world exposes
 hundreds or even dozens of these warnings, **and** none of them are actual
 bugs, then I would reevaluate my opinion.

I think that this should be the standard: a warning belongs in -Wall if
it tends to expose bugs.  If it doesn't, then it's just somebody's idea
of proper coding style but with no evidence in support of its correctness.

A -Wall warning should expose bugs, and should be easy to silence in
correct code.


Re: increasing the number of GCC reviewers

2009-06-09 Thread Joe Buck
On Tue, Jun 09, 2009 at 10:54:06AM -0700, Adam Nemet wrote:
 Andrew Haley a...@redhat.com writes:
  We need something more like "I think Fred Bloggs knows gcc well enough
  to approve patches to reload" or "I am Fred Bloggs and I know gcc well
  enough to approve patches to reload".
 
 And whom should such email be sent to?  The SC is best reached on gcc@
 but I don't think that recommending someone publicly is necessarly a
 good idea.  E.g. what if the SC does not appoint the person; does that
 mean that the SC decided that he or she was not qualified enough?

 IMO the best way would be to nominate someone to the SC directly and
 then if the SC decides to support the nomination they can check with the
 person if he or she would accept the appointment.

You could contact any SC member by private mail if you think the topic
is too sensitive to discuss in public.  It would work best to pick an
SC member who is familiar with the nominee's work (and we'd have to
know that Fred Bloggs wants the job).




Re: VTA merge?

2009-06-08 Thread Joe Buck
On Mon, Jun 08, 2009 at 02:03:53PM -0700, Alexandre Oliva wrote:
 On Jun  8, 2009, Diego Novillo dnovi...@google.com wrote:
 
  - Performance differences over SPEC2006 and the other benchmarks
we keep track of.
 
 This one is trivial: none whatsoever.  The generated code is the same,
 and it *must* be the same.  Debug information must never change the
 generated code, and VTA is all about debug information.  There's a lot
 of infrastructure to ensure that code remains unchanged, and
 -fcompare-debug testing backs this up.  It doesn't make much sense to
 run the same code twice to verify that it performs the same, does it?

I haven't kept careful track, but at one point you were talking about
inhibiting some optimizations because they made it harder to keep the
debug information precise.  Is this no longer an issue?  Do you require
that any optimizations that are now in the trunk be disabled?
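
For reference, the -fcompare-debug testing mentioned above can be run by
hand; roughly, the option compiles the file twice, with debug info toggled,
and fails if the final assembly differs:

% gcc -O2 -g -fcompare-debug -S foo.c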


Re: LLVM as a gcc plugin?

2009-06-05 Thread Joe Buck


 On Fri, Jun 5, 2009 at 12:40 PM, Andrew Nisbet a.nis...@mmu.ac.uk wrote:
  Hello,
  I am interested in developing LLVM functionality to support the
  interfaces in GCC ICI.

On Jun 5, 2009, at 3:43 AM, Steven Bosscher wrote:
  GCC != LLVM.  And this is a GCC list. Can LLVM topics please be
  discussed on an LLVM mailing list?

On Fri, Jun 05, 2009 at 09:48:52AM -0700, Chris Lattner wrote:
 How is LLVM any different than another external imported library (like
 GMP or MPFR) in this context?

GMP and MPFR are required components of GCC, and every developer has to
deal with them.  For interfacing between GCC and LLVM, the experts who'll
be able to answer the questions are generally going to be found on the
LLVM lists, not the gcc list, and those (like you) who participate on
both lists, well, you're on both lists.

So as a practical matter, it seems that LLVM lists are more suitable.
If it's ever decided that LLVM becomes a required piece of GCC, like
GMP and MPFR, that would change.


Re: Checking for the Programming Language inside GCC

2009-04-28 Thread Joe Buck
On Tue, Apr 28, 2009 at 10:50:52AM -0700, Shobaki, Ghassan wrote:
 In some optimization passes it may be useful to know the programming
 language that we are compiling. Is there a way to get that information
 in the middle end and back end?

Is that really a good idea?  If a particular optimization, on the same
middle-end structure, is valid in one language and not in another,
that would suggest a problem with the implementation.


Re: -O3 and new optimizations in 4.4.0

2009-04-24 Thread Joe Buck
On Fri, Apr 24, 2009 at 01:34:37PM -0700, Andi Kleen wrote:
 Robert Dewar de...@adacore.com writes:
 
  Sebastian Pop wrote:
  On Fri, Apr 24, 2009 at 08:12, Robert Dewar de...@adacore.com wrote:
  What would we have to do to make PPL and CLooG required to build GCC?
  Why would that be desirable? Seems to me the current situation is
  clearly preferable.
  To enable loop transforms in -O3.
 
  To me, you would have to show very clearly a significant performance
  gain for typical applications to justify the impact of adding
  PPL and CLooG. I don't see it. If you want these transformations
  you can get them, why go to all this disruptive effort for the
  default optimization case?
 
 I think his point was that they would be only widely used if they
 were part of -O3 because likely most users are not willing to
 set individual -f optimization flags.

Agreed.  It might be a wise idea to include them in -O3 in 4.5,
provided that we are confident by then that they are a consistent win.
At that stage the libraries could be required.
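
Until then the transforms stay opt-in; with a PPL/CLooG-enabled build they
are requested individually, e.g.:

% gcc -O3 -floop-interchange -floop-block -floop-strip-mine matmul.c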


Re: GCC 4.5: nonconstant array index in initializer error

2009-04-23 Thread Joe Buck

On Thu, Apr 23, 2009 at 1:38 PM, Denis Onischenko
 denis.onische...@gmail.com wrote:
  The minimal code example is following:
 
 
  extern unsigned int __invalid_size_argument;
  #define TYPECHECK(t)( sizeof(t) == sizeof(t[1]) ?  sizeof(t) :
  __invalid_size_argument )
 
  static int arr[] = {
 [TYPECHECK(int)] = 0,
  };
 
  int main()
  {
   return 0;
  }
 
 
  command line is: gcc test.c
 
  GCC 4.5.0 revision 146607 compiles this code with following errors:
  test.c:5: error: nonconstant array index in initializer
  test.c:5: error: (near initialization for 'arr')
 
  released GCC 4.4.0 compiles without any errors

On Thu, Apr 23, 2009 at 04:58:17PM -0700, James Dennett wrote:
 A diagnostic is required in standards-conforming modes, as
 TYPECHECK(int) is not an integral constant expression (because it
 includes a non-constant in a potentially-evaluated operand).

But there are cases where we avoid issuing such a diagnostic by
default, without flags such as -pedantic or calling out a specific
language standard.

The compiler is not supposed to be pedantic by default.  A standards
document saying that a diagnostic is required should not be the end
of the story, especially when we're talking about important, widely
used code bases.
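
One way to keep the check while making the index a genuine integral constant
expression is the kernel-style negative-bitfield-width trick; a hedged GNU C
sketch, not necessarily what the affected code base should adopt:

/* Encode failure as an invalid (negative) bitfield width instead of a
   reference to an extern object; the sizeof remains a constant
   expression, and GCC gives the anonymous empty struct size zero. */
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int : -!!(e); }) * 0)
#define TYPECHECK(t) (sizeof(t) + BUILD_BUG_ON_ZERO(sizeof(t) != sizeof(t[1])))

static int arr[] = {
    [TYPECHECK(int)] = 0,
};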


