On Mon, 6 Jun 2005, Sam Lauber wrote:
There has been a lot of work recently on making GCC output faster code. But
GCC isn't very fast. On my slow 750MHz Linux box (whose PIII is now
R.I.P.), it took a whole night to compile 3.4.3.
The memory of your box is probably too small, the CP
I am sorry for sending another mail. I forgot to CC
gcc@gcc.gnu.org
> You didn't mention what those switches are.
I am using the following options:
1) Some -D options; these are source-specific defines.
2) Some -I options for specifying include files.
3) -Wall
4) -Os (also tried -O4)
> Also, I gcc 3.
Hello folks,
This might have already been addressed, but I
tried searching the GCC mailing list archives
http://gcc.gnu.org/lists.html#searchbox
and Google before posting.
My test file:
$ cat gcc_prob.c
struct a {
    struct {
        struct {
            in
I get a few failures when trying to run the obj-c++ testsuite...
See, e.g., http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg00375.html
This is what I see in the log file and this is all over... :)
Setting LD_LIBRARY_PATH to
.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.lib
> The VRP pass is inside tree-ssa-dom.c for GCC 4.0.
Yup. And it's very very weak.
> GCC 4.1 has a much more powerful VRP pass, which is not related
> to the DOM pass.
Exactly. Hopefully we'll be able to remove the DOM version before
4.1 since the new tree-vrp.c is vastly better.
jeff
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> Similar arguments have been given in support of an undefined order of
>> evaluation; which is absurd, as the specification of a semantic order
>> of evaluation only constrains the evaluation of expressions which would
>> otherw
On Tue, Jun 07, 2005 at 02:38:26AM +0300, [EMAIL PROTECTED] wrote:
> does the 4.0.1 RC1 include the value range propagation (VRP) ssa-based pass
> developed by Diego Novillo?
>
No.
> If not what is the VRP status at the CVS for the C language? Is it basically
> working?
>
Essentially, yes. It'
> >> Intel already handed icc + performance libs to Apple, but from my
> >> experience icc doesn't create any faster code than gcc. Is there
> >> any *recent* benchmark that shows otherwise?
>
> Define "recent".
>
> >> I know that heavy math code is likely to perform better on icc but
> >> this i
> int bar [ 4 * 256 ] = { 0,1,2, ... };
>
> I did not change any compiler option, nor any
> declaration. I still cannot see the difference between those
> two, since the declaration is exactly the same. The only difference
> being a default initialization.
There is a more subtl
Daniel Kegel wrote:
I don't know about everybody else, but the
subject lines are starting to run together for me :-)
Agreed, but will they also support what is wrong with Bugzilla?
or was that GCC and floating point?
Eric
LMAO
I don't know about everybody else, but the
subject lines are starting to run together for me :-)
On Tuesday 07 June 2005 01:44, [EMAIL PROTECTED] wrote:
> If you visit the following:
> http://gcc.gnu.org/gcc-4.0/changes.html
>
> a reference is found to value range propagation pass. However, $GCCHOME/gcc
> directory doesn't contain the required files (e.g. tree-vrp.c).
>
> Is this an addition f
If you visit the following:
http://gcc.gnu.org/gcc-4.0/changes.html
a reference is found to value range propagation pass. However, $GCCHOME/gcc
directory doesn't contain the required files (e.g. tree-vrp.c).
Is this an addition for a scheduled (pre)release, or can I just not find it in the
released
Hi there
does the 4.0.1 RC1 include the value range propagation (VRP) ssa-based pass
developed by Diego Novillo?
If not what is the VRP status at the CVS for the C language? Is it basically
working?
thanks in advance
Nikolaos Kavvadias
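For readers following along, a minimal illustration (mine, not from the thread; the function name is made up) of the kind of redundant test a value range propagation pass can eliminate:

```c
/* Inside the outer branch, VRP knows x lies in [11, INT_MAX], so
 * the inner test (x > 5) is provably true and can be folded away. */
int vrp_example(int x)
{
    if (x > 10) {
        if (x > 5)      /* always true here; VRP can fold it */
            return 1;
    }
    return 0;
}
```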
Eric Botcazou wrote:
Once again, have you actually examined how awful the code we
generate now is?
Yes, I have. Indeed not pretty, but suppose that we managed to cut the
overhead in half, would that make -gnato really more attractive?
Yes, it would definitely make the difference, given the
On Tuesday 07 June 2005 01:13, Robert Dewar wrote:
> > Well, they could do all they might. I'm just waiting for IBM coming
> > forward with a Linux PowerPC64 laptop, so that I can continue to use big
> > endian hardware.
>
> Suggestion, don't hold your breath!
He could try and join the hack-the-x
Laurent GUERBY wrote:
Such algorithms usually require very detailed control of what's going
on at the machine level; given current high-level programming languages,
that means using assembler.
No, that's not true, you might want to look at some of Jim Demmel's
work in this area.
Or that man
Toon Moene wrote:
The first
thing I did after receiving it was wiping out OS X and installing a real
operating system, i.e., Debian.
Is it really necessary to post flame bait like this? Hopefully people
will ignore it.
A big endian system is indispensable if you are a compiler writer,
because
Scott Robert Ladd wrote:
A better question might be: Has Intel provided Apple with an OS X
version of their compiler? If so (and I think it very likely), Apple may
have little incentive for supporting GCC, given how well Intel's
compilers perform.
Well that's probably jumping to a conclusion w
Paul Schlie wrote:
Similar arguments have been given in support of an undefined order of
evaluation; which is absurd, as the specification of a semantic order
of evaluation only constrains the evaluation of expressions which would
otherwise be ambiguous, as expressions which are insensitive
Mirza Hadzic wrote:
>> Intel already handed icc + performance libs to Apple, but from my
>> experience icc doesn't create any faster code than gcc. Is there
>> any *recent* benchmark that shows otherwise?
Define "recent".
>> I know that heavy math code is likely to perform better on icc but
>> th
Sam,
Since you seem very knowledgeable, why does the error disappear when I
initialize the structure?
int bar [ 4 * 256 ] = { 0,1,2, ... };
I did not change any compiler option, nor any declaration.
I still cannot see the difference between those two, since the
declaration
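For what it's worth, the difference being asked about can be sketched like this (hypothetical names, not the poster's code): without an initializer a file-scope array is only a tentative definition, which the compiler may emit as a "common" symbol, the case that -fno-common affects on MacOSX; with an initializer it becomes a full definition placed in the data section.

```c
/* Tentative definition: no initializer, so it may be emitted as a
 * "common" symbol, which Mach-O linkers treat specially. */
int bar_tentative[4 * 256];

/* Full definition: the initializer forces it into .data, so no
 * common symbol is involved -- hence the error disappears. */
int bar_initialized[4 * 256] = { 0, 1, 2 };
```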
> Hello,
>
> I have a question about valid C code. I am trying to compile
> the following code on MacOSX (*). I don't understand what the
> problem is. Could someone please explain to me what is going on?
> Since I declare the variable with extern I should not need to pass
> -fno-common,
On Mon, 2005-05-30 at 23:10 -0400, Robert Dewar wrote:
> Toon Moene wrote:
>
> >> But even this were fixed, many users would still complain.
> >> That's why I think that the Linux kernel should set the CPU
> >> in double-precision mode, like some other OS's (MS Windows,
> >> *BSD) -- but this is o
A big endian system is indispensable if you are a compiler writer,
because little endian hardware hides too many programmer errors
Can you show example(s) where little endian hides errors? Just curious...
Intel already handed icc + performance libs to Apple, but from my experience icc
doesn't
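One classic example of the kind of error meant here (my sketch, not from the thread): reading a wider object through a narrower pointer. For small values this "works" by accident on little-endian machines and silently breaks on big-endian ones.

```c
/* Returns the byte of x at the lowest address.  On little-endian
 * hardware this is the low-order byte (42 for x == 42), so buggy
 * code that narrows an int this way goes unnoticed; on big-endian
 * hardware it is the high-order byte (0 for small values),
 * exposing the bug immediately. */
unsigned int first_byte(int x)
{
    return *(unsigned char *)&x;
}
```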
On Sun, Jun 05, 2005 at 12:41:43PM -0400, Nathanael Nerode wrote:
> * alpha*-*-unicosmk*
> No real update since 2002. If rth, the lone alpha maintainer, is actually
> maintaining it, I guess it should stay; it's not in bad shape. But does
> it really need fixproto?
This port was done by Ro
Samuel Smythe wrote:
It is well-known that Apple has been a significant provider of GCC enhancements.
But it is also probably now well-known that they have opted to drop
the PPC architecture in favor of an x86-based architecture.
Will Apple continue to contribute to the PPC-related componentry o
Hello,
I have a question about valid C code. I am trying to compile the
following code on MacOSX (*). I don't understand what the problem is.
Could someone please explain to me what is going on? Since I declare the
variable with extern I should not need to pass -fno-common, right ?
Thank
In article <[EMAIL PROTECTED]> you write:
>Samuel Smythe wrote:
>> It is well-known that Apple has been a significant provider of GCC
>> enhancements. But it is also probably now well-known that they have
>> opted to drop the PPC architecture in favor of an x86-based
>> architecture. Will Apple con
Daniel Berlin writes:
> Somehow the perl code got screwed up
> Try now
Works like a charm, thanks a lot.
Rainer
On Mon, 2005-06-06 at 21:45 +0200, Rainer Orth wrote:
> I've recently sent a couple of gcc bug reports using gccbug. The latest
> one was
>
> Subject: All libjava execution tests fail on IRIX 6
> Date: Mon, 6 Jun 2005 19:34:48 GMT
>
> Unfortunately, the submissions seem to be silently ignore
FYI for the application my company is developing (integer and bit-field
intensive with very little floating point),
we have found gcc to be 10-30% FASTER than icc8.0.
We were told that this was partially because icc doesn't optimize unsigned
expressions very well
(I'm dubious that this is the ca
I've recently sent a couple of gcc bug reports using gccbug. The latest
one was
Subject: All libjava execution tests fail on IRIX 6
Date: Mon, 6 Jun 2005 19:34:48 GMT
Unfortunately, the submissions seem to be silently ignored: I neither got
the usual confirmation and info on the assigned bug
On Mon, Jun 06, 2005 at 12:17:24PM -0700, Samuel Smythe wrote:
> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue
On Jun 06, 2005 09:26 PM, Scott Robert Ladd <[EMAIL PROTECTED]> wrote:
> Samuel Smythe wrote:
> > It is well-known that Apple has been a significant provider of GCC
> > enhancements. But it is also probably now well-known that they have
> > opted to drop the PPC architecture in favor of an x86-bas
Samuel Smythe wrote:
> It is well-known that Apple has been a significant provider of GCC
> enhancements. But it is also probably now well-known that they have
> opted to drop the PPC architecture in favor of an x86-based
> architecture. Will Apple continue to contribute to the PPC-related
> compon
It is well-known that Apple has been a significant provider of GCC
enhancements. But it is also probably now well-known that they have opted to
drop the PPC architecture in favor of an x86-based architecture. Will Apple
continue to contribute to the PPC-related componentry of GCC, or will such
On Mon, 2005-06-06 at 11:05 +0100, Richard Sandiford wrote:
> Thanks for the summary. It sounds from your message, and particularly
> the quote from RMS, that we should be accepting the patches unless we
> have a particular reason not to trust MIPS to do what they said they'd
> do. I certainly ha
There has been a lot of work recently on making GCC output faster code. But
GCC isn't very fast. On my slow 750MHz Linux box (whose PIII is now
R.I.P.), it took a whole night to compile 3.4.3. On my fast iBook G4 laptop,
to compile just one source file in Perl made me wait long enough f
On Sun, 2005-06-05 at 12:41 -0400, Nathanael Nerode wrote:
> * hppa1.1-*-bsd*
I'm 99.9% sure this can go -- in fact, I just recently found out that
the previous single largest installation of PA BSD boxes recently shut
off its last PA.
jeff
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
>
>> - So technically as such semantics are undefined, attempting to track
>> and identify such ambiguities is helpful; however the compiler should
>> always optimize based on the true semantics of the target, which is
>> what the
> "Nathanael" == Nathanael Nerode <[EMAIL PROTECTED]> writes:
Nathanael> * pdp11-*-* (generic only) Useless generic.
I believe this one generates DEC (as opposed to BSD) calling
conventions, so I'd rather keep it around. It also generates .s files
that can (modulo a few bugfixes I need to g
Richard Sandiford wrote:
Thanks for the summary. It sounds from your message, and particularly
the quote from RMS, that we should be accepting the patches unless we
have a particular reason not to trust MIPS to do what they said they'd
do.
I'm hesitant to color it too strongly, in that I had a
Nathanael Nerode wrote:
I seem to remember asking about this some years ago, and finding out
that its existence was not documented anywhere public, which it still
isn't. It's also odd that a VxWorks simulation environment is
sufficiently different from VxWorks that it needs a different configur
E. Weddington wrote:
Nathanael Nerode wrote:
Propose to stop using fixproto immediately:
avr-*-*
I'm not even sure exactly what fixproto is supposed to do, but I
*highly* doubt that it is needed for the AVR target. The AVR target is
an embedded processor that uses its own C library, av
Milind Katikar <[EMAIL PROTECTED]> writes:
> Hello,
>
> I was using gcc 2.9 (host - i386-pc-cygwin, target -
> sparclet-aout). Recently I have started using gcc 3.2
> (same host and target) primarily to get the benefit of
> size reduction optimizations in gcc. However I
> observed an increase in siz
Joseph S. Myers wrote:
> If the required version of any tool is changed then the documentation of
> that version in install.texi needs to be updated accordingly.
Here is an updated patch.
> The generated files in CVS will also need to be regenerated on commit.
Yes. The one who commits it for me
Daniel Kegel wrote:
> So, I'm looking around for other reports of performance
> regressions in gcc-4.0. So far, the only other ones I've
> heard of are those reported in http://www.coyotegulch.com/reviews/gcc4/
> I'm tempted to have a student try reproducing and boiling down the POV-Ray
> performa
Paul Schlie wrote:
- So technically as such semantics are undefined, attempting to track
and identify such ambiguities is helpful; however the compiler should
always optimize based on the true semantics of the target, which is
what the undefined semantics truly enable (as pretending a targ
On 6/6/05, Segher Boessenkool <[EMAIL PROTECTED]> wrote:
> > Better use a union for the (final) conversion, i.e.
> >
> > int conv(unsigned char *c)
> > {
> >     unsigned int i;
> >     union {
> >         unsigned int u;
> >         int i;
> >     } u;
> >
> >     u.u = 0;
> >     for (i = 0; i < s
Hello,
I was using gcc 2.9 (host - i386-pc-cygwin, target
sparclet-aout). Recently I have started using gcc 3.2
(same host and target) primarily to get the benefit of
size reduction optimizations in gcc. However I
observed an increase in size for many applications when
compiled with gcc3.2. All swit
On Mon, 6 Jun 2005, Bruno Haible wrote:
> The files cp/cfns.gperf and java/keyword.gperf are - as distributed -
> processed by gperf-2.7.2 or with particular options. The use of gperf-3.0.1
> (released in 2003) can create smaller and faster hash tables, with fewer
> command line options:
If the re
On 6/6/05, Georg Bauhaus <[EMAIL PROTECTED]> wrote:
> Daniel Kegel wrote:
>
> > So, I'm looking around for other reports of performance
> > regressions in gcc-4.0.
>
> I came across this one:
>
> int foo(int a, int b)
> {
>     return a + b;
> }
>
> int bar()
> {
>     int x = 0, y =
> From: Andrew Pinski <[EMAIL PROTECTED]>
>>> No they should be using -ftrapv instead which traps on overflow and then
>>> make sure they are not trapping when testing.
>>
>> - why? what language or who's code/target ever expects such a behavior?
> Everyone's who writes C/C++ should know that over
Jan-Benedict Glaw wrote:
> On Sun, 2005-06-05 12:41:43 -0400, Nathanael Nerode <[EMAIL PROTECTED]> wrote:
>
>
>>* vax-*-bsd*
>>* vax-*-sysv*
>> If anyone is still using these, GCC probably doesn't run already. I
>> certainly haven't seen any test results. Correct me if I'm wrong!
>> And afte
Better use a union for the (final) conversion, i.e.

int conv(unsigned char *c)
{
    unsigned int i;
    union {
        unsigned int u;
        int i;
    } u;

    u.u = 0;
    for (i = 0; i < sizeof u; i++)
        u.u = (u.u << 8) + c[i];
    return u.i;
}
This is not portable, though; access
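For comparison, a portable variant is possible without type punning at all. This is my sketch, not from the thread; it assumes 8-bit chars, a 4-byte int, and that the input bytes encode a two's complement value.

```c
#include <limits.h>

/* Big-endian bytes -> signed int, done arithmetically: accumulate
 * into an unsigned value, then map the bit pattern to the signed
 * range by hand instead of reading through a union. */
int conv_portable(const unsigned char *c)
{
    unsigned int u = 0;
    unsigned int i;

    for (i = 0; i < sizeof u; i++)
        u = (u << 8) | c[i];
    if (u <= (unsigned int)INT_MAX)
        return (int)u;
    /* Negative case, written to avoid signed overflow. */
    return -(int)(UINT_MAX - u) - 1;
}
```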
Daniel Kegel wrote:
So, I'm looking around for other reports of performance
regressions in gcc-4.0.
I came across this one:
int foo(int a, int b)
{
    return a + b;
}

int bar()
{
    int x = 0, y = 10;
    int c;
    for (c = 0; c < 123123123 && x > -1; ++c, --y)
On 6/6/05, Segher Boessenkool <[EMAIL PROTECTED]> wrote:
> > There's also a fair amount of code which relies on -1 ==
> > (int)0x.
> >
> > Or is there any truly portable and efficient way to convert a sequence
> > of bytes (in big-endian order) to a signed integer?
>
> Of course there is.
Hi,
The files cp/cfns.gperf and java/keyword.gperf are - as distributed -
processed by gperf-2.7.2 or with particular options. The use of gperf-3.0.1
(released in 2003) can create smaller and faster hash tables, with fewer
command line options:
* cp/cfns.gperf: If you drop the options "-k '1-6,$'
Thanks for the summary. It sounds from your message, and particularly
the quote from RMS, that we should be accepting the patches unless we
have a particular reason not to trust MIPS to do what they said they'd
do. I certainly have no reason not to trust MIPS, so I guess that
means the patches ca
Mark Mitchell wrote:
> Daniel Jacobowitz wrote:
>
>> On Sun, Jun 05, 2005 at 12:41:43PM -0400, Nathanael Nerode wrote:
>>
>>> * mips-wrs-windiss
>>> * powerpc-wrs-windiss
>>> I don't think these were supposed to be in the FSF tree at all, were
>>> they?
>>
>>
>>
>> This question belongs more in t
> Once again, have you actually examined how awful the code we
> generate now is?
Yes, I have. Indeed not pretty, but suppose that we managed to cut the
overhead in half, would that make -gnato really more attractive?
> Well of course that's just a plain bug, should be addressed as such.
> Obvi
There's also a fair amount of code which relies on -1 ==
(int)0x.
Or is there any truly portable and efficient way to convert a sequence
of bytes (in big-endian order) to a signed integer?
Of course there is. Assuming no padding bits:
int conv(unsigned char *c)
{
    unsigned int
René Rebe wrote:
I think these massive -Os regressions on C++ code as experienced in tramp3d and
botan should be investigated. However I have not looked for filed PRs or
more recent snapshots of 4.0 so far ...
Oh good, so it's not just me. ;)
I opened PR21314 a while back but ended up chickeni
Hi,
On Monday 06 June 2005 09:01, Daniel Kegel wrote:
> I recently worked with a UCLA student to boil down
> a reported openssl performance regression with gcc-4.0
> to a small standalone case (see http://gcc.gnu.org/PR19923).
> We have a bit more followup to do there, but it seems
> to have been
I recently worked with a UCLA student to boil down
a reported openssl performance regression with gcc-4.0
to a small standalone case (see http://gcc.gnu.org/PR19923).
We have a bit more followup to do there, but it seems
to have been a good use of the student's time.
So, I'm looking around for oth