Re: [Ada] Read directory in Ada.Directories.Start_Search rather than Get_Next_Entry

2022-01-08 Thread Duncan Sands via Gcc-patches
Hi Pierre-Marie, is this really a good idea?  If a directory has millions of 
files in it (rare, but I've seen it) this may consume a lot of memory.  Also, if 
using a slow medium like a network file system, reading the entire directory 
contents may take a long time.  Finally, you aren't really solving the race 
condition, you're just making the window smaller, right?  After all, if I 
understand right you are still using readdir, you just use it during a shorter 
time period.


Best wishes, Duncan.

On 07/01/2022 17:27, Pierre-Marie de Rodat via Gcc-patches wrote:

The Ada.Directories directory search function is changed so that the contents
of the directory are now read in Start_Search instead of in
Get_Next_Entry.  Start_Search now stores the result of the directory
search in the search object, with Get_Next_Entry returning results from
the search object. This differs from the prior implementation where
Get_Next_Entry would query the directory directly for the next item
using the POSIX readdir function.

The problem with building Get_Next_Entry around the readdir function is that
POSIX does not specify the behavior of readdir when files are added to or
removed from the directory being read. For example: on most systems,
deleting files from the folder being read does not impact readdir.
However, some systems, like RTEMS and HFS+ volumes on macOS, will return
NULL instead of the next item in the directory if the current item
returned by readdir is deleted.

To avoid this issue, the contents of the directory are read in
Start_Search and the user is given a copy of these results.
Consequently, any subsequent modification to the directory does not
affect the ability to iterate through the results. This approach is the
same as that taken by the popular fts C functions.
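
For illustration, a rough C sketch of the same snapshot idea (not the GNAT code
itself; it just assumes POSIX opendir/readdir and omits error handling and the
pattern/filter matching that Start_Search performs):

#include <dirent.h>
#include <stdlib.h>
#include <string.h>

/* Snapshot every entry name up front, so that later iteration is
   immune to concurrent additions/removals in the directory.  */
static char **snapshot_dir (const char *path, size_t *count)
{
  DIR *d = opendir (path);
  char **names = NULL;
  size_t n = 0;
  struct dirent *e;

  if (d == NULL)
    return NULL;
  while ((e = readdir (d)) != NULL)
    {
      names = realloc (names, (n + 1) * sizeof *names);
      names[n++] = strdup (e->d_name);
    }
  closedir (d);
  *count = n;
  return names;  /* "Get_Next_Entry" then just walks this array.  */
}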

Tested on x86_64-pc-linux-gnu, committed on trunk

gcc/ada/

* libgnat/a-direct.adb (Search_Data): Remove type.
(Directory_Vectors): New package instantiation.
(Search_State): New type.
(Fetch_Next_Entry): Remove.
(Close): Remove.
(Finalize): Rewritten.
(Full_Name): Ditto.
(Get_Next_Entry): Return next entry from Search results vector
rather than querying the directory directly using readdir.
(Kind): Rewritten.
(Modification_Time): Rewritten.
(More_Entries): Use Search state cursor to determine if more
entries are available for users to read.
(Simple_Name): Rewritten.
(Size): Rewritten.
(Start_Search_Internal): Rewritten to load the contents of the
directory that matches the pattern and filter into the search
object.
* libgnat/a-direct.ads (Search_Type): New type.
(Search_Ptr): Ditto.
(Directory_Entry_Type): Rewritten to support new Start_Search
procedure.
* libgnat/s-filatt.ads (File_Length_Attr): New function.





Re: [Ada] Improve performance of 'Image with enumeration types.

2017-09-26 Thread Duncan Sands

On 09/26/2017 12:17 PM, Eric Botcazou wrote:

By the way, why not always do this "inlining", even when not optimizing?


Because this generates more bloated code and an inferior debugging experience.


This is a trick question, because when you answer "because XYZ" I will then
reply "but XYZ is a common reason that people disable inlining when
optimizing, so shouldn't you only do it when inlining is enabled?" :)


People ought not to disable inlining when optimizing though.


I've seen a few projects disable inlining when optimizing because it can 
generate bloated code and an inferior debugging experience :)  But I won't argue 
the point any further (that this should really be conditioned on inlining being 
enabled, not on optimization being enabled) as while I'm probably right in 
theory, in practice I doubt it will actually cause trouble for anyone.


Best wishes, Duncan.


Re: [Ada] Improve performance of 'Image with enumeration types.

2017-09-26 Thread Duncan Sands

Hi Arno,


it looks like this is in essence inlining the run-time library
routine. In which case, shouldn't you only do it if inlining is
enabled?  For example, it seems rather odd to do this if
compiling with -Os.


Actually, measurements showed that this instance of inlining is a
win for both performance and code size, so it's a good candidate
even for -Os. Note that we inline string concatenation routines
for the same reason.


thanks for explaining.  I think it merits a comment in the code though.

By the way, why not always do this "inlining", even when not optimizing?


That's a practical trade off, based on our past experience.


if it's a trade-off then there must be a down-side.  What is the down-side?

Best wishes, Duncan.


Re: [Ada] Improve performance of 'Image with enumeration types.

2017-09-26 Thread Duncan Sands

Hi Pierre-Marie,

On 09/26/2017 11:30 AM, Pierre-Marie de Rodat wrote:

On 09/25/2017 02:47 PM, Duncan Sands wrote:
it looks like this is in essence inlining the run-time library routine. In 
which case, shouldn't you only do it if inlining is enabled?  For example, it 
seems rather odd to do this if compiling with -Os.


Actually, measurements showed that this instance of inlining is a win for both 
performance and code size, so it’s a good candidate even for -Os. Note that we 
inline string concatenation routines for the same reason.


thanks for explaining.  I think it merits a comment in the code though.

By the way, why not always do this "inlining", even when not optimizing?

This is a trick question, because when you answer "because XYZ" I will then 
reply "but XYZ is a common reason that people disable inlining when optimizing, 
so shouldn't you only do it when inlining is enabled?" :)


Best wishes, Duncan.

PS: I'm imagining XYZ is related to a better debugging experience.


Re: [Ada] Improve performance of 'Image with enumeration types.

2017-09-25 Thread Duncan Sands

Hi,

On 09/25/2017 10:54 AM, Pierre-Marie de Rodat wrote:

This patch improves the performance of the code generated by the compiler
for attribute Image when applied to user-defined enumeration types and the
sources are compiled with optimizations enabled.


it looks like this is in essence inlining the run-time library routine.  In 
which case, shouldn't you only do it if inlining is enabled?  For example, it 
seems rather odd to do this if compiling with -Os.


Best wishes, Duncan.



No test required.

Tested on x86_64-pc-linux-gnu, committed on trunk

2017-09-25  Javier Miranda  

* exp_imgv.adb (Is_User_Defined_Enumeration_Type): New subprogram.
(Expand_User_Defined_Enumeration_Image): New subprogram.
(Expand_Image_Attribute): Enable speed-optimized expansion of
user-defined enumeration types when we are compiling with optimizations
enabled.





Re: [Ada] Use the Monotonic Clock on Linux

2017-09-25 Thread Duncan Sands

Hi,

On 09/25/2017 10:47 AM, Pierre-Marie de Rodat wrote:

The monotonic clock epoch is set to some undetermined time
in the past (typically system boot time).  In order to use the
monotonic clock for absolute time, the offset from a known epoch
is calculated and incorporated into timed delay and sleep.
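
The idea, condensed into a C sketch for illustration (the Ada patch below is the
actual implementation, including the leap-second guard; this only shows the
offset arithmetic, assuming CLOCK_MONOTONIC):

#include <time.h>

/* Offset such that monotonic_time + offset is approximately real
   (epoch-based) time.  Sampling CLOCK_REALTIME on both sides of the
   monotonic read and averaging halves the error due to the time
   spent between the calls.  */
static double monotonic_epoch_offset (void)
{
  struct timespec before, mono, after;

  clock_gettime (CLOCK_REALTIME, &before);
  clock_gettime (CLOCK_MONOTONIC, &mono);
  clock_gettime (CLOCK_REALTIME, &after);

  double b = before.tv_sec + before.tv_nsec / 1e9;
  double m = mono.tv_sec + mono.tv_nsec / 1e9;
  double a = after.tv_sec + after.tv_nsec / 1e9;

  return b / 2 + a / 2 - m;
}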



--- libgnarl/s-taprop__linux.adb(revision 253134)
+++ libgnarl/s-taprop__linux.adb(working copy)
@@ -257,6 +266,73 @@
end if;
 end Abort_Handler;
  
+   --

+   -- Compute_Base_Monotonic_Clock --
+   --
+
+   function Compute_Base_Monotonic_Clock return Duration is
+  TS_Bef0, TS_Mon0, TS_Aft0 : aliased timespec;
+  TS_Bef,  TS_Mon,  TS_Aft  : aliased timespec;
+  Bef, Mon, Aft : Duration;
+  Res_B, Res_M, Res_A   : Interfaces.C.int;
+   begin
+  Res_B := clock_gettime
+   (clock_id => OSC.CLOCK_REALTIME, tp => TS_Bef0'Unchecked_Access);
+  pragma Assert (Res_B = 0);
+  Res_M := clock_gettime
+   (clock_id => OSC.CLOCK_RT_Ada, tp => TS_Mon0'Unchecked_Access);
+  pragma Assert (Res_M = 0);
+  Res_A := clock_gettime
+   (clock_id => OSC.CLOCK_REALTIME, tp => TS_Aft0'Unchecked_Access);
+  pragma Assert (Res_A = 0);
+
+  for I in 1 .. 10 loop
+ --  Guard against a leap second which will cause CLOCK_REALTIME
+ --  to jump backwards.  In the extremely unlikely event we call
+ --  clock_gettime before and after the jump the epoch result will
+ --  be off slightly.
+ --  Use only results where the tv_sec values match for the sake
+ --  of convenience.
+ --  Also try to calculate the most accurate
+ --  epoch by taking the minimum difference of 10 tries.
+
+ Res_B := clock_gettime
+  (clock_id => OSC.CLOCK_REALTIME, tp => TS_Bef'Unchecked_Access);
+ pragma Assert (Res_B = 0);
+ Res_M := clock_gettime
+  (clock_id => OSC.CLOCK_RT_Ada, tp => TS_Mon'Unchecked_Access);
+ pragma Assert (Res_M = 0);
+ Res_A := clock_gettime
+  (clock_id => OSC.CLOCK_REALTIME, tp => TS_Aft'Unchecked_Access);
+ pragma Assert (Res_A = 0);
+
+ if (TS_Bef0.tv_sec /= TS_Aft0.tv_sec and then
+ TS_Bef.tv_sec  = TS_Aft.tv_sec)
+--  The calls to clock_gettime before the loop were no good.
+or else
+(TS_Bef0.tv_sec = TS_Aft0.tv_sec and then
+ TS_Bef.tv_sec  = TS_Aft.tv_sec and then
+(TS_Aft.tv_nsec  - TS_Bef.tv_nsec <
+ TS_Aft0.tv_nsec - TS_Bef0.tv_nsec))
+--  The most recent calls to clock_gettime were more better.


were more better -> were better

Best wishes, Duncan.


+ then
+TS_Bef0.tv_sec := TS_Bef.tv_sec;
+TS_Bef0.tv_nsec := TS_Bef.tv_nsec;
+TS_Aft0.tv_sec := TS_Aft.tv_sec;
+TS_Aft0.tv_nsec := TS_Aft.tv_nsec;
+TS_Mon0.tv_sec := TS_Mon.tv_sec;
+TS_Mon0.tv_nsec := TS_Mon.tv_nsec;
+ end if;
+  end loop;
+
+  Bef := To_Duration (TS_Bef0);
+  Mon := To_Duration (TS_Mon0);
+  Aft := To_Duration (TS_Aft0);
+
+  return Bef / 2 + Aft / 2 - Mon;
+  --  Distribute the division to avoid potential type overflow someday.
+   end Compute_Base_Monotonic_Clock;
+
 --
 -- Lock_RTS --
 --




Re: pass 'lto_gimple_out' not found, how to migrate it for GCC v6.x?

2017-07-28 Thread Duncan Sands

Hi,


It says

// Disable all LTO passes.

(for whatever reason).  So try just removing this part - the pass is
already removed.


IIRC it disables passes that run after gimple has been converted to LLVM IR, as 
running them would just consume time pointlessly.


Best wishes, Duncan.


Re: whereis PLUGIN_REGISTER_GGC_CACHES? how to migrate it for GCC v6.x?

2017-07-26 Thread Duncan Sands

Hi David,


It looks strange to me that this repository contains these per-gcc
-version auto-generated .inc files; aren't these something that should
just be created at build time?


IIRC I did it this way because to generate these files you need to have the 
entire GCC sources, while one of the goals of dragonegg was that in order to be 
able to build dragonegg you should only need to have the gcc headers installed.


Best wishes, Duncan.


Re: [patch] Restore cross-language inlining into Ada

2016-01-23 Thread Duncan Sands

Hi Eric,

On 23/01/16 10:25, Eric Botcazou wrote:

I think we were inlining them with LTO until I installed the patch.  Most of
the time DECL_STRUCT_FUNCTION == NULL for WPA and thus the original check
testing the flags was disabled.  We did not update the EH codegen during
inlining, so probably we just did not produce non-call EH for these.


OK, we may have inlined them after all...  My understanding of the new code is
that we will still inline them if the Ada callee doesn't use EH, which is good
enough in my opinion.


it would be nice to also inline if the caller doesn't use EH even if the callee 
does, for example when calling Ada from C.


Best wishes, Duncan.


Re: [Ada] More efficient code generated for object overlays

2015-11-12 Thread Duncan Sands

Hi Arnaud,

On 12/11/15 12:06, Arnaud Charlet wrote:

This change refines the use of the "volatile hammer" to implement the advice
given in RM 13.3(19) by disabling it for object overlays altogether, relying
instead on the ref-all aliasing property of reference types to achieve the
desired effect.

This will generate better code for object overlays, for example the following
function should now make no memory accesses at all on 64-bit platforms when
compiled at -O2 or above:


this is great!  When doing tricks to improve performance I've several times 
resorted to address overlays, forgetting about the "volatile hammer", only to 
rediscover it for the N'th time due to the poor performance and the horrible 
code generated.


Best wishes, Duncan.


Re: [Ada] Correct some anmolies in the handling of Atomic

2015-05-22 Thread Duncan Sands

Hi Arnaud,


Index: exp_util.adb
===
--- exp_util.adb(revision 223476)
+++ exp_util.adb(working copy)
@@ -204,6 +204,13 @@
  when others => null;
   end case;

+  --  Nothing to do for the identifier in an object renaming declaration,
+  --  the renaming itself does not need atomic syncrhonization.


syncrhonization - synchronization

Ciao, Duncan.


Re: [Ada] Out parameters of a null-excluding access type in entries.

2014-07-30 Thread Duncan Sands

Hi Arnaud,

On 29/07/14 16:02, Arnaud Charlet wrote:

If a procedure or entry has a formal out-parameter of a null-excluding access
type, there is no check applied to the actual before the call. This patch
removes a spurious access check on such parameters on entry calls.

Compiling and executing p.adb must yield:

 Procedure version did not raise exception
 Entry version did not raise exception

---
with Ada.Text_IO; use Ada.Text_IO;
procedure P is
type Integer_Access is access all Integer;

An_Integer : aliased Integer;

procedure Procedure_Version (A : out not null Integer_Access) is
begin
   A := An_Integer'Access;
end Procedure_Version;

protected Object is
   entry Entry_Version (A : out not null Integer_Access);
end Object;

protected body Object is
   entry Entry_Version (A : out not null Integer_Access) when True is
  Junk : integer := 0;


this variable Junk seems useless.


   begin
  A := An_Integer'Access;
   end Entry_Version;
end Object;

A : Integer_Access;
begin
A := null;
Procedure_Version (A);
Put_Line ("Procedure version did not raise exception");

A := null;
Object.Entry_Version (A);
Put_Line ("Entry version did not raise exception");
end;


Ciao, Duncan.



Re: clang vs free software

2014-01-24 Thread Duncan Sands

Hi Vladimir,

o Comparing LLVM and GCC on Fortran benchmarks.  LLVM has no Fortran FE and just
quietly calls the system GCC.  So a comparison of LLVM and GCC on Fortran benchmarks
means a comparison of the system GCC and a given GCC.


a few people are working on LLVM based Fortran compilers.  I'm not sure how far 
they got though.  I think the one farthest along is this one:

  https://github.com/hfinkel/lfort/
There is also:
  https://github.com/isanbard/flang/
Of course you can always cheat and use the GCC Fortran front-end, using the 
dragonegg plugin bridging it to LLVM, but as I'm not maintaining dragonegg any 
more (no time) this solution is likely to bitrot fast unless someone else picks 
up the project.


Best wishes, Duncan.


Re: clang and FSF's strategy

2014-01-23 Thread Duncan Sands

Hi David,

 At any rate, if you want to bash the strategies of the GNU project,

these lists are the wrong place to go.  Try doing it on the Clang list
though I am skeptical that they do not have better things to do as well.


the Clang list is for technical rather than political discussion, as you 
guessed.  I unfortunately don't have any helpful suggestions for where Michael 
could best continue this discussion, but I'm pretty sure it's not the clang 
mailing list.


Best wishes, Duncan.


Re: clang and FSF's strategy

2014-01-23 Thread Duncan Sands

On 23/01/14 12:42, Michael Witten wrote:

On Thu, Jan 23, 2014 at 11:04 AM, Duncan Sands baldr...@free.fr wrote:


the... list is for technical rather than political discussion


That's just it; that's the whole point.

The *political* aspects are dictating the *technical* aspects.


Not for clang they aren't, so please leave the clang mailing list out of it.

Best wishes, Duncan.



So... like it or not, that makes this list exactly the right place to
have this discussion.




Re: [Ada] Imported C++ exceptions

2013-10-14 Thread Duncan Sands

Hi Arnaud,

On 14/10/13 15:29, Arnaud Charlet wrote:

It is now possible to import C++ exceptions and to handle it.

...

Index: exp_prag.adb
===
--- exp_prag.adb(revision 203544)
+++ exp_prag.adb(working copy)
@@ -575,6 +575,64 @@
  if No (Init_Call) and then Present (Expression (Parent (Def_Id))) then
 Set_Expression (Parent (Def_Id), Empty);
  end if;
+  elsif Ekind (Def_Id) = E_Exception
+and then Convention (Def_Id) = Convention_CPP
+  then
+
+ --  Import a C++ convention


should this comment say Import a C++ exception?

Ciao, Duncan.


Re: [GOOGLE] More strict checking for call args

2013-05-31 Thread Duncan Sands

Hi Dehao,

On 31/05/13 00:47, Dehao Chen wrote:

This patch makes more strict check of call args to make sure the
number of args match.

Bootstrapped and passed regression tests.


did you thoroughly test Fortran?  The Fortran front-end has long had an
unfortunate tendency to eg declare a function as taking 4 int arguments,
but in the call pass it one argument (an array of length 4, consisting
of ints).  It would be great if all such nastiness has been fixed.  There
are also a few cases in which it declares a builtin as taking, say, an
int,float pair, but passes a float,int pair in the call.  I fixed a couple
of instances of this a while back, but I still have one outstanding patch.

Ciao, Duncan.



OK for google branches?

Thanks,
Dehao

Index: gcc/gimple-low.c
===
--- gcc/gimple-low.c (revision 199414)
+++ gcc/gimple-low.c (working copy)
@@ -254,9 +254,13 @@ gimple_check_call_args (gimple stmt, tree fndecl)
 !fold_convertible_p (DECL_ARG_TYPE (p), arg)))
  return false;
   }
+  if (p != NULL)
+ return false;
  }
else if (parms)
  {
+  if (list_length (parms) - nargs != 1)
+ return false;
for (i = 0, p = parms; i < nargs; i++, p = TREE_CHAIN (p))
   {
tree arg;





Re: PATCH: PR plugins/56754 some missing plugin headers during installation in gcc 4.8

2013-05-21 Thread Duncan Sands

Hi Jakub, I actually committed this patch to mainline earlier today, as it is
trivial, enables my own plugin (dragonegg) to compile against gcc-4.8, and
according to the PR makes some other plugins work with gcc-4.8 too.  I will
backport it to the gcc-4.8 branch if no-one objects.  But maybe you are
objecting?

On 21/05/13 17:09, Jakub Jelinek wrote:

On Sat, Mar 30, 2013 at 03:17:59PM +0100, Magnus Granberg wrote:

This patch re-adds TARGET_H, which was removed with revision 188166.
IPA_PROP_H is used by PLUGIN_HEADERS and depended on GIMPLE_H, which
included TARGET_H before it was removed; TARGET_H was not added to IPA_PROP_H
or PLUGIN_HEADERS.  See the bug for more info.



2013-03-30  Magnus Granberg zo...@gentoo.org


Two spaces before <, instead of just one.


I had corrected this one already in the version I committed.



PR plugins/56754
* Makefile.in (PLUGIN_HEADERS): Add TARGET_H


Missing dot at the end of line, plus it should be $(TARGET_H)
instead of TARGET_H.


I missed these however.  I will correct the changelog if you are otherwise
OK with the commit.



Where has it been tested?


In addition to what Jack mentioned, x86-64 ubuntu 13.04.

Ciao, Duncan.




--- a/gcc/Makefile.in   2013-02-08 10:07:49.0 +0100
+++ b/gcc/Makefile.in   2013-03-28 03:43:53.343390945 +0100
@@ -4597,7 +4597,7 @@ PLUGIN_HEADERS = $(TREE_H) $(CONFIG_H) $
$(C_PRAGMA_H)  $(CPPLIB_H)  $(FUNCTION_H) \
cppdefault.h flags.h $(MD5_H) params.def params.h prefix.h tree-inline.h \
$(GIMPLE_PRETTY_PRINT_H) realmpfr.h \
-  $(IPA_PROP_H) $(RTL_H) $(TM_P_H) $(CFGLOOP_H) $(EMIT_RTL_H) version.h
+  $(IPA_PROP_H) $(TARGET_H) $(RTL_H) $(TM_P_H) $(CFGLOOP_H) $(EMIT_RTL_H) 
version.h

  # generate the 'build fragment' b-header-vars
  s-header-vars: Makefile



Jakub





Re: SPEC2000 comparison of LLVM-3.2 and coming GCC4.8 on x86/x86-64

2013-02-07 Thread Duncan Sands

Hi Vladimir, thanks for these numbers.

...

   Therefore I had to use *Dragonegg* (a GCC plugin which uses LLVM
   backend instead of GCC backend) for generation of Fortran benchmarks
   by LLVM.

...

   I believe such progress is achieved mostly because of a *new RA*
   introduced in LLVM 3.0 and *auto-vectorization*.


I don't think it can be auto-vectorization, because I forgot to turn on the
LLVM auto-vectorizer in dragonegg-3.2 (oops!).

Ciao, Duncan.


Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

Hi Arnaud,


The natural way to import a builtin that returns void * is to use
System.Address in Ada, which is in fact an integral type.


how about doing this for formal arguments too and not just the return type?
This would improve optimization by LLVM of calls to standard library functions
since the optimizers bail out when they see an int parameter where normally
there would be a void* (or other pointer type).

Ciao, Duncan.



Addressed by this patch, which makes it possible to e.g. compile:

with System;
procedure Btins1 is

function Frame_Address (Level : Integer) return System.Address;
pragma Import (Intrinsic, Frame_Address, "__builtin_frame_address");

Ptr : System.Address;
pragma Volatile (Ptr);
begin
Ptr := Frame_Address (0);
end;

Tested on x86_64-pc-linux-gnu, committed on trunk

2012-07-16  Tristan Gingold  ging...@adacore.com

* gcc-interface/decl.c (intrin_return_compatible_p): Map Address to
void *.






Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

Hi Tristan,

On 16/07/12 15:17, Tristan Gingold wrote:


On Jul 16, 2012, at 3:16 PM, Duncan Sands wrote:


Hi Arnaud,


The natural way to import a builtin that returns void * is to use
System.Address in Ada, which is in fact an integral type.


how about doing this for formal arguments too and not just the return type?


Formal arguments were already handled.


indeed, for two years already.  Is there any reason not to do this for all
functions, rather than just limiting it to builtins?

Ciao, Duncan.



Tristan.


This would improve optimization by LLVM of calls to standard library functions
since the optimizers bail out when they see an int parameter where normally
there would be a void* (or other pointer type).

Ciao, Duncan.



Addressed by this patch, which makes it possible to e.g. compile:

with System;
procedure Btins1 is

function Frame_Address (Level : Integer) return System.Address;
pragma Import (Intrinsic, Frame_Address, "__builtin_frame_address");

Ptr : System.Address;
pragma Volatile (Ptr);
begin
Ptr := Frame_Address (0);
end;

Tested on x86_64-pc-linux-gnu, committed on trunk

2012-07-16  Tristan Gingold  ging...@adacore.com

* gcc-interface/decl.c (intrin_return_compatible_p): Map Address to
void *.











Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

Hi Tristan,


indeed, for two years already.  Is there any reason not to do this for all
functions, rather than just limiting it to builtins?


I don't understand what you mean.  We need to do this implicit conversion
for builtins because they are known by the compiler.  Which other functions
(that aren't builtins) are you referring to?


all of them!  First off, the LLVM optimizers do a better job if an argument of a
user defined function that is really a pointer is declared as such, rather than
declared as an integer then cast to a pointer before being used.  I don't know
if the GCC optimizers are sensitive to this too.  Also, the LLVM optimizers
recognize some standard library functions that the gcc optimizers do not, but
fail to recognize them when called from Ada because they have the wrong
prototype: an integer rather than a pointer argument.  Finally I would argue
that as System.Address is really a pointer, playing pretty much exactly the
same role as void* in C, it is more philosophically correct to express it as a
void*.  That said, it should probably just be declared as a pointer in the
System package rather than doing all this mucking around in the gcc interface.

Ciao, Duncan.


Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

Hi Tristan,


Ah, what you want is the use of 'void *' for System.Address.
We didn't choose that because the semantic of System.Address (which includes 
arithmetic on the whole address space) doesn't match the void * one.


void* arithmetic of this kind exists, it's a gcc extension to C :)
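
For reference, a minimal sketch of the extension being referred to (GNU C; ISO C
rejects the pointer arithmetic):

/* GNU C extension: arithmetic on void * behaves as if sizeof (void)
   were 1, so the pointer advances byte by byte.  */
void *advance_four_bytes (void *p)
{
  return p + 4;
}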


But, you can try to implement this scheme by modifying the runtime.  I don't 
know if this is a small work or not.


It crashes the front-end, so it's not trivial.

Ciao, Duncan.


Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

Hi Tristan,


Ah, what you want is the use of 'void *' for System.Address.
We didn't choose that because the semantic of System.Address (which includes 
arithmetic on the whole address space) doesn't match the void * one.


void* arithmetic of this kind exists, it's a gcc extension to C :)


The issue is not void * vs char *, but the fact that the C standard has 
restriction on pointer arithmetic.


I see, -fno-strict-overflow would be needed to get System.Address modulo
arithmetic semantics if it was always turned into void*, which would then
presumably pessimize other code.  However this isn't really relevant to
whether Address formal parameters should always be turned into void* or
not.

Ciao, Duncan.


But, you can try to implement this scheme by modifying the runtime.  I don't 
know if this is a small work or not.


It crashes the front-end, so it's not trivial.


:-)






Re: [Ada] Ease interface with builtins that returns void *

2012-07-16 Thread Duncan Sands

PS: That said, I have to admit that using void* for builtins does cover the
most important cases.


Re: [Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-16 Thread Duncan Sands

Hi,


If ENABLE_BUILD_WITH_CXX is defined, then GCC itself is built with C++,
and we want a C++ signature for functions.  If it is not defined, then
GCC itself is not built with C++, and we want (and must have) a C
signature.

I suppose we would decide that fancy_abort always uses a C signature,
but that seems odd.

Ian


I guess the issue is when people care only about C plugins, yet fancy_abort
gets implicitly exported with C++ linkage.

I suspect this goes back to the eternal question: what do we consider as
part of the public GCC API (no, Basile, I am not suggesting to have
the same discussion again.)


if the following are to hold

(1) fancy_abort is declared in system.h
(2) system.h should not be wrapped in extern C when included from a plugin,
(3) it should be valid to include it from plugins compiled as C or as C++,
(4) fancy_abort should use the same linkage as GCC, i.e. C when GCC built as C,
C++ when built as C++ (aka ENABLE_BUILD_WITH_CXX).

then something like the following seems inevitable:

#ifdef ENABLE_BUILD_WITH_CXX
#ifdef __cplusplus
extern void fancy_abort(const char *, int, const char *) ATTRIBUTE_NORETURN;
#else
extern void _Z11fancy_abortPKciS0_(const char *, int, const char *) ATTRIBUTE_NORETURN;
#endif
#else
#ifdef __cplusplus
extern "C" void fancy_abort(const char *, int, const char *) ATTRIBUTE_NORETURN;
#else
extern void fancy_abort(const char *, int, const char *) ATTRIBUTE_NORETURN;
#endif
#endif

That's pretty nasty.  But to avoid the nastiness one of (1) - (4) needs to be
dropped.  Which one?

Ciao, Duncan.


[Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-15 Thread Duncan Sands

My plugin is written in C++.  When including headers from gcc-4.6 it wraps them
in 'extern C' to prevent name mangling.  Some of the plugin headers include
gcc/system.h which includes the C++ header cstring if it detects the use of a
C++ compiler.  As a result cstring routines included this way end up wrapped in
'extern C', while those included directly from C++ aren't 'extern C'.  This
doesn't worry g++, but clang gets upset, erroring out with a complaint about
multiple inconsistent declarations of memchr and friends.  Is the following
patch OK to apply to gcc-4.6?  And is it in principle OK to apply to gcc-4.7
(I didn't test it there yet)?  It would be useful if gcc-4.7 is compiled as
C.

Thanks, Duncan.

Index: gcc/system.h
===
--- gcc/system.h(revision 188518)
+++ gcc/system.h(working copy)
@@ -191,7 +191,9 @@
 #endif

 #ifdef __cplusplus
+extern "C++" {
 # include <cstring>
+}
 #endif

 /* Some of glibc's string inlines cause warnings.  Plus we'd rather


Re: [Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-15 Thread Duncan Sands

Hi Richard,


Uh, I don't think we should do that.  Why do we include cstring here anyways?

Ian - you added this include in rev. 167764, I don't think that was proper.
But I'm not sure wrapping a system.h include inside extern C from a C++
plugin is proper either ...


since the plugin needs to call GCC routines, and GCC is built as C, it has to
wrap at least some GCC headers in extern C to avoid mangling of the names
of those GCC routines (otherwise you can't load the plugin because the linker
will look for the mangled names in GCC and not find them).  But perhaps you
know a trick to avoid the name mangling problem?  It is true that maybe via
a careful dance it is possible to not wrap system.h in extern C - I will
give it a go.

Ciao, Duncan.



Thanks,
Richard.


Thanks, Duncan.

Index: gcc/system.h
===
--- gcc/system.h(revision 188518)
+++ gcc/system.h(working copy)
@@ -191,7 +191,9 @@
  #endif

  #ifdef __cplusplus
+extern "C++" {
   # include <cstring>
+}
  #endif

  /* Some of glibc's string inlines cause warnings.  Plus we'd rather




Re: [Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-15 Thread Duncan Sands

Hi Richard,


As system.h is supposed to only include system headers and do nothing
else it has to be prepared to be included from C++ already, so no extern C
wrapping should be necessary for it.


it defines fancy_abort.  Not wrapping system.h in extern C results in
  undefined symbol: _Z11fancy_abortPKciS0_
when loading the plugin.

Ciao, Duncan.


Re: [Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-15 Thread Duncan Sands

Hi Gabriel,


Richard just reminded me that we have two fancy_aborts.
Could you tell which one your code is indirectly using?


the one installed as plugin/include/system.h, which seems to be
gcc/include/system.h.  It is used for example in tree.h here:

/* Advance to the next argument.  */
static inline void
function_args_iter_next (function_args_iterator *i)
{
  gcc_assert (i->next != NULL_TREE);
  i->next = TREE_CHAIN (i->next);
}

Best wishes, Duncan.


Re: [Patch 4.6] In system.h, wrap include of C++ header in 'extern C++'

2012-06-15 Thread Duncan Sands

Hi Gabriel,


Richard just reminded me that we have two fancy_aborts.
Could you tell which one your code is indirectly using?



the one installed as plugin/include/system.h, which seems to be
gcc/include/system.h.


OK.  I think that declaration has to have the C language spec.
Would you prepare a patch for that?


you mean: wrap the fancy_abort declaration in system.h in 'extern C'?
Sure, I will prepare a patch.

Best wishes, Duncan.




  It is used for example in tree.h here:

/* Advance to the next argument.  */
static inline void
function_args_iter_next (function_args_iterator *i)
{
  gcc_assert (i->next != NULL_TREE);
  i->next = TREE_CHAIN (i->next);
}

Best wishes, Duncan.




Re: [google] Hide all uses of __float128 from Clang (issue6195066)

2012-05-09 Thread Duncan Sands

Hi Simon,


Hide all uses of __float128 from Clang.

Brackets _GLIBCXX_USE_FLOAT128 with #ifndef __clang__.  Clang does not
currently support the __float128 builtin, and so will fail to process
libstdc++ headers that use it.


if one day clang gets support for this type, won't this still turn everything
off?  Is it possible to test the compiler on some small program using
__float128, and turn off use of __float128 if the compiler barfs?
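
Such a probe could be a simple configure-time compile test; a minimal sketch
(the HAVE_FLOAT128 macro name is hypothetical, purely for illustration):

/* conftest.c: compiles only if the compiler understands __float128.
   A configure script would try to build this and define a feature
   macro (say, the hypothetical HAVE_FLOAT128) on success, instead of
   keying the decision on __clang__.  */
int main (void)
{
  __float128 x = 1;
  (void) x;
  return 0;
}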

Ciao, Duncan.


Re: Switching to C++ by default in 4.8

2012-04-16 Thread Duncan Sands

And I want to say that tree/gimple/rtl are compiler's data(or state),
not compiler's text(or logic), the most important thing about them is
how to access their fields.



Given the above assumption, now I doubt the necessity of accessor
macros or C++ getter/setter method.

Is tree->code more direct and efficient than TREE_CODE(tree) or
tree->get_code()?


On a side note, the dragonegg plugin (which is written in C++) defines

  /// isa - Return true if the given tree has the specified code.
  template<enum tree_code code> bool isa(const_tree t) {
return TREE_CODE(t) == code;
  }

which lets you write things like

  if (isa<INTEGRAL_TYPE>(t)) ...

and so on.

While this is a bit more compact than if (TREE_CODE(t) == INTEGRAL_TYPE),
the main advantage to my mind is that it is a standard C++ idiom that should
be natural for many C++ programmers.

Ciao, Duncan.


Re: GCC 4.7.0RC: Mangled names in cc1

2012-03-09 Thread Duncan Sands

Hi,


I believe this is not intentional, right?


No, this is intentional.  We bootstrap the compiler using the C++
front-end now.  We build stage1 with the C compiler and then build
stages 2 and 3 with the C++ compiler.


OK.

However, this means that plug-ins must now be built with g++, except
when GCC was configured with --disable-build-poststage1-with-cxx.  This
seems difficult to deal with, for plug-in writers.


I’m really concerned about the maintenance difficulties that this change
entails for external plug-ins.


does this mean that if built with --disable-bootstrap then symbols aren't
mangled, while otherwise they are?  I personally don't mind if symbols are
mangled or not, but it would be simpler if symbols can be relied upon to always
be mangled, or always not be mangled...

I guess I could work around it by detecting at build time whether the targeted
gcc has mangled symbols, and using extern C only if not mangled.  It would
help if there was a simple way to ask gcc if it is using mangled symbols or not.

Ciao, Duncan.


Re: [Ada] Do not pass -Werror during linking

2012-02-10 Thread Duncan Sands

Hi Eric,


Can you try to extract a testcase (assuming it's just a single case?).
We shouldn't
warn for layout-compatible types (but we may do so if for example struct
nesting differs).


It's more basic than that: for example, we map pointers on the C side to
addresses (integers) on the Ada side.


is having Address be an integer useful any more?  Nowadays it should be possible
to declare Address to be an access type with no associated storage pool.  Even
nicer might be to have it be turned into void* when lowering to GCC types.
After all, Address is really used much like void*; see how GNAT declares malloc
for example.  Both of these possibilities would probably play better with GCC's
middle end type system, which considers integers to be different to pointers.

Ciao, Duncan.

PS: I first thought of this when I noticed (when using the dragonegg plugin to
compile Ada) that some of LLVM's optimizations bail out when they see integers
being used where they expect a pointer.  I tried tweaking the declaration of
Address to be an access type as mentioned above, but unfortunately that crashes
the compiler pretty quickly.


Re: Dealing with compilers that pretend to be GCC

2012-01-24 Thread Duncan Sands

On 24/01/12 17:32, Joseph S. Myers wrote:

On Thu, 19 Jan 2012, Ludovic Courtès wrote:


It turns out that ICC manages to build a working GCC plug-in, so after


I would say there is some conceptual confusion here (based on this
sentence, without having looked at the autoconf macros you refer to).
Logically there are two or three different compilers involved:

* The compiler (host-x-target) into which a plugin would be loaded.  This
is the one that needs to be GCC.

* The compiler (build-x-host) building the plugin.  There is no particular
reason it should need to be GCC, if sufficiently compatible with the
compiler that built the host-x-target compiler that will load the plugin.


Users of the dragonegg plugin (what few there are!) are often confused by
this, thinking that they need to build the plugin with the compiler into
which the plugin is going to be loaded, which of course is not the case.
In fact if the target compiler (host-x-target) is a cross-compiler, for
example running on x86 and producing code for ppc, there is no way it can
be used to compile the plugin, since that needs to run on x86 not on ppc.

Ciao, Duncan.



* If you are testing a compiler for plugin support by running it in some
way, that will be a build-x-target compiler that is intended to be
configured in the same way as the final host-x-target compiler.  Such a
build-x-target compiler will be used to build target libraries in a
Canadian cross build of GCC.

So always think carefully about which compiler you wish to test - and what
the relevant properties of that compiler are.





Re: Dealing with compilers that pretend to be GCC

2012-01-21 Thread Duncan Sands

Hi Ludo,


For ICC, one can test __ICC. For instance, here's what we have in mpfr.h
(for the use of __builtin_constant_p and __extension__ ({ ... })):

#if defined (__GNUC__) && !defined(__ICC) && !defined(__cplusplus)


Yeah, but it’s a shame that those compilers define __GNUC__ without
supporting 100% of the GNU C extensions.  With this approach, you would
also need to add !defined for Clang, PGI, and probably others.


even GCC may not support 100% of the GCC extensions!  For example, you can
find hacked GCC's out there which disable nested function support by default
(I think Apple did this).  Even more problematic IMO than testing __GNUC__ is
code that tests for particular versions of GCC.  There are versions of GCC
which have backported features from more recent GCC's (eg: GNAT from Ada Core
Technologies is like this).  I've seen this cause problems with code that
includes different header files depending on the gcc version, since the
compiler doesn't have the expected set of header files.

Ciao, Duncan.


Re: Dealing with compilers that pretend to be GCC

2012-01-19 Thread Duncan Sands

Hi Ludo,


A number of compilers claim to be GCC, without actually being GCC.  This
has come to a point where they can hardly be distinguished–until one
actually tries to use them.


this suggests that you shouldn't be testing for GCC, and instead should be
testing for support for particular features.  For example, to know if nested
functions are supported you would have your configure script compile a mini
program that uses nested functions.
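
For instance, a minimal probe for that particular feature (GNU C nested
functions) might look like this; a sketch, not taken from any actual configure
script:

/* conftest.c: accepted only by compilers that implement GNU C nested
   functions; a configure script would compile this and set its own
   feature flag from the result.  */
int main (void)
{
  int x = 1;
  int nested (int y) { return x + y; }  /* GNU extension */
  return nested (1) == 2 ? 0 : 1;
}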

Ciao, Duncan.



I had the following macro to determine whether plug-in support is
available:

   
https://gforge.inria.fr/scm/viewvc.php/trunk/m4/gcc.m4?view=markup&revision=5169&root=starpu&pathrev=5202

The macro is fairly elaborate.  Yet, ICC 12.0.1 and Clang 3.4 both pass
the test, because:

   - They support ‘--version’;

   - They define ‘__GNUC__’, which defeats Autoconf’s
 ‘_AC_LANG_COMPILER_GNU’.

   - They support ‘-print-file-name’, and have ‘-print-file-name=plugin’
 return GCC’s (!) plug-in header directory.  To that end, ICC simply
 runs ‘gcc -print-file-name=plugin’, while Clang appears to be doing
 some guesswork.

It turns out that ICC manages to build a working GCC plug-in, so after
all, it may be “entitled” to define ‘__GNUC__’, in a broad sense.

Conversely, Clang doesn’t support several GNU extensions, such as nested
functions, so it quickly fails to compile code.

Based on that, I modified my feature test like this:

   
https://gforge.inria.fr/scm/viewvc.php/trunk/m4/gcc.m4?root=starpu&r1=5169&r2=5203

I don’t see what can be done on “our” side (perhaps Autoconf’s feature
test could be strengthened, but how?), but I wanted to document this
state of affairs.

Thanks,
Ludo’.




Re: Dealing with compilers that pretend to be GCC

2012-01-19 Thread Duncan Sands

Hi Ludo, I didn't really get it.  Why do you want to know whether the compiler
is GCC or not?  Presumably because you have several versions of your code,
one version using GCC feature XYZ and the other not using XYZ.  If so, the
logically correct (but maybe impractical) approach is to test if the compiler
supports XYZ, and switch between the two code versions depending on that.
For example if XYZ is nested functions, do you have a version of your code
that uses nested functions and another that does not?  If you don't have a
version that works with compilers like clang that don't support nested
functions, then why bother testing for nested function support?  You will
discover the lack of nested function support when your code fails to compile.

Ciao, Duncan.

On 19/01/12 15:39, Ludovic Courtès wrote:

Hi Ducan,

Duncan Sands baldr...@free.fr skribis:


A number of compilers claim to be GCC, without actually being GCC.  This
has come to a point where they can hardly be distinguished–until one
actually tries to use them.


this suggests that you shouldn't be testing for GCC, and instead should be
testing for support for particular features.  For example, to know if nested
functions are supported you would have your configure script compile a mini
program that uses nested functions.


Yes.  The macro I posted is a feature test: it tests for plug-in header
availability, and the availability of several GCC internal types and
declarations.

When I noticed that Clang doesn’t support nested functions, I added that
to the test:

   
https://gforge.inria.fr/scm/viewvc.php/trunk/m4/gcc.m4?root=starpu&r1=5169&r2=5203

Yet, I can’t reasonably add a feature test for each GNU extension that
GCC’s headers or my own code use.  Maybe tomorrow Clang will support
nested functions, while still lacking support for some other extension
that’s needed.

Thanks,
Ludo’.




Re: adding destroyable objects into Ggc

2011-10-20 Thread Duncan Sands

Hi Basile,


But I don't understand how Ggc could be avoided (and I am not sure to
understand how even LLVM can avoid any kind of garbage collection in the
long run).


I doubt LLVM will ever need garbage collection, because the way it is designed
makes memory management easy.  I already mentioned the use of containers, but
of course containers can't handle everything.  So consider a typical thing you
might want to do: replace an instruction I1 by a different one I2 (for example
because you understood that I1 simplifies to the simpler instruction I2) and
delete I1.  You can think of an LLVM instruction as being a gimple
statement.  One of the design points of LLVM is that instructions always know
about all users of the instruction (def-use chains are built in).  Thus you
can do
  I1->replaceAllUsesWith(I2);
and at this point everything using I1 as an operand now uses I2 instead.  Thus
the only place still referring to I1 is the function that the instruction I1
is linked into.  You can unlink it and free the memory for I1 as follows
  I1->eraseFromParent();
And that's it.  The price you pay for this simplicity is the need to keep track
of uses - and this does cost compilation time (clear to anyone who does some
profiling of LLVM) but it isn't that big.  The big advantage is that memory
management is easy - so easy that I suspect many LLVM users never thought
about the design choices (and trade-offs) that make it possible.

Ciao, Duncan.

PS: You may wonder about copies of I1 cached in a map or whatnot, where it can
be tricky (eg breaks an abstraction) or expensive to flush I1 from the data
structure.  This situation is handled conveniently by an extension of the above
mechanism where in essence your copy of I1 in the data structure can register
itself as an additional user of I1.  When I1 is replaced by I2 then (according
to how you chose to set things up) either the copy in the data structure gets
turned into I2, or nulled out, or a special action of your choice is performed.


Re: adding destroyable objects into Ggc

2011-10-19 Thread Duncan Sands

Hi Gabriel,


I also agree with you that the GCC architecture is messy, and that scares
newcomers a lot.



Yes, but the way we improve it isn't, in my opinion, adding more GC.
First we would like to remove complexity, and I do not think we should
start by focusing on storage management until we get a clearer idea
about lifetime of data structures we manipulate and how they mesh.
We might find out (as I suspect) that the builtin GC of C (or C++) is
remarkable at the job, provided we have a design that makes the
lifetime obvious and take advantage of it.


what you say sounds very sensible to me.  If you look at LLVM, most memory
management is done by using container objects (vectors, maps etc) that
automatically free memory when they go out of scope.  This takes care
of 99% of memory management in a clean and simple way, which is a great
situation to be in.

Ciao, Duncan.


Re: adding destroyable objects into Ggc

2011-10-18 Thread Duncan Sands

Hi Basile,


I would like to add destroyable objects into Ggc (the GCC garbage collector, 
see files
gcc/ggc*.[ch]).

The main motivation is to permit C++ objects to be garbage collected (I 
discussed that
briefly in the Gcc meeting at Google in London): adding destroyable object is a
prerequisite for that goal.


it is already possible to have garbage collected C++ objects with destructors.
I use this in the dragonegg plugin (see the file Cache.cpp).  I do only use
htab's though, which comes with support for destructors.  After allocating the
garbage collected memory, I construct the object using placement new.  The
memory is allocated using htab_create_ggc, the last argument of which is a
destructor function, which in my case calls the object's destructor.

Ciao, Duncan.


Re: [Ada] Entity list of for loop for enumeration with rep gets truncated

2011-10-13 Thread Duncan Sands

Hi Arnaud,


--- exp_ch5.adb (revision 179894)
+++ exp_ch5.adb (working copy)
@@ -3458,6 +3458,20 @@
Statements => Statements (N,

End_Label => End_Label (N)));
+
+   --  The loop parameter's entity must be removed from the loop
+   --  scope's entity list, since itw will now be located in the


typo: itw - it

Ciao, Duncan.


Re: Comparison of GCC-4.6.1 and LLVM-2.9 on x86/x86-64 targets

2011-09-08 Thread Duncan Sands

Why is lto/whole program mode not used in LLVM for peak performance
comparison? (of course, peak performance should really use FDO..)


Thanks for the feedback.  I did not manage to use LTO for LLVM as
described on

http://llvm.org/docs/LinkTimeOptimization.html#lto

I am getting 'file not recognized: File format not recognized'  during the
linkage pass.


Note that these are the instructions to follow on linux for LTO with llvm-gcc:
  http://llvm.org/docs/GoldPlugin.html

Ciao, Duncan.



You probably right that I should use -Ofast without -flto for gcc then.
  Although I don't think that it significantly change GCC peak performance.
  Still I am going to run SPEC2000 without -flto and post the data (probably
on the next week).


Note that due to a bug in 4.6.x -Ofast is not equivalent to -O3 -ffast-math
(it doesn't use crtfastmath.o).  I'll backport the fix.


As for FDO, unfortunately for some tests SPEC uses different training sets
and it gives sometimes wrong info for the further optimizations.

I do not look at this comparison as finished work and am going to run more
SPEC2000 tests and change the results if I have serious reasonable
objections for the current comparison.





Re: Comparison of GCC-4.6.1 and LLVM-2.9 on x86/x86-64 targets

2011-09-07 Thread Duncan Sands

Hi Vladimir, thanks for doing this.


The above said about compilation speed is true when GCC front-end is
used for LLVM.


It's not clear to me which GCC front-end you mean.  There is llvm-gcc
(based on gcc-4.2) and the dragonegg plugin (the 2.9 version works with
gcc-4.5; the development version works also with gcc-4.6).  Can you
please clarify.  By the way, some highly unscientific experiments I did
suggest that the GCC tree optimizers are (almost) as fast as the LLVM IR
optimizers while doing a better job; while at -O3 the LLVM code generators
are significantly faster than the GCC code generators and do a comparable
and sometimes better job.  Unfortunately I haven't had time to do a serious
study, so this might just be an accident of the benchmarks I looked at and
the options I happened to use rather than anything meaningful.

Ciao, Duncan.


Re: Comparison of GCC-4.6.1 and LLVM-2.9 on x86/x86-64 targets

2011-09-07 Thread Duncan Sands

On 07/09/11 17:55, Xinliang David Li wrote:

Why is lto/whole program mode not used in LLVM for peak performance
comparison? (of course, peak performance should really use FDO..)


Assuming Vladimir was using the dragonegg plugin: presumably because it's
a pain: you have to compile everything to assembler (-S) rather than to an
object file (-c).  That's because -flto outputs LLVM IR when used with this
plugin, and the system assembler doesn't understand it (and GCC insists on
sending output to the system assembler if you pass -c).  You then have to
convert each .s into a .o (or .bc) using llvm-as.  At that point if you have
the gold linker and the LLVM linker plugin you can just link using them and
you are done.

Ciao, Duncan.



thanks,

David

On Wed, Sep 7, 2011 at 8:15 AM, Vladimir Makarov vmaka...@redhat.com  wrote:

  Some people asked me to do comparison of  GCC-4.6 and LLVM-2.9 (both
released this spring) as I did GCC-LLVM comparison in previous year.

  You can find it on http://vmakarov.fedorapeople.org/spec under
2011 GCC-LLVM comparison tab entry.


  This year the comparison is done on GCC 4.6 and LLVM 2.9 which were
released in spring 2011.

  As usually I am focused mostly on the compiler comparison
as *optimizing* compilers on major platform x86/x86-64.  I don't
consider other aspects of the compilers as quality of debug
information, supported languages, standards and extensions (e.g. OMP),
supported targets and ABI, support of just-in-time compilation etc.

  Different to the 2010 comparison, the SPEC2000 benchmarks were run on
a recent *Sandy Bridge processor* which will be a mainstream
processor at least for the next year.

  This year I tried to decrease the number of graphs which are still too
many with my point of view.  Some graphs are bigger than for 2010
comparison and oriented to screens with a larger resolution.  If you
need exact numbers you should look at the tables from which the graphs
were generated.

  I added GCC run with -O1 which helps to understand
that *LLVM with -O2 or -O3 is analog of GCC 4.1 with -O1
with the point of view of generated code performance and
compilation speed*.  People are frequently saying that LLVM is a much
faster compiler than GCC.  That is probably not true.  If you need the same
generated code quality and compilation speed as LLVM -O2/-O3
you should use GCC with -O1.  If you want 10%-40% faster
generated code, you should use GCC with -O2/-O3 and you need
20%-40% (150%-200% if you use GCC LTO) more time for compilation.  I
believe that LLVM code performance is far away from GCC because
it is sufficiently easy to get first percents of code improvement, it
becomes much harder to get subsequent percents, and IMHO starting with
some point of the development the relation of the code improvement to
the spent efforts might become exponential.  So there is no magic --
GCC has a better performance because much more efforts of experienced
compiler developers have been spent and are being spent for GCC
development than for LLVM.

  The above said about compilation speed is true when GCC front-end is
used for LLVM.  LLVM has another C-language family front-end called
CLANG which can speed up compilation in optimization mode
(-O2/-O3) upto 20%-25%.  So even as LLVM optimizations
are not faster than GCC optimizations, CLANG front-end is really
faster than GCC-frontend.  I think GCC community should pay more attention
to this fact.  Fortunately, a few new GCC projects address to this problem
and I hope this problem will be solved or alleviated.

  This year I used -Ofast -flto -fwhole-program instead of
-O3 for GCC and -O3 -ffast-math for LLVM for comparison of peak
performance.  I could improve GCC performance even more by using
other GCC possibilities (like support of AVX insns, Graphite optimizations
and even some experimental stuff like LIPO) but I wanted to give LLVM
some chances too.  Probably an experienced user in LLVM could improve
LLVM performance too.  So I think it is a fair comparison.





Re: [Ada] Speed up build of gnatools

2011-09-06 Thread Duncan Sands

Hi Arnaud,


Now that gnatmake supports -j0, it's possible to speed up the build of
gnattools during GNAT build by using gnatmake -j0 instead of gnatmake.

This is useful since gnattools is the only target which isn't parallelized
in the Makefile before this change.


this means using as many processes as there are CPUs, right?  It seems pretty
dubious to me to use more processes than the user maybe asked for.  For example
I have to restrict the number of CPUs used when building GCC to less than I have
since otherwise my machine overheats and turns itself off.  Is there some way
to get at the -j level the user passed to the top-level make and use that?

Ciao, Duncan.



Tested on x86_64-linux-gnu, committed on trunk.

2011-09-06  Arnaud Charlet  char...@adacore.com

* gcc-interface/Makefile.in (common-tools, gnatmake-re,
gnatlink-re): Speed up by using -j0.

--
Index: gcc-interface/Makefile.in
===
--- gcc-interface/Makefile.in   (revision 178566)
+++ gcc-interface/Makefile.in   (working copy)
@@ -2336,7 +2336,7 @@
  endif

  common-tools:
-   $(GNATMAKE) -c -b $(ADA_INCLUDES) \
+   $(GNATMAKE) -j0 -c -b $(ADA_INCLUDES) \
  --GNATBIND=$(GNATBIND) --GCC=$(CC) $(ALL_ADAFLAGS) \
  gnatchop gnatcmd gnatkr gnatls gnatprep gnatxref gnatfind gnatname \
  gnatclean -bargs $(ADA_INCLUDES) $(GNATBIND_FLAGS)
@@ -2375,16 +2375,18 @@
$(GNATLINK) -v vxaddr2line -o $@ --GCC=$(GCC_LINK) targext.o $(CLIB)

  gnatmake-re:  link.o targext.o
-   $(GNATMAKE) $(ADA_INCLUDES) -u sdefault --GCC=$(CC) $(MOST_ADA_FLAGS)
-   $(GNATMAKE) -c $(ADA_INCLUDES) gnatmake --GCC=$(CC) $(ALL_ADAFLAGS)
+   $(GNATMAKE) -j0 $(ADA_INCLUDES) -u sdefault --GCC=$(CC) 
$(MOST_ADA_FLAGS)
+   $(GNATMAKE) -j0 -c $(ADA_INCLUDES) gnatmake --GCC=$(CC) 
$(ALL_ADAFLAGS)
$(GNATBIND) $(ADA_INCLUDES) $(GNATBIND_FLAGS) gnatmake
$(GNATLINK) -v gnatmake -o ../../gnatmake$(exeext) \
--GCC=$(GCC_LINK) $(TOOLS_LIBS)

  # Note the use of the mv command in order to allow gnatlink to be linked 
with
  # with the former version of gnatlink itself which cannot override itself.
-gnatlink-re:  link.o targext.o
-   $(GNATMAKE) -c $(ADA_INCLUDES) gnatlink --GCC=$(CC) $(ALL_ADAFLAGS)
+# gnatlink-re cannot be run at the same time as gnatmake-re, hence the
+# dependency
+gnatlink-re: link.o targext.o gnatmake-re
+   $(GNATMAKE) -j0 -c $(ADA_INCLUDES) gnatlink --GCC=$(CC) 
$(ALL_ADAFLAGS)
$(GNATBIND) $(ADA_INCLUDES) $(GNATBIND_FLAGS) gnatlink
$(GNATLINK) -v gnatlink -o ../../gnatlinknew$(exeext) \
--GCC=$(GCC_LINK) $(TOOLS_LIBS)




Re: Vector shuffling

2011-08-31 Thread Duncan Sands

Hi Artem,

On 31/08/11 10:27, Artem Shinkarov wrote:

On Wed, Aug 31, 2011 at 12:51 AM, Chris Lattner clatt...@apple.com  wrote:

On Aug 30, 2011, at 10:01 AM, Artem Shinkarov wrote:

The patch at the moment lacks of some examples, but mainly it works
fine for me. It would be nice if i386 gurus could look into the way I
am doing the expansion.

Middle-end parts seems to be more or less fine, they have not changed
much from the previous time.


+@code{__builtin_shuffle (vec, mask)} and
+@code{__builtin_shuffle (vec0, vec1, mask)}. Both functions construct

the latter would be __builtin_shuffle2.


Why??
That was the syntax we agreed on that elegantly handles both cases in one place.


If you're going to add vector shuffling builtins, you might consider adding the 
same builtin that clang has for compatibility:
http://clang.llvm.org/docs/LanguageExtensions.html#__builtin_shufflevector

It should be straight-forward to map it into the same IR.

-Chris



Chris

I am trying to use OpenCL syntax here which says that the mask for
shuffling is a vector. Also I didn't really get from the clang
description if the indexes could be non-constants? If not, then I
have a problem here, because I want to support this.


probably it maps directly to the LLVM shufflevector instruction, see
  http://llvm.org/docs/LangRef.html#i_shufflevector
That requires the shuffle mask to be constant.
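
For comparison, here is what the two-operand form being discussed looks like
with GCC's vector extensions and a constant mask (a sketch of the builtin as it
was later adopted; the constant-mask case is the one that maps directly onto
LLVM's shufflevector):

typedef int v4si __attribute__ ((vector_size (16)));

/* Mask elements index into the concatenation of a and b, so values
   0..3 select from a and 4..7 select from b.  */
v4si interleave_low (v4si a, v4si b)
{
  const v4si mask = { 0, 4, 1, 5 };
  return __builtin_shuffle (a, b, mask);  /* {a0, b0, a1, b1} */
}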

Ciao, Duncan.


Re: [Ada] Detect useless assignments to parts of objects

2011-08-04 Thread Duncan Sands

Hi Arnaud, this is a great feature.  How does it handle unchecked unions?  Will
it warn if you write to a field but only read the value via a different field?

Ciao, Duncan.


GNAT did not issue a warning when assigning to a part of an object, and not
referencing the object later on. Now it does so in some cases, similarly to
the existing warnings on assignment to elementary objects.

On the code below, GNAT now issues warnings:

$ gcc -c -gnatwa assign.adb
assign.adb:8:05: warning: useless assignment to X, value never referenced
assign.adb:9:04: warning: useless assignment to Y, value never referenced

---
procedure Assign is
type T is record
   U : Integer;
end record;
X : T := T'(U => 0);
Y : array (1..2) of Integer := (others => 0);
begin
X.U   := X.U + 1;
Y (2) := Y (1);
end Assign;

Tested on x86_64-pc-linux-gnu, committed on trunk

2011-08-04  Yannick Moy  m...@adacore.com

* checks.adb (Apply_Float_Conversion_Check): correct a typo where Lo_OK
was used instead of Hi_OK, which could cause a read of an uninitialized
value later on. Detected while working on the new warning.
* exp_ch9.adb (Expand_N_Entry_Declaration): remove useless assignment
to local variable.
* sem_ch5.adb (Analyze_Assignment): set the last assignment component
in more cases, in order to detect more unreferenced values.
* sem_util.adb, sem_util.ads (Get_Enclosing_Object): return enclosing
object for expression, if any.





Re: [Ada] Fix bugs with volatile and components of aggregate types

2011-06-19 Thread Duncan Sands

Hi Eric,


This is the usual problem of volatile accesses not preserved under (heavy)
optimization.  In Ada, we can put pragma Volatile on components of composite
types without putting it on the enclosing type itself,


if T is a non-volatile composite type with volatile components, and O is an
object of type T, are the optimizers allowed to remove the assignment O := O?
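
For concreteness, a rough C analogue of the situation I have in mind
(hypothetical example, not taken from the patch):

struct S
{
  volatile int v;   /* only the component is volatile, not the type itself */
  int other;
};

void self_assign (struct S *o)
{
  /* Copying the object to itself reads and writes the volatile component;
     may the optimizer nevertheless delete this statement?  */
  *o = *o;
}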

Ciao, Duncan.


Backport the fix for PR47714 to the 4.5 branch

2011-05-31 Thread Duncan Sands

The following patch backports the one-line fix for PR47714 from the 4.6 branch
to the 4.5 branch.  I hit this while working on the dragonegg plugin.  OK to
apply?

Ciao, Duncan.

Index: gcc/cp/method.c
===
--- gcc/cp/method.c (revision 173485)
+++ gcc/cp/method.c (working copy)
@@ -374,6 +374,7 @@
   DECL_CONTEXT (x) = thunk_fndecl;
   SET_DECL_RTL (x, NULL_RTX);
   DECL_HAS_VALUE_EXPR_P (x) = 0;
+  TREE_ADDRESSABLE (x) = 0;
   t = x;
 }
   a = nreverse (t);
Index: gcc/cp/ChangeLog
===
--- gcc/cp/ChangeLog(revision 173485)
+++ gcc/cp/ChangeLog(working copy)
@@ -1,3 +1,11 @@
+2011-05-31  Duncan Sands  baldr...@free.fr
+
+   Backported from 4.6 branch
+   2011-03-09  Martin Jambor  mjam...@suse.cz
+
+   PR tree-optimization/47714
+   * method.c (use_thunk): Clear addressable flag of thunk arguments.
+
 2011-04-27  Jason Merrill  ja...@redhat.com

PR c++/48046
Index: gcc/testsuite/ChangeLog
===
--- gcc/testsuite/ChangeLog (revision 173485)
+++ gcc/testsuite/ChangeLog (working copy)
@@ -1,3 +1,11 @@
+2011-05-31  Duncan Sands  baldr...@free.fr
+
+   Backported from 4.6 branch
+   2011-03-09  Martin Jambor  mjam...@suse.cz
+
+   PR tree-optimization/47714
+   * g++.dg/torture/pr47714.C: New test.
+
 2011-05-05  Jason Merrill  ja...@redhat.com

* g++.dg/init/new30.C: New.
Index: gcc/testsuite/g++.dg/torture/pr47714.C
===
--- gcc/testsuite/g++.dg/torture/pr47714.C  (revision 0)
+++ gcc/testsuite/g++.dg/torture/pr47714.C  (revision 0)
@@ -0,0 +1,16 @@
+struct A { virtual ~A () {} };
+struct B { virtual ~B () {} };
+struct C { virtual const A *foo (int) const = 0; };
+struct E : public B, public A { };
+struct F : public C
+{
+  virtual const E *foo (int) const;
+};
+void bar (int &);
+
+const E *
+F::foo (int x) const
+{
+  bar (x);
+  return __null;
+}


Re: Backport the fix for PR47714 to the 4.5 branch

2011-05-31 Thread Duncan Sands

On 31/05/11 08:39, Jakub Jelinek wrote:

On Tue, May 31, 2011 at 08:35:35AM +0200, Duncan Sands wrote:

The following patch backports the one-line fix for PR47714 from the 4.6 branch
to the 4.5 branch.  I hit this while working on the dragonegg plugin.  OK to
apply?


Yes.


Thanks.  Applied as revision 174467.

Ciao, Duncan.


[PATCH] Do not crash in array_type_nelts if TYPE_MIN_VALUE is null

2011-04-09 Thread Duncan Sands

I recently changed the dragonegg plugin to use array_type_nelts, and just got
sent a Fortran testcase that shows that array_type_nelts can crash on array
types coming from Fortran.  The array type in question has TYPE_DOMAIN set,
with TYPE_MIN_VALUE equal to 1 (because the array is indexed from 1) but no
TYPE_MAX_VALUE (because the array length is not known).  Here's a patch that
fixes array_type_nelts.  Unfortunately I don't have a testcase that shows the
issue without the use of the dragonegg plugin.

Tested by bootstrapping mainline and running the testsuite with gcc-4.5.  OK to
apply on mainline and the 4.5 and 4.6 branches?

Ciao, Duncan.

Index: gcc/tree.c
===
--- gcc/tree.c  (revision 172166)
+++ gcc/tree.c  (working copy)
@@ -2462,6 +2462,10 @@
   min = TYPE_MIN_VALUE (index_type);
   max = TYPE_MAX_VALUE (index_type);

+  /* TYPE_MAX_VALUE may not be set if the array has unknown length.  */
+  if (!max)
+    return error_mark_node;
+
   return (integer_zerop (min)
  ? max
  : fold_build2 (MINUS_EXPR, TREE_TYPE (max), max, min));
Index: gcc/ChangeLog
===
--- gcc/ChangeLog   (revision 172166)
+++ gcc/ChangeLog   (working copy)
@@ -1,3 +1,7 @@
+2011-04-08  Duncan Sands  baldr...@free.fr
+
+   * tree.c (array_type_nelts): Bail out if TYPE_MAX_VALUE not set.
+
 2011-04-08  Anatoly Sokolov  ae...@post.ru

* doc/tm.texi.in (ASM_OUTPUT_BSS): Remove documentation.


Re: [PATCH] Do not crash in array_type_nelts if TYPE_MIN_VALUE is null

2011-04-09 Thread Duncan Sands

On 09/04/11 17:45, Richard Guenther wrote:

On Sat, Apr 9, 2011 at 1:22 PM, Duncan Sands baldr...@free.fr wrote:

I recently changed the dragonegg plugin to use array_type_nelts, and just
got
sent a Fortran testcase that shows that array_type_nelts can crash on array
types coming from Fortran.  The array type in question has TYPE_DOMAIN set,
with TYPE_MIN_VALUE equal to 1 (because the array is indexed from 1) but no
TYPE_MAX_VALUE (because the array length is not known).  Here's a patch that
fixes array_type_nelts.  Unfortunately I don't have a testcase that shows
the
issue without the use of the dragonegg plugin.

Tested by bootstrapping mainline and running the testsuite with gcc-4.5.  OK
to
apply on mainline and the 4.5 and 4.6 branches?


Ok.


Thanks - applied (mainline commit 172227).

Ciao, Duncan.



Thanks,
Richard.


Ciao, Duncan.






Re: [PATCH, Fortran] Correct declaration of frexp and friends

2011-04-05 Thread Duncan Sands

Hi Tobias,


Pong. It helps to send Fortran patches also to fortran@ ...


indeed :)


On 30/03/11 16:43, Duncan Sands wrote:

While working on the dragonegg plugin I noticed that the Fortran front-end
declares frexp with the parameters the wrong way round. Instead of
double frexp(double x, int *exp);
it is declared as
double frexp(int *exp, double x);



OK to apply on mainline and the 4.5 and 4.6 branches?


OK and thanks for the patch. Do you have a GCC SVN account?


I do, so that's not a problem.  By the way I just noticed that the arguments to
the scalbn functions also seem to be the wrong way round:

  gfc_define_builtin ("__builtin_scalbnl", mfunc_longdouble[5],
                      BUILT_IN_SCALBNL, "scalbnl", ATTR_CONST_NOTHROW_LEAF_LIST);
  gfc_define_builtin ("__builtin_scalbn", mfunc_double[5],
                      BUILT_IN_SCALBN, "scalbn", ATTR_CONST_NOTHROW_LEAF_LIST);
  gfc_define_builtin ("__builtin_scalbnf", mfunc_float[5],
                      BUILT_IN_SCALBNF, "scalbnf", ATTR_CONST_NOTHROW_LEAF_LIST);

but

  /* type (*) (int, type) */
  fntype[5] = build_function_type_list (type,
integer_type_node, type, NULL_TREE);

so it looks like you get scalbn(int, double) and not scalbn(double, int) etc.
If you agree that they are the wrong way round I will fix this too.
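
For reference, the C library declarations that these builtins are meant to
mirror:

/* From <math.h>: */
double frexp (double x, int *exp);   /* value first, exponent pointer second */
double scalbn (double x, int n);     /* value first, integer exponent second */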

Ciao, Duncan.



Tobias


2011-03-30 Duncan Sands baldr...@free.fr

* f95-lang.c (build_builtin_fntypes): Swap frexp parameter types.

- /* type (*) (int, type) */
- fntype[4] = build_function_type_list (type,
+ /* type (*) (type, int) */
+ fntype[4] = build_function_type_list (type, type,
build_pointer_type (integer_type_node),
- type,
NULL_TREE);








[PATCH, Fortran] Correct declaration of frexp and friends

2011-03-30 Thread Duncan Sands

While working on the dragonegg plugin I noticed that the Fortran front-end
declares frexp with the parameters the wrong way round.  Instead of
  double frexp(double x, int *exp);
it is declared as
  double frexp(int *exp, double x);
This is fairly harmless but might as well be fixed, so here is a patch (as far
as I can see fntype[4] is only used in declaring the frexp family of functions).
Bootstraps and has no impact on the Fortran testsuite (tested on mainline).  OK
to apply on mainline and the 4.5 and 4.6 branches?

Proposed fortran/Changelog entry:
2011-03-30  Duncan Sands  baldr...@free.fr

* f95-lang.c (build_builtin_fntypes): Swap frexp parameter types.


Index: gcc/fortran/f95-lang.c
===
--- gcc/fortran/f95-lang.c  (revision 171716)
+++ gcc/fortran/f95-lang.c  (working copy)
@@ -695,10 +695,9 @@
 type, integer_type_node, NULL_TREE);
   /* type (*) (void) */
   fntype[3] = build_function_type_list (type, NULL_TREE);
-  /* type (*) (int, type) */
-  fntype[4] = build_function_type_list (type,
+  /* type (*) (type, int) */
+  fntype[4] = build_function_type_list (type, type,
 build_pointer_type (integer_type_node),
-type,
 NULL_TREE);
   /* type (*) (int, type) */
   fntype[5] = build_function_type_list (type,


Re: Ada.Exceptions.Exception_Propagation is not a predefined library unit

2010-10-14 Thread Duncan Sands

Hi Luke,


a-exexpr.adb:39:06: Ada.Exceptions.Exception_Propagation is not a
predefined library unit


it looks like you get this error when the compiler can't find a file that it
thinks forms part of the Ada library (this is determined by the name, eg: a
package Ada.XYZ is expected to be part of the Ada library).  For example,
if the compiler looks for the spec of Ada.Exceptions.Exception_Propagation
(which should be called a-exexpr.ads) but can't find it then you will get
this message.  At least, that's my understanding from a few minutes of
rummaging around in the source code.

Ciao,

Duncan.


Dragonegg-2.8 released

2010-10-12 Thread Duncan Sands

A week and a day after the LLVM 2.8 release, I'm pleased to announce
the availability of the corresponding dragonegg release.  Get it while
it's hot!  http://dragonegg.llvm.org/#gettingrelease

Duncan.


Re: plugin-provided pragmas Fortran or Ada?

2010-06-22 Thread Duncan Sands

Hi Basile,


Assuming a plugin (e.g. MELT) adds a new pragma using PLUGIN_PRAGMAS, is
this pragma usable from Ada or Fortran code?

I am not very familiar with Ada or Fortran. I believe Ada has some
syntax for pragmas -but do Ada pragma have the same API inside GCC
plugins as C or C++ pragmas?- and I am not sure about Fortran. Many
Fortran dialects or implementations parse specially some kind of
comments -but I don't know if these are giving pragma events to plugins.


I'm pretty sure this won't work for Ada without modifying the Ada
front-end.

Ciao,

Duncan.


Re: plugin-provided pragmas Fortran or Ada?

2010-06-22 Thread Duncan Sands

Hi Basile,


Assuming a plugin (e.g. MELT) adds a new pragma using PLUGIN_PRAGMAS, is
this pragma usable from Ada or Fortran code?

I am not very familiar with Ada or Fortran. I believe Ada has some
syntax for pragmas -but do Ada pragma have the same API inside GCC
plugins as C or C++ pragmas?- and I am not sure about Fortran. Many
Fortran dialects or implementations parse specially some kind of
comments -but I don't know if these are giving pragma events to plugins.


I'm pretty sure this won't work for Ada without modifying the Ada
front-end.


Are there any plugin hooks for Ada pragmas? Perhaps the Ada team might
consider adding some, if it is simple enough. Specific pragmas have
definitely their places in plugins.


I'm pretty sure there are no such plugin hooks right now.

Ciao,

Duncan.


Re: GCC plugin support when using Ada

2010-06-19 Thread Duncan Sands

Hi PeteGarbett,


I see nothing in the GCC 4.5 release notes about
plugin support being language specific, and yet if I using the treehydra
plugin with Ada (admittedly using a patched GCC 4.3.4 as per the dehydra
notes), I get this


I use plugins with Ada all the time, with gcc-4.5, and it works fine for me.

Ciao,

Duncan.


Re: Using C++ in GCC is OK

2010-06-01 Thread Duncan Sands

On 01/06/10 10:03, Paolo Bonzini wrote:

On 05/31/2010 12:30 PM, 徐持恒 wrote:

I think compiler can and should be host independent, like LLVM.


It is. Changes to code generation depending on the host are considered
to be serious bugs, and have been long before LLVM existed.


Perhaps 徐持恒 meant target independent, in the sense that with LLVM the choice
of target is made at run-time, and not when building LLVM [*].  Just a guess
though.

Ciao,

Duncan.

[*] It is possible to choose which targets to build when configuring LLVM.
If only one is chosen then of course that's the only one that can be chosen
at run-time.


Re: Where does the time go?

2010-05-20 Thread Duncan Sands

Hi,


I don't know if it is big or not to have such time spent in RTL parts.  But I
think that this RTL part could be decreased if RTL (magically :) had a
smaller footprint and contained less detail.


checks pockets...
Bah, no wand... :-)


I noticed while working on the dragonegg plugin that replacing gimple -> RTL
with gimple -> LLVM IR significantly reduced the amount of memory used by the
compiler at -O0.  I didn't investigate where the memory was going, but it seems
likely that RTL either contains a whole lot more information than the LLVM IR,
or doesn't represent it in a very memory efficient way.

Ciao,

Duncan.


Re: --enable-plugin as default?

2010-04-23 Thread Duncan Sands

Plugin support is enabled by default if it works.


I can confirm this - on my linux box I don't have to explicitly
specify --enable-plugin.

Ciao,

Duncan.


Re: Some benchmark comparison of gcc4.5 and dragonegg (was dragonegg in FSF gcc?)

2010-04-21 Thread Duncan Sands

Hi Vladimir, thank you for doing this benchmarking.


Only SPECInt2000 for x86_64 has been compiled fully successfully by
dragonegg. There were a few compiler crashes including some in LLVM
itself for SPECFP2000 and for SPECINT2000 for x86.


Sorry about that.  Can you please send me preprocessed code for the
spec tests that crashed the plugin (unless you are not allowed to).
By the way, if you target something (eg: i386) that doesn't have SSE
support then I've noticed that the plugin tends to crash on code that
does vector operations.  If you have assertions turned on in LLVM then
you get something like:

Assertion `TLI.isTypeLegal(Op.getValueType()) && "Intrinsic uses a non-legal type?"' failed.

Stack dump:
0.  Running pass 'X86 DAG->DAG Instruction Selection' on function '@_ada_sse_nolib'

So if the compile failures are of that kind, no need to send testcases, I
already have several.

Best wishes,

Duncan.


Re: Some benchmark comparison of gcc4.5 and dragonegg (was dragonegg in FSF gcc?)

2010-04-21 Thread Duncan Sands

Hi Vladimir,


Dragonegg does not work with -flto. It generates assembler code about which
gas complains (a lot of non-assembler code like target data-layout
which is not in comments).


actually it does work with -flto, in an awkward way.  When you use -flto
it spits out LLVM IR.  You need to use -S, otherwise the system assembler
tries (and fails) to compile this.  You need to then use llvm-as to turn
this into LLVM bitcode.  You can then link and optimize the bitcode either
by hand (using llvm-ld) or using the gold plugin, as described in
  http://llvm.org/docs/GoldPlugin.html

It is annoying that gcc insists on running the system assembler when passed
-c.  Not running the assembler isn't only good for avoiding the -S + llvm-as
rigmarole mentioned above.  LLVM is now capable of writing out object files
directly, i.e. without having to pass via an assembler at all.  It would be
neat if I could have the plugin immediately write out the final object file
if -c is passed.  I didn't work out how to do this yet.  It probably requires
some gcc modifications, so maybe something can be done for gcc-4.6.

For transparent LTO another possibility is to encode LLVM bitcode in the
assembler in the same way as gcc does for gimple when passed -flto.  I didn't
investigate this yet.

Ciao,

Duncan.


Re: ICE: -flto and -g

2010-04-21 Thread Duncan Sands

$ /usr/bin/g++-4.5 -O0 -g -flto -o kfinddialog.o -c kfinddialog.ii
../../kdeui/findreplace/kfinddialog.cpp: In member function ‘RegExpAction’:
../../kdeui/findreplace/kfinddialog.cpp:445:9: internal compiler error: tree
check: expected class ‘type’, have ‘declaration’ (function_decl) in
gen_type_die_with_usage, at dwarf2out.c:18962


This looks like PR42653, see http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42653

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-14 Thread Duncan Sands

Hi Steven,


FWIW, this sounds great and all... but I haven't actually seen any
comparisons of GCC vs. LLVM with DragonEgg. A search with Google
doesn't give me any results.

Can you point out some postings where people actually made a
comparison between GCC and LLVM with DragonEgg?


I gave some comparisons in my talk at the 2009 LLVM developers meeting.
See the links at the bottom of http://dragonegg.llvm.org/

Since then I've been working on completeness and correctness, and didn't
do any additional benchmarking yet.  I don't know if anyone else did any
benchmarking.  If so they didn't inform me.

Ciao,

Duncan.


Re: Notes from the GROW'10 workshop panel (GCC research opportunities workshop)

2010-04-14 Thread Duncan Sands

Hi Manuel,


PS: On the other hand, I think that modifying GCC to suit the purposes
of dragonegg or LLVM is a *bad* idea.


my policy has been to only propose GCC patches that are useful to GCC itself.
Well, yesterday I broke this rule and posted a patch that was only of interest
to dragonegg, but let's hope that this is the exception that proves the rule,
as they say :)

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-12 Thread Duncan Sands

Hi Jonathan,


egcs code was always license-compatible with GCC and was always
assigned to the FSF

The difference is quite significant.


both dragonegg and LLVM are license-compatible with GCC.  The dragonegg
code is licensed under GPLv2 or later, while LLVM is licensed under the
University of Illinois/NCSA Open Source License, which is GPL compatible
according to

 http://www.gnu.org/licenses/license-list.html#GPLCompatibleLicenses

The dragonegg plugin, being a combination of these plus GCC, is therefore
GPLv3.

You are of course quite right that neither LLVM nor dragonegg has its
copyright assigned to the FSF.

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-11 Thread Duncan Sands

Hi Steven,


I think Jack wasn't suggesting that dragonegg should be changed to not be
a plugin any more.  I think he was suggesting that it should live in the gcc
repository rather than the LLVM repository.


So, no offense, but the suggestion here is to make this subversive
(for FSF GCC) plugin part of FSF GCC? What is the benefit of this for
GCC? I don't see any. I just see a plugin trying to piggy-back on the
hard work of GCC front-end developers and negating the efforts of
those working on the middle ends and back ends.


I'm sorry you see the dragonegg project so negatively.  I think it is useful
for gcc (though not hugely useful), since it makes it easy to compare the gcc
and LLVM optimizers and code generators, not to mention the gcc and LLVM
approaches to LTO.  If LLVM manages to produce better code than gcc for some
testcase, then it is a convenient tool for the gcc devs to find out why, and
improve gcc.  If gcc is consistently better than LLVM then there's nothing to
worry about!  Of course, right now it is LLVM that is mostly playing catchup
with gcc, so for the moment it is principally the LLVM devs that get to learn
from gcc, but as LLVM improves the other direction is likely to occur more
often.

As for negating the efforts of those working on the middle ends and back ends,
would you complain if someone came up with a new register allocator because it
negates the efforts of those who work on the old one?  If LLVM is technically
superior, then that's a fact and a good thing, not subversion, and hopefully
will encourage the gcc devs to either improve gcc or migrate to LLVM.  If GCC
is technically superior, then hopefully the dragonegg project will help people
see this, by making it easier to compare the two technologies, and result in
them giving up on LLVM and working on or using gcc instead.

In my opinion a bit of friendly competition from LLVM is on the whole a good
thing for gcc.

That said, maybe your worry is that dragonegg makes it easier to undermine the
GPL, or perhaps you don't like LLVM's BSD style license?  I really have no
understanding of the legal issues involved with undermining the GPL, but I
know that some of the gcc devs have thought hard about this so perhaps they can
comment.  I'm personally not at all interested in undermining the GPL.  As for
licenses, the dragonegg plugin, as a combined work of GPLv3 code (gcc), GPLv2
or later (the plugin) and GPL compatible code (LLVM), is as far as I can see
GPLv3 and as such no different to gcc itself.

Finally, I don't see much point in dragonegg being moved to the gcc repository.
It wasn't I who suggested it.

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-11 Thread Duncan Sands

Hi Eric,


As for negating the efforts of those working on the middle ends and back
ends, would you complain if someone came up with a new register allocator
because it negates the efforts of those who work on the old one?  If LLVM
is technically superior, then that's a fact and a good thing, not
subversion, and hopefully will encourage the gcc devs to either improve gcc
or migrate to LLVM.


Well, the last point is very likely precisely what Steven is talking about.
GCC doesn't have to shoot itself in the foot by encouraging its developers to
migrate to LLVM.


I hope it was clear from my email that by gcc I was talking about the gcc
optimizers and code generators and not the gcc frontends.  If the dragonegg
project shows that feeding the output of the gcc frontends into the LLVM
optimizers and code generators results in better code, then gcc can always
change to using the LLVM optimizers and code generators, resulting in a better
compiler.  I don't see how this is gcc the compiler shooting itself in the foot.

Of course, some gcc devs have invested a lot in the gcc middle and back ends,
and moving to LLVM might be personally costly for them.  Thus they might be
shooting themselves in the foot by helping the LLVM project, but this should
not be confused with gcc the compiler shooting itself in the foot.

All this is predicated on gcc-frontends+LLVM producing better code than the
current gcc-frontends+gcc-middle/backends.  As I mentioned, dragonegg makes
it easier, even trivial, to test this.  So those who think that LLVM is all
hype should be cheering on the dragonegg project, because now they have a
great way to prove that gcc does a better job!

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-11 Thread Duncan Sands

Hi David,


The Graphite project and the various GCC targets participate in GCC
development.  Helping fix GCC bugs affecting those features, supports
and grows the GCC developer base.  There needs to be some mutualistic
relationship.  I don't see members of the LLVM community arguing that
they should contribute to GCC to improve performance comparisons.


as I mentioned in my email, I see dragonegg as being a useful tool for
comparing the gcc and LLVM optimizers and code generators.  That sounds
like the kind of thing you are asking for, but perhaps I misunderstood?


As Steven mentioned, LLVM has been extremely effective at utilizing
FSF technology while its community complains about the FSF, GCC, GCC's
leadership and GCC's developer community.


It is true that plenty of people disaffected with gcc can be found in the
LLVM community.  Dislike of gcc or its license seems a common motivation
for looking into the clang compiler for example.  It seems to me that this
is a natural phenomenon - where else would such people go?  It would be a
mistake to think that the LLVM community consists principally of gcc haters
though.

If GCC is so helpful and
useful and effective, then work on it as well and give it credit; if
GCC is so bad, then why rely on it?  The rhetoric is disconnected from
the actions.


I'm not sure what you mean.  Working on an LLVM middle-end/back-end for
gcc doesn't mean I despise the gcc middle-end and back-ends, it just means
that I think this is an interesting project with the potential to result
in a better gcc in the long term.

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-11 Thread Duncan Sands

Hi Grigori,


Hope my question will not completely divert the topic of this discussion -
just curious: what do you mean by better code? Better execution time, code size,
compilation time?


this depends on each persons needs of course.  The dragonegg plugin makes it
easy for people to see if the LLVM optimizers and code generators are helpful
for their projects.  Evaluating whether replacing whole-sale the gcc middle and
backends with LLVM (which I consider pretty unlikely) is an overall win is much
harder, but I doubt anyone on this mailing list needs to be told that.


If yes, then why not compare different compilers by just compiling multiple
programs with GCC, LLVM, Open64, ICC, etc. separately to compare those
characteristics, and then find missing optimizations or better combinations of
optimizations to achieve the result?


how do you compile a program with LLVM?  It's not a compiler, it's a set of
optimization and codegen libraries.  You also need a front-end, which takes
the users code and turns it into the LLVM intermediate representation [IR].  The
dragonegg plugin takes the output of the gcc-4.5 front-ends, turns it into LLVM
IR and runs the LLVM optimizers and code generators on it.  In other words, it
is exactly what you need in order to compile programs with LLVM.  There is also
llvm-gcc, which is a hacked version of gcc-4.2 that does much the same thing,
and for C and C++ there is now the clang front-end to LLVM.  The big advantage
of dragonegg is that it isolates the effect of the LLVM optimizers and code
generators by removing the effect of having a different front-end.  For example,
if llvm-gcc produces slower code than gcc-4.5, this might be due to front-end
changes between gcc-4.2 and gcc-4.5 rather than because the gcc optimizers are
doing a better job.  This confounding factor goes away with the dragonegg
plugin.

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-11 Thread Duncan Sands

Hi Robert,


b) better behavior for undefined cases


this is one of the problems with using LLVM with the Ada front-end.  LLVM makes
pretty aggressive deductions when it sees undefined behaviour, which can result
in (for example) validity checks being removed exactly in the cases when they
are most needed.  There are various ways of solving this problem, but I didn't
implement any of them yet.

Ciao,

Duncan.


Re: dragonegg in FSF gcc?

2010-04-10 Thread Duncan Sands

Hi Basile,


I tend to be quite happy with the idea of dragonegg being a good GCC
plugin, since it is a good illustration of the plugin feature.


I think Jack wasn't suggesting that dragonegg should be changed to not be
a plugin any more.  I think he was suggesting that it should live in the gcc
repository rather than the LLVM repository.

Ciao,

Duncan.


Re: packaging GCC plugins using gengtype (e.g. MELT)?

2010-03-14 Thread Duncan Sands

On 14/03/10 21:48, Matthias Klose wrote:

On 14.03.2010 13:15, Basile Starynkevitch wrote:

Basile Starynkevitch wrote in
http://lists.debian.org/debian-gcc/2010/03/msg00047.html


Now, one of the issues about MELT  Debian packaging is the fact that
melt-runtime.c (the source of melt.so plugin) uses GTY
http://gcc.gnu.org/onlinedocs/gccint/Type-Information.html#Type-Information

 & registers GGC roots thru PLUGIN_REGISTER_GGC_ROOTS ... Hence, it
needs gengtype (from GCC 4.5 build tree) to generate gt-melt-runtime.h
[#include-ed from melt-runtime.c] so the entire GCC 4.5 source & build
trees are needed to build melt.so (or any other gengtype-using GCC plugin).


there was another request to add the gengtype binary to the package,
required by the dragonegg plugin. details at:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=562882#13


I don't think dragonegg needs this.  The generated header seems to be
perfectly generic, so I've simply bundled it in with the plugin source.
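
For context, registering gengtype-produced roots from a plugin looks roughly
like this (a hedged sketch; gt_ggc_r_gt_melt_runtime_h stands for whatever
table name gengtype emits into the generated header):

#include "gcc-plugin.h"
#include "ggc.h"

int plugin_is_GPL_compatible;

/* Declared in the gengtype-generated header (gt-melt-runtime.h here).  */
extern const struct ggc_root_tab gt_ggc_r_gt_melt_runtime_h[];

int
plugin_init (struct plugin_name_args *info, struct plugin_gcc_version *version)
{
  /* Tell the garbage collector about the plugin's GTY-marked roots.  */
  register_callback (info->base_name, PLUGIN_REGISTER_GGC_ROOTS,
                     NULL, (void *) gt_ggc_r_gt_melt_runtime_h);
  return 0;
}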

Ciao,

Duncan.


Re: LLVM as a gcc plugin?

2009-06-04 Thread Duncan Sands

Hi,


Some time ago, there was a discussion about integrating LLVM and GCC
[1]. However, with plugin infrastructure in place, could LLVM be
plugged into GCC as an additional optimization plugin?


I plan to start working on an llvm plugin any day now.

Ciao,

Duncan.


Re: LLVM as a gcc plugin?

2009-06-04 Thread Duncan Sands

Hi Rafael,


There was some talk about it on #gcc. A plugin should be able to see
all the GCC IL, so it should be able to convert it to LLVM. Keeping
the current llvm-gcc interface would require some hacks

*) The plugin will have to call exit to keep gcc's code generation from running.


this would work when doing unit-at-a-time, but not when doing
function-at-a-time.  Does gcc still do function-at-a-time?


Another source of problem will be the early transformations that gcc
does and that are normally disabled in llvm-gcc. The one that I
remember right now is c++ thunk generation.


Good point.

Ciao,

Duncan.


Re: New GCC releases comparison and comparison of GCC4.4 and LLVM2.5 on SPEC2000

2009-05-13 Thread Duncan Sands
Hi,

 Sorry, I forgot to mention that I used an additional option, -mpc64, for
 32-bit GCC 4.4.  It is not possible to generate the expected SPECFP2000
 results with GCC 4.4 without this option.  LLVM does not support this
 option, and it can significantly improve performance.  So the 32-bit
 SPECFP2000 comparison should be taken with a grain of salt.

what does -mpc64 do exactly?  The gcc docs say:
  `-mpc64' rounds the significands of results of floating-point operations
to 53 bits (double precision)
Does this mean that a rounding operation is performed after each fp
operation, or that optimizations are permitted that don't result in
accurate extended double precision values as long as they are correct
to 53 bits, or something else?

The LLVM code generators have an option called -limit-float-precision:
  -limit-float-precision=uint   - Generate low-precision inline sequences 
for some float libcalls
I'm not sure what it does exactly, but perhaps it is similar to -mpc64?

Ciao,

Duncan.


Re: New GCC releases comparison and comparison of GCC4.4 and LLVM2.5 on SPEC2000

2009-05-13 Thread Duncan Sands
Hi Richard,

 -mpc64 sets the x87 floating point control register to not use the 80bit
 extended precision.  This causes some x87 floating point operations
 to operate faster and there are no issues with the extra roundings you
 get when storing an 80bit precision register to a 64bit memory location.

I see, thanks for the explanation.
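
For the record, on glibc/x86 the effect of -mpc64 can be reproduced by hand
at start-up with something like the following sketch (as far as I understand
it, the option itself just links in start-up code that does the equivalent):

#include <fpu_control.h>   /* glibc, x86 only */

static void set_x87_double_precision (void)
{
  fpu_control_t cw;
  _FPU_GETCW (cw);
  /* Clear the precision-control field and select 53-bit (double) precision.  */
  cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
  _FPU_SETCW (cw);
}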

 Does LLVM support x87 arithmetic at all or does it default to SSE
 arithmetic in 32bits?  I guess using SSE math for both would be a more
 fair comparison?

LLVM does support the x86 floating point stack, though it doesn't support
all asm expressions for it (which is why llvm-gcc disables math inlines).
My understanding is that no effort has been made to produce optimal code
when using the x86 fp stack, and all the effort when into SSE instead.

Ciao,

Duncan.


Re: Transforms on SSA form

2008-12-03 Thread Duncan Sands
Hi,

 I am looking to transform a tree in SSA form into a representation of it in C.

you can try using LLVM (which uses an IR in SSA form): it has a C backend
that squirts out C equivalent to the IR.  The resulting C is not very nice to
read though.

Ciao,

Duncan.

PS: This is a cute way of getting an Ada to C translator, or a Fortran to C
translator: use the llvm-gcc Ada/Fortran front-ends to turn your program into
LLVM IR, then use the C backend to turn it into C.  This currently doesn't
support all constructs though (eg: exception handling).


Re: Apple-employed maintainers (was Re: Apple, iPhone, and GPLv3 troubles)

2008-09-24 Thread Duncan Sands
  However if GPLv3 is such a huge issue
  at Apple, it does make one wonder if llvm will ever see a gcc front-end 
  newer
  than the current 4.2 one.
 
 The LLVM folks are writing a new frontend anyhow.  In the future they
 presumably plan to stop using the gcc frontend.  gcc's code is so
 tangled anyhow, it's not like using the gcc frontend somehow makes
 LLVM compile code the same way gcc does.

I'm quite interested in porting llvm-gcc to gcc head, in order to
get better Ada support.  Apple isn't planning Ada support in their
new compiler (clang) as far as I know :)

Duncan.

PS: I have no connection with Apple.


Warnings when building the Ada f-e

2008-09-02 Thread Duncan Sands
Building gcc from svn today I see the following:

prj-nmsc.adb: In function ‘Prj.Nmsc.Check_Naming_Schemes’:
prj-nmsc.adb:3272: warning: ‘Casing’ may be used uninitialized in this function
...
g-socket.adb: In function ‘GNAT.SOCKETS.SEND_SOCKET’:
g-socket.adb:1786: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.RECEIVE_SOCKET’:
g-socket.adb:1586: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.GET_SOCKET_NAME’:
g-socket.adb:1001: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.CONNECT_SOCKET’:
g-socket.adb:623: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.CONNECT_SOCKET’:
g-socket.adb:655: warning: ‘REQ’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.BIND_SOCKET’:
g-socket.adb:396: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.ACCEPT_SOCKET’:
g-socket.adb:277: warning: ‘SIN’ is used uninitialized in this function
g-socket.adb: In function ‘GNAT.SOCKETS.GET_PEER_NAME’:
g-socket.adb:929: warning: ‘SIN’ is used uninitialized in this function
...
a-strmap.adb: In function ‘Ada.Strings.Maps.To_Set’:
a-strmap.adb:269: warning: ‘Result’ is used uninitialized in this function
a-strmap.adb: In function ‘Ada.Strings.Maps.To_Set’:
a-strmap.adb:285: warning: ‘Result’ is used uninitialized in this function
...
g-comlin.adb: In function ‘GNAT.COMMAND_LINE.FIND_LONGEST_MATCHING_SWITCH’:
g-comlin.adb:96: warning: ‘PARAM’ may be used uninitialized in this function
g-comlin.adb:96: note: ‘PARAM’ was declared here

This is x86_64-unknown-linux-gnu.

Ciao,

Duncan.


Re: LLVM 2.3 Released

2008-06-09 Thread Duncan Sands
Are there any specific plans for moving llvm-gcc from the
 gcc 4.2 to the gcc 4.3 code base?

I plan to port llvm-gcc to gcc head, since I'm interested in the
Ada front-end and the Ada support in gcc-4.4 is much better than
in gcc-4.2.  However I can't say when this will happen, since I
don't have much time to spend on it.

Best wishes,

Duncan.


Re: LLVM 2.2

2008-02-16 Thread Duncan Sands
 Another is that it supports Ada (32 bit x86 on linux only for the moment)
 and Fortran to some extent.  I'm currently adding build instructions for
 these two languages to http://llvm.org/docs/CFEBuildInstrs.html (should
 be up in a day or two).  The release notes detail what works and what
 doesn't.

I've added build instructions for the Ada f-e.  They can be found at
http://llvm.org/docs/GCCFEBuildInstrs.html

Duncan.


Re: LLVM 2.2

2008-02-12 Thread Duncan Sands
 One of the big changes is that we now recommend the GCC 4.2-based  
 front-end,

Another is that it supports Ada (32 bit x86 on linux only for the moment)
and Fortran to some extent.  I'm currently adding build instructions for
these two languages to http://llvm.org/docs/CFEBuildInstrs.html (should
be up in a day or two).  The release notes detail what works and what
doesn't.

Ciao,

Duncan.


Re: ACATS Results for powerpc-rtems on Trunk

2008-02-09 Thread Duncan Sands
Hi,

 4.2.3 only failed c380004, c761007, and c953002.

c380004 can be considered to be an expected failure.
It also fails on x86-linux, and this is normal because
the code produced by the front-end (gcc-4.2) can't possibly pass.

Best wishes,

Duncan.


Re: powercp-linux cross GCC 4.2 vs GCC 4.0.0: -Os code size regression?

2008-01-16 Thread Duncan Sands
Hi,

 I'm using the ppc-linux gcc-4.2.2 compiler and noticed that the code
 size has increased significantly (about 40%!) compared with the
 old 4.0.0 when using the -Os option. Same code, same compile-
 and configuration-time options. The binutils versions differ
 (2.16.1 vs 2.17.50), though.

what LLVM version is old 4.0.0?  Are you compiling C++ (I don't know
what CSiBE is)?  Are you using exception handling?

 I've looked at the CSiBE testing results for ppc-elf with -Os,
 comparing gcc_4_0_0 with mainline and found that the mainline
 actually optimizes better, at least for the CSiBE test environment.
  After some analysis I've come to the following results:
   Number of packages in the CSiBE test environment: 863
   N of packages where mainline GCC optimizes better:   290
   N of packages where mainline GCC optimizes worse: 436

From these numbers it looks like llvm-gcc is better than mainline
most of the time.  However you say: ... found that the mainline
actually optimizes better.  Can you please clarify.

Best wishes,

Duncan.


Re: powercp-linux cross GCC 4.2 vs GCC 4.0.0: -Os code size regression?

2008-01-16 Thread Duncan Sands
 LLVM? From what I know llvm-gcc is an alternative for gcc. Are any
 parts of LLVM used in current GCC? None of what I know.

Sorry, I confused my mailing lists and thought you had asked on
the LLVM mailing list.  This explains why I didn't understand
your questions :)

Sorry about the noise,

Duncan.


Re: Optimization of conditional access to globals: thread-unsafe?

2007-10-29 Thread Duncan Sands
Hi Tomash,

   moonlight:/tmp$ /usr/local/gcc-4.3-trunk/bin/gcc -O0 mmap.c -o mmap
   moonlight:/tmp$ ./mmap
   GCC is the best compiler ever!
   moonlight:/tmp$ /usr/local/gcc-4.3-trunk/bin/gcc -O1 mmap.c -o mmap
   moonlight:/tmp$ ./mmap
   Segmentation fault

I don't see this with gcc 4.1 or 4.2.  Just a data point.

Ciao,

Duncan.


Re: Static Chain Argument in call_expr

2007-03-08 Thread Duncan Sands
 in tree.def, in DEFTREECODE for call_expr, it says operand 2 is the
 static chain argument, or NULL. Can someone tell me or reference me to
 what static chain argument is?

It's for nested functions, eg

int parent (int n)
{
  int child (int m) { return m * n; }

  return child (2);
}

Notice how child uses a variable of parent, namely
the parameter n.  This gets lowered to something like:

struct frame { int n; };

int child (struct frame *static_chain, int m) { return m * static_chain->n; }

int parent (int n)
{
  struct frame FRAME;
  FRAME.n = n;
  return child (&FRAME, 2);
}

Ciao,

Duncan.


Re: What tells the coding style about whitespaces at end of lines or in *empty* lines ?

2007-03-01 Thread Duncan Sands
  I noticed while editing gcc files that there is a lot of *useless*
  whitespace at the end of lines or within empty lines, which gets
  automatically removed by the *smarter* editors I commonly use *sigh*.
  This leads to huge diff files in which the real change is veiled. I
  think it would be nice to eliminate this *useless* whitespace.
 
 Note that the coding standard for GNAT forbids trailing white spaces.
 It also forbids embedded horizontal tabs for similar reasons (avoiding
 junk difs).

And the compiler enforces this, which is an important point.

Duncan.


Re: Fold and integer types with sub-ranges

2007-02-25 Thread Duncan Sands
On Saturday 24 February 2007 14:27:36 Richard Kenner wrote:
  Sure - I wonder if there is a reliable way of testing whether we face
  a non-base type in the middle-end.  I suppose TREE_TYPE (type) != NULL
  won't work in all cases... (?)
 
 That's the right way as far as I know.

Note that having TREE_TYPE(type)!=NULL does not imply that the type and the
base type are inequivalent.  For example, if you declare a type Int as follows:
subtype Int is Integer;
then TREE_TYPE(type_for_Int)=type_for_Integer, but the types are equivalent,
in particular they have the same TYPE_MIN_VALUE and TYPE_MAX_VALUE.
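
To make the distinction concrete, a check for this situation might look
something like the following sketch (hypothetical helper, not actual GCC
code):

/* Return true if TYPE has a base type and covers exactly the same range,
   i.e. the subtype imposes no extra constraint on its values.  */
static bool
subtype_covers_base_range (tree type)
{
  tree base = TREE_TYPE (type);
  return base != NULL_TREE
         && TYPE_MIN_VALUE (type) && TYPE_MIN_VALUE (base)
         && TYPE_MAX_VALUE (type) && TYPE_MAX_VALUE (base)
         && tree_int_cst_equal (TYPE_MIN_VALUE (type), TYPE_MIN_VALUE (base))
         && tree_int_cst_equal (TYPE_MAX_VALUE (type), TYPE_MAX_VALUE (base));
}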

Ciao,

Duncan.


Re: Fold and integer types with sub-ranges

2007-02-23 Thread Duncan Sands
 Currently for example in fold_sign_changed_comparison we produce
 integer constants that are not inside the range of its type values
 denoted by [TYPE_MIN_VALUE (t), TYPE_MAX_VALUE (t)].  For example
 consider a type with range [10, 20] and the comparison created by
 the Ada frontend:
 
  if ((signed char)t == -128)
 
 t being of that type [10, 20] with TYPE_PRECISION 8, like the constant
 -128.  So fold_sign_changed_comparison comes along and decides to strip
 the conversion and convert the constant to type T which looks like
...
 What do we want to do about that?  Do we want to do anything about it?
 If we don't want to do anything about it, why care about an exact
 TREE_TYPE of integer constants if the only thing that matters is
 signedness and type precision?

I don't think gcc should be converting anything to a type like t's unless
it can prove that the thing it's converting is in the range of t's type.  So
it presumably should try to prove: (1) that -128 is not in the range of
t's type; if it's not, then fold the comparison to false; otherwise (2) try
to prove that -128 is in the range of t's type; if so, convert it.  Otherwise
do nothing.

That said, this whole thing is a can of worms.  Suppose the compiler wants to
calculate t+1.  Of course you do something like this:

int_const_binop (PLUS_EXPR, t, build_int_cst (TREE_TYPE (t), 1), 0);

But if 1 is not in the type of t, you just created an invalid value!

Personally I think the right thing to do is to eliminate these types
altogether somewhere early on, replacing them with their base types
(which don't have funky ranges), inserting appropriate ASSERT_EXPRs
instead.  Probably types like t should never be seen outside the Ada
f-e at all.

Ciao,

Duncan.


Re: what is difference between gcc-ada and GNAT????

2007-02-16 Thread Duncan Sands
Hi Sameer Sinha,

   can anyone tell me what the difference is between gcc-ada and
 the different other compilers for Ada 95 like GNAT GPL and GNAT Pro?
 What is the procedure to build only the Ada language using the source code of
 gcc-4.1?

they are closely related.  There are two groups:
(1) versions released by ACT (http://www.gnat.com): GNAT GPL and GNAT Pro.
(2) versions released by the FSF (http://gcc.gnu.org): gcc-4.1 etc

GNAT Pro is ACT's commercial offering: you have to have a support contract
with them to get it.  GNAT GPL is basically the same compiler, but it is
unsupported, and the license has been changed so that software built with
and distributed to the world has to be under the GPL license to be legal
(at least, that seems to be the intention).

The FSF compiler is freely available and doesn't have the license restrictions
of the GNAT GPL.

The technical differences between the compilers are basically in the code
generators: the ACT compilers use a modified code generator from gcc 3.4.6.
The FSF compilers use more recent and quite different code generators.
However the ACT compilers tend to be more stable than the FSF ones, because
the FSF code generators and the Ada front-end do not yet interact perfectly.

I understand that the next releases of GNAT GPL and GNAT Pro will be based
on the same code generator as the FSF compilers, at which point the difference
between the FSF and ACT offerings will doubtless be much less.

Best wishes,

Duncan.


Re: what is difference between gcc-ada and GNAT????

2007-02-16 Thread Duncan Sands
 So we are in better shape than implied above. We have quite
 a reasonable set of stability and regression tests for the
 Ada front end. Given the restrictions on proprietary code
 use, this is about as good as we can do for now. Of course
 it is valuable if people submit more tests to this test suite.

Couldn't you pass proprietary code through an obfuscator?

Duncan.


Re: what is difference between gcc-ada and GNAT????

2007-02-16 Thread Duncan Sands
 But Duncan, you were generating a bunch of proprietary
 Ada code recently, if you can get people to be comfortable
 submitting it, possibly in obfuscated form, by all means
 go ahead!

I already started doing this, see
http://gcc.gnu.org/ml/gcc/2006-07/msg00591.html

Duncan.


Re: Miscompilation of remainder expressions

2007-01-16 Thread Duncan Sands
On Tuesday 16 January 2007 16:50, Andrew Haley wrote:
 Roberto Bagnara writes:
   Andrew Haley wrote:
Roberto Bagnara writes:
  
  Reading the thread Autoconf manual's coverage of signed integer
  overflow  portability I was horrified to discover about GCC's
  miscompilation of the remainder expression that causes INT_MIN % -1
  to cause a SIGFPE on CPUs of the i386 family.  Are there plans to
  fix this bug (which, to me, looks quite serious)?

No, there aren't.  It would make more sense for you to wrap % in some
code that checks for this, rather than for us to slow down every division
for this one special case.
   
   With all due respect, I must say I am shocked.  I always thought
   (and taught) that we, Free Software people, value standard conformance
   and getting things right.
 
 This is a disagreement about interpretation of the language in the
 standard, which is:
 
 The result of the / operator is the quotient from the division of the
 first operand by the second; the result of the % operator is the
 remainder. In both operations, if the value of the second operand is
 zero, the behavior is undefined. When integers are divided, the result
 of the / operator is the algebraic quotient with any fractional part
 discarded.87) If the quotient a/b is representable, the expression
 (a/b)*b + a%b shall equal a.
 
 If the quotient a/b is *not* representable, is the behaviour of %
 well-defined or not?  It doesn't say.

In ada/exp_ch4.adb you will find:

 --  Deal with annoying case of largest negative number remainder
 --  minus one. Gigi does not handle this case correctly, because
 --  it generates a divide instruction which may trap in this case.

 --  In fact the check is quite easy, if the right operand is -1,
 --  then the mod value is always 0, and we can just ignore the
 --  left operand completely in this case.

Ada semantics require INT_MIN rem -1 to be zero.
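
In C terms, the workaround that comment describes amounts to something like
this (a sketch, not the code the compiler actually generates):

/* Remainder that avoids the INT_MIN % -1 trap on x86: the mathematical
   result is always 0 when the divisor is -1, so skip the division entirely.  */
int
safe_rem (int a, int b)
{
  if (b == -1)
    return 0;
  return a % b;
}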

Best wishes,

Duncan.


Re: Scheduling

2007-01-05 Thread Duncan Sands
 Please does anyone know the answer to the following questions?
 
 1. The operating system (OS) schedules tasks, but gnat allow us to set 
 schedule policies such as Round Robin, then how does gnat tell the OS to 
 start doing Round Robin scheduling?
 
 2. If someone wants to write a new scheduling policy, what files I need to 
 add and update to tell gnat to use my scheduling policy.
 
 For example, if I want tasks to block, even though it does NOT want to use a 
 shared data object, but  if another task on the same cpu is running and is 
 using a shared data object that other tasks on other CPUs need that shared 
 data object.
 
 For example, I see for Round Robin, gnat has the following files: 
 a-diroro.ads and a-diroro.adb
 
 3.  Which gnat files for tasking and scheduling tell the tasks to use these 
 files and how these files hookup to the tasking model?
 
 4. What gnat file is the Scheduling and Tasking file?  I want to see how it 
 uses the Round Robin or how scheduling works in gnat?

I suggest you try asking on comp.lang.ada.

Best wishes,

Duncan.


  1   2   >