Re: [PATCH, libmpx, i386, PR driver/65444] Pass '-z bndplt' when building dynamic objects with MPX

2015-03-18 Thread Robert Dewar
Do we really want to quote to this level? This message has 11 levels of 
quotes, the most I have ever seen. If everyone does this, the whole 
thread is in every message and that seems unnecessary. I don't know if 
there are gcc guidelines on this???


On 3/18/2015 9:59 AM, Ilya Enkovich wrote:

2015-03-18 16:52 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 6:41 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 16:31 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 6:24 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:42 GMT+03:00 Richard Biener richard.guent...@gmail.com:

On Wed, Mar 18, 2015 at 1:25 PM, H.J. Lu hjl.to...@gmail.com wrote:

On Wed, Mar 18, 2015 at 5:13 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:08 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 5:05 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:02 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 4:56 AM, Ilya Enkovich enkovich@gmail.com wrote:

Hi,

This patch fixes PR target/65444 by passing '-z bndplt' to linker when 
appropriate.  Bootstrapped and tested on x86_64-unknown-linux-gnu.  Will commit 
it to trunk in a couple of days if no objections arise.

Thanks,
Ilya
--
gcc/

2015-03-18  Ilya Enkovich  ilya.enkov...@intel.com

 PR driver/65444
 * config/i386/linux-common.h (MPX_SPEC): New.
 (CHKP_SPEC): Add MPX_SPEC.

libmpx/

2015-03-18  Ilya Enkovich  ilya.enkov...@intel.com

 PR driver/65444
 * configure.ac: Add check for '-z bndplt' support
 by linker. Add link_mpx output variable.
 * libmpx.spec.in (link_mpx): New.
 * configure: Regenerate.


diff --git a/gcc/config/i386/linux-common.h b/gcc/config/i386/linux-common.h
index 9c6560b..dd79ec6 100644
--- a/gcc/config/i386/linux-common.h
+++ b/gcc/config/i386/linux-common.h
@@ -59,6 +59,11 @@ along with GCC; see the file COPYING3.  If not see
   %:include(libmpx.spec)%(link_libmpx)
  #endif

+#ifndef MPX_SPEC
+#define MPX_SPEC \
+ "%{mmpx:%{fcheck-pointer-bounds:%{!static:%:include(libmpx.spec)%(link_mpx)}}}"
+#endif
+
  #ifndef LIBMPX_SPEC
  #if defined(HAVE_LD_STATIC_DYNAMIC)
  #define LIBMPX_SPEC \
@@ -89,5 +94,5 @@ along with GCC; see the file COPYING3.  If not see

  #ifndef CHKP_SPEC
  #define CHKP_SPEC \
-"%{!nostdlib:%{!nodefaultlibs:" LIBMPX_SPEC LIBMPXWRAPPERS_SPEC "}}"
+"%{!nostdlib:%{!nodefaultlibs:" LIBMPX_SPEC LIBMPXWRAPPERS_SPEC "}}" MPX_SPEC
  #endif
diff --git a/libmpx/configure.ac b/libmpx/configure.ac
index fe0d3f2..3f8b50f 100644
--- a/libmpx/configure.ac
+++ b/libmpx/configure.ac
@@ -40,7 +40,18 @@ AC_MSG_RESULT($LIBMPX_SUPPORTED)
  AM_CONDITIONAL(LIBMPX_SUPPORTED, [test x$LIBMPX_SUPPORTED = xyes])

  link_libmpx=-lpthread
+link_mpx=
+AC_MSG_CHECKING([whether ld accepts -z bndplt])
+echo "int main() {};" > conftest.c
+if AC_TRY_COMMAND([${CC} ${CFLAGS} -Wl,-z,bndplt -o conftest conftest.c 1>&AS_MESSAGE_LOG_FD])
+then
+AC_MSG_RESULT([yes])
+link_mpx="$link_mpx -z bndplt"
+else
+AC_MSG_RESULT([no])
+fi
  AC_SUBST(link_libmpx)
+AC_SUBST(link_mpx)



Without -z bndplt, MPX won't work correctly.  We should always pass -z bndplt
to the linker.  If the linker doesn't support it, ld will issue a warning, not
an error, and users will know their linker is too old.  When they update the
linker, they don't have to rebuild GCC.


If ld issues a warning instead of an error, then the configure test passes
and we pass '-z bndplt' to the linker.



Can you verify it with an older linker? An unknown XXX in '-z XXX' is always
warned about and ignored by the Linux linker.  If the test always passes on
Linux, it is useless.


Old ld issues a warning:

ld: warning: -z bndplt ignored.


Does configure test pass?


But gold issues an error:

ld.gold: bndplt: unknown -z option
ld.gold: use the --help option for usage information


If gold is used, MPX won't work.  What should we do here?
Should we hardcode -fuse-ld=bfd for MPX?


Is MPX disabled when the host linker is gold and gld isn't available?


No. You may use MPX with gold and old ld, but you would lose the passed
bounds when making a call via the PLT.



If gold is the default linker, the configure test will fail and we never pass
-z bndplt to the linker even if ld.bfd is available and ld.gold is fixed later.
I'd rather always pass -z bndplt to ld.


If gold is used and it doesn't support '-z bndplt', that doesn't
mean the user can't use MPX.


They can use -fuse-ld=bfd to select bfd linker if gold fails to generate
proper MPX binary.


Which is a weird thing to do just to have a warning instead of an
error. You don't guarantee MPX PLT generation by always passing '-z
bndplt', but you remove the opportunity to use gold at all. With the
current check you may use any linker and manually provide additional
options if you want to.

Ilya




--
H.J.




Re: [PATCH x86] Enable v64qi permutations.

2014-12-10 Thread Robert Dewar

On 12/10/2014 11:49 AM, Richard Henderson wrote:

On 12/04/2014 01:49 AM, Ilya Tocar wrote:

+  if (!TARGET_AVX512BW || !(d->vmode == V64QImode))


Please don't over-complicate the expression.
Use x != y instead of !(x == y).


To me the original reads more clearly, since it
is of the parallel form !X or !Y, I don't see it
as somehow more complicated???



r~





Re: [PATCH] doc/generic.texi: Fix typo

2014-08-31 Thread Robert Dewar

On 8/31/2014 4:49 PM, Gerald Pfeifer wrote:

On Fri, 29 Aug 2014, Mike Stump wrote:

These errors are on purpose.


Surprising that someone would not get this obvious clever joke.



-There are many places in which this document is incomplet and incorrekt.
+There are many places in which this document is incomplete or incorrect.


Since this now came up for the second time this year, I went ahead
and applied the patch below.


Seems a shame that anyone should need an explanation, but oh well :-)

P.S. my favorite instance of this kind of documentation is an early
IBM Fortran manual, which says that you should put exactly the character
you want to see come out on the printer [in some context], e.g. an I 
for an I and a 2 for a 2. :-)


Re: [Ada] Remove VMS specific files

2014-07-31 Thread Robert Dewar

There's a user's group that works with VMS engineering that wants to
keep using the C compiler, so let's keep the config files and non-Ada
specific C files.  Tristan and I will stay on as maintainers of the
cross port for now.



Why should we continue to maintain these?


Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 4:45 AM, Richard Sandiford wrote:


OK, this version drops the [enabled by default] altogether.
Tested as before.  OK to install?


Still a huge earthquake in terms of affecting test suites and
baselines of many users. Is it really worth it? In the case of
GNAT we have only recently started tagging messages in this
way, so changes would not be so disruptive, and we can debate
following whatever gcc does, but I think it is important to
understand that any change in this area is a big one in terms
of impact on users.


Thanks,
Richard


gcc/
* opts.c (option_name): Remove "enabled by default" rider.

gcc/testsuite/
* gcc.dg/gomp/simd-clones-5.c: Update comment for new warning message.

Index: gcc/opts.c
===
--- gcc/opts.c  2014-02-10 20:36:32.380197329 +
+++ gcc/opts.c  2014-02-10 20:58:45.894502379 +
@@ -2216,14 +2216,10 @@ option_name (diagnostic_context *context
return xstrdup (cl_options[option_index].opt_text);
  }
/* A warning without option classified as an error.  */
-  else if (orig_diag_kind == DK_WARNING || orig_diag_kind == DK_PEDWARN
-	   || diag_kind == DK_WARNING)
-    {
-      if (context->warning_as_error_requested)
-	return xstrdup (cl_options[OPT_Werror].opt_text);
-      else
-	return xstrdup (_("enabled by default"));
-    }
+  else if ((orig_diag_kind == DK_WARNING || orig_diag_kind == DK_PEDWARN
+	    || diag_kind == DK_WARNING)
+	   && context->warning_as_error_requested)
+    return xstrdup (cl_options[OPT_Werror].opt_text);
else
  return NULL;
  }
Index: gcc/testsuite/gcc.dg/gomp/simd-clones-5.c
===
--- gcc/testsuite/gcc.dg/gomp/simd-clones-5.c   2014-02-10 20:36:32.380197329 
+
+++ gcc/testsuite/gcc.dg/gomp/simd-clones-5.c   2014-02-10 21:00:32.549412313 
+
@@ -3,7 +3,7 @@

  /* ?? The -w above is to inhibit the following warning for now:
 a.c:2:6: warning: AVX vector argument without AVX enabled changes
-   the ABI [enabled by default].  */
+   the ABI.  */

  #pragma omp declare simd notinbranch simdlen(4)
  void foo (int *a)





Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 7:48 AM, Richard Sandiford wrote:


The patch deliberately didn't affect Ada's diagnostic routines given
your comments from the first round.  Calling this a huge earthquake
for other languages seems like a gross overstatement.


Actually it's much less of an impact for Ada for two reasons. First we
only just started tagging warnings. In fact we have only just released
an official version with the facility for tagging warnings.

Second, this tagging of warnings is not the default (that would have
been a big earthquake) but you have to turn it on explicitly.

But I do indeed think it will have a significant impact for users
of other languages, where this has been done for a while, and if
I am not mistaken, done by default?


I don't think gcc, g++, gfortran, etc, have ever made a commitment
to producing textually identical warnings and errors for given inputs
across different releases.  It seems ridiculous to require that,
especially if it stands in the way of improving the diagnostics
or introducing finer-grained -W control.

E.g. Florian's complaint was that we shouldn't have warnings that
are not under the control of any -W options.  But by your logic
we couldn't change that either, because all those [enabled by default]s
would become [-Wnew-option]s.


I am not saying you can't change it, just that it is indeed a big
earthquake. No of course there is no commitment not to make changes.
But you have to be aware that when you make changes like this, the
impact is very significant in real production environments, and
gcc is as you know extensively used in such environments.

What I am saying here is that this is worth some discussion on what
the best approach is.

Ideally indeed it would be better if all warnings were controlled by
some specific warning category. I am not sure a warning switch that
default-covered all otherwise uncovered cases (as suggested by one
person at least) would be a worthwhile approach.



Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 9:36 AM, Richard Sandiford wrote:

  I find it hard to believe that
significant numbers of users are not fixing the sources of those
warnings and are instead requiring every release of GCC to produce
warnings with a particular wording.


Good enough for me, I think it is OK to make the change.


Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:00 PM, Richard Sandiford wrote:

We print [-Wfoo] after a warning that was enabled by the -Wfoo option,
which is pretty clear.  But for warnings that have no -W option we just
print [enabled by default], which leads to the question of _what_ is
enabled by default.  As shown by:

http://gcc.gnu.org/ml/gcc/2014-01/msg00234.html

it invites the wrong interpretation for things like:

warning: non-static data member initializers only available with -std=c++11 
or -std=gnu++11 [enabled by default]

IMO the natural assumption is that gnu++11 is enabled by default, which is
how Lars also read it.

There seemed to be support for using warning enabled by default instead,
so this patch does that.  Tested on x86_64-linux-gnu.  OK to install?


Sounds like an earthquake patch from the point of view of test suite
baselines!


I'll post an Ada patch separately.


Will definitely have a big impact on the Ada test suite. Fine to
post the Ada patch (which is of course trivial as a patch), but
we will have to coordinate installing it with a pass through
test base lines.



Re: [Ada] Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:03 PM, Richard Sandiford wrote:

This switches Ada from using [enabled by default] to [warning enabled
by default] for consistency with:

   http://gcc.gnu.org/ml/gcc-patches/2014-02/msg00549.html

Tested on x86_64-linux-gnu.  OK if the above patch goes in?


I would say hold off on this until we can find the time
to coordinate updating our test suite, which we will do
as fast as possible.


Thanks,
Richard


gcc/ada/
* erroutc.adb (Output_Msg_Text): Use [warning enabled by default].
* err_vars.ads, errout.ads, gnat_ugn.texi: Update comments and
documentation accordingly.

Index: gcc/ada/erroutc.adb
===
--- gcc/ada/erroutc.adb 2014-02-09 20:02:00.971968883 +
+++ gcc/ada/erroutc.adb 2014-02-09 20:02:58.640471235 +
@@ -456,7 +456,7 @@ package body Erroutc is

if Warn and then Warn_Chr /= ' ' then
   if Warn_Chr = '?' then
-            Warn_Tag := new String'(" [enabled by default]");
+            Warn_Tag := new String'(" [warning enabled by default]");

   elsif Warn_Chr in 'a' .. 'z' then
             Warn_Tag := new String'(" [-gnatw" & Warn_Chr & ']');
Index: gcc/ada/err_vars.ads
===
--- gcc/ada/err_vars.ads2014-02-09 20:02:00.971968883 +
+++ gcc/ada/err_vars.ads2014-02-09 20:02:58.639471226 +
@@ -141,8 +141,8 @@ package Err_Vars is
 --  Setting is irrelevant if no  insertion character is present. Note
 --  that it is not necessary to reset this after using it, since the proper
 --  procedure is always to set it before issuing such a message. Note that
-   --  the warning documentation tag is always [enabled by default] in the
-   --  case where this flag is True.
+   --  the warning documentation tag is always [warning enabled by default]
+   --  in the case where this flag is True.

 Error_Msg_String : String (1 .. 4096);
 Error_Msg_Strlen : Natural;
Index: gcc/ada/errout.ads
===
--- gcc/ada/errout.ads  2014-02-09 20:02:00.971968883 +
+++ gcc/ada/errout.ads  2014-02-09 20:02:58.639471226 +
@@ -287,8 +287,8 @@ package Errout is

 --Insertion character ?? (Two question marks: default warning)
 --  Like ?, but if the flag Warn_Doc_Switch is True, adds the string
-   --  [enabled by default] at the end of the warning message. For
-   --  continuations, use this in each continuation message.
+   --  [warning enabled by default] at the end of the warning message.
+   --  For continuations, use this in each continuation message.

 --Insertion character ?x? (warning with switch)
 --  Like ?, but if the flag Warn_Doc_Switch is True, adds the string
Index: gcc/ada/gnat_ugn.texi
===
--- gcc/ada/gnat_ugn.texi   2014-02-09 20:02:00.971968883 +
+++ gcc/ada/gnat_ugn.texi   2014-02-09 20:02:58.644471270 +
@@ -5055,8 +5055,8 @@ indexed components, slices, and selected
  @cindex @option{-gnatw.d} (@command{gcc})
  If this switch is set, then warning messages are tagged, either with
  the string ``@option{-gnatw?}'' showing which switch controls the warning,
-or with ``[enabled by default]'' if the warning is not under control of a
-specific @option{-gnatw?} switch. This mode is off by default, and is not
+or with ``[warning enabled by default]'' if the warning is not under control
+of a specific @option{-gnatw?} switch. This mode is off by default, and is not
  affected by the use of @code{-gnatwa}.

  @item -gnatw.D





Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:09 PM, Arnaud Charlet wrote:

IMO the natural assumption is that gnu++11 is enabled by default, which is
how Lars also read it.

There seemed to be support for using warning enabled by default instead,
so this patch does that.  Tested on x86_64-linux-gnu.  OK to install?

I'll post an Ada patch separately.


FWIW this doesn't seem desirable to me, this will make the diagnostic longer.
For Ada this wouldn't really disambiguate things, and some users may be
dependent on the current format, so changing it isn't very friendly.

Arno


Can't we just reword the one warning where there is an ambiguity to
avoid the confusion, rather than creating such an earthquake, which,
as Arno says, really has zero advantages to Ada programmers, and clear
disadvantages ... to me [enabled by default] is already awfully long!


Re: [Ada] Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:10 PM, Richard Sandiford wrote:


Which testsuite do you mean?  I did test this with Ada enabled
and there were no regressions.

If you mean an external testsuite then I certainly don't mind
holding off the Ada part.  I hope the non-Ada part could still
go in without it though.


I mean many external test suites, many of our users maintain their
own test suites, and base lines for their codes, and any change like
this is very disruptive.



Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:23 PM, Richard Sandiford wrote:


can't we just reword the one warning where there is an ambiguity to
avoid the confusion, rather than creating such an earthquake, which
as Arno says, really has zero advantages to Ada programmers, and clear
disadvantages .. to me [enabled by default] is already awfully long!


Well, since the Ada part has been rejected I think we just need to
consider this from the non-Ada perspective.  And IMO there's zero
chance that each new warning will be audited for whether the
[enabled by default] will be unambiguous.  The fact that this
particular warning caused confusion and someone actually reported
it doesn't mean that there are no other warnings like that.  E.g.:

   -fprefetch-loop-arrays is not supported with -Os [enabled by default]

could also be misunderstood, especially if working on an existing codebase
with an existing makefile.  And the effect for:

   pragma simd ignored because -fcilkplus is not enabled [enabled by default]

is a bit unfortunate.  Those were just two examples -- I'm sure I could
pick more.


Indeed, worrisome examples,

a shorter substitute would be [default warning]

???


Thanks,
Richard





Re: [PATCH] Do not set flag_complex_method to 2 for C++ by default.

2014-01-07 Thread Robert Dewar

On 1/7/2014 8:46 PM, Andrew Pinski wrote:


Correctness over speed is better.  I am sorry GCC is the only one
which gets it correct here.  If people don't like it, there is a flag to
disable it.


Obviously in a case like this, it is the programmer who should
be able to decide between fast-and-acceptable and slow-and-accurate.
This is an old debate (e.g. consider Cray, who always went for the
fast-and-acceptable path, and was able to build machines that were
interestingly fast partly as a result of this philosophy).

So having a switch is not controversial

But then the question is, what should the default be. The trouble with
the slow-but-accurate is that many users will never know about the
switch, and will judge the compiler ONLY on the basis that it is slow,
without even knowing, noticing, or caring that it is more correct
than the competition.

We have seen gcc lose out in a number of head to head comparisons,
because GCC defaulted to -O0 (optimization really really off, and
don't care how horrible the code is) and the competition defaulted
to optimization turned on.

We even worked with one customer, and explained the issue, and they
said sorry, company procedures require us to run both compilers with
their default settings, since that is perceived as being fairer!
Their conclusion was that gcc was unacceptably inefficient and they
went with the competition.
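
To make the trade-off being argued here concrete, the following is a small C
sketch (an editorial illustration, not part of the thread) of the case that
C99-style checked complex division handles and the naive textbook formula
loses; the "checked" result assumes GCC's default lowering to the __divdc3
library routine (flag_complex_method == 2):

/* Compile as C99, e.g.: gcc -std=c99 example.c  (file name is arbitrary).  */
#include <complex.h>
#include <math.h>
#include <stdio.h>

int main (void)
{
  double _Complex num = 1.0 + 0.0 * I;
  double _Complex den = INFINITY + 0.0 * I;

  /* Textbook formula: (ac + bd)/(c*c + d*d) + i*(bc - ad)/(c*c + d*d).
     With an infinite denominator the parts become inf/inf and 0*inf,
     i.e. NaN, so the finite/infinite case is lost.  */
  double d = creal (den) * creal (den) + cimag (den) * cimag (den);
  double naive_re = (creal (num) * creal (den) + cimag (num) * cimag (den)) / d;
  double naive_im = (cimag (num) * creal (den) - creal (num) * cimag (den)) / d;

  /* The C99 Annex G rules require a finite value divided by an infinity
     to yield a zero.  */
  double _Complex checked = num / den;

  printf ("naive  : (%g, %g)\n", naive_re, naive_im);
  printf ("checked: (%g, %g)\n", creal (checked), cimag (checked));
  return 0;
}

With -fcx-limited-range (flag_complex_method == 0) the compiled division is
allowed to produce the NaN result as well; that is exactly the
speed-versus-correctness choice being debated above.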




You can say the same thing that people who find C is slower can use
the flag to disable it.

thanks,

David



Thanks,
Andrew Pinski



thanks,

David


On Wed, Nov 13, 2013 at 9:07 PM, Andrew Pinski pins...@gmail.com wrote:

On Wed, Nov 13, 2013 at 5:26 PM, Cong Hou co...@google.com wrote:

This patch is for PR58963.

In the patch http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html,
the builtin function is used to perform complex multiplication and
division. This is to comply with C99 standard, but I am wondering if
C++ also needs this.

There is no complex keyword in C++, and no content in C++ standard
about the behavior of operations on complex types. The complex
header file is all written in source code, including complex
multiplication and division. GCC should not do too much for them by
using builtin calls by default (although we can set -fcx-limited-range
to prevent GCC doing this), which has a big impact on performance
(there may exist vectorization opportunities).

In this patch flag_complex_method will not be set to 2 for C++.
Bootstrapped and tested on an x86-64 machine.


I think you need to look into this issue deeper as the original patch
only enabled it for C99:
http://gcc.gnu.org/ml/gcc-patches/2005-02/msg01483.html .

Just a little deeper will find
http://gcc.gnu.org/ml/gcc/2007-07/msg00124.html which says yes C++
needs this.

Thanks,
Andrew Pinski




thanks,
Cong


Index: gcc/c-family/c-opts.c
===
--- gcc/c-family/c-opts.c (revision 204712)
+++ gcc/c-family/c-opts.c (working copy)
@@ -198,8 +198,10 @@ c_common_init_options_struct (struct gcc
   opts->x_warn_write_strings = c_dialect_cxx ();
   opts->x_flag_warn_unused_result = true;

-  /* By default, C99-like requirements for complex multiply and divide.  */
-  opts->x_flag_complex_method = 2;
+  /* By default, C99-like requirements for complex multiply and divide.
+     But for C++ this should not be required.  */
+  if (c_language != clk_cxx && c_language != clk_objcxx)
+    opts->x_flag_complex_method = 2;
  }

  /* Common initialization before calling option handlers.  */
Index: gcc/c-family/ChangeLog
===
--- gcc/c-family/ChangeLog (revision 204712)
+++ gcc/c-family/ChangeLog (working copy)
@@ -1,3 +1,8 @@
+2013-11-13  Cong Hou  co...@google.com
+
+ * c-opts.c (c_common_init_options_struct): Don't let C++ comply with
+ C99-like requirements for complex multiply and divide.
+
  2013-11-12  Joseph Myers  jos...@codesourcery.com

   * c-common.c (c_common_reswords): Add _Thread_local.




Re: gcc's obvious patch policy

2013-11-26 Thread Robert Dewar

To me the issue is not what is written down about
the policy, but whether the policy works in practice,
and it seems like it does, so what's the problem?

This just seems to be making a problem where
none exists.


Re: RFA: patch to fix PR58967

2013-11-04 Thread Robert Dewar

On 11/4/2013 2:23 PM, Vladimir Makarov wrote:

The following patch fixes

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58967

The removed code is too old.  To be honest, I even don't remember why I
added this.  LRA has been changed a lot since this change and now it
works fine without it.


Whenever I see a comment like this, it reminds me to remind everyone
to comment your code! Do not assume you will remember why you wrote
what you wrote, so even if it is you who will look at your code, write
comments for yourself assuming you have totally forgotten!



Re: Copyright years for new old ports (Re: Ping^6: contribute Synopsys Designware ARC port)

2013-10-03 Thread Robert Dewar

On 10/3/2013 5:10 PM, Joseph S. Myers wrote:

On Wed, 2 Oct 2013, Joern Rennecke wrote:


 From my understanding, the condition for adding the current Copyright year
without a source code change is to have a release in that year.  Are we
sure 4.9.0 will be released this year?


release here includes availability of a development version in public
version control, as well as snapshots and non-FSF releases.  The effect is
that if the first copyright year in a GCC source file is 1987 or later, a
single range year-2013 can be used.



Just as a FYI, for the GNAT front end we have always used
year ranges, but we only update the year if we actually
modify a file.


Re: [x86, PATCH 2/2] Enabling of the new Intel microarchitecture Silvermont

2013-06-01 Thread Robert Dewar

On 6/1/2013 9:52 AM, Jakub Jelinek wrote:


Sorry for nitpicking, but there are various formatting issues.


A number of these formatting issues could be easily detected by
the compiler. It might be really useful to add a switch to do
such detection. For Ada, the GNAT compiler has -gnatyg which
enables standard style checking according to our coding
standards for Ada, and we find this saves a lot of time
as well as avoiding style errors getting into the code base
(this kind of nitpicking style error detection is more easily
done by a machine than a human). Of course not all style errors
can be easily handled, but a lot of them can!



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-09 Thread Robert Dewar

On 4/9/2013 5:39 AM, Florian Weimer wrote:

On 04/09/2013 01:47 AM, Robert Dewar wrote:

Well the back end has all the information to figure this out I think!
But anyway, for Ada, the current situation is just fine, and has
the advantage that the -gnatG expanded code listing clearly shows in
Ada source form, what is going on.


Isn't this a bit optimistic, considering that run-time overflow checking
currently does not use existing hardware support?


Not clear what you mean here, we don't rely on the back end for run-time
overflow checking. What is over-optimistic here?

BTW, existing hardware support can be a dubious thing, you have
to be careful to evaluate costs, for instance you don't want to
use INTO on modern x86 targets!






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

It may be interesting to look at what we have done in
Ada with regard to overflow in intermediate expressions.
Briefly we allow specification of three modes

all intermediate arithmetic is done in the base type,
with overflow signalled if an intermediate value is
outside this range.

all intermediate arithmetic is done in the widest
integer type, with overflow signalled if an intermediate
value is outside this range.

all intermediate arithmetic uses an infinite precision
arithmetic package built for this purpose.

In the second and third cases we do range analysis that
allows smaller intermediate precision if we know it's
safe.

We also allow separate specification of the mode inside
and outside assertions (e.g. preconditions and postconditions)
since in the latter you often want to regard integers as
mathematical, not subject to intermediate overflow.
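
The difference between the first two modes can be shown outside Ada as well;
here is a minimal C sketch (an added illustration, assuming 32-bit int and
64-bit long long) of why widening the intermediate type avoids a spurious
overflow:

#include <stdio.h>

int main (void)
{
  int a = 100000, b = 100000, c = 100000;

  /* Base-type intermediates: a * b does not fit in 32-bit int, so this
     grouping overflows (a Constraint_Error in the strict Ada mode,
     undefined behaviour in C) even though a * b / c is representable.  */
  /* int bad = a * b / c; */

  /* Widest-integer intermediates: carry the product in 64 bits and only
     narrow the final result, which fits the base type again.  */
  int good = (int) ((long long) a * b / c);

  printf ("%d\n", good);   /* prints 100000 */
  return 0;
}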


Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:15 AM, Kenneth Zadeck wrote:


I think this applies to Ada constant arithmetic as well.


Ada constant arithmetic (at compile time) is always infinite
precision (for float as well as for integer).



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:24 AM, Kenneth Zadeck wrote:


So then how does a language like ada work in gcc?   My assumption is
that most of what you describe here is done in the front end and by the
time you get to the middle end of the compiler, you have chosen types
for which you are comfortable to have any remaining math done in along
with explicit checks for overflow where the programmer asked for them.


That's right, the front end does all the promotion of types


Otherwise, how could ada have ever worked with gcc?


Sometimes we do have to make changes to gcc to accommodate Ada
specific requirements, but this was not one of those cases. Of
course the back end would do a better job of the range analysis
to remove some unnecessary use of infinite precision, but the
front end in practice does a good enough job.



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:23 AM, Kenneth Zadeck wrote:

On 04/08/2013 09:19 AM, Robert Dewar wrote:

On 4/8/2013 9:15 AM, Kenneth Zadeck wrote:


I think this applies to Ada constant arithmetic as well.


Ada constant arithmetic (at compile time) is always infinite
precision (for float as well as for integer).


What do you mean when you say constant arithmetic?   Do you mean
places where there is an explicit 8 * 6 in the source, or do you mean any
arithmetic that a compiler, using the full power of interprocedural
constant propagation, can discover?


Somewhere between the two. Ada has a very well defined notion of
what is and what is not a static expression, it definitely does not
include everything the compiler can discover, but it goes beyond just
explicit literal arithmetic, e.g. declared constants

   X : Integer := 75;

are considered static. It is static expressions that must be computed
with full precision at compile time. For expressions the compiler can
tell are constant even though not officially static, it is fine to
compute at compile time for integer, but NOT for float, since you want
to use target precision for all non-static float-operations.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:58 AM, Kenneth Zadeck wrote:


Yes, but the relevant question for the not officially static integer
constants is: in what precision are those operations to be performed?
I assume that you choose gcc types for these operations and you
expect the math to be done within that type, i.e. exactly the way you
expect the machine to perform it.


As I explained in an earlier message, *within* a single expression, we
are free to use higher precision, and we provide modes that allow this
up to and including the use of infinite precision. That applies not
just to constant expressions but to all expressions.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 10:26 AM, Kenneth Zadeck wrote:


My confusion is what you mean by we?   Do you mean we the writer of
the program, we the person invoking the compiler by the use of command
line options, or we, your company's implementation of Ada?


Sorry, bad usage. The gcc implementation of Ada allows the user to
specify by pragmas how intermediate overflow is handled.


My interpretation of your first email was that it was possible for the
programmer to do something equivalent to adding attributes surrounding a
block in the program to control the precision and overflow detection of
the expressions in the block.   And if this is so, then by the time the
expression is seen by the middle end of gcc, those attributes will have
been converted into tree code that will evaluate the code in a well defined
way by both the optimization passes and the target machine.


Yes, that's a correct understanding


Kenny





Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 5:12 PM, Lawrence Crowl wrote:

(BTW, you *really* don't need to quote entire messages, I find
it rather redundant for the entire thread to be in every message,
we all have thread following mail readers!)


Correct me if I'm wrong, but the Ada standard doesn't require any
particular maximum evaluation precision, but only that you get an
exception if the values exceed the chosen maximum.


Right, that's at run-time, at compile-time for static expressions,
infinite precision is required.

But at run-time, all three of the modes we provide are
standard conforming.


In essence, you have moved some of the optimization from the back
end to the front end.  Correct?


Sorry, I don't quite understand that. If you are saying that the
back end could handle this widening for intermediate values, sure
it could, this is the kind of thing that can be done at various
different places.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 5:46 PM, Kenneth Zadeck wrote:

In some sense you have to think in terms of three worlds:
1) what you call compile-time static expressions is one world which in
gcc is almost always done by the front ends.
2) the second world is what the optimizers can do.   This is not
compile-time static expressions because that is what the front end has
already done.
3) there is run time.

My view on this is that optimization is just doing what is normally done
at run time but doing it early.   From that point of view, we are, if not
required, morally obligated to do things in the same way that the
hardware would have done them.  This is why I am so against richi on
wanting to do infinite precision.  By the time the middle or the back
end sees the representation, all of the things that are allowed to be
done in infinite precision have already been done.   What we are left
with is a (mostly) strongly typed language that pretty much says exactly
what must be done. Anything that we do in the middle end or back ends in
infinite precision will only surprise the programmer and make them want
to use llvm.


That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 6:34 PM, Mike Stump wrote:

On Apr 8, 2013, at 2:48 PM, Robert Dewar de...@adacore.com wrote:

That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.


gcc lacks an infinite precision plus operator?!  :-)


Right, that's why we do everything in the front end in the
case of Ada. But it would be perfectly reasonable for the
back end to do this substitution.


Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 7:46 PM, Kenneth Zadeck wrote:


On 04/08/2013 06:45 PM, Robert Dewar wrote:

On 4/8/2013 6:34 PM, Mike Stump wrote:

On Apr 8, 2013, at 2:48 PM, Robert Dewar de...@adacore.com wrote:

That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.


gcc lacks an infinite precision plus operator?!  :-)


Right, that's why we do everything in the front end in the
case of Ada. But it would be perfectly reasonable for the
back end to do this substitution.

but there is no way in the current tree language to convey which ones
you can and which ones you cannot.


Well the back end has all the information to figure this out I think!
But anyway, for Ada, the current situation is just fine, and has
the advantage that the -gnatG expanded code listing clearly shows in
Ada source form, what is going on.






Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 1:56 PM, Mike Stump wrote:

I've noticed that:

$ grep -l '^M' gcc/testsuite/gnat.dg/*
discr36.ads
discr36_pkg.adb
discr36_pkg.ads
discr38.adb
loop_optimization11.adb
loop_optimization11_pkg.ads
loop_optimization13.adb
loop_optimization13.ads

:-(  Surely these are just normal text files, right?  Can I strip the ^M from 
them?



Probably good to have some tests with standard CR/LF terminators, since 
this is what a lot of the world uses.


Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:09 PM, Mike Stump wrote:

On Dec 7, 2012, at 10:57 AM, Robert Dewar de...@adacore.com wrote:

On 12/7/2012 1:56 PM, Mike Stump wrote:

I've noticed that:

$ grep -l '^M' gcc/testsuite/gnat.dg/*
discr36.ads
discr36_pkg.adb
discr36_pkg.ads
discr38.adb
loop_optimization11.adb
loop_optimization11_pkg.ads
loop_optimization13.adb
loop_optimization13.ads

:-(  Surely these are just normal text files, right?  Can I strip the ^M from 
them?



Probably good to have some tests with standard CR/LF terminators, since this is 
what a lot of the world uses.


Then, to preserve them, the files must be tagged as binary in svn and git.  
Doing so will probably make the normal file merging that git/svn would do, 
inoperative.

Ok to so tag all the files?


probably not worth it if it causes that disruption. svn certainly 
handles CR/LF terminators fine, I guess Git does not?






Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:16 PM, Mike Stump wrote:


Yes, you can strip them, no problem.


Since emails likely crossed paths….  I'm going to give you and Robert a chance 
to figure out what you'd like to do…  I _only_ care about consistency between 
contents as seen from svn and git.  Stripping ^M can do this, as can marking 
them as binary.  So marking them, ensures that the ^Ms are always there, both 
on ^M systems and non-^M systems.

So, after hashing it how, let me know the final verdict.  Thanks.


I would strip the CR's, not a big deal, and not worth worrying about.






Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:50 PM, Arnaud Charlet wrote:


Anyway, I'll let Robert have the final word on this one.

I'm fine with either solution (converting to LF, or marking files binary,
or a mix of both).

Arno



I would convert to LF, I think it causes less confusion


Re: patch to fix constant math

2012-10-08 Thread Robert Dewar

On 10/8/2012 11:01 AM, Nathan Froyd wrote:

- Original Message -

Btw, as for Richard's idea of conditionally placing the length field
in
rtx_def looks like overkill to me.  These days we'd merely want to
optimize for 64bit hosts, thus unconditionally adding a 32 bit
field to rtx_def looks ok to me (you can wrap that inside a union to
allow both descriptive names and eventual different use - see what
I've done to tree_base)


IMHO, unconditionally adding that field isn't optimize for 64-bit
hosts, but gratuitously make one of the major compiler data
structures bigger on 32-bit hosts.  Not everybody can cross-compile
from a 64-bit host.  And even those people who can don't necessarily
want to.  Please try to consider what's best for all the people who
use GCC, not just the cases you happen to be working with every day.


I think that's reasonable in general, but as time goes on, and every
$300 laptop is 64-bit capable, one should not go TOO far out of the
way trying to make sure we can compile everything on a 32-bit machine.
After all, we don't try to ensure we can compile on a 16-bit machine
though when I helped write the Realia COBOL compiler, it was a major
consideration that we had to be able to compile arbitrarily large
programs on a 32-bit machine with one megabyte of memory. That was
achieved at the time, but is hardly relevant now!
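
To see what the disputed 32-bit field costs in practice, a small C sketch
(an editorial illustration; the struct is hypothetical, not the real rtx_def
layout): on a typical LP64 host the new field often hides in existing
padding, while on a 32-bit host it genuinely grows the structure.

#include <stdio.h>

struct node_old { void *op; unsigned short code; unsigned short flags; };
struct node_new { void *op; unsigned short code; unsigned short flags;
                  unsigned int len; };

int main (void)
{
  /* Typical results: 16 and 16 bytes on LP64, 8 and 12 bytes on ILP32
     (exact sizes are implementation-defined).  */
  printf ("old: %zu bytes, new: %zu bytes\n",
          sizeof (struct node_old), sizeof (struct node_new));
  return 0;
}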



Re: [patch][lra] Comment typo fix

2012-10-01 Thread Robert Dewar

On 10/1/2012 6:09 PM, Steven Bosscher wrote:

I suppose no-one would object if I commit this as obvious at some point?

Index: lra-constraints.c
===
--- lra-constraints.c   (revision 191858)
+++ lra-constraints.c   (working copy)
@@ -4293,7 +4293,7 @@ update_ebb_live_info (rtx head, rtx tail
 {
   if (prev_bb != NULL)
 {
- /* Udpate DF_LR_IN (prev_bb):  */
+ /* Update DF_LR_IN (prev_bb):  */
   EXECUTE_IF_SET_IN_BITMAP (check_only_regs, 0, j, bi)
 if (bitmap_bit_p (live_regs, j))
   bitmap_set_bit (DF_LR_IN (prev_bb), j);



took me a few readings to see the change you had made, amazing how
the brain reads what it expects to see :-)


Re: [CPP] Add pragmas for emitting diagnostics

2012-09-26 Thread Robert Dewar

On 9/26/2012 4:19 PM, Tom Tromey wrote:

Florian == Florian Weimer fwei...@redhat.com writes:


Florian This patch adds support for #pragma GCC warning and #pragma GCC
Florian error. These pragmas can be used from preprocessor macros,
Florian unlike the existing #warning and #error directives.  Library
Florian authors can use these pragmas to add deprecation warnings to
Florian macros they define.

I'm not sure if my libcpp review powers extend to an extension like
this.

It seems reasonable to me though.


To me too, these correspond to the Compile_Time_Warning and 
Compile_Time_Error in Ada, and are definitely very useful!
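
A minimal sketch of the intended use (an added example; old_api/new_api are
made-up names): unlike #warning, which fires where the macro is defined, the
pragma can be embedded in a macro via _Pragma and is reported at each point
of use.

#include <stdio.h>

int new_api (int x) { return x + 1; }

/* Deprecated wrapper: expanding it emits the diagnostic at the call site.  */
#define old_api(x) \
  (_Pragma ("GCC warning \"old_api is deprecated, use new_api\"") new_api (x))

int main (void)
{
  int r = old_api (1);   /* warning: old_api is deprecated, use new_api */
  printf ("%d\n", r);
  return 0;
}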




Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.


Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling
-O1 or -O2, that if I want to debug something I have to recompile
with -O0 -g, which causes quite a bit of code generation change :-)



Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 9:38 AM, Jakub Jelinek wrote:

On Thu, Sep 13, 2012 at 09:33:20AM -0400, Robert Dewar wrote:

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.


Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling


It is.  IMHO the most important reason is not that somebody would build
first with just -O2 and then later on to debug the code would build it again
with -g -O2 and hope the code is the same, but by making sure -g vs. -g0
doesn't change generated code we ensure -g doesn't pessimize the generated
code, and really many people compile even production code with -g -O2
or similar.  The debug info is then either stripped, or stripped into
separate files/not shipped or only optionally shipped with the product.

Jakub


Sure, it is obvious that you don't want -g to affect -O1 or -O2 code,
but I think if you have -Og (if and when we have that), it would not
be a bad thing for -g to affect that. I can even imagine that what
-Og means is -O1 if you don't have -g, and something good for
debugging if you do have -g.


Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 12:07 PM, Xinliang David Li wrote:

It is very important to make sure -g does not affect code gen ---
people do release build with -g with optimization, and strip the
binary before sending it to production machines ..


Yes, of course, and for sure -g cannot affect optimized code, see
my follow on message.


David

On Thu, Sep 13, 2012 at 6:33 AM, Robert Dewar de...@adacore.com wrote:

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.



Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling
-O1 or -O2, that if I want to debug something I have to recompile
with -O0 -g, which causes quite a bit of code generation change :-)





Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 12:46 PM, Tom Tromey wrote:

Robert == Robert Dewar de...@adacore.com writes:


Robert Sometimes I wonder whether the insistence on -g not changing code
Robert generation is warranted. In practice, gdb for me is so weak in handling
Robert -O1 or -O2, that if I want to debug something I have to recompile
Robert with -O0 -g, which causes quite a bit of code generation change :-)

If those are gdb bugs, please file them.


Well I think everyone knows about the failings of gdb in -O1 mode, they
have been much discussed, and they are not really gdb bugs, more an
issue of it being basically hard to debug optimized code. Things used
to be a LOT better, I routinely debugged code at -O1, but then the
compiler got better at optimization, and things deteriorated so much
at -O1 that now I don't even attempt it.


Tom





Re: Allow use of ranges in copyright notices

2012-07-02 Thread Robert Dewar

On 7/2/2012 8:35 AM, Alexandre Oliva wrote:

On Jun 30, 2012, David Edelsohn dje@gmail.com wrote:


IBM's policy specifies a comma:



first year, last year



and not a dash range.


But this notation already means something else in our source tree.



I think using the dash is preferable, and is a VERY widely used
notation, used by all major software companies I deal with!



Re: [PATCH] Improved re-association of signed arithmetic

2012-05-18 Thread Robert Dewar

On 5/18/2012 4:27 PM, Ulrich Weigand wrote:


I finally got some time to look into this in detail.  The various special-
case transforms in associate_plusminus all transform a plus/minus expression
tree into either a single operand, a negated operand, or a single plus or
minus of two operands.  This is valid as long as we can prove that the
newly introduced expression can never overflow (if we're doing signed
arithmetic).


It's interesting to note that for Ada, reassociation is allowed if there
are no overriding parens, even if it would introduce an overflow
(exception) that would not occur otherwise. However, I think I prefer
the C semantics!
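
A small C sketch of the hazard Ulrich describes (an added illustration):
regrouping a signed plus/minus chain can introduce an intermediate overflow
that the original grouping did not have, which is why the transform must
prove the rewritten expression cannot overflow.

#include <limits.h>
#include <stdio.h>

int main (void)
{
  int a = INT_MAX;

  /* As written, (a - 1) + 1 stays inside the range of int.  */
  int ok = (a - 1) + 1;

  /* Reassociated to (a + 1) - 1, the intermediate a + 1 overflows:
     undefined behaviour in C, Constraint_Error in Ada's strict mode.  */
  /* int bad = (a + 1) - 1; */

  printf ("%d\n", ok);
  return 0;
}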


Re: Use sed -n … instead of sed s/…/p -e d in s-header-vars

2012-05-15 Thread Robert Dewar

On 5/14/2012 11:22 PM, Hans-Peter Nilsson wrote:


Random non-maintainer comments: I'd suggest adding a nearby
comment to avoid a future edit changing it back.  The attachment
with the patch had the mime-type Video/X-DV, maybe indicating
an issue with your mail-client setup mismatching the .dif
filename suffix.


As always, comments about what you didn't do and why you
didn't do it, are often the most important (and note that
code can never be self-documenting in this regard :-))


brgds, H-P




Re: Ada testcase CR line endings

2012-04-10 Thread Robert Dewar

On 4/10/2012 1:35 AM, Mike Stump wrote:

So, I'd like to change all the ada testcases to use normal unix line endings.

testsuite/gnat.dg/taft_type2_pkg.ads

is an example if one such file, any objections?


As long as the test is not about line endings this seems fine.


Re: [patch, committed] invoke.texi: big-endian, little-endian

2012-02-17 Thread Robert Dewar

On 2/17/2012 8:00 PM, Sandra Loosemore wrote:

I've checked in this patch to consistently hyphenate big-endian and
little-endian when used as adjectives.  I observe that Jonathan Swift
also hyphenated Big-endians when used as a noun in Gulliver's Travels,
but I did not see any uses of either term as a noun in invoke.texi, so
thankfully I did not have to deal with that case.  ;-)


I can't see any legitimate case of the terms referring to bit ordering
being nouns, they are always adjectives. In Swift, when it is used as
a noun, it refers to the followers, and that just doesn't apply to the
computer use.

Odd note of interest, the project to build a big-endian version of
Windows was called Hiawatha (= big indian) :-)


Re: [PATCH, alpha]: Default to full IEEE compliance mode for Go language.

2012-02-04 Thread Robert Dewar

On 2/4/2012 10:06 AM, Gerald Pfeifer wrote:

On Sun, 29 Jan 2012, Robert Dewar wrote:

* config/alpha/alpha.c (alpha_option_overrride): Default to
full IEEE compliance mode for Go language.

It's always worrisome for gcc based languages to default to horrible
performance, it means that many benchmarks will be run only with this
horrible performance.


Yes, but this is alpha-specific, and I don't think relevant benchmarks
are still done on that platform. ;-)


We certainly still have customers using that platform who are definitely
concerned with performance, and not willing to step forward to ia64 as
the official successor (if they are using VMS they have no other choice).


Gerald




Re: [PATCH] invoke.texi: compile time, run time cleanup

2012-01-30 Thread Robert Dewar

On 1/28/2012 12:05 PM, Sandra Loosemore wrote:

I'm specifically asking for review of this patch by one of the docs
maintainers before checking it in, since it seems not everyone agrees
that these copyediting patches qualify as obvious.  In this particular
chunk, I had to make some judgment calls, too.

We usually use compile time, link time, and run time to refer to
the times at which the program is compiled, linked, and run,
respectively.  So, the normal English rules about hyphenation apply
here; it's correct to say at run time, but run-time behavior, for
example.

To confuse matters, though, compile time can also be used as a noun
meaning the amount of time it takes to compile.  I saw that in some
places invoke.texi was already using compilation time for this meaning
instead and decided it made sense to apply that terminology uniformly.
Likewise execution time was already being used in some places for the
amount of time it takes the program to run, so I made that usage
consistent.  Confusingly, there were also a couple of places referring
to run time of a compilation pass; I reworded those to refer to
compilation time in that pass.

Finally, runtime (without a space or hyphen) is commonly used as a
noun to refer collectively to the startup code, libraries, trap
handlers, bootloader or operating system, etc that are present on the
target at run time.  In the Objective-C section of this document there's
a lot of existing discussion of the GNU runtime versus the NeXT
runtime, for example, and other option descriptions refer to the
simulator runtime.  Google shows a lot of variation on usage here (and
Wikipedia, in particular, is quite confused on this topic), but to me it
seems silly to make the adjective form of runtime hyphenated when it's
already a single word as a noun.  So, I decided to go consistently with
runtime support, runtime library, etc when discussing what the
runtime includes.

OK to check in?

-Sandra


I am still inclined to go with runtime everywhere, to ease searching and
avoid everyone having to understand the above set of rules, which seem
over-delicate to me (admitting all three forms, with tricky rules as
to which of the three is used, will ensure that people make constant
mistakes).

In particular

runtime support
run-time behavior

seems too tricky a distinction. I am not clear on what the exact rule
that you have in mind here is, and I am absolutely sure that it will
not be practical to expect the community to absorb and follow this rule.

The adjective vs noun rule is one thing, but having a rule that
delicately distinguishes two forms of adjectival use is a step
too far for me.

At the very least, you need to state the rule you have invented
here, rather than just give a few usage based examples.

I also find the distinction between compile time and compilation
time tricky to expect to be used consistently, although it is a
useful distinction if

a) enforced carefully
b) documented in a glossary

whereas the distinction between the two adjectival forms
runtime and run-time seems to have no technical value.


Re: [committed] invoke.texi: fix hyphenation of floating point and related terms

2012-01-30 Thread Robert Dewar

On 1/28/2012 11:33 AM, Sandra Loosemore wrote:


Sometimes the best idea is to just drop the hyphen completely. It
seems for example (try google) that runtime is becoming much more
accepted than run-time or run time.


Coincidentally, runtime is the subject of my next patch chunk, and I
had to make some judgment calls there.  I'll post it for review by one
of the docs maintainers instead of just checking it in.


I am in favor of dropping the hyphen in all cases for runtime, given
the clear lead from Microsoft and Sun. CMOS really is not the right
place to look for appropriate contemporary technical usage. If you
google around, you will find that at this stage runtime is getting
to be used widely, and the most notable uses:

Java Runtime Environment (Sun/Oracle)
Runtime for all Microsoft components

Are clearly influential

As for floating-point, it just makes sense to be consistent instead
of following the mandated CMOS inconsistent style, because it eases
searches. Remember CMOS was written when people had never heard of
computer searching, but the widespread use of searches argue for a
more consistent style. I don't see that hyphenating floating-point
as a noun can ever be confusing.

Yes, it will jar some people who have been taught a particular
grammar rule, just as it jars people to hear Shakespeare using
between you and I :-) But I think it is reasonable to take a
more pragmatic approach in the context of software documentation.

Another point is that if you choose a simple consistent rule,
rather than a more complex inconsistent rule, it is much easier
to get people to follow it, and much easier to correct it when
they fail to do so.

Otherwise, if you have a more complex rule, people will keep
getting it wrong, resulting in noise patches fixing the problems
(I use noise here in the sense that such patches have no technically
relevant content, but nevertheless have to be taken into account
in keeping sources up to date).

Anyway, interesting to see what others think! :-)


Re: [PATCH, alpha]: Default to full IEEE compliance mode for Go language.

2012-01-29 Thread Robert Dewar

On 1/29/2012 3:40 PM, Richard Henderson wrote:

On 01/30/2012 05:22 AM, Uros Bizjak wrote:

2012-01-29  Uros Bizjakubiz...@gmail.com

* config/alpha/alpha.c (alpha_option_overrride): Default to
full IEEE compliance mode for Go language.


I'm not keen on this, but I also don't have an alternative to suggest.

Ok.


It's always worrisome for gcc based languages to default to horrible
performance, it means that many benchmarks will be run only with this
horrible performance.

We have seen instances in which GNAT performs poorly in benchmarks
because it is run with -O0, and competing compilers default to
something more similar to -O1. In one case, when we pointed this
out, the response was that company mandated policies insisted on
all benchmarks being run with default options.


Re: [committed] invoke.texi: fix hyphenation of floating point and related terms

2012-01-28 Thread Robert Dewar

On 1/27/2012 10:57 PM, Sandra Loosemore wrote:


I've checked in this patch as obvious.  (Again, if anyone thinks these
kinds of edits are not obvious, let me know, and I'll start posting them
for review first instead.)


Following these dubious hyphenation rules slavishly is not a good idea.
It makes searching more erratic. I recommend never hyphenating command
line, and always hyphenating floating-point.

Sometimes the best idea is to just drop the hyphen completely. It
seems for example (try google) that runtime is becoming much more
accepted than run-time or run time.


-Sandra


2012-01-28  Sandra Loosemoresan...@codesourcery.com

gcc/
* doc/invoke.texi: Correct hyphenation of floating point,
double precision, and related terminology throughout the file.






Re: [ada] Fix bootstrap error in s-taprop-tru64.adb

2011-11-23 Thread Robert Dewar

On 11/23/2011 7:31 AM, Rainer Orth wrote:

Tru64 UNIX Ada bootstrap recently got broken:

s-taprop.adb:892:12: access to volatile object cannot yield 
access-to-non-volatile type
make[6]: *** [s-taprop.o] Error 1

s-taprop-tru64.adb missed a patch already applied to s-taprop-{irix,
solaris}.adb.  With that change, the bootstrap continues and
libgnat-4.7.so built.

Ok for mainline?


Yes, this is fine


Re: hash policy patch

2011-09-17 Thread Robert Dewar

On 9/17/2011 5:38 AM, Paolo Carlini wrote:

On 09/17/2011 11:27 AM, François Dumont wrote:

Paolo, I know that using float equality comparison is not reliable in
general and I have removed the suspicious line, but in this case I can't
imagine a system where it could fail.

As a general policy, in the testsuite we should never assert equality of
floating point quantities, sooner or later that would byte us, and very
badly (just search our Bugzilla or the web if you are not convinced).
And, given that, I don't think we should waste time figuring out whether
in specific cases, for specific machines, actually it would be safe to
do it.


An absolute rule of this kind makes me a bit nervous. There are
perfectly legitimate algorithms that assume IEEE arithmetic and
expect and should get absolute equality, and as long as the test
is restricted to IEEE, it seems quite reasonable to have equality
checks.

If you are creating a set of records where a unique floating-point
value is the key, that's another case where equality comparison
is reasonable.

Finally

   if x = x

is a reasonable test for not being a NaN
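
A short C sketch of the two legitimate exact-comparison cases mentioned
above (an added illustration, assuming IEEE arithmetic):

#include <math.h>
#include <stdio.h>

int main (void)
{
  /* x == x is false only for a NaN, so x != x is a portable NaN test.  */
  double x = NAN;
  printf ("x is NaN: %d\n", x != x);

  /* Exact equality is also fine for a value used as a key, provided every
     lookup uses the bit-identical value that was stored.  */
  double key = 0.1;                 /* not exactly 0.1, but consistently so */
  printf ("key matches: %d\n", key == 0.1);
  return 0;
}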


Re: [Ada] Speed up build of gnatools

2011-09-06 Thread Robert Dewar

On 9/6/2011 7:14 AM, Duncan Sands wrote:


this means using as many processes as there are CPUs, right?  It
seems pretty dubious to me to use more processes than the user maybe
asked for.


We often find that the optimum number of processes is a little bit more
than the number of physical processors (not surprising when there is
mixed I/O and computation going on).


For example I have to restrict the number of CPUs used when building
GCC to less than I have since otherwise my machine overheats and
turns itself off.


That seems a (pretty disastrous) engineering error in the design of
your machine. In a properly designed machine, extra fans should come
on to counteract the extra heating (that's certainly what happens on
my Toshiba R700, which has the new Core i7-2620M with four cores).


Is there some way to get at the -j level the user passed to the
top-level make and use that?


I am pretty sure you can specify any -j value you like, but I will
let Arno clarify that.


Re: [PATCH] Fix Ada bootstrap failure

2011-09-02 Thread Robert Dewar

On 9/2/2011 8:52 AM, Arnaud Charlet wrote:


Thanks!

In Ada, it's quite natural to end up with a dynamically sized object of
size 0. For instance, if you declare an array with a dynamic bound:

Table : Unit_Table (1 .. Last_Unit);

and Last_Unit happens to be 0 at run-time

Arno


But isn't it odd that we would dereference such an address?


Re: [PATCH] Fix Ada bootstrap failure

2011-09-02 Thread Robert Dewar

On 9/2/2011 8:58 AM, Arnaud Charlet wrote:

In Ada, it's quite natural to end up with a dynamically sized object of
size 0. For instance, if you declare an array with a dynamic bound:

Table : Unit_Table (1 .. Last_Unit);

and Last_Unit happens to be 0 at run-time


But are we expected to read from or store to that storage?


No, that shouldn't happen, although you can e.g. reference Table'Address
and expect it to be non-null.


Actually, I am not sure of this. I discussed this with Bob; Address
is defined as pointing to the first storage unit allocated for an
object. It is not clear what this means when the object has no storage
units. This is a gap in the RM. Bob's view is that it must return
some random valid address (though what exactly *is* a valid address?).



I'd have
expected that alloca (0) returning NULL shouldn't break
anything at runtime ...


Not sure exactly what failed here; probably something relatively subtle
(perhaps related to passing this variable, or a slice of it, to another
procedure).


But that wouldn't cause a dereference; however, it might cause an
explicit test that the argument was not null, and perhaps that's
what is causing the trouble.

For example, if you have something like

type S is array (1 .. N) of Character;
type P is access all S;
B : aliased S;

procedure Q (A : not null P) is
begin
   null;
end Q;

Q (B'Access);

Then there will be an explicit check that the access value passed for A is not null






Arno




Re: [PATCH] Fix Ada bootstrap failure

2011-09-02 Thread Robert Dewar

On 9/2/2011 11:47 AM, Michael Matz wrote:

Hi,

On Fri, 2 Sep 2011, Robert Dewar wrote:


On 9/2/2011 9:16 AM, Richard Guenther wrote:

Might be interesting to pursue, but we don't know that the null pointers
being dereferenced are in fact the ones returned by alloca. May not be
worth the effort.


Given the nature of the work-around which makes Ada work again, it's fairly
sure that the Ada frontend does emit accesses to an alloca'ed area of
memory even if its size is zero.  I.e., definitely a real bug.


Maybe so, but I gave a scenario (there are others) in which exceptions
are legitimately raised without dereferencing the pointer. Once an
exception is raised, all sorts of funny things can happen (e.g.
tasks silently terminating if they have no top-level exception
handler), so you can't make that direct conclusion.

I guess if you made alloca(0) return a junk non-dereferenceable
address, *that* would be definitive.
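
A hedged C sketch of that diagnostic idea (an illustration only, not the
fix under discussion): hand back an address inside a PROT_NONE page for
zero-size requests, so any dereference faults immediately. The helper and
macro names are invented for this sketch.

   #define _DEFAULT_SOURCE        /* for MAP_ANONYMOUS */
   #include <alloca.h>
   #include <stddef.h>
   #include <sys/mman.h>
   #include <unistd.h>

   /* Return an address that faults on any read or write.  */
   static void *
   poison_address (void)
   {
     static void *page;
     if (page == NULL)
       {
         page = mmap (NULL, (size_t) sysconf (_SC_PAGESIZE), PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
         if (page == MAP_FAILED)
           page = NULL;   /* mapping failed; fall back below */
       }
     /* Assumes low addresses are unmapped if the PROT_NONE mapping fails.  */
     return page != NULL ? page : (void *) 64;
   }

   /* Kept as a macro so alloca still allocates in the caller's frame.  */
   #define CHECKED_ALLOCA(n) ((n) == 0 ? poison_address () : alloca (n))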



Ciao,
Michael.




Re: [patch libstdc++]: Add some missing errno-constants for mingw-targets

2011-08-29 Thread Robert Dewar

On 8/29/2011 4:50 PM, Paolo Carlini wrote:

.. also, you forgot to add 2011 to the Copyright years.

Paolo.


In the GNAT development environment we have a style-checking filter
on SVN checkins, and this is one of the things it checks for, so we
prevent any checkin whose copyright notice is missing the current
year.


Re: [Ada] Expansion of Ada2012 predicate checks for type conversions

2011-08-18 Thread Robert Dewar

On 8/18/2011 5:33 AM, Arnaud Charlet wrote:

2011-08-05  Ed Schonberg  schonb...@adacore.com

* exp_ch4.adb (Expand_N_Type_Conversion): When expanding a predicate
check, indicate that the copy of the original node does not come from
source, to prevent an infinite recursion of the expansion.


For ChangeLog entries we usually, per the GNU Coding Conventions,
do not provide the why, just the what.


Yes, we know about this and we will unfortunately have to disagree on this
one: at AdaCore, and for GNAT development, we very strongly believe that
mentioning the why is much more useful than just the what, and we insist on
doing so in our changelogs rather than having to refer to separate emails
to understand a change.

Having such detailed changelogs is very useful in practice for maintaining
and modifying code; at least, that is our experience.

So, in other words, we find this GNU Coding Convention a bad practice and
intentionally insist on not following it.


To add to this a bit: we do agree that having the why only in the
changelog and not in the code is a bad idea. Indeed, it is critical
that the code contain full comments (often, for instance, the critical
comments are about what you are NOT doing and why; this is one respect
in which code can never be self-documenting).

I fully understand the concern about people using revision histories
as a substitute for proper code comments. We never let that happen in
the GNAT case; part of our review process ensures that any missing
comments in the source get fixed before an FSF checkin.


Arno




Re: [RFA/libiberty] Darwin has case-insensitive filesystems

2011-06-15 Thread Robert Dewar

On 6/15/2011 5:58 AM, Mark Kettenis wrote:


Over my dead body.  On a proper operating system filenames are
case-sensitive.  Your suggestion would create spurious matches.


Yes, we all know that Unix systems chose case sensitivity, and
are happy to have files differing only by case in the same
directory.

Obviously any proper software has to fully support such
systems (if I were in the same mode as you and adding
gratuitous flames to my comments, I would have preceded
the word "systems" with "brain-dead").


Even on case-preserving filesystems I'd argue that treating them as
case-sensitive is still the right approach.


Absolutely not; please don't visit your Unix-born prejudices
on non-Unix systems. There is nothing worse for Windows users
than having to put up with silly decisions like this that
visit Unix nonsense (and it is nonsense in a Windows environment)
on Windows software.


If that creates problems,
it means somebody was sloppy and didn't type the proper name of the
file


The whole point of a system like Windows, which is case-preserving
but not case-sensitive, is that you are NOT expected to type in
the proper capitalization. In English, we recognize the words
English and ENGLISH as equivalent, and Windows users expect the
same treatment.

So the normal expectation on Windows systems is that, yes, you can
use nice capitalization like MyFile if you like, and it will be
properly displayed.

But any software that requires me to type MyFile rather than
myfile is junk!
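
To make that model concrete, here is a small hedged C sketch of
case-preserving matching; the HOST_CASE_INSENSITIVE macro is invented
for the illustration and is not an existing GCC or libiberty flag:

   #include <string.h>
   #include <strings.h>   /* strcasecmp (POSIX) */

   /* Compare two filenames the way the host filesystem does:
      "MyFile" and "myfile" match on a case-preserving,
      case-insensitive host; elsewhere only exact matches count.  */
   static int
   filenames_match (const char *a, const char *b)
   {
   #ifdef HOST_CASE_INSENSITIVE
     return strcasecmp (a, b) == 0;
   #else
     return strcmp (a, b) == 0;
   #endif
   }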


If you're still using an operating system with fully case-insensitive
filesystems, I feel very, very sorry for you.


You are allowed to have this opinion; I feel the same about people
who have to tolerate case-sensitive file systems, but I am quite
happy with software for Unix systems behaving the Unix way (I would
agree that any software that EVER did case-insensitive matching on
Unix, as suggested earlier in this thread, would be broken on Unix).
But following your suggestion would be equally broken on Windows.


 or some piece of code in the toolchain arbitrarily changed the
case of a filename.  I don't mind punishing people for that.  They
have to learn that on a proper operating system file names are
case-sensitive!


This kind of Unix arrogance leads to junk, unusable software on
Windows. It's really important not to visit your Unix prejudices
on Windows users. After all, we feel the same way in return: I
find Unix systems complete junk for many reasons, one of which
is the very annoying case-sensitive viewpoint, but I do not
translate my feelings into silly suggestions for making
software malfunction on Unix. You should not make this mistake
in the reverse direction.