Re: [PATCH, libmpx, i386, PR driver/65444] Pass '-z bndplt' when building dynamic objects with MPX

2015-03-18 Thread Robert Dewar
Do we really want to quote to this level? This message has 11 levels of 
quotes, the most I have ever seen. If everyone does this, the whole 
thread is in every message and that seems unnecessary. I don't know if 
there are gcc guidelines on this???


On 3/18/2015 9:59 AM, Ilya Enkovich wrote:

2015-03-18 16:52 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 6:41 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 16:31 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 6:24 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:42 GMT+03:00 Richard Biener richard.guent...@gmail.com:

On Wed, Mar 18, 2015 at 1:25 PM, H.J. Lu hjl.to...@gmail.com wrote:

On Wed, Mar 18, 2015 at 5:13 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:08 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 5:05 AM, Ilya Enkovich enkovich@gmail.com wrote:

2015-03-18 15:02 GMT+03:00 H.J. Lu hjl.to...@gmail.com:

On Wed, Mar 18, 2015 at 4:56 AM, Ilya Enkovich enkovich@gmail.com wrote:

Hi,

This patch fixes PR target/65444 by passing '-z bndplt' to linker when 
appropriate.  Bootstrapped and tested on x86_64-unknown-linux-gnu.  Will commit 
it to trunk in a couple of days if no objections arise.

Thanks,
Ilya
--
gcc/

2015-03-18  Ilya Enkovich  ilya.enkov...@intel.com

 PR driver/65444
 * config/i386/linux-common.h (MPX_SPEC): New.
 (CHKP_SPEC): Add MPX_SPEC.

libmpx/

2015-03-18  Ilya Enkovich  ilya.enkov...@intel.com

 PR driver/65444
 * configure.ac: Add check for '-z bndplt' support
 by linker. Add link_mpx output variable.
 * libmpx.spec.in (link_mpx): New.
 * configure: Regenerate.


diff --git a/gcc/config/i386/linux-common.h b/gcc/config/i386/linux-common.h
index 9c6560b..dd79ec6 100644
--- a/gcc/config/i386/linux-common.h
+++ b/gcc/config/i386/linux-common.h
@@ -59,6 +59,11 @@ along with GCC; see the file COPYING3.  If not see
   %:include(libmpx.spec)%(link_libmpx)
  #endif

+#ifndef MPX_SPEC
+#define MPX_SPEC \
+ "%{mmpx:%{fcheck-pointer-bounds:%{!static:%:include(libmpx.spec)%(link_mpx)}}}"
+#endif
+
  #ifndef LIBMPX_SPEC
  #if defined(HAVE_LD_STATIC_DYNAMIC)
  #define LIBMPX_SPEC \
@@ -89,5 +94,5 @@ along with GCC; see the file COPYING3.  If not see

  #ifndef CHKP_SPEC
  #define CHKP_SPEC \
-"%{!nostdlib:%{!nodefaultlibs: " LIBMPX_SPEC " " LIBMPXWRAPPERS_SPEC " }}"
+"%{!nostdlib:%{!nodefaultlibs: " LIBMPX_SPEC " " LIBMPXWRAPPERS_SPEC " }}" MPX_SPEC
  #endif
diff --git a/libmpx/configure.ac b/libmpx/configure.ac
index fe0d3f2..3f8b50f 100644
--- a/libmpx/configure.ac
+++ b/libmpx/configure.ac
@@ -40,7 +40,18 @@ AC_MSG_RESULT($LIBMPX_SUPPORTED)
  AM_CONDITIONAL(LIBMPX_SUPPORTED, [test x$LIBMPX_SUPPORTED = xyes])

  link_libmpx=-lpthread
+link_mpx=
+AC_MSG_CHECKING([whether ld accepts -z bndplt])
+echo "int main() {};" > conftest.c
+if AC_TRY_COMMAND([${CC} ${CFLAGS} -Wl,-z,bndplt -o conftest conftest.c 1>&AS_MESSAGE_LOG_FD])
+then
+AC_MSG_RESULT([yes])
+link_mpx="$link_mpx -z bndplt"
+else
+AC_MSG_RESULT([no])
+fi
  AC_SUBST(link_libmpx)
+AC_SUBST(link_mpx)



Without -z bndplt, MPX won't work correctly.  We should always pass -z bndplt
to the linker.  If the linker doesn't support it, ld will issue a warning, not
an error, and users will know their linker is too old.  When they update their
linker, they don't have to rebuild GCC.


If ld issues a warning instead of an error, then the configure test passes
and we pass '-z bndplt' to the linker.



Can you verify it with an older linker? An unknown XXX in -z XXX is always
warned about and ignored by the Linux linker.  If testing it on Linux always
passes, the test is useless.


Old ld issues a warning:

ld: warning: -z bndplt ignored.


Does configure test pass?


But gold issues an error:

ld.gold: bndplt: unknown -z option
ld.gold: use the --help option for usage information


If gold is used, MPX won't work.  What should we do here?
Should we hardcode -fuse-ld=bfd for MPX?


Is MPX disabled when the host linker is gold and gld isn't available?


No. You may use MPX with gold and old ld, but you would lose passed
bounds when making a call via the PLT.



If gold is the default linker, the configure test will fail and we never pass
-z bndplt to the linker, even if ld.bfd is available and ld.gold is fixed later.
I'd rather always pass -z bndplt to ld.


If gold is used and it doesn't support '-z bndplt', that doesn't
mean the user can't use MPX.


They can use -fuse-ld=bfd to select the bfd linker if gold fails to generate
a proper MPX binary.


Which is a weird thing to do just to get a warning instead of an
error. You don't guarantee MPX PLT generation by always passing '-z
bndplt', but you do remove the opportunity to use gold at all. With the
current check you may use any linker and manually provide additional
options if you want to.

Ilya




--
H.J.




Re: [PATCH x86] Enable v64qi permutations.

2014-12-10 Thread Robert Dewar

On 12/10/2014 11:49 AM, Richard Henderson wrote:

On 12/04/2014 01:49 AM, Ilya Tocar wrote:

+  if (!TARGET_AVX512BW || !(d->vmode == V64QImode))


Please don't over-complicate the expression.
Use x != y instead of !(x == y).


To me the original reads more clearly, since it
is of the parallel form !X or !Y, I don't see it
as somehow more complicated???



r~





Re: [PATCH] doc/generic.texi: Fix typo

2014-08-31 Thread Robert Dewar

On 8/31/2014 4:49 PM, Gerald Pfeifer wrote:

On Fri, 29 Aug 2014, Mike Stump wrote:

These errors are on purpose.


Surprising that someone would not get this obvious clever joke.



-There are many places in which this document is incomplet and incorrekt.
+There are many places in which this document is incomplete or incorrect.


Since this now came up for the second time this year, I went ahead
and applied the patch below.


Seems a shame that anyone should need an explanation, but oh well :-)

P.S. my favorite instance of this kind of documentation is an early
IBM Fortran manual, which says that you should put exactly the character
you want to see come out on the printer [in some context], e.g. an I 
for an I and a 2 for a 2. :-)


Re: [Ada] Remove VMS specific files

2014-07-31 Thread Robert Dewar

There's a user's group that works with VMS engineering that wants to
keep using the C compiler, so let's keep the config files and non-Ada
specific C files.  Tristan and I will stay on as maintainers of the
cross port for now.



Why should we continue to maintain these?


Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 4:45 AM, Richard Sandiford wrote:


OK, this version drops the [enabled by default] altogether.
Tested as before.  OK to install?


Still a huge earthquake in terms of affecting test suites and
baselines of many users. Is it really worth it? In the case of
GNAT we have only recently started tagging messages in this
way, so changes would not be so disruptive, and we can debate
following whatever gcc does, but I think it is important to
understand that any change in this area is a big one in terms
of impact on users.


Thanks,
Richard


gcc/
* opts.c (option_name): Remove enabled by default rider.

gcc/testsuite/
* gcc.dg/gomp/simd-clones-5.c: Update comment for new warning message.

Index: gcc/opts.c
===
--- gcc/opts.c  2014-02-10 20:36:32.380197329 +
+++ gcc/opts.c  2014-02-10 20:58:45.894502379 +
@@ -2216,14 +2216,10 @@ option_name (diagnostic_context *context
return xstrdup (cl_options[option_index].opt_text);
  }
/* A warning without option classified as an error.  */
-  else if (orig_diag_kind == DK_WARNING || orig_diag_kind == DK_PEDWARN
-	   || diag_kind == DK_WARNING)
-    {
-      if (context->warning_as_error_requested)
-	return xstrdup (cl_options[OPT_Werror].opt_text);
-      else
-	return xstrdup (_("enabled by default"));
-    }
+  else if ((orig_diag_kind == DK_WARNING || orig_diag_kind == DK_PEDWARN
+	    || diag_kind == DK_WARNING)
+	   && context->warning_as_error_requested)
+    return xstrdup (cl_options[OPT_Werror].opt_text);
else
  return NULL;
  }
Index: gcc/testsuite/gcc.dg/gomp/simd-clones-5.c
===
--- gcc/testsuite/gcc.dg/gomp/simd-clones-5.c   2014-02-10 20:36:32.380197329 +
+++ gcc/testsuite/gcc.dg/gomp/simd-clones-5.c   2014-02-10 21:00:32.549412313 +
@@ -3,7 +3,7 @@

  /* ?? The -w above is to inhibit the following warning for now:
 a.c:2:6: warning: AVX vector argument without AVX enabled changes
-   the ABI [enabled by default].  */
+   the ABI.  */

  #pragma omp declare simd notinbranch simdlen(4)
  void foo (int *a)





Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 7:48 AM, Richard Sandiford wrote:


The patch deliberately didn't affect Ada's diagnostic routines given
your comments from the first round.  Calling this a huge earthquake
for other languages seems like a gross overstatement.


Actually it's much less of an impact for Ada for two reasons. First we
only just started tagging warnings. In fact we have only just released
an official version with the facility for tagging warnings.

Second, this tagging of warnings is not the default (that would have
been a big earthquake) but you have to turn it on explicitly.

But I do indeed think it will have a significant impact for users
of other languages, where this has been done for a while, and if
I am not mistaken, done by default?


I don't think gcc, g++, gfortran, etc, have ever made a commitment
to producing textually identical warnings and errors for given inputs
across different releases.  It seems ridiculous to require that,
especially if it stands in the way of improving the diagnostics
or introducing finer-grained -W control.

E.g. Florian's complaint was that we shouldn't have warnings that
are not under the control of any -W options.  But by your logic
we couldn't change that either, because all those [enabled by default]s
would become [-Wnew-option]s.


I am not saying you can't change it, just that it is indeed a big
earthquake. No of course there is no commitment not to make changes.
But you have to be aware that when you make changes like this, the
impact is very significant in real production environments, and
gcc is as you know extensively used in such environments.

What I am saying here is that this is worth some discussion on what
the best approach is.

Ideally indeed it would be better if all warnings were controlled by
some specific warning category. I am not sure a warning switch that
default-covered all otherwise uncovered cases (as suggested by one
person at least) would be a worthwhile approach.



Re: Use [warning enabled by default] for default warnings

2014-02-11 Thread Robert Dewar

On 2/11/2014 9:36 AM, Richard Sandiford wrote:

  I find it hard to believe that
significant numbers of users are not fixing the sources of those
warnings and are instead requiring every release of GCC to produce
warnings with a particular wording.


Good enough for me, I think it is OK to make the change.


Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:00 PM, Richard Sandiford wrote:

We print [-Wfoo] after a warning that was enabled by the -Wfoo option,
which is pretty clear.  But for warnings that have no -W option we just
print [enabled by default], which leads to the question of _what_ is
enabled by default.  As shown by:

http://gcc.gnu.org/ml/gcc/2014-01/msg00234.html

it invites the wrong interpretation for things like:

warning: non-static data member initializers only available with -std=c++11 
or -std=gnu++11 [enabled by default]

IMO the natural assumption is that gnu++11 is enabled by default, which is
how Lars also read it.

There seemed to be support for using "warning enabled by default" instead,
so this patch does that.  Tested on x86_64-linux-gnu.  OK to install?


Sounds like an earthquake patch from the point of view of test suite
baselines!


I'll post an Ada patch separately.


Will definitely have a big impact on the Ada test suite. Fine to
post the Ada patch (which is of course trivial as a patch), but
we will have to coordinate installing it with a pass through
test base lines.



Re: [Ada] Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:03 PM, Richard Sandiford wrote:

This switches Ada from using [enabled by default] to [warning enabled
by default] for consistency with:

   http://gcc.gnu.org/ml/gcc-patches/2014-02/msg00549.html

Tested on x86_64-linux-gnu.  OK if the above patch goes in?


I would say hold off on this until we can find the time
to coordinate updating our test suite, which we will do
as fast as possible.


Thanks,
Richard


gcc/ada/
* erroutc.adb (Output_Msg_Text): Use [warning enabled by default].
* err_vars.ads, errout.ads, gnat_ugn.texi: Update comments and
documentation accordingly.

Index: gcc/ada/erroutc.adb
===
--- gcc/ada/erroutc.adb 2014-02-09 20:02:00.971968883 +
+++ gcc/ada/erroutc.adb 2014-02-09 20:02:58.640471235 +
@@ -456,7 +456,7 @@ package body Erroutc is

if Warn and then Warn_Chr /= ' ' then
   if Warn_Chr = '?' then
-Warn_Tag := new String'(" [enabled by default]");
+Warn_Tag := new String'(" [warning enabled by default]");

   elsif Warn_Chr in 'a' .. 'z' then
   Warn_Tag := new String'(" [-gnatw" & Warn_Chr & ']');
Index: gcc/ada/err_vars.ads
===
--- gcc/ada/err_vars.ads2014-02-09 20:02:00.971968883 +
+++ gcc/ada/err_vars.ads2014-02-09 20:02:58.639471226 +
@@ -141,8 +141,8 @@ package Err_Vars is
 --  Setting is irrelevant if no  insertion character is present. Note
 --  that it is not necessary to reset this after using it, since the proper
 --  procedure is always to set it before issuing such a message. Note that
-   --  the warning documentation tag is always [enabled by default] in the
-   --  case where this flag is True.
+   --  the warning documentation tag is always [warning enabled by default]
+   --  in the case where this flag is True.

 Error_Msg_String : String (1 .. 4096);
 Error_Msg_Strlen : Natural;
Index: gcc/ada/errout.ads
===
--- gcc/ada/errout.ads  2014-02-09 20:02:00.971968883 +
+++ gcc/ada/errout.ads  2014-02-09 20:02:58.639471226 +
@@ -287,8 +287,8 @@ package Errout is

 --Insertion character ?? (Two question marks: default warning)
 --  Like ?, but if the flag Warn_Doc_Switch is True, adds the string
-   --  [enabled by default] at the end of the warning message. For
-   --  continuations, use this in each continuation message.
+   --  [warning enabled by default] at the end of the warning message.
+   --  For continuations, use this in each continuation message.

 --Insertion character ?x? (warning with switch)
 --  Like ?, but if the flag Warn_Doc_Switch is True, adds the string
Index: gcc/ada/gnat_ugn.texi
===
--- gcc/ada/gnat_ugn.texi   2014-02-09 20:02:00.971968883 +
+++ gcc/ada/gnat_ugn.texi   2014-02-09 20:02:58.644471270 +
@@ -5055,8 +5055,8 @@ indexed components, slices, and selected
  @cindex @option{-gnatw.d} (@command{gcc})
  If this switch is set, then warning messages are tagged, either with
  the string ``@option{-gnatw?}'' showing which switch controls the warning,
-or with ``[enabled by default]'' if the warning is not under control of a
-specific @option{-gnatw?} switch. This mode is off by default, and is not
+or with ``[warning enabled by default]'' if the warning is not under control
+of a specific @option{-gnatw?} switch. This mode is off by default, and is not
  affected by the use of @code{-gnatwa}.

  @item -gnatw.D





Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:09 PM, Arnaud Charlet wrote:

IMO the natural assumption is that gnu++11 is enabled by default, which is
how Lars also read it.

There seemed to be support for using "warning enabled by default" instead,
so this patch does that.  Tested on x86_64-linux-gnu.  OK to install?

I'll post an Ada patch separately.


FWIW this doesn't seem desirable to me, this will make the diagnostic longer.
For Ada this wouldn't really disambiguate things, and some users may be
dependent on the current format, so changing it isn't very friendly.

Arno


can't we just reword the one warning where there is an ambiguity to
avoid the confusion, rather than creating such an earthquake, which,
as Arno says, really has zero advantages to Ada programmers, and clear
disadvantages .. to me [enabled by default] is already awfully long!


Re: [Ada] Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:10 PM, Richard Sandiford wrote:


Which testsuite do you mean?  I did test this with Ada enabled
and there were no regressions.

If you mean an external testsuite then I certainly don't mind
holding off the Ada part.  I hope the non-Ada part could still
go in without it though.


I mean many external test suites, many of our users maintain their
own test suites, and base lines for their codes, and any change like
this is very disruptive.



Re: Use [warning enabled by default] for default warnings

2014-02-09 Thread Robert Dewar

On 2/9/2014 3:23 PM, Richard Sandiford wrote:


can't we just reword the one warning where there is an ambiguity to
avoid the confusion, rather than creating such an earthquake, which
as Arno says, really has zero advantages to Ada programmers, and clear
disadvantages .. to me [enabled by default] is already awfully long!


Well, since the Ada part has been rejected I think we just need to
consider this from the non-Ada perspective.  And IMO there's zero
chance that each new warning will be audited for whether the
[enabled by default] will be unambiguous.  The fact that this
particular warning caused confusion and someone actually reported
it doesn't mean that there are no other warnings like that.  E.g.:

   -fprefetch-loop-arrays is not supported with -Os [enabled by default]

could also be misunderstood, especially if working on an existing codebase
with an existing makefile.  And the effect for:

   pragma simd ignored because -fcilkplus is not enabled [enabled by default]

is a bit unfortunate.  Those were just two examples -- I'm sure I could
pick more.


Indeed, worrisome examples,

a shorter substitute would be [default warning]

???


Thanks,
Richard





Re: [PATCH] Do not set flag_complex_method to 2 for C++ by default.

2014-01-07 Thread Robert Dewar

On 1/7/2014 8:46 PM, Andrew Pinski wrote:


Correctness over speed is better.  I am sorry GCC is the only one
which gets it correct here.  If people don't like there is a flag to
disable it.


Obviously in a case like this, it is the programmer who should
be able to decide between fast-and-acceptable and slow-and-accurate.
This is an old debate (e.g. consider Cray, who always went for the
fast-and-acceptable path, and was able to build machines that were
interestingly fast partly as a result of this philosophy).

So having a switch is not controversial

But then the question is, what should the default be. The trouble with
the slow-but-accurate is that many users will never know about the
switch, and will judge the compiler ONLY on the basis that it is slow,
without even knowing, noticing, or caring that it is more correct
than the competition.

We have seen gcc lose out in a number of head to head comparisons,
because GCC defaulted to -O0 (optimization really really off, and
don't care how horrible the code is) and the competition defaulted
to optimization turned on.

We even worked with one customer, and explained the issue, and they
said sorry, company procedures require us to run both compilers with
their default settings, since that is perceived as being fairer!
Their conclusion was that gcc was unacceptably inefficient and they
went with the competition.




You can say the same thing that people who find C is slower can use
the flag to disable it.

thanks,

David



Thanks,
Andrew Pinski



thanks,

David


On Wed, Nov 13, 2013 at 9:07 PM, Andrew Pinski pins...@gmail.com wrote:

On Wed, Nov 13, 2013 at 5:26 PM, Cong Hou co...@google.com wrote:

This patch is for PR58963.

In the patch http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html,
the builtin function is used to perform complex multiplication and
division. This is to comply with C99 standard, but I am wondering if
C++ also needs this.

There is no complex keyword in C++, and no content in C++ standard
about the behavior of operations on complex types. The <complex>
header file is all written in source code, including complex
multiplication and division. GCC should not do too much for them by
using builtin calls by default (although we can set -fcx-limited-range
to prevent GCC doing this), which has a big impact on performance
(there may exist vectorization opportunities).

In this patch flag_complex_method will not be set to 2 for C++.
Bootstrapped and tested on an x86-64 machine.
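
A minimal standalone sketch of the difference under discussion (a hypothetical
example, not part of the patch): with the naive textbook formula an infinite
operand can degenerate into NaNs, while the C99-style lowering through the
__muldc3 libcall recovers an infinite result.

#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Naive textbook multiply: (a+bi)(c+di) = (ac-bd) + (ad+bc)i.
   Roughly what -fcx-limited-range (and the proposed non-C99 default)
   emits for a complex multiply.  */
static double complex mul_naive (double complex x, double complex y)
{
  double a = creal (x), b = cimag (x), c = creal (y), d = cimag (y);
  return (a * c - b * d) + (a * d + b * c) * I;
}

int main (void)
{
  /* __builtin_complex is a GCC builtin (used to implement CMPLX).  */
  double complex x = __builtin_complex ((double) INFINITY, (double) INFINITY);
  double complex y = __builtin_complex (2.0, 0.0);

  /* With flag_complex_method == 2 the multiply below is typically lowered
     to the __muldc3 libcall, which rechecks NaN results and should yield
     an infinity here; the naive formula gives (nan, nan) for this input.  */
  double complex careful = x * y;
  double complex naive = mul_naive (x, y);

  printf ("lowered: (%g, %g)\n", creal (careful), cimag (careful));
  printf ("naive:   (%g, %g)\n", creal (naive), cimag (naive));
  return 0;
}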


I think you need to look into this issue deeper as the original patch
only enabled it for C99:
http://gcc.gnu.org/ml/gcc-patches/2005-02/msg01483.html .

Just a little deeper will find
http://gcc.gnu.org/ml/gcc/2007-07/msg00124.html which says yes C++
needs this.

Thanks,
Andrew Pinski




thanks,
Cong


Index: gcc/c-family/c-opts.c
===
--- gcc/c-family/c-opts.c (revision 204712)
+++ gcc/c-family/c-opts.c (working copy)
@@ -198,8 +198,10 @@ c_common_init_options_struct (struct gcc
   opts->x_warn_write_strings = c_dialect_cxx ();
   opts->x_flag_warn_unused_result = true;

-  /* By default, C99-like requirements for complex multiply and divide.  */
-  opts->x_flag_complex_method = 2;
+  /* By default, C99-like requirements for complex multiply and divide.
+     But for C++ this should not be required.  */
+  if (c_language != clk_cxx && c_language != clk_objcxx)
+    opts->x_flag_complex_method = 2;
  }

  /* Common initialization before calling option handlers.  */
Index: gcc/c-family/ChangeLog
===
--- gcc/c-family/ChangeLog (revision 204712)
+++ gcc/c-family/ChangeLog (working copy)
@@ -1,3 +1,8 @@
+2013-11-13  Cong Hou  co...@google.com
+
+ * c-opts.c (c_common_init_options_struct): Don't let C++ comply with
+ C99-like requirements for complex multiply and divide.
+
  2013-11-12  Joseph Myers  jos...@codesourcery.com

   * c-common.c (c_common_reswords): Add _Thread_local.




Re: gcc's obvious patch policy

2013-11-26 Thread Robert Dewar

To me the issue is not what is written down about
the policy, but whether the policy works in practice,
and it seems like it does, so what's the problem?

This just seems to be making a problem where
none exists.


Re: RFA: patch to fix PR58967

2013-11-04 Thread Robert Dewar

On 11/4/2013 2:23 PM, Vladimir Makarov wrote:

The following patch fixes

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58967

The removed code is too old.  To be honest, I even don't remember why I
added this.  LRA has been changed a lot since this change and now it
works fine without it.


Whenever I see a comment like this, it reminds me to remind everyone
to comment your code! Do not assume you will remember why you wrote
what you wrote, so even if it is you who will look at your code, write
comments for yourself assuming you have totally forgotten!



Re: Copyright years for new old ports (Re: Ping^6: contribute Synopsys Designware ARC port)

2013-10-03 Thread Robert Dewar

On 10/3/2013 5:10 PM, Joseph S. Myers wrote:

On Wed, 2 Oct 2013, Joern Rennecke wrote:


 From my understanding, the condition for adding the current Copyright year
without a source code change is to have a release in that year.  Are we
sure 4.9.0 will be released this year?


release here includes availability of a development version in public
version control, as well as snapshots and non-FSF releases.  The effect is
that if the first copyright year in a GCC source file is 1987 or later, a
single range year-2013 can be used.



Just as a FYI, for the GNAT front end we have always used
year ranges, but we only update the year if we actually
modify a file.


Re: [x86, PATCH 2/2] Enabling of the new Intel microarchitecture Silvermont

2013-06-01 Thread Robert Dewar

On 6/1/2013 9:52 AM, Jakub Jelinek wrote:


Sorry for nitpicking, but there are various formatting issues.


A number of these formatting issues could be easily detected by
the compiler. It might be really useful to add a switch to do
such detection. For Ada, the GNAT compiler has -gnatyg which
enables standard style checking according to our coding
standards for Ada, and we find this saves a lot of time
as well as avoiding style errors getting into the code base
(this kind of nitpicking style error detection is more easily
done by a machine than a human). Of course not all stlye errors
can be easily handled, but a lot of them can!



Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 5:42 AM, jacob navia wrote:


1) The fsin instruction is ONE instruction! The sin routine is (at
least) a thousand instructions!
 Even if the fsin instruction itself is slow, it should be a thousand
times faster than the complicated routine gcc calls.
2) The FPU is at 64 bits of mantissa using gcc, i.e. fsin will calculate
with a 64-bit mantissa and NOT only 53 bits as SSE2 does. The fsin
instruction is more precise!


You are making conclusions based on naive assumptions here.


I think that gcc has a problem here. I am pointing you to this problem,
but please keep in mind
I am no newbee...


Sure, but that does not mean you are familiar with the intricacies
of accurate computation of transcendental functions!


jacob





Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar



As for 1), the only way is to measure that. Compile the following and we
will see who is right.


Right, probably you should have done that before posting
anything! (I leave the experiment up to you!)


cat <<< '
#include <math.h>

int main(){ int i;
   double x=0;

   double ret=0;
   double f;
   for(i=0;i<1000;i++){
      ret+=sin(x);
      x+=0.3;
   }
   return ret;
}
' > sin.c

gcc sin.c -O3 -lm -S
cp sin.s fsin.s
# change the implementation to fsin in fsin.s
gcc sin.s -lm -o sin; gcc fsin.s -lm -o fsin
for I in `seq 1 10` ; do
time ./sin
time ./fsin
done




I think that gcc has a problem here. I am pointing you to this problem,
but please keep in mind
I am no newbee...


Sure, but that does not mean you are familiar with the intricacies
of accurate computation of transcendental functions!


jacob







Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 10:46 AM, Robert Dewar wrote:



As 1) only way is measure that. Compile following an we will see who is
rigth.


Right, probably you should have done that before posting
anything! (I leave the experiment up to you!)


And of course this experiment says nothing about accuracy!



Re: Calculating cosinus/sinus

2013-05-11 Thread Robert Dewar

On 5/11/2013 11:20 AM, jacob navia wrote:


OK I did a similar thing. I just compiled sin(argc) in main.
The results prove that you were right. The single fsin instruction
takes longer than several HUNDRED instructions (calls, jumps,
table lookups, what have you).

Gone are the times when an fsin would take 30 cycles or so.
Intel has destroyed the FPU.


That's an unwarranted claim, but indeed the algorithm used
within the FPU is inferior to the one in the library. Not
so surprising: the one in the chip is old, and we have made
good advances in learning how to calculate things accurately.
Also, the library is using the fast new 64-bit arithmetic.
So none of this is (or should be) surprising.


In the benchmark code all that code/data is in the L1 cache.
In real life code you use the sin routine sometimes, and
the probability of it not being in the L1 cache is much higher,
I would say almost one if you do not do sin/cos VERY often.


But of course you don't really care about performance so much
unless you *are* using it very often. I would be surprised if
there are any real programs in which using the FPU instruction
is faster.

And as noted earlier in the thread, the library algorithm is
more accurate than the Intel algorithm, which is also not at
all surprising.


For the time being I will go on generating the fsin code.
I will try to optimize Moshier's SIN function later on.


Well I will be surprised if you can find significant
optimizations to that very clever routine. Certainly
you have to be a floating-point expert to even touch it!

Robert Dewar




Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-09 Thread Robert Dewar

On 4/9/2013 5:39 AM, Florian Weimer wrote:

On 04/09/2013 01:47 AM, Robert Dewar wrote:

Well the back end has all the information to figure this out I think!
But anyway, for Ada, the current situation is just fine, and has
the advantage that the -gnatG expanded code listing clearly shows in
Ada source form, what is going on.


Isn't this a bit optimistic, considering that run-time overflow checking
currently does not use existing hardware support?


Not clear what you mean here, we don't rely on the back end for run-time
overflow checking. What is over-optimistic here?

BTW, existing hardware support can be a dubious thing, you have
to be careful to evaluate costs, for instance you don't want to
use INTO on modern x86 targets!






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

It may be interesting to look at what we have done in
Ada with regard to overflow in intermediate expressions.
Briefly we allow specification of three modes

all intermediate arithmetic is done in the base type,
with overflow signalled if an intermediate value is
outside this range.

all intermediate arithmetic is done in the widest
integer type, with overflow signalled if an intermediate
value is outside this range.

all intermediate arithmetic uses an infinite precision
arithmetic package built for this purpose.

In the second and third cases we do range analysis that
allows smaller intermediate precision if we know it's
safe.

We also allow separate specification of the mode inside
and outside assertions (e.g. preconditions and postconditions)
since in the latter you often want to regard integers as
mathematical, not subject to intermediate overflow.
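
A hedged C analogue of the second mode (a hypothetical example, not GNAT
code): do the intermediate arithmetic in a wider type and only check the
final result against the base range.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* a * b may overflow int, but the widened intermediate cannot overflow
   long long for 32-bit int operands; only the final value is range
   checked (c is assumed nonzero).  */
static int scaled (int a, int b, int c)
{
  long long tmp = (long long) a * b / c;   /* wide intermediate */
  if (tmp < INT_MIN || tmp > INT_MAX)
    abort ();                              /* the result itself overflowed */
  return (int) tmp;
}

int main (void)
{
  /* 100000 * 100000 overflows int, but the scaled result fits.  */
  printf ("%d\n", scaled (100000, 100000, 100000));
  return 0;
}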


Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:15 AM, Kenneth Zadeck wrote:


I think this applies to Ada constant arithmetic as well.


Ada constant arithmetic (at compile time) is always infinite
precision (for float as well as for integer).



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:24 AM, Kenneth Zadeck wrote:


So then how does a language like ada work in gcc?   My assumption is
that most of what you describe here is done in the front end and by the
time you get to the middle end of the compiler, you have chosen types
for which you are comfortable to have any remaining math done in along
with explicit checks for overflow where the programmer asked for them.


That's right, the front end does all the promotion of types


Otherwise, how could ada have ever worked with gcc?


Sometimes we do have to make changes to gcc to accommodate Ada-specific
requirements, but this was not one of those cases. Of
course the back end would do a better job of the range analysis
to remove some unnecessary use of infinite precision, but the
front end in practice does a good enough job.



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:23 AM, Kenneth Zadeck wrote:

On 04/08/2013 09:19 AM, Robert Dewar wrote:

On 4/8/2013 9:15 AM, Kenneth Zadeck wrote:


I think this applies to Ada constant arithmetic as well.


Ada constant arithmetic (at compile time) is always infinite
precision (for float as well as for integer).


What do you mean when you say "constant arithmetic"?  Do you mean
places where there is an explicit 8 * 6 in the source, or do you mean any
arithmetic that a compiler, using the full power of interprocedural
constant propagation, can discover?


Somewhere between the two. Ada has a very well defined notion of
what is and what is not a static expression, it definitely does not
include everything the compiler can discover, but it goes beyond just
explicit literal arithmetic, e.g. declared constants

   X : Integer := 75;

are considered static. It is static expressions that must be computed
with full precision at compile time. For expressions the compiler can
tell are constant even though not officially static, it is fine to
compute at compile time for integer, but NOT for float, since you want
to use target precision for all non-static float-operations.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 9:58 AM, Kenneth Zadeck wrote:


Yes, but the relevant question for the not officially static integer
constants is: in what precision are those operations to be performed?
I assume that you choose gcc types for these operations and you
expect the math to be done within that type, i.e. exactly the way you
expect the machine to perform.


As I explained in an earlier message, *within* a single expression, we
are free to use higher precision, and we provide modes that allow this
up to and including the use of infinite precision. That applies not
just to constant expressions but to all expressions.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 10:26 AM, Kenneth Zadeck wrote:


My confusion is what you mean by "we"?   Do you mean we the writer of
the program, we the person invoking the compiler by the use of command
line options, or we, your company's implementation of Ada?


Sorry, bad usage. The gcc implementation of Ada allows the user to
specify by pragmas how intermediate overflow is handled.


My interpretation of your first email was that it was possible for the
programmer to do something equivalent to adding attributes surrounding a
block in the program to control the precision and overflow detection of
the expressions in the block.   And if this is so, then by the time the
expression is seen by the middle end of gcc, those attributes will have
been converted into tree code that will evaluate the code in a well defined
way by both the optimization passes and the target machine.


Yes, that's a correct understanding


Kenny





Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 5:12 PM, Lawrence Crowl wrote:

(BTW, you *really* don't need to quote entire messages, I find
it rather redundant for the entire thread to be in every message,
we all have thread following mail readers!)


Correct me if I'm wrong, but the Ada standard doesn't require any
particular maximum evaluation precision, but only that you get an
exception if the values exceed the chosen maximum.


Right, that's at run-time; at compile-time, for static expressions,
infinite precision is required.

But at run-time, all three of the modes we provide are
standard conforming.


In essence, you have moved some of the optimization from the back
end to the front end.  Correct?


Sorry, I don't quite understand that. If you are saying that the
back end could handle this widening for intermediate values, sure
it could; this is the kind of thing that can be done at various
different places.






Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 5:46 PM, Kenneth Zadeck wrote:

In some sense you have to think in terms of three worlds:
1) what you call compile-time static expressions is one world which in
gcc is almost always done by the front ends.
2) the second world is what the optimizers can do.   This is not
compile-time static expressions because that is what the front end has
already done.
3) there is run time.

My view on this is that optimization is just doing what is normally done
at run time but doing it early.   From that point of view, we are, if not
required, morally obligated to do things in the same way that the
hardware would have done them.  This is why I am so against richi on
wanting to do infinite precision.  By the time the middle or the back
end sees the representation, all of the things that are allowed to be
done in infinite precision have already been done.   What we are left
with is a (mostly) strongly typed language that pretty much says exactly
what must be done. Anything that we do in the middle end or back ends in
infinite precision will only surprise the programmer and make them want
to use llvm.


That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.



Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 6:34 PM, Mike Stump wrote:

On Apr 8, 2013, at 2:48 PM, Robert Dewar de...@adacore.com wrote:

That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.


gcc lacks an infinite precision plus operator?!  :-)


Right, that's why we do everything in the front end in the
case of Ada. But it would be perfectly reasonable for the
back end to do this substitution.


Re: Comments on the suggestion to use infinite precision math for wide int.

2013-04-08 Thread Robert Dewar

On 4/8/2013 7:46 PM, Kenneth Zadeck wrote:


On 04/08/2013 06:45 PM, Robert Dewar wrote:

On 4/8/2013 6:34 PM, Mike Stump wrote:

On Apr 8, 2013, at 2:48 PM, Robert Dewar de...@adacore.com wrote:

That may be so in C, in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.


gcc lacks an infinite precision plus operator?!  :-)


Right, that's why we do everything in the front end in the
case of Ada. But it would be perfectly reasonable for the
back end to do this substitution.

but there is no way in the current tree language to convey which ones
you can and which ones you cannot.


Well the back end has all the information to figure this out I think!
But anyway, for Ada, the current situation is just fine, and has
the advantage that the -gnatG expanded code listing clearly shows in
Ada source form, what is going on.






Re: C/C++ Option to Initialize Variables?

2013-02-18 Thread Robert Dewar



Forgive me, but I don't see where anything is guaranteed to be zero'd
before use. I'm likely wrong somewhere since you disagree.


http://en.wikipedia.org/wiki/.bss


This is about what happens to work, and specifically notes that it is
not part of the C standard. There is a big difference between programs
that obey the standard, and those that don't but happen to work on some
systems. The latter programs have latent bugs that can definitely
cause trouble.

A properly written C program should avoid uninitialized variables, just
as a properly written Ada program should avoid them.

In GNAT, we have found the Initialize_Scalars pragma to be very useful
in finding uninitialized variables. It causes all scalars to be
initialized with a bit pattern that can be specified at link time, and
modified at run-time.

If you run a program with different patterns, it should give the same
result; if it does not, you have an uninitialized variable or other
non-standard aspect in your program which should be tracked down and
fixed.

Note that the BSS-is-always-zero guarantee often does not apply when
embedded programs are restarted, so it is by no means a universal
guarantee.



Re: C/C++ Option to Initialize Variables?

2013-02-18 Thread Robert Dewar



Wrong.  It specifies that objects with static storage duration that
aren't explicitely initialized are initialized with null pointers, or
zeros depending on type.  6.7.8.10.


OK, that means that the comments of my last message don't apply to
variables of this type. So they should at least optionally be excluded
from any feature to initialize variables.


Hence if .bss is to be used to place such objects then the runtime system
_must_ make sure that it's zero initialized.




Re: hard typdef - proposal - I know it's not in the standard

2013-01-28 Thread Robert Dewar

On 1/28/2013 6:48 AM, Alec Teal wrote:

On 28/01/13 10:41, Jonathan Wakely wrote:

On 28 January 2013 06:18, Alec Teal wrote:

the very
nature of just putting the word hard before a typedef is something I find
appealing

I've already explained why that's not likely to be acceptable, because
identifiers are allowed before 'typedef' and it would be ambiguous.
You need a different syntax.


That is why I'd want both, but at least in my mind n3515 would be nearer to
"if I really wanted it I could use classes" than the hard-typedef.

I've already said N3515 is not about classes.

You keep missing the point of what I mean by "like classes": I mean in
terms of achieving the result, PLEASE think it through.


I have read this thread, and I see ZERO chance of this proposal being
accepted for inclusion into gcc at the current time.

Feel free to create your own version of gcc that has this feature (that
after all is what freedom in software is about) and promote it elsewhere
but it is really a waste of time to debate it further on this list.

The burden for non-standard language extensions in gcc is very high.
The current proposal is ambiguous and flawed, and in any case does not
begin to meet this high standard.

I think this thread should be allowed to RIP at this stage.



Re: hard typdef - proposal - I know it's not in the standard

2013-01-24 Thread Robert Dewar

On 1/24/2013 9:10 AM, Alec Teal wrote:


Alec I am eager to see what you guys think, this is a 'feature' I've
wanted for a long time and you all seem approachable rather than the
distant compiler gods I expected.


I certainly see the point of this proposal, indeed introducing
this kind of strong typing makes sense to anyone familiar with
Ada, where it is a standard feature of the language, and the
way that Ada is always used.

However, I wonder whether it is simply too big a feature for
gcc to add on its own to C++. For sure you would have to have
language lawyers look very carefully at this proposal to see
if it is indeed sound with respect to the formal rules of the
language. Often features that make good sense when expressed
informally turn out to be problematic when they are fully
defined in the appropriate language of the standard.
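
For reference, a minimal C sketch (hypothetical, not part of the N3515
proposal or the hard-typedef syntax) of the difference between an ordinary
typedef alias and the strong behaviour being asked for, using the usual
one-member-struct workaround:

/* An ordinary typedef is only an alias: nothing stops mixing the two.  */
typedef int meters;
typedef int seconds;

/* The usual workaround for a distinct ("strong") type in C today is a
   one-member struct, which the compiler really does keep separate.  */
struct strong_meters  { int value; };
struct strong_seconds { int value; };

int main (void)
{
  meters m = 3;
  seconds s = m;                       /* accepted: both are just int */

  struct strong_meters  sm = { 3 };
  struct strong_seconds ss = { 5 };
  /* ss = sm;  -- rejected: incompatible types, which is the effect a
     hard typedef is meant to give without the wrapper boilerplate.  */
  (void) s; (void) sm; (void) ss;
  return 0;
}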


I can also see why 'strong typedefs' were not done, it tries to do
too much with the type system and becomes very object like


I don't see what this has to do with objects!


Re: Integer Overflow/Wrap and GCC Optimizations

2013-01-24 Thread Robert Dewar

On 1/24/2013 10:02 AM, Jeffrey Walton wrote:


What I am not clear about is when an operation is deemed undefined
or implementation defined.


The compiler is free to assume that no arithmetic operation
on signed integers results in overflow. It is allowed to
take advantage of such assumptions in generating code (and
it does so).

You have no right to assume *anything* about the semantics
of code that has an integer overflow (let alone make
assumptions about the generated code).

This is truly undefined, not implementation defined, and
if your program has such an overflow, you cannot assume
ANYTHING about the generated code.



Re: Integer Overflow/Wrap and GCC Optimizations

2013-01-24 Thread Robert Dewar

On 1/24/2013 10:33 AM, Jeffrey Walton wrote:


In this case, I claim we must perform the operation. Its the result
that we can't use under some circumstances (namely, overflow or wrap).


You do not have to do the operation if the program has an
overflow. The compiler can reason about this, so for example

  a = b + 1;
  if (a > b) ...

The compiler can assume that the test is true, because the only
conceivable way it would be false is on an overflow that wraps,
but that's undefined. If a is not used other than in this test,
the compiler can also eliminate the addition and the assignment




Re: gcc : c++11 : full support : eta?

2013-01-22 Thread Robert Dewar



About the time Clang does because GCC now has to compete.
How about that? Clang is currently slightly ahead and GCC really needs
to change if it is to continue to be the best.


Best is measured by many metrics, and it is unrealistic to expect
any product to be best in all respects.

Anyway, it still comes down to figuring out how to find the resources.
Not clear that there is commercial interest in rapid implementation
of C++11; we certainly have not heard of any such interest, and in the
absence of such commercial interest, we do indeed come down to hoping
to find the volunteer help that is needed.



Re: not-a-number's

2013-01-16 Thread Robert Dewar

On 1/16/2013 6:54 AM, Mischa Baars wrote:

And indeed apparently the answer then is '2'. However, I don't think
this is correct. If that means that there is an error in the C
specification, then there probably is an error in the specification.


The C specification seems perfectly reasonable to me (in fact it is
rather familiar that x != x is a standard test for something being
a NaN). The fact that you for unclear reasons don't like the C spec
does not mean it is wrong!
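
For example (a hypothetical snippet, not from the original report):

#include <math.h>
#include <stdio.h>

int main (void)
{
  double x = 0.0 / 0.0;                 /* a quiet NaN */

  /* A NaN compares unequal to everything, including itself, so x != x
     is true exactly when x is a NaN: the classic portable test that
     isnan() effectively performs.  (Under -ffast-math this reasoning
     no longer holds.)  */
  printf ("x != x : %d\n", x != x);     /* prints 1 */
  printf ("isnan  : %d\n", isnan (x));  /* nonzero  */
  return 0;
}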



Re: not-a-number's

2013-01-16 Thread Robert Dewar

On 1/16/2013 7:10 AM, Mischa Baars wrote:


And as I have said before: if you are satisfied with the answer '2',
then so be it and you keep the compiler the way it is, personally I'm am
not able to accept changes to the sources anyway. I don't think it is
the right answer though.


The fact that you don't think that gcc should follow the C standard
is hardly convincing unless it is backed up by a convincing technical
argument. I see nothing surprising about the 2 here; indeed any other
answer *would* be surprising. I still don't understand the basis for
your non-standard views.


Mischa.





Re: Fwd: Updating copyright dates automatically

2013-01-02 Thread Robert Dewar

On 1/2/2013 12:26 PM, Jeff Law wrote:


Any thoughts on doing something similar?

I've always found lazily updating the copyright years to be error prone.
   If we could just update all of them now, which is OK according to the
FSF guidelines, we could avoid one class of problems.


For GNAT at AdaCore, we have a precommit script that does not let
you check in something with a wrong copyright date. That works well.

(boy that was a gigantic email, I hope we don't get a slew of people
being lazy and quoting it :-))


Re: Please don't deprecate i386 for GCC 4.8

2012-12-15 Thread Robert Dewar

On 12/15/2012 12:42 AM, Ralf Corsepius wrote:


If you want a port to be live, show that it is live by posting regular
testresults to gcc-testresults.

Not all of this world is Linux nor backed by large teams at 
companies :)  We simply do not have the resources to do this.


But that's the point. If you don't have the resources, you seem
to be expecting others to provide them, but at this stage I
really don't see a strong argument for investing such effort.



Re: Please don't deprecate i386 for GCC 4.8

2012-12-15 Thread Robert Dewar

On 12/15/2012 12:32 PM, Cynthia Rempel wrote:

Hi,

Thanks for the fast response!

So to keep an architecture supported by GCC, we would need to:

Three or more times a year preferably either during OR after
stage3

1. use the SVN version of gcc, 2. patch with an RTEMS patch, 3. use
./contrib/test_summary and pipe the output to a shell. 4. Report the
testresults to gcc-patches.

Would this be sufficient to maintain support for an architecture?  As
far as support goes, I rebuild RTEMS quite often, so once I
understand how to run the tests I don't mind doing so for the x86
architectures. If running the test script is all that's required, I
can do that.


Well of course it would always be appreciated if you can jump in
and help sort out problems that are 386 specific (hopefully there
won't be any!)


Re: Please don't deprecate i386 for GCC 4.8

2012-12-14 Thread Robert Dewar

On 12/14/2012 3:13 PM, Cynthia Rempel wrote:

Hi,

RTEMS still supports the i386, and there are many i386 machines still
in use.  Deprecating the i386 will negatively impact RTEMS ability to
support the i386.  As Steven Bosscher said, the benefits are small,
and the impact would be serious for RTEMS i386 users.


Since there is a significant maintenance burden for such continued
support, I guess a question to ask is whether the RTEMS folks or
someone using RTEMS are willing to step in and shoulder this burden.


Re: Please don't deprecate i386 for GCC 4.8

2012-12-14 Thread Robert Dewar

Having read this whole thread, I vote for deprecating the 386.
People using this ancient architecture can perfectly well use
older versions of gcc that have this support.


Re: Deprecate i386 for GCC 4.8?

2012-12-13 Thread Robert Dewar



Intel stopped producing embedded 386 chips in 2007.


Right, but this architecture is not protected, so the
question is whether there are other vendors producing
compatible chips. I don't know the answer.



Re: Deprecate i386 for GCC 4.8?

2012-12-13 Thread Robert Dewar

On 12/13/2012 7:26 AM, Steven Bosscher wrote:


Ralf has found one such a vendor, it seems.

But to me, that doesn't automatically imply that GCC must continue to
support such a target. Other criteria should also be considered. For
instance, quality of implementation and maintenance burden.


Yes, of course these are valid concerns. It's just important to have
all the facts. In particular, it would be interesting to contact this
company and see if they use gcc. Perhaps they would be willing to invest
some development effort?



Re: Deprecate i386 for GCC 4.8?

2012-12-12 Thread Robert Dewar

On 12/12/2012 1:01 PM, Steven Bosscher wrote:

Hello,

Linux support for i386 has been removed. Should we do the same for GCC?
The oldest ix86 variant that'd be supported would be i486.


Are there any embedded chips that still use the 386 instruction set?



Re: Deprecate i386 for GCC 4.8?

2012-12-12 Thread Robert Dewar

On 12/12/2012 2:52 PM, Steven Bosscher wrote:


And as usual: If you use an almost 30 years old architecture, why
would you need the latest-and-greatest compiler technology?
Seriously...


Well the embedded folk often end up with precisely this dichotomy :-)
But if there is no sign of 386 embedded chips, then I agree it is
reasonable to deprecate.


Ciao!
Steven





Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 1:56 PM, Mike Stump wrote:

I've noticed that:

$ grep -l '^M' gcc/testsuite/gnat.dg/*
discr36.ads
discr36_pkg.adb
discr36_pkg.ads
discr38.adb
loop_optimization11.adb
loop_optimization11_pkg.ads
loop_optimization13.adb
loop_optimization13.ads

:-(  Surely these are just normal text files, right?  Can I strip the ^M from 
them?



Probably good to have some tests with standard CR/LF terminators, since 
this is what a lot of the world uses.


Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:09 PM, Mike Stump wrote:

On Dec 7, 2012, at 10:57 AM, Robert Dewar de...@adacore.com wrote:

On 12/7/2012 1:56 PM, Mike Stump wrote:

I've noticed that:

$ grep -l '^M' gcc/testsuite/gnat.dg/*
discr36.ads
discr36_pkg.adb
discr36_pkg.ads
discr38.adb
loop_optimization11.adb
loop_optimization11_pkg.ads
loop_optimization13.adb
loop_optimization13.ads

:-(  Surely these are just normal text files, right?  Can I strip the ^M from 
them?



Probably good to have some tests with standard CR/LF terminators, since this is 
what a lot of the world uses.


Then, to preserve them, the files must be tagged as binary in svn and git.  
Doing so will probably make the normal file merging that git/svn would do, 
inoperative.

Ok to so tag all the files?


Probably not worth it if it causes that disruption. svn certainly
handles CR/LF terminators fine; I guess Git does not?






Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:16 PM, Mike Stump wrote:


Yes, you can strip them, no problem.


Since emails likely crossed paths….  I'm going to give you and Robert a chance 
to figure out what you'd like to do…  I _only_ care about consistency between 
contents as seen from svn and git.  Stripping ^M can do this, as can marking 
them as binary.  So marking them ensures that the ^Ms are always there, both 
on ^M systems and non-^M systems.

So, after hashing it how, let me know the final verdict.  Thanks.


I would strip the CR's, not a big deal, and not worth worrying about.






Re: Ada: ^M in ada source files

2012-12-07 Thread Robert Dewar

On 12/7/2012 2:50 PM, Arnaud Charlet wrote:


Anyway, I'll let Robert have the final word on this one.

I'm fine with either solution (converting to LF, or marking files binary,
or a mix of both).

Arno



I would convert to LF; I think it causes less confusion.


Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar



2) The fact that Android refuses to provide a non-HTML e-mail capability
is ridiculous but does not seem to me to be a reason for us to change
our policy.


Surely there are alternative email clients for Android that have plain
text capability???



Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar

On 11/24/2012 12:59 PM, Daniel Berlin wrote:

On Sat, Nov 24, 2012 at 12:47 PM, Robert Dewar de...@adacore.com wrote:



2) The fact that Android refuses to provide a non-HTML e-mail capability
is ridiculous but does not seem to me to be a reason for us to change
our policy.



Surely there are alternative email clients for Android that have plain
text capability???



Yes, we should expect users to change, instead of keeping up with users.


Well my experience with HTML-burdened mail is awful. From people who set
ludicrous font choices, to bad color choices, to inappropriate use of
multiple fonts, to inappropriate use of colors, it's a mess.

I think it is perfectly reasonable to expect serious developers to
send text messages in text form. BTW, our experience at AdaCore, where
we get lots of email from lots of customers, users, hobbyists, and
students, sending email from all sorts
of programs, is that yes, occasionally they send us HTML burdened
email, but almost always when we ask them to adjust their mailers to
send text, they can do so without problems.



Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-24 Thread Robert Dewar

On 11/24/2012 1:13 PM, Jonathan Wakely wrote:


The official gmail app, which obviously integrates well with gmail and
is good in most other ways, won't send non-html mails.


There seem to be a variety of alternatives


http://www.tested.com/tech/android/3110-the-best-alternative-android-apps-to-manage-all-your-email/


K-9 is a free software client that looks interesting


I find that very annoying, but I get annoyed with the app and am not
suggesting the GCC lists should change to deal with it.





Re: Could we start accepting rich-text postings on the gcc lists?

2012-11-23 Thread Robert Dewar

For me the most annoying thing about HTML-burdened emails
is idiots who choose totally inappropriate fonts that make
their stuff really hard to read. I choose a font for plain
text emails that is just right on my screen etc. I do NOT
want it overridden. And as for people who use color etc.,
well, others have said enough there.


Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 5:52 AM, nk...@physics.auth.gr wrote:


1. Is it possible to use this scheme and not violate the GPL,v3 for
GCC? If I use GIMPLE dumps generated by -fdump-tree-all I think
there is a violation (correct me if not). Thus this module should be
FLOSS/GPL'ed, right?


You can't expect to get legal advice from a list like this, and if
you do get advice, you can't trust it. You have to consult an attorney
to evaluate issues like this, and even then you can't get
guaranteed definitive advice. Copyright issues are complex,
as Supap Kirtsaeng is discovering in his trip to the Supreme Court.

Furthermore, no one has any interest in assuring you that what
you are doing is OK in advance. The GPL is about encouraging
people to use the GPL, and the gcc community does not really
have an interest in making it easier for people to follow
some other path.

This may seem a little harsh, but it's (somewhat inevitably)
the way things are.

The only thing that would assure you that what you are planning
is OK is a specific interpretation of how the GPL applies by the
copyright holder. But this is not going to happen. Random non-expert
opinions by folks who are not attorneys may help confirm your
interpretation, but it's risky to rely on such opinions.

BTW, it is no surprise that you got no response from
licens...@fsf.org.

Robert Dewar


Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

I'm pretty certain I have correctly interpreted GPL,v3. I have good
reasons to believe that. However, I'm willing to read your
interpretation of the GPL,v3, if you have any.


If you are certain enough, then you can of course proceed
on that assumption. I have no interest in giving my opinion
on this, why should I? Perhaps others will, who knows?
We will see, but it would not surprise me if no one is
willing to provide the equivalent of an electronic
letter of comfort :-)



BTW, it is no surprise that you got no response from
licens...@fsf.org.


I thought this was their job. Obviously I was wrong. I'm not trying to
circumvent the GPL, just to adhere to it. Is this so wrong? Then what
is the point of the exception clauses? They are there but you don't
want people to understand how to use them?


Yes, you were wrong, it is not the job of that mailing list to
provide legal advice!

There are two comfortable ways to conform to the GPL.

a) make all your own stuff GPL'ed

b) write proprietary code that links in only modules with
the standard library exception.

Anything else, and you are pretty much on your own. Especially
if trying to rig up some system that has full-GPL components, and
non-GPL components.

Even a) and b) are a little tricky if you don't have a well defined
entity that can guarantee the licensing of the modules you use (remember
that notices within files do not have legal weight).



Re: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 8:17 AM, nk...@physics.auth.gr wrote:


I disagree.


I think you are wrong; however, it is not really productive to argue the point.


I would not casually ignore Richard's opinion; he has FAR more
experience here than you do, and far more familiarity with
the issues involved.



Re: Fwd: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 9:44 AM, nk...@physics.auth.gr wrote:

Quoting Richard Kenner ken...@vlsi1.ultra.nyu.edu:


There are not many lawyers in Greece that deal with open-source licenses.


The legal issue here has nothing whatsoever to do with open-source
licenses: the exact same issue comes up with proprietary licenses and
that, in fact, is where most of the precedents come from.

The legal issue is in the definition of a derived work and what kind
of separation is needed between two programs (works) to be able to
successfully assert that one is not a derived work of the other.


Yes, this is the major issue here.


One principle that can be applied is that if you have a program in
two pieces, then they are independent if either of them can be used
(and is used in practice) with other programs. But if the two pieces
can only work together, that seems part of the same program. I tried
to get this principle established in federal court in the Bentley
vs Intergraph trial, but unfortunately it settled 24 hours before
the judge published his opinion.



Re: Fwd: Questions regarding licensing issues

2012-11-07 Thread Robert Dewar

On 11/7/2012 11:08 AM, Richard Kenner wrote:

Correct.  A court of competent jurisdiction can decide whether your scheme
conforms to the relevant licenses; neither licens...@fsf.org nor the
people on this list can.


A minor correction: licens...@fsf.org *could* determine that since they are
the copyright holders.  If they say it's OK, that would be permitting such
a scheme.  However, the FSF, as a matter of policy, *does not* respond to
queries about whether or not some scheme violates the GPL.


And why should they? Or why would they?



I believe in free software as a contribution to a better society and
believe in the use of licenses such as GPLv3 to promote software sharing
by providing a software commons that can be used by those who will
contribute their changes to that commons, and do not consider this list -
or any GNU Project list - an appropriate place to seek advice about how to
do things going against the spirit of that commons.


I very much agree!


Me too!






Re: Libgcc and its license

2012-10-10 Thread Robert Dewar

On 10/10/2012 10:48 AM, Joseph S. Myers wrote:

On Wed, 10 Oct 2012, Gabor Loki wrote:


2) repeat all the compilation commands related to the previous list in
the proper environment. The only thing which I have added to the
compilation command is an extra -E option to preprocess every sources.
3) create a unique list of all source and header files from the
preprocessed files.
4) at final all source, header and generated files are checked for their
licenses.


The fact that a header is read by the compiler at some point in generating
a .o file does not necessarily mean that object file is a work based on
that header; that is a legal question depending on how the object code
relates to that header.


Well legally the status of a file is not in any way affected by what
the header of the file says, but we should indeed try to make sure
that all headers properly reflect the intent.



Re: Libgcc and its license

2012-10-10 Thread Robert Dewar

On 10/10/2012 4:16 PM, Joseph S. Myers wrote:


I'm not talking about the relation between the headings textually located
in a source file and the license of that source file.  I'm talking about
the relation between the license of a .o file and the license of .h files
#included at several levels of indirection from the .c source that was
compiled to that .o file (in particular, headers included within tm.h, but
most or all of the content of which is irrelevant for code being built for
the target).


Right, I understand, but that gets messy quickly!






Re: patch to fix constant math

2012-10-08 Thread Robert Dewar

On 10/8/2012 11:01 AM, Nathan Froyd wrote:

- Original Message -

Btw, as for Richard's idea of conditionally placing the length field
in rtx_def, it looks like overkill to me.  These days we'd merely want to
optimize for 64-bit hosts, thus unconditionally adding a 32-bit
field to rtx_def looks ok to me (you can wrap that inside a union to
allow both descriptive names and eventual different use - see what
I've done to tree_base)


IMHO, unconditionally adding that field isn't "optimize for 64-bit
hosts", but "gratuitously make one of the major compiler data
structures bigger on 32-bit hosts".  Not everybody can cross-compile
from a 64-bit host.  And even those people who can don't necessarily
want to.  Please try to consider what's best for all the people who
use GCC, not just the cases you happen to be working with every day.


I think that's reasonable in general, but as time goes on, and every
$300 laptop is 64-bit capable, one should not go TOO far out of the
way trying to make sure we can compile everything on a 32-bit machine.
After all, we don't try to ensure we can compile on a 16-bit machine
though when I helped write the Realia COBOL compiler, it was a major
consideration that we had to be able to compile arbitrarily large
programs on a 32-bit machine with one megabyte of memory. That was
achieved at the time, but is hardly relevant now!



Re: [patch][lra] Comment typo fix

2012-10-01 Thread Robert Dewar

On 10/1/2012 6:09 PM, Steven Bosscher wrote:

I suppose no-one would object if I commit this as obvious at some point?

Index: lra-constraints.c
===
--- lra-constraints.c   (revision 191858)
+++ lra-constraints.c   (working copy)
@@ -4293,7 +4293,7 @@ update_ebb_live_info (rtx head, rtx tail
 {
   if (prev_bb != NULL)
 {
- /* Udpate DF_LR_IN (prev_bb):  */
+ /* Update DF_LR_IN (prev_bb):  */
   EXECUTE_IF_SET_IN_BITMAP (check_only_regs, 0, j, bi)
 if (bitmap_bit_p (live_regs, j))
   bitmap_set_bit (DF_LR_IN (prev_bb), j);



took me a few readings to see the change you had made; amazing how
the brain reads what it expects to see :-)


Re: [CPP] Add pragmas for emitting diagnostics

2012-09-26 Thread Robert Dewar

On 9/26/2012 4:19 PM, Tom Tromey wrote:

Florian == Florian Weimer fwei...@redhat.com writes:


Florian This patch adds support for #pragma GCC warning and #pragma GCC
Florian error. These pragmas can be used from preprocessor macros,
Florian unlike the existing #warning and #error directives.  Library
Florian authors can use these pragmas to add deprecation warnings to
Florian macros they define.

I'm not sure if my libcpp review powers extend to an extension like
this.

It seems reasonable to me though.


To me too, these correspond to the Compile_Time_Warning and 
Compile_Time_Error in Ada, and are definitely very useful!
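
For instance, a library header could attach a deprecation message to a
function-like macro along these lines (a minimal illustrative sketch; the
names are invented):

/* The new pragmas can be emitted from a macro expansion via _Pragma,
   which #warning and #error cannot.  */
extern void new_log (const char *msg);

#define OLD_LOG(msg)                                                   \
  do                                                                   \
    {                                                                  \
      _Pragma ("GCC warning \"OLD_LOG is deprecated, use new_log\"")   \
      new_log (msg);                                                   \
    }                                                                  \
  while (0)

void
init (void)
{
  OLD_LOG ("starting up");   /* warns at each use of the macro */
}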




Re: GCC

2012-09-24 Thread Robert Dewar

On 9/24/2012 6:53 AM, Jerome Huck wrote:

from Mr Jerome Huck

Good morning.

I have been using the GCC suite on Windows, mainly in the various
Fortran standards: 77, 2003, ... Thanks for those tools! The little Google Nexus 7
seems a wonderful tool. I would like to know if we can expect a version
of GCC to run on Android for devices such as the Nexus 7?


Sooner if you get to work on creating the port!


Thanks in advance.

Best regards.





Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.


Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling
-O1 or -O2, that if I want to debug something I have to recompile
with -O0 -g, which causes quite a bit of code generation change :-)



Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 9:38 AM, Jakub Jelinek wrote:

On Thu, Sep 13, 2012 at 09:33:20AM -0400, Robert Dewar wrote:

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.


Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling


It is.  IMHO the most important reason is not that somebody would build
first with just -O2 and then later on to debug the code would build it again
with -g -O2 and hope the code is the same, but by making sure -g vs. -g0
doesn't change generated code we ensure -g doesn't pessimize the generated
code, and really many people compile even production code with -g -O2
or similar.  The debug info is then either stripped, or stripped into
separate files/not shipped or only optionally shipped with the product.

Jakub


Sure, it is obvious that you don't want -g to affect -O1 or -O2 code,
but I think if you have -Og (if and when we have that), it would not
be a bad thing for -g to affect that. I can even imagine that what
-Og means is -O1 if you don't have -g, and something good for
debugging if you do have -g.


Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 12:07 PM, Xinliang David Li wrote:

It is very important to make sure -g does not affect code gen ---
people do release builds with -g and optimization, and strip the
binary before sending it to production machines ..


Yes, of course, and for sure -g cannot affect optimized code, see
my follow on message.


David

On Thu, Sep 13, 2012 at 6:33 AM, Robert Dewar de...@adacore.com wrote:

On 9/13/2012 8:00 AM, Richard Guenther wrote:


Because doing so would create code generation differences -g vs. -g0.



Sometimes I wonder whether the insistence on -g not changing code
generation is warranted. In practice, gdb for me is so weak in handling
-O1 or -O2, that if I want to debug something I have to recompile
with -O0 -g, which causes quite a bit of code generation change :-)





Re: [PATCH] Combine location with block using block_locations

2012-09-13 Thread Robert Dewar

On 9/13/2012 12:46 PM, Tom Tromey wrote:

Robert == Robert Dewar de...@adacore.com writes:


Robert Sometimes I wonder whether the insistence on -g not changing code
Robert generation is warranted. In practice, gdb for me is so weak in handling
Robert -O1 or -O2, that if I want to debug something I have to recompile
Robert with -O0 -g, which causes quite a bit of code generation change :-)

If those are gdb bugs, please file them.


Well I think everyone knows about the failings of gdb in -O1 mode; they
have been much discussed, and they are not really gdb bugs, more an
issue of it being basically hard to debug optimized code. Things used
to be a LOT better: I routinely debugged code at -O1, but then the
compiler got better at optimization, and things deteriorated so much
at -O1 that now I don't even attempt it.


Tom





Re: Allow use of ranges in copyright notices

2012-07-02 Thread Robert Dewar

On 7/2/2012 8:35 AM, Alexandre Oliva wrote:

On Jun 30, 2012, David Edelsohn dje@gmail.com wrote:


IBM's policy specifies a comma:



first year, last year



and not a dash range.


But this notation already means something else in our source tree.



I think using the dash is preferable, and is a VERY widely used
notation, used by all major software companies I deal with!




Re: Code optimization: warning for code that hangs

2012-06-24 Thread Robert Dewar

On 6/24/2012 11:22 AM, Richard Guenther wrote:


I suppose I think it would be reasonable to issue a -Wall warning for
code like that.  The trick is detecting it.  Obviously there is nothing
wrong with a recursive call.  What is different here is that the
recursive call is unconditional.  I don't see a way to detect that
without writing a specific warning pass to look for that case.


Ada has this warning, and it has proved useful!


Re: Code optimization: warning for code that hangs

2012-06-24 Thread Robert Dewar

On 6/24/2012 12:09 PM, Ángel González wrote:

Peter A. Felvegi writes:

My question is: wouldn't it be possible to print a warning when a jmp
to itself or trivial infinite recursion is generated? The code
compiled fine w/ -Wall -Wextra -Werror w/ 4.6 and 4.7.

Note that if the target architecture is a microcontroller, an endless
loop can be a legitimate way to finish / abort the program.



But not an infinite recursion! And an endless loop is such a rare
case that it deserves a warning; if it's a false positive in this case,
so what?
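
The kind of code at issue is tiny; something like this (illustrative)
compiles cleanly today, and at -O2 the tail call typically becomes a
jump-to-self:

/* The recursive call is unconditional, so the function can only recurse
   forever (or spin, once the tail call is turned into a jump).  */
int
countdown (int n)
{
  return countdown (n - 1);   /* no base case */
}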



Re: [PATCH] Improved re-association of signed arithmetic

2012-05-18 Thread Robert Dewar

On 5/18/2012 4:27 PM, Ulrich Weigand wrote:


I finally got some time to look into this in detail.  The various special-
case transforms in associate_plusminus all transform a plus/minus expression
tree into either a single operand, a negated operand, or a single plus or
minus of two operands.  This is valid as long as we can prove that the
newly introduced expression can never overflow (if we're doing signed
arithmetic).


It's interesting to note that for Ada, reassociation is allowed if there
are no overriding parens, even if it would introduce an overflow
(exception) that would not occur otherwise. However, I think I prefer
the C semantics!


Re: Use sed -n … instead of sed s/…/p -e d in s-header-vars

2012-05-15 Thread Robert Dewar

On 5/14/2012 11:22 PM, Hans-Peter Nilsson wrote:


Random non-maintainer comments: I'd suggest adding a nearby
comment to avoid a future edit changing it back.  The attachment
with the patch had the mime-type Video/X-DV, maybe indicating
an issue with your mail-client setup mismatching the .dif
filename suffix.


As always, comments about what you didn't do and why you
didn't do it, are often the most important (and note that
code can never be self-documenting in this regard :-))


brgds, H-P




Re: How do I disable warnings across gcc versions?

2012-05-14 Thread Robert Dewar

On 5/14/2012 6:26 PM, Andy Lutomirski wrote:


This seems to defeat the purpose, and adding
#pragma GCC diagnostic ignored "-Wpragmas"
is a little gross.  How am I supposed to do this?


The gcc mailing list is for gcc development, not
questions about the use of gcc; please address such
questions to the gcc-help list.


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-30 Thread Robert Dewar

On 4/30/2012 4:16 AM, Paulo J. Matos wrote:

Peter,

We have a working backend for a Harvard architecture chip where
function pointer and data pointers have necessarily different sizes. We
couldn't do this without changing GCC itself in strategic places and
adding some extra support in our backend. We haven't used address spaces
or any other existing GCC solution.


Sounds like a useful set of changes to have in the main sources, since
this is hardly a singular need!


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 8:51 AM, Georg-Johann Lay wrote:

Peter Bigot wrote:


The MSP430's split address space and ISA make it expensive to place
data above the 64 kB boundary, but cheap to place code there.  So I'm
looking for a way to use HImode for data pointers, but PSImode for
function pointers.  If gcc supports this, it's not obvious how.

I get partway there with FUNCTION_MODE and some hacks for the case
where the called object is a symbol, but not when it's a
pointer-to-function data object.


I don't think it's a good solution to use different pointer sizes.
You will run into all sorts of trouble -- both in the application and
in GCC.


Just to be clear, there is nothing in the standard that forbids the
sizes being different AFAIK? I understand that both gcc and apps
may make unwarranted assumptions.
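
The assumption under discussion is easy to state as code (a trivial
illustrative program); strictly conforming C gives no guarantee that the
two values printed are equal:

#include <stdio.h>

int
main (void)
{
  /* Nothing in the standard requires these to match.  */
  printf ("sizeof (void *)          = %zu\n", sizeof (void *));
  printf ("sizeof (void (*) (void)) = %zu\n", sizeof (void (*) (void)));
  return 0;
}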



Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 9:25 AM, Andreas Schwab wrote:

Robert Dewar de...@adacore.com writes:


Just to be clear, there is nothing in the standard that forbids the
sizes being different AFAIK? I understand that both gcc and apps
may make unwarranted assumptions.


POSIX makes that assumption, via the dlsym interface.


that's most unfortunate, I wonder why this assumption was ever
allowed to creep into the POSIX interface. I wonder if it was
deliberate, or accidental?
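
To make the dlsym point concrete (an illustrative sketch): the lookup
returns a void *, and POSIX expects callers to convert that object pointer
into a function pointer, which only works cleanly when the two have the
same representation:

#include <dlfcn.h>

typedef double (*cosine_fn) (double);

cosine_fn
load_cos (void *handle)
{
  /* dlsym returns void *; the cast assumes an object pointer can hold a
     function pointer without losing information (ISO C does not even
     guarantee the conversion is meaningful).  */
  return (cosine_fn) dlsym (handle, "cos");
}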


Andreas.





Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 12:47 PM, Basile Starynkevitch wrote:


My biased point of view is that designing a processor instruction set (for
POSIX-like systems or standard C software in mind) with function pointers of
a different size than data pointers is today a mistake: most software makes
the implicit assumption that all pointers have the same size.


What's your data for "most" here? I would have guessed that most
software doesn't care.


Re: making sizeof(void*) different from sizeof(void(*)())

2012-04-29 Thread Robert Dewar

On 4/29/2012 1:19 PM, Basile Starynkevitch wrote:


For instance, I don't think that porting the Linux kernel (or the FreeBSD one)
to such an architecture (having data pointers of a different size than
function pointers) is easy.


Well it doesn't surprise me too much that GNU/Linux has non-standard
stuff in it


And GTK wants nearly all pointers to be gpointer-s, and may cast them to
function pointers internally.


But GTK surprises me more. I guess the C world always surprises me in 
the extent to which people ignore the standard :-)


Regards.




Re: Switching to C++ by default in 4.8

2012-04-17 Thread Robert Dewar

On 4/16/2012 5:36 AM, Chiheng Xu wrote:

On Sat, Apr 14, 2012 at 7:07 PM, Robert Dewar de...@adacore.com wrote:

hand, but to suggest banning all templates is not a supportable
notion.



Why ?



Because some simple uses of templates are very useful, and
not problematic from any point of view.


Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/13/2012 9:15 PM, Chiheng Xu wrote:


So, I can say, most of the GCC source code is in large files.

And this also holds for language front-ends.


I see nothing inherently desirable about having all small files.
For example, in GNAT, yes, some files are large, sem_ch3 (semantic
analysis for chapter 3 stuff which includes all of type handling)
is large (over 20,000 lines, 750KB), but nothing would be gained
(and something would be lost) by trying to split this file up.

As long as all your tools can handle large files nicely, and
as long as the internal organization of the large file is
clean and clear, I see no problem.






Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/13/2012 9:34 PM, Chiheng Xu wrote:

On Wed, Apr 4, 2012 at 7:38 PM, Richard Guenther
richard.guent...@gmail.com  wrote:


Oh, and did we address all the annoyances of debugging gcc when it's
compiled by a C++ compiler? ...



Probably, if you can refrain from using some advanced C++
features (namespace, template, etc.), you will not have such
annoyances.


To me namespaces are fundamental in terms of the advantages that
moving to C++ can give in a large project, I would never regard
them as some advanced feature to be avoided. If namespaces
cause trouble for the debugger, that's surprising and problematic!






Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:38 AM, Chiheng Xu wrote:


Actually, I only partially agree with you on this. And I didn't say
smaller is necessarily better.
But normally, high-cohesion and low-coupling code tends not to be large.
Normally large files tend to export only a few highly related entry
points. Most of the functions in a large file are sub-routines (directly
or indirectly) of the entry points. The functions can be divided into
several groups or layers, and each group or layer can form a conceptual
sub-module. I often see GCC developers divide functions in large files
into sub-modules by prefixing them with a sub-module-specific prefix and
grouping them together.  This is good, but not enough. If the functions
in sub-modules are put in separate files, then the code will be more
manageable than not doing so. This is because the
interfaces/boundaries between sub-modules are more clear, and the code
has higher cohesion and lower coupling.


I find the claim unconvincing in practice: it is possible to have code
in separate files with unclear interfaces and boundaries, and code in
single files with perfectly clear interfaces and boundaries. You can
claim without evidence that there is a causal relation here but that
is simply not the case in my experience.







Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:39 AM, Gabriel Dos Reis wrote:


Indeed, the notion that 'namespace' is advanced is troublesome.
Similarly I would find any notion that simple uses and definitions
of templates (functions, datatypes) are advanced a bit specious.


Indeed! In the case of templates there is a real issue, in that
we all know that misuse of templates can get completely out of
hand, but to suggest banning all templates is not a supportable
notion.



Re: Switching to C++ by default in 4.8

2012-04-14 Thread Robert Dewar

On 4/14/2012 6:02 AM, Chiheng Xu wrote:


If the debugger fully supports namespaces, that will be nice. I just say,
in case the debugger has trouble with namespaces, you can avoid them.

But personally, when I write C++ code, I never use namespaces.  I
always prefix my class names (and corresponding source file names) with
a proper module name, and put all the source files of a module in its
dedicated sub-directory.  This makes class names globally unique
throughout the project, and facilitates further re-factoring (searching
and replacing).


I find that rather a horrible substitute for proper use of namespaces.
I know it is common, partly because that's what you have to do in C,
and partly because namespaces were added late.


When using namespaces, people can and tend to use the same name in
different namespaces; this seems like an advantage, but I see it as a
disadvantage.


I think that is a seriously misguided position. There is a good reason
for adding namespaces (Ada has always had this kind of capability in
the form of packages, and the package concept in Ada is, to Ada
programmers, one of its most powerful features). Since you never use
namespaces, it is not surprising that you do not appreciate their
importance.

To me, the ability to make extensive use of namespaces is one of
the strong arguments for switching to C++


If you want to change a name in one namespace to some
other more accurate/proper name, you use some search tool to find
all the references to the name; you will find that the name is
probably also used in other namespaces, so you just can't use a
replace-all command to replace all references with the new name; you must
manually replace them one by one. Is this what you want?


You use proper tools that do the replacement just of references to
the entity whose name you want to change. It is often the case that
people avoid use of features because of a lack of proper tools, but
certainly there are tools that can do this kind of intelligent
replacement (GPS from AdaCore is one such example, but we certainly
wouldn't suggest it was unique in this respect!)


Re: RFC: -Wall by default

2012-04-13 Thread Robert Dewar

On 4/13/2012 2:03 AM, Gabriel Dos Reis wrote:

On Thu, Apr 12, 2012 at 4:50 PM, Robert Dewar de...@adacore.com wrote:

End of thread for me, remove me from the reply lists, thanks
discussion is going nowhere, at this stage my vote is for
no change whatever in the way warnings are handled.


I was asked "wassup with Robert?".  All I can say is that
it is a decade-old relationship :-)

-- Gaby


Nothing up, just felt nothing more was worth saying on this
thread, no point in just getting into the mode of repeating
stuff going nowhere.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 4:55 AM, Fabien Chêne wrote:


I've got a radically different experience here, real bugs were
introduced while trying to remove this warning, and as far as I can
tell, I've never found any bugs involving precedence of && and || --
in the code I'm working on --, whose precedence is really well known
to everyone.


You simply can't make a claim on behalf of everyone like this, and it's
very easy to prove you wrong; I personally know many competent
programmers who do NOT know this rule.


In real life, things are not as simple as (a && b)
|| (c && d); some checks usually lie over more than five lines. This
warning applied to such checks is really a pain to remove.


a) complex conditionals over five lines are a bit of a menace
anyway, but ones that rely on knowing this precedence rule are
a true menace if you ask me.

b) it should be trivial to remove this warning, as it is a simple
automatic refactoring that should be easily done with a tool (most
certainly the automatic refactoring available in GPS for GNAT would
take care of this, if it needed to, which it does not, since in Ada
parentheses are required in such cases; the designers of Ada most
certainly disagreed with you that everyone knows this rule).
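
For reference, the construct at issue looks like this (illustrative); &&
binds tighter than ||, so the two forms below mean the same thing, and
-Wparentheses asks for the second:

int
check (int a, int b, int c, int d)
{
  if (a && b || c && d)        /* warning: suggest parentheses around '&&' within '||' */
    return 1;
  if ((a && b) || (c && d))    /* explicit grouping: no warning */
    return 1;
  return 0;
}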


We shall definitely have an option to remove this very warning,
without getting rid of the whole set of useful warnings embedded in
-Wparentheses.


Yes, that seems a perfectly reasonable proposition. In GNAT there is
a very general mechanism to suppress any specific warning (pragma
Warnings (Off, string), where string matches the text of the message
you want to suppress)) as well as a long list of specific warnings
switches, similar to what we have in GNU C.






Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 5:55 AM, Miles Bader wrote:


... and it's quite possible that such bugs resulting from adding
parentheses means that the programmer fixing the code didn't
actually know the right precedence!


or that the layout (which is what in practice we should rely on
to make things clear with or without the parentheses) was sloppy
or plain incorrect.


I think the relative precedence of * and + can be safely termed very
well known, but in the case of && and ||, it's not so clear...


indeed


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 6:44 AM, Andrew Haley wrote:


I would also suggest that a competent programmer would know what they
don't know; when reading code they'd look it up, when writing code
they'd insert parentheses for clarity.


Yes, of course I 100% agree with that. But then by your definition
code that does not have the parentheses for clarity is written by
incompetent programmers, and it seems reasonable to have a warning
that warns them of this incompetence :-)


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 9:30 AM, Andrew Haley wrote:



Sorry for the confusion: I intended to write


I would also suggest that your competent programmer would know what
they don't know; when reading code they'd look it up, when writing
code they'd insert parentheses for clarity.


Using two different definitions of "competent programmer" without
clarification makes me an incompetent writer, I suppose.  :-)

Andrew.


The correct thing to write definitely does NOT depend on the
competence or otherwise of the writer. If putting in
parentheses adds to clarity, then everyone should do it
since you are writing code for other people to read,
not yourself.





Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 10:26 AM, Gabriel Dos Reis wrote:


-W0: no warnings (equivalent to -w)
-W1: default
-W2: equivalent to the current -Wall
-W3: equivalent to the current -Wall -Wextra


  I like this suggestion a lot.


Me too!

I also like short switches, but gcc mostly favors long
hard-to-type not-necessarily-easy-to-remember switch
names.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 11:06 AM, Gabriel Dos Reis wrote:


What is nonsensical there?


But they *are* ordinal.


Now?  What is the order?


less warnings to more warnings, what could be more
ordered than that!


  It works just fine for -O,


Exactly what happens with -O?  -On does not necessarily
generate faster or better code when n is higher.


-On means more optimizations for higher n, simple enough?


In fact, -Os is a perfect example of a short name that is NOT
a number.


right, because -Os lies outside the "more optimizations for
higher values" rule.

I agree with Dave Korn, I do not understand your objection.

I would understand an objection of the general kind that you
prefer mnemonic names to numbers, but that ultimately is just
that: a preference, nothing more. You seem on the contrary to
be trying to make a substantive argument against the digit
scheme, but I can't understand it.


Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 10:48 AM, Andrew Haley wrote:


Certainly, everything that adds to clarity (and has no runtime costs!)
is desirable.  But adding parentheses may not add to clarity if doing
so also obfuscates the code.  There is a cost to the reader due to a
blizzard of syntactically redundant parentheses; if there weren't, we
wouldn't bother with operator precedence.


Well I think "blizzard" is overblown. Ada requires these parentheses
and I never heard of anyone complaining of blizzards :-)


Ultimately, it's a matter of taste and experience.  I'm going to find
it hard to write for people who don't know the relative precedence of
&& and ||.


Well it's always a problem for programmers who know too much to write
code that can easily be read by everyone, in Ada we take the position
that readability is paramount, and we really don't care if programmers
find it harder to write readable code :-)


Andrew.




Re: RFC: -Wall by default

2012-04-12 Thread Robert Dewar

On 4/12/2012 11:23 AM, Gabriel Dos Reis wrote:


less warnings to more warnings, what could be more
ordered than that!


What exactly do you put in -Wn to make it give *more* warnings?
I can think of a reduced number of switches that would give you
more warnings on a specific program without them being terribly
useful.


It's JUST like the optimization case: you use a higher number
to get more optimization. Yes, there may be cases where this
hurts (we have seen cases where -O3 is slower than -O2
due to cache effects).

For warnings you put a higher number to get more warnings. Yes,
you may find that you get too many warnings and they are not
useful. Remedy: reduce the number after -W :-)


-On means more optimizations for higher n, simple enough?


like the traditional -O2 vs. -O3?


Right, -O3 does more optimizations than -O2. Of course there
might be cases where this doesn't help. I bet if you look
hard enough you will find cases where -O1 code is slower
than -O0.

For -O, we do not guarantee that a higher number means faster code,
just that more optimizations are applied.

for -W, we do not guarantee that a higher number means a more
useful set of warnings, just more of them.

