Re: -Wconversion versus libstdc++

2007-01-14 Thread Martin Sebor

Paolo Carlini wrote:
[...]
Let's wait a bit more for other opinions, say one day or two, then I 
will start the actual work. As far as I can see, other compilers do not 
warn in such cases, and adding casts (*) isn't the cleanest practice in 
the world, thus my caution...


FYI: HP aCC warns for some but not all of these types of lossy
initializations. For example, a double-to-int conversion emits
a warning, but ptrdiff_t-to-size_t or other signed-to-unsigned
conversions do not, probably because they're so common. I find
this behavior useful.

Martin

$ cat t.cpp && aCC -AA -V -c +w t.cpp
int foo (double d)
{
const int i = d;
return i;
}

aCC: HP ANSI C++ B3910B A.03.70
Warning (suggestion) 819: t.cpp, line 3 # Initialization of type 'int' 
with 'double' may result in truncation of value.

const int i = d;
  ^
Warning (suggestion) 818: t.cpp, line 3 # Type 'double' is larger than 
type 'int', truncation in value may result.

const int i = d;
  ^


Re: not using push by gcc

2015-02-06 Thread Martin Sebor

On 02/02/2015 01:15 AM, Mr.reCoder wrote:

Dear gcc developer,

I have code like this:

#include <stdio.h>
void foo(int x)
{
   int y;
   x++;
   y = 4;
}
int main(void)
{
   foo(2);
   return 0;
}

and compiled it with the gcc -o outexec srcfile.c command.
When we disassemble the file, we see that passing the argument to
function foo is done by

--
sub esp, 4
mov dword ptr [esp], 2
call    foo
--

instructions. Why doesn't gcc use the push command, like:


You can find a discussion on the same subject here:

http://stackoverflow.com/questions/4534791/why-does-it-use-the-movl-instead-of-push

A recent GCC does as you suggest, so the version you're using
might be either too old or buggy. I see -mpush-args documented
as enabled by default as far back as 3.0.

Martin



Re: Undefined behavior due to 6.5.16.1p3

2015-03-11 Thread Martin Sebor

On 03/11/2015 07:27 AM, Robbert Krebbers wrote:

Dear Joseph,

On 03/10/2015 11:01 PM, Joseph Myers wrote:

and did u.b.b2 = f (u.a); instead of u.b.b2 = u.a;, that would not be
undefined (see 6.8.6.4 and GCC PR 43784).

Thanks for the references, those are useful!

But what about long long on 32-bit machines? For example:

union {
   long long a;
   struct { char b1; long long b2; } b;
} u;

Will GCC perform similar optimizations as for the case of big structs? I
tried to play around with long long in Martin's example, but failed to
trigger unexpected behaviors in GCC.


Whether or not this will preserve the source value will depend
on the alignment of long long (32 or 64 bits) and on how gcc
decides to copy the contents of the operands.

Commonly, 8-byte aligned long longs will not overlap in the union
above, so the value will always be preserved.

When long long is aligned on a 4-byte boundary (as in the i386
ABI) the two long long members will overlap, but if the whole
source operand is read into a pair of registers first and only
then written out into the destination operand then the original
value will also be preserved.
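
As a sketch (not part of the original exchange), the layout can be
checked with offsetof; the exact offsets depend on the target ABI:

  #include <stddef.h>
  #include <stdio.h>

  union U {
    long long a;
    struct { char b1; long long b2; } b;
  };

  int main (void)
  {
    /* With 4-byte alignment of long long (as in the i386 ABI),
       b.b2 starts at offset 4 and bytes 4-7 of a overlap it.
       With 8-byte alignment, b.b2 starts at offset 8 and the
       two members do not overlap.  */
    printf ("a at %zu, b.b2 at %zu\n",
            offsetof (union U, a), offsetof (union U, b.b2));
    return 0;
  }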

Martin


Re: Undefined behavior due to 6.5.16.1p3

2015-03-10 Thread Martin Sebor

On 03/09/2015 01:26 PM, Robbert Krebbers wrote:

I was wondering whether GCC uses 6.5.16.1p3 of the C11 standard as a
license to perform certain optimizations. If so, could anyone provide me
an example program.

In particular, I am interested in the "then the overlap shall be
exact" part of 6.5.16.1p3:

   If the value being stored in an object is read from another
   object that overlaps in any way the storage of the first
   object, then the overlap shall be exact and the two objects
   shall have qualified or unqualified versions of a compatible
   type; otherwise, the behavior is undefined.


I suspect every compiler relies on this requirement in certain
cases; otherwise, copying would require the use of temporary
storage. Here's an example:

  struct A {
    int a [32];
  };

  union {
    struct A a;
    struct {
      char b1;
      struct A b2;
    } b;
  } u;

  void foo (void) {
    u.b.b2 = u.a;
  }

Martin


Re: Undefined behavior due to 6.5.16.1p3

2015-03-12 Thread Martin Sebor

On 03/12/2015 03:10 AM, Vincent Lefevre wrote:

On 2015-03-12 00:19:55 +0100, Robbert Krebbers wrote:

On 03/11/2015 05:31 PM, Vincent Lefevre wrote:

I disagree that it is an extension. The standard does not say
that one union member can be active at any time.

The interpretation under which this is allowed is confirmed by
Note 95 of 6.5.2.3p3.

Effective types disallow accessing a union member other than the
current one arbitrarily, so naively, effective types contradict
note 95 of 6.5.2.3p3.


Well, this depends on the interpretation of effective types in the
case of a union. For instance, when writing

   union { char a[16]; int b; } u;
   u.b = 1;

you don't set the member only (an int), but the whole union object is
affected, even bytes that are not parts of the int. So, one may see
the effective type as being the union type.


The purpose of the term /effective type/ is to make it possible
to talk about types of allocated objects (those with no declared
type). In the example above, u.b is declared to have the type
int and assigning to it doesn't change the type of other members
of the union. But because u.a has a character type the value of
u.b can be accessed via u.a (or any other lvalue of that type).
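
For example (a minimal sketch):

  #include <stdio.h>

  int main (void)
  {
    union { char a[16]; int b; } u;

    u.b = 1;

    /* u.a has a character type, so it can be used to examine the
       object representation of u.b; this prints the bytes of the
       int value 1 in the target's byte order.  */
    for (size_t i = 0; i != sizeof u.b; ++i)
      printf ("%d ", u.a[i]);
    putchar ('\n');
    return 0;
  }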



The item "an aggregate or union type that includes one of the
aforementioned types among its members (including, recursively,
a member of a subaggregate or contained union)", about aliasing,
is also very unclear.


The purpose of this bullet is to allow access to subobjects
of otherwise incompatible aggregate types that contain members
of compatible types. For example:

  // Types A and B are incompatible.
  struct A { int a; };
  struct B { unsigned b; double d; };

  struct A *x = malloc (sizeof *x);
  struct B *y = malloc (sizeof *y);

  // Allocated objects *x and *y have no type, declared
  // or effective (yet).

  *x = (struct A){ 0 };
  *y = (struct B){ 0, 0 };

  // *x has effective type A and *y has effective type B.

  // Access initial element of *y by an lvalue of an aggregate
  // type whose first member's effective type is compatible with
  // the effective type of the first member of *x.

  *x = *(struct A*)y;

  // The effective type of *x is still A.

Martin



Re: Bug with compound literal initializer?

2015-03-25 Thread Martin Sebor

On 03/24/2015 03:26 PM, Alexey Neyman wrote:

Hi,

I am seeing a strange behavior when a compound initializer is used in a
structure initialization. A test case:

[[[
struct s {
 int y;
 unsigned long *x;
};

struct s foo = {
 .y = 25,
 .x = (unsigned long [SZ]){},
};
]]]

If SZ is zero, the initializer for the .y member (.y = 25) is
dropped as well:



I'm having trouble finding support for any expectation in this
case. When SZ is zero, the source code above is invalid in C
not only because it attempts to create an array with zero
elements but also because it doesn't provide any initializers
for the initializer-list. Strictly speaking, it's required to
be diagnosed but at runtime it has undefined behavior.

It's unclear from the GCC manual (or from the apparent absence
of tests for this construct in the test suite) that the code
is intended to do something meaningful in GCC. The compiler
supports zero-size arrays as an extension when declaring such
objects as the last members of structs. But in this case, the
code attempts to create an unnamed temporary array of zero
elements which, IMO, makes no sense. At the same time, I
wouldn't expect gcc to simply omit the initialization for
foo.x. I suggest opening a gcc bug for it.

Martin

PS This is the output I get with gcc 5.0:

$ cat u.c && gcc -Wall -Wextra -Wpedantic -std=c11 -ansi u.c && ./a.out
struct s {
    int y;
    unsigned long *x;
};

struct s foo = {
    .y = 25,
    .x = (unsigned long [0]){},
};

extern int printf (const char*, ...);

int main (void) {
    printf ("%i %p\n", foo.y, (void*)foo.x);
    return 0;
}
u.c:7:5: warning: ISO C90 forbids specifying subobject to initialize 
[-Wpedantic]

 .y = 25,
 ^
u.c:8:5: warning: ISO C90 forbids specifying subobject to initialize 
[-Wpedantic]

 .x = (unsigned long [0]){},
 ^
u.c:8:5: warning: ISO C forbids zero-size array [-Wpedantic]
u.c:8:29: warning: ISO C forbids empty initializer braces [-Wpedantic]
 .x = (unsigned long [0]){},
 ^
u.c:8:29: warning: ISO C90 forbids compound literals [-Wpedantic]
u.c:9:1: warning: missing initializer for field ‘x’ of ‘struct s’ 
[-Wmissing-field-initializers]

 };
 ^
u.c:3:20: note: ‘x’ declared here
 unsigned long *x;
^
0 (nil)


Re: Bug with compound literal initializer?

2015-03-25 Thread Martin Sebor

Regarding undefined behavior: this object has static storage, so I think
6.7.9-10 from C11 should apply:


Strictly speaking, once the behavior of a program is undefined,
even well-defined constructs can have unexpected effects. But
I do agree that dropping initialization for members with a valid
(i.e., non-zero sized aggregate) initializer is likely a bug.
(GCC does allow empty initializer-lists as a (useful) extension
independent of zero-size arrays and I don't think that's the
source of the problem here.)


Isn't it strange that C90 warnings are emitted in the presence of
-std=c11? I don't get these C90 warnings with 4.9.1 if I specify
-std=c99 or -std=c11.


This was my mistake because specifying -ansi after -std=c11
overrides the latter with -std=c90. (It would be nice if
the driver warned about one option overriding another.)

Martin


Re: Obscure crashes due to gcc 4.9 -O2 => -fisolate-erroneous-paths-dereference

2015-03-03 Thread Martin Sebor

On 02/20/2015 10:01 AM, Jeff Law wrote:

On 02/20/15 05:10, Jakub Jelinek wrote:

On Fri, Feb 20, 2015 at 12:06:28PM +0100, Florian Weimer wrote:

On 02/19/2015 09:56 PM, Sandra Loosemore wrote:

H,  Passing the additional option in user code would be one thing,
but what about library code?  E.g., using memcpy (either explicitly or
implicitly for a structure copy)?


The memcpy problem isn't restricted to embedded architectures.

   size_t size;
   const unsigned char *source;
   std::vector<char> vec;
   …
   vec.resize(size);
   memcpy(vec.data(), source, size);

std::vector<T>::data() can return a null pointer if the vector is empty,
which means that this code is invalid for empty inputs.

I think the C standard is wrong here.  We should extend it, as a QoI
matter, and support null pointers for variable-length inputs and outputs
if the size is 0.  But I suspect this is still a minority view.


I disagree.  If you want a function that will have that different
property,
don't call it memcpy.

Right.  If someone wants to take it up with the Austin group, that's
fine. But until/unless the Austin group blesses, I don't think we should
extend as a QoI matter.


As a data point(*) it might be interesting to note that GCC itself
relies on memcpy providing stronger guarantees than the C standard
requires by emitting calls to the function for large structure
self-assignments (which are strictly conforming, as discussed in
bug 65029).
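
For illustration, a minimal sketch of such a self-assignment (the
struct and its size here are made up, not taken from bug 65029):

  struct A {
    char data[1024];   /* large enough for a library copy to pay off */
  };

  struct A a;

  void self_assign (void)
  {
    /* A strictly conforming self-assignment; GCC may lower this
       large copy to a memcpy call whose source and destination
       are identical, i.e., an exactly overlapping copy.  */
    a = a;
  }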

Martin

[*] IMO, one in favor of tightening up the memcpy specification
to require implementations to provide the expected semantics.


Re: gcc -S vs clang -S

2015-05-13 Thread Martin Sebor

On 05/12/2015 07:40 PM, Andrew Pinski wrote:

On Tue, May 12, 2015 at 6:36 PM, Fei Ding fding...@gmail.com wrote:

I think Thiago and Eric just want to know which code-gen is better and why...



You need to understand that for a complex processor (CISC ISAs) like
x86, there is sometimes no one right answer.  You need to look at each
micro-arch and understand the pipeline.  Sometimes different code
streams will perform the same, but it also depends on the code size
too.


A good place to start is the Intel 64 and IA-32 Architectures
Optimization Reference Manual. It lists the throughput and
latencies of x86 instructions and gives guidance for which
ones might be more efficient on which processors. For example,
in the section titled "Using LEA" it discusses why the three-
operand form of the instruction is slower on the Sandy Bridge
microarchitecture than on others:

http://www.intel.com/content/dam/doc/manual/64-ia-32-architectures-optimization-manual.pdf

Martin



Thanks,
Andrew Pinski



2015-05-12 23:29 GMT+08:00 Eric Botcazou ebotca...@libertysurf.fr:

Note that at -O3 there is a difference still:
clang (3.6.0):
 addl    %esi, %edi
 movl    %edi, %eax
 retq

gcc (4.9.2)
 leal    (%rdi,%rsi), %eax
 ret

Can't tell which is best, if any.


But what's your point exactly here?  You cannot expect different compilers to
generate exactly the same code on a given testcase for non-toy architectures.

Note that this kind of discussion is more appropriate for gcc-h...@gcc.gnu.org

--
Eric Botcazou




Re: WPP capabilities in gcc

2015-05-18 Thread Martin Sebor

On 04/26/2015 11:47 AM, Shoham Peller wrote:

You are completely right Jonathan. My Apologies.
WPP is a tool I use in my work field on an every-day basis, so I
thought it was known.

Here is the Wikipedia page on WPP:
http://en.wikipedia.org/wiki/Windows_software_trace_preprocessor

In short, WPP allows you to put traces and logs in your C/C++ products
and send them to customers, without including sensitive tracing/logging
strings in your binary.
What WPP does is run during pre-compilation and replace the string
in each call to a trace macro with an obfuscated string. So:
DoTrace("program started %d %s beginning", num, str);
is replaced with:
DoTrace("897192873 __LINE__ __FILE__ %d %s", num, str);
and the original string is put in the debug .pdb file, with the debug symbols.
Later, the user can use software like EtwDataViewer (see the first
Google result for a screenshot) to reverse the WPP traces and recover
the original traces of the program.

I hope my explanation is clear.

So again:
1. Can you think of a way to achieve this with gcc?
 Until now we used regular expressions to find the DoTrace calls and
replace the strings, but needless to say it's ugly.


Here's a simple (perhaps naive) way to do it without changing
the compiler:

1) Define a DoTrace macro that annotates each format string
   with a special tag to make it reliably distinguishable
   from all other strings in a program. For example:

   #define DoTrace(fmt, ...) \
   fprintf (stderr, "@DoTrace@ " fmt, __VA_ARGS__)

   (This assumes that all tracing format strings are literals;
   a fuller sketch follows this list.)

2) Add a stage to your build system that searches the .rodata
   section of each object (program or DSO) for occurrences of
   strings that start with the unique @DoTrace@ tag, replaces
   each instance of such a string with its unique id while
   retaining any formatting directives, and stores the mapping
   from the modified format string to the replaced string in
   some other file.

   I would be inclined to start by using the Binutils objcopy
   command to extract the .rodata section, modifying it as
   necessary using a script, and then putting the result back
   into the object file. If scripting turned out to be too
   clumsy, error-prone, or slow I would look at using the
   Binutils BFD library instead.

3) Write a tool that, given the mapping file created in stage
   (2), replaces the encoded format strings with the originals.
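
For illustration, a minimal, self-contained sketch of the step (1)
macro (the message and values are made up, and the macro assumes at
least one variadic argument):

  #include <stdio.h>

  /* Step (1) macro: string-literal concatenation prepends the tag,
     so fmt must be a literal.  */
  #define DoTrace(fmt, ...) \
    fprintf (stderr, "@DoTrace@ " fmt, __VA_ARGS__)

  int main (void)
  {
    int num = 42;
    const char *str = "ok";

    /* The literal "@DoTrace@ program started %d %s\n" lands in the
       object file's .rodata section, where the build stage from
       step (2) can find and replace it.  */
    DoTrace ("program started %d %s\n", num, str);
    return 0;
  }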


 We also thought about implementing a gcc plugin, but I think a lot
of people can benefit from it being embedded in gcc, instead of having
to import an .so file to enjoy this feature.
2. Do you think it's a good feature for gcc, and will you (the
maintainers) be willing to merge it after we implement it?


As others have already implied I too suspect this is overly
specialized to be of general interest or appropriate for
inclusion in a compiler.

Martin


Re: optimization question

2015-05-18 Thread Martin Sebor

On 05/18/2015 02:01 PM, mark maule wrote:

I have a loop which hangs when compiled with -O2, but runs fine when
compiled with -O1.  Not sure what information is required to get an
answer, so starting with the full src code.  I have not attempted to
reduce to a simpler test case yet.


Typically a standalone test case is required.

Note that a difference in behavior between -O1 and -O2 doesn't
necessarily imply there's a compiler bug. One or more of the
-O2 optimizations could be exposing a bug in the program. This
is true even if the program runs successfully when compiled with
a prior version of the compiler, since new optimizations are added
and existing ones improve between releases. Nevertheless, compiling
the program with different versions of the compiler (or different
compilers altogether) to see if the problem can be consistently
reproduced can provide a useful data point.



Attachments:

bs_destage.c - full source code
bs_destage.dis.O2 - gdb disassembly of bs_destageLoop()
bs_destage.dis+m.O2 - src annotated version of the above


I suspect this is too big for most people to analyze (it is for
me). You will need to reduce it to something more manageable
that can be compiled independently of the rest of your system
(all the ... include files).



The function in question is bs_destageSearch().  When I compile
bs_destage.c with -O2, it seems that the dgHandle condition at line 741
is being ignored, leading to an infinite loop.  I can see in the
disassembly that dgHandle is still in the code as a 16-bit value stored
at 0x32(%rsp), and a running 32-bit copy stored at 0x1c(%rsp).  I can
also see that the 16 bit version at 0x32(%rsp) is being incremented at
the end of the loop, but I don't see anywhere in the code where either
version of dgHandle is being used when determining if the while() at 741
should be continued.

I'm not very familiar with the optimizations that are done in O2 vs O1,
or even what happens in these optimizations.

So, I'm wondering if this is a bug, or a subtle valid optimization that
I don't understand.  Any help would be appreciated.


It's hard to tell where the problem is or offer suggestions
without having a more complete understanding of the objects
and types used in the program. For example, the underlying
type of dgHandle_t or the value of the BS_CFG_DRIVE_GROUPS
constant the variable is being compared to(*). The way to
expose these details is by creating a translation unit for
the source file (i.e., preprocessing it with the -E option).
From these details it should be possible to determine whether
the code in the file is correct (and the likely cause of the
problem is a compiler bug) or whether the problem is due to
a bug in the code. The translation unit will be much bigger
than the source file so you will need to do the initial
analysis yourself before asking for more help (on gcc-help).



Note:  changing the declaration of dgHandle to be volatile appears to
modify the code sufficiently that it looks like the dgHandle check is
honored (have not tested).


[*] For instance, if dgHandle_t was an alias for unsigned
short and the value of BS_CFG_DRIVE_GROUPS was USHRT_MAX
+ 1, then GCC could remove the corresponding test in the
while loop (since dgHandle could never exceed the maximum
value for its type) unless dgHandle was declared volatile.
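
A minimal sketch of that hypothetical scenario (the typedef, the
constant, and the function here are assumptions for illustration,
not the poster's actual code):

  #include <limits.h>

  typedef unsigned short dgHandle_t;           /* hypothetical */
  #define BS_CFG_DRIVE_GROUPS (USHRT_MAX + 1)  /* hypothetical */

  void destage_search (void)
  {
    dgHandle_t dgHandle = 0;

    /* dgHandle promotes to int and can never reach USHRT_MAX + 1,
       so the controlling expression is always true; GCC may remove
       the test, leaving an infinite loop (++dgHandle wraps back to
       0 after USHRT_MAX).  */
    while (dgHandle < BS_CFG_DRIVE_GROUPS)
      ++dgHandle;
  }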



Thanks in advance for any help/advice.


The gcc list is meant for discussions related to GCC
development. The gcc-help list is the right forum for
requests for help.

Martin



Re: is it time to mass change from .c to .cc in gcc/ ?

2015-04-18 Thread Martin Sebor

Started with the latter. By the way, what is the policy concerning
getting write access to the wiki?


You are expected to ask one of the existing editors who's
willing to vouch for you to add you to the EditorGroup:

http://gcc.gnu.org/wiki/EditorGroup

Martin



Re: optimization question

2015-05-19 Thread Martin Sebor

I'm not very familiar with the optimizations that are done in O2 vs O1,
or even what happens in these optimizations.

So, I'm wondering if this is a bug, or a subtle valid optimization that
I don't understand.  Any help would be appreciated.


Another approach to debugging a suspected optimization bug is
to look at the optimization dumps produced in response to one
or more of the -fdump-tree-xxx and -fdump-rtl-xxx options.
There are many of them and they tend to be verbose but not
as much as the raw assembly and are usually easier to follow.
The dumps make it possible to identify the buggy optimization
pass and avoid the bug by disabling just the problematic
optimization while leaving all the others enabled. With
attribute optimize and/or #pragma GCC optimize, you can then
specify which optimizations to disable for subsets of
functions in a file.
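
For example, a sketch of such per-function control (the pass named
here, no-tree-pre, is an arbitrary illustration):

  /* Disable one pass for a group of functions, then restore.  */
  #pragma GCC push_options
  #pragma GCC optimize ("no-tree-pre")

  void suspect_function (void)
  {
    /* compiled as if with -fno-tree-pre */
  }

  #pragma GCC pop_options

  /* Or per function, with the optimize attribute.  */
  __attribute__ ((optimize ("no-tree-pre")))
  void another_suspect (void) { }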

See the following sections in the manual for more:

http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/Debugging-Options.html#index-fdump-tree-673

http://gcc.gnu.org/onlinedocs/gcc-4.8.2/gcc/Function-Specific-Option-Pragmas.html

Martin



Re: ctype_members.cc Comparison Always True

2015-08-18 Thread Martin Sebor

On 08/03/2015 12:35 PM, Joel Sherrill wrote:

Hi

Just noticed this building the head for arm-rtems4.11. Should
the first comparison be eliminated and, maybe, a comment added?

ctype_members.cc:216:14: warning: comparison of unsigned expression >= 0
is always true [-Wtype-limits]
  if (__wc >= 0 && __wc < 128 && _M_narrow_ok)
   ^
ctype_members.cc: In member function 'virtual const wchar_t*
std::ctype<wchar_t>::do_narrow(const wchar_t*, const wchar_t*, char,
char*) const':
ctype_members.cc:230:14: warning: comparison of unsigned expression >= 0
is always true [-Wtype-limits]
 if (*__lo >= 0 && *__lo < 128)


Unconditionally removing the test alone wouldn't be right for targets
where wchar_t is a signed type. But casting the value to an unsigned
type with as much precision as wchar_t should make it possible to
remove the test safely.
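
A sketch of the suggested rewrite (assuming unsigned long has at
least as much precision as wchar_t on the targets of interest; the
function name is illustrative):

  #include <wchar.h>

  /* Equivalent to (wc >= 0 && wc < 128) whether wchar_t is signed
     or unsigned: negative values convert to large unsigned values
     that fail the < 128 test, and no always-true comparison
     remains to trigger -Wtype-limits.  */
  int is_basic_ascii (wchar_t wc)
  {
    return (unsigned long) wc < 128;
  }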

Martin


Re: UTF-8 quotation marks in diagnostics

2015-10-22 Thread Martin Sebor

On 10/22/2015 10:53 AM, Joseph Myers wrote:

On Wed, 21 Oct 2015, Martin Sebor wrote:


That would go against the usual (i.e., POSIX) expected effect
of the environment variable.  Specifically for GCC (or the c99
utility), POSIX requires LC_CTYPE to determine the locale used
to parse the input, and LC_MESSAGE to determine the locale of
diagnostic messages.  If the c99 utility on your system doesn't
honor that I would recommend opening a bug.


LC_MESSAGES determines "the language and cultural conventions in which
messages should be written" (not necessarily the interpretation of
multibyte characters in that output).


Yes, but setting LC_CTYPE shouldn't affect the format, language,
or encoding of diagnostic messages, which is what the text quoted
from the GCC page recommends as a mechanism to change the quote
character in diagnostic messages.

If it did, it would mean that programmers would have to adjust
to the language of the source code of the program they're working
with.  For example, if part of a program was written assuming,
say a German locale, then Japanese programmers would need to learn
German to understand the compiler output.  Otherwise (if they set
LC_CTYPE to their Japanese locale) the German quotes in the source
code could be interpreted as something else.

Martin



Re: UTF-8 quotation marks in diagnostics

2015-10-22 Thread Martin Sebor

Again, LC_CTYPE does *not* affect source file interpretation.


I understand what you're saying. What I am saying is that if this
is how c99 behaves it's in conflict with POSIX because LC_CTYPE
is exactly how source file interpretation is specified to be
controlled:

  LC_CTYPE
  Determine the locale for the interpretation of sequences of
  bytes of text data as characters (for example, single-byte
  as opposed to multi-byte characters in arguments and input
  files).


I think we should clearly update the documentation to reflect reality
regarding source file encoding, and leave it strictly for wrappers such as
"c99" to specify -finput-charset= options rather than leaving open the
possibility that GCC's own default might change in future.


That sounds reasonable.

Martin


Re: UTF-8 quotation marks in diagnostics

2015-10-21 Thread Martin Sebor

On 10/21/2015 03:23 PM, D. Hugh Redelmeier wrote:

Several of us don't want UTF-8 quotation marks in diagnostics in our
environment (Jove subshells).  We'd like a way to turn them off.  We don't
think that they are a bad idea but they are bad in our environment.



English-language diagnostic messages will now use Unicode
quotation marks in UTF-8 locales. (Non-English messages
already used the quotes appropriate for the language in
previous releases.) If your terminal does not support UTF-8
but you are using a UTF-8 locale (such locales are the default
on many GNU/Linux systems) then you should set LC_CTYPE=C in
the environment to disable that locale. Programs that parse
diagnostics and expect plain ASCII English-language messages
should set LC_ALL=C. See Markus Kuhn's explanation of Unicode
quotation marks for more information.

This suggests that LC_CTYPE=C would do what we want: go back to ` and
' instead of \342\200\230 and \342\200\231.


That would go against the usual (i.e., POSIX) expected effect
of the environment variable.  Specifically for GCC (or the c99
utility), POSIX requires LC_CTYPE to determine the locale used
to parse the input, and LC_MESSAGES to determine the locale of
diagnostic messages.  If the c99 utility on your system doesn't
honor that I would recommend opening a bug.

Martin



Re: Understand GCC test process

2015-10-07 Thread Martin Sebor

On 10/07/2015 07:57 AM, Sabrina Souto wrote:

I was seeing these files but I could not put the puzzle pieces
together in my mind, and after you explained, all pieces make sense
now. Thanks for the explanation, Jonathan.
The testing process is clear now, but I still don't understand what
can explain the amount of common functions (over 60%) across all
traces of the tests (that I analyzed) from the gcc.dg test suite. Could
you help me with this? I have some hypotheses:
(1) All tests run the compiler and there is a large "core" code that
is executed regardless of the compiler option?
(2) Since all tests run the same driver - dg.exp, I could say that
this great amount of common functions is due to what is specified in
the driver, and the remaining few different functions are related with
the particularity of each test?
(3) None of those. Do you have another explanation?


I'm not sure what you mean by "function" (compiler invocations?)
or "trace" (verbose DejaGnu output?) but perhaps what you are
referring to by "common functions across all traces" is the test
driver invoking the compiler and running tests to figure out what
features are supported on the target.

For example, running just the simple array-1.c test like so:

  $ make RUNTESTFLAGS='dg.exp=array-1.c -v -v' check-gcc

will show something like:

  ...
  check_compile tool: gcc for lto
  doing compile
  Invoking the compiler as ...
  ...
  check_compile tool: gcc for linker_plugin
  doing compile
  Invoking the compiler as ...
  ...
  Testing gcc.dg/array-1.c
  doing compile
  Invoking the compiler as...

More involved tests (those with more complex DejaGnu directives)
might cause the driver to run other tests. The results of these
should be cached so subsequent tests that depend on the results
don't cause the same tests to be run again.

If by the "function" you mean a DejaGnu function, you need to look
at the .exp files to see what's invoked when: your local runtest.exp
which is invoked first, and then GCC's dg.exp (and anything in
between). You can see the pathnames of all the .exp files in the
DejaGnu verbose output (-v -v).

The best way to understand how it all works is to read the .exp
files and experiment with some increasingly more involved use cases,
study their verbose output and the .log files in the corresponding
testsuite subdirectory.

Martin



Re: Ubsan build of GCC 6.0 fails with: cp/search.c:1011:41: error: 'otype' may be used uninitialized in this function

2015-09-09 Thread Martin Sebor

On 09/09/2015 12:36 PM, Toon Moene wrote:

See:

https://gcc.gnu.org/ml/gcc-testresults/2015-09/msg00699.html

Full error message:

/home/toon/compilers/trunk/gcc/cp/search.c: In function 'int
accessible_p(tree, tree, bool)':
/home/toon/compilers/trunk/gcc/cp/search.c:1011:41: error: 'otype' may
be used uninitialized in this function [-Werror=maybe-uninitialized]
dfs_accessible_data d = { decl, otype };
  ^
Any ideas ?


It looks as though GCC assumes that TYPE can be null even though
it can't (if it was, TYPE_P (type) would then dereference a null
pointer). As a workaround until this is fixed, initializing OTYPE
with type instead of in the else block should get rid of the error.

Here's a small test case that reproduces the bogus warning:

cat t.c && /build/gcc-trunk/gcc/xg++ -B /build/gcc-trunk/gcc 
-Wmaybe-uninitialized -O2 -c -fsanitize=undefined t.c

struct S { struct S *next; int i; };

int foo (struct S *s) {
    int i;

    if (s->i) {
        struct S *p;
        for (p = s; p; p = p->next)
            i = p->i;
    }
    else
        i = 0;

    return i;
}
t.c: In function ‘int foo(S*)’:
t.c:14:12: warning: ‘i’ may be used uninitialized in this function 
[-Wmaybe-uninitialized]

 return i;
^
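
A sketch of the workaround applied to the same test case (foo2 is a
hypothetical name): initializing i at its declaration instead of in
the else block silences the warning without changing the result:

int foo2 (struct S *s) {
    int i = 0;   /* initialized here instead of in an else block */

    if (s->i) {
        struct S *p;
        for (p = s; p; p = p->next)
            i = p->i;
    }

    return i;
}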

Martin


Re: Optimization bug?

2015-09-19 Thread Martin Sebor

On 09/19/2015 03:32 PM, Sören Brinkmann wrote:

Hi Andrew,

On Sat, 2015-09-19 at 11:34AM -0700, pins...@gmail.com wrote:




On Sep 19, 2015, at 11:00 AM, Sören Brinkmann  
wrote:

Hi,

I recently came across a bug in my program that I could narrow down
to the snippet below (also attached with a Makefile).

  extern unsigned int _vector_table;


You need the attribute weak here if the location of _vector_table can be a null 
pointer.


Indeed, that fixes the problem. Would you have any additional background
why the 'weak' is required in this case? To me it sounded like the
'-fno-delete-null-pointer-checks' should be sufficient.


-fno-delete-null-pointer-checks prevents GCC from removing tests
for null pointers after such pointers have been dereferenced. For
example, in the following function, GCC normally eliminates the
second if statement because the first statement lets it assume
that p is not null (otherwise dereferencing it would be undefined).

  int foo (int *p) {
if (*p == 0) return 0;
if (p == 0) return 1;
return 2;
  }

The option doesn't affect other assumptions such as that every
object and function in a program lives at a valid, non-null
address.
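
For reference, a minimal sketch of the fix suggested earlier in the
thread (the function name is illustrative):

  /* Without the weak attribute, GCC assumes &_vector_table is
     never null and may fold the test below to a constant even
     with -fno-delete-null-pointer-checks.  */
  extern unsigned int _vector_table __attribute__ ((weak));

  int vector_table_present (void)
  {
    return &_vector_table != 0;
  }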

Martin


Re: Optimization bug?

2015-09-20 Thread Martin Sebor

On 09/20/2015 12:38 AM, Richard Biener wrote:

On September 20, 2015 1:40:12 AM GMT+02:00, Martin Sebor <mse...@gmail.com> 
wrote:

On 09/19/2015 03:32 PM, Sören Brinkmann wrote:

Hi Andrew,

On Sat, 2015-09-19 at 11:34AM -0700, pins...@gmail.com wrote:




On Sep 19, 2015, at 11:00 AM, Sören Brinkmann
<soren.brinkm...@xilinx.com> wrote:


Hi,

I recently came across a bug in my program that I could narrow down
to the snippet below (also attached with a Makefile).

   extern unsigned int _vector_table;


You need the attribute weak here if the location of _vector_table
can be a null pointer.


Indeed, that fixes the problem. Would you have any additional
background why the 'weak' is required in this case? To me it sounded
like the '-fno-delete-null-pointer-checks' should be sufficient.


-fno-delete-null-pointer-checks prevents GCC from removing tests
for null pointers after such pointers have been dereferenced. For
example, in the following function, GCC normally eliminates the
second if statement because the first statement lets it assume
that p is not null (otherwise dereferencing it would be undefined).

   int foo (int *p) {
 if (*p == 0) return 0;
 if (p == 0) return 1;
 return 2;
   }

The option doesn't affect other assumptions such as that every
object and function in a program lives at a valid, non-null
address.


Actually we abuse this flag for exactly this purpose so I think the testcase 
shows a bug.  Unless the particular target cannot have global vars at address 
zero.

Would you mind opening a bugreport?


I'm not sure I follow. I've tested the original example with 6.0,
5.1, 4.9, 4.8.3, and 4.8.1. All of these eliminate the test with
-fno-delete-null-pointer-checks. Are you saying that's a bug? If
so, what version retains the test?

Martin



Re: building gcc with macro support for gdb?

2015-12-05 Thread Martin Sebor

On 12/04/2015 10:32 AM, Tom Tromey wrote:

"Martin" == Martin Sebor <mse...@gmail.com> writes:


Martin> To get around these, I end up using info macro to print the
Martin> macro definition and using whatever it expands to instead.  I
Martin> wonder if someone has found a more convenient workaround.

For some of these, like the __builtin_offsetof and __null problems, you
can add a new "macro define" to gcc's gdbinit.in.

In fact I already see __null there, so maybe you don't have the correct
add-auto-load-safe-path setting in your ~/.gdbinit.


Yes, thanks.  The __null problem is fairly easy to work around
as you suggested. The one that's more difficult is 18881 where
the debugger cannot resolve calls to functions overloaded on
the constness of the argument.  Do you happen to have a trick
for dealing with that one?

Martin


Re: building gcc with macro support for gdb?

2015-12-03 Thread Martin Sebor

On 12/02/2015 06:48 PM, Peter Bergner wrote:

On Wed, 2015-12-02 at 20:05 -0500, Ryan Burn wrote:

Is there any way to easily build a stage1 gcc with macro support for debugging?

I tried setting CFLAGS, and CXXFLAGS to specify "-O0 -g3" via the
command line before running configure, but that only includes those
flags for some of the compilation steps.

I was only successful after I manually edited the makefile to replace
"-g" with "-g3".


Try CFLAGS_FOR_TARGET='-O0 -g3 -fno-inline' and CXXFLAGS_FOR_TARGET='-O0 -g3 
-fno-inline'


I've been using STAGE1_CFLAGS as Andreas suggested.  The tree
checks in GCC make heavy use of macros that GDB unfortunately
often has trouble with.  See GDB bugs 19111, 1888, and 18881
for some of the problems.  To get around these, I end up using
info macro to print the macro definition and using whatever it
expands to instead.  I wonder if someone has found a more
convenient workaround.

Martin



-Wplacement-new on by default

2015-12-10 Thread Martin Sebor

Jason,

I just want to make sure we still want the -Wplacement-new option
I added some time ago enabled by default.

I think I had initially intended it to be on because the original
implementation was more permissive and didn't diagnose cases where
(for example) the buffer spanned multiple members of the same struct,
as in the example below.  After our discussion of the patch where
you pointed out that C++ is moving in the direction of increased
strictness of aliasing requirements, I changed it to diagnose
those cases.

Can you please confirm that enabling -Wplacement-new by default
is still appropriate?

Thanks
Martin

  struct S {
      int  a;
      char b;
  } s;

  new (&s) char [sizeof s];     // accepted
  new (&s.a) char [sizeof s];   // diagnosed


Re: Question on repeating -march flags

2016-01-01 Thread Martin Sebor

On 12/23/2015 05:02 AM, Yuri D'Elia wrote:

I couldn't find anything on the matter, but what happens if -march is
repeated more than once?  I would assume the usual behavior that the
last flag "wins".


This is correct for all -march= arguments except for native.

Some -march= option processing takes place in the compiler driver
and some in the compiler itself.

For -march=native, unless the handling is overridden in the GCC
spec file, the i386 compiler driver determines the host CPU model
and substitutes its name and a set of other options for "native"
in the -march option passed to the compiler.  This can be seen
in the driver output by invoking it with -march=native -v.  For
example, on my machine, -march=native ends up replaced with
-march=sandybridge -mmmx -mno-3dnow -msse and a few dozen other
options.

All other -march= options are passed to the compiler unchanged.
The compiler then takes the last -march= option and ignores any
other -march= options that may have preceded it.


On a haswell host, the following:

gcc -march=native -march=opteron

or

gcc -march=opteron -march=native
both emit code which is illegal for the opteron ISA, as if -march=native
had special treatment. Specifying -march=opteron alone works as intended.


Because of the special handling, when -march=native and another,
more specific -march= option is specified, the latter option is
always appended to the cc1 command line by the driver.  This has
the effect of -march=opteron overriding the -march= option
substituted for -march=native by the driver regardless of which
comes first.  (See also bug 64534.)


Since I generally override the default flags in makefiles by appending
exceptions where needed, I'd sort of expected the regular behavior.

Is this intended? If not, should I try to generate a small test case?


A test case with the -v output of the compiler and its version
is always helpful.  Based on your description it sounds to me
as though one of the other (non-march) options substituted for
-march=native by the driver is interfering with the subsequent
-march=opteron option.

In any event, I would be inclined to consider this a bug since
the details above aren't documented anywhere and are far from
intuitive.  At the very least, the special treatment of
-march=native should be documented.  Ideally, though, I would
expect the driver to avoid doing the substitution when the
-march=native is subsequently overridden on the command line.

Martin


Re: gcc-4.9.1 generating different code between two successive builds

2015-12-21 Thread Martin Sebor

On 12/20/2015 11:39 PM, Cole wrote:

Hi,

I am busy trying to generate a package for gcc that is consistent
between two successive builds, and I am now down to the final few
files.

I am stuck with the file: cilk-abi-cilk-for.o, which is obviously
built with -O2, but between two successive builds, the assembly code
generated is different. Please refer to [1] for output and details.

Would anyone have any suggestions that I might be able to try to get
the file to generate the same assembly code?


It's been some time since I had to deal with this problem and
my recollection is somewhat fuzzy and probably inaccurate. IIRC,
part of it was caused by qsort being unstable (I think that was
on Solaris). We replaced it with a stable version. Another part
of it might have been caused by changes in the address space
layout caused by ASLR (causing differences in hash values used
by the register allocator or something like that).  We disabled
ASLR to get around that (and other problems). With that, we had
fully repeatable builds. This was in the GCC 4.5 time frame.
With newer versions, there's also LTO that can affect things.
The -frandom-seed= option exists to make it possible to emit
identical object files that might otherwise be different.

Martin



Re: gcc-4.9.1 generating different code between two successive builds

2015-12-22 Thread Martin Sebor

On 12/21/2015 11:30 PM, Cole wrote:

HI Martin,

Thanks very much for the suggestions as well as all the information
about the qsort issues.

Did you ever publish your changes as a patch, or create a write up
about what was necessary to create a reproducible build? I would be
interested to see what others have done, and potentially anything else
I may have missed.


No, we didn't.  Unfortunately, most of the work we did back then
was in a proprietary GCC fork.

Martin



Thanks
/Cole

On 21 December 2015 at 21:08, Martin Sebor <mse...@gmail.com> wrote:

On 12/20/2015 11:39 PM, Cole wrote:


Hi,

I am busy trying to generate a package for gcc that is consistent
between two successive builds, and I am now down to the final few
files.

I am stuck with the file: cilk-abi-cilk-for.o, which is obviously
built with -O2, but between two successive builds, the assembly code
generated is different. Please refer to [1] for output and details.

Would anyone have any suggestions that I might be able to try to get
the file to generate the same assembly code?



It's been some time since I had to deal with this problem and
my recollection is somewhat fuzzy and probably inaccurate. IIRC,
part of it was caused by qsort being unstable (I think that was
on Solaris). We replaced it with a stable version. Another part
of it might have been caused by changes in the address space
layout caused by ASLR (causing differences in hash values used
by the register allocator or something like that).  We disabled
ASLR to get around that (and other problems). With that, we had
fully repeatable builds. This was in the GCC 4.5 time frame.
With newer versions, there's also LTO that can affect things.
The -frandom-seed= option exists to make it possible to emit
identical object files that might otherwise be different.

Martin





Re: C++ order of evaluation of operands, arguments

2015-11-25 Thread Martin Sebor

On 11/24/2015 02:55 AM, Andrew Haley wrote:

On 23/11/15 23:01, Jason Merrill wrote:

There's a proposal working through the C++ committee to define the order
of evaluation of subexpressions that previously had unspecified ordering:

http://www.open-std.org/Jtc1/sc22/wg21/docs/papers/2015/p0145r0.pdf

I agree with much of this, but was concerned about the proposal to
define order of evaluation of function arguments as left-to-right, since
GCC does right-to-left on PUSH_ARGS_REVERSED targets, including x86_64.

Any thoughts?


Not about PUSH_ARGS_REVERSED targets, but my two-penn'orth:

The proposal seems to be a bit of a minefield.  This one:

a(b, c, d)

is a bit counter-intuitive.  I wouldn't expect a to be evaluated before
the arg list.  I wonder how many C++ programmers would.


The motivating example in the paper suggests that many C++
programmers expect a left to right order of evaluation here
due to the commonality of constructs like chains of calls.

Incidentally, both GCC and Clang evaluate the first operand
of the function call expression first in C as well in C++
(the program below prints a b c d when compiled with GCC,
and a d c b with Clang).

Interestingly, though, not all compilers take that approach.
IBM XLC, for example, evaluates it last (and the program below
prints b c d a).

  extern int printf (const char*, ...);

  void f (int b, int c, int d) { }

  typedef void F (int, int, int);

  F* f1 (void) { printf ("a\n"); return f; }
  int f2 (void) { printf ("b\n"); return 0; }
  int f3 (void) { printf ("c\n"); return 0; }
  int f4 (void) { printf ("d\n"); return 0; }

  int main ()
  {
  #define a f1()
  #define b f2()
  #define c f3()
  #define d f4()

a (b, c, d);
}

Martin


Re: C++ order of evaluation of operands, arguments

2015-11-25 Thread Martin Sebor

On 11/25/2015 11:49 AM, Andrew Haley wrote:

On 11/25/2015 06:25 PM, Martin Sebor wrote:

The motivating example in the paper suggests that many C++
programmers expect a left to right order of evaluation here
due to the commonality of constructs like chains of calls.


Sure, I often see

   foo.bar(1).bar(2).bar(3), etc.

but does anyone actually do

   foo(1)bar(2)bar(3)  ?


That doesn't look syntactically valid to me but I suspect what
you mean might be closer to something like this:

foo(1)(bar(2))(baz(3))

which with the right declarations is just a shorthand for this:

   foo(1).operator()(bar(2)).operator()(bar(3));

Having foo(1) evaluate before bar(2) becomes important when
(if) one of the uniform call syntax proposals is adopted. With
that, it will possible to write the expression above like so:

  operator()(operator()(foo(1), bar(2)), bar(3));

or more generally for (almost) any member function F:

  F (F (foo (1), bar (2)), bar (3));

If C++ were to guarantee an order in which function arguments
were evaluated without also imposing an order on the operand
of the function call expression, uniform calls would be
error-prone to use.

Martin



Re: C++ order of evaluation of operands, arguments

2015-11-24 Thread Martin Sebor

On 11/23/2015 04:01 PM, Jason Merrill wrote:

There's a proposal working through the C++ committee to define the order
of evaluation of subexpressions that previously had unspecified ordering:

http://www.open-std.org/Jtc1/sc22/wg21/docs/papers/2015/p0145r0.pdf

I agree with much of this, but was concerned about the proposal to
define order of evaluation of function arguments as left-to-right, since
GCC does right-to-left on PUSH_ARGS_REVERSED targets, including x86_64.

Any thoughts?


I have no comments on the impact on the implementation but I do
have a couple of general concerns with the proposal.

As others have already suggested, I think a change like this needs
to be coordinated with WG14 and made either in both languages or
in neither.  If C++ were to make it alone, C++ programmers who only
occasionally write C code would be prone to introducing bugs into
their C code by relying on the C++ guarantees.  As WG14 is just
starting to talk about the goals for C2x, this seems like a good
time to bring it up for consideration.

Even in C++, though, I'd worry that specifying the evaluation order
for just a subset of expression would lead to portability bugs
creeping in as programmers inadvertently start to assume that
the same order is guaranteed for all expressions.

Martin


Re: _Bool and trap representations

2016-06-15 Thread Martin Sebor

There has been quite a bit of discussion among the committee on
this subject lately (the last part is the subject of DR #451,
though it's discussed in the context of uninitialized objects
with indeterminate values).


Are there notes from these discussions or something?


Notes from discussions during committee meetings are in the meeting
minutes that are posted along with other committee documents on the
public site.  Those that relate to aspects of defect reports are
usually captured in the Committee Discussion and Committee Response
to the DR.  Other than that, committee discussions that take place
on the committee mailing list (such as the recent ones on this topic)
are archived for reference of committee members (unlike C++, the C
archives are not intended to be open to the public).

Martin


Re: _Bool and trap representations

2016-06-08 Thread Martin Sebor

On 06/08/2016 12:36 AM, Alexander Cherepanov wrote:

Hi!

If a variable of type _Bool contains something different from 0 and 1,
its use amounts to UB in gcc and clang. There are a couple of examples
in [1] ([2] is also interesting).

[1] https://github.com/TrustInSoft/tis-interpreter/issues/39
[2] https://github.com/TrustInSoft/tis-interpreter/issues/100

But my question is about the following example:

--
#include <stdio.h>

int main()
{
   _Bool b;
   *(char *)&b = 123;
   printf("%d\n", *(char *)&b);
}
--

Results:

--
$ gcc -std=c11 -pedantic -Wall -Wextra test.c && ./a.out
123

$ gcc -std=c11 -pedantic -Wall -Wextra -O3 test.c && ./a.out
1
--

gcc version: gcc (GCC) 7.0.0 20160604 (experimental)

It seems that padding in _Bool is treated as permanently unspecified. Is
this behavior intentional? What's the theory behind it?

One possible explanations is C11, 6.2.6.2p1, which reads: "The values of
any padding bits are unspecified." But it's somewhat a stretch to
conclude from it that the values of padding bits cannot be specified
even with explicit assignment.

Another possible approach is to refer to Committee Response for Question
1 in DR 260 which reads: "Values may have any bit-pattern that validly
represents them and the implementation is free to move between alternate
representations (for example, it may normalize pointers, floating-point
representations etc.). [...] the actual bit-pattern may change without
direct action of the program."


There has been quite a bit of discussion among the committee on
this subject lately (the last part is the subject of DR #451,
though it's discussed in the context of uninitialized objects
with indeterminate values).  I would hesitate to call it
consensus but I think it would be fair to say that the opinion
of the vocal majority is that implementations aren't intended
to spontaneously change valid (i.e., determinate) representations
of objects in the absence of an access to the value of the object.
There are also two special cases that apply to the code example
above: accesses via an lvalue of a character type (which has no
padding bits and so no trap representation), and objects that
could not have been declared to have register storage because
their address is taken (DR #338).  Those should be expected
to have a stable representation/bit pattern from one read
to the next.

Martin



Re: _Bool and trap representations

2016-06-20 Thread Martin Sebor

On 06/17/2016 02:19 PM, Alexander Cherepanov wrote:

On 2016-06-15 17:15, Martin Sebor wrote:

There has been quite a bit of discussion among the committee on
this subject lately (the last part is the subject of DR #451,
though it's discussed in the context of uninitialized objects
with indeterminate values).


Are there notes from these discussions or something?


Notes from discussions during committee meetings are in the meeting
minutes that are posted along with other committee documents on the
public site.   Those that relate to aspects of defect reports are
usually captured in the Committee Discussion and Committee Response
to the DR.  Other than that, committee discussions that take place
on the committee mailing list (such as the recent ones on this topic)
are archived for reference of committee members (unlike C++, the C
archives are not intended to be open to the public).


So it seems the discussion you referred to is not public, that's
unfortunate.

And to clarify what you wrote about stability of valid representations,
is padding expected to be stable when it's not specifically set? I.e. is
the following optimization supposed to be conforming or not?


The standard says that padding bytes take on unspecified values
after an assignment to a structure, so the program isn't strictly
conforming because its output depends on such a value.  At the
same time, unspecified values are, in general, expected to be
stable.  But I think in this case it's only because of the
standard's limited vocabulary.  The distinction between
an unspecified and undefined value was meant to allow for the
latter to be a trap representation.  But lately an undefined
value has also come to mean potentially unstable (some people
call such values "wobbly").  If the standard adopted a term
for unspecified values that need not be stable (say
wobbly) I would expect the committee to be comfortable applying
it to padding bits and allowing the code in the example to produce
two different lines.  But this is one of the topics still under
active discussion.

Martin



Source code:

--
#include <stdio.h>

int main(int argc, char **argv)
{
   (void)argv;

   struct { char c; int i; } s = {0, 0};

   printf("%d\n", argc ? ((unsigned char *)&s)[1] : 5);
   printf("%d\n", argc ? ((unsigned char *)&s)[1] : 7);
}
--

Results:

--
$ gcc -std=c11 -pedantic -Wall -Wextra -O3 test.c && ./a.out
5
7
--

gcc version: gcc (GCC) 7.0.0 20160616 (experimental)

Of course, clang does essentially the same but the testcase is a bit
more involved (I can post it if somebody is interested). OTOH clang is
more predictable in this area because rules for dealing with undefined
values in llvm are more-or-less documented --
http://llvm.org/docs/LangRef.html#undefined-values .

I don't see gcc treating padding in long double as indeterminate in the
same way as in structs but clang seems to treat them equally.





Re: How to use _Generic with bit-fields

2016-02-22 Thread Martin Sebor

On 02/21/2016 06:44 PM, Wink Saville wrote:

I've tried your hack, but it doesn't work with "long bit fields". Also,
I've run into another problem. Instead of using unsigned char for
the bit field I changed it to a long unsigned int:33 and now I can't
print it without a warning.


That's unfortunate.  As I suggested before, it would be useful to
add your test cases to bug 65471 to help bring them to the committee's
attention.  The bug submitter especially seems to be interested in
making the feature more useful and while he may not be reading this
list he is likely to notice the bug update.

The purpose of the generic selection in C is to provide a limited
form of function overloading.  Calling functions with bitfield
arguments is obviously useful regardless of the bit-field width
so it should "just work" with _Generic.  It does with other
compilers (Clang and the EDG front end) and there's no good reason
for the C standard not to support it.  (The problem is simply that
the wording that specifies the feature was poorly chosen and GCC
seems to follow it a bit too strictly, to the detriment of usability.)

I'm sorry but I can't think of a way to make this work the way you're
trying to.  Again, I encourage you to add it to the existing bug or
open a new one.

Martin

PS From some of the followups downthread it looks to me as though
the problem might be thought to be the bit-field width in your test
case exceeding that of int.  The following example shows that the
same problem error is reported even with a smaller bit-field.

$ cat z.c && /home/msebor/build/gcc-trunk-svn/gcc/xgcc 
-B/home/msebor/build/gcc-trunk-svn/gcc -Wall -Wextra -Wpedantic -xc z.c

struct S { unsigned i: 31; } s;
int i = _Generic (s.i, unsigned: 1);
z.c:2:19: error: ‘_Generic’ selector of type ‘unsigned int:31’ is not 
compatible with any association

 int i = _Generic (s.i, unsigned: 1);
   ^



Re: How to use _Generic with bit-fields

2016-02-24 Thread Martin Sebor

$ cat z.c && /home/msebor/build/gcc-trunk-svn/gcc/xgcc
-B/home/msebor/build/gcc-trunk-svn/gcc -Wall -Wextra -Wpedantic -xc z.c
struct S { unsigned i: 31; } s;
int i = _Generic (s.i, unsigned: 1);
z.c:2:19: error: ‘_Generic’ selector of type ‘unsigned int:31’ is not
compatible with any association
  int i = _Generic (s.i, unsigned: 1);
^


That can be avoided simply by using unary + in the controlling expression
of _Generic (just as using unary + will avoid an error from sizeof, if you
want to be able to apply that to expressions that might be bit-fields) -
or any of the other techniques for achieving promotions of selected types.


Unfortunately, the + n trick is far too limited to be generally
usable.  Since GCC allows bit-fields of other integer types
besides those described by the standard (e.g., long long), the
plus expression would have to be converted to the widest possible
type (e.g., by (x + 0LL)), which would defeat the purpose of
_Generic.  The trick of course doesn't work at all for type-generic
macros intended to also accept non-scalar arguments.



You can't use bit-fields with sizeof, or with unary &, or in GNU C with
typeof.  I think extending this constraint to _Generic - meaning you need
to use a trick such as unary + when a bit-field might occur there - is
consistent with the peculiar position of bit-fields in C as not quite
normal objects or types (and consistent with how _Generic is intended for
certain limited kinds of overloading such as <tgmath.h>,


GCC's handling of bit-fields in __typeof__ is also a problem
and causes bugs in <tgmath.h>.  For example, the following is
rejected by GCC (with not just one but 42 errors) as a result:

  struct S { unsigned b: 31; } s;
  carg (s.b);

even though this is accepted:

  _Complex double cd = s.b;


not for
arbitrarily expressive logic on types, cf. the rejection of overloading on
qualifiers).  If someone uses e.g. unsigned int:8 as the controlling
expression of _Generic, it's hardly clear whether a selection for unsigned
char (a type with the same set of values), unsigned int (the declared type
ignoring width) or int (the type resulting from the integer promotions)
would be the most useful selection.


If it isn't clear it should be brought up in WG14 and clarified.
It's clear enough in C++ for bit-fields to be used as arguments
to overloaded functions or function templates.  I can't imagine
a good reason for not nailing it down in C.  Especially since
existing practice shows that it can be made to work in
unsurprising ways (all 5 GCC-compatible compilers that I've
tested accept bit-fields in __typeof__ and, those that support
it, also in _Generic).

Martin



Re: How to use _Generic with bit-fields

2016-02-19 Thread Martin Sebor

On 02/19/2016 11:25 AM, Wink Saville wrote:

I'm using gcc 5.3.0:

$ gcc --version
gcc (GCC) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


And I've tried to use _Generic to print the type of a bit field but
the compiler fails with:

$ make
gcc -Wall -std=c11 -o test.o -c test.c
test.c: In function ‘main’:
test.c:8:35: error: ‘_Generic’ selector of type ‘unsigned char:2’ is
not compatible with any association
  #define type_to_str(__x) _Generic((__x), \
^
test.c:17:18: note: in expansion of macro ‘type_to_str’
printf("%s\n", type_to_str(b.f1));
   ^
Makefile:7: recipe for target 'test.o' failed
make: *** [test.o] Error 1
bash: ./test: No such file or directory


The test program is:

$ cat test.c
#include <stdio.h>

struct bits {
   unsigned char f0;
   unsigned char f1:2;
};

#define type_to_str(__x) _Generic((__x), \
   unsigned char : "unsigned char")

int main(void) {
   struct bits b = { .f0 = 255, .f1 = 3 };

   printf("%d\n", b.f0);
   printf("%s\n", type_to_str(b.f0));
   printf("%d\n", b.f1);
   printf("%s\n", type_to_str(b.f1));

   return 0;
}


...

How do I create a type association for a bit field?


I suspect this falls under the set of issues that fall under bug
65471 - type interpretation in _Generic.  The C language rules
for _Generic aren't completely clear about what conversions are
to take place.  It would be helpful if you could add your test
case to the bug to make sure the bit-field case is considered
when the issue is discussed by the C committee.

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65471

The only idea for a workaround that comes to mind is to trick GCC
into forgetting that it's a bit-field.  This hack seems to work:

  #define type_to_str(__x) _Generic((0)?(__x):(__x), \

Martin



Re: Should a disabled warning be allowed to be promoted to an error (Bugzilla PR 70275)?

2016-03-28 Thread Martin Sebor

On 03/28/2016 01:56 PM, Florian Weimer wrote:

In Bugzilla PR # 70275, Manuel López-Ibáñez reports that even though
he provides the "-Werror=return-type" option, the compiler doesn't
flag the warning/error about a control reaching the end of a non-void
function, due to the presence of the "-w" option.  He points out that
clang++ still flags the promoted warning even though warnings are
inhibited.


I think -w is ordered with respect to the other warning options, and
-w inhibits previously requested warnings, and future -W flags may
enable other warnings.  With this in mind, I agree that the current
GCC behavior is consistent and probably not a bug.


The general rule of thumb documented in the manual is that more
specific options take precedence over more general ones, regardless
of where they appear on the command line:

  The combined effect of positive and negative forms [of warning
  options] is that more specific options have priority over less
  specific ones, independently of their position in the command-
  line.

The manual doesn't give an exhaustive list of the more general
options but it mentions -Wall and -Wextra as examples.  I would
tend to view -w as another such example and, like the reporter,
expect "-Werror=return-type -w" to disable all warnings but treat
-Wreturn-type as an error.

Martin


Re: Should a disabled warning be allowed to be promoted to an error (Bugzilla PR 70275)?

2016-03-31 Thread Martin Sebor

On 03/31/2016 10:30 AM, Segher Boessenkool wrote:

On Mon, Mar 28, 2016 at 04:32:50PM -0600, Martin Sebor wrote:

On 03/28/2016 01:56 PM, Florian Weimer wrote:

In Bugzilla PR # 70275, Manuel López-Ibáñez reports that even though
he provides the "-Werror=return-type" option, the compiler doesn't
flag the warning/error about a control reaching the end of a non-void
function, due to the presence of the "-w" option.  He points out that
clang++ still flags the promoted warning even though warnings are
inhibited.


I think -w is ordered with respect to the other warning options, and
-w inhibits previously requested warnings, and future -W flags may
enable other warnings.  With this in mind, I agree that the current
GCC behavior is consistent and probably not a bug.


The general rule of thumb documented in the manual is that more
specific options take precedence over more general ones, regardless
of where they appear on the command line:


Currently, -w is a nice easy quick way of shutting up all warnings
whenever they are getting in the way.  Let's keep it that way.


I agree that having a knob to suppress all warnings can be useful.

At the same time, having the ability to do what PR 70275 asks for
(i.e., suppress only warnings that have not been explicitly
enabled or elevated to errors) can be handy as well.  If it's
preferable to keep -w unchanged, providing a new option to do it
might be worth considering.

Martin


Re: GCC 6 symbol poisoning and c++ header usage is fragile

2016-04-22 Thread Martin Sebor

On 04/21/2016 07:20 AM, Jonathan Wakely wrote:

On 21 April 2016 at 13:33, Szabolcs Nagy wrote:

On 21/04/16 12:52, Jonathan Wakely wrote:

On 21 April 2016 at 12:11, Szabolcs Nagy wrote:

the root cause is c++: c++ headers include random libc headers with
the _GNU_SOURCE feature test macro defined, so all sorts of unexpected symbols are defined/declared.


Yes, I'd really like to be able to stop defining _GNU_SOURCE
unconditionally. It needs some libstdc++ and glibc changes for that to
happen, I'll be looking at it for gcc 7.



since it's unlikely the c++ standard gets fixed (to properly specify
the namespace rules)


Fixed how? What's wrong with the rules? (I'd like to understand what's
wrong here before I try to change anything, and I don't understand the
comment above).



posix has "namespace rules" specifying what symbols
are reserved for the implementation when certain
headers are included.
(it's not entirely trivial, i have a collected list
http://port70.net/~nsz/c/posix/reserved.txt
http://port70.net/~nsz/c/posix/README.txt
i use for testing musl headers, glibc also does
such namespace checks.)

e.g. the declared function names in a header are
reserved to be defined as macros.

c++ does not specify how its headers interact with
posix headers except for a few c standard headers
where it requires no macro definition for functions
(and imposes some other requirements on the libc
like being valid c++ syntax, using extern "C" where
appropriate etc).

so from a libc implementor's point of view, including
libc headers into c++ code is undefined behaviour
(neither posix nor c++ specifies what should happen).
without a specification libc headers just keep piling up
#ifdef __cplusplus hacks when people run into problems.

e.g. c++ code uses ::pthread_equal(a,b), but musl used
a macro for pthread_equal (the only sensible
implementation is (a)==(b), this has to be suppressed
for c++, which now uses an extern call to do the
same), i'm also pretty sure a large number of c++
code would break if unistd.h defined "read", "write",
"link" etc as macros, since these are often used as
method names in c++, but this would be a conforming
libc implementation.


Gotcha, I understand what you mean now, thanks.

Those rules belong in a POSIX binding for C++, not in the C++
standard, but unfortunately the group working on that has been
inactive for some time.


The binding effort is unfortunately dead.  I don't recall
the group ever discussing the problem of shadow macros but
I think the only viable approach is for C++ code (both the
implementation and user programs) to deal with them.  It
doesn't seem practical to expect POSIX implementations to
change their headers to avoid defining them when __cplusplus
is defined.  In my experience, what works fairly well (but
takes quite some effort to implement) is for the C++ standard
library headers to avoid including POSIX headers altogether,
and to guard against the shadow macros interfering with its
own symbols (by parenthesizing their names used in its own
APIs, as ugly as it may look).  This doesn't completely solve
the problem but at least it avoids errors in the C++ standard
library headers and bug reports for them.
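
A minimal sketch of that parenthesizing trick (illustrative only):

  #include <ctype.h>

  int my_isdigit (int c)
  {
    /* Even if <ctype.h> defines isdigit as a function-like macro,
       the parenthesized name is not macro-expanded, so this calls
       the real function.  */
    return (isdigit) (c);
  }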

Martin



relying on static_assert to test constexpr changes

2016-04-28 Thread Martin Sebor

Many GCC tests for constexpr rely on static_assert to verify things
work correctly.  While testing some changes of my own, I was surprised
to find the static_asserts pass even though my changes did not (see
below).  It took me a while to realize that, and it took printing
the computed constant values via printf() to see they were wrong.
Even a runtime assert() didn't uncover the bug (see below).  After
thinking about it a bit, it makes sense that using the tool under
test to verify its own correctness isn't the most reliable way to
do it.  I've seen it before when testing other software, but it was
eye opening none-the-less to see it happen with a compiler.  Enough
that I think it's worthwhile to share it here.

Martin

$ cat builtin_constexpr.cpp && /home/msebor/build/gcc-70507/gcc/xgcc 
-B/home/msebor/build/gcc-70507/gcc -Wall -Wextra -Wpedantic 
builtin_constexpr.cpp && ./a.out

constexpr int f (int x, int y)
{
int z = 0;
__builtin_add_overflow (x, y, &z);
return z;
}

constexpr int z = f (12, 34);

int main ()
{
  static_assert (z == (123 + 456), "z == (123 + 456)");   // passes

  __builtin_printf ("z = %i\n", z);

  if (z != (123 + 456))   // passes too
__builtin_abort ();
}

z = 0   << yet the output is zero!


Re: relying on static_assert to test constexpr changes

2016-04-28 Thread Martin Sebor

On 04/28/2016 03:54 PM, Andrew Pinski wrote:

On Thu, Apr 28, 2016 at 2:49 PM, Martin Sebor <mse...@gmail.com> wrote:

Many GCC tests for constexpr rely on static_assert to verify things
work correctly.  While testing some changes of my own, I was surprised
to find the static_asserts pass even though my changes did not (see
below).  It took me a while to realize that, and it took printing
the computed constant values via printf() to see they were wrong.
Even a runtime assert() didn't uncover the bug (see below).  After
thinking about it a bit, it makes sense that using the tool under
test to verify its own correctness isn't the most reliable way to
do it.  I've seen it before when testing other software, but it was
eye opening none-the-less to see it happen with a compiler.  Enough
that I think it's worthwhile to share it here.

Martin

$ cat builtin_constexpr.cpp && /home/msebor/build/gcc-70507/gcc/xgcc
-B/home/msebor/build/gcc-70507/gcc -Wall -Wextra -Wpedantic
builtin_constexpr.cpp && ./a.out
constexpr int f (int x, int y)
{
 int z = 0;
 __builtin_add_overflow (x, y, &z);
 return z;
}

constexpr int z = f (12, 34);

int main ()
{
   static_assert (z == (123 + 456), "z == (123 + 456)");   // passes

   __builtin_printf ("z = %i\n", z);

   if (z != (123 + 456))   // passes too
 __builtin_abort ();
}

z = 0   << yet the output is zero!


How about this doing something like this:

int main ()
{
   int z1 = *(volatile int*)&z;

   if (z1 != (123 + 456))   // passes too
 __builtin_abort ();
}

Does the above pass now?


Yes, I think that should work.  I'm hoping for a solution that
doesn't require linking and running the tests.  The approach
you suggest and scanning the tree dump(s) for abort might be
the way to go.

Martin


Re: Bug maintenance

2016-04-28 Thread Martin Sebor

On 04/28/2016 01:35 AM, David Wohlferd wrote:

As part of the work I've done on inline asm, I've been looking thru the
bugs for it.  There appear to be a number that have been fixed or
overtaken by events over the years, but the bug is still open.

Is closing some of these old bugs of any value?

If so, how do I pursue this?


There are nearly 10,000 still unresolved bugs in Bugzilla, almost
half of which are New, and a third Unconfirmed, so I'm sure any
effort to help reduce the number is of value and appreciated.

I can share with you my own approach to dealing with them (others
might have better suggestions).  In cases where the commit that
fixed a bug is known, I mention it in the comment closing the bug.
I also try to indicate the version in which the bug was fixed (if
I can determine it using the limited number of versions I have
built).  Otherwise, when a test doesn't already exist (finding
out whether or not one does can be tedious), I add one before
closing the bug to help avoid a regression.

Martin


Ada testsuite failures due to missing gnatlib

2016-05-03 Thread Martin Sebor

In my builds lately I've been noticing many Ada tests failing
that didn't use to fail before.  I don't think I'm doing
anything different than before.  The failures all seem to be
due to the error below.  Has something changed about how to
run the Ada test suite or how to configure GCC to enable it?

This is on an x86_64 machine running Fedora 23 with GNAT
5.3.1 installed.  GCC was configured with
--enable-languages=ada,c,c++ and no other options.

$ make -C /build/gcc-trunk/gcc check-ada
make: Entering directory '/home/msebor/build/gcc-trunk/gcc'
gnatlib missing, exiting.
/src/gcc/trunk/gcc/ada/gcc-interface/Make-lang.in:887: recipe for target 
'check-acats' failed

make: *** [check-acats] Error 1
make: Leaving directory '/home/msebor/build/gcc-trunk/gcc'

Thanks
Martin


Re: Ada testsuite failures due to missing gnatlib

2016-05-03 Thread Martin Sebor

On 05/03/2016 03:47 PM, Eric Botcazou wrote:

In my builds lately I've been noticing many Ada tests failing
that didn't use to fail before.  I don't think I'm doing
anything different than before.  The failures all seem to be
due to the error below.  Has something changed about how to
run the Ada test suite or how to configure GCC to enable it?


Yes, see https://gcc.gnu.org/ml/gcc-cvs/2016-04/msg01024.html


$ make -C /build/gcc-trunk/gcc check-ada
make: Entering directory '/home/msebor/build/gcc-trunk/gcc'
gnatlib missing, exiting.


So is there a /build/gcc-trunk/gcc/gcc/ada/rts directory or not?


There is no /build/gcc-trunk/gcc/gcc but presumably you meant
/build/gcc-trunk/gcc/ada (which does exist).  But there is no
rts directory anywhere under the build tree.

Martin



Re: Ada testsuite failures due to missing gnatlib

2016-05-03 Thread Martin Sebor

On 05/03/2016 04:44 PM, Eric Botcazou wrote:

There is no /build/gcc-trunk/gcc/gcc but presumably you meant
/build/gcc-trunk/gcc/ada (which does exist).  But there is no
rts directory anywhere under the build tree.


Then the build failed at some point and this should be in the log.


Actually, I misstated the steps I had done.  I hadn't done make
bootstrap prior to make check, but rather make stage1-bubble.
That's what causes the error (the error doesn't happen after
make bootstrap).  Your change doesn't appear to have anything
to do with it.

Martin


r235766 incomplete?

2016-05-02 Thread Martin Sebor

Hi Jan,

I just noticed the compilation errors in the attached file with
the latest trunk.  It seems as though your recent patch below may
be incomplete:

  commit 46e5dccc6f188bd0fd5af4e9778f547ab63c9cae
  Author: hubicka 
  Date: Mon May 2 16:55:56 2016 +

The following change causes compilation errors due to
ipa_find_agg_cst_for_param taking just three arguments, while it
is being called with four.  (I haven't looked into the other error.)

Regards
Martin

--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -850,7 +850,8 @@ evaluate_conditions_for_known_args (struct cgraph_node *node,
       if (known_aggs.exists ())
 	{
 	  agg = known_aggs[c->operand_num];
-	  val = ipa_find_agg_cst_for_param (agg, c->offset, c->by_ref);
+	  val = ipa_find_agg_cst_for_param (agg, known_vals[c->operand_num],
+					    c->offset, c->by_ref);
/src/gcc/66561/gcc/ipa-inline-analysis.c: In function ‘clause_t evaluate_conditions_for_known_args(cgraph_node*, bool, vec, vec)’:
/src/gcc/66561/gcc/ipa-inline-analysis.c:854:27: error: invalid conversion from ‘tree_node*’ to ‘long int’ [-fpermissive]
   c->offset, c->by_ref);
   ^
/src/gcc/66561/gcc/ipa-inline-analysis.c:854:27: error: too many arguments to function ‘tree_node* ipa_find_agg_cst_for_param(ipa_agg_jump_function*, long int, bool)’
In file included from /src/gcc/66561/gcc/ipa-inline-analysis.c:90:0:
/src/gcc/66561/gcc/ipa-prop.h:639:6: note: declared here
 tree ipa_find_agg_cst_for_param (struct ipa_agg_jump_function *, HOST_WIDE_INT,
  ^
/src/gcc/66561/gcc/ipa-inline.c: In function ‘bool can_inline_edge_p(cgraph_edge*, bool, bool, bool)’:
/src/gcc/66561/gcc/ipa-inline.c:341:55: error: ‘CIF_THUNK’ was not declared in this scope
 e->inline_failed = e->caller->thunk.thunk_p ? CIF_THUNK : CIF_MISMATCHED_ARGUMENTS;
   ^


Re: "error: static assertion failed: [...]"

2016-07-13 Thread Martin Sebor

On 07/13/2016 07:26 AM, Thomas Schwinge wrote:

Hi!

I had recently noticed that given:

 #ifndef __cplusplus /* C */
 _Static_assert(0, "foo");
 #else /* C++ */
 static_assert(0, "foo");
 #endif

..., for C we diagnose:

 [...]:2:1: error: static assertion failed: "foo"
  _Static_assert(0, "foo");
  ^~

..., and for C++ we diagnose:

 [...]:4:1: error: static assertion failed: foo
  static_assert(0, "foo");
  ^

("foo" quoted vs. un-quoted.)  Assuming this difference between C and C++
diagnostics is not intentional, which one should we settle on?  I thought
I'd like the un-quoted version better, but judging by Martin's recent
wiki change (see below), "foo" is a string constant, so should be quoted
in diagnostics?  If yes, OK to commit to trunk the obvious changes (plus
any testsuite updates)?


I hadn't thought of this case.  I don't have a personal preference
but I can provide a few data points to help with the decision (or
maybe muddy the waters).

Neither the C nor the C++ standard specifies whether or not to
include the quotes.  Both simply require that

  ...the implementation shall produce a diagnostic message that
  includes the text of the string literal, ...

Of the compilers I tried (Clang, EDG eccp, IBM XLC, Microsoft
Visual C++, and Oracle CC), more include the quotes in both
languages than not.  IBM XLC only quotes the C++ text, the
opposite of G++.  Visual C++ doesn't quote the C++ text and
doesn't understand the C11 _Static_assert.

From other similar constructs, C and C++ specify that the #error
directive include the preprocessing tokens that follow it (i.e.,
including quotes).

GCC doesn't include the quotes when printing the string argument
in attribute error.

Martin



For reference:

On Tue, 12 Jul 2016 22:34:17 -, GCC Wiki  wrote:

The "DiagnosticsGuidelines" page has been changed by MartinSebor:
https://gcc.gnu.org/wiki/DiagnosticsGuidelines?action=diff=7=8

Comment:
Added a table of contents and a Quoting section.



+ === Quoting ===
+
+ The following elements should be quoted in GCC diagnostics, either using the 
{{{q}}} modifier in a directive such as {{{%qE}}}, or by enclosing the quoted text in 
a pair of {{{%<}}} and {{{%>}}} directives:
+
+  * Language keywords.
+  * Tokens.
+  * Boolean, numerical, character, and string constants that appear in the 
source code.
+  * Identifiers, including function, macro, type, and variable names.
+
+ Other elements such as numbers that do not refer to numeric constants that 
appear in the source code should not be quoted.  For example, in the message:
+ {{{#!highlight c++ numbers=disable
+ argument %d of %qE must be a pointer type
+ }}}
+ since the argument number does not refer to a numerical constant in the 
source code it should not be quoted.



Grüße
  Thomas





Re: DejaGnu directive matching multiple messages on the same line

2016-07-20 Thread Martin Sebor

On 07/20/2016 02:28 PM, Jakub Jelinek wrote:

On Wed, Jul 20, 2016 at 02:19:15PM -0600, Martin Sebor wrote:

Is there a way to express a requirement that a single line cause
two or more diagnostic messages (in any order) each matching one
of the regex strings?


Sure, and it is used many times in the testsuite.

   whatever; /* { dg-error "whatever1" } */
   /* { dg-error "whatever2" "" { target *-*-* } 33 } */

You need to use the target specification and supply line number in the
second and following directive which you want to apply to the same line.


Thanks!  I'd seen this pattern but completely forgot about it.
Let me update the Wiki.

Btw., the above works fine when each directive is on its own line
but when the second follows the first on the same line (somehow
I thought at first that it needed to be), the second one needs another
string argument.  I haven't looked into what it's for but the regex
is the second argument in this case, not the first.  Like this:

  whatever; // { dg-error "regex-1" } { dg-error "???" "regex2" "fail 
message" { target-selector }  }


Martin


DejaGnu directive matching multiple messages on the same line

2016-07-20 Thread Martin Sebor

When multiple diagnostics for a given line in a test are expected,
I have been using the vertical bar ('|') in regular expression
arguments to DejaGnu directives like dg-error and dg-warning on
the assumption that all are matched.

This use appears to be sanctioned by the GCC TestCaseWriting Wiki
(https://gcc.gnu.org/wiki/TestCaseWriting):

  Should a line produce two errors, the regular expression can
  include an "|" (ie. a regular expression OR) between the possible
  message fragments.

  If only one of the errors is important, a better way to deal
  with the multiple errors is to check for the one you want with
  dg-error and discard the extras with dg-prune-output: ...

In a discussion of a recent patch of mine I was surprised to learn
that this use may not actually accomplish quite what's intended.
The effect of the bar operator is for the directive to match
a message on that line that contains either the left operand or
the right operand of the regex.  Which means that the directive
is appropriate when it doesn't matter which of the alternatives
matches, but not when both or all messages are expected and need
to be verified.

Is there a way to express a requirement that a single line cause
two or more diagnostic messages (in any order) each matching one
of the regex strings?

I've searched the test suite for examples but couldn't find
anything.  The dg-{begin,end}-multiline-output directive looked
promising until I realized that it does plain string matching
and doesn't appear to be able to match whole errors.

Thanks
Martin


Re: "error: static assertion failed: [...]"

2016-07-16 Thread Martin Sebor

 From a diagnostics point-of-view, neither version is quoted:

c/c-parser.c: error_at (assert_loc, "static assertion failed: %E", string);

cp/semantics.c: error ("static assertion failed: %s",

To be "quoted", it would need to use either %q or %<%>. Note that %qs
would produce `foo' not "foo". Nevertheless, we probably want to print
'x' for character literals and not `'x'' and "string" for string
literals and not `string'. Thus, the wiki should probably be amended to
clarify this.

Also, there is a substantial difference between %E and %s when the
string contains control characters such as \n \t \u etc. Clang uses
something similar to %E.


I agree that the two sets of quotes in '"string"' don't look quite
right, and that letting embedded control sequences affect the compiler
output probably isn't a good idea.  Which, AFAICT, leaves a plain %E
as the only option.

The nice thing about %qE (or %<%E%>) vs plain %E is that the former
highlight the text of the string (i.e., make it bold).  That makes
the reason for the error stand out.  It would be nice if there were
a way to do that without adding the extra pair of quotes or giving
up on the control character transformation.



For comparison, we use %s to print

test.c:1:9: note: #pragma message:
string
#pragma message "\nstring"


That seems potentially unsafe and might be worth changing to match
the static_assert.

Martin


Re: New GCC 7 -Wformat-truncation suppressed by (and only by) -Og?

2017-02-23 Thread Martin Sebor

On 02/23/2017 06:51 AM, Stephan Bergmann wrote:

At least with a recent GCC 7 trunk build ("gcc (GCC) 7.0.1 20170221
(experimental)"), I noticed that -Wformat-truncation warnings happen to
not be emitted if and only if -Og is given:


That's unfortunately a bug.  The warning runs at different stages
depending on whether or not optimization is enabled and it looks
like the global GCC variable it uses to determine this is not set
for -Og.  Can you please open a bug in Bugzilla?

Thanks
Martin




$ cat test.c
#include <stdio.h>
int main() {
char buf[3];
snprintf(buf, sizeof buf, "%s", "foo");
return 0;
}
$ gcc -Wformat-truncation -Og ~/test.c
$ gcc -Wformat-truncation -O ~/test.c
test.c: In function ‘main’:
test.c:4:34: warning: ‘snprintf’ output truncated before the last
format character [-Wformat-truncation=]
 snprintf(buf, sizeof buf, "%s", "foo");
  ^
test.c:4:5: note: ‘snprintf’ output 4 bytes into a destination of size 3
 snprintf(buf, sizeof buf, "%s", "foo");
 ^~


Any other optimization level (-O1..3/s/fast) does emit the warning.  The
documentation makes it clear that the behavior of that warning may be
coupled to the optimization level, but this difference between -Og and
everything else still looks somewhat odd.  Could it be by mistake?




Re: make[1]: *** wait: No child processes during make -j8 check

2017-01-17 Thread Martin Sebor

On 01/17/2017 08:30 AM, Jeff Law wrote:

On 01/16/2017 05:37 PM, Martin Sebor wrote:

I've run into this failure during make check in the past with
a very large make -j value (such as -j128), but today I've had
two consecutive make check runs fail with -j12 and -j8 on my 8
core laptop with no much else going on.  The last thing running
was the go test suite.  Has something changed recently that
could be behind it?  (My user process limit is 62863.)

What version of make are you using?  There was a bug where make could
lose track of children a while back.  Ask Patsy -- she may remember the
RHEL BZ and from that we can probably extract the version #s that were
affected and see if they match the version you're running.


I have make 4.0 and I'm running Fedora 23.  I haven't changed
anything on the machine in a while and used to be able to run
make -j16.  I still can bootstrap with that but make check
has been failing recently.  Three times yesterday (-j12, -j8,
and -j12 again), but then the last one with -j12 completed.

The frustrating thing is that after the failure I have to
restart make check from scratch, and so it can waste hours.

Martin



jeff




make[1]: *** wait: No child processes during make -j8 check

2017-01-16 Thread Martin Sebor

I've run into this failure during make check in the past with
a very large make -j value (such as -j128), but today I've had
two consecutive make check runs fail with -j12 and -j8 on my 8
core laptop with no much else going on.  The last thing running
was the go test suite.  Has something changed recently that
could be behind it?  (My user process limit is 62863.)

Thanks
Martin


Re: Proposal: readable and writable attributes on variables

2016-09-01 Thread Martin Sebor

On 08/31/2016 04:56 AM, Jens Bauer wrote:

Hi Martin (and everyone else on the gcc list).

I appreciate your input and kind reply to my proposal. :)

On Tue, 30 Aug 2016 20:44:35 -0600, Martin Sebor wrote:

This sounds reasonable and useful to me but to be fully integrated
into the language, attribute not_readable would need to be elevated
to the status of a type qualifier analogous to const.  Otherwise it
would (likely, if applied as most other attributes) be lost during
conversions such as in

   __attribute__ ((not_readable)) int write_only;
   int *preadwrite = &write_only;


Would it not be possible to bring a warning in such cases ?
-or would that actually require a kludge ?


I would expect it to be easier to issue a warning than to add support
for a whole new type qualifier to the two GCC front ends (C and C++).
The latter, of course, has the advantage of a more robust solution.


If it's not possible (or if it's too complicated to bring a warning or error), 
then I would still find it valuable to have __attribute__((not_writable)) and 
__attribute__((not_readable)).

They're intended to be used in structures, which in the CMSIS case would be 
something like this:

#define __I  volatile __attribute__((not_writable))
#define __O  volatile __attribute__((not_readable))
#define __IO volatile
#define __NA __attribute__((not_readable,not_writable)) /* not available or no access */

typedef struct {
   __IO uint32_t SR;
   union {
     __I  uint32_t  IDR;   /* input data register */
     __O  uint32_t  ODR;   /* output data register */
   };
   __NA uint32_t reserved[2];   /*  */


This could be implemented via unnamed bit-fields:

  uint32_t: 32;
  uint32_t: 32;


   /* .. more registers here .. */
};

USART3->ODR = byteToSend;
receivedByte = USART3->IDR;
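
Spelled out, the unnamed bit-field suggestion above might look like
this (a sketch only; type and member names are invented, and uint32_t
bit-fields rely on GCC accepting them as an extension):

  #include <stdint.h>

  typedef struct {
    volatile uint32_t SR;
    union {
      const volatile uint32_t IDR;   /* read-only via const */
      volatile uint32_t ODR;
    };
    uint32_t: 32;   /* reserved words: unnamed, so inaccessible */
    uint32_t: 32;
  } periph_regs_t;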


I'm not sure I understand the purpose of using the union here but
that's probably not important for your proposal.


Normally, __I, __O and __IO are defined as follows in the CMSIS include files:
#ifdef __cplusplus
   #define __I   volatile  /*!< Defines 'read only' permissions */
#else
   #define __I   volatile const/*!< Defines 'read only' permissions */
#endif
#define __O volatile  /*!< Defines 'write only' permissions */
#define __IO volatile  /*!< Defines 'read / write' permissions */

-That means for C++, __I does not work entirely as intended; it was probably 
done this way due to RTTI.
I believe that in this case an attribute would not have this problem ?


I can't imagine what impact RTTI might have on this. I'd expect
"__attribute__((not_writable))" to map to "const" with no need for
the attribute.


__O can not prohibit reading.
Some hardware registers react upon 'access'; eg. if read, it becomes zero, if 
written, only the set bits are changed.
__O currently cannot prevent reading such registers.


Understood.  I think a write-only attribute or type qualifier would
make sense.  Until/unless it's implemented I would recommend to work
around its absence by hiding access to the registers behind a
read-only and write-only functional API.
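
A minimal sketch of such an API (register name and address invented
for illustration):

  #include <stdint.h>

  #define UART_ODR_ADDR 0x40004804u   /* hypothetical address */

  /* Writing is the only operation provided; with no read accessor,
     an accidental read of the write-only register cannot even be
     expressed.  */
  static inline void uart_write_odr (uint32_t value)
  {
    *(volatile uint32_t *)UART_ODR_ADDR = value;
  }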

Martin


Re: Proposal: readable and writable attributes on variables

2016-08-30 Thread Martin Sebor

On 08/30/2016 06:22 AM, Jens Bauer wrote:

Hi all.

I know it's possible to declare a variable 'read-only' by using 'const'.

When working with microcontrollers (small ICs, which often requires you to 
write your code at driver-level), you need to be able to declare a structure 
member 'read-only', 'write-only' or 'read+write'.
In addition to that, it would definitely be handy to declare variables 'no 
access'

So I'd like to propose the following two attributes, which are 'off' by default 
(eg. read access + write access):
__attribute__((not_readable))
__attribute__((not_writable))

Any combination of those allowed.


Presumably you are proposing to have GCC diagnose accesses to objects
declared using these attributes that were in conflict with the intent
of the attributes (as is modifying a const object).

This sounds reasonable and useful to me but to be fully integrated
into the language, attribute not_readable would need to be elevated
to the status of a type qualifier analogous to const.  Otherwise it
would (likely, if applied as most other attributes) be lost during
conversions such as in

  __attribute__ ((not_readable)) int write_only;
  int *preadwrite = &write_only;

partly defeating the purpose of the feature.  That should be doable but
AFAIK it's different from most other attributes GCC supports and so
worth calling out as a potential pitfall.

In any case, unless someone pokes a major hole in your proposal
I suggest to raise an enhancement request for it in Bugzilla and
provide as much detail as you can specifying how you envision this
to work (including any other feedback you may get here).  Fleshing
out a prototype implementation of this proposal and submitting it
to gcc-patches would be even better.

Martin


Re: Lessons learned from compiler error/warnings tests

2016-09-09 Thread Martin Sebor

On 09/09/2016 06:28 AM, Florian Weimer wrote:

For compile-time fortify checks (such as the wrappers for type-safe
open/openat), we need to add tests in glibc which examine the compiler
output for warnings and errors.

I do not want to add Dejagnu as a dependency to the glibc test suite,
but I wonder if you could share any lessons learned from the testing
support facility in GCC.

Would you suggest to mirror the GCC directives exactly?

I probably have to implement the testing directive parser as a small C
program.


Writing a parser in C or C++ or other high level language wouldn't
be my first (or second) choice.  If all I needed to do was check for
compiler errors and warnings on the source lines they point to, my
first choice would be a POSIX shell script combined with grep, sed,
and/or awk.

Martin


Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-04 Thread Martin Sebor

This would have been easier if C++ had allowed the same default value to
be given in both the declaration and the definition:

void foo(int x, int y, bool bar_p = false);

void foo(int x, int y, bool bar_p = false)
{
}

It seems strange that this is not allowed.  The standard says "A default
argument shall not be redefined by a later declaration (not even to the
same value)", but I can't think /why/ it is not allowed.


It seems like an innocuous and even useful feature to allow just
as long as the default value is the same.  The perhaps surprising
challenge is in defining "is the same."  E.g., in

  void foo (T = X + 1);
  ...
  void foo (T = 1 + X) { }

the result of the addition could be different depending on the type
of T and the definition of operator+.  (The default value assigned
to the parameter also depends on the type of T and the conversion
from the result of the addition to it.)

A rule that required both declarations to evaluate to the same value
would be a lot more involved than simply prohibiting a redeclaration.
WG21 tried to nail the general problem down for C++ 2017 with
the Defaulted Comparison Operators but ultimately decided it was too
thorny of a problem to solve.

Martin


Re: VR_RANGE with inverted bounds

2016-10-07 Thread Martin Sebor

On 10/07/2016 11:15 AM, Richard Biener wrote:

On October 7, 2016 6:49:39 PM GMT+02:00, Martin Sebor <mse...@gmail.com> wrote:

While processing the (p += i) expression below to validate the bounds
of the pointer, I call get_range_info for i (in tree-object-size.c).
The function returns the following VR_RANGE: [2147483648, -2147483649]
rather than the expected [0, 1].  Is such a range to be expected or
is it a bug?


This range is not valid (unless unsigned and you show it as signed).


Thanks.  I believe it's signed but even if it was unsigned it wouldn't
be correct (correspond to the actual range of [0, 1]).

I don't have a test case that makes this manifest with an unpatched
compiler.  There aren't too many places that call get_range_info
and the range only appears to be wrong between the second and third
runs of the object-size pass.  After the third run the range is
correct.  (I can see this with the -Wformat-length warning when
I pass the variable to sprintf and trigger a warning.)

Unless you have a suggestion for how I might be able to reproduce
it I'll go ahead and open a bug and include a minimum patch to show
the problem.

Martin


VR_RANGE with inverted bounds

2016-10-07 Thread Martin Sebor

While processing the (p += i) expression below to validate the bounds
of the pointer, I call get_range_info for i (in tree-object-size.c).
The function returns the following VR_RANGE: [2147483648, -2147483649]
rather than the expected [0, 1].  Is such a range to be expected or
is it a bug?

In general, what assumptions can I safely make about the bounds for
VR_RANGE and VR_ANTI_RANGE?  For example, besides being inverted like
in this example, can the bounds also ever be the same?  I don't think
I have seen them be the same but I think I have seen a case where the
variable's effective range was [X, X] (i.e., it was equal to
a constant) and get_range_info returned VR_VARYING.  Should that also
be expected?
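
For reference, the call in question looks roughly like this (a
sketch; the signature reflects my reading of tree-ssanames.h in
this timeframe and may not match other releases):

  wide_int min, max;
  enum value_range_type rt = get_range_info (name, &min, &max);
  if (rt == VR_RANGE)
    ;   /* name is known to lie within [min, max] */
  else if (rt == VR_ANTI_RANGE)
    ;   /* name is known to lie outside [min, max] */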

Thanks
Martin

  void g (char *p, int i)
  {
if (i < 0 || 1 < i)
  i = 0;

p += i;
...
  }


Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-04 Thread Martin Sebor

On 10/04/2016 04:41 AM, Jonathan Wakely wrote:

On 4 October 2016 at 10:21, David Brown wrote:

On 04/10/16 01:48, Martin Sebor wrote:

In a recent review Jason and I discussed the style convention
commonly followed in the C++ front end to annotate arguments
in calls to functions taking bool parameters with a comment
along the lines of

  foo (1, 2, /*bar_p=*/true);


I like this convention at the call-site, otherwise "true" or "false"
doesn't tell you much and you have to look at the declaration.


FWIW, it occurs to me that another similar case is passing NULL
to a function that expects a pointer (or tree) argument.  When
the function takes more than one such pointer (and perhaps even
handles null in some but not others), it can be hard to tell
which is which.  For the null case (as opposed to for simply
determining which argument is an ordinary pointer and which one
is a tree) it's not enough to look at the declaration.  One has
to look at the definition of the function or the comment above
it that lays out its contract to tell whether it handles a null
pointer or tree.  But beyond the NULL_TREE attempt that (only
superficially) distinguishes ordinary pointers from trees there
is no convention in place where the argument would be similarly
annotated.

The same observation can be made about other constants (in the
GCC sources I've looked it, frequently argument numbers such as
in TREE_OPERAND or get_nth_callarg, or initial container sizes
such as auto_vec constructor, etc.)

(I realize that these examples could be viewed as arguments for
expanding the C++ FE convention beyond bool, or for abandoning
it as too incomplete to be generally useful.)



IMHO even better is to not use bool and define an enumeration type, so
the call site has something unambiguous like foo (1, 2, yes_bar) or
foo (1, 2, no_bar).


This strikes me as an indictment of hardcoding literals of any
kind, and a good argument in favor of using symbolic constants
in general, but less like a solution to a problem that's unique
to the two bool constants.


I assume the -1 default argument was a typo and that nobody is
proposing using -1 as a sensible bool value. I hope that's the case
anyway :-)


Yes, that was a typo, thanks for the correction!


In general I agree, although it could be useful in a more realistic
example. If the parameter with a default argument is a 'tree' and
could be null, then the comment serves as a reminder that callers
might not have passed a valid tree:

int frob_trees(tree lhs, tree rhs /* = NULL_TREE */)
{
  // Handle the case where only one argument is passed:
  if (rhs == NULL_TREE)
rhs = lhs;  // or something more sensible.
  // ...
}

Even here it's not hugely valuable, as the "Handle the case ..."
comment can convey the same information.


This is a great example, thank you!  A "Handle NULL_TREE" comment
does convey the same information but it can get lost in the rest
of the comment text.  In my own experience, the /* = NULL_TREE */
comment is much less likely to be missed.  (YMMV of course.)

That said, putting attribute nonnull to use in declarations of
functions that do not accept null pointers would arguably be
an even better way of both documenting the contract for pointers
and helping detect its violations early.
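
For illustration, a minimal sketch of that approach using plain
pointers rather than trees (names invented):

  /* Argument 1 must not be null; GCC can diagnose violations it
     sees at compile time via -Wnonnull.  */
  __attribute__ ((nonnull (1)))
  void frob (char *dst, const char *src /* may be null */);

  void demo (void)
  {
    frob ((char *)0, "x");   /* diagnosed by -Wnonnull */
  }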

Martin


tree-prof parallel make check failures

2016-09-20 Thread Martin Sebor

I'm seeing a number of failures in different tests in the tree-prof
directory when I run make check in parallel, none of which are
reproducible with -j1.  I don't see anything about it in Bugzilla or
in recent test results.  Has anyone noticed this or am I missing
something?

This is on a 56-core x86_64 machine running Fedora 21 and DejaGnu
1:1.5.1-4.fc21 installed.

Thanks
Martin

Serial make check passes:
$ make -C /build/gcc-49905 check-c RUNTESTFLAGS='tree-prof.exp'
...
=== gcc Summary ===

# of expected passes		307
# of unsupported tests  48
/build/gcc-49905/gcc/xgcc  version 7.0.0 20160920 (experimental) (GCC)


Parallel make check fails a bunch of different tests each time
it's run:
$ make -C /build/gcc-49905 -j80 -k check-c RUNTESTFLAGS='tree-prof.exp' 
2>&1 | grep FAIL

FAIL: gcc.dg/tree-prof/peel-1.c execution,-g
FAIL: gcc.dg/tree-prof/update-tailcall.c execution,-g
FAIL: gcc.dg/tree-prof/cold_partition_label.c execution,-g
FAIL: gcc.dg/tree-prof/val-prof-1.c execution,-g
FAIL: gcc.dg/tree-prof/pr45354.c execution,-g
FAIL: gcc.dg/tree-prof/val-prof-2.c execution,-g
FAIL: gcc.dg/tree-prof/crossmodule-indircall-1.c execution,-g
FAIL: gcc.dg/tree-prof/crossmodule-indircall-1a.c execution,-g
FAIL: gcc.dg/tree-prof/pr52027.c execution,-g
FAIL: gcc.dg/tree-prof/20041218-1.c execution,-g

and again:
$ nice make -C /opt/notnfs/msebor/build/gcc-49905 -j80 -k check-c 
RUNTESTFLAGS='tree-prof.exp' 2>&1 | grep FAIL

FAIL: gcc.dg/tree-prof/bb-reorg.c execution,-g
FAIL: gcc.dg/tree-prof/prof-robust-1.c execution,-g
FAIL: gcc.dg/tree-prof/update-loopch.c execution,-g
FAIL: gcc.dg/tree-prof/crossmodule-indircall-1a.c execution,-g
FAIL: gcc.dg/tree-prof/switch-case-2.c execution,-g
FAIL: gcc.dg/tree-prof/ic-misattribution-1.c execution,-g
FAIL: gcc.dg/tree-prof/tracer-1.c execution,-g


Re: tree-prof parallel make check failures

2016-09-21 Thread Martin Sebor

On 09/21/2016 07:39 AM, Marek Polacek wrote:

On Tue, Sep 20, 2016 at 05:29:03PM -0600, Martin Sebor wrote:

I'm seeing a number of failures in different tests in the tree-prof
directory when I run make check in parallel, none of which are
reproducible with -j1.  I don't see anything about it in Bugzilla or
in recent test results.  Has anyone noticed this or am I missing
something?


I'm seeing those too and it's very bothersome.  I think Andi Kleen
had some patches to fix these, but seems it needs to be fixed more.


Thank you both for confirming that.  I've raised testsuite/77684
- many tree-prof testsuite failures in parallel make check.

Martin


style convention: /*foo_p=*/ to annotate bool arguments

2016-10-03 Thread Martin Sebor

In a recent review Jason and I discussed the style convention
commonly followed in the C++ front end to annotate arguments
in calls to functions taking bool parameters with a comment
along the lines of

  foo (1, 2, /*bar_p=*/true);

I pointed out to Jason that in another code review, Jeff asked
to remove the same comment from the middle-end [1].  In the
interest of consistency Jason and I thought I should bring this
up for discussion so we can all be clear on whether or not this
is something worth standardizing and documenting.

As a separate question, in the same discussion I mention to Jason
a convention that I myself have found useful where the value of
a default argument is indicated in a comment in a function
definition that is declared elsewhere to take one, e.g., like so:

  // In some header:
  void foo (int, int, bool = -1);

  // In some .c file:
  void foo (int x, int y, bool bar_p /* = false */)
  {
...
  }

Jason thought this would be a good convention.  Is there any
interest in/support for adopting it?

Thanks
Martin

[1] https://gcc.gnu.org/ml/gcc-patches/2016-08/msg00354.html
[2] https://gcc.gnu.org/ml/gcc-patches/2016-09/msg01469.html


anti-ranges of signed variables

2016-11-11 Thread Martin Sebor

I noticed that variables of signed integer types that are constrained
to a specific subrange of values of the type like so:

 [-TYPE_MAX + N, N]

are reported by get_range_info as the anti-range

 [TYPE_MAX + 1, TYPE_MIN - 1]

for all positive N of the type regardless of the variable's actual
range.  Basically, such variables are treated the same as variables
of the same type that have no range info associated with them at all
(such as function arguments or global variables).

For example, while a signed char variable between -1 and 126 is
represented by

 VR_ANTI_RANGE [127, -2]

as one would expect, when the same variable is between -4 and 123
it's represented by

 VR_ANTI_RANGE [128, -129]

rather than the expected

 VR_ANTI_RANGE [124, -5]

I'd like to know if this is a deliberate feature or a bug.  If it's
a feature, can and should it be relied on for all signed variables?
Or if it need not always hold, under what conditions, if any, might
it not hold?

I'm also curious why get_range_info returns VR_ANTI_RANGE for signed
variables with bounds of opposite signedness, and if it's something
that can be relied on.  I.e., does an anti-range always imply that
the original variable (before being cast to an unsigned type as
often happens) is signed and its lower bound is negative?

I've gone through tree-ssa-vrp.[hc] to try to tease out answers to
these questions from the code but it looks too involved for me to
be confident that I haven't overlooked something.

Thanks
Martin

PS I came across this while stepping through the code below in
the determine_block_size function in builtins.c:

   void f (void *d, void *s, signed char n)
   {
 if (n < -4 || 123 < n) return;
 memcpy (d, s, n);
   }

The function tries to estimate the likely maximum block size for
the memcpy call and determines it to be 127 (i.e., SCHAR_MAX) in
this case.  When the if statement is changed to this:

 if (n < -4 || 122 < n) return;

it determines the likely maximum size to be 122 as I would expect.



Re: anti-ranges of signed variables

2016-11-11 Thread Martin Sebor

On 11/11/2016 12:12 PM, Andrew Pinski wrote:

On Fri, Nov 11, 2016 at 10:51 AM, Martin Sebor <mse...@gmail.com> wrote:

On 11/11/2016 10:53 AM, Richard Biener wrote:


On November 11, 2016 6:34:37 PM GMT+01:00, Martin Sebor <mse...@gmail.com>
wrote:


I noticed that variables of signed integer types that are constrained
to a specific subrange of values of the type like so:

 [-TYPE_MAX + N, N]

are reported by get_range_info as the anti-range

 [TYPE_MAX + 1, TYPE_MIN - 1]

for all positive N of the type regardless of the variable's actual
range.  Basically, such variables are treated the same as variables
of the same type that have no range info associated with them at all
(such as function arguments or global variables).

For example, while a signed char variable between -1 and 126 is
represented by

 VR_ANTI_RANGE [127, -2]



? I'd expect [-1, 126].  And certainly never range-min > range-max



Okay.  With this code:

  void f (void *d, const void *s, signed char i)
  {
if (i < -1 || 126 < i) i = -1;
__builtin_memcpy (d, s, i);
  }

I see the following in the output of -fdump-tree-vrp:

  prephitmp_11: ~[127, 18446744073709551614]



prephitmp_11 is an unsigned type as the last argument to memcpy is
size_t which is an unsigned type. We had this discussion in bugzilla
about this same thing when we were talking about pointer plus.


Right.  But that wasn't my question (the reported range above
correctly reflects the variable's range in the code).

My question is: why is the range for i in

  if (i < -3 || 124 < i) i = -1;

the same as in

  if (i < -4 || 123 < i) i = -1;

In both cases it's represented as

  prephitmp_12: ~[128, 18446744073709551487]

which is unrelated to i's actual range.  This seems to be true
for all signed types in the range [-TYPE_MAX + N, N].

Martin



Thanks,
Andrew


  ...
  # prephitmp_11 = PHI <_12(3), 18446744073709551615(2)>
  __builtin_memcpy (d_8(D), s_9(D), prephitmp_11);

and this in GDB:

  Breakpoint 1, determine_block_size (len=0x70daacf0,
  ...
  2913else if (range_type == VR_ANTI_RANGE)
  (gdb) s
  2916if (min == 0)
  (gdb) p range_type
  $1 = VR_ANTI_RANGE
  (gdb) p min
  $2 = { = {val = {127, 18136432, 140737235550272}, len =
1,
  precision = 64}, static is_sign_extended = }
  (gdb) p max
  $3 = { = {val = {-2, 18136478, 140737235550272}, len =
1,
  precision = 64}, static is_sign_extended = }




as one would expect, when the same variable is between -4 and 123
it's represented by

 VR_ANTI_RANGE [128, -129]

rather than the expected

 VR_ANTI_RANGE [124, -5]

I'd like to know if this is a deliberate feature or a bug.



It must be a misinterpretation on your side.  Or an implementation bug.



Given the above, which does it look like it is?

Thanks
Martin




Re: anti-ranges of signed variables

2016-11-11 Thread Martin Sebor

On 11/11/2016 10:53 AM, Richard Biener wrote:

On November 11, 2016 6:34:37 PM GMT+01:00, Martin Sebor <mse...@gmail.com> 
wrote:

I noticed that variables of signed integer types that are constrained
to a specific subrange of values of the type like so:

 [-TYPE_MAX + N, N]

are reported by get_range_info as the anti-range

 [TYPE_MAX + 1, TYPE_MIN - 1]

for all positive N of the type regardless of the variable's actual
range.  Basically, such variables are treated the same as variables
of the same type that have no range info associated with them at all
(such as function arguments or global variables).

For example, while a signed char variable between -1 and 126 is
represented by

 VR_ANTI_RANGE [127, -2]


? I'd expect [-1, 126].  And certainly never range-min > range-max


Okay.  With this code:

  void f (void *d, const void *s, signed char i)
  {
if (i < -1 || 126 < i) i = -1;
__builtin_memcpy (d, s, i);
  }

I see the following in the output of -fdump-tree-vrp:

  prephitmp_11: ~[127, 18446744073709551614]
  ...
  # prephitmp_11 = PHI <_12(3), 18446744073709551615(2)>
  __builtin_memcpy (d_8(D), s_9(D), prephitmp_11);

and this in GDB:

  Breakpoint 1, determine_block_size (len=0x70daacf0,
  ...
  2913else if (range_type == VR_ANTI_RANGE)
  (gdb) s
  2916if (min == 0)
  (gdb) p range_type
  $1 = VR_ANTI_RANGE
  (gdb) p min
  $2 = { = {val = {127, 18136432, 140737235550272}, 
len = 1,

  precision = 64}, static is_sign_extended = }
  (gdb) p max
  $3 = { = {val = {-2, 18136478, 140737235550272}, 
len = 1,

  precision = 64}, static is_sign_extended = }




as one would expect, when the same variable is between -4 and 123
it's represented by

 VR_ANTI_RANGE [128, -129]

rather than the expected

 VR_ANTI_RANGE [124, -5]

I'd like to know if this is a deliberate feature or a bug.


It must be a misinterpretation on your side.  Or an implementation bug.


Given the above, which does it look like it is?

Thanks
Martin


use of exceptions in GCC

2016-11-16 Thread Martin Sebor

GCC is built with -fno-exceptions.  I assume that's mainly to avoid
having to catch and handle exceptions in what was originally C code.
I also assume that means there's a policy or convention in
place against throwing exceptions in GCC or making use of constructs
that might throw (such as the non-placement new expressions).

By coincidence, bootstrapping my patch for bugs 77531 and 78284
exposed a few uses of the non-placement array new expression in
dominance.c (in dom_info::dom_init) that may throw an exception(*).

I'm wondering if those calls should be changed to avoid exceptions.

I'm also curious if there really is a policy/convention for dealing
with exceptions in GCC, what it actually is/says.

Thanks
Martin

[*] My patch exposed these because when -fno-exceptions is used
(and coincidentally also in C++ 98 mode) GCC emits conditional calls
to operator new[](SIZE_MAX) when the size computation that multiplies
the number of elements by the element size exceeds SIZE_MAX.  The
-Walloc-size-larger-than warning flags for attempting to allocate
more storage than the size of the biggest object, or SIZE_MAX / 2.


Re: use of exceptions in GCC

2016-11-16 Thread Martin Sebor

On 11/16/2016 07:36 PM, Segher Boessenkool wrote:

On Wed, Nov 16, 2016 at 01:48:41PM -0700, Martin Sebor wrote:

I'm also curious if there really is a policy/convention for dealing
with exceptions in GCC, what it actually is/says.


https://gcc.gnu.org/codingconventions.html#Exceptions


Thanks.  I guess I should have reviewed that document before asking.

The rationale makes it sound like the use of throwing new expressions
(i.e., those in dominance.c) or any other construct that may indirectly
throw is reasonable, as long as it doesn't prevent building GCC with
-fno-exceptions.

Martin


Re: anti-ranges of signed variables

2016-11-11 Thread Martin Sebor

On 11/11/2016 12:30 PM, Martin Sebor wrote:

On 11/11/2016 12:12 PM, Andrew Pinski wrote:

On Fri, Nov 11, 2016 at 10:51 AM, Martin Sebor <mse...@gmail.com> wrote:

On 11/11/2016 10:53 AM, Richard Biener wrote:


On November 11, 2016 6:34:37 PM GMT+01:00, Martin Sebor
<mse...@gmail.com>
wrote:


I noticed that variables of signed integer types that are constrained
to a specific subrange of values of the type like so:

 [-TYPE_MAX + N, N]

are reported by get_range_info as the anti-range

 [TYPE_MAX + 1, TYPE_MIN - 1]

for all positive N of the type regardless of the variable's actual
range.  Basically, such variables are treated the same as variables
of the same type that have no range info associated with them at all
(such as function arguments or global variables).

For example, while a signed char variable between -1 and 126 is
represented by

 VR_ANTI_RANGE [127, -2]



? I'd expect [-1, 126].  And certainly never range-min > range-max



Okay.  With this code:

  void f (void *d, const void *s, signed char i)
  {
if (i < -1 || 126 < i) i = -1;
__builtin_memcpy (d, s, i);
  }

I see the following in the output of -fdump-tree-vrp:

  prephitmp_11: ~[127, 18446744073709551614]



prephitmp_11 is an unsigned type as the last argument to memcpy is
size_t which is an unsigned type. We had this discussion in bugzilla
about this same thing when we were talking about pointer plus.


Right.  But that wasn't my question (the reported range above
correctly reflects the variable's range in the code).

My question is: why is the range for i in

  if (i < -3 || 124 < i) i = -1;

the same as in

  if (i < -4 || 123 < i) i = -1;

In both cases it's represented as

  prephitmp_12: ~[128, 18446744073709551487]

which is unrelated to i's actual range.  This seems to be true
for all signed types in the range [-TYPE_MAX + N, N].


There are still very few clients of get_range_info in GCC so it
took me some time to figure out how to demonstrate this effect
in a standalone test case with unmodified GCC.  It finally dawned
on me that I could use Aldy's -Walloca-larger-than to reproduce
it.  I raised bug 78327 with the details.

Martin



Re: style convention: /*foo_p=*/ to annotate bool arguments

2016-10-11 Thread Martin Sebor

To close the loop on this thread, although there was mild support
for both of these conventions there were also objections to both,
including a suggestion for an alternative to the "/*foo_p=*/" style
that would be preferred by most people who responded.

With that I don't have the sense that there is consensus to adopt
either style as conventions for GCC code.

Jason and Jeff (and other reviewers), what does this mean for those
of us submitting patches?  Do we discontinue using the "/*foo_p=*/"
style in the C++ front end or should we train ourselves to remember
to use it there but nowhere else?

Thanks
Martin

On 10/03/2016 05:48 PM, Martin Sebor wrote:

In a recent review Jason and I discussed the style convention
commonly followed in the C++ front end to annotate arguments
in calls to functions taking bool parameters with a comment
along the lines of

  foo (1, 2, /*bar_p=*/true);

I pointed out to Jason that in another code review, Jeff asked
to remove the same comment from the middle-end [1].  In the
interest of consistency Jason and I thought I should bring this
up for discussion so we can all be clear on whether or not this
is something worth standardizing and documenting.

As a separate question, in the same discussion I mention to Jason
a convention that I myself have found useful where the value of
a default argument is indicated in a comment in a function
definition that is declared elsewhere to take one, e.g., like so:

  // In some header:
  void foo (int, int, bool = -1);

  // In some .c file:
  void foo (int x, int y, bool bar_p /* = false */)
  {
...
  }

Jason thought this would be a good convention.  Is there any
interest in/support for adopting it?

Thanks
Martin

[1] https://gcc.gnu.org/ml/gcc-patches/2016-08/msg00354.html
[2] https://gcc.gnu.org/ml/gcc-patches/2016-09/msg01469.html




Re: bootstrap possibly broken on trunk ?

2016-10-13 Thread Martin Sebor

On 10/13/2016 11:42 AM, Prathamesh Kulkarni wrote:

Hi,
I am getting the following error when bootstrapping trunk (tried with r241108)
on x86_64-unknown-linux-gnu during stage-1:

../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:12:
error: ISO C++ forbids declaration of ‘_Bind_simple_helper’
with no type [-fpermissive]
   template _Bind_simple_helper>::__type __bind_simple(void (thread::*&&)(),
reference_wrapper&&);
^~~
../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:12:
error: ‘_Bind_simple_helper’ is not a template function
../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:31:
error: expected ‘;’ before ‘<’ token
   template _Bind_simple_helper>::__type __bind_simple(void (thread::*&&)(),
reference_wrapper&&);

Could someone please confirm this for me ?


I don't see this at r241131 but I just raised bug 77977 for a stage3
bootstrap error in libgo on x86_64: cannot stat 'stubs.o'

Martin


Re: -Wformat-length and floats

2016-11-29 Thread Martin Sebor

On 11/29/2016 09:32 AM, Martin Sebor wrote:

On 11/29/2016 08:38 AM, Gerald Pfeifer wrote:

This took me a bit to get to the bottom of, but I now believe
that we need to work both on the documentation and implementation
of -Wformat-length, in particular when it comes to floats.

  #include <stdio.h>

  typedef struct M {
  float a, b, c;
  } M;

  char *foo(M *m) {
  static char buf[64];
  sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
  return buf;
  }

First of all, it turns out that floats are not covered in the
documentation at all.  I've had a look at the code, and think
I'll be able to propose a doc change this coming weekend at the latest.


Thanks for looking at this and bringing it up for discussion!
Suggestions for improvements are very welcome and appreciated.



Now to what actually happens in the example above:

  # gcc -c -o x.o -Wall x.c
  x.c: In function ‘foo’:
  x.c:9:24: warning: ‘%.2f’ directive writing between 4 and 317 bytes
into a region of size 0 [-Wformat-length=]
 sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
^~~~
  x.c:9:5: note: format output between 15 and 954 bytes into a
destination of size 64
 sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
 ^~~~


After spending some more time on this I think I see the problem
you are pointing out: this is a false positive at level 1 of the
warning.  I opened bug 78605 for it with a slightly modified test
case.  Please chime in on it if I missed something.

Thanks
Martin



Re: strange test failures

2016-12-10 Thread Martin Sebor

On 12/10/2016 06:52 AM, Prathamesh Kulkarni wrote:

On 9 December 2016 at 05:17, Martin Sebor <mse...@gmail.com> wrote:

I'm seeing failures in a few C tests that I'm not sure what to make
out of.  The tests fail due to undefined symbols while linking even
though they're not meant to link.  Among these are a number (but not
all) of the gcc.dg/divmod-[1-6]{-simode}.c tests.

  FAIL: gcc.dg/divmod-1.c (test for excess errors)

The log shows that they are being built without the -S option:

Oops, I somehow missed adding dg-do compile to the divmod test-cases.
Sorry about that. The attached patch explicitly adds dg-do compile to
the test-cases.
I wonder though how I didn't see these failures. Is compile the default ?
Anyway I suppose explicitly marking the tests to be compile-only is better.
Is the attached patch OK to commit ?
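
For reference, the directive being added is just the standard DejaGnu
marker at the top of each test:

  /* { dg-do compile } */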


Thanks Prathamesh!  I confess I don't know what the default is.
But these tests pass in my other builds, either with other patches
of mine or in a clean pull of the top of trunk. There are others
besides the divmod where I see the same problem, including at least
these:

  pr38338.c
  ipa/inlinehint-1.c
  gcc.dg/params/blocksort-part.c
  tree-ssa/ipa-split-3.c
  Woverlength-strings-pedantic-c89.c
  torture/pr32897.c

There tend to be different sets in different builds.  Those above
are also missing dg-do compile.  So something else must be going
on here to cause these failures only under certain conditions.


PS: In ChangeLog, is it acceptable to use regex ?
for eg:
testsuite/
* gcc.dg/divmod-*.c: Add dg-do compile.
instead of listing each divmod test-case explicitly.


The GNU ChangeLog style guide recommends against it:
https://www.gnu.org/prep/standards/html_node/Style-of-Change-Logs.html

Martin



Thanks,
Prathamesh


spawn -ignore SIGHUP /opt/notnfs/msebor/build/gcc-78622/gcc/xgcc
-B/opt/notnfs/msebor/build/gcc-78622/gcc/
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.dg/divmod-1.c
-fno-diagnostics-show-caret -fdiagnostics-color=never -O2
-fdump-tree-widening_mul-details -lm -o ./divmod-1.exe
/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
...

When I run make check-c I get these same failures in different tests.

In successful builds the logs show that the tests are only assembled
(i.e., compiled with -S), never linked.

I can reproduce this on two different x86_64 machines but so far
only with my patches to GCC (the middle end but none of these tests
or the GCC .exp files).  Is it possible that somehow my changes cause
this?  (The removal of the -S option.)  Am I missing something about
how GCC tests are run?

Thanks
Martin

PS I also see other strange errors like

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/execute.exp.
ERROR: unmatched open brace in list

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/ieee/ieee.exp.
ERROR: torture-init: torture_without_loops is not empty as expected




Re: strange test failures

2016-12-10 Thread Martin Sebor

On 12/10/2016 11:03 AM, Martin Sebor wrote:

On 12/10/2016 06:52 AM, Prathamesh Kulkarni wrote:

On 9 December 2016 at 05:17, Martin Sebor <mse...@gmail.com> wrote:

I'm seeing failures in a few C tests that I'm not sure what to make
out of.  The tests fail due to undefined symbols while linking even
though they're not meant to link.  Among these are a number (but not
all) of the gcc.dg/divmod-[1-6]{-simode}.c tests.

  FAIL: gcc.dg/divmod-1.c (test for excess errors)

The log shows that they are being built without the -S option:

Oops, I somehow missed adding dg-do compile to the divmod test-cases.
Sorry about that. The attached patch explicitly adds dg-do compile to
the test-cases.
I wonder though how I didn't see these failures. Is compile the default ?
Anyway I suppose explicitly marking the tests to be compile-only is
better.
Is the attached patch OK to commit ?


Thanks Prathamesh!  I confess I don't know what the default is.
But these tests pass in my other builds, either with other patches
of mine or in a clean pull of the top of trunk. There are others
besides the divmod where I see the same problem, including at least
these:

  pr38338.c
  ipa/inlinehint-1.c
  gcc.dg/params/blocksort-part.c
  tree-ssa/ipa-split-3.c
  Woverlength-strings-pedantic-c89.c
  torture/pr32897.c

There tend to be different sets in different builds.  Those above
are also missing dg-do compile.  So something else must be going
on here to cause these failures only under certain conditions.


I've narrowed the problem down to a bug in a torture/execute test
added in my patch.  It's caused by an invalid dg-warn directive:

/* { dg-warn "..." "int32plus" { target { int32plus } } */

It's "dg-warn" rather than "dg-warning" and it's missing the final
closing curly brace.

With this fixed all the other strange failures clear up.  So your
patch isn't necessary.  I guess since I run make check with a large
-jN to parallelize the tests a DejaGnu error in one of the tests
can cause cascading failures in tests run later on.

Martin




PS: In ChangeLog, is it acceptable to use regex ?
for eg:
testsuite/
* gcc.dg/divmod-*.c: Add dg-do compile.
instead of listing each divmod test-case explicitly.


The GNU ChangeLog style guide recommends against it:
https://www.gnu.org/prep/standards/html_node/Style-of-Change-Logs.html

Martin



Thanks,
Prathamesh


spawn -ignore SIGHUP /opt/notnfs/msebor/build/gcc-78622/gcc/xgcc
-B/opt/notnfs/msebor/build/gcc-78622/gcc/
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.dg/divmod-1.c
-fno-diagnostics-show-caret -fdiagnostics-color=never -O2
-fdump-tree-widening_mul-details -lm -o ./divmod-1.exe
/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
...

When I run make check-c I get these same failures in different tests.

In successful builds the logs show that the tests are only assembled
(i.e., compiled with -S), never linked.

I can reproduce this on two different x86_64 machines but so far
only with my patches to GCC (the middle end but none of these tests
or the GCC .exp files).  Is it possible that somehow my changes cause
this?  (The removal of the -S option.)  Am I missing something about
how GCC tests are run?

Thanks
Martin

PS I also see other strange errors like

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/execute.exp.

ERROR: unmatched open brace in list

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/ieee/ieee.exp.

ERROR: torture-init: torture_without_loops is not empty as expected






strange test failures

2016-12-08 Thread Martin Sebor

I'm seeing failures in a few C tests that I'm not sure what to make
out of.  The tests fail due to undefined symbols while linking even
though they're not meant to link.  Among these are a number (but not
all) of the gcc.dg/divmod-[1-6]{-simode}.c tests.

  FAIL: gcc.dg/divmod-1.c (test for excess errors)

The log shows that they are being built without the -S option:

spawn -ignore SIGHUP /opt/notnfs/msebor/build/gcc-78622/gcc/xgcc 
-B/opt/notnfs/msebor/build/gcc-78622/gcc/ 
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.dg/divmod-1.c 
-fno-diagnostics-show-caret -fdiagnostics-color=never -O2 
-fdump-tree-widening_mul-details -lm -o ./divmod-1.exe

/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
...

When I run make check-c I get these same failures in different tests.

In successful builds the logs show that the tests are only assembled
(i.e., compiled with -S), never linked.

I can reproduce this on two different x86_64 machines but so far
only with my patches to GCC (the middle end but none of these tests
or the GCC .exp files).  Is it possible that somehow my changes cause
this?  (The removal of the -S option.)  Am I missing something about
how GCC tests are run?

Thanks
Martin

PS I also see other strange errors like

ERROR: tcl error sourcing 
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/execute.exp.

ERROR: unmatched open brace in list

ERROR: tcl error sourcing 
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/ieee/ieee.exp.

ERROR: torture-init: torture_without_loops is not empty as expected


Re: strange test failures

2016-12-09 Thread Martin Sebor

On 12/08/2016 05:32 PM, Jeff Law wrote:

On 12/08/2016 04:47 PM, Martin Sebor wrote:

I'm seeing failures in a few C tests that I'm not sure what to make
out of.  The tests fail due to undefined symbols while linking even
though they're not meant to link.  Among these are a number (but not
all) of the gcc.dg/divmod-[1-6]{-simode}.c tests.

  FAIL: gcc.dg/divmod-1.c (test for excess errors)

The log shows that they are being built without the -S option:

spawn -ignore SIGHUP /opt/notnfs/msebor/build/gcc-78622/gcc/xgcc
-B/opt/notnfs/msebor/build/gcc-78622/gcc/
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.dg/divmod-1.c
-fno-diagnostics-show-caret -fdiagnostics-color=never -O2
-fdump-tree-widening_mul-details -lm -o ./divmod-1.exe
/lib/../lib64/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
...

When I run make check-c I get these same failures in different tests.

In successful builds the logs show that the tests are only assembled
(i.e., compiled with -S), never linked.

I can reproduce this on two different x86_64 machines but so far
only with my patches to GCC (the middle end but none of these tests
or the GCC .exp files).  Is it possible that somehow my changes cause
this?  (The removal of the -S option.)  Am I missing something about
how GCC tests are run?

IIRC the default for a dg test is to compile, but not assemble or run.
The log shows that it's trying to go through the link step.





Thanks
Martin

PS I also see other strange errors like

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/execute.exp.


ERROR: unmatched open brace in list

ERROR: tcl error sourcing
/opt/notnfs/msebor/src/gcc/gcc-78622/gcc/testsuite/gcc.c-torture/execute/ieee/ieee.exp.


ERROR: torture-init: torture_without_loops is not empty as expected

This all sounds like you've got a mucked up testsuite/lib directory or
something like that.  I'd start by verifying the contents of your
testsuite/lib directory.

I did a testrun on Dec 7 on the trunk and didn't see any of these problems.


I've confirmed by recreating the tree and re-running bootstrap
and the C tests (make check-c) that it's the patch that's causing
it.  I just don't know yet how it's possible.

Martin



Re: k-byte memset/memcpy/strlen builtins

2017-01-12 Thread Martin Sebor

On 01/11/2017 09:16 AM, Robin Dapp wrote:

Hi,

When examining the performance of some test cases on s390 I realized
that we could do better for constructs like 2-byte memcpys or
2-byte/4-byte memsets. Due to some s390-specific architectural
properties, we could be faster by e.g. avoiding excessive unrolling and
using dedicated memory instructions (or similar).


There are at least two enhancement requests in Bugzilla to improve
memcmp when one or more of the arguments are constant: bugs 12086
and 78257 (the former for constant small lengths and the latter
for constant byte arrays).  It seems that one or both of these might
also benefit from some of your ideas and/or vice versa.
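
For instance (an illustrative sketch, not taken from either report),
a call with a small constant length such as

  int eq4 (const void *a, const void *b)
  {
    return __builtin_memcmp (a, b, 4) == 0;
  }

is the kind of candidate bug 12086 asks to expand inline.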

Martin


For 1-byte memset/memcpy the builtin functions provide a straightforward
way to achieve this. At first sight it seemed possible to extend
tree-loop-distribution.c to include the additional variants we need.
However, multibyte memsets/memcpys are not covered by the C standard and
I'm therefore unsure if such an approach is preferable or if there are
more idiomatic ways or places where to add the functionality.
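
Concretely, a "2-byte memset" here means a loop like the following
sketch, which loop distribution could in principle recognize:

  void set16 (unsigned short *p, unsigned short v, unsigned long n)
  {
    for (unsigned long i = 0; i < n; i++)
      p[i] = v;
  }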

The same question goes for 2-byte strlen. I didn't see a recognition
pattern for strlen (apart from optimizations due to known string length
in tree-ssa-strlen.c). Would it make sense to include strlen recognition
and subsequently handling for 2-byte strlen? The situation might of
course more complicated than memset because of encodings etc. My snippet
in question used a fixed-length encoding of 2 bytes, however.

Another simple idea to tackle this would be a peephole optimization but
I'm not sure if this is really feasible for something like memset.
Wouldn't the peephole have to be recursive then?

Regards
 Robin





Re: -Wformat-length and floats

2016-11-29 Thread Martin Sebor

On 11/29/2016 08:38 AM, Gerald Pfeifer wrote:

This took me a bit to get to the bottom of, but I now believe
that we need to work both on the documentation and implementation
of -Wformat-length, in particular when it comes to floats.

  #include <stdio.h>

  typedef struct M {
  float a, b, c;
  } M;

  char *foo(M *m) {
  static char buf[64];
  sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
  return buf;
  }

First of all, it turns out that floats are not covered in the
documentation at all.  I've had a look at the code, and think
I'll be able to propose a doc change this coming weekend at the latest.


Thanks for looking at this and bringing it up for discussion!
Suggestions for improvements are very welcome and appreciated.



Now to what actually happens in the example above:

  # gcc -c -o x.o -Wall x.c
  x.c: In function ‘foo’:
  x.c:9:24: warning: ‘%.2f’ directive writing between 4 and 317 bytes into a 
region of size 0 [-Wformat-length=]
 sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
^~~~
  x.c:9:5: note: format output between 15 and 954 bytes into a destination of 
size 64
 sprintf(buf, "%.2f %.2f %.2f", m->a, m->b, m->c);
 ^~~~

A couple of issues I find really confusing:

 - Where is the number of 317 bytes as the upper limit for a sole
   %.2f coming from?


317 is the number of bytes that printf("%f", -DBL_MAX) results in.
There's a bug in that the computation doesn't take precision into
account for %e or %g with unknown arguments so the maximum for
%.2f should actually be 312.  Let me fix that.
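
A quick way to see where the 317 comes from, on a typical glibc/IEEE
double setup, is to rely on snprintf returning the would-be length:

  #include <stdio.h>
  #include <float.h>

  int main (void)
  {
    /* 1 byte for the sign, 309 integral digits, the decimal point,
       and 6 digits of default precision: prints 317.  */
    printf ("%d\n", snprintf (0, 0, "%f", -DBL_MAX));
    return 0;
  }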


 - Where is the number of 954 bytes coming from for the full call?


It's the result of 3 * 317 plus the spaces (again, with the precision
bug).



 - What is a region of size 0?  Why 0?


Each directive writes into a region of the destination.  The start
of the region moves toward the end of the destination as bytes are
"added" until the remaining space is too small for the rest of the
format string or a directive, or until the format string has been
exhausted.  An example that might make it clearer is

  char buf[4];
  sprintf(buf, "i=%i", 12345);

We get:

  warning: ‘%i’ directive writing 5 bytes into a region of size 2
  note: format output 8 bytes into a destination of size 4



 - And what is the difference between a region and a destination?


Destination is the whole destination object (or buffer) of the call.
The region is the area of the destination that the output of each directive
is being written into (as per the above).



I'll see what I can do about documentation; any input on the above
related to that will be helpful.

And something tells me that there may be in issue with the diagnostics
code?


It could be, though at the moment I'm not aware of any.  I am
grateful for the testing people do on their own and for this kind
of feedback.  There's only so much that compiling software can
reveal.

FWIW, some time ago in bug 77696 David brought up some related
issues and questions.  I made an effort to describe the logic
behind the wording choices made in the text of the diagnostics
and the common theme between their various forms.  It might be
helpful (to all) for you to review the bug and provide additional
input on the wording of the diagnostics there.

Certainly if you find what looks like a bug or where you are not
sure please open an issue for it in Bugzilla or raise it here.

Thanks
Martin


Re: -Wformat-length and floats

2016-11-29 Thread Martin Sebor

I'll see what I can do about documentation; any input on the above
related to that will be helpful.


This might be a good time to mention that I've been thinking about writing
up more detailed documentation than what may be suitable for the GCC
manual and posting it either on the Wiki or someplace where users can
easily find it.  I should be able to get to it once all my patches
have been approved and committed and I'm done fixing bugs in them.
If you (or anyone else) have suggestions I'd be grateful for them.

Martin



Re: Diagnostics that should not be translated

2017-03-15 Thread Martin Sebor

On 03/15/2017 10:07 AM, Roland Illig wrote:

Am 15.03.2017 um 03:43 schrieb Martin Sebor:

Would using the existing internal_error{,no_backtrace}, and
sorry work for this? (I.e., not translating those.)  If my
count is right there are nearly 500 calls to these three in
GCC sources so I'm not sure that would put enough of a dent
in the 12K messages to translate but I'm even less sure that
adding yet another API would do even that much.


In relative terms the 500 may seem like not so much, but in absolute
terms they are still worth 1 to 3 days of translating work. Especially
since many of the terms in the internal errors are not as carefully
worded as the diagnostics targeted at the GCC user, and they contain
lots of technical terms for which there is no obvious translation.

For the German translation I took the easy path of making the German
internal errors exactly the same as the English ones, so whether this is
addressed or not won't make a difference for the upcoming release. It's
just that I think the other translators shouldn't need to go through the
same steps as I did.


I would suggest to open a request in Bugzilla then and explain
that internal_error{,no_backtrace} strings don't need to be
translated.  (From Richard's reply it sounds like the "sorry"
ones still do).

Personally, I think it's less work for everyone not to have to
worry about translating these so it seems like a win-win.  It
would be helpful to put in place some sort of a checker to catch
some of these problems (or unnecessary annotation if there's
consensus about your proposal) early, during development.

Since there have been a number of general suggestions recently
to improve how GCC deals with internationalization it might
also be helpful to summarize those that end up adopted on
the GCC Diagnostic Guidelines Wiki:

  https://gcc.gnu.org/wiki/DiagnosticsGuidelines

Martin


Re: Diagnostics that should not be translated

2017-03-14 Thread Martin Sebor

On 03/12/2017 04:51 PM, Roland Illig wrote:

Hi,

the gcc.pot file currently contains more than 12000 messages to be
translated, which is a very high number. Many of these messages are
diagnostics, and they can be categorized as follows:

* errors in user programs, reported via error ()
* additional info for internal errors, reported via error ()
* internal errors, reported via gfc_internal_error ()
* others

In my opinion, there is no point in having internal error messages
translated, since their only purpose is to be sent back to the GCC
developers, instead of being interpreted and acted upon by the GCC user.
By not translating the internal errors, the necessary work for
translators can be cut down by several hundred messages.

Therefore the internal errors should not be translated at all.
https://gcc.gnu.org/codingconventions.html#Diagnostics currently
mentions a few ways of emitting diagnostics, but there is no way to
produce untranslated diagnostics. Therefore I'd like to have a new
function error_no_i18n that is used instead of error for nonfatal
internal errors, like this:

@code{error_no_i18n} is for diagnostics that are printed before invoking
internal_error. They are not translated.


AFAIK, internal errors are reported using the internal_error
or internal_error_no_backtrace APIs.

Would using the existing internal_error{,no_backtrace}, and
sorry work for this? (I.e., not translating those.)  If my
count is right there are nearly 500 calls to these three in
GCC sources so I'm not sure that would put enough of a dent
in the 12K messages to translate but I'm even less sure that
adding yet another API would do even that much.

There are 20-some-odd functions to report diagnostics in GCC
(counting those in diagnostic-core.h).  I haven't used them all
or even understand all their differences and use cases yet so
I would rather no add more to the list if it can be helped.

Martin



Re: [wwwdocs, PATCH] C++ terminology: the One Definition Rule in diagnostics

2017-03-14 Thread Martin Sebor

On 03/14/2017 08:18 AM, David Malcolm wrote:

Looking at PR ipa/8, which notes that ipa-devirt.c has two trailing
spaces in:

if (warning_at (
  DECL_SOURCE_LOCATION(TYPE_NAME (DECL_CONTEXT (vtable->decl))),
  OPT_Wodr,
  "virtual table of type %qD violates "
  "one definition rule  ",
  DECL_CONTEXT (vtable->decl)))

and thus emits:

  foo.cc: virtual table of type 'foo' violates one definition rule

...I think the wording here is a little confusing to a novice C++
coder: it could be misread that the compiler is complaining that there
are "definition rules", and that one of them has been somehow violated
("why aren't you telling me which one?"), as opposed to the correct
reading, that there is a rule that there must be one definition.


Heh.  I never would have thought of that.  But you're right.
It reminds me just how widespread the convention of dropping
articles is in diagnostics.  Would it make sense to put in
place a guideline to use them consistently?

  https://gcc.gnu.org/wiki/DiagnosticsGuidelines


There are 5 such diagnostics, all in ipa-devirt.c.

So I think the wording could be improved by adding a "the", and maybe
capitalizing, to make it:

  foo.cc: virtual table of type 'foo' violates the One Definition Rule

(giving a big hint to the user that the "One Definition Rule" is the thing that they 
should be typing into their search engine, and that "ODR" is an acronym form of it).

I've even seen it hyphenated in some places, which would make it:

  foo.cc: virtual table of type 'foo' violates the One-Definition Rule

or:

  foo.cc: virtual table of type 'foo' violates the one-definition rule



The gcc.po file has these forms:

  msgid "Merge C++ types using One Definition Rule."
  msgid "virtual table of type %qD violates one definition rule"
  msgid "virtual table of type %qD violates one definition rule  "
  msgid "type %qT violates the C++ One Definition Rule"
  msgid "type %qT itself violate the C++ One Definition Rule"
  msgid "%qD violates the C++ One Definition Rule "

I like the last one.  My only question is about capitalization.
GCC is inconsistent about it (not just in this case but in many
others).  Most messages are all lowercase, but some capitalize
established terms or keywords (FORTRAN), some even start with
a capital letter (FORTRAN).  The GCC Diagnostic Guidelines don't
mention capitalization and the GNU Coding Standard recommends
only that messages issued by interactive programs start with
a capital letter and others don't.  It would be nice to have
clarity about what the convention is (or what we agree it
should be) and eventually converge on it in all sources.

Martin



Here's a patch to the terminology part of our conventions, which suggests adding a 
"the" and capitalizing (but without the hyphen).


Thoughts?
Dave





Re: question about GCC Wiki

2017-07-29 Thread Martin Sebor

On 07/29/2017 07:37 AM, Segher Boessenkool wrote:

On Mon, Jul 24, 2017 at 01:08:35PM -0600, Martin Sebor wrote:

...or more precisely, about MoinMoin hyperlink formatting.

I'd like to insert hyperlinks to anchors in the GCC HTML manual
on some Wiki pages.  Specifically, I'd like to be able to link
to the description of a command line option or an attribute,
and I'd also like to render the text of the link in monospace
font.  But try as I might, I haven't been able to convince Moin
Moin to do this for me.  (The usual {{{[link]}}} is documented
to disable the link so that's out.  Ditto with the backtick.)


[[https://some/link|text to put here|class=monospace]]

and define  a.monospace  in the css?


Thanks for the suggestion.  I know how to do this in static HTML
(though there I can simply put the link in between ) but
I have no idea how to make it happen in MoinMoin.  The manual makes
it sound like the admin would need to first install some sort of
a theme and then I would customize it.  That seems like a lot of
trouble to make a piece of text monospace.

Martin


void function declared attribute const

2017-08-02 Thread Martin Sebor

Hi Honza,

While testing improvements to GCC attribute handling I came
across the warning below:

In file included from 
/ssd/src/gcc/81544/libstdc++-v3/src/c++98/mt_allocator.cc:31:0:
/ssd/build/gcc-81544/x86_64-pc-linux-gnu/libstdc++-v3/include/ext/mt_allocator.h:359:43: 
warning: ‘const’ attribute on function returning ‘void’ [-Wattributes]

   _M_destroy_thread_key(void*) throw ();
  ^

Git log shows you added the attribute in r146330 and there was
a lively debate preceding the change on the libstdc++ mailing
list:
  https://gcc.gnu.org/ml/libstdc++/2009-04/msg00053.html

A patch including this function was posted here:
  https://gcc.gnu.org/ml/libstdc++/2009-04/msg00111.html

I've re-read much of the discussion but couldn't find my answer
there so I'm wondering if you could help me understand the
rationale for it.  (Btw., I understand what the attribute does,
I'm just not sure why it's helpful or even used in this instance.)

The function is declared like this in the header:

  // XXX GLIBCXX_ABI Deprecated
  _GLIBCXX_CONST void
  _M_destroy_thread_key(void*) throw ();

and defined like so in mt_allocator.cc:

  void
  __pool::_M_destroy_thread_key(void*) throw () { }

So the definition trivially meets the requirements on a const
function, and calls to it will be eliminated (with optimization).
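
For illustration (a sketch based on the attribute's documented
semantics): since a const function may have no effect other than its
return value, a call whose result is unused can be deleted outright:

  void f (void) __attribute__ ((const));

  void g (void)
  {
    f ();   /* may be removed entirely when optimizing */
  }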

My question is: since the function is deprecated and its
definition only provided for ABI compatibility, should calls
to it ever actually be compiled with future revisions of GCC?

I mean, since it's an implementation-only function, there should
be no code outside of libstdc++ that makes direct calls to it.
Code that did make calls to it (e.g., as a result of macro or
inline expansion) was presumably compiled into object files by
older versions of GCC (before the deprecation and before the
addition of the attribute) and won't benefit from the const
attribute now.  And since the function is deprecated, no code
newly compiled with GCC should make calls to it, either directly
or otherwise.  (Would using attribute deprecated on the function
make sense?)

The reason for my question is to understand if the warning is
justified (it's based on the documentation of attribute const:
"It does not make sense for a const function to return void.")

If it does make sense to declare a function const that returns
void then I'll remove the warning and update the manual and
mention this use case.  Otherwise, if you confirm that the
function shouldn't be called in new code I'll submit a patch
to remove the const attribute.

Thanks
Martin


Re: default function alignment

2017-08-02 Thread Martin Sebor

On 08/02/2017 11:37 AM, Joseph Myers wrote:

On Wed, 2 Aug 2017, Martin Sebor wrote:


If there is no way, would enhancing target-supports.exp to
include a header that defines the macro be an option?  (I assume that would
be gcc/target.h for FUNCTION_BOUNDARY).


target.h is for target hooks, not target macros, and we want to move away
from target macros, so adding dependencies on the existence of a macro
with this value doesn't seem like a good idea (anyway, FUNCTION_BOUNDARY
can depend on command-line options passed to the compiler, so getting it
from a compiler header can't possibly work).

We have -fbuilding-libgcc to pass configuration to target library builds
that shouldn't otherwise be needed in normal user code built by GCC (via
defining extra predefined macros if that option is passed).  I suppose you
could have something like that to provide such information to the
testsuite.


Thanks.  That's good to know about.  To keep the patch from getting
too big I think I might just take a chance and hope there is no maximum
function alignment.  If there is I'll deal with the fallout later.

Martin


Re: [Bug web/?????] New: Fwd: failure notice: Bugzilla down.

2017-08-15 Thread Martin Sebor

On 08/15/2017 10:27 AM, Joseph Myers wrote:

On Tue, 15 Aug 2017, Martin Sebor wrote:


It looks like the data loss extends beyond 8/14.  Bug 81840
was created Sunday afternoon but is not in the database:

  https://gcc.gnu.org/ml/gcc-bugs/2017-08/msg01303.html

(Strangely, 81841 is there, as is 81839.)


That's another 81839 replacing the original 81839.  As I noted on
overseers, the cut-off for GCC Bugzilla appears to be between
<https://gcc.gnu.org/ml/gcc-bugs/2017-08/msg01289.html> and
<https://gcc.gnu.org/ml/gcc-bugs/2017-08/msg01290.html>, early Sunday
morning UTC (and for sourceware Bugzilla it may well be similar).


Thanks for the update.  Can I subscribe to this list?

(It might be helpful to post an announcement of what's going on
here for those of us who aren't subscribed to the overseers list
or even are aware of its archives.)

Martin



Re: [Bug web/?????] New: Fwd: failure notice: Bugzilla down.

2017-08-15 Thread Martin Sebor

On 08/15/2017 07:27 AM, Jonathan Wakely wrote:

On 15 August 2017 at 04:10, Martin Sebor wrote:

On 08/14/2017 04:22 PM, Eric Gallager wrote:


I'm emailing this manually to the list because Bugzilla is down and I
can't file a bug on Bugzilla about Bugzilla being down. The error
message looks like this:



Even if it were possible, there wouldn't be any point in filing a bug
that bugzilla's down, and so not much point emailing gcc-bugs either
(since that's for bugzilla mail).  Using gcc@ seems like the right
list to me.

N.B. since the server is being restored from a backup all of
yesterday's changes to bugzilla have been lost, including all Richi's
7.2 release changes, and Eric's housekeeping.

I don't suggest redoing all that work until all services are fully
restored, in case it's lost again.


It looks like the data loss extends beyond 8/14.  Bug 81840
was created Sunday afternoon but is not in the database:

  https://gcc.gnu.org/ml/gcc-bugs/2017-08/msg01303.html

(Strangely, 81841 is there, as is 81839.)

Is there a plan to restore the lost data somehow or should
we try to do that ourselves for the bugs we know about?

Martin


tests for GCC internal functions

2017-08-16 Thread Martin Sebor

Is there a setup for writing and running as part of the test
suite unit tests that exercise internal GCC functions like
error and warning?  I ask because of a couple of bugs that
were recently reported for the %s directive in GCC's
diagnostics (81859 and 81586) that were only caught by running
valgrind on GCC, but that could have been caught by GCC's own
tests for these APIs if they existed.

I can't find any except some in the gcc.dg/plugin directory
(though they weren't written to exercise the functions but
rather the plugin interface).  Would adding a plugin for
this sort of thing be the right way to do it?

Thanks
Martin


Re: tests for GCC internal functions

2017-08-16 Thread Martin Sebor

On 08/16/2017 09:20 AM, David Malcolm wrote:

On Wed, 2017-08-16 at 16:51 +0200, Marek Polacek wrote:

On Wed, Aug 16, 2017 at 08:46:20AM -0600, Martin Sebor wrote:

Is there a setup for writing and running as part of the test
suite unit tests that exercise internal GCC functions like
error and warning?  I ask because of a couple of bugs that
were recently reported for the %s directive in GCC's
diagnostics (81859 and 81586) that were only caught by running
valgrind on GCC, but that could have been caught by GCC's own
tests for these APIs if they existed.

I can't find any except some in the gcc.dg/plugin directory
(though they weren't written to exercise the functions but
rather the plugin interface).  Would adding a plugin for
this sort of thing be the right way to do it?


Isn't -fself-test the thing you're looking for?


Indeed; looking at those bugs, it looks like you want to be adding
to/extending the selftests near end of pretty-print.c (that way you can
easily re-run them under valgrind via "make selftest-valgrind");
probably in selftest::test_pp_format, which already has a little
coverage for %s.


You're both right, thanks!  It didn't occur to me to look in
the file I'm changing where there already are a bunch of self
tests for the pretty printer.

Martin



Re: [Bug web/?????] New: Fwd: failure notice: Bugzilla down.

2017-08-14 Thread Martin Sebor

On 08/14/2017 04:22 PM, Eric Gallager wrote:

I'm emailing this manually to the list because Bugzilla is down and I
can't file a bug on Bugzilla about Bugzilla being down. The error
message looks like this:


Bugzilla and the rest of gcc.gnu.org have been down much of
the afternoon/evening due to a hard drive upgrade (the old disk
apparently failed).  You're not the only one who found out about
it the hard way.  I (and I suspect most others) also discovered
it when things like Git and SVN (and Bugzilla) stopped working.

I've CC'd the gcc list to let others know (not sure what list
to subscribe to in order to get a heads up on these kinds of
maintenance issues).

Martin



Software error:

Can't connect to the database.
Error: Can't connect to local MySQL server through socket
'/var/lib/mysql/mysql.sock' (2)
  Is your database installed and up and running?
  Do you have the correct username and password selected in localconfig?


For help, please send mail to the webmaster
(sourcemas...@sourceware.org), giving this error message and the time
and date of the error.

Unfortunately, the email address listed for the webmaster,
, sends me its own error message when I
try to email it, because the address apparently doesn't exist. I'm
forwarding a copy of the failure email with this message. Is there a
different address I should be emailing instead?

Thanks,
Eric Gallager

-- Forwarded message --
From: mailer-dae...@sourceware.org
Date: 14 Aug 2017 22:03:54 -
Subject: failure notice
To: eg...@gwmail.gwu.edu

Hi. This is the qmail-send program at sourceware.org.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

:
Sorry, no mailbox here by that name. (#5.1.1)

--- Below this line is a copy of the message.

Return-Path: 
Received: (qmail 24677 invoked by uid 89); 14 Aug 2017 22:03:52 -
Authentication-Results: sourceware.org; auth=none
X-Virus-Checked: by ClamAV 0.99.2 on sourceware.org
X-Virus-Found: No
X-Spam-SWARE-Status: No, score=0.5 required=5.0
tests=RCVD_IN_DNSWL_NONE,RCVD_IN_SORBS_SPAM,SPF_PASS autolearn=no
version=3.3.2 spammy=
X-Spam-Status: No, score=0.5 required=5.0
tests=RCVD_IN_DNSWL_NONE,RCVD_IN_SORBS_SPAM,SPF_PASS autolearn=no
version=3.3.2
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on sourceware.org
X-Spam-Level:
X-HELO: mail-wm0-f42.google.com
Received: from mail-wm0-f42.google.com (HELO mail-wm0-f42.google.com)
(74.125.82.42)
 by sourceware.org (qpsmtpd/0.93/v0.84-503-g423c35a) with ESMTP; Mon,
14 Aug 2017 22:03:51 +
Received: by mail-wm0-f42.google.com with SMTP id i66so3361599wmg.0
for ; Mon, 14 Aug 2017 15:03:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=gwmail-gwu-edu.20150623.gappssmtp.com; s=20150623;
h=mime-version:from:date:message-id:subject:to;
bh=A6iFIzlnSazSObmM6Wb7tDW+ZAb4ojY1zG6bxpdCLCQ=;
b=cvRTbJxAB1UKTO01xqOGTtvgLjbLLOumX/FUdM3ruLkvMpJx6QiO7X51cU6X6jcx24
 ReoVRxaW8NwWcVJts9JB5tPJSg/8+ljL8ywMoa9Aa6S1MQ4PS246aVgKM5aUNmaB1E+u
 NrQMAmsxixJG/wmrAnSpQ/imIt7XNHRBNnZ5/B1JiGAt3ChjHwTNKV/9W4fxydsJL9BE
 iL9D/duAl6C/RAvBPQ5BC0Kv+/HtzhH8bwgElbjP93hMJk/nAfceI7Rh/4zIgY5V6ZHS
 E7LzrUetB7aeyzdlTy6i/3Lsyb0wnUuocZdZtGndQD24WSngk3fInlHwWrcUVdTpRMQ5
 e4tg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
d=1e100.net; s=20161025;
h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
bh=A6iFIzlnSazSObmM6Wb7tDW+ZAb4ojY1zG6bxpdCLCQ=;
b=GQbdg3EVSV+aMfKeSffGxae54jos3nNUt3qQWoSthUOEFy57YAeV5Fn86IOYLTw4fI
 gZKi7samoFHs0yHdF4sawi+xVMfZMzGPqGrJjsAIHYSE9T+o9vx0cEJPhPs0rswqinCO
 WovmT8Kh1jNEaIyxLwZGk07lt5oyWTk72zaT+CxWjamKbLJzDfqWMfpahDuTVI74QQie
 m8qtMTBQjek1Di51mxiBcuaLlr1xNqwrVS3JQEjdl2NCOBEDodLosGkUJCFR29hw6EAK
 ag3Lo96yUeNlfjOXv8/FYbCTrbcrhxbms02n0p3oK3cSHkoqZKcUOQQJzuNq0c38W9PU
 6A0w==
X-Gm-Message-State: AHYfb5ilSTxPFsi8yTAJsSJ6LpZu+DahzJygxoF6Q5usa2PIRZDQhD5i
b5akwEZnwfnttac6yAR/os3Kbj3Gqwbh
X-Received: by 10.80.215.68 with SMTP id i4mr25610854edj.258.1502748228995;
 Mon, 14 Aug 2017 15:03:48 -0700 (PDT)
MIME-Version: 1.0
Received: by 10.80.129.227 with HTTP; Mon, 14 Aug 2017 15:03:48 -0700 (PDT)
From: Eric Gallager 
Date: Mon, 14 Aug 2017 18:03:48 -0400
Message-ID: 

Re: Killing old dead bugs

2017-07-17 Thread Martin Sebor

On 07/17/2017 02:14 AM, Yuri Gribov wrote:

Hi Mikhail,

On Sun, Jul 2, 2017 at 6:39 PM, Mikhail Maltsev  wrote:

Hi. Yes, bug maintenance is appreciated. See this message and replies
to it: https://gcc.gnu.org/ml/gcc/2016-04/msg00258.html .


Replies in your link suggest to leave a final comment in bugs with
explanatory suggestion to close them so that maintainers who read
gcc-bugs list hopefully notice them and act appropriately.
Unfortunately I found this to _not_ work in practice. Below you can
see a list of bugs I've investigated (and often bisected) in the past
weeks - none of them were closed by maintainers (or at least
commented).

So I'm afraid we have to conclude that there's no working process to
close stale bugs in place (which may be one of the reasons of bugcount
growth).


The informal process that some (most?) of us have been following
is to close them with a comment explaining our rationale.  It's
good to fill in the Known to fail/Known to work versions if they
can be determined.  Mentioning the commit that fixed the bug as
you did for the bugs below is ideal.  Adding a test case if one
doesn't exist in the test suite is also very useful, though quite
a bit more work.  In my experience, if a bug is closed that should
stay open, someone usually notices pretty quickly and reopens it,
so I wouldn't be too worried about doing something wrong.

The process for managing bugs is in more detail described here:

  https://gcc.gnu.org/bugs/management.html

If you think it should be clarified in some way please feel free
to send in a patch.

Martin



* Bug 41992 - ICE on invalid dereferencing of void *
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00860.html)
* Bug 63245 - renderMemorySnippet shouldn't show more bytes than the
underlying type
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00645.html)
* Bug 61693 - [asan] is not intercepting aligned_alloc
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00643.html)
* Bug 61771 - Test failures in ASan testsuite on ARM Linux due to FP
format mismatch between libasan and GCC
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00646.html)
* Bug 78028 - ASAN doesn't find memory leak
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00653.html)
* Bug 55316 - gcc/libsanitizer/asan/asan_linux.cc:70:3: error: #error
"Unsupported arch"
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00636.html)
* Bug 78654 - ubsan can lead to excessive stack usage
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00640.html)
* Bug 60892 - GCC (libsanitizer) fails to build with Linux 2.6.21
headers (https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00649.html)
* Bug 61995 - gcc 4.9.1 fails to compile with error in libsanitizer
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00648.html)
* Bug 80027 - ASAN breaks DT_RPATH $ORIGIN in dlopen()
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00787.html)
* Bug 54123 - inline functions not optimized as well as static inline
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg01321.html)

-Y





Re: Missed optimization with const member

2017-07-10 Thread Martin Sebor

On 07/07/2017 06:26 AM, Ion Gaztañaga wrote:

On 05/07/2017 17:24, Martin Sebor wrote:


[*] While the example (copied below) is valid, accessing the object
after somefunction() has returned via a reference or pointer to it
is not.

   void somefunction(const Object& object)
   {
     void* p = const_cast<Object*>(&object);
     object.~Object();
     new(p) Object();
   }


I think it's problematic as explained in p0532r0.pdf. We construct and
destroy objects in the internal buffer of std::vector and we don't
update the pointer every time. I don't see myself understanding when
std::launder must be used; it looks like too expert-grade a feature.


The problem with vector is new in C++ 17 and caused by lifting
the assignability requirement without fully considering the
ramifications.

But there are a number of intertwined problems here and this
is just one of them.   The text that makes references and
pointers invalid was added in C++ 03 to fix one problem (CWG
issue #89).  Launder tries to patch over a problem caused by
introducing optional in C++ 14, again without considering CWG
89.  As a result of its narrow focus (as Niko's paper points
out) it doesn't fix other manifestations of it.  CWG issue
#2182 describes a related defect in arithmetic involving
pointers to individual objects constructed at consecutive
locations in the same block of memory that launder doesn't
do anything for. As Niko's paper also highlights, launder
isn't a complete or, IMO, even a very good solution to
the problems.  It adds even more complexity without solving
all the underlying problems.  Adding more, complex features
that even experts have trouble understanding is what caused
these problems to begin with.
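
For reference, the textbook case std::launder was added for looks
something like this (C++ 17):

  #include <new>

  struct X { const int n; };

  void f ()
  {
    X x{1};
    X *p = new (&x) X{2};
    int a = p->n;                  // OK: p refers to the new object
    int b = std::launder (&x)->n;  // OK: laundered pointer does too
    // int c = x.n;                // undefined: the name x does not
  }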

Martin


timeouts/malloc failures in ada tests?

2017-07-07 Thread Martin Sebor

I see large numbers of timeouts in Ada tests on trunk in parallel
runs (make -j96) on x86_64.  Messages like the one below appear
in the logs, suggesting some sort of heap corruption.  I'm having
trouble reproducing it outside the rest of the test suite (i.e.,
by just running the Ada tests by themselves) but maybe I'm doing
it wrong.  Unless this is a known problem I can try to pinpoint
it closer if someone could share the magic spell to run just Ada
tests to speed up the debugging.

Is it (a known problem)?

Martin

PS I've been using make -j96 for a long time on this machine and
this a recent regression.  The machine has 56 cores, 200GB RAM,
and runs Fedora 25.

spawn -ignore SIGHUP /opt/notnfs/msebor/build/gcc-81117/gcc/gnatclean -c 
-q -n dispatch2_p
gnatclean: malloc.c:2403: sysmalloc: Assertion `(old_top == initial_top 
(av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && 
prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 
0)' failed.

WARNING: program timed out


Re: Could preprocessor warn for unsafe macros and side-effects?

2017-07-12 Thread Martin Sebor

On 07/11/2017 11:50 PM, sa...@hederstierna.com wrote:

Hi

Reading about macro pitfalls and eg duplication side-effects
https://gcc.gnu.org/onlinedocs/cpp/Macro-Pitfalls.html#Macro-Pitfalls

would it be possible to let the preprocessor generate warnings for any of these 
pitfalls?


The preprocessor has no knowledge of the language rules but it is
possible to detect some of these problems in the parser.  In fact,
a checker for one of the CERT problems was added not too long ago:
-Wmulti-statement-macros.  I would suggest to open an enhancement
request asking for features you would find particularly valuable.
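
For reference, the sort of thing -Wmulti-statement-macros catches
(tmp is assumed to be declared elsewhere):

  #define SWAP(x, y) tmp = x; x = y; y = tmp

  if (a < b)
    SWAP (a, b);   /* only "tmp = a;" is guarded by the if */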



Maybe all language specific parts are not known at this early preprocessing
stage, but possibly some info could be stored for use in later pass?

I'm thinking of eg. for "function-like macros" with arguments, checking

-Wmacro-side-effects

* IF a function-like macro expands/duplicates an argument more than once THEN
WARN if a function call is part of the argument
WARN if unary ++ or -- is used on a variable as part of the argument
WARN if an assignment operator = is part of the argument
WARN if a volatile variable is part of the argument
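
For instance, with an unsafe macro like the one from the CERT page
below:

  #define ABS(x) (((x) < 0) ? -(x) : (x))

  m = ABS (++n);   /* expands both branches: n is incremented twice */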

-Wmacro-operator-precedence

* WARN if a macro argument contains an expression with operator(s), and a _higher_ 
precedence operator is used within the macro on this argument, without 
parenthesis around

I'm not sure its even possible at preprocessing stage, but it would be nice to 
have,
I saw some static code analysis tools like Coverity detects these
https://www.securecoding.cert.org/confluence/display/c/PRE31-C.+Avoid+side+effects+in+arguments+to+unsafe+macros

Of course it might generate some false-positives so warning might not be 
enabled by default, maybe just -Wall or -Wextra,
but perhaps it's hard to solve, and I'm not sure where and how to implement the 
checking algorithm.


Those sound like good ideas.  Some of them and the challenges
with implementing them were discussed in the context of the
-Wmulti-statement-macros enhancement:

  https://gcc.gnu.org/ml/gcc-patches/2017-06/msg00064.html

Martin


Re: Killing old dead bugs

2017-07-18 Thread Martin Sebor

On 07/17/2017 02:25 PM, Yuri Gribov wrote:

On Mon, Jul 17, 2017 at 4:23 PM, Martin Sebor <mse...@gmail.com> wrote:

On 07/17/2017 02:14 AM, Yuri Gribov wrote:


Hi Mikhail,

On Sun, Jul 2, 2017 at 6:39 PM, Mikhail Maltsev <malts...@gmail.com>
wrote:


Hi. Yes, bug maintenance is appreciated. See this message and replies
to it: https://gcc.gnu.org/ml/gcc/2016-04/msg00258.html .



Replies in your link suggest to leave a final comment in bugs with
explanatory suggestion to close them so that maintainers who read
gcc-bugs list hopefully notice them and act appropriately.
Unfortunately I found this to _not_ work in practice. Below you can
see a list of bugs I've investigated (and often bisected) in the past
weeks - none of them were closed by maintainers (or at least
commented).

So I'm afraid we have to conclude that there's no working process to
close stale bugs in place (which may be one of the reasons of bugcount
growth).



The informal process that some (most?) of us have been following
is to close them with a comment explaining our rationale.
It's good to fill in the Known to fail/Known to work versions if they
can be determined.  Mentioning the commit that fixed the bug as
you did for the bugs below is ideal.  Adding a test case if one
doesn't exist in the test suite is also very useful, though quite
a bit more work.  In my experience, if a bug is closed that should
stay open, someone usually notices pretty quickly and reopens it,
so I wouldn't be too worried about doing something wrong.


Martin,

Firstly, thanks for detailed explanation.

What to do about bugs originating in upstream packages?  I noticed
they sometimes get closed with "RESOLVED MOVED" resolution
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58841) but often this
does not happen and they just hang in tracker forever for no good
reason.

Actually what I tried to emphasize is that it's impossible for a
casual commiter (who does not have maintainer access to Bugzilla i.e.
rights to close bugs) to contribute to project by cleaning stale bugs,
because requests to close them are mostly ignored (because
maintainers, obviously, have much more interesting work to do).


I take your point.  I didn't realize closing bugs was restricted.
Given the work you've done on the bugs below (and elsewhere) you
should be able to close them.  If you aren't and would like to be
able to, please request it by emailing overse...@gcc.gnu.org (at
least I think that's the right way to go about it), or follow up
here and I'm sure someone with the right karma will make it happen.

Martin




The process for managing bugs is in more detail described here:

  https://gcc.gnu.org/bugs/management.html

If you think it should be clarified in some way please feel free
to send in a patch.

Martin




* Bug 41992 - ICE on invalid dereferencing of void *
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00860.html)
* Bug 63245 - renderMemorySnippet shouldn't show more bytes than the
underlying type
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00645.html)
* Bug 61693 - [asan] is not intercepting aligned_alloc
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00643.html)
* Bug 61771 - Test failures in ASan testsuite on ARM Linux due to FP
format mismatch between libasan and GCC
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00646.html)
* Bug 78028 - ASAN doesn't find memory leak
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00653.html)
* Bug 55316 - gcc/libsanitizer/asan/asan_linux.cc:70:3: error: #error
"Unsupported arch"
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00636.html)
* Bug 78654 - ubsan can lead to excessive stack usage
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00640.html)
* Bug 60892 - GCC (libsanitizer) fails to build with Linux 2.6.21
headers (https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00649.html)
* Bug 61995 - gcc 4.9.1 fails to compile with error in libsanitizer
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00648.html)
* Bug 80027 - ASAN breaks DT_RPATH $ORIGIN in dlopen()
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg00787.html)
* Bug 54123 - inline functions not optimized as well as static inline
(https://gcc.gnu.org/ml/gcc-bugs/2017-07/msg01321.html)

-Y







question about GCC Wiki

2017-07-24 Thread Martin Sebor

...or more precisely, about MoinMoin hyperlink formatting.

I'd like to insert hyperlinks to anchors in the GCC HTML manual
on some Wiki pages.  Specifically, I'd like to be able to link
to the description of a command line option or an attribute,
and I'd also like to render the text of the link in monospace
font.  But try as I might, I haven't been able to convince Moin
Moin to do this for me.  (The usual {{{[link]}}} is documented
to disable the link so that's out.  Ditto with the backtick.)

The MoinMoin HelpOnLinking page mentions setting attributes
on links for some visual effects but it's short on details (it
refers to "some HTML reference" for more and I haven't been able
to find anything about this in the places I looked):

  Available attributes: class, title, target, accesskey (see
  some html reference if you want to know what they mean).

https://gcc.gnu.org/wiki/HelpOnLinking#Setting_attributes_of_the_.3Ca.3E_tag

Does anyone happen to know how to do what I want? (If it's even
possible in MoinMoin?)

Thanks
Martin

PS Is the GUI editor on the Wiki not working? (It doesn't work
for me.)


Re: Missed optimization with const member

2017-07-05 Thread Martin Sebor

On 07/04/2017 06:02 PM, Geza Herman wrote:

Hi,

I've included a small program at the end of my email.

Here's what happens: in callInitA(), an Object is put onto the stack
(which has a const member variable, initialized to 0). Then somefunction
is called (which is intentionally not defined). Then ~Object() is called,
which has an "if" with a not-immediately-obvious, but always false,
condition. Compiling with -O3, everything gets inlined.

My question is about the inlined ~Object(). As m_initType is always 0,
why does GCC not optimize the destructor away? GCC inserts code that
checks the value of m_initType.

Is it because such construct is rare in practice? Or is it hard to do an
optimization like that?

I've checked other compilers (clang, icc, msvc), none of them optimized
this.

(I've tried gcc 5.1, 6.1, 7.1)
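
(The program itself was not preserved in this archive; a compile-only
sketch of the pattern described above, with illustrative names:)

  struct Object
  {
    Object (): m_initType (0) { }
    ~Object () { if (m_initType != 0) release (); }  // always false here
    void release ();
    const int m_initType;
  };

  void somefunction (const Object &);   // intentionally not defined

  void callInitA ()
  {
    Object o;
    somefunction (o);
  }   // the inlined ~Object() still tests m_initType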


I agree this would be a useful optimization.  I recently raised
bug 80794 with this same observation/request.  There are some
concerns with it (Jonathan gave a partial example(*)) but I don't
think they necessarily rule it out for the reasons discussed in
the bug (my comments 2 and 5).  It would be helpful if you added
your observations/interest there.

In general, GCC doesn't implement these kinds of optimizations
(based on the const qualifier) because its optimizer doesn't have
a way to distinguish between a const reference (or pointer) to
a non-const object that can be safely modified by casting
the constness away and a reference to a const-qualified object
that cannot.  With this limitation overcome it should be possible
to implement this and other interesting optimizations that exploit
the const qualifier (another opportunity is discussed in bug 81009).
To avoid breaking existing (though I suspect rare) programs that
play tricks like those in the examples GCC might need to expose
its internal annotation to users to decorate functions and either
opt in to the optimization or opt out of it.  The annotation could
take the form of a new attribute similar to that already provided
by some compilers.  For instance some make it possible to annotate
function arguments as "in" or "out" in addition to const.  Another
possibility is to make use of restrict in C++.

Martin

[*] While the example (copied below) is valid, accessing the object
after somefunction() has returned via a reference or pointer to it
is not.

  void somefunction(const Object& object)
  {
    void* p = const_cast<Object*>(&object);
    object.~Object();
    new(p) Object();
  }

In simple cases like the one above, decorating the reference with
the __restrict keyword would invalidate the code and make the
optimization safe.

  void somefunction(const Object& __restrict object);


Re: suggestion: c compiler warning for failure to test result

2017-04-25 Thread Martin Sebor

On 04/25/2017 02:35 PM, Joe Perches wrote:

A possibly useful addition similar to:

__attribute__((warn_unused_result))

might be

__attribute__((warn_untested_result))

for things like allocation failures that
are not verified before use.


I agree that this would be a useful feature.  In fact, I've been
thinking about implementing something like it, though not quite
as general.  (My initial thought was to key the warning off
an existing attribute like alloc_size for functions that aren't
also decorated with returns_nonnull.)  With warn_untested_result
even non-allocation functions (such as fopen) could be decorated,
so that seems like a better approach.

Can you please open an enhancement request in Bugzilla?

Thanks
Martin



For instance:

void *malloc(size_t size);

could become

void * __attribute__((warn_untested_result)) malloc(size_t size)

so that

#include <stdlib.h>

struct foo {
int bar;
};

struct foo *alloc_foo(void)
{
struct foo *baz = malloc(sizeof(struct foo));
baz->bar = 1;
return baz;
}

The compiler could emit a warning on the set
of baz->bar as an intermediate test of baz
is not performed before any use of baz.

struct foo *alloc_foo(void)
{
struct foo *baz = malloc(sizeof(struct foo));
if (baz) baz->bar = 1;
return baz;
}

Similarly, alloc_foo could use that new attribute.





default function alignment

2017-08-02 Thread Martin Sebor

I'm writing a test to verify that multiple attribute aligned
specifiers on a function declaration are handled correctly
(bug 81566).  In the test I need to know the default function
alignment for the target(*).  I've the FUNCTION_BOUNDARY macro
used to set the default alignment for a function (IIUC).  If
that is the right macro, or if there is a more appropriate
one, is there a way to get at its value in a unit test?

If there is no way, would enhancing target-supports.exp to
include a header that defines the macro be an option?  (I assume that would
be gcc/target.h for FUNCTION_BOUNDARY).

Thanks
Martin

[*] I believe I need this to avoid test failures on targets
where the minimum function alignment is greater than 1 byte.
I might be able to use the largest known alignment (AFAICS,
that's 16 bytes on IA64 and the FR-V) as the minimum unless
there is also a target-specific maximum that I don't know
about.  Is there?  (And if so, what's the macro for it?)
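
A sketch of the kind of declaration the test exercises (which of the
two alignments should take effect is exactly what bug 81566 is about):

  __attribute__ ((aligned (2))) __attribute__ ((aligned (4)))
  void f (void);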


Re: help with PR78809 - inline strcmp for small constant strings

2017-08-06 Thread Martin Sebor

On 08/04/2017 06:59 AM, Prathamesh Kulkarni wrote:

Hi,
I was having a look at PR78809.
For the test-case:
int t1(const char *s) { return __builtin_strcmp (s, "a"); }

for aarch64, trunk with -O2 generates:
t1:
        adrp    x1, .LANCHOR0
        add     x1, x1, :lo12:.LANCHOR0
        b       strcmp

For i386, it seems strcmp is expanded inline via cmpstr optab by
expand_builtin_strcmp
if one of the strings is constant. Could we similarly define a cmpstr
pattern for AArch64?

For constant strings of small length (up to 3?), I was wondering if
it'd be a good idea to
manually unroll the strcmp loop, similar to the __strcmp_* macros in bits/string.h?
For eg in gimple-fold, transform
x = __builtin_strcmp(s, "ab")
to
x = s[0] - 'a';
if (x == 0)
{
  x = s[1] - 'b';
  if (x == 0)
x = s[2];
}


IMO, in general, it's better to do this sort of expansion into "low
level" code late, after higher-level optimizations have been done.
Otherwise it can defeat such optimizations in passes that aren't
prepared to handle the low level code.  For instance, in the
following, the call to strcmp could readily be eliminated in
tree-ssa-strlen because the pass knows both of the strings being
compared.  But if the strcmp() call were expanded as you show above
the optimization couldn't be implemented nearly as easily (not
without essentially undoing the work previously done by the folder).

  void f (char *s)
  {
strcpy (s, "a");

if (strcmp (s, "ab") > 0)
  abort ();
  }

Not only that, unless all the arguments are known, the early
expansion can also defeat checks for their validity.  For example,
in strcmp (a, b), if a and b point to the same non-null terminated
(or even uninitialized) array the call is folded into zero without
a warning.  If/when tree-ssa-strlen (or some other pass) is enhanced
to detect such invalid strings this bug could be easily detected but
only as long as the strcmp call makes it to the pass untransformed.

Martin

PS The examples above are only hypothetical because tree-ssa-
strlen doesn't at present handle strcmp.  Bugs 81703 and
81704 are examples where folding does get in the way today.

