Re: [PATCH, ivopt] Try aligned offset when get_address_cost

2014-08-06 Thread Zhenqiang Chen
On 5 August 2014 21:59, Richard Biener richard.guent...@gmail.com wrote:
 On Mon, Aug 4, 2014 at 11:09 AM, Zhenqiang Chen zhenqiang.c...@arm.com 
 wrote:


 -Original Message-
 From: Bin.Cheng [mailto:amker.ch...@gmail.com]
 Sent: Monday, August 04, 2014 4:41 PM
 To: Zhenqiang Chen
 Cc: gcc-patches List
 Subject: Re: [PATCH, ivopt] Try aligned offset when get_address_cost

 On Mon, Aug 4, 2014 at 2:28 PM, Zhenqiang Chen
 zhenqiang.c...@arm.com wrote:
  Hi,
 
  For some TARGET, like ARM THUMB1, the offset in load/store should be
  naturally aligned.  But in function get_address_cost, when computing
  max_offset, it only tries byte-aligned offsets:
 
    ((unsigned HOST_WIDE_INT) 1 << i) - 1
 
  which cannot satisfy the thumb_legitimate_offset_p check called from
  thumb1_legitimate_address_p for HImode and SImode.
 
  The patch adds an additional try with an aligned offset:
 
    ((unsigned HOST_WIDE_INT) 1 << i) - GET_MODE_SIZE (address_mode).
 
  Bootstrapped with no make check regression on X86-64.
  No make check regression on qemu for Cortex-M0 and Cortex-M3.
  For Cortex-M0, no performance changes with CoreMark and Dhrystone.
  CoreMark code size is ~0.44% smaller, EEMBC v2 code size is ~0.22%
  smaller, and CSiBE code size is ~0.05% smaller.
 
  OK for trunk?
 
  Thanks!
  -Zhenqiang
 
  ChangeLog
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@arm.com
 
  * tree-ssa-loop-ivopts.c (get_address_cost): Try aligned offset.
 
  testsuite/ChangeLog:
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@arm.com
 
  * gcc.target/arm/get_address_cost_aligned_max_offset.c: New
 test.
 
  diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
  index 3b4a6cd..562122a 100644
  --- a/gcc/tree-ssa-loop-ivopts.c
  +++ b/gcc/tree-ssa-loop-ivopts.c
  @@ -3308,6 +3308,18 @@ get_address_cost (bool symbol_present, bool
  var_present,
XEXP (addr, 1) = gen_int_mode (off, address_mode);
if (memory_address_addr_space_p (mem_mode, addr, as))
  break;
  + /* For some TARGET, like ARM THUMB1, the offset should be
  +    naturally aligned.  Try an aligned offset if address_mode
  +    is not QImode.  */
  + off = (address_mode == QImode)
  +   ? 0
  +   : ((unsigned HOST_WIDE_INT) 1 << i)
  +     - GET_MODE_SIZE (address_mode);
  + if (off > 0)
  +   {
  +     XEXP (addr, 1) = gen_int_mode (off, address_mode);
  +     if (memory_address_addr_space_p (mem_mode, addr, as))
  +       break;
  +   }
 Hi, why not just check address_mode != QImode?  Setting off to 0 and then
 checking it seems unnecessary.

 Thanks for the comments.

 ((unsigned HOST_WIDE_INT) 1 << i) - GET_MODE_SIZE (address_mode) might be a
 negative value for any mode other than QImode.  A negative value cannot be
 max_offset, so we do not need to check it.

 For QImode, ((unsigned HOST_WIDE_INT) 1 << i) - GET_MODE_SIZE
 (address_mode) == ((unsigned HOST_WIDE_INT) 1 << i) - 1.  It is already
 checked, so there is no need to check it again.

 I think the compiler can optimize the patch into something like:

 diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
 index 3b4a6cd..213598a 100644
 --- a/gcc/tree-ssa-loop-ivopts.c
 +++ b/gcc/tree-ssa-loop-ivopts.c
 @@ -3308,6 +3308,19 @@ get_address_cost (bool symbol_present, bool
 var_present,
   XEXP (addr, 1) = gen_int_mode (off, address_mode);
   if (memory_address_addr_space_p (mem_mode, addr, as))
 break;
 + /* For some TARGET, like ARM THUMB1, the offset should be naturally
 +    aligned.  Try an aligned offset if address_mode is not QImode.  */
 + if (address_mode != QImode)
 +   {
 +     off = ((unsigned HOST_WIDE_INT) 1 << i)
 +           - GET_MODE_SIZE (address_mode);
 +     if (off > 0)
 +       {
 +         XEXP (addr, 1) = gen_int_mode (off, address_mode);
 +         if (memory_address_addr_space_p (mem_mode, addr, as))
 +           break;
 +       }
 +   }
 }
if (i == -1)
  off = 0;

 But is off now guaranteed to be the max value?  (1 << (i-1)) - 1 for
 small i is larger than (1 << i) - GET_MODE_SIZE (address_mode).

 That is, I think you want to guard this with 1 << (i - 1) >
 GET_MODE_SIZE (address_mode)?

Yes.  Without off > 0, it cannot be guaranteed that off is the max value.
With off > 0, it is guaranteed that

(1 << i) - GET_MODE_SIZE (address_mode) is greater than (1 << (i-1)) - 1.
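
A quick standalone illustration (not GCC code; it assumes a power-of-two
mode size of 4, as for SImode):

  #include <stdio.h>

  int main (void)
  {
    long size = 4;  /* stand-in for GET_MODE_SIZE (address_mode) */
    for (int i = 1; i < 8; i++)
      {
        long aligned = (1L << i) - size;       /* the new aligned try */
        long next_byte = (1L << (i - 1)) - 1;  /* next byte-aligned try */
        if (aligned > 0)
          printf ("i=%d: aligned=%ld > next byte-aligned=%ld\n",
                  i, aligned, next_byte);
      }
    return 0;
  }

Whenever the aligned candidate is positive and the mode size is a power
of two, it stays above the next byte-aligned candidate, so off is still
the maximum.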

 You don't adjust the negative offset side - why?

-((unsigned HOST_WIDE_INT) 1 << i) is already the min aligned offset.

Thanks!
-Zhenqiang

 Richard.


 Thanks,
 bin
  }
 if (i == -1)
   off = 0;
  diff --git
  a/gcc/testsuite/gcc.target/arm/get_address_cost_aligned_max_offset.c
  b/gcc/testsuite/gcc.target/arm/get_address_cost_aligned_max_offset.c
  new file mode 100644
  index 000..cc3e2f7
  --- /dev/null
  +++
 

Re: [PATCH, libfortran] Backport xmallocarray to 4.8/4.9 (CVE-2014-5044)

2014-08-06 Thread Tobias Burnus

Jakub Jelinek wrote:

On Sat, Aug 02, 2014 at 12:09:24AM +0300, Janne Blomqvist wrote:

--- libgfortran/runtime/memory.c.jj 2014-06-18 08:50:33.0 +0200
+++ libgfortran/runtime/memory.c2014-08-01 14:41:08.385856116 +0200
@@ -56,7 +56,9 @@ xmallocarray (size_t nmemb, size_t size)

if (!nmemb || !size)
  size = nmemb = 1;
-  else if (nmemb > SIZE_MAX / size)
+#define HALF_SIZE_T (((size_t) 1) << (__CHAR_BIT__ * sizeof (size_t) / 2))
+  else if (__builtin_expect ((nmemb | size) >= HALF_SIZE_T, 0)
+	   && nmemb > SIZE_MAX / size)
   {
     errno = ENOMEM;
     os_error ("Integer overflow in xmallocarray");

Nice, though as os_error() has the _Noreturn specifier the
__builtin_expect() is not necessary, right? In libgfortran.h we have

The reason for __builtin_expect here was to mark already the
nmemb > SIZE_MAX / size
computation as unlikely; the noreturn predictor will of course DTRT with the
{} block.


But there is a difference in probability between __builtin_expect and 
noreturn. __builtin_expect had until two years ago a probability of 
99%, now it has a probability of only 90% (which is tunable with a 
-param)* – while noreturn has a higher probability. Thus, at least if 
you had used

else if (unlikely(... && ...))
  os_error
you would have made the basic block with os_error more likely than
without the unlikely (alias __builtin_expect). However, I don't know
what happens with using unlikely(cond1) && cond2.


Tobias

* Or internally in the compiler only, by passing a third argument.
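
For reference, the fast-path trick as a standalone C sketch (just an
illustration of the idea, not the libgfortran code):

  #include <limits.h>
  #include <stdint.h>
  #include <stddef.h>

  /* If both operands fit in half the bits of size_t, their product
     cannot overflow, so the comparatively expensive division is only
     done on the unlikely path.  The size != 0 case is assumed to be
     handled by the caller in xmallocarray; the check here just keeps
     the sketch self-contained.  */
  #define HALF_SIZE_T (((size_t) 1) << (CHAR_BIT * sizeof (size_t) / 2))

  static int
  mul_overflows (size_t nmemb, size_t size)
  {
    return size != 0
           && (nmemb | size) >= HALF_SIZE_T
           && nmemb > SIZE_MAX / size;
  }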


PING – Re: [Patch, Fortran] -fcoarray=lib - support CRITICAL, prepare for locking support

2014-08-06 Thread Tobias Burnus
* PING * – for the patch, with the obvious change mentioned by Alessandro 
(i.e. using if (is_lock_type))?


Tobias

On 1 August 2014 21:57, Alessandro Fanfarillo wrote:

Hello,

I was implementing lock/unlock on the library side when I found a
possible problem in the patch:

if (is_lock_type == GFC_CAF_CRITICAL)
+    reg_type = sym->attr.artificial ? GFC_CAF_CRITICAL : GFC_CAF_LOCK_STATIC;
+  else
+    reg_type = GFC_CAF_COARRAY_STATIC;

the if statement cannot be true since is_lock_type is a boolean and
GFC_CAF_CRITICAL is 4.

Using if (is_lock_type) produces the right result for the lock registration.
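
A minimal standalone illustration of the problem (the enumerator value 4
is taken from the discussion above, the rest is made up):

  #include <stdbool.h>
  #include <stdio.h>

  enum { GFC_CAF_CRITICAL = 4 };

  int main (void)
  {
    bool is_lock_type = true;
    /* A bool only ever converts to 0 or 1, so comparing it against the
       enumerator value 4 is always false.  */
    printf ("%d\n", is_lock_type == GFC_CAF_CRITICAL);  /* prints 0 */
    return 0;
  }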


Regards

Alessandro

2014-07-28 14:37 GMT-06:00 Tobias Burnus bur...@net-b.de:

This patch implements -fcoarray=lib support for CRITICAL blocks and includes
some preparatory work for locking. In particular:

* Updated the documentation for locking/critical, minor cleanup. The patch
goes on top of the unreviewed patch
https://gcc.gnu.org/ml/fortran/2014-07/msg00155.html
* Add libcaf_single implementation for lock/unlock
* Add lock/unlock calls for CRITICAL
* Register static/SAVEd locking variables and locking variables for critical
sections.

Built and currently regtesting on x86-64-gnu-linux.
OK if it regtests successfully?

  * * *

Still to be done as follow up:
* Handling the registering of lock-type components in statically allocated
derived types
* Handling the registering of lock-type variables and components with
allocate and with implicit/explicit deallocate
* Calling lock/unlock function for those
* Test case for locking and critical blocks

Other coarray to-do items:
* Type-conversion test case missing
* Vector subscript library implementation + test cases
* Extending the documentation
* Issues with striding for coarray components of derived types
* Nonallocatable polymorphic coarrays and select type/associated
* Allocatable/pointer components of coarrays; co_reduce and co_broadcast

Tobias




Re: Patch for constexpr variable templates

2014-08-06 Thread Paolo Carlini

Hi,

On 08/06/2014 06:41 AM, Braden Obrzut wrote:
I can confirm that this is caused by a change to pt.c that happened, I 
think, the day before my last patch.


This can be fixed by first checking that the template is a function 
template at that line in pt.c.  Since variable templates can't be 
friends, it might also be suitable to skip that entire block if it is 
a function template.

Patch looks simple, then.

Should I submit that as a patch?

I suppose Jason would be glad to review and apply it!

Thanks,
Paolo.


Re: Fix build of *86*-linux-android with --enable-shared

2014-08-06 Thread Alexander Ivchenko
Thanks for looking at this.

Bootstrapped and reg-tested on x86_64-unknown-linux-gnu.



2014-08-06 0:18 GMT+04:00 Jeff Law l...@redhat.com:
 On 08/04/14 00:08, Alexander Ivchenko wrote:

 Hi,

 libcilkrts is compiled with -nostdlib, which means we have to
 explicitly specify the pthread library we should link with (e.g. we
 don't have such a problem with libgomp, because it is C).  And, indeed,
 -lpthread is hard-coded in the Makefile for cilkrts.  For Android
 this doesn't work, because libpthread is absent and pthreads are part of
 libc.

 I also noticed that the configure check for
 pthread_{,attr_}[sg]etaffinity_np always fails, because at the point
 where it is placed in configure.ac, -pthread is not yet set.  We just
 have to move this check after -pthread has been added to CFLAGS.  This patch
 addresses this as well.



 diff --git a/libcilkrts/ChangeLog b/libcilkrts/ChangeLog
 index 3881c82..ab10a0b 100644
 --- a/libcilkrts/ChangeLog
 +++ b/libcilkrts/ChangeLog
 @@ -1,3 +1,15 @@
 +2014-08-01  Alexander Ivchenko  alexander.ivche...@intel.com
 +
 + * configure.ac: Move pthread affinity test to the place where
 + '-pthread' passed to CFLAGS. Otherwise the test always fails.
 + (XCFLAGS): New variable for correctly passing
 + '-pthread'.
 + (XLDFLAGS): New variable for passing the correct pthread lib.
 + * configure: Regenerate.
 + * Makefile.am (AM_CFLAGS): Add $XCFLAGS.
 + (AM_LDFLAGS): Add $XLDFLAGS.
 + * Makefile.in: Regenerate.

 So can you confirm that you've bootstrapped this on x86_64-unknown-linux-gnu
 and that there were no regressions?  Also double-check the indention in the
 ChangeLog entry, though it may just be your mailer that has mucked that up.

 Once the bootstrap and regression test are OK, this is OK.

 jeff



Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Richard Biener
On Tue, 5 Aug 2014, Jeff Law wrote:

 On 08/05/14 08:36, Marek Polacek wrote:
  On Mon, Aug 04, 2014 at 02:04:36PM +0200, Richard Biener wrote:
Looks like .fre can optimize q - (q - 1) into 1:
<bb 2>:
q.0_3 = (long int) MEM[(void *)&i + 4B];
_5 = (long int) &i;
-  _6 = q.0_3 - _5;
-  t.1_7 = _6 /[ex] 4;
-  t ={v} t.1_7;
+  t ={v} 1;
i ={v} {CLOBBER};
return;

But associate_plusminus doesn't optimize it:
   else if (code == MINUS_EXPR
	    && CONVERT_EXPR_CODE_P (def_code)
	    && TREE_CODE (gimple_assign_rhs1 (def_stmt)) == SSA_NAME
	    && TREE_CODE (rhs2) == SSA_NAME)
     {
       /* (T)(P + A) - (T)P -> (T)A.  */
because gimple_assign_rhs1 (def_stmt) is not an SSA_NAME, but ADDR_EXPR
(it's &MEM[(void *)&i + 4B]).  Then there's the transformation
A - (A +- B) -> -+ B below, but that doesn't handle casts.

So - should I try to handle it in associate_plusminus?
   
   Yes please, with a (few) testcase(s).
  
  Ok, so the following adds the (T)P - (T)(P + A) -> (T)-A
  transformation.  It is based on the code hunk that does the
  (T)(P + A) - (T)P -> (T)A transformation.  The difference it makes is
  in the .optimized dump something like:
  
int fn2(int, int) (int p, int i)
{
  -  unsigned int p.2_2;
  +  unsigned int _3;
  int _4;
  -  unsigned int _5;
  unsigned int _6;
  -  int _7;
  
  <bb 2>:
  -  p.2_2 = (unsigned int) p_1(D);
  -  _4 = p_1(D) + i_3(D);
  -  _5 = (unsigned int) _4;
  -  _6 = p.2_2 - _5;
  -  _7 = (int) _6;
  -  return _7;
  +  _6 = (unsigned int) i_2(D);
  +  _3 = -_6;
  +  _4 = (int) _3;
  +  return _4;
  
  i.e., the PLUS_EXPR and MINUS_EXPR are gone, and NEGATE_EXPR is used
  instead.
  During bootstrap with --enable-languages=c,c++ this optimization triggered
  238 times.
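  
  For reference, a source function of roughly this shape produces the
  dump above (my reconstruction, not necessarily the exact
  forwprop-29.c testcase):
  
    int
    fn2 (int p, int i)
    {
      return (unsigned int) p - (unsigned int) (p + i);
    }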
  
  Bootstrapped/regtested on x86_64-linux, ok for trunk?
  
  2014-08-05  Marek Polacek  pola...@redhat.com
  
  PR c/61240
  * tree-ssa-forwprop.c (associate_plusminus): Add (T)P - (T)(P + A)
  -> (T)-A transformation.
  c/
  * c-typeck.c (pointer_diff): Remove P - (P + CST) optimization.
  testsuite/
  * gcc.dg/pr61240.c: New test.
  * gcc.dg/tree-ssa/forwprop-29.c: New test.
 So I'm all for delaying folding when possible, so I'm comfortable with the
 general direction this is going.
 
 My concern is the code we're removing discusses the need to simplify when
 these expressions are in static initializers.  What's going to ensure that
 we're still simplifying instances which appear in static initializers?  I
 don't see anything which tests that.   And does it still work for targets
 which utilize PSImode?

You mean stuff like

int i[4];
const int p = i - (i + 2);

?  I don't think that i - (i + 2) fulfills the requirements of an
integral constant expression, so we are free to reject it.

OTOH, ISTR that we tried hard in the past to support initializers
that implement offsetof in various ways.
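
For instance, the classic hand-rolled offsetof idiom relies on folding a
pointer difference in a static initializer; a sketch:

  struct S { int a; int b; };

  /* traditional offsetof-style initializer; it must still fold to a
     constant for the static initialization to be accepted */
  static __PTRDIFF_TYPE__ off_b
    = (char *) &((struct S *) 0)->b - (char *) 0;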

Btw, before the idea of moving foldings to GIMPLE the consensus was
that moving foldings from frontends and convert to fold-const.c was
the appropriate thing to do.

As this is offsetof-like moving it to fold-const.c would work for
me as well (also given that forwprop is hardly a generic folding
framework).

Bah, I should accelerate that match-and-simplify stuff ...

Richard.


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Marek Polacek
On Tue, Aug 05, 2014 at 02:14:21PM -0600, Jeff Law wrote:
 My concern is the code we're removing discusses the need to simplify when
 these expressions are in static initializers.  What's going to ensure that
 we're still simplifying instances which appear in static initializers?  I
 don't see anything which tests that.   And does it still work for targets
 which utilize PSImode?

Aw nuts.  So with the patch we'd start erroring out on
static __PTRDIFF_TYPE__ d1 = p - (p + 1);
static __PTRDIFF_TYPE__ d2 = p - (p - 1);
(it's nowhere in the testsuite/code base - and I hadn't noticed that
until today :()
while we'd still accept
static __PTRDIFF_TYPE__ d5 = (p - 1) - p;
static __PTRDIFF_TYPE__ d6 = (p + 1) - p;
(Those are not constant expression according to ISO C.)

The reason is that fold_build can fold
(long int) (p + 4) - (long int) p to 4, but not
(long int) p - (long int) (p + 4).

That means we have to have a way how to fold the latter, but only in
static initializers.  So I guess I need to implement this in
fold-const.c...   Oh well.

Nevertheless, I'd guess the fwprop bits could go in separately (it's
beneficial for C++).

As for PSImode, I dunno - seems only m32c and AVR use that?  I have no
way to perform testing on such targets.

Marek


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Jakub Jelinek
On Wed, Aug 06, 2014 at 10:22:19AM +0200, Marek Polacek wrote:
 On Tue, Aug 05, 2014 at 02:14:21PM -0600, Jeff Law wrote:
  My concern is the code we're removing discusses the need to simplify when
  these expressions are in static initializers.  What's going to ensure that
  we're still simplifying instances which appear in static initializers?  I
  don't see anything which tests that.   And does it still work for targets
  which utilize PSImode?
 
 Aw nuts.  So with the patch we'd start erroring out on
 static __PTRDIFF_TYPE__ d1 = p - (p + 1);
 static __PTRDIFF_TYPE__ d2 = p - (p - 1);
 (it's nowhere in the testsuite/code base - and I hadn't noticed that
 until today :()
 while we'd still accept
 static __PTRDIFF_TYPE__ d5 = (p - 1) - p;
 static __PTRDIFF_TYPE__ d6 = (p + 1) - p;
 (Those are not constant expression according to ISO C.)
 
 The reason is that fold_build can fold
 (long int) (p + 4) - (long int) p to 4, but not
 (long int) p - (long int) (p + 4).
 
 That means we have to have a way how to fold the latter, but only in
 static initializers.  So I guess I need to implement this in
 fold-const.c...   Oh well.
 
 Nevertheless, I'd guess the fwprop bits could go in separately (it's
 beneficial for C++).

Well, if you are going to implement it in fwprop AND fold-const, then the
natural place to fix that would be in *.pd on the match-and-simplify branch.

Jakub


Re: [PATCH][AArch64] Use REG_P and CONST_INT_P instead of GET_CODE + comparison

2014-08-06 Thread Richard Earnshaw
On 05/08/14 12:18, Kyrill Tkachov wrote:
 Hi all,
 
 This is a cleanup to replace usages of GET_CODE (x) == CONST_INT with 
 CONST_INT_P (x) and GET_CODE (x) == REG with REG_P (x). No functional 
 changes.
 
 Tested on aarch64-none-elf and bootstrapped on aarch64-linux.
 
 Ok for trunk?
 
 Thanks,
 Kyrill
 
 2014-08-05  Kyrylo Tkachov  kyrylo.tkac...@arm.com
 
  * config/aarch64/aarch64.c (aarch64_classify_address): Use REG_P and
  CONST_INT_P instead of GET_CODE and compare.
  (aarch64_select_cc_mode): Likewise.
  (aarch64_print_operand): Likewise.
  (aarch64_rtx_costs): Likewise.
  (aarch64_simd_valid_immediate): Likewise.
  (aarch64_simd_check_vect_par_cnst_half): Likewise.
  (aarch64_simd_emit_pair_result_insn): Likewise.
 
 
OK.

R.




Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Marek Polacek
On Wed, Aug 06, 2014 at 10:26:29AM +0200, Jakub Jelinek wrote:
 Well, if you are going to implement it in fwprop AND fold-const, then the
 natural place to fix that would be in *.pd on the match-and-simplify branch.

True.  So I guess I'll have to put this one on hold for a while...

Marek


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Richard Biener
On Wed, 6 Aug 2014, Marek Polacek wrote:

 On Tue, Aug 05, 2014 at 02:14:21PM -0600, Jeff Law wrote:
  My concern is the code we're removing discusses the need to simplify when
  these expressions are in static initializers.  What's going to ensure that
  we're still simplifying instances which appear in static initializers?  I
  don't see anything which tests that.   And does it still work for targets
  which utilize PSImode?
 
 Aw nuts.  So with the patch we'd start erroring out on
 static __PTRDIFF_TYPE__ d1 = p - (p + 1);
 static __PTRDIFF_TYPE__ d2 = p - (p - 1);
 (it's nowhere in the testsuite/code base - and I hadn't noticed that
 until today :()
 while we'd still accept
 static __PTRDIFF_TYPE__ d5 = (p - 1) - p;
 static __PTRDIFF_TYPE__ d6 = (p + 1) - p;
 (Those are not constant expression according to ISO C.)
 
 The reason is that fold_build can fold
 (long int) (p + 4) - (long int) p to 4, but not
 (long int) p - (long int) (p + 4).
 
 That means we have to have a way how to fold the latter, but only in
 static initializers.  So I guess I need to implement this in
 fold-const.c...   Oh well.
 
 Nevertheless, I'd guess the fwprop bits could go in separately (it's
 beneficial for C++).
 
 As for PSImode, I dunno - seems only m32c and AVR use that?  I have no
 way to perform testing on such targets.

The issue for those targets is mainly how they define their 'sizetype'
vs. their pointer types.  For m32c IIRC sizetype is 16 bits while
pointer types are 24 bits (PSImode).  That means that pointer-to-int
conversion is usually widening and that pointer offsetting cannot
access large objects fully.  See also

/* Allow conversions from pointer type to integral type only if
   there is no sign or zero extension involved.
   For targets were the precision of ptrofftype doesn't match that
   of pointers we need to allow arbitrary conversions to ptrofftype.  */
if ((POINTER_TYPE_P (lhs_type)
     && INTEGRAL_TYPE_P (rhs1_type))
    || (POINTER_TYPE_P (rhs1_type)
        && INTEGRAL_TYPE_P (lhs_type)
        && (TYPE_PRECISION (rhs1_type) >= TYPE_PRECISION (lhs_type)
            || ptrofftype_p (sizetype))))
  return false;

which we may restrict better with checking whether the pointer
uses a partial integer mode.  Not sure how PSImode -> SImode
extends on RTL?

Richard.


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Richard Biener
On Wed, 6 Aug 2014, Marek Polacek wrote:

 On Wed, Aug 06, 2014 at 10:26:29AM +0200, Jakub Jelinek wrote:
  Well, if you are going to implement it in fwprop AND fold-const, then the
  natural place to fix that would be in *.pd on the match-and-simplify branch.
 
 True.  So I guess I'll have to put this one on hold for a while...

You can restrict it to fold-const.c for now.  I really hope to get
match-and-simplify into mergeable state this month.

Richard.


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Marek Polacek
On Wed, Aug 06, 2014 at 10:28:13AM +0200, Richard Biener wrote:
 On Wed, 6 Aug 2014, Marek Polacek wrote:
 
  On Wed, Aug 06, 2014 at 10:26:29AM +0200, Jakub Jelinek wrote:
   Well, if you are going to implement it in fwprop AND fold-const, then the
   natural place to fix that would be in *.pd on the match-and-simplify 
   branch.
  
  True.  So I guess I'll have to put this one on hold for a while...
 
 You can restrict it to fold-const.c for now.  I really hope to get
 match-and-simplify into mergeable state this month.

Okay, I'll look into it.  Thanks.

Marek


Re: [PATCH 17/50] df-problems.c:find_memory

2014-08-06 Thread Richard Earnshaw
On 05/08/14 22:29, Jeff Law wrote:
 On 08/03/14 08:02, Richard Sandiford wrote:
 This also fixes what I think is a bug: find_memory used to stop at the
 first MEM it found.  If that MEM was nonvolatile and nonconstant, we'd
 return MEMREF_NORMAL even if there was another volatile MEM.


 gcc/
  * df-problems.c: Include rtl-iter.h.
  (find_memory): Turn from being a for_each_rtx callback to being
  a function that examines each subrtx itself.  Continue to look for
  volatile references even after a nonvolatile one has been found.
  (can_move_insns_across): Update calls accordingly.
 OK.
 
 It'd probably be fairly difficult to test for that bug as most of our 
 targets don't allow multiple memory operands in a single insn.  But I 
 agree with your assessment.  Good catch.
 
 jeff
 
 

ARM (and AArch64) have patterns with multiple MEMs; but the mems have to
be related addresses and (I think) be non-volatile (certainly if
volatile is permitted in one MEM it must also be in the others within
the pattern).  Patterns generally enforce all of this through the
pattern itself, the constraints or the condition on the insn.

R.



Re: Back porting the LTO fix to upstream gcc 4.9 branch

2014-08-06 Thread Richard Earnshaw
On 06/08/14 06:54, Hale Wang wrote:
 Refer to: https://gcc.gnu.org/ml/gcc-patches/2014-06/msg01429.html.
 
 Sorry for an extra whitespace.
 
 -Original Message-
 From: gcc-patches-ow...@gcc.gnu.org [mailto:gcc-patches-
 ow...@gcc.gnu.org] On Behalf Of Hale Wang
  Sent: August 6, 2014, 13:50
 To: GCC Patches
 Cc: Mike Stump; Richard Biener
 Subject: Back porting the LTO fix to upstream gcc 4.9 branch

 Hi,

 I have submitted the patch to fix the ABI mis-matching error caused by LTO
 on
 18th June 2014.

 Refer to : https://gcc.gnu.org/ml/gcc-patches/2014-06/msg01429.html  for
 details.

 This fix was done for trunk. We need this fix included for gcc 4.9 branch.
 So could we back porting this fix to upstream gcc 4.9 branch?

 Thanks and Best Regards,
 Hale Wang



 
 
 
 


OK unless a RM objects within 24 hours.

R.



Re: [PATCH 1/2] convert the rest of the users of pointer_map to hash_map

2014-08-06 Thread Richard Biener
On Wed, Aug 6, 2014 at 3:28 AM,  tsaund...@mozilla.com wrote:
 From: Trevor Saunders tsaund...@mozilla.com

 hi,

 just what it says on the tin.

 bootstrapped + regtested on x86_64-unknown-linux-gnu, also bootstrapped on
 i686-unknown-linux-gnu, ran config-list.mk, ok?  gcc/

Ok.

Time to remove pointer_map?

Thanks,
Richard.

 Trev

 * hash-map.h (default_hashmap_traits): Adjust overloads of hash
 function to not conflict.
 * alias.c, cfgexpand.c, dse.c, except.h, gimple-expr.c,
 gimple-ssa-strength-reduction.c, gimple-ssa.h, ifcvt.c,
 lto-streamer-out.c, lto-streamer.h, tree-affine.c, tree-affine.h,
 tree-predcom.c, tree-scalar-evolution.c, tree-ssa-loop-im.c,
 tree-ssa-loop-niter.c, tree-ssa.c, value-prof.c: Use hash_map instead
 of pointer_map.

 gcc/cp/

 * cp-tree.h, pt.c: Use hash_map instead of pointer_map.

 gcc/lto/

 * lto-partition.c, lto.c: Use hash_map instead of pointer_map.
 ---
  gcc/alias.c |  5 +--
  gcc/cfgexpand.c | 89 
 +
  gcc/cp/cp-tree.h|  3 +-
  gcc/cp/pt.c | 23 --
  gcc/dse.c   |  5 +--
  gcc/except.h|  1 -
  gcc/gimple-expr.c   |  5 +--
  gcc/gimple-ssa-strength-reduction.c |  2 +-
  gcc/gimple-ssa.h|  3 +-
  gcc/hash-map.h  |  7 +--
  gcc/ifcvt.c | 53 ++
  gcc/lto-streamer-out.c  |  7 +--
  gcc/lto-streamer.h  |  2 +-
  gcc/lto/lto-partition.c | 17 +++
  gcc/lto/lto.c   | 15 +++
  gcc/tree-affine.c   | 28 +---
  gcc/tree-affine.h   |  9 ++--
  gcc/tree-predcom.c  |  2 +-
  gcc/tree-scalar-evolution.c |  2 +-
  gcc/tree-ssa-loop-im.c  |  4 +-
  gcc/tree-ssa-loop-niter.c   | 36 ++-
  gcc/tree-ssa.c  |  2 +-
  gcc/value-prof.c| 42 ++---
  23 files changed, 174 insertions(+), 188 deletions(-)

 diff --git a/gcc/alias.c b/gcc/alias.c
 index 0246dd7..d8e10db 100644
 --- a/gcc/alias.c
 +++ b/gcc/alias.c
 @@ -302,10 +302,9 @@ ao_ref_from_mem (ao_ref *ref, const_rtx mem)
       && ! is_global_var (base)
       && cfun->gimple_df->decls_to_pointers != NULL)
  {
  -  void *namep;
  -  namep = pointer_map_contains (cfun->gimple_df->decls_to_pointers, base);
  +  tree *namep = cfun->gimple_df->decls_to_pointers->get (base);
     if (namep)
  -   ref->base = build_simple_mem_ref (*(tree *)namep);
  +   ref->base = build_simple_mem_ref (*namep);
  }

    ref->ref_alias_set = MEM_ALIAS_SET (mem);
 diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
 index b20be10..5ac22a2 100644
 --- a/gcc/cfgexpand.c
 +++ b/gcc/cfgexpand.c
 @@ -216,7 +216,7 @@ struct stack_var
  static struct stack_var *stack_vars;
  static size_t stack_vars_alloc;
  static size_t stack_vars_num;
  -static struct pointer_map_t *decl_to_stack_part;
  +static hash_map<tree, size_t> *decl_to_stack_part;

  /* Conflict bitmaps go on this obstack.  This allows us to destroy
 all of them in one big sweep.  */
 @@ -300,10 +300,10 @@ add_stack_var (tree decl)
 = XRESIZEVEC (struct stack_var, stack_vars, stack_vars_alloc);
  }
if (!decl_to_stack_part)
 -decl_to_stack_part = pointer_map_create ();
  +    decl_to_stack_part = new hash_map<tree, size_t>;

v = stack_vars[stack_vars_num];
 -  * (size_t *)pointer_map_insert (decl_to_stack_part, decl) = stack_vars_num;
  +  decl_to_stack_part->put (decl, stack_vars_num);

    v->decl = decl;
    v->size = tree_to_uhwi (DECL_SIZE_UNIT (SSAVAR (decl)));
 @@ -375,7 +375,7 @@ visit_op (gimple, tree op, tree, void *data)
      && DECL_P (op)
      && DECL_RTL_IF_SET (op) == pc_rtx)
  {
 -  size_t *v = (size_t *) pointer_map_contains (decl_to_stack_part, op);
  +  size_t *v = decl_to_stack_part->get (op);
if (v)
 bitmap_set_bit (active, *v);
  }
 @@ -395,8 +395,7 @@ visit_conflict (gimple, tree op, tree, void *data)
      && DECL_P (op)
      && DECL_RTL_IF_SET (op) == pc_rtx)
  {
 -  size_t *v =
 -   (size_t *) pointer_map_contains (decl_to_stack_part, op);
  +  size_t *v = decl_to_stack_part->get (op);
    if (v && bitmap_set_bit (active, *v))
 {
   size_t num = *v;
 @@ -447,8 +446,7 @@ add_scope_conflicts_1 (basic_block bb, bitmap work, bool 
 for_conflict)
   if (TREE_CODE (lhs) != VAR_DECL)
 continue;
   if (DECL_RTL_IF_SET (lhs) == pc_rtx
  -      && (v = (size_t *)
  -	      pointer_map_contains (decl_to_stack_part, lhs)))
  +      && (v = decl_to_stack_part->get (lhs)))
 bitmap_clear_bit (work, *v);
 }
else if (!is_gimple_debug (stmt))
 @@ -587,6 +585,26 @@ 

Re: [PATCH 2/2] remove pointer-set.[ch]

2014-08-06 Thread Richard Biener
On Wed, Aug 6, 2014 at 3:28 AM,  tsaund...@mozilla.com wrote:
 From: Trevor Saunders tsaund...@mozilla.com

 hi,

 just what it says on the tin.

 bootstrapped + regtested on x86_64-unknown-linux-gnu, also bootstrapped on
 i686-unknown-linux-gnu, ran config-list.mk, ok?  gcc/

Ok.

Thanks,
Richard.

 Trev

 gcc/

 * Makefile.in: Remove references to pointer-set.c and pointer-set.h.
 * alias.c, cfgexpand.c, cgraphbuild.c,
 config/aarch64/aarch64-builtins.c, config/aarch64/aarch64.c,
 config/alpha/alpha.c, config/darwin.c, config/i386/i386.c,
 config/i386/winnt.c, config/ia64/ia64.c, config/m32c/m32c.c,
 config/mep/mep.c, config/mips/mips.c, config/rs6000/rs6000.c,
 config/s390/s390.c, config/sh/sh.c, config/sparc/sparc.c,
 config/spu/spu.c, config/stormy16/stormy16.c, config/tilegx/tilegx.c,
 config/tilepro/tilepro.c, config/xtensa/xtensa.c, dominance.c,
 dse.c, except.c, gengtype.c, gimple-expr.c,
 gimple-ssa-strength-reduction.c, gimplify.c, ifcvt.c,
 ipa-visibility.c, lto-streamer.h, omp-low.c, predict.c, stmt.c,
 tree-affine.c, tree-cfg.c, tree-eh.c, tree-inline.c, tree-nested.c,
 tree-scalar-evolution.c, tree-ssa-loop-im.c, tree-ssa-loop-niter.c,
 tree-ssa-phiopt.c, tree-ssa-structalias.c, tree-ssa-uninit.c,
 tree-ssa.c, tree.c, var-tracking.c, varpool.c: Remove includes of
 pointer-set.h.
 * pointer-set.c: Remove file.
 * pointer-set.h: Remove file.

 gcc/c-family/

 * c-gimplify.c, cilk.c: Remove includes of pointer-set.h.

 c/

 * c-typeck.c: Remove include of pointer-set.h.

 cp/

 * class.c, cp-gimplify.c, decl.c, decl2.c, error.c, method.c,
 optimize.c, pt.c, semantics.c: Remove includes of pointer-set.h.
 ---
  gcc/Makefile.in  |   7 +-
  gcc/alias.c  |   1 -
  gcc/c-family/c-gimplify.c|   1 -
  gcc/c-family/cilk.c  |   1 -
  gcc/c/c-typeck.c |   1 -
  gcc/cfgexpand.c  |   1 -
  gcc/cgraphbuild.c|   1 -
  gcc/config/aarch64/aarch64-builtins.c|   1 -
  gcc/config/aarch64/aarch64.c |   1 -
  gcc/config/alpha/alpha.c |   1 -
  gcc/config/darwin.c  |   1 -
  gcc/config/i386/i386.c   |   1 -
  gcc/config/i386/winnt.c  |   1 -
  gcc/config/ia64/ia64.c   |   1 -
  gcc/config/m32c/m32c.c   |   1 -
  gcc/config/mep/mep.c |   1 -
  gcc/config/mips/mips.c   |   1 -
  gcc/config/rs6000/rs6000.c   |   1 -
  gcc/config/s390/s390.c   |   1 -
  gcc/config/sh/sh.c   |   1 -
  gcc/config/sparc/sparc.c |   1 -
  gcc/config/spu/spu.c |   1 -
  gcc/config/stormy16/stormy16.c   |   1 -
  gcc/config/tilegx/tilegx.c   |   1 -
  gcc/config/tilepro/tilepro.c |   1 -
  gcc/config/xtensa/xtensa.c   |   1 -
  gcc/cp/class.c   |   1 -
  gcc/cp/cp-gimplify.c |   1 -
  gcc/cp/decl.c|   1 -
  gcc/cp/decl2.c   |   1 -
  gcc/cp/error.c   |   1 -
  gcc/cp/method.c  |   1 -
  gcc/cp/optimize.c|   1 -
  gcc/cp/pt.c  |   1 -
  gcc/cp/semantics.c   |   1 -
  gcc/dominance.c  |   1 -
  gcc/dse.c|   1 -
  gcc/except.c |   1 -
  gcc/gengtype.c   |   2 +-
  gcc/gimple-expr.c|   1 -
  gcc/gimple-ssa-strength-reduction.c  |   1 -
  gcc/gimplify.c   |   1 -
  gcc/ifcvt.c  |   1 -
  gcc/ipa-visibility.c |   1 -
  gcc/lto-streamer.h   |   1 -
  gcc/omp-low.c|   1 -
  gcc/pointer-set.c| 271 
 ---
  gcc/pointer-set.h|  59 -
  gcc/predict.c|   1 -
  gcc/stmt.c   |   1 -
  gcc/testsuite/g++.dg/plugin/selfassign.c |   1 -
  gcc/testsuite/gcc.dg/plugin/finish_unit_plugin.c |   1 -
  gcc/testsuite/gcc.dg/plugin/ggcplug.c|   1 -
  

Re: [PATCH] Testcase for PR61801

2014-08-06 Thread Jakub Jelinek
Hi!

I've cleaned up the testcase some more, tested on 4.8/4.9/trunk that
it fails without the sched-deps.c fix too (both -m32 and -m64) and
works with the fix.  Committed to all branches.

2014-08-06  Jakub Jelinek  ja...@redhat.com

PR rtl-optimization/61801
* gcc.target/i386/pr61801.c: Rewritten.

--- gcc/testsuite/gcc.target/i386/pr61801.c.jj  2014-08-01 09:23:37.0 
+0200
+++ gcc/testsuite/gcc.target/i386/pr61801.c 2014-08-06 10:30:32.133472004 
+0200
@@ -1,22 +1,21 @@
+/* PR rtl-optimization/61801 */
 /* { dg-do compile } */
 /* { dg-options "-Os -fcompare-debug" } */
 
-int a, b, c;
-void fn1 ()
+int a, c;
+int bar (void);
+void baz (void);
+
+void
+foo (void)
 {
   int d;
-  if (fn2 () && !0)
+  if (bar ())
 {
-  b = (
-  {
-  int e;
-  fn3 ();
-  switch (0)
-  default:
-  asm volatile("" : "=a"(e) : "0"(a), "i"(0));
-  e;
-  });
-  d = b;
+  int e;
+  baz ();
+  asm volatile ("" : "=a" (e) : "0" (a), "i" (0));
+  d = e;
 }
   c = d;
 }

Jakub


[PATCH][match-and-simplify] Fix remaining testsuite ICEs

2014-08-06 Thread Richard Biener

The following fixes the remaining ICEs I see when testing all
languages (but ada and go).

The tree-cfg.c hunk highlights one change in the behavior
of fold_stmt, namely that it now follows SSA edges by default.
Maybe that's undesired?  On a related note, fold_stmt_inplace
preserves the actual statement object gsi_stmt points to,
but in reality callers use it to avoid creating new SSA names;
thus, would it be ok if fold_stmt_inplace only made sure to
preserve the number of statements (and only changed what
gsi points to)?

Bootstrapped / tested on x86_64-unknown-linux-gnu, applied.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* tree-cfg.c (no_follow_ssa_edges): New function.
(replace_uses_by): Do not follow SSA edges when folding the
stmt.
* gimple-match-head.c (maybe_push_res_to_seq): Disallow
stmts that mention SSA names occurring in abnormal PHIs.
(gimple_simplify): Likewise.

Index: gcc/tree-cfg.c
===
--- gcc/tree-cfg.c  (revision 213651)
+++ gcc/tree-cfg.c  (working copy)
@@ -1681,6 +1681,14 @@ gimple_can_merge_blocks_p (basic_block a
   return true;
 }
 
+/* ???  Maybe this should be a generic overload of fold_stmt.  */
+
+static tree
+no_follow_ssa_edges (tree)
+{
+  return NULL_TREE;
+}
+
 /* Replaces all uses of NAME by VAL.  */
 
 void
@@ -1737,7 +1745,16 @@ replace_uses_by (tree name, tree val)
  recompute_tree_invariant_for_addr_expr (op);
  }
 
- if (fold_stmt (gsi))
+ /* If we have sth like
+  neighbor_29 = name + -1;
+  _33 = name + neighbor_29;
+and end up visiting _33 first then folding will
+simplify the stmt to _33 = name; and the new
+immediate use will be inserted before the stmt
+iterator marker and thus we fail to visit it
+again, ICEing within the has_zero_uses assert.
+Avoid that by never following SSA edges.  */
+ if (fold_stmt (gsi, no_follow_ssa_edges))
stmt = gsi_stmt (gsi);
 
  if (maybe_clean_or_replace_eh_stmt (orig_stmt, stmt))
Index: gcc/gimple-match-head.c
===
--- gcc/gimple-match-head.c (revision 213651)
+++ gcc/gimple-match-head.c (working copy)
@@ -305,6 +305,17 @@ maybe_push_res_to_seq (code_helper rcode
return NULL_TREE;
   if (!res)
res = make_ssa_name (type, NULL);
+  /* Play safe and do not allow abnormals to be mentioned in
+ newly created statements.  */
+  if ((TREE_CODE (ops[0]) == SSA_NAME
+       && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[0]))
+      || (ops[1]
+	  && TREE_CODE (ops[1]) == SSA_NAME
+	  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[1]))
+      || (ops[2]
+	  && TREE_CODE (ops[2]) == SSA_NAME
+	  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[2])))
+   return NULL_TREE;
   gimple new_stmt = gimple_build_assign_with_ops (rcode, res,
  ops[0], ops[1], ops[2]);
   gimple_seq_add_stmt_without_update (seq, new_stmt);
@@ -321,6 +332,17 @@ maybe_push_res_to_seq (code_helper rcode
res = make_ssa_name (type, NULL);
   unsigned nargs = type_num_arguments (TREE_TYPE (decl));
   gcc_assert (nargs <= 3);
+  /* Play safe and do not allow abnormals to be mentioned in
+ newly created statements.  */
+  if ((TREE_CODE (ops[0]) == SSA_NAME
+       && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[0]))
+      || (nargs >= 2
+	  && TREE_CODE (ops[1]) == SSA_NAME
+	  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[1]))
+      || (nargs == 3
+	  && TREE_CODE (ops[2]) == SSA_NAME
+	  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[2])))
+   return NULL_TREE;
   gimple new_stmt = gimple_build_call (decl, nargs, ops[0], ops[1], 
ops[2]);
   gimple_call_set_lhs (new_stmt, res);
   gimple_seq_add_stmt_without_update (seq, new_stmt);
@@ -414,8 +436,8 @@ gimple_simplify (enum tree_code code, tr
 
 tree
 gimple_simplify (enum built_in_function fn, tree type,
-  tree arg0,
-  gimple_seq *seq, tree (*valueize)(tree))
+tree arg0,
+gimple_seq *seq, tree (*valueize)(tree))
 {
   if (constant_for_folding (arg0))
 {
@@ -683,6 +705,17 @@ gimple_simplify (gimple_stmt_iterator *g
   if (is_gimple_assign (stmt)
rcode.is_tree_code ())
 {
+  /* Play safe and do not allow abnormals to be mentioned in
+ newly created statements.  */
+  if ((TREE_CODE (ops[0]) == SSA_NAME
+       && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[0]))
+      || (ops[1]
+	  && TREE_CODE (ops[1]) == SSA_NAME
+	  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ops[1]))
+      || (ops[2]
+	  && TREE_CODE (ops[2]) == SSA_NAME
+  

Re: [PATCH libstdc++ v3] - Add xmethods for std::vector and std::unique_ptr

2014-08-06 Thread Jonathan Wakely
On 5 August 2014 21:29, Siva Chandra wrote:
 Hi Jonathan,

 Thanks a lot for taking a look. The patch in question, and the GDB
 support, do not yet work with Python3. If that is a necessary
 requirement, I can make the changes and send a new version of the
 patch.

Some GNU/Linux distros already build GDB using Python3, so they will
be unable to use these xmethods.

However, I think it can be committed now and fixed later.

OK for trunk. Do you need me to do the commit for you?


Re: [PATCH 2/3]Improve induction variable elimination

2014-08-06 Thread Bin.Cheng
Forgot the patch~

On Wed, Aug 6, 2014 at 10:32 AM, Bin.Cheng amker.ch...@gmail.com wrote:
 On Fri, Jul 25, 2014 at 8:35 PM, Richard Biener
 richard.guent...@gmail.com wrote:
 On Thu, Jul 17, 2014 at 11:08 AM, Bin Cheng bin.ch...@arm.com wrote:
 Hi,
 As quoted from the function difference_cannot_overflow_p,

   /* TODO: deeper inspection may be necessary to prove the equality.  */
   switch (code)
 {
 case PLUS_EXPR:
   return expr_equal_p (e1, offset) || expr_equal_p (e2, offset);
 case POINTER_PLUS_EXPR:
   return expr_equal_p (e2, offset);

 default:
   return false;
 }

 The overflow check can be improved by using deeper inspection to prove the
 equality.  This patch deals with that by making below two improvements:
   a) Handles constant cases.
   b) Uses affine expansion as deeper inspection to check the equality.

 As a result, functions strip_wrap_conserving_type_conversions and
 expr_equal_p can be removed now.  A test case is also added to illustrate iv
 elimination opportunity captured by this patch.

 Thanks,
 bin

 You add special casing for constants but I don't see any testcases for that.
 Specifically

 +  /* No overflow if offset is zero.  */
 +  if (offset == integer_zero_node)
  return true;

 is a bogus check (use integer_zerop).  Apart from the special-casing of
 constants the patch looks good to me.
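
 (For the record, the reason the pointer comparison is bogus: integer_zero_node
 is the shared zero constant of integer_type_node, so a zero INTEGER_CST of any
 other type is a distinct tree node and the pointer compare misses it.  The
 value-based check would be:

   if (integer_zerop (offset))
     return true;

 but, as below, the constant special case is simply dropped.)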

 Hi Richard,
 I modified the patch according to your comments by removing the
 constant case.  Re-bootstrap and test on x86_64 and x86.  Is this
 version OK?

 Thanks,
 bin

 2014-08-06  Bin Cheng  bin.ch...@arm.com

 * tree-ssa-loop-ivopts.c (ivopts_data): New field name_expansion.
 (tree_ssa_iv_optimize_init): Initialize name_expansion.
 (tree_ssa_iv_optimize_finalize): Free name_expansion.
 (strip_wrap_conserving_type_conversions, expr_equal_p): Delete.
 (difference_cannot_overflow_p): New parameter.  Use affine
 expansion for equality check.
 (iv_elimination_compare_lt): Pass new argument.

 gcc/testsuite/ChangeLog
 2014-08-06  Bin Cheng  bin.ch...@arm.com

 * gcc.dg/tree-ssa/ivopts-lt-2.c: New test.
Index: gcc/tree-ssa-loop-ivopts.c
===
--- gcc/tree-ssa-loop-ivopts.c  (revision 213529)
+++ gcc/tree-ssa-loop-ivopts.c  (working copy)
@@ -323,6 +323,9 @@ struct ivopts_data
   /* A bitmap of important candidates.  */
   bitmap important_candidates;
 
+  /* Cache used by tree_to_aff_combination_expand.  */
+  struct pointer_map_t *name_expansion;
+
   /* The maximum invariant id.  */
   unsigned max_inv_id;
 
@@ -876,6 +879,7 @@ tree_ssa_iv_optimize_init (struct ivopts_data *dat
   data->iv_candidates.create (20);
   data->inv_expr_tab = new hash_table<iv_inv_expr_hasher> (10);
   data->inv_expr_id = 0;
+  data->name_expansion = NULL;
   decl_rtl_to_reset.create (20);
 }
 
@@ -4448,75 +4452,20 @@ iv_elimination_compare (struct ivopts_data *data,
   return (exit->flags & EDGE_TRUE_VALUE ? EQ_EXPR : NE_EXPR);
 }
 
-static tree
-strip_wrap_conserving_type_conversions (tree exp)
-{
-  while (tree_ssa_useless_type_conversion (exp)
-	 && (nowrap_type_p (TREE_TYPE (exp))
-	     == nowrap_type_p (TREE_TYPE (TREE_OPERAND (exp, 0)))))
-exp = TREE_OPERAND (exp, 0);
-  return exp;
-}
-
-/* Walk the SSA form and check whether E == WHAT.  Fairly simplistic, we
-   check for an exact match.  */
-
-static bool
-expr_equal_p (tree e, tree what)
-{
-  gimple stmt;
-  enum tree_code code;
-
-  e = strip_wrap_conserving_type_conversions (e);
-  what = strip_wrap_conserving_type_conversions (what);
-
-  code = TREE_CODE (what);
-  if (TREE_TYPE (e) != TREE_TYPE (what))
-return false;
-
-  if (operand_equal_p (e, what, 0))
-return true;
-
-  if (TREE_CODE (e) != SSA_NAME)
-return false;
-
-  stmt = SSA_NAME_DEF_STMT (e);
-  if (gimple_code (stmt) != GIMPLE_ASSIGN
-  || gimple_assign_rhs_code (stmt) != code)
-return false;
-
-  switch (get_gimple_rhs_class (code))
-{
-case GIMPLE_BINARY_RHS:
-  if (!expr_equal_p (gimple_assign_rhs2 (stmt), TREE_OPERAND (what, 1)))
-   return false;
-  /* Fallthru.  */
-
-case GIMPLE_UNARY_RHS:
-case GIMPLE_SINGLE_RHS:
-  return expr_equal_p (gimple_assign_rhs1 (stmt), TREE_OPERAND (what, 0));
-default:
-  return false;
-}
-}
-
 /* Returns true if we can prove that BASE - OFFSET does not overflow.  For now,
we only detect the situation that BASE = SOMETHING + OFFSET, where the
calculation is performed in non-wrapping type.
 
TODO: More generally, we could test for the situation that
 BASE = SOMETHING + OFFSET' and OFFSET is between OFFSET' and zero.
-This would require knowing the sign of OFFSET.
+This would require knowing the sign of OFFSET.  */
 
-Also, we only look for the first addition in the computation of BASE.
-More complex analysis would be better, but introducing it just for
-this optimization seems like an 

Re: [PATCH][match-and-simplify] Fix remaining testsuite ICEs

2014-08-06 Thread Richard Biener
On Wed, 6 Aug 2014, Richard Biener wrote:

 
 The following fixes the remaining ICEs I see when testing all
 languages (but ada and go).
 
 The tree-cfg.c hunk highlights one change in the behavior
 of fold_stmt, namely that it now follows SSA edges by default.
 Maybe that's undesired?  On a related note, fold_stmt_inplace
 preserves the actual statement object gsi_stmt points to,
 but in reality callers use it to avoid creating new SSA names
 thus would it be ok if fold_stmt_inplace made sure to
 preserve the number of statements (and only change what
 gsi points to) only?
 
 Bootstrapped / tested on x86_64-unknown-linux-gnu, applied.

The following clarifies the comment.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* tree-cfg.c (replace_uses_by): Clarify added comment.

Index: gcc/tree-cfg.c
===
--- gcc/tree-cfg.c  (revision 213655)
+++ gcc/tree-cfg.c  (working copy)
@@ -1748,11 +1748,12 @@ replace_uses_by (tree name, tree val)
  /* If we have sth like
   neighbor_29 = name + -1;
   _33 = name + neighbor_29;
-and end up visiting _33 first then folding will
-simplify the stmt to _33 = name; and the new
-immediate use will be inserted before the stmt
-iterator marker and thus we fail to visit it
-again, ICEing within the has_zero_uses assert.
+and substitute 1 for name then when visiting
+_33 first then folding will simplify the stmt
+to _33 = name; and the new immediate use will
+be inserted before the stmt iterator marker and
+thus we fail to visit it again, ICEing within the
+has_zero_uses assert.
 Avoid that by never following SSA edges.  */
  if (fold_stmt (gsi, no_follow_ssa_edges))
stmt = gsi_stmt (gsi);


Re: [PATCH v2] gcc/testsuite: Disable pr44194-1.c for BE Power64/Linux

2014-08-06 Thread Maciej W. Rozycki
On Wed, 6 Aug 2014, David Edelsohn wrote:

   D'oh, there's even a predicate procedure in our test framework already to
  cover it.  Thanks for straightening me out, an updated patch follows.
  This scores:
 
  UNSUPPORTED: gcc.dg/pr44194-1.c
 
  in my testing, like the previous version.  OK to apply?
 
 Okay.

 Applied, thanks.

  Maciej


Fix libgomp crash without TLS (PR42616)

2014-08-06 Thread Varvara Rainchik
Hi,

The issue was first observed on NDK gcc, since TLS is not supported
in Android bionic.  I also see the same failure on gcc configured for
linux with --disable-tls; libgomp make check log:

FAIL: libgomp.c/affinity-1.c execution test
FAIL: libgomp.c/icv-2.c execution test
FAIL: libgomp.c/lock-3.c execution test
FAIL: libgomp.c/target-6.c execution test

These tests, except affinity-1.c, fail because the gomp_thread () function
returns a null pointer.  I've found 2 bugs; the first one addresses this
problem on Windows:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=42616;
the second one addresses the original problem (for both cases, with and without TLS):
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=36242.
Tests from both bugs fail with --disable-tls.  So it seems that the non-TLS
case was fixed only partially.  The following patch solves the problem.
With this patch 3 tests from make check pass; affinity-1.c still fails, but
I think it's a different non-TLS problem.
Changes are bootstrapped and regtested on x86_64-linux.


2014-08-06  Varvara Rainchik  varvara.rainc...@intel.com

* libgomp.h (gomp_thread): For non TLS case create thread data.
* team.c (create_non_tls_thread_data): New function.


---
diff --git a/libgomp/libgomp.h b/libgomp/libgomp.h
index a1482cc..cf3ec8f 100644
--- a/libgomp/libgomp.h
+++ b/libgomp/libgomp.h
@@ -479,9 +479,15 @@ static inline struct gomp_thread *gomp_thread (void)
}
#else
extern pthread_key_t gomp_tls_key;
+extern struct gomp_thread *create_non_tls_thread_data (void);
static inline struct gomp_thread *gomp_thread (void)
{
-  return pthread_getspecific (gomp_tls_key);
+  struct gomp_thread *thr = pthread_getspecific (gomp_tls_key);
+  if (thr == NULL)
+    {
+      thr = create_non_tls_thread_data ();
+    }
+  return thr;
}
#endif

diff --git a/libgomp/team.c b/libgomp/team.c
index e6a6d8f..bf8bd4b 100644
--- a/libgomp/team.c
+++ b/libgomp/team.c
@@ -927,6 +927,17 @@ initialize_team (void)
 gomp_fatal ("could not create thread pool destructor.");
}

+#ifndef HAVE_TLS
+struct gomp_thread *create_non_tls_thread_data (void)
+{
+  struct gomp_thread *thr = gomp_malloc (sizeof (struct gomp_thread));
+  pthread_setspecific (gomp_tls_key, thr);
+  gomp_sem_init (&thr->release, 0);
+
+  return thr;
+}
+#endif
+
static void __attribute__((destructor))
team_destructor (void)
{
---


Is it ok?


Best regards,

Varvara


[PATCH][match-and-simplify] Some FP runtime fails

2014-08-06 Thread Richard Biener

Fixed with the following, committed.  Incidentally this is the
only constant folding pattern that also applies to floats - otherwise
the use of the integer_* predicates prevents that.
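
For the record, why the guard is needed: with NaNs, x - x is not zero.
A tiny standalone C illustration:

  #include <math.h>
  #include <stdio.h>

  int main (void)
  {
    volatile double x = NAN;
    printf ("%g\n", x - x);   /* prints nan, not 0 */
    return 0;
  }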

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* match-constant-folding.pd (minus @0 @0): Restrict to
modes without NaNs.

Index: gcc/match-constant-folding.pd
===
--- gcc/match-constant-folding.pd   (revision 213651)
+++ gcc/match-constant-folding.pd   (working copy)
@@ -24,6 +24,7 @@ along with GCC; see the file COPYING3.
 
 (simplify
   (minus @0 @0)
+  (if (!HONOR_NANS (TYPE_MODE (type))))
   { build_zero_cst (type); })
 
 (simplify


Re: Patch for constexpr variable templates

2014-08-06 Thread Braden Obrzut

Here's a patch for the more conservative option.

- Braden Obrzut

2014-08-06  Braden Obrzut  ad...@maniacsvault.net

* pt.c (check_explicit_specialization): Ensure tmpl is a function
template before checking if it is inline for COMDAT.

diff --git a/gcc/cp/pt.c b/gcc/cp/pt.c
index 57e7216..3bc3961 100644
--- a/gcc/cp/pt.c
+++ b/gcc/cp/pt.c
@@ -2817,7 +2817,7 @@ check_explicit_specialization (tree declarator,
 	   It's just the name of an instantiation.  But, it's not
 	   a request for an instantiation, either.  */
 	SET_DECL_IMPLICIT_INSTANTIATION (decl);
-	  else
+	  else if (DECL_FUNCTION_TEMPLATE_P (tmpl))
 	/* A specialization is not necessarily COMDAT.  */
 	DECL_COMDAT (decl) = DECL_DECLARED_INLINE_P (decl);
 


[PATCH] Fix PR61320

2014-08-06 Thread Richard Biener

The following fixes PR61320 - we were not properly treating
explicitly misaligned loads as misaligned.
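
For context, the kind of access involved is an explicitly under-aligned
load, along these lines (just an illustration, not the PR's testcase):

  /* On a STRICT_ALIGNMENT target the int member is only byte-aligned,
     so loads of it must be treated as misaligned.  */
  struct __attribute__ ((packed)) s
  {
    char c;
    int i;
  };

  int
  sum (struct s *p, int n)
  {
    int r = 0;
    for (int k = 0; k < n; k++)
      r += p[k].i;
    return r;
  }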

Tested by various people on their STRICT_ALIGN targets, applied
to trunk and branch.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

PR tree-optimization/61320
* tree-ssa-loop-ivopts.c (may_be_unaligned_p): Properly
handle misaligned loads.

Index: gcc/tree-ssa-loop-ivopts.c
===
--- gcc/tree-ssa-loop-ivopts.c  (revision 213658)
+++ gcc/tree-ssa-loop-ivopts.c  (working copy)
@@ -1703,6 +1703,8 @@ may_be_unaligned_p (tree ref, tree step)
 return false;
 
   unsigned int align = TYPE_ALIGN (TREE_TYPE (ref));
+  if (GET_MODE_ALIGNMENT (TYPE_MODE (TREE_TYPE (ref))) > align)
+align = GET_MODE_ALIGNMENT (TYPE_MODE (TREE_TYPE (ref)));
 
   unsigned HOST_WIDE_INT bitpos;
   unsigned int ref_align;


Re: Remove unnecessary and harmful fixincludes for Android

2014-08-06 Thread Alexander Ivchenko
The only thing that I don't like about that is that the user would
still have stdio.h fixed if gcc is built with sysroots older than r10.
But I guess it is not that critical :)


We still have to remove fix for compiler.h:

diff --git a/fixincludes/ChangeLog b/fixincludes/ChangeLog
index f7effee..b69897b 100644
--- a/fixincludes/ChangeLog
+++ b/fixincludes/ChangeLog
@@ -1,3 +1,9 @@
+2014-08-04  Alexander Ivchenko  alexander.ivche...@intel.com
+
+   * inclhack.def (complier_h_tradcpp): Remove.
+   * fixincl.x: Regenerate.
+   * tests/base/linux/compiler.h: Remove.
+
 2014-04-22  Rainer Orth  r...@cebitec.uni-bielefeld.de

* inclhack.def (math_exception): Bypass on *-*-solaris2.1[0-9]*.
diff --git a/fixincludes/inclhack.def b/fixincludes/inclhack.def
index 6a1136c..c363c66 100644
--- a/fixincludes/inclhack.def
+++ b/fixincludes/inclhack.def
@@ -1140,20 +1140,6 @@ fix = {
 };

 /*
- *  Old Linux kernel's compiler.h header breaks Traditional CPP
- */
-fix = {
-hackname  = complier_h_tradcpp;
-files = linux/compiler.h;
-
-select= "#define __builtin_warning\\(x, y\\.\\.\\.\\) \\(1\\)";
-c_fix = format;
-c_fix_arg = "/* __builtin_warning(x, y...) is obsolete */";
-
-test_text = "#define __builtin_warning(x, y...) (1)";
-};
-
-/*
  *  Fix various macros used to define ioctl numbers.
  *  The traditional syntax was:
  *
diff --git a/fixincludes/tests/base/linux/compiler.h
b/fixincludes/tests/base/linux/compiler.h
deleted file mode 100644
index 7135276..000
--- a/fixincludes/tests/base/linux/compiler.h
+++ /dev/null
@@ -1,14 +0,0 @@
-/*  DO NOT EDIT THIS FILE.
-
-It has been auto-edited by fixincludes from:
-
-   fixinc/tests/inc/linux/compiler.h
-
-This had to be done to correct non-standard usages in the
-original, manufacturer supplied header file.  */
-
-
-
-#if defined( COMPLIER_H_TRADCPP_CHECK )
-/* __builtin_warning(x, y...) is obsolete */
-#endif  /* COMPLIER_H_TRADCPP_CHECK */


Bruce, I think I formally have to ask for your approval again :)

--Alexander


2014-08-06 4:31 GMT+04:00 enh e...@google.com:
 On Tue, Aug 5, 2014 at 5:26 PM, Bruce Korb bk...@gnu.org wrote:
 Hi,

  Lines 42 & 43 are not needed for fixincludes, but it is your choice.
 With that change, you should not need to add that test to fixincludes
 because __gnuc_va_list will be found within the comment and satisfy
 the bypass expression.

 okay, i'll reword to explicitly say that it's the reference to
 __gnuc_va_list that gets us the fixincludes behavior we want (which
 should also ensure that no one cleans up the reference to, say,
 __builtin_va_list).

 That was the long way of saying:
Looks good to me.

 On Tue, Aug 5, 2014 at 5:09 PM, enh e...@google.com wrote:
 does https://android-review.googlesource.com/103445 look okay?

 On Tue, Aug 5, 2014 at 12:01 PM, Bruce Korb bk...@gnu.org wrote:
 Hi,

 On Tue, Aug 5, 2014 at 10:36 AM, enh e...@google.com wrote:
 you can see the current version of bionic's stdio.h here:

 https://android.googlesource.com/platform/bionic/+/master/libc/include/stdio.h

 i'm happy to add any string to the header file that makes things
 easier. if you want 'x-gcc-no-fixincludes' or whatever in there, just
 say :-)

 That would be great, but you could also add:

/* this file depends on __gnuc_va_list being used for va_list */

 and not bother changing fixincludes at all. :)  But either of those two
 comments added to the header would be preferable to looking for BIONIC.
 Thank you!

 With one of the two changes, the patch is approved.   Thanks!


Re: [PATCH v2] gcc/testsuite: Disable pr44194-1.c for BE Power64/Linux

2014-08-06 Thread Marek Polacek
On Wed, Aug 06, 2014 at 11:03:02AM +0100, Maciej W. Rozycki wrote:
 On Wed, 6 Aug 2014, David Edelsohn wrote:
 
D'oh, there's even a predicate procedure in our test framework already to
   cover it.  Thanks for straightening me out, an updated patch follows.
   This scores:
  
   UNSUPPORTED: gcc.dg/pr44194-1.c
  
   in my testing, like the previous version.  OK to apply?
  
  Okay.
 
  Applied, thanks.

I see
ERROR: gcc.dg/pr44194-1.c: unknown dg option: \} for }
now (x86_64-unknown-linux-gnu).

Marek


Re: Replacement of isl_int by isl_val

2014-08-06 Thread Mircea Namolaru


 On 08/03/14 17:44, Mircea Namolaru wrote:
  2014-08-03  Mircea Namolaru  mircea.namol...@inria.fr
 
  Replacement of isl-int by isl_val
  * graphite-clast-to-gimple.c: include isl/val.h, isl/val_gmp.h
  (compute_bounds_for_param): use isl_val instead of isl_int
  (compute_bounds_for_loop): likewise
  * graphite-interchange.c: include isl/val.h, isl/val_gmp.h
   (build_linearized_memory_access): use isl_val instead of isl_int
   (pdr_stride_in_loop): likewise
  * graphite-optimize-isl.c:
  (getPrevectorMap): use isl_val instead of isl_int
  * graphite-poly.c:
  (pbb_number_of_iterations_at_time): use isl_val instead of isl_int
  graphite-sese-to-poly.c: include isl/val.h, isl/val_gmp.h
  (extern the_isl_ctx): declare
  (build_pbb_scattering_polyhedrons): use isl_val instead of isl_int
  (extract_affine_gmp): likewise
  (wrap): likewise
  (build_loop_iteration_domains): likewise
  (add_param_constraints): likewise
  (add_param_constraints): likewise
 This is good.  Please install if you haven't already.
 
 jeff
 

I don't have maintainer (write) permissions. 

Many thanks, Mircea


Re: [PATCH 2/2] Enable elimination of zext/sext

2014-08-06 Thread Richard Biener
On Tue, Aug 5, 2014 at 4:21 PM, Jakub Jelinek ja...@redhat.com wrote:
 On Tue, Aug 05, 2014 at 04:17:41PM +0200, Richard Biener wrote:
 what's the semantic of setting SRP_SIGNED_AND_UNSIGNED
 on the subreg?  That is, for the created (subreg:lhs_mode
 (reg:PROMOTE_MODE of ssa N))?

 SRP_SIGNED_AND_UNSIGNED on a subreg should mean that
 the subreg is both zero and sign extended, which means
 that the topmost bit of the narrower mode is known to be zero,
 and all bits above it in the wider mode are known to be zero too.
 SRP_SIGNED means that the topmost bit of the narrower mode is
 either 0 or 1 and depending on that the above wider mode bits
 are either all 0 or all 1.
 SRP_UNSIGNED means that regardless of the topmost bit value,
 all above wider mode bits are 0.

Ok, then from the context of the patch we already know that
either SRP_UNSIGNED or SRP_SIGNED is true which means
that the value is sign- or zero-extended.

I suppose inside promoted_for_type_p
TYPE_MODE (TREE_TYPE (ssa)) == lhs_mode, I'm not sure
why you pass !unsignedp as lhs_uns.

Now, from 'ssa' alone we can't tell anything about a larger mode
registers value if that is either zero- or sign-extended.  But we
know that those bits are properly zero-extended if unsignedp
and properly sign-extended if !unsignedp?

So what the predicate tries to prove is that sign- and zero-extending
results in the same larger-mode value.  This is true if the
MSB of the smaller mode is not set.

Let's assume that smaller mode is that of 'ssa' then the test
is just

  return (!tree_int_cst_sign_bit (min) && !tree_int_cst_sign_bit (max));

no?
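
A quick standalone sanity check of that claim (purely illustrative):

  #include <stdint.h>
  #include <stdio.h>

  /* Sign- and zero-extension of an 8-bit value to 32 bits agree exactly
     when the value's most significant bit is clear.  */
  int main (void)
  {
    for (unsigned v = 0; v < 256; v++)
      {
        uint32_t zext = (uint8_t) v;
        uint32_t sext = (uint32_t) (int32_t) (int8_t) v;
        if ((zext == sext) != ((v & 0x80) == 0))
          printf ("mismatch at 0x%02x\n", v);   /* never triggers */
      }
    return 0;
  }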

Thanks,
Richard.

 Jakub


[c++-concepts]

2014-08-06 Thread Andrew Sutton
Rewrite the constraint-checking code so that it doesn't instantiate
concept definitions. Instead of doing a simple constexpr evaluation on
the associated constraints, this now iterates over the decomposed
assumptions to determine satisfaction.

2014-08-06  Andrew Sutton  andrew.n.sut...@gmail.com
* gcc/cp/constraints.c (tsubst_requires_body, instantiate_requirements):
Lift the unevaluated operand guard to the entire constraint expression.
(check_satisfied, all_constraints_satisfied,
any_conjunctions_satisfied): Rewrite constraint checking to use
atomic constraints. Prevents instantiation of concepts.
(check_diagnostic_constraints): Recursively decompose and check
constraints for fine-grain diagnostics.
(diagnose_*): Use new constraint checking function.

Andrew Sutton
Index: gcc/cp/constraint.cc
===
--- gcc/cp/constraint.cc	(revision 213130)
+++ gcc/cp/constraint.cc	(working copy)
@@ -1117,7 +1117,6 @@ tsubst_local_parms (tree t,
 tree
 tsubst_requirement_body (tree t, tree args, tree in_decl)
 {
-  cp_unevaluated guard;
   tree r = NULL_TREE;
   while (t)
 {
@@ -1143,6 +1142,7 @@ tsubst_requires_expr (tree t, tree args,
   return finish_requires_expr (p, r);
 }
 
+
 // Substitute ARGS into the valid-expr expression T.
 tree
 tsubst_validexpr_expr (tree t, tree args, tree in_decl)
@@ -1207,6 +1207,7 @@ tsubst_nested_req (tree t, tree args, tr
 tree
 instantiate_requirements (tree reqs, tree args, bool do_not_fold)
 {
+  cp_unevaluated guard;
   if (do_not_fold)
 ++processing_template_decl;
   tree r = tsubst_expr (reqs, args, tf_none, NULL_TREE, false);
@@ -1248,22 +1249,55 @@ tsubst_constraint_info (tree ci, tree ar
 // evaluation of constraints.
 
 namespace {
-// Returns true if the requirements expression REQS is satisfied
-// and false otherwise. The requirements are checked by simply 
-// evaluating REQS as a constant expression.
-static inline bool
-check_requirements (tree reqs)
+// Returns true iff the atomic constraint, REQ, is satisfied. This
+// is the case when substitution succeeds and the resulting expression
+// evaluates to true.
+static bool
+check_satisfied (tree req, tree args) 
 {
+  // Instantiate and evaluate the requirements. 
+  req = instantiate_requirements (req, args, false);
+  if (req == error_mark_node)
+return false;
+
   // Reduce any remaining TRAIT_EXPR nodes before evaluating.
-  reqs = fold_non_dependent_expr (reqs);
+  req = fold_non_dependent_expr (req);
 
   // Requirements are satisfied when REQS evaluates to true.
-  return cxx_constant_value (reqs) == boolean_true_node;
+  tree result = cxx_constant_value (req);
+
+  return result == boolean_true_node;
+}
+
+// Returns true iff all atomic constraints in the list are satisfied.
+static bool
+all_constraints_satisfied (tree reqs, tree args)
+{
+  int n = TREE_VEC_LENGTH (reqs);
+  for (int i = 0; i < n; ++i)
+{
+  tree req = TREE_VEC_ELT (reqs, i);
+  if (!check_satisfied (req, args))
+return false;
+}
+  return true;
 }
 
-// Returns true if the requirements expression REQS is satisfied 
-// and false otherwise. The requirements are checked by first
-// instantiating REQS and then evaluating it as a constant expression.
+// Returns true if any conjunction of assumed requirements are satisfied.
+static bool
+any_conjunctions_satisfied (tree reqs, tree args)
+{
+  int n = TREE_VEC_LENGTH (reqs);
+  for (int i = 0; i < n; ++i)
+{
+  tree con = TREE_VEC_ELT (reqs, i);
+  if (all_constraints_satisfied (con, args))
+return true;
+}
+  return false;
+}
+
+// Returns true iff the assumptions in REQS are satisfied.
 static inline bool
 check_requirements (tree reqs, tree args)
 {
@@ -1272,11 +1306,7 @@ check_requirements (tree reqs, tree args
   if (args && uses_template_parms (args))
 return true;
 
-  // Instantiate and evaluate the requirements. 
-  reqs = instantiate_requirements (reqs, args, false);
-  if (reqs == error_mark_node)
-return false;
-  return check_requirements (reqs);
+  return any_conjunctions_satisfied (reqs, args);
 }
 } // namespace
 
@@ -1295,7 +1325,7 @@ check_constraints (tree cinfo)
   // all remaining expressions that are not constant expressions
   // (e.g., template-id expressions).
   else
-return check_requirements (CI_ASSOCIATED_REQS (cinfo), NULL_TREE);
+return check_requirements (CI_ASSUMPTIONS (cinfo), NULL_TREE);
 }
 
 // Check the constraints in CINFO against the given ARGS, returning
@@ -1309,8 +1339,9 @@ check_constraints (tree cinfo, tree args
   // Invlaid requirements cannot be satisfied.
   else if (!valid_requirements_p (cinfo))
 return false;
-  else
-return check_requirements (CI_ASSOCIATED_REQS (cinfo), args);
+  else {
+return check_requirements (CI_ASSUMPTIONS (cinfo), args);
+  }
 }
 
 // Check the constraints of the declaration or type T, against 
@@ -1322,6 

Re: [PATCH, ivopt] Try aligned offset when get_address_cost

2014-08-06 Thread Richard Biener
On Wed, Aug 6, 2014 at 8:34 AM, Zhenqiang Chen
zhenqiang.c...@linaro.org wrote:
 On 5 August 2014 21:59, Richard Biener richard.guent...@gmail.com wrote:
 On Mon, Aug 4, 2014 at 11:09 AM, Zhenqiang Chen zhenqiang.c...@arm.com 
 wrote:


 -Original Message-
 From: Bin.Cheng [mailto:amker.ch...@gmail.com]
 Sent: Monday, August 04, 2014 4:41 PM
 To: Zhenqiang Chen
 Cc: gcc-patches List
 Subject: Re: [PATCH, ivopt] Try aligned offset when get_address_cost

 On Mon, Aug 4, 2014 at 2:28 PM, Zhenqiang Chen
 zhenqiang.c...@arm.com wrote:
  Hi,
 
  For some TARGET, like ARM THUMB1, the offset in load/store should be
  nature aligned. But in function get_address_cost, when computing
  max_offset, it only tries byte-aligned offsets:
 
((unsigned HOST_WIDE_INT) 1  i) - 1
 
  which can not meet thumb_legitimate_offset_p check called from
  thumb1_legitimate_address_p for HImode and SImode.
 
  The patch adds additional try for aligned offset:
 
((unsigned HOST_WIDE_INT) 1  i) - GET_MODE_SIZE (address_mode).
 
  Bootstrap and no make check regression on X86-64.
  No make check regression on qemu for Cortex-m0 and Cortex-m3.
  For Cortex-m0, no performance changes with coremark and dhrystone.
  Coremark code size is ~0.44 smaller. And eembcv2 code size is ~0.22
  smaller. CSiBE code size is ~0.05% smaller.
 
  OK for trunk?
 
  Thanks!
  -Zhenqiang
 
  ChangeLog
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@arm.com
 
  * tree-ssa-loop-ivopts.c (get_address_cost): Try aligned offset.
 
  testsuite/ChangeLog:
  2014-08-04  Zhenqiang Chen  zhenqiang.c...@arm.com
 
  * gcc.target/arm/get_address_cost_aligned_max_offset.c: New
 test.
 
  diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
  index 3b4a6cd..562122a 100644
  --- a/gcc/tree-ssa-loop-ivopts.c
  +++ b/gcc/tree-ssa-loop-ivopts.c
  @@ -3308,6 +3308,18 @@ get_address_cost (bool symbol_present, bool
  var_present,
XEXP (addr, 1) = gen_int_mode (off, address_mode);
if (memory_address_addr_space_p (mem_mode, addr, as))
  break;
  + /* For some TARGET, like ARM THUMB1, the offset should be
 nature
  +aligned.  Try an aligned offset if address_mode is not
 QImode.
  */
  + off = (address_mode == QImode)
  +   ? 0
  +   : ((unsigned HOST_WIDE_INT) 1  i)
  +   - GET_MODE_SIZE (address_mode);
  + if (off  0)
  +   {
  + XEXP (addr, 1) = gen_int_mode (off, address_mode);
  + if (memory_address_addr_space_p (mem_mode, addr, as))
  +   break;
  +   }
 Hi, Why not just check address_mode != QImode? Set off to 0 then check
 it
 seems unnecessary.

 Thanks for the comments.

 ((unsigned HOST_WIDE_INT) 1  i) - GET_MODE_SIZE (address_mode) might be a
 negative value except QImode. A negative value can not be max_offset. So we
 do not need to check it.

 For QImode, ((unsigned HOST_WIDE_INT) 1  i) - GET_MODE_SIZE
 (address_mode) == ((unsigned HOST_WIDE_INT) 1  i) - 1. It is already
 checked. So no need to check it again.

 I think the compiler can optimize the patch like

 diff --git a/gcc/tree-ssa-loop-ivopts.c b/gcc/tree-ssa-loop-ivopts.c
 index 3b4a6cd..213598a 100644
 --- a/gcc/tree-ssa-loop-ivopts.c
 +++ b/gcc/tree-ssa-loop-ivopts.c
 @@ -3308,6 +3308,19 @@ get_address_cost (bool symbol_present, bool
 var_present,
   XEXP (addr, 1) = gen_int_mode (off, address_mode);
   if (memory_address_addr_space_p (mem_mode, addr, as))
 break;
 + /* For some TARGET, like ARM THUMB1, the offset should be nature
 +aligned.  Try an aligned offset if address_mode is not QImode.
 */
 + if (address_mode != QImode)
 +   {
 + off = ((unsigned HOST_WIDE_INT) 1  i)
 + - GET_MODE_SIZE (address_mode);
 + if (off  0)
 +   {
 + XEXP (addr, 1) = gen_int_mode (off, address_mode);
 + if (memory_address_addr_space_p (mem_mode, addr, as))
 +   break;
 +   }
 +   }
 }
if (i == -1)
  off = 0;

 But is off now guaranteed to be the max value? (1 << (i-1)) - 1 for
 small i is larger than (1 << i) - GET_MODE_SIZE (address_mode).

 That is, I think you want to guard this with 1 << (i - 1) >=
 GET_MODE_SIZE (address_mode)?

 Yes. Without off > 0, it cannot guarantee that off is the max value.
 With off > 0, it can guarantee that

 (1 << i) - GET_MODE_SIZE (address_mode) is greater than (1 << (i-1)) - 1.

 You don't adjust the negative offset side - why?

 -((unsigned HOST_WIDE_INT) 1 << i) is already the min aligned offset.
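
A quick numeric sanity check of the arithmetic above, as a tiny C program
(i = 7 and a 4-byte SImode are assumed values for illustration only, not
taken from the patch):

  #include <stdio.h>

  int
  main (void)
  {
    unsigned i = 7, mode_size = 4;              /* assumed: SImode, 4 bytes */
    long byte_aligned = (1L << i) - 1;          /* 127: not naturally aligned */
    long aligned      = (1L << i) - mode_size;  /* 124: aligned and > 0 */
    long smaller      = (1L << (i - 1)) - 1;    /* 63 */
    printf ("%ld %ld %ld\n", byte_aligned, aligned, smaller);
    return 0;  /* 124 > 63, so the positive aligned candidate is the max */
  }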

Ok.

The patch is ok then.

Thanks,
Richard.

 Thanks!
 -Zhenqiang

 Richard.


 Thanks,
 bin
  }
 if (i == -1)
   off = 0;
  diff --git
  a/gcc/testsuite/gcc.target/arm/get_address_cost_aligned_max_offset.c
  

Re: [PATCH 2/2] Enable elimination of zext/sext

2014-08-06 Thread Kugan
On 06/08/14 22:09, Richard Biener wrote:
 On Tue, Aug 5, 2014 at 4:21 PM, Jakub Jelinek ja...@redhat.com wrote:
 On Tue, Aug 05, 2014 at 04:17:41PM +0200, Richard Biener wrote:
 what's the semantic of setting SRP_SIGNED_AND_UNSIGNED
 on the subreg?  That is, for the created (subreg:lhs_mode
 (reg:PROMOTE_MODE of ssa N))?

 SRP_SIGNED_AND_UNSIGNED on a subreg should mean that
 the subreg is both zero and sign extended, which means
 that the topmost bit of the narrower mode is known to be zero,
 and all bits above it in the wider mode are known to be zero too.
 SRP_SIGNED means that the topmost bit of the narrower mode is
 either 0 or 1 and depending on that the above wider mode bits
 are either all 0 or all 1.
 SRP_UNSIGNED means that regardless of the topmost bit value,
 all above wider mode bits are 0.
 
 Ok, then from the context of the patch we already know that
 either SRP_UNSIGNED or SRP_SIGNED is true which means
 that the value is sign- or zero-extended.
 
 I suppose inside promoted_for_type_p
 TYPE_MODE (TREE_TYPE (ssa)) == lhs_mode, I'm not sure
 why you pass !unsignedp as lhs_uns.

In expand_expr_real_1, it is already known that it is promoted for
unsigned_p and we are setting SUBREG_PROMOTED_SET (temp, unsignedp).

If we can prove that it is also promoted for !unsignedp, we can set
SUBREG_PROMOTED_SET (temp, SRP_SIGNED_AND_UNSIGNED).

promoted_for_type_p should prove this based on the value range info.

 
 Now, from 'ssa' alone we can't tell anything about a larger mode
 registers value if that is either zero- or sign-extended.  But we
 know that those bits are properly zero-extended if unsignedp
 and properly sign-extended if !unsignedp?
 
 So what the predicate tries to prove is that sign- and zero-extending
 results in the same larger-mode value.  This is true if the
 MSB of the smaller mode is not set.
 
 Let's assume that smaller mode is that of 'ssa' then the test
 is just
 
   return (!tree_int_cst_sign_bit (min)  !tree_int_cst_sign_bit (max));
 
 no?

Hmm, is this because we will never have a call to promoted_for_type_p
with the same sign (ignoring PROMOTE_MODE) for 'ssa' and the larger mode?
The case with the larger mode signed and 'ssa' unsigned will not work,
so the larger mode unsigned and 'ssa' signed is the only case we need
to consider.

However, with PROMOTE_MODE, won't we miss some cases with this?

Thanks,
Kugan




Re: [PATCH 2/2] Enable elimination of zext/sext

2014-08-06 Thread Richard Biener
On Wed, Aug 6, 2014 at 3:21 PM, Kugan kugan.vivekanandara...@linaro.org wrote:
 On 06/08/14 22:09, Richard Biener wrote:
 On Tue, Aug 5, 2014 at 4:21 PM, Jakub Jelinek ja...@redhat.com wrote:
 On Tue, Aug 05, 2014 at 04:17:41PM +0200, Richard Biener wrote:
 what's the semantic of setting SRP_SIGNED_AND_UNSIGNED
 on the subreg?  That is, for the created (subreg:lhs_mode
 (reg:PROMOTE_MODE of ssa N))?

 SRP_SIGNED_AND_UNSIGNED on a subreg should mean that
 the subreg is both zero and sign extended, which means
 that the topmost bit of the narrower mode is known to be zero,
 and all bits above it in the wider mode are known to be zero too.
 SRP_SIGNED means that the topmost bit of the narrower mode is
 either 0 or 1 and depending on that the above wider mode bits
 are either all 0 or all 1.
 SRP_UNSIGNED means that regardless of the topmost bit value,
 all above wider mode bits are 0.

 Ok, then from the context of the patch we already know that
 either SRP_UNSIGNED or SRP_SIGNED is true which means
 that the value is sign- or zero-extended.

 I suppose inside promoted_for_type_p
 TYPE_MODE (TREE_TYPE (ssa)) == lhs_mode, I'm not sure
 why you pass !unsignedp as lhs_uns.

 In expand_expr_real_1, it is already known that it is promoted for
 unsigned_p and we are setting SUBREG_PROMOTED_SET (temp, unsignedp).

 If we can prove that it is also promoted for !unsignedp, we can set
 SUBREG_PROMOTED_SET (temp, SRP_SIGNED_AND_UNSIGNED).

 promoted_for_type_p should prove this based on the value range info.


 Now, from 'ssa' alone we can't tell anything about a larger mode
 registers value if that is either zero- or sign-extended.  But we
 know that those bits are properly zero-extended if unsignedp
 and properly sign-extended if !unsignedp?

 So what the predicate tries to prove is that sign- and zero-extending
 results in the same larger-mode value.  This is true if the
 MSB of the smaller mode is not set.

 Let's assume that smaller mode is that of 'ssa' then the test
 is just

   return (!tree_int_cst_sign_bit (min)  !tree_int_cst_sign_bit (max));

 no?

 Hmm, is this because we will never have a call to promoted_for_type_p
 with the same sign (ignoring PROMOTE_MODE) for 'ssa' and the larger mode?
 The case with the larger mode signed and 'ssa' unsigned will not work,
 so the larger mode unsigned and 'ssa' signed is the only case we need
 to consider.

 However, with PROMOTE_MODE, won't we miss some cases with this?

No, PROMOTE_MODE will still either sign- or zero-extend.  If either
results in zeros in the upper bits then PROMOTE_MODE doesn't matter.

Richard.

 Thanks,
 Kugan




Re: [PATCH libstdc++ v3] - Add xmethods for std::vector and std::unique_ptr

2014-08-06 Thread Siva Chandra
On Wed, Aug 6, 2014 at 2:47 AM, Jonathan Wakely jwakely@gmail.com wrote:
 Some GNU/Linux distros already build GDB using Python3, so they will
 be unable to use these xmethods.

 However, I think it can be committed now and fixed later.

For the libstdc++ side, it is a very small fix (using 'print' with
function-call syntax in one place). I have attached a new patch which
has this fix.

 OK for trunk. Do you need me to do the commit for you?

Yes, you will need to commit it for me.
diff --git a/libstdc++-v3/python/libstdcxx/v6/xmethods.py 
b/libstdc++-v3/python/libstdcxx/v6/xmethods.py
new file mode 100644
index 000..f20f411
--- /dev/null
+++ b/libstdc++-v3/python/libstdcxx/v6/xmethods.py
@@ -0,0 +1,103 @@
+# Xmethods for libstdc++.
+
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see http://www.gnu.org/licenses/.
+
+import gdb
+import gdb.xmethod
+import re
+
+matcher_name_prefix = 'libstdc++::'
+
+# Xmethods for std::vector
+
+class VectorSizeWorker(gdb.xmethod.XMethodWorker):
+def __init__(self):
+self.name = 'size'
+self.enabled = True
+
+def get_arg_types(self):
+return None
+
+def __call__(self, obj):
+return obj['_M_impl']['_M_finish'] - obj['_M_impl']['_M_start']
+
+class VectorSubscriptWorker(gdb.xmethod.XMethodWorker):
+def __init__(self):
+self.name = 'operator[]'
+self.enabled = True
+
+def get_arg_types(self):
+return gdb.lookup_type('std::size_t')
+
+def __call__(self, obj, subscript):
+return obj['_M_impl']['_M_start'][subscript]
+
+class VectorMethodsMatcher(gdb.xmethod.XMethodMatcher):
+def __init__(self):
+gdb.xmethod.XMethodMatcher.__init__(self,
+matcher_name_prefix + 'vector')
+self._subscript_worker = VectorSubscriptWorker()
+self._size_worker = VectorSizeWorker()
+self.methods = [self._subscript_worker, self._size_worker]
+
+def match(self, class_type, method_name):
+if not re.match('^std::vector.*$', class_type.tag):
+return None
+if method_name == 'operator[]' and self._subscript_worker.enabled:
+return self._subscript_worker
+elif method_name == 'size' and self._size_worker.enabled:
+return self._size_worker
+
+# Xmethods for std::unique_ptr
+
+class UniquePtrGetWorker(gdb.xmethod.XMethodWorker):
+def __init__(self):
+self.name = 'get'
+self.enabled = True
+
+def get_arg_types(self):
+return None
+
+def __call__(self, obj):
+return obj['_M_t']['_M_head_impl']
+
+class UniquePtrDerefWorker(UniquePtrGetWorker):
+def __init__(self):
+UniquePtrGetWorker.__init__(self)
+self.name = 'operator*'
+
+def __call__(self, obj):
+return UniquePtrGetWorker.__call__(self, obj).dereference()
+
+class UniquePtrMethodsMatcher(gdb.xmethod.XMethodMatcher):
+def __init__(self):
+gdb.xmethod.XMethodMatcher.__init__(self,
+matcher_name_prefix + 'unique_ptr')
+self._get_worker = UniquePtrGetWorker()
+self._deref_worker = UniquePtrDerefWorker()
+self.methods = [self._get_worker, self._deref_worker]
+
+def match(self, class_type, method_name):
+if not re.match('^std::unique_ptr.*$', class_type.tag):
+return None
+if method_name == 'operator*' and self._deref_worker.enabled:
+return self._deref_worker
+elif method_name == 'get' and self._get_worker.enabled:
+return self._get_worker
+
+def register_libstdcxx_xmethods(locus):
+gdb.xmethod.register_xmethod_matcher(locus, VectorMethodsMatcher())
+gdb.xmethod.register_xmethod_matcher(locus, UniquePtrMethodsMatcher())
diff --git a/libstdc++-v3/testsuite/lib/gdb-test.exp 
b/libstdc++-v3/testsuite/lib/gdb-test.exp
index 9cb6ecf..1a68217 100644
--- a/libstdc++-v3/testsuite/lib/gdb-test.exp
+++ b/libstdc++-v3/testsuite/lib/gdb-test.exp
@@ -79,7 +79,7 @@ proc whatis-test {var result} {
 #
 # Argument 0 is the marker on which to put a breakpoint
 # Argument 2 handles expected failures and the like
-proc gdb-test { marker {selector {}} } {
+proc gdb-test { marker {selector {}} {load_xmethods 0} } {
 if { ![isnative] || [is_remote target] } { return }
 
     if {[string length $selector] > 0} {
@@ 

[PATCH] Fix PR62034

2014-08-06 Thread Richard Biener

The following avoids recursing for popping SCCs.

LTO bootstrapped and tested on x86_64-unknown-linux-gnu, applied.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

PR lto/62034
* lto-streamer-in.c (lto_input_tree_1): Assert we do not read
SCCs here.
(lto_input_tree): Pop SCCs here.

Index: gcc/lto-streamer-in.c
===
--- gcc/lto-streamer-in.c   (revision 213660)
+++ gcc/lto-streamer-in.c   (working copy)
@@ -1324,15 +1324,7 @@ lto_input_tree_1 (struct lto_input_block
   streamer_tree_cache_append (data_in->reader_cache, result, hash);
 }
   else if (tag == LTO_tree_scc)
-{
-  unsigned len, entry_len;
-
-  /* Input and skip the SCC.  */
-  lto_input_scc (ib, data_in, &len, &entry_len);
-
-  /* Recurse.  */
-  return lto_input_tree (ib, data_in);
-}
+gcc_unreachable ();
   else
 {
   /* Otherwise, materialize a new node from IB.  */
@@ -1345,7 +1337,15 @@ lto_input_tree_1 (struct lto_input_block
 tree
 lto_input_tree (struct lto_input_block *ib, struct data_in *data_in)
 {
-  return lto_input_tree_1 (ib, data_in, streamer_read_record_start (ib), 0);
+  enum LTO_tags tag;
+
+  /* Input and skip SCCs.  */
+  while ((tag = streamer_read_record_start (ib)) == LTO_tree_scc)
+{
+  unsigned len, entry_len;
+  lto_input_scc (ib, data_in, &len, &entry_len);
+}
+  return lto_input_tree_1 (ib, data_in, tag, 0);
 }
 
 


[PATCH][match-and-simplify] Fix ICE with updating EH info

2014-08-06 Thread Richard Biener

Wasn't done properly.

Committed.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* tree-ssa-forwprop.c (pass_forwprop::execute): Properly
clean EH info on folded stmts.

Index: gcc/tree-ssa-forwprop.c
===
--- gcc/tree-ssa-forwprop.c (revision 213651)
+++ gcc/tree-ssa-forwprop.c (working copy)
@@ -3663,11 +3663,12 @@ pass_forwprop::execute (function *fun)
   !gsi_end_p (gsi); gsi_next (&gsi))
{
  gimple stmt = gsi_stmt (gsi);
+ gimple orig_stmt = stmt;
 
  if (fold_stmt (gsi, fwprop_ssa_val))
{
  stmt = gsi_stmt (gsi);
- if (maybe_clean_or_replace_eh_stmt (stmt, stmt)
+ if (maybe_clean_or_replace_eh_stmt (orig_stmt, stmt)
      && gimple_purge_dead_eh_edges (bb))
cfg_changed = true;
  update_stmt (stmt);


Re: [PATCH, trans-mem, PR 61393] Copy tm_clone field of cgraph_node when cloning the node

2014-08-06 Thread Martin Jambor
Hi,

On Wed, Jul 30, 2014 at 06:56:05PM +0200, Martin Jambor wrote:
 Hi,
 
 IPA-CP can wreak havoc on transactional memory support as described in
 the summary of the PR in bugzilla.  It seems the cause is that IPA-CP
 clones of nodes created by trans-mem do not have their tm_clone flag
 set.  For release branches we have decided to simply disable IPA-CP of
 trans-mem clones but for trunk we'd like to avoid this.  I am not 100%
 sure that just copying the flag is OK but it seems that it works for
 the provided testcase and nobody from the trans-mem people has
 commented in bugzilla for over a month.  So I suggest we commit this
 patch and wait and see if something breaks.  Hopefully nothing will.
 

Honza has approved the patch in person and so I have committed it as
revision 213666 after re-testing.  Hopefully it does not break
anything, if it does then read the paragraph above and remember it is
not really my fault :-)

Martin
 

 Bootstrapped and tested on x86_64-linux.  OK for trunk?
 
 Thanks,
 
 Martin
 
 
 2014-07-29  Martin Jambor  mjam...@suse.cz
 
   PR ipa/61393
   * cgraphclones.c (cgraph_node::create_clone): Also copy tm_clone.
 
 diff --git a/gcc/cgraphclones.c b/gcc/cgraphclones.c
 index f097da8..c04b5c8 100644
 --- a/gcc/cgraphclones.c
 +++ b/gcc/cgraphclones.c
 @@ -423,6 +423,7 @@ cgraph_node::create_clone (tree decl, gcov_type 
 gcov_count, int freq,
    new_node->count = count;
    new_node->frequency = frequency;
    new_node->tp_first_run = tp_first_run;
 +  new_node->tm_clone = tm_clone;
 
    new_node->clone.tree_map = NULL;
    new_node->clone.args_to_skip = args_to_skip;


Re: Remove unnecessary and harmful fixincludes for Android

2014-08-06 Thread Bruce Korb
Hi,

On Wed, Aug 6, 2014 at 4:51 AM, Alexander Ivchenko aivch...@gmail.com wrote:
 We still have to remove fix for compiler.h:

Correct.  Thank you.


 Bruce, I think I formally have to ask for your approval again :)

I don't think so.  You've selected one of the changes we wrote about,
so "With one of the two changes, the patch is approved." is sufficient,
as long as you have made either of the proposed changes.

But to be clear:  Approved.  (: again :)


[PINGv3][PATCH] Fix for PR 61561

2014-08-06 Thread Marat Zakirov

On 07/30/2014 04:56 PM, Marat Zakirov wrote:

On 07/23/2014 05:33 PM, Marat Zakirov wrote:

Hi all!

This is a friendly reminder message.

On 07/17/2014 03:22 PM, Marat Zakirov wrote:


On 07/16/2014 01:32 PM, Kyrill Tkachov wrote:


On 16/07/14 10:22, Marat Zakirov wrote:

Christophe,

Please look at a new patch.  Draft tests are OK.
I'll ask your commit approval when full regression 
(ARM/thumb1/thumb2)

tests are done.

Hi Marat,

I was about to propose the thumb2.md hunk myself, but I'll defer to 
the arm maintainers to comment on the other parts.


Also, in the ChangeLog it is helpful to specify which patterns are 
being affected, so in your case it would be something like:


* config/arm/thumb1.md (*thumb1_movhi_insn): Handle stack pointer.
(*thumb1_movqi_insn): Likewise.
* config/arm/thumb2.md (*thumb2_movhi_insn): Ditto.


Kyrill



Christophe, Kirill,

finally I've finished regression testing.
Please check if my patch is OK for trunk.

The following configures were used:

configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu 
--target=arm-linux-gnueabi --with-interwork --enable-long-long 
--enable-languages=c,c++,fortran --enable-shared --with-gnu-as 
--with-gnu-ld --with-arch=$ARCH --with-mode=$MODE


Thumb-1

$ARCH=armv4t
$MODE=thumb

Thumb-2

$ARCH=armv7
$MODE=thumb

ARM

$ARCH=armv7-a
$MODE=arm

No regressions detected, test pr61561.c passed in all cases.

Thank you all.
--Marat







gcc/ChangeLog:

2014-07-16  Marat Zakirov  m.zaki...@samsung.com

	* config/arm/thumb1.md (*thumb1_movhi_insn): Handle stack pointer.
	(*thumb1_movqi_insn): Likewise.
	* config/arm/thumb2.md (*thumb2_movhi_insn): Likewise.

diff --git a/gcc/config/arm/thumb1.md b/gcc/config/arm/thumb1.md
index c044fd5..47b5cbd 100644
--- a/gcc/config/arm/thumb1.md
+++ b/gcc/config/arm/thumb1.md
@@ -708,7 +708,7 @@
 
 (define_insn *thumb1_movhi_insn
   [(set (match_operand:HI 0 nonimmediate_operand =l,l,m,*r,*h,l)
-	(match_operand:HI 1 general_operand   l,m,l,*h,*r,I))]
+	(match_operand:HI 1 general_operand   lk,m,l,*h,*r,I))]
   TARGET_THUMB1
   && (   register_operand (operands[0], HImode)
|| register_operand (operands[1], HImode))
@@ -762,7 +762,7 @@
 
 (define_insn *thumb1_movqi_insn
   [(set (match_operand:QI 0 nonimmediate_operand =l,l,m,*r,*h,l)
-	(match_operand:QI 1 general_operand  l, m,l,*h,*r,I))]
+	(match_operand:QI 1 general_operand  lk, m,l,*h,*r,I))]
   TARGET_THUMB1
   && (   register_operand (operands[0], QImode)
|| register_operand (operands[1], QImode))
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
index 6ea0810..7228069 100644
--- a/gcc/config/arm/thumb2.md
+++ b/gcc/config/arm/thumb2.md
@@ -318,7 +318,7 @@
 ;; of the messiness associated with the ARM patterns.
 (define_insn *thumb2_movhi_insn
   [(set (match_operand:HI 0 nonimmediate_operand =r,r,l,r,m,r)
-	(match_operand:HI 1 general_operand  r,I,Py,n,r,m))]
+	(match_operand:HI 1 general_operand  rk,I,Py,n,r,m))]
   TARGET_THUMB2
   && (register_operand (operands[0], HImode)
  || register_operand (operands[1], HImode))


[PATCH] Move POINTER_PLUS_EXPR folding to fold-const.c

2014-08-06 Thread Marek Polacek
As discussed in the other thread, I can't just remove folding from the
C FE and implement it on GIMPLE level, because that regressed some of
those not-really-kosher static initializers.  Instead, fold-const.c
has to be taught how to fold PTR0 - (PTR1 p+ A).  (Now it sounds so
obvious.)  I added some test to that effect.  And sorry - I'm clueless
about the PSImode targets :(.
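
For reference, the kind of initializer this is about looks roughly like the
following made-up example (in the spirit of the regressed cases, not copied
from any report); it is only accepted because the compiler folds the
difference down to a constant:

  int arr[4];
  int *q = arr;
  /* Not a strictly conforming constant expression, but traditionally
     accepted because q - (q + 2) is folded to the constant -2.  */
  static __PTRDIFF_TYPE__ d = q - (q + 2);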

Bootstrapped/regtested on x86_64-linux, ok for trunk?

2014-08-06  Marek Polacek  pola...@redhat.com

* fold-const.c (fold_binary_loc): Add folding of 
(PTR0 - (PTR1 p+ A) -> (PTR0 - PTR1) - A.
c/
* c-typeck.c (pointer_diff): Remove P - (P + CST) optimization.
testsuite/
* gcc.dg/fold-reassoc-3.c: New test.

diff --git gcc/c/c-typeck.c gcc/c/c-typeck.c
index 1b664bd..998e386 100644
--- gcc/c/c-typeck.c
+++ gcc/c/c-typeck.c
@@ -3460,7 +3460,6 @@ pointer_diff (location_t loc, tree op0, tree op1)
   addr_space_t as0 = TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (op0)));
   addr_space_t as1 = TYPE_ADDR_SPACE (TREE_TYPE (TREE_TYPE (op1)));
   tree target_type = TREE_TYPE (TREE_TYPE (op0));
-  tree con0, con1, lit0, lit1;
   tree orig_op1 = op1;
 
   /* If the operands point into different address spaces, we need to
@@ -3490,7 +3489,6 @@ pointer_diff (location_t loc, tree op0, tree op1)
   else
 inttype = restype;
 
-
   if (TREE_CODE (target_type) == VOID_TYPE)
 pedwarn (loc, OPT_Wpointer_arith,
 pointer of type %void *% used in subtraction);
@@ -3498,50 +3496,6 @@ pointer_diff (location_t loc, tree op0, tree op1)
 pedwarn (loc, OPT_Wpointer_arith,
 pointer to a function used in subtraction);
 
-  /* If the conversion to ptrdiff_type does anything like widening or
- converting a partial to an integral mode, we get a convert_expression
- that is in the way to do any simplifications.
- (fold-const.c doesn't know that the extra bits won't be needed.
- split_tree uses STRIP_SIGN_NOPS, which leaves conversions to a
- different mode in place.)
- So first try to find a common term here 'by hand'; we want to cover
- at least the cases that occur in legal static initializers.  */
-  if (CONVERT_EXPR_P (op0)
-      && (TYPE_PRECISION (TREE_TYPE (op0))
- == TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (op0, 0)
-con0 = TREE_OPERAND (op0, 0);
-  else
-con0 = op0;
-  if (CONVERT_EXPR_P (op1)
-      && (TYPE_PRECISION (TREE_TYPE (op1))
- == TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (op1, 0)
-con1 = TREE_OPERAND (op1, 0);
-  else
-con1 = op1;
-
-  if (TREE_CODE (con0) == POINTER_PLUS_EXPR)
-{
-  lit0 = TREE_OPERAND (con0, 1);
-  con0 = TREE_OPERAND (con0, 0);
-}
-  else
-lit0 = integer_zero_node;
-
-  if (TREE_CODE (con1) == POINTER_PLUS_EXPR)
-{
-  lit1 = TREE_OPERAND (con1, 1);
-  con1 = TREE_OPERAND (con1, 0);
-}
-  else
-lit1 = integer_zero_node;
-
-  if (operand_equal_p (con0, con1, 0))
-{
-  op0 = lit0;
-  op1 = lit1;
-}
-
-
   /* First do the subtraction as integers;
  then drop through to build the divide operator.
  Do not do default conversions on the minus operator
diff --git gcc/fold-const.c gcc/fold-const.c
index 7180662..8d66957 100644
--- gcc/fold-const.c
+++ gcc/fold-const.c
@@ -10831,6 +10831,19 @@ fold_binary_loc (location_t loc,
  if (tmp)
return fold_build2_loc (loc, PLUS_EXPR, type, tmp, arg01);
}
+ /* PTR0 - (PTR1 p+ A) -> (PTR0 - PTR1) - A, assuming PTR0 - PTR1
+simplifies. */
+ else if (TREE_CODE (arg1) == POINTER_PLUS_EXPR)
+   {
+ tree arg10 = fold_convert_loc (loc, type,
+TREE_OPERAND (arg1, 0));
+ tree arg11 = fold_convert_loc (loc, type,
+TREE_OPERAND (arg1, 1));
+ tree tmp = fold_binary_loc (loc, MINUS_EXPR, type, arg0,
+ fold_convert_loc (loc, type, arg10));
+ if (tmp)
+   return fold_build2_loc (loc, MINUS_EXPR, type, tmp, arg11);
+   }
}
   /* A - (-B) - A + B */
   if (TREE_CODE (arg1) == NEGATE_EXPR)
diff --git gcc/testsuite/gcc.dg/fold-reassoc-3.c 
gcc/testsuite/gcc.dg/fold-reassoc-3.c
index e69de29..313fb98 100644
--- gcc/testsuite/gcc.dg/fold-reassoc-3.c
+++ gcc/testsuite/gcc.dg/fold-reassoc-3.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options -fdump-tree-original } */
+
+int i;
+int *p = i;
+static __PTRDIFF_TYPE__ d = p - (p + 1);
+
+void
+foo (void)
+{
+  int *q = &i;
+  static __PTRDIFF_TYPE__ e = q - (q + 1);
+}
+
+/* { dg-final { scan-tree-dump-not " - " "original" } } */
+/* { dg-final { scan-tree-dump-not " \\+ " "original" } } */
+/* { dg-final { cleanup-tree-dump orginal } } */

Marek


Re: [PINGv3][PATCH] Fix for PR 61561

2014-08-06 Thread Ramana Radhakrishnan



This is OK thanks.


Ramana


[PATCH][match-and-simplify] Fix codegen bug

2014-08-06 Thread Richard Biener

This avoids shadowing the outer ops[] array with inner ones
during code generation.  This makes the results of inner
expressions disappear ... oops.
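
The underlying C pitfall, shown in a tiny standalone example (the variable
name is only meant to mirror the generated ops[] arrays):

  #include <stdio.h>

  int
  main (void)
  {
    int ops[2] = { 0, 0 };
    {
      int ops[2];          /* inner declaration shadows the outer ops[] ...   */
      ops[0] = 42;         /* ... so this write goes to the inner array and   */
    }                      /*     its value "disappears" with the scope.      */
    printf ("%d\n", ops[0]);   /* prints 0, not 42 */
    return 0;
  }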

Applied.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* genmatch.c (gen_transform): Add depth parameter everywhere.
(expr::gen_transform): Make a depth dependent ops[] array
name and increase depth for siblings.
(gen_gimple, gen_generic): Pass 1 as depth to gen_transform
method calls.

Index: gcc/genmatch.c
===
--- gcc/genmatch.c  (revision 213651)
+++ gcc/genmatch.c  (working copy)
@@ -209,14 +209,15 @@ struct operand {
   enum op_type { OP_PREDICATE, OP_EXPR, OP_CAPTURE, OP_C_EXPR };
   operand (enum op_type type_) : type (type_) {}
   enum op_type type;
-  virtual void gen_transform (FILE *f, const char *, bool) = 0;
+  virtual void gen_transform (FILE *f, const char *, bool, int) = 0;
 };
 
 struct predicate : public operand
 {
   predicate (const char *ident_) : operand (OP_PREDICATE), ident (ident_) {}
   const char *ident;
-  virtual void gen_transform (FILE *, const char *, bool) { gcc_unreachable 
(); }
+  virtual void gen_transform (FILE *, const char *, bool, int)
+{ gcc_unreachable (); }
 };
 
 struct e_operation {
@@ -233,7 +234,7 @@ struct expr : public operand
   void append_op (operand *op) { ops.safe_push (op); }
   e_operation *operation;
   vec<operand *> ops;
-  virtual void gen_transform (FILE *f, const char *, bool);
+  virtual void gen_transform (FILE *f, const char *, bool, int);
 };
 
 struct c_expr : public operand
@@ -245,7 +246,7 @@ struct c_expr : public operand
   vec<cpp_token> code;
   unsigned nr_stmts;
   char *fname;
-  virtual void gen_transform (FILE *f, const char *, bool);
+  virtual void gen_transform (FILE *f, const char *, bool, int);
 };
 
 struct capture : public operand
@@ -254,7 +255,7 @@ struct capture : public operand
   : operand (OP_CAPTURE), where (where_), what (what_) {}
   const char *where;
   operand *what;
-  virtual void gen_transform (FILE *f, const char *, bool);
+  virtual void gen_transform (FILE *f, const char *, bool, int);
 };
 
 template
@@ -678,15 +679,15 @@ check_no_user_id (simplify *s)
 /* Code gen off the AST.  */
 
 void
-expr::gen_transform (FILE *f, const char *dest, bool gimple)
+expr::gen_transform (FILE *f, const char *dest, bool gimple, int depth)
 {
   fprintf (f, {\n);
-  fprintf (f,   tree ops[%u], res;\n, ops.length ());
+  fprintf (f,   tree ops%d[%u], res;\n, depth, ops.length ());
   for (unsigned i = 0; i < ops.length (); ++i)
 {
   char dest[32];
-  snprintf (dest, 32,   ops[%u], i);
-  ops[i]->gen_transform (f, dest, gimple);
+  snprintf (dest, 32,   ops%d[%u], depth, i);
+  ops[i]->gen_transform (f, dest, gimple, depth + 1);
 }
   if (gimple)
 {
@@ -694,30 +695,30 @@ expr::gen_transform (FILE *f, const char
 fail if seq == NULL.  */
   fprintf (f,   if (!seq)\n
   {\n
-res = gimple_simplify (%s, TREE_TYPE (ops[0]),
-  operation-op-id);
+res = gimple_simplify (%s, TREE_TYPE (ops%d[0]),
+  operation-op-id, depth);
   for (unsigned i = 0; i < ops.length (); ++i)
-   fprintf (f, , ops[%u], i);
+   fprintf (f, , ops%d[%u], depth, i);
   fprintf (f, , seq, valueize);\n);
   fprintf (f,   if (!res) return false;\n);
   fprintf (f, }\n);
   fprintf (f,   else\n);
   fprintf (f, res = gimple_build (seq, UNKNOWN_LOCATION, %s, 
-  TREE_TYPE (ops[0]), operation-op-id);
+  TREE_TYPE (ops%d[0]), operation-op-id, depth);
   for (unsigned i = 0; i < ops.length (); ++i)
-   fprintf (f, , ops[%u], i);
+   fprintf (f, , ops%d[%u], depth, i);
   fprintf (f, , valueize);\n);
 }
   else
 {
   if (operation-op-kind == id_base::CODE)
-   fprintf (f,   res = fold_build%d (%s, TREE_TYPE (ops[0]),
-ops.length(), operation-op-id);
+   fprintf (f,   res = fold_build%d (%s, TREE_TYPE (ops%d[0]),
+ops.length(), operation-op-id, depth);
   else
fprintf (f,   res = build_call_expr (builtin_decl_implicit (%s), %d,
 operation-op-id, ops.length());
   for (unsigned i = 0; i < ops.length (); ++i)
-   fprintf (f, , ops[%u], i);
+   fprintf (f, , ops%d[%u], depth, i);
   fprintf (f, );\n);
 }
   fprintf (f,   %s = res;\n, dest);
@@ -725,7 +726,7 @@ expr::gen_transform (FILE *f, const char
 }
 
 void
-c_expr::gen_transform (FILE *f, const char *dest, bool)
+c_expr::gen_transform (FILE *f, const char *dest, bool, int)
 {
   /* If this expression has an outlined function variant, call it.  */
   if (fname)
@@ -772,7 +773,7 @@ c_expr::gen_transform (FILE *f, const ch
 }
 
 void
-capture::gen_transform (FILE *f, const char *dest, bool)
+capture::gen_transform (FILE *f, const char *dest, 

[PATCH][match-and-simplify] Robusten gimple_build against non-SSA context

2014-08-06 Thread Richard Biener

We fold stmts from non-SSA so when we simplify a single stmt
into multiple ones (like strcat (x, foo) -> memcpy (x + strlen (foo), 
)) then gimple_build fails because it unconditionally builds
SSA names.

Fixed.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* gimple-fold.c (gimple_build): Allow to be called from
non-SSA context.

Index: gcc/gimple-fold.c
===
--- gcc/gimple-fold.c   (revision 213651)
+++ gcc/gimple-fold.c   (working copy)
@@ -3733,7 +3733,10 @@ gimple_build (gimple_seq *seq, location_
   tree res = gimple_simplify (code, type, op0, seq, valueize);
   if (!res)
 {
-  res = make_ssa_name (type, NULL);
+  if (gimple_in_ssa_p (cfun))
+   res = make_ssa_name (type, NULL);
+  else
+   res = create_tmp_reg (type, NULL);
   gimple stmt;
   if (code == REALPART_EXPR
  || code == IMAGPART_EXPR
@@ -3763,7 +3766,10 @@ gimple_build (gimple_seq *seq, location_
   tree res = gimple_simplify (code, type, op0, op1, seq, valueize);
   if (!res)
 {
-  res = make_ssa_name (type, NULL);
+  if (gimple_in_ssa_p (cfun))
+   res = make_ssa_name (type, NULL);
+  else
+   res = create_tmp_reg (type, NULL);
   gimple stmt = gimple_build_assign_with_ops (code, res, op0, op1);
   gimple_set_location (stmt, loc);
   gimple_seq_add_stmt_without_update (seq, stmt);
@@ -3787,7 +3793,10 @@ gimple_build (gimple_seq *seq, location_
  seq, valueize);
   if (!res)
 {
-  res = make_ssa_name (type, NULL);
+  if (gimple_in_ssa_p (cfun))
+   res = make_ssa_name (type, NULL);
+  else
+   res = create_tmp_reg (type, NULL);
   gimple stmt;
   if (code == BIT_FIELD_REF)
stmt = gimple_build_assign_with_ops (code, res,
@@ -3816,7 +3825,10 @@ gimple_build (gimple_seq *seq, location_
   tree res = gimple_simplify (fn, type, arg0, seq, valueize);
   if (!res)
 {
-  res = make_ssa_name (type, NULL);
+  if (gimple_in_ssa_p (cfun))
+   res = make_ssa_name (type, NULL);
+  else
+   res = create_tmp_reg (type, NULL);
   tree decl = builtin_decl_implicit (fn);
   gimple stmt = gimple_build_call (decl, 1, arg0);
   gimple_call_set_lhs (stmt, res);


[PATCH][match-and-simplify] Implement two-parameter builtin-function simplify

2014-08-06 Thread Richard Biener

$subject, applied.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* gimple-fold.h (gimple_simplify): Add two-parameter builtin
function overload.
* gimple-match-head.c (gimple_simplify): Implement it.

Index: gcc/gimple-fold.h
===
--- gcc/gimple-fold.h   (revision 213651)
+++ gcc/gimple-fold.h   (working copy)
@@ -117,6 +117,8 @@ tree gimple_simplify (enum tree_code, tr
  gimple_seq *, tree (*)(tree));
 tree gimple_simplify (enum built_in_function, tree, tree,
  gimple_seq *, tree (*)(tree));
+tree gimple_simplify (enum built_in_function, tree, tree, tree,
+ gimple_seq *, tree (*)(tree));
 /* The following two APIs are an artifact and should vanish in favor
of the existing gimple_fold_stmt_to_constant and fold_stmt APIs.  */
 tree gimple_simplify (tree, gimple_seq *, tree (*)(tree));
Index: gcc/gimple-match-head.c
===
--- gcc/gimple-match-head.c (revision 213655)
+++ gcc/gimple-match-head.c (working copy)
@@ -464,6 +464,37 @@ gimple_simplify (enum built_in_function
   return maybe_push_res_to_seq (rcode, type, ops, seq);
 }
 
+tree
+gimple_simplify (enum built_in_function fn, tree type,
+tree arg0, tree arg1,
+gimple_seq *seq, tree (*valueize)(tree))
+{
+  if (constant_for_folding (arg0)
+      && constant_for_folding (arg1))
+{
+  tree decl = builtin_decl_implicit (fn);
+  if (decl)
+   {
+ tree res = fold_builtin_n (UNKNOWN_LOCATION, decl, arg0, 2, false);
+ if (res)
+   {
+ /* fold_builtin_n wraps the result inside a NOP_EXPR.  */
+ STRIP_NOPS (res);
+ res = fold_convert (type, res);
+ if (CONSTANT_CLASS_P (res))
+   return res;
+   }
+   }
+}
+
+  code_helper rcode;
+  tree ops[3] = {};
+  if (!gimple_simplify (rcode, ops, seq, valueize,
+   fn, type, arg0, arg1))
+return NULL_TREE;
+  return maybe_push_res_to_seq (rcode, type, ops, seq);
+}
+
 static bool
 gimple_simplify (gimple stmt,
 code_helper *rcode, tree *ops,


Re: [PATCH][match-and-simplify] Implement two-parameter builtin-function simplify

2014-08-06 Thread Richard Biener
On Wed, 6 Aug 2014, Richard Biener wrote:

 
 $subject, applied.

Err, too fast.  Fixed.

Richard.

2014-08-06  Richard Biener  rguent...@suse.de

* gimple-match-head.c (gimple_simplify): Fix implementation.

Index: gcc/gimple-match-head.c
===
--- gcc/gimple-match-head.c (revision 213671)
+++ gcc/gimple-match-head.c (working copy)
@@ -475,7 +475,10 @@ gimple_simplify (enum built_in_function
   tree decl = builtin_decl_implicit (fn);
   if (decl)
{
- tree res = fold_builtin_n (UNKNOWN_LOCATION, decl, arg0, 2, false);
+ tree args[2];
+ args[0] = arg0;
+ args[1] = arg1;
+ tree res = fold_builtin_n (UNKNOWN_LOCATION, decl, args, 2, false);
  if (res)
{
  /* fold_builtin_n wraps the result inside a NOP_EXPR.  */


Re: [PINGv3][PATCH] Fix for PR 61561

2014-08-06 Thread Richard Earnshaw
On 06/08/14 15:14, Ramana Radhakrishnan wrote:
 
 
 This is OK thanks.
 
 
 Ramana
 


Hmm, minor nit.

 (define_insn *thumb1_movhi_insn
   [(set (match_operand:HI 0 nonimmediate_operand =l,l,m,*r,*h,l)
-   (match_operand:HI 1 general_operand   l,m,l,*h,*r,I))]
+   (match_operand:HI 1 general_operand   lk,m,l,*h,*r,I))]

This would be better expressed as:

   [(set (match_operand:HI 0 nonimmediate_operand =l,l,m,l*r,*h,l)
 (match_operand:HI 1 general_operand  l,m,l,k*h,*r,I))]

that is, to use the 4th alternative.  That's because the use of SP in
these operations does not clobber the flags.

Similarly for the movqi pattern.

R.



Re: [C++ Patch/RFC] PR 43906

2014-08-06 Thread Jason Merrill

On 08/05/2014 10:48 AM, Paolo Carlini wrote:

+   (VOID_TYPE_P (TREE_TYPE (type1))
+  || comptypes (TYPE_MAIN_VARIANT (TREE_TYPE (type0)),
+TYPE_MAIN_VARIANT (TREE_TYPE (type1)),
+COMPARE_BASE | COMPARE_DERIVED


Can we drop this now that we're calling composite_pointer_type?

Jason



Re: [GSoC] the separate option for all dimensions

2014-08-06 Thread Roman Gareev
I've tested the modified version of Graphite using the gcc test suite
and haven't found any new failing tests.

However, pr35356-2.c is still not suitable for testing. The ISL AST
generated from its source code doesn't contain MIN or MAX.

if (k <= -1) {
  for (int c1 = 0; c1 < n; c1 += 1)
    S_7(c1);
} else if (k >= n) {
  for (int c1 = 0; c1 < n; c1 += 1)
    S_7(c1);
} else {
  for (int c1 = 0; c1 < k; c1 += 1)
    S_7(c1);
  S_6(k);
  for (int c1 = k + 1; c1 < n; c1 += 1)
    S_7(c1);
}

Should we make pr35356-2.c similar to isl-codegen-loop-dumping.c
by replacing “MIN_EXPR\[^\\n\\r]*;” and “MAX_EXPR\[^\\n\\r]*;” with
a regexp that contains the above-mentioned ISL AST?

-- 
Cheers, Roman Gareev.


[GSoC] Elimination of CLooG library installation dependency

2014-08-06 Thread Roman Gareev
Hi Tobias,

I've attached the patch, which should eliminate the CLooG library
installation dependency from GCC. The CLooG AST generator is still the
main code generator, but the ISL AST generator will be chosen when the
CLooG library is not available.

However, I've found a problem. Almost none of the ISL functions can be
used without CLooG installed (I get errors containing “undefined
reference to ...”). Maybe I missed something. What do you think about
this?

I also have a few questions about gcc. Could you please answer them?

Should Makefile.in be regenerated or manually changed? (I haven't
found out how to regenerate it.)

I've used printf to print “The CLooG code generator cannot be used
(CLooG is not available). The ISL code generator was chosen.\n”.
Should another function be used for this purpose?


-- 
Cheers, Roman Gareev.
Index: Makefile.in
===
--- Makefile.in (revision 213622)
+++ Makefile.in (working copy)
@@ -219,6 +219,7 @@
HOST_LIBS=$(STAGE1_LIBS); export HOST_LIBS; \
GMPLIBS=$(HOST_GMPLIBS); export GMPLIBS; \
GMPINC=$(HOST_GMPINC); export GMPINC; \
+   ISLLIBS=$(HOST_ISLLIBS); export ISLLIBS; \
ISLINC=$(HOST_ISLINC); export ISLINC; \
CLOOGLIBS=$(HOST_CLOOGLIBS); export CLOOGLIBS; \
CLOOGINC=$(HOST_CLOOGINC); export CLOOGINC; \
@@ -310,6 +311,7 @@
 HOST_GMPINC = @gmpinc@
 
 # Where to find ISL
+HOST_ISLLIBS = @isllibs@
 HOST_ISLINC = @islinc@
 
 # Where to find CLOOG
Index: Makefile.tpl
===
--- Makefile.tpl(revision 213622)
+++ Makefile.tpl(working copy)
@@ -222,6 +222,7 @@
HOST_LIBS=$(STAGE1_LIBS); export HOST_LIBS; \
GMPLIBS=$(HOST_GMPLIBS); export GMPLIBS; \
GMPINC=$(HOST_GMPINC); export GMPINC; \
+   ISLLIBS=$(HOST_ISLLIBS); export ISLLIBS; \
ISLINC=$(HOST_ISLINC); export ISLINC; \
CLOOGLIBS=$(HOST_CLOOGLIBS); export CLOOGLIBS; \
CLOOGINC=$(HOST_CLOOGINC); export CLOOGINC; \
@@ -313,6 +314,7 @@
 HOST_GMPINC = @gmpinc@
 
 # Where to find ISL
+HOST_ISLLIBS = @isllibs@
 HOST_ISLINC = @islinc@
 
 # Where to find CLOOG
Index: gcc/config.in
===
--- gcc/config.in   (revision 213622)
+++ gcc/config.in   (working copy)
@@ -1705,6 +1705,10 @@
 #undef HAVE_cloog
 #endif
 
+/* Define if isl is in use. */
+#ifndef USED_FOR_TARGET
+#undef HAVE_isl
+#endif
 
 /* Define if F_SETLKW supported by fcntl. */
 #ifndef USED_FOR_TARGET
Index: gcc/configure
===
--- gcc/configure   (revision 213622)
+++ gcc/configure   (working copy)
@@ -27888,9 +27888,14 @@
 
 
 
+if test x${ISLLIBS} != x ; then
 
+$as_echo #define HAVE_isl 1 confdefs.h
 
+fi
 
+
+
 if test x${CLOOGLIBS} != x ; then
 
 $as_echo #define HAVE_cloog 1 confdefs.h
Index: gcc/configure.ac
===
--- gcc/configure.ac(revision 213622)
+++ gcc/configure.ac(working copy)
@@ -5514,6 +5514,9 @@
 
 AC_ARG_VAR(ISLLIBS,[How to link ISL])
 AC_ARG_VAR(ISLINC,[How to find ISL include files])
+if test x${ISLLIBS} != x ; then 
+   AC_DEFINE(HAVE_isl, 1, [Define if isl is in use.])
+fi
 
 AC_ARG_VAR(CLOOGLIBS,[How to link CLOOG])
 AC_ARG_VAR(CLOOGINC,[How to find CLOOG include files])
Index: gcc/graphite-blocking.c
===
--- gcc/graphite-blocking.c (revision 213622)
+++ gcc/graphite-blocking.c (working copy)
@@ -23,14 +23,16 @@
 
 #include "config.h"
 
-#ifdef HAVE_cloog
+#ifdef HAVE_isl
 #include <isl/set.h>
 #include <isl/map.h>
 #include <isl/union_map.h>
 #include <isl/constraint.h>
+#ifdef HAVE_cloog
 #include <cloog/cloog.h>
 #include <cloog/isl/domain.h>
 #endif
+#endif
 
 #include "system.h"
 #include "coretypes.h"
@@ -49,7 +51,7 @@
 #include "tree-data-ref.h"
 #include "sese.h"
 
-#ifdef HAVE_cloog
+#ifdef HAVE_isl
 #include "graphite-poly.h"
 
 
Index: gcc/graphite-dependences.c
===
--- gcc/graphite-dependences.c  (revision 213622)
+++ gcc/graphite-dependences.c  (working copy)
@@ -21,15 +21,17 @@
 
 #include "config.h"
 
-#ifdef HAVE_cloog
+#ifdef HAVE_isl
 #include <isl/set.h>
 #include <isl/map.h>
 #include <isl/union_map.h>
 #include <isl/flow.h>
 #include <isl/constraint.h>
+#ifdef HAVE_cloog
 #include <cloog/cloog.h>
 #include <cloog/isl/domain.h>
 #endif
+#endif
 
 #include "system.h"
 #include "coretypes.h"
@@ -49,7 +51,7 @@
 #include "tree-scalar-evolution.h"
 #include "sese.h"
 
-#ifdef HAVE_cloog
+#ifdef HAVE_isl
 #include "graphite-poly.h"
 #include "graphite-htab.h"
 
Index: gcc/graphite-interchange.c
===
--- gcc/graphite-interchange.c  (revision 213622)
+++ gcc/graphite-interchange.c  (working 

Re: [GSoC] the separate option for all dimensions

2014-08-06 Thread Tobias Grosser

On 06/08/2014 17:21, Roman Gareev wrote:

I've tested the modified version of Graphite using the gcc test suite
and haven't found out new failed tests.

However, pr35356-2.c is still not suitable for testing. The ISL AST
generated from its source code doesn't contain MIN or MAX.

 if (k <= -1) {
   for (int c1 = 0; c1 < n; c1 += 1)
     S_7(c1);
 } else if (k >= n) {
   for (int c1 = 0; c1 < n; c1 += 1)
     S_7(c1);
 } else {
   for (int c1 = 0; c1 < k; c1 += 1)
     S_7(c1);
   S_6(k);
   for (int c1 = k + 1; c1 < n; c1 += 1)
     S_7(c1);
 }

Should we make pr35356-2.c to be similar to isl-codegen-loop-dumping.c
by replacing “MIN_EXPR\[^\\n\\r]*; and MAX_EXPR\[^\\n\\r]*; with
the regexp, which contains the the above-mentioned isl ast?


Checking for this specific AST may cause failures with future versions 
of isl that choose a different schedule. Could you write a regular 
expression that checks that there is no if-condition contained in a for 
loop? I think this best models the issue that was addressed in the 
original bug report.


(Also as this test then becomes isl specific I propose to just force the 
use of isl in the run line).


Cheers,
Tobias



patch to fix PR 61923

2014-08-06 Thread Vladimir Makarov
The following patch fixes PR61923.  The details of the problem can be 
found on


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61923

The patch was tested and bootstrapped on x86/x86-64.

The patch has been committed as rev. 213674 to gcc-4.9 branch and as 
rev. 213675 to the trunk.


2014-08-06  Vladimir Makarov  vmaka...@redhat.com

PR debug/61923
* haifa-sched.c (advance_one_cycle): Fix dump.
(schedule_block): Don't advance cycle if we are already at the
beginning of the cycle.

2014-08-06  Vladimir Makarov  vmaka...@redhat.com

PR debug/61923
* gcc.target/i386/pr61923.c: New test.


Index: haifa-sched.c
===
--- haifa-sched.c   (revision 213643)
+++ haifa-sched.c   (working copy)
@@ -2972,7 +2972,7 @@ advance_one_cycle (void)
 {
   advance_state (curr_state);
   if (sched_verbose >= 6)
-    fprintf (sched_dump, ";;\tAdvanced a state.\n");
+    fprintf (sched_dump, ";;\tAdvance the current state.\n");
 }
 
 /* Update register pressure after scheduling INSN.  */
@@ -6007,6 +6007,7 @@ schedule_block (basic_block *target_bb,
   modulo_insns_scheduled = 0;
 
   ls.modulo_epilogue = false;
+  ls.first_cycle_insn_p = true;
 
   /* Loop until all the insns in BB are scheduled.  */
   while ((*current_sched_info->schedule_more_p) ())
@@ -6077,7 +6078,6 @@ schedule_block (basic_block *target_bb,
   if (must_backtrack)
goto do_backtrack;
 
-  ls.first_cycle_insn_p = true;
   ls.shadows_only_p = false;
   cycle_issued_insns = 0;
   ls.can_issue_more = issue_rate;
@@ -6363,11 +6363,13 @@ schedule_block (basic_block *target_bb,
  break;
}
}
+  ls.first_cycle_insn_p = true;
 }
   if (ls.modulo_epilogue)
 success = true;
  end_schedule:
-  advance_one_cycle ();
+  if (!ls.first_cycle_insn_p)
+advance_one_cycle ();
   perform_replacements_new_cycle ();
   if (modulo_ii > 0)
 {
Index: testsuite/gcc.target/i386/pr61923.c
===
--- testsuite/gcc.target/i386/pr61923.c (revision 0)
+++ testsuite/gcc.target/i386/pr61923.c (working copy)
@@ -0,0 +1,36 @@
+/* PR debug/61923 */
+/* { dg-do compile } */
+/* { dg-options -O2 -fcompare-debug } */
+
+typedef struct
+{
+  struct
+  {
+struct
+{
+  char head;
+} tickets;
+  };
+} arch_spinlock_t;
+struct ext4_map_blocks
+{
+  int m_lblk;
+  int m_len;
+  int m_flags;
+};
+int ext4_da_map_blocks_ei_0;
+void fn1 (int p1, struct ext4_map_blocks *p2)
+{
+  int ret;
+  if (p2->m_flags)
+{
+  ext4_da_map_blocks_ei_0++;
+  arch_spinlock_t *lock;
+  switch (sizeof *lock-tickets.head)
+  case 1:
+  asm( : +m(*lock-tickets.head) : (0));
+  __asm__();
+  ret = 0;
+}
+  fn2 (p2->m_lblk, p2->m_len);
+}


Re: [GSoC] Elimination of CLooG library installation dependency

2014-08-06 Thread Tobias Grosser

On 06/08/2014 17:21, Roman Gareev wrote:

Hi Tobias,

I've attached the patch, which should eliminate CLooG library
installation dependency from GCC. The CLooG AST generator is still the
main code generator, but the isl ast generator will be chosen in case
of nonavailability of CLooG library.


Nice.


However, I've found out a problem. Almost all the functions of the ISL
cannot be used without installed CLooG. (I get errors which contain
“undefined reference to...”). Maybe I missed something. What do you
think about this?


This is surprising.

What is the exact error message? To which library does gcc link (Check 
with 'ldd cc1')? I wonder if gcc happens to link against the system 
CLooG/isl instead of the ones you installed?

Also, does objdump -x libisl.so show those missing symbols?


I also have a few questions about gcc. Could you please answer them?

Should Makefile.in be regenerated or manually changed? (I haven't
found out how to regenerate.)


I think it is manually maintained.


I've used printf to print “The CLooG code generator cannot be used
+(CLooG is not available). The ISL code generator was chosen.\n”.
Should another function be used for this purpose?


I have no idea. Let's leave it for now. I expect the CLooG code to 
disappear very soon.


Also, regarding this patch: it seems you are mixing two important changes.

1) The configure/makefile changes that make CLooG optional
2) The switch from CLooG to isl

Best to start with 2), followed by 1).

To commit 2), I would like you to run a wider set of tests (e.g., the 
LLVM test suite). If this passes successfully, we should give a heads-up 
on the GCC mailing list and ask other people to try the new isl support.

If no bugs are found, we switch.

Cheers,
Tobias


Re: [C PATCH] Discard P - (P + CST) optimization in pointer_diff (PR c/61240)

2014-08-06 Thread Jeff Law

On 08/06/14 02:27, Richard Biener wrote:



which we may restrict better with checking whether the pointer
 uses a partial integer mode.  Not sure how PSImode -> SImode
extends on RTL?
Well, at least on the mn102, I defined both a zero and sign extension 
for PSI -> SI.  So whichever one the generic parts of the compiler 
needed, the backend provided.


As to what bits are modified, that's target dependent as the precise 
size of the partial modes is target dependent.  That's one of the things 
that would be largely made irrelevant by DJ's proposed changes.  Instead 
of using PSImode, we'd be able to define modes of precisely the number 
of bits one of these targets needs.


Jeff


Re: [PATCH v2] gcc/testsuite: Disable pr44194-1.c for BE Power64/Linux

2014-08-06 Thread Maciej W. Rozycki
On Wed, 6 Aug 2014, Marek Polacek wrote:

   Applied, thanks.
 
 I see
 ERROR: gcc.dg/pr44194-1.c: unknown dg option: \} for }
 now (x86_64-unknown-linux-gnu).

 I pushed a stale version of the patch, sorry.  Fixed up now.

2014-08-06  Maciej W. Rozycki  ma...@codesourcery.com

* gcc.dg/pr44194-1.c: Remove an extraneous brace.

  Maciej


[PATCH Fortran/Diagnostics] Move Fortran to common diagnostics machinery

2014-08-06 Thread Manuel López-Ibáñez
This is the first step for moving Fortran to use the common
diagnostics machinery. This patch makes Fortran use the common
machinery for those warnings that don't have a location or a
controlling option.

Before:

manuel@gcc10:~/test1$ echo end | ./213518/build/gcc/f951
-fdiagnostics-color -Werror -x -
f951: error: command line option '-x -' is valid for the driver but
not for Fortran
Warning: Reading file 'stdin' as free form

After:

manuel@gcc10:~/test1$ echo end | ./213518M/build/gcc/f951
-fdiagnostics-color -Werror -x -
f951: Error: command line option '-x -' is valid for the driver but
not for Fortran
f951: Error: Reading file 'stdin' as free form [-Werror]

(Plus the colors that you cannot see in this mail).

Bootstrapped and regression tested on x86_64-linux-gnu.

OK?

2014-08-03  Manuel López-Ibáñez  m...@gcc.gnu.org

PR fortran/44054
c-family/
* c-format.c: Handle Fortran flags.
* diagnostic.c (build_message_string): Make it extern.
* diagnostic.h (build_message_string): Make it extern.
fortran/
* gfortran.h: Define GCC_DIAG_STYLE.
(gfc_diagnostics_init,gfc_warning_cmdline): Declare.
* trans-array.c: Include gfortran.h before diagnostic-core.h.
* trans-expr.c: Likewise.
* trans-openmp.c: Likewise.
* trans-const.c: Likewise.
* trans.c: Likewise.
* trans-types.c: Likewise.
* f95-lang.c: Likewise.
* trans-decl.c: Likewise.
* trans-io.c: Likewise.
* trans-intrinsic.c: Likewise.
* error.c: Include diagnostic.h and diagnostic-color.h.
(gfc_diagnostic_build_prefix): New.
(gfc_diagnostic_starter): New.
(gfc_diagnostic_finalizer): New.
(gfc_warning_cmdline): New.
(gfc_diagnostics_init): New.
* gfc-diagnostic.def: New.
* options.c (gfc_init_options): Call gfc_diagnostics_init.
(gfc_post_options): Use gfc_warning_cmdline.
Index: gcc/c-family/c-format.c
===
--- gcc/c-family/c-format.c (revision 213518)
+++ gcc/c-family/c-format.c (working copy)
@@ -508,15 +508,11 @@ static const format_flag_pair gcc_diag_f
 };
 
 #define gcc_tdiag_flag_pairs gcc_diag_flag_pairs
 #define gcc_cdiag_flag_pairs gcc_diag_flag_pairs
 #define gcc_cxxdiag_flag_pairs gcc_diag_flag_pairs
-
-static const format_flag_pair gcc_gfc_flag_pairs[] =
-{
-  { 0, 0, 0, 0 }
-};
+#define gcc_gfc_flag_pairs gcc_diag_flag_pairs
 
 static const format_flag_spec gcc_diag_flag_specs[] =
 {
   { '+',  0, 0, N_('+' flag),N_(the '+' printf flag),  
STD_C89 },
   { '#',  0, 0, N_('#' flag),N_(the '#' printf flag),  
STD_C89 },
@@ -527,10 +523,11 @@ static const format_flag_spec gcc_diag_f
 };
 
 #define gcc_tdiag_flag_specs gcc_diag_flag_specs
 #define gcc_cdiag_flag_specs gcc_diag_flag_specs
 #define gcc_cxxdiag_flag_specs gcc_diag_flag_specs
+#define gcc_gfc_flag_specs gcc_diag_flag_specs
 
 static const format_flag_spec scanf_flag_specs[] =
 {
   { '*',  0, 0, N_(assignment suppression), N_(the assignment suppression 
scanf feature), STD_C89 },
   { 'a',  0, 0, N_('a' flag),   N_(the 'a' scanf flag),
   STD_EXT },
@@ -739,19 +736,21 @@ static const format_char_info gcc_gfc_ch
 {
   /* C89 conversion specifiers.  */
   { "di",  0, STD_C89, { T89_I,   BADLEN,  BADLEN,  T89_L,   BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "", "", NULL },
   { "u",   0, STD_C89, { T89_UI,  BADLEN,  BADLEN,  T89_UL,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "", "", NULL },
   { "c",   0, STD_C89, { T89_I,   BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "", "", NULL },
-  { "s",   1, STD_C89, { T89_C,   BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "", "cR", NULL },
+  { "s",   1, STD_C89, { T89_C,   BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "q", "cR", NULL },
 
   /* gfc conversion specifiers.  */
 
   { "C",   0, STD_C89, NOARGUMENTS, "",  "",   NULL },
 
   /* This will require a locus at runtime.  */
   { "L",   0, STD_C89, { T89_V,   BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN,  BADLEN  }, "", "R", NULL },
 
+  /* These will require nothing.  */
+  { "<>",  0, STD_C89, NOARGUMENTS, "",  "",   NULL },
   { NULL,  0, STD_C89, NOLENGTHS, NULL, NULL, NULL }
 };
 
 static const format_char_info scan_char_table[] =
 {
@@ -844,12 +843,12 @@ static const format_kind_info format_typ
 gcc_cxxdiag_flag_specs, gcc_cxxdiag_flag_pairs,
 FMT_FLAG_ARG_CONVERT,
 0, 0, 'p', 0, 'L', 0,
 NULL, integer_type_node
   },
-  { "gcc_gfc", gcc_gfc_length_specs, gcc_gfc_char_table, "", NULL,
-    NULL, gcc_gfc_flag_pairs,
+  { "gcc_gfc", gcc_gfc_length_specs, gcc_gfc_char_table, "q+#", NULL,
+    gcc_gfc_flag_specs, gcc_gfc_flag_pairs,
 FMT_FLAG_ARG_CONVERT,
 0, 0, 0, 0, 0, 0,
 NULL, NULL
   },
   { "NSString",   NULL,  NULL, NULL, NULL,
Index: gcc/diagnostic.c
===
--- gcc/diagnostic.c(revision 

Re: [C++ Patch/RFC] PR 43906

2014-08-06 Thread Paolo Carlini

Hi,

On 08/06/2014 05:19 PM, Jason Merrill wrote:

On 08/05/2014 10:48 AM, Paolo Carlini wrote:

+(VOID_TYPE_P (TREE_TYPE (type1))
+   || comptypes (TYPE_MAIN_VARIANT (TREE_TYPE (type0)),
+ TYPE_MAIN_VARIANT (TREE_TYPE (type1)),
+ COMPARE_BASE | COMPARE_DERIVED


Can we drop this now that we're calling composite_pointer_type?
Yes we can, sorry for not investigating that earlier. I only have to
tweak the testcase a bit, because in the malformed cases we now emit
the permerror first and then the -Waddress warning too. I suppose that's
OK: those cases are mostly permerrors anyway, and the extra verbosity
shouldn't be common, since we are talking about comparing against a
null pointer of the wrong type, not a generic pointer. Otherwise we
would have to change composite_pointer_type to tell the caller
precisely when an actual error was emitted.


Thanks,
Paolo.

/
Index: cp/typeck.c
===
--- cp/typeck.c (revision 213654)
+++ cp/typeck.c (working copy)
@@ -4353,13 +4353,18 @@ cp_build_binary_op (location_t location,
 	   && (code1 == INTEGER_TYPE || code1 == REAL_TYPE
  || code1 == COMPLEX_TYPE || code1 == ENUMERAL_TYPE))
short_compare = 1;
-  else if ((code0 == POINTER_TYPE && code1 == POINTER_TYPE)
-	   || (TYPE_PTRDATAMEM_P (type0) && TYPE_PTRDATAMEM_P (type1)))
-	result_type = composite_pointer_type (type0, type1, op0, op1,
-					      CPO_COMPARISON, complain);
-  else if ((code0 == POINTER_TYPE || TYPE_PTRDATAMEM_P (type0))
-	   && null_ptr_cst_p (op1))
+  else if (((code0 == POINTER_TYPE || TYPE_PTRDATAMEM_P (type0))
+	    && null_ptr_cst_p (op1))
+	   /* Handle, eg, (void*)0 (c++/43906), and more.  */
+	   || (code0 == POINTER_TYPE
+	       && TYPE_PTR_P (type1) && integer_zerop (op1)))
{
+ if (TYPE_PTR_P (type1))
+   result_type = composite_pointer_type (type0, type1, op0, op1,
+ CPO_COMPARISON, complain);
+ else
+   result_type = type0;
+
 	  if (TREE_CODE (op0) == ADDR_EXPR
 	      && decl_with_nonnull_addr_p (TREE_OPERAND (op0, 0)))
{
@@ -4368,11 +4373,19 @@ cp_build_binary_op (location_t location,
 		warning (OPT_Waddress, "the address of %qD will never be NULL",
 			 TREE_OPERAND (op0, 0));
}
- result_type = type0;
}
-  else if ((code1 == POINTER_TYPE || TYPE_PTRDATAMEM_P (type1))
-	   && null_ptr_cst_p (op0))
+  else if (((code1 == POINTER_TYPE || TYPE_PTRDATAMEM_P (type1))
+	    && null_ptr_cst_p (op0))
+	   /* Handle, eg, (void*)0 (c++/43906), and more.  */
+	   || (code1 == POINTER_TYPE
+	       && TYPE_PTR_P (type0) && integer_zerop (op0)))
{
+ if (TYPE_PTR_P (type0))
+   result_type = composite_pointer_type (type0, type1, op0, op1,
+ CPO_COMPARISON, complain);
+ else
+   result_type = type1;
+
 	  if (TREE_CODE (op1) == ADDR_EXPR
 	      && decl_with_nonnull_addr_p (TREE_OPERAND (op1, 0)))
{
@@ -4381,8 +4394,11 @@ cp_build_binary_op (location_t location,
 		warning (OPT_Waddress, "the address of %qD will never be NULL",
 			 TREE_OPERAND (op1, 0));
}
- result_type = type1;
}
+  else if ((code0 == POINTER_TYPE && code1 == POINTER_TYPE)
+	   || (TYPE_PTRDATAMEM_P (type0) && TYPE_PTRDATAMEM_P (type1)))
+   result_type = composite_pointer_type (type0, type1, op0, op1,
+ CPO_COMPARISON, complain);
   else if (null_ptr_cst_p (op0) && null_ptr_cst_p (op1))
/* One of the operands must be of nullptr_t type.  */
 result_type = TREE_TYPE (nullptr_node);
Index: testsuite/g++.dg/warn/Waddress-1.C
===
--- testsuite/g++.dg/warn/Waddress-1.C  (revision 0)
+++ testsuite/g++.dg/warn/Waddress-1.C  (working copy)
@@ -0,0 +1,50 @@
+// PR c++/43906
+// { dg-options "-Waddress -pedantic" }
+
+extern void z();
+typedef void (*ptrf) ();
+typedef int (*ptrfn) (int);
+int n;
+const int m = 1;
+struct S { };
+struct T : S { };
+struct U;
+S s;
+T t;
+double d;
+
+void f()  { if (z) z(); }	    // { dg-warning "address" }
+
+void gl() { if (z != 0) z(); }	    // { dg-warning "address" }
+void hl() { if (z != (ptrf)0) z(); }	// { dg-warning "address" }
+void il() { if (z != (void*)0) z(); }	// { dg-warning "address|comparison" }
+void jl() { if (n != (int*)0) z(); }	// { dg-warning "address" }
+void kl() { if (m != (int*)0) z(); }	// { dg-warning "address" }
+void ll() { if (s != (T*)0) z(); }  

[PATCH 002/236] JUMP_LABEL is not always a LABEL

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (JUMP_LABEL): Add a note that this isn't always a LABEL.
---
 gcc/rtl.h | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/gcc/rtl.h b/gcc/rtl.h
index 51cfae5..b9b069a 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -1194,7 +1194,11 @@ enum label_kind
 
 /* In jump.c, each JUMP_INSN can point to a label that it can jump to,
so that if the JUMP_INSN is deleted, the label's LABEL_NUSES can
-   be decremented and possibly the label can be deleted.  */
+   be decremented and possibly the label can be deleted.
+
+   This is not always a LABEL; for example in combine.c, this field
+   does double duty for storing notes, and in shrink-wrap.c it can
+   be set to simple_return_rtx, a SIMPLE_RETURN.  */
 #define JUMP_LABEL(INSN)   XCEXP (INSN, 7, JUMP_INSN)
 
 /* Once basic blocks are found, each CODE_LABEL starts a chain that
-- 
1.8.5.3



[PATCH 001/236] Convert lab_rtx_for_bb from pointer_map_t to pointer_map<rtx>

2014-08-06 Thread David Malcolm
This gives a slight improvement in typesafety in cfgexpand.c

gcc/
* cfgexpand.c (lab_rtx_for_bb): Convert from pointer_map_t to
pointer_map<rtx>.
(label_rtx_for_bb): Update for conversion of lab_rtx_for_bb to
a pointer_map<rtx>, eliminating casts from void* to rtx.
(expand_gimple_basic_block): Likewise.
(pass_expand::execute): Likewise, using new/delete of
pointer_map<rtx> rather than pointer_map_create/destroy.  NULLify
the lab_rtx_for_bb ptr after deletion for good measure.
---
 gcc/cfgexpand.c | 23 ---
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 934f40d..d124d94 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -1956,7 +1956,7 @@ maybe_dump_rtl_for_gimple_stmt (gimple stmt, rtx since)
 
 /* Maps the blocks that do not contain tree labels to rtx labels.  */
 
-static struct pointer_map_t *lab_rtx_for_bb;
+static struct pointer_map<rtx> *lab_rtx_for_bb;
 
 /* Returns the label_rtx expression for a label starting basic block BB.  */
 
@@ -1966,14 +1966,14 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
   gimple_stmt_iterator gsi;
   tree lab;
   gimple lab_stmt;
-  void **elt;
+  rtx *elt;
 
   if (bb->flags & BB_RTL)
 return block_label (bb);
 
-  elt = pointer_map_contains (lab_rtx_for_bb, bb);
+  elt = lab_rtx_for_bb->contains (bb);
   if (elt)
-return (rtx) *elt;
+return *elt;
 
   /* Find the tree label if it is present.  */
 
@@ -1990,9 +1990,9 @@ label_rtx_for_bb (basic_block bb ATTRIBUTE_UNUSED)
   return label_rtx (lab);
 }
 
-  elt = pointer_map_insert (lab_rtx_for_bb, bb);
+  elt = lab_rtx_for_bb->insert (bb);
   *elt = gen_label_rtx ();
-  return (rtx) *elt;
+  return *elt;
 }
 
 
@@ -4880,7 +4880,7 @@ expand_gimple_basic_block (basic_block bb, bool 
disable_tail_calls)
   rtx note, last;
   edge e;
   edge_iterator ei;
-  void **elt;
+  rtx *elt;
 
   if (dump_file)
     fprintf (dump_file, "\n;; Generating RTL for gimple basic block %d\n",
@@ -4924,7 +4924,7 @@ expand_gimple_basic_block (basic_block bb, bool 
disable_tail_calls)
stmt = NULL;
 }
 
-  elt = pointer_map_contains (lab_rtx_for_bb, bb);
+  elt = lab_rtx_for_bb->contains (bb);
 
   if (stmt || elt)
 {
@@ -4937,7 +4937,7 @@ expand_gimple_basic_block (basic_block bb, bool 
disable_tail_calls)
}
 
   if (elt)
-   emit_label ((rtx) *elt);
+   emit_label (*elt);
 
   /* Java emits line number notes in the top of labels.
 ??? Make this go away once line number notes are obsoleted.  */
@@ -5797,7 +5797,7 @@ pass_expand::execute (function *fun)
   FOR_EACH_EDGE (e, ei, ENTRY_BLOCK_PTR_FOR_FN (fun)->succs)
     e->flags &= ~EDGE_EXECUTABLE;
 
-  lab_rtx_for_bb = pointer_map_create ();
+  lab_rtx_for_bb = new pointer_map <rtx>;
   FOR_BB_BETWEEN (bb, init_block-next_bb, EXIT_BLOCK_PTR_FOR_FN (fun),
  next_bb)
 bb = expand_gimple_basic_block (bb, var_ret_seq != NULL_RTX);
@@ -5822,7 +5822,8 @@ pass_expand::execute (function *fun)
 
   /* Expansion is used by optimization passes too, set maybe_hot_insn_p
  conservatively to true until they are all profile aware.  */
-  pointer_map_destroy (lab_rtx_for_bb);
+  delete lab_rtx_for_bb;
+  lab_rtx_for_bb = NULL;
   free_histograms ();
 
   construct_exit_block ();
-- 
1.8.5.3



[PATCH 012/236] Convert DF_REF_INSN to a function for now

2014-08-06 Thread David Malcolm
DF_REF_INSN looks up the insn field of the referenced df_insn_info.
This will eventually be an rtx_insn *, but for now is just an rtx.

As further scaffolding: for now, convert DF_REF_INSN to a function,
adding a checked downcast to rtx_insn *.  This can eventually be
converted back to a macro once the field is an rtx_insn *.
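
To illustrate what the stronger return type buys at call sites, a
hypothetical df user (not part of this patch) no longer needs any cast
to walk from a ref to its insn:

  /* Hypothetical example: the ref's insn is now statically known to be
     an insn, so it can feed insn-only APIs directly.  */
  static bool
  ref_in_bb_p (df_ref ref, basic_block bb)
  {
    rtx_insn *insn = DF_REF_INSN (ref);  /* checked downcast happens inside */
    return insn != NULL && BLOCK_FOR_INSN (insn) == bb;
  }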

gcc/
* df-core.c (DF_REF_INSN): New, using a checked cast for now.
* df.h (DF_REF_INSN): Convert from a macro to a function, so
that we can return an rtx_insn *.

/
* rtx-classes-status.txt: Add DF_REF_INSN.
---
 gcc/df-core.c  | 6 ++
 gcc/df.h   | 2 +-
 rtx-classes-status.txt | 1 +
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/gcc/df-core.c b/gcc/df-core.c
index 9fdf6010..0dd8cc4 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -2532,3 +2532,9 @@ debug_df_chain (struct df_link *link)
   df_chain_dump (link, stderr);
   fputc ('\n', stderr);
 }
+
+rtx_insn *DF_REF_INSN (df_ref ref)
+{
+  rtx insn = ref->base.insn_info->insn;
+  return as_a_nullable <rtx_insn *> (insn);
+}
diff --git a/gcc/df.h b/gcc/df.h
index 878f507..aabde63 100644
--- a/gcc/df.h
+++ b/gcc/df.h
@@ -647,7 +647,7 @@ struct df_d
: BLOCK_FOR_INSN (DF_REF_INSN (REF)))
 #define DF_REF_BBNO(REF) (DF_REF_BB (REF)->index)
 #define DF_REF_INSN_INFO(REF) ((REF)->base.insn_info)
-#define DF_REF_INSN(REF) ((REF)->base.insn_info->insn)
+extern rtx_insn *DF_REF_INSN (df_ref ref);
 #define DF_REF_INSN_UID(REF) (INSN_UID (DF_REF_INSN(REF)))
 #define DF_REF_CLASS(REF) ((REF)->base.cl)
 #define DF_REF_TYPE(REF) ((REF)->base.type)
diff --git a/rtx-classes-status.txt b/rtx-classes-status.txt
index e57d775..68bbe54 100644
--- a/rtx-classes-status.txt
+++ b/rtx-classes-status.txt
@@ -10,5 +10,6 @@ Phase 6: use extra rtx_def subclasses: TODO
 
 TODO: Scaffolding to be removed
 =
+* DF_REF_INSN
 * SET_BB_HEAD, SET_BB_END, SET_BB_HEADER, SET_BB_FOOTER
 * SET_NEXT_INSN, SET_PREV_INSN
-- 
1.8.5.3



[PATCH 003/236] config/mn10300: Fix missing PATTERN in PARALLEL handling

2014-08-06 Thread David Malcolm
gcc/
* config/mn10300/mn10300.c (mn10300_adjust_sched_cost): Fix the
handling of PARALLEL to work on PATTERN (insn) and PATTERN (dep),
rather than just on insn, dep themselves.  The latter are insns,
and thus can't be PARALLEL.
---
 gcc/config/mn10300/mn10300.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/gcc/config/mn10300/mn10300.c b/gcc/config/mn10300/mn10300.c
index eb00077..99b5d19 100644
--- a/gcc/config/mn10300/mn10300.c
+++ b/gcc/config/mn10300/mn10300.c
@@ -2772,11 +2772,11 @@ mn10300_adjust_sched_cost (rtx insn, rtx link, rtx dep, 
int cost)
   if (!TARGET_AM33)
 return 1;
 
-  if (GET_CODE (insn) == PARALLEL)
-insn = XVECEXP (insn, 0, 0);
+  if (GET_CODE (PATTERN (insn)) == PARALLEL)
+insn = XVECEXP (PATTERN (insn), 0, 0);
 
-  if (GET_CODE (dep) == PARALLEL)
-dep = XVECEXP (dep, 0, 0);
+  if (GET_CODE (PATTERN (dep)) == PARALLEL)
+dep = XVECEXP (PATTERN (dep), 0, 0);
 
   /* For the AM34 a load instruction that follows a
  store instruction incurs an extra cycle of delay.  */
-- 
1.8.5.3



[PATCH 000/236] Introduce rtx subclasses

2014-08-06 Thread David Malcolm
This is the patch series I spoke about at Cauldron in the talk
"A proposal for typesafe RTL"; slides here:
http://dmalcolm.fedorapeople.org/presentations/cauldron-2014/rtl

They can also be seen at:
https://dmalcolm.fedorapeople.org/gcc/patch-backups/rtx-classes/v20/

The aim of the patch series is to improve the type-safety and
readability of the backend by introducing subclasses of rtx (actually
rtx_def) for *instructions*, and also for EXPR_LIST, INSN_LIST, SEQUENCE.

That way we can document directly in the code the various places that
manipulate insn chains vs other kinds of rtx node.

An example of a bug detected using this approach: in mn10300.c there
was dead code of the form:

  if (GET_CODE (insn) == PARALLEL)
insn = XVECEXP (insn, 0, 0);

where the test should really have been on PATTERN (insn), not insn:

  if (GET_CODE (PATTERN (insn)) == PARALLEL)
insn = XVECEXP (PATTERN (insn), 0, 0);

[as discussed in https://gcc.gnu.org/ml/gcc/2014-07/msg00078.html]

The class hierarchy looks like this (using indentation to show
inheritance, and indicating the invariants):

class rtx_def;
  class rtx_expr_list;   /* GET_CODE (X) == EXPR_LIST */
  class rtx_insn_list;   /* GET_CODE (X) == INSN_LIST */
  class rtx_sequence;/* GET_CODE (X) == SEQUENCE */
  class rtx_insn;/* INSN_CHAIN_CODE_P (GET_CODE (X)) */
class rtx_real_insn; /* INSN_P (X) */
  class rtx_debug_insn;  /* DEBUG_INSN_P (X) */
  class rtx_nonjump_insn;/* NONJUMP_INSN_P (X) */
  class rtx_jump_insn;   /* JUMP_P (X) */
  class rtx_call_insn;   /* CALL_P (X) */
class rtx_jump_table_data;   /* JUMP_TABLE_DATA_P (X) */
class rtx_barrier;   /* BARRIER_P (X) */
class rtx_code_label;/* LABEL_P (X) */
class rtx_note;  /* NOTE_P (X) */
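
For those who haven't looked at is-a.h, the following self-contained
sketch (not GCC code; all names are made up) shows the pattern the
hierarchy relies on: the subclass adds no data, only an invariant on
the code field, and the checked cast asserts that invariant before
downcasting.

  #include <cassert>

  enum node_code { NODE_SET, NODE_INSN, NODE_NOTE };

  struct node_def { node_code code; };

  /* Adds no fields, only the invariant "code is insn-like".  */
  struct node_insn : node_def { };

  static inline bool
  insn_like_p (const node_def *x)
  {
    return x->code == NODE_INSN || x->code == NODE_NOTE;
  }

  /* as_a-style checked cast: assert the invariant, then downcast.  */
  static inline node_insn *
  as_insn (node_def *x)
  {
    assert (insn_like_p (x));
    return static_cast <node_insn *> (x);
  }

  /* dyn_cast-style cast: return null when the invariant fails.  */
  static inline node_insn *
  dyn_insn (node_def *x)
  {
    return insn_like_p (x) ? static_cast <node_insn *> (x) : nullptr;
  }

  int
  main ()
  {
    node_insn i; i.code = NODE_INSN;
    node_def s; s.code = NODE_SET;
    assert (as_insn (&i) == &i);        /* invariant holds: safe downcast */
    assert (dyn_insn (&s) == nullptr);  /* not an insn: no cast */
    return 0;
  }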

The patch series converts roughly 4300 places in the code from using
rtx to the more concrete rtx_insn *, in such places as:

  * the core types within basic blocks

  * hundreds of function params, struct fields, etc.  e.g. within
register allocators, schedulers

  * insn and curr_insn within .md files (peephole, attributes,
define_bypass guards)

  * insn_t in sel-sched-ir.h

  * Target hooks: updated params of 25 of them

  * Debug hooks: label and var_location

etc

The patch series also contains some cleanups using inline methods:

  * being able to get rid of this boilerplate everywhere that jump tables
are handled:

  if (GET_CODE (PATTERN (table)) == ADDR_VEC)
vec = XVEC (PATTERN (table), 0);
  else
vec = XVEC (PATTERN (table), 1);

   in favor of a helper method (inlined):

  vec = table->get_labels ();

  * having a subclass for EXPR_LIST allows for replacing this kind of
thing:

  for (x = forced_labels; x; x = XEXP (x, 1))
if (XEXP (x, 0))
   set_label_offsets (XEXP (x, 0), NULL_RTX, 1);

with the following, which captures that it's an EXPR_LIST, and makes
it clearer that we're simply walking a singly-linked list:

  for (rtx_expr_list *x = forced_labels; x; x = x->next ())
    if (x->element ())
      set_label_offsets (x->element (), NULL_RTX, 1);

There are some surface details to the patches:

  * class names.  The subclass names correspond to the lower_case name
from rtl.def, with an rtx_ prefix.  rtx_insn and rtx_real_insn
don't correspond to concrete node kinds, and hence I had to invent
the names.  (In an earlier version of the patches these were
rtx_base_insn and rtx_insn respectively, but the former occurred
much more than the latter and so it seemed better to use the shorter
spelling for the common case).

  * there's a NULL_RTX define in rtl.h.   In an earlier version of the
patch I added a NULL_INSN define, but in this version I simply use
NULL, and I'm in two minds about whether a NULL_INSN is desirable
(would we have a NULL_FOO for all of the subclasses?).  I like having
a strong distinction between arbitrary RTL nodes vs instructions,
so maybe there's a case for NULL_INSN, but not for the subclasses?

  * I added an rtx_real_insn subclass for the INSN_P predicate, adding
the idea of a PATTERN, a basic_block, and a location - but I hardly
use this anywhere.  That said, it seems to be a real concept in the
code, so I added it.

  * pointerness of the types.  rtx is a typedef to rtx_def * i.e.
there's an implicit pointer.  In the discussion about using C++
classes for gimple statements:
   https://gcc.gnu.org/ml/gcc-patches/2014-04/msg01427.html
Richi said:

 To followup myself here, it's because 'tree' is a typedef to a pointer
 and thus 'const tree' is different from 'const tree_node *'.

 Not sure why we re-introduced the 'mistake' of making 'tree' a pointer
 when we introduced 'gimple'.  If we were to make 'gimple' the class
 type itself we can use gimple *, const gimple * and also const gimple 
 (when a 

[PATCH 008/236] Split BB_HEAD et al into BB_HEAD/SET_BB_HEAD variants

2014-08-06 Thread David Malcolm
This is an enabling patch, splitting existing macros in two, covering
the rvalue and lvalue uses separately.

Followup patches will replace these with functions, and gradually convert
the types from rtx to rtx_insn *, but we need to do this separately for
the lvalue vs rvalue use-cases, hence this patch.

The plan is to eventually eliminate the split in a further followup patch,
and convert them back to macros, where the underlying fields are of type
rtx_insn *.
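
To make the rvalue/lvalue split concrete, here is a self-contained
sketch of the same migration pattern outside of GCC (hypothetical names,
not the actual BB_* code): the field keeps its weak type, the reader
already hands back the strong type, and the writer exposes the
underlying slot until the field itself can be strengthened.

  #include <cassert>

  struct node { int kind; };
  struct insn : node { };            /* the strong type we migrate towards */

  struct block
  {
    node *head_;                     /* still the weak type for now */
  };

  /* Rvalue accessor: callers immediately see the strong type.  */
  static inline insn *
  block_head (const block *b)
  {
    assert (b->head_ == nullptr || b->head_->kind == 1);
    return static_cast <insn *> (b->head_);
  }

  /* Lvalue accessor: writers keep storing through the weak slot.  */
  static inline node *&
  set_block_head (block *b)
  {
    return b->head_;
  }

  int
  main ()
  {
    insn i; i.kind = 1;
    block b; b.head_ = nullptr;
    set_block_head (&b) = &i;        /* lvalue use, like SET_BB_HEAD (bb) = ...  */
    assert (block_head (&b) == &i);  /* rvalue use, already strongly typed */
    return 0;
  }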

gcc/
* basic-block.h (BB_HEAD): Split macro in two: the existing one,
for rvalues, and...
(SET_BB_HEAD): New macro, for use as a lvalue.
(BB_END, SET_BB_END): Likewise.
(BB_HEADER, SET_BB_HEADER): Likewise.
(BB_FOOTER, SET_BB_FOOTER): Likewise.

* bb-reorder.c (add_labels_and_missing_jumps): Convert lvalue use
of BB_* macros into SET_BB_* macros.
(fix_crossing_unconditional_branches): Likewise.
* caller-save.c (save_call_clobbered_regs): Likewise.
(insert_one_insn): Likewise.
* cfgbuild.c (find_bb_boundaries): Likewise.
* cfgcleanup.c (merge_blocks_move_successor_nojumps): Likewise.
(outgoing_edges_match): Likewise.
(try_optimize_cfg): Likewise.
* cfgexpand.c (expand_gimple_cond): Likewise.
(expand_gimple_tailcall): Likewise.
(expand_gimple_basic_block): Likewise.
(construct_exit_block): Likewise.
* cfgrtl.c (delete_insn): Likewise.
(create_basic_block_structure): Likewise.
(rtl_delete_block): Likewise.
(rtl_split_block): Likewise.
(emit_nop_for_unique_locus_between): Likewise.
(rtl_merge_blocks): Likewise.
(block_label): Likewise.
(try_redirect_by_replacing_jump): Likewise.
(emit_barrier_after_bb): Likewise.
(fixup_abnormal_edges): Likewise.
(record_effective_endpoints): Likewise.
(relink_block_chain): Likewise.
(fixup_reorder_chain): Likewise.
(fixup_fallthru_exit_predecessor): Likewise.
(cfg_layout_duplicate_bb): Likewise.
(cfg_layout_split_block): Likewise.
(cfg_layout_delete_block): Likewise.
(cfg_layout_merge_blocks): Likewise.
* combine.c (update_cfg_for_uncondjump): Likewise.
* emit-rtl.c (add_insn_after): Likewise.
(remove_insn): Likewise.
(reorder_insns): Likewise.
(emit_insn_after_1): Likewise.
* haifa-sched.c (get_ebb_head_tail): Likewise.
(restore_other_notes): Likewise.
(move_insn): Likewise.
(sched_extend_bb): Likewise.
(fix_jump_move): Likewise.
* ifcvt.c (noce_process_if_block): Likewise.
(dead_or_predicable): Likewise.
* ira.c (update_equiv_regs): Likewise.
* reg-stack.c (change_stack): Likewise.
* sel-sched-ir.c (sel_move_insn): Likewise.
* sel-sched.c (move_nop_to_previous_block): Likewise.

* config/c6x/c6x.c (hwloop_optimize): Likewise.
* config/ia64/ia64.c (emit_predicate_relation_info): Likewise.

/
* rtx-classes-status.txt (TODO): Add SET_BB_HEAD, SET_BB_END,
SET_BB_HEADER, SET_BB_FOOTER
---
 gcc/basic-block.h  | 15 +++---
 gcc/bb-reorder.c   |  4 +--
 gcc/caller-save.c  |  6 ++--
 gcc/cfgbuild.c |  4 +--
 gcc/cfgcleanup.c   | 14 +-
 gcc/cfgexpand.c| 22 +++
 gcc/cfgrtl.c   | 74 +-
 gcc/combine.c  |  2 +-
 gcc/config/c6x/c6x.c   |  4 +--
 gcc/config/ia64/ia64.c |  6 ++--
 gcc/emit-rtl.c | 12 
 gcc/haifa-sched.c  | 18 ++--
 gcc/ifcvt.c|  4 +--
 gcc/ira.c  |  2 +-
 gcc/reg-stack.c|  2 +-
 gcc/sel-sched-ir.c |  2 +-
 gcc/sel-sched.c|  2 +-
 rtx-classes-status.txt |  4 +++
 18 files changed, 104 insertions(+), 93 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 0bf6e87..d27f498 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -368,10 +368,17 @@ struct GTY(()) control_flow_graph {
 
 /* Stuff for recording basic block info.  */
 
-#define BB_HEAD(B)  (B)->il.x.head_
-#define BB_END(B)   (B)->il.x.rtl->end_
-#define BB_HEADER(B)  (B)->il.x.rtl->header_
-#define BB_FOOTER(B)  (B)->il.x.rtl->footer_
+/* These macros are currently split into two:
+   one suitable for reading, and for writing.
+   These will become functions in a follow-up patch.  */
+#define BB_HEAD(B)  (((const_basic_block)B)->il.x.head_)
+#define SET_BB_HEAD(B)  (B)->il.x.head_
+#define BB_END(B)   (((const rtl_bb_info *)(B)->il.x.rtl)->end_)
+#define SET_BB_END(B)   (B)->il.x.rtl->end_
+#define BB_HEADER(B)  (((const rtl_bb_info *)(B)->il.x.rtl)->header_)
+#define SET_BB_HEADER(B) (B)->il.x.rtl->header_
+#define BB_FOOTER(B)  (((const rtl_bb_info *)(B)->il.x.rtl)->footer_)
+#define SET_BB_FOOTER(B) (B)->il.x.rtl->footer_
 
 /* Special block numbers [markers] for entry and exit.
Neither of 

[PATCH 013/236] DEP_PRO/DEP_CON scaffolding

2014-08-06 Thread David Malcolm
For now, convert DEP_PRO and DEP_CON into functions.  We will eventually
change them back to macros once the relevant fields are of type
rtx_insn *.

gcc/
* sched-int.h (DEP_PRO): struct _dep's pro and con fields will
eventually be rtx_insn *, but to help with transition, for now,
convert from an access macro into a pair of functions: DEP_PRO
returning an rtx_insn * and...
(SET_DEP_PRO): New function, for use where DEP_PRO is used as an
lvalue, returning an rtx.
(DEP_CON): Analogous changes to DEP_PRO above.
(SET_DEP_CON): Likewise.

* haifa-sched.c (create_check_block_twin): Replace DEP_CON used as
an lvalue to SET_DEP_CON.
* sched-deps.c (init_dep_1): Likewise for DEP_PRO and DEP_CON.
(sd_copy_back_deps): Likewise for DEP_CON.
(DEP_PRO): New function, adding a checked cast for now.
(DEP_CON): Likewise.
(SET_DEP_PRO): New function.
(SET_DEP_CON): Likewise.

/
* rtx-classes-status.txt: Add SET_DEP_PRO, SET_DEP_CON.
---
 gcc/haifa-sched.c  |  2 +-
 gcc/sched-deps.c   | 26 +++---
 gcc/sched-int.h|  6 --
 rtx-classes-status.txt |  1 +
 4 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 59c7fc9..caee1b8 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -7886,7 +7886,7 @@ create_check_block_twin (rtx insn, bool mutate_p)
 
   if (rec != EXIT_BLOCK_PTR_FOR_FN (cfun))
{
- DEP_CON (new_dep) = twin;
+ SET_DEP_CON (new_dep) = twin;
  sd_add_dep (new_dep, false);
}
 }
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index efc4223..d59cffc 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -103,8 +103,8 @@ dk_to_ds (enum reg_note dk)
 void
 init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note type, ds_t ds)
 {
-  DEP_PRO (dep) = pro;
-  DEP_CON (dep) = con;
+  SET_DEP_PRO (dep) = pro;
+  SET_DEP_CON (dep) = con;
   DEP_TYPE (dep) = type;
   DEP_STATUS (dep) = ds;
   DEP_COST (dep) = UNKNOWN_DEP_COST;
@@ -1416,7 +1416,7 @@ sd_copy_back_deps (rtx to, rtx from, bool resolved_p)
   dep_def _new_dep, *new_dep = _new_dep;
 
   copy_dep (new_dep, dep);
-  DEP_CON (new_dep) = to;
+  SET_DEP_CON (new_dep) = to;
   sd_add_dep (new_dep, resolved_p);
 }
 }
@@ -4895,4 +4895,24 @@ find_modifiable_mems (rtx head, rtx tail)
 success_in_block);
 }
 
+rtx_insn *DEP_PRO (dep_t dep)
+{
+  return as_a_nullable <rtx_insn *> (dep->pro);
+}
+
+rtx_insn *DEP_CON (dep_t dep)
+{
+  return as_a_nullable <rtx_insn *> (dep->con);
+}
+
+rtx& SET_DEP_PRO (dep_t dep)
+{
+  return dep->pro;
+}
+
+rtx& SET_DEP_CON (dep_t dep)
+{
+  return dep->con;
+}
+
 #endif /* INSN_SCHEDULING */
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index fe00496..3680889 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -250,8 +250,10 @@ struct _dep
 typedef struct _dep dep_def;
 typedef dep_def *dep_t;
 
-#define DEP_PRO(D) ((D)->pro)
-#define DEP_CON(D) ((D)->con)
+extern rtx_insn *DEP_PRO (dep_t dep);
+extern rtx_insn *DEP_CON (dep_t dep);
+extern rtx& SET_DEP_PRO (dep_t dep);
+extern rtx& SET_DEP_CON (dep_t dep);
 #define DEP_TYPE(D) ((D)->type)
 #define DEP_STATUS(D) ((D)->status)
 #define DEP_COST(D) ((D)->cost)
diff --git a/rtx-classes-status.txt b/rtx-classes-status.txt
index 68bbe54..2a8773f 100644
--- a/rtx-classes-status.txt
+++ b/rtx-classes-status.txt
@@ -12,4 +12,5 @@ TODO: Scaffolding to be removed
 =
 * DF_REF_INSN
 * SET_BB_HEAD, SET_BB_END, SET_BB_HEADER, SET_BB_FOOTER
+* SET_DEP_PRO, SET_DEP_CON
 * SET_NEXT_INSN, SET_PREV_INSN
-- 
1.8.5.3



[PATCH 021/236] entry_of_function returns an insn

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (entry_of_function): Strengthen return type from rtx to
rtx_insn *.
* cfgrtl.c (entry_of_function): Likewise.
---
 gcc/cfgrtl.c | 2 +-
 gcc/rtl.h| 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index d386367..3079bb3 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -499,7 +499,7 @@ make_pass_free_cfg (gcc::context *ctxt)
 }
 
 /* Return RTX to emit after when we want to emit code on the entry of 
function.  */
-rtx
+rtx_insn *
 entry_of_function (void)
 {
   return (n_basic_blocks_for_fn (cfun) > NUM_FIXED_BLOCKS ?
diff --git a/gcc/rtl.h b/gcc/rtl.h
index a703e34..3a28fcc 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -3061,7 +3061,7 @@ extern void add_insn_after (rtx, rtx, basic_block);
 extern void remove_insn (rtx);
 extern rtx emit (rtx);
 extern void delete_insn (rtx);
-extern rtx entry_of_function (void);
+extern rtx_insn *entry_of_function (void);
 extern void emit_insn_at_entry (rtx);
 extern void delete_insn_chain (rtx, rtx, bool);
 extern rtx unlink_insn_chain (rtx, rtx);
-- 
1.8.5.3



[PATCH 018/236] Strengthen return types of various {next|prev}_*insn from rtx to rtx_insn *

2014-08-06 Thread David Malcolm
These should all eventually require an rtx_insn * as an argument,
but we'll save that for a later patch.

gcc/
* rtl.h (previous_insn): Strengthen return type from rtx to
rtx_insn *.
(next_insn): Likewise.
(prev_nonnote_insn): Likewise.
(prev_nonnote_insn_bb): Likewise.
(next_nonnote_insn): Likewise.
(next_nonnote_insn_bb): Likewise.
(prev_nondebug_insn): Likewise.
(next_nondebug_insn): Likewise.
(prev_nonnote_nondebug_insn): Likewise.
(next_nonnote_nondebug_insn): Likewise.
(prev_real_insn): Likewise.
(next_real_insn): Likewise.
(prev_active_insn): Likewise.
(next_active_insn): Likewise.

* emit-rtl.c (next_insn): Strengthen return type from rtx to
rtx_insn *, adding a checked cast.
(previous_insn): Likewise.
(next_nonnote_insn): Likewise.
(next_nonnote_insn_bb): Likewise.
(prev_nonnote_insn): Likewise.
(prev_nonnote_insn_bb): Likewise.
(next_nondebug_insn): Likewise.
(prev_nondebug_insn): Likewise.
(next_nonnote_nondebug_insn): Likewise.
(prev_nonnote_nondebug_insn): Likewise.
(next_real_insn): Likewise.
(prev_real_insn): Likewise.
(next_active_insn): Likewise.
(prev_active_insn): Likewise.

* config/sh/sh-protos.h (sh_find_set_of_reg): Convert function ptr
param stepfunc so that it returns an rtx_insn * rather than an
rtx, to track the change to prev_nonnote_insn_bb, which is the
only function this is called with.
* config/sh/sh.c (sh_find_set_of_reg): Likewise.
---
 gcc/config/sh/sh-protos.h |  2 +-
 gcc/config/sh/sh.c|  2 +-
 gcc/emit-rtl.c| 60 +++
 gcc/rtl.h | 28 +++---
 4 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/gcc/config/sh/sh-protos.h b/gcc/config/sh/sh-protos.h
index 685cd23..cec324c 100644
--- a/gcc/config/sh/sh-protos.h
+++ b/gcc/config/sh/sh-protos.h
@@ -181,7 +181,7 @@ struct set_of_reg
   rtx set_src;
 };
 
-extern set_of_reg sh_find_set_of_reg (rtx reg, rtx insn, rtx(*stepfunc)(rtx));
+extern set_of_reg sh_find_set_of_reg (rtx reg, rtx insn, rtx_insn 
*(*stepfunc)(rtx));
 extern bool sh_is_logical_t_store_expr (rtx op, rtx insn);
 extern rtx sh_try_omit_signzero_extend (rtx extended_op, rtx insn);
 #endif /* RTX_CODE */
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index a5118c6..a21625f 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -13478,7 +13478,7 @@ sh_find_equiv_gbr_addr (rtx insn, rtx mem)
'prev_nonnote_insn_bb'.  When the insn is found, try to extract the rtx
of the reg set.  */
 set_of_reg
-sh_find_set_of_reg (rtx reg, rtx insn, rtx(*stepfunc)(rtx))
+sh_find_set_of_reg (rtx reg, rtx insn, rtx_insn *(*stepfunc)(rtx))
 {
   set_of_reg result;
   result.insn = insn;
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index 729e0cc..c51b7d8 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -3166,7 +3166,7 @@ get_max_insn_count (void)
 /* Return the next insn.  If it is a SEQUENCE, return the first insn
of the sequence.  */
 
-rtx
+rtx_insn *
 next_insn (rtx insn)
 {
   if (insn)
@@ -3177,13 +3177,13 @@ next_insn (rtx insn)
insn = XVECEXP (PATTERN (insn), 0, 0);
 }
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Return the previous insn.  If it is a SEQUENCE, return the last insn
of the sequence.  */
 
-rtx
+rtx_insn *
 previous_insn (rtx insn)
 {
   if (insn)
@@ -3194,13 +3194,13 @@ previous_insn (rtx insn)
insn = XVECEXP (PATTERN (insn), 0, XVECLEN (PATTERN (insn), 0) - 1);
 }
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Return the next insn after INSN that is not a NOTE.  This routine does not
look inside SEQUENCEs.  */
 
-rtx
+rtx_insn *
 next_nonnote_insn (rtx insn)
 {
   while (insn)
@@ -3210,14 +3210,14 @@ next_nonnote_insn (rtx insn)
break;
 }
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Return the next insn after INSN that is not a NOTE, but stop the
search before we enter another basic block.  This routine does not
look inside SEQUENCEs.  */
 
-rtx
+rtx_insn *
 next_nonnote_insn_bb (rtx insn)
 {
   while (insn)
@@ -3226,16 +3226,16 @@ next_nonnote_insn_bb (rtx insn)
   if (insn == 0 || !NOTE_P (insn))
break;
   if (NOTE_INSN_BASIC_BLOCK_P (insn))
-   return NULL_RTX;
+   return NULL;
 }
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Return the previous insn before INSN that is not a NOTE.  This routine does
not look inside SEQUENCEs.  */
 
-rtx
+rtx_insn *
 prev_nonnote_insn (rtx insn)
 {
   while (insn)
@@ -3245,14 +3245,14 @@ prev_nonnote_insn (rtx insn)
break;
 }
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Return the previous insn before INSN 

[PATCH 019/236] Strengthen return type of gen_label_rtx

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (gen_label_rtx): Strengthen return type from rtx to
rtx_code_label *.

* emit-rtl.c (gen_label_rtx): Likewise.
---
 gcc/emit-rtl.c | 7 ---
 gcc/rtl.h  | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index c51b7d8..5175284 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -2468,11 +2468,12 @@ set_mem_attrs_for_spill (rtx mem)
 
 /* Return a newly created CODE_LABEL rtx with a unique label number.  */
 
-rtx
+rtx_code_label *
 gen_label_rtx (void)
 {
-  return gen_rtx_CODE_LABEL (VOIDmode, NULL_RTX, NULL_RTX,
-NULL, label_num++, NULL);
+  return as_a <rtx_code_label *> (
+   gen_rtx_CODE_LABEL (VOIDmode, NULL_RTX, NULL_RTX,
+   NULL, label_num++, NULL));
 }
 
 /* For procedure integration.  */
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 049f01e..a703e34 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2278,7 +2278,7 @@ extern rtx gen_reg_rtx (enum machine_mode);
 extern rtx gen_rtx_REG_offset (rtx, enum machine_mode, unsigned int, int);
 extern rtx gen_reg_rtx_offset (rtx, enum machine_mode, int);
 extern rtx gen_reg_rtx_and_attrs (rtx);
-extern rtx gen_label_rtx (void);
+extern rtx_code_label *gen_label_rtx (void);
 extern rtx gen_lowpart_common (enum machine_mode, rtx);
 
 /* In cse.c */
-- 
1.8.5.3



[PATCH 004/236] PHASE 1: Initial scaffolding commits

2014-08-06 Thread David Malcolm
This commit is a placeholder for me when rebasing, to help organize the
patch kit.

/
* rtx-classes-status.txt: New file
---
 rtx-classes-status.txt | 9 +
 1 file changed, 9 insertions(+)
 create mode 100644 rtx-classes-status.txt

diff --git a/rtx-classes-status.txt b/rtx-classes-status.txt
new file mode 100644
index 000..9971853
--- /dev/null
+++ b/rtx-classes-status.txt
@@ -0,0 +1,9 @@
+git rebase has a tendency to delete empty commits, so this dummy file
+exists to be modified by marker commits.
+
+Phase 1: initial scaffolding commits:IN PROGRESS
+Phase 2: per-file commits in main source dir:  TODO
+Phase 3: per-file commits within config subdirs: TODO
+Phase 4: removal of scaffolding: TODO
+Phase 5: additional rtx_def subclasses:TODO
+Phase 6: use extra rtx_def subclasses: TODO
-- 
1.8.5.3



[PATCH 020/236] Return rtx_insn from get_insns/get_last_insn

2014-08-06 Thread David Malcolm
Ultimately, the underlying fields should become rtx_insn *, but for now we
can do this with a checked cast.

Note to self:
  config/m32c/m32c.c: m32c_leaf_function_p directly manipulates
  x_first_insn and x_last_insn, using sequence_stack.

gcc/
* emit-rtl.h (get_insns): Strengthen return type from rtx to
rtx_insn *, adding a checked cast for now.
(get_last_insn): Likewise.
---
 gcc/emit-rtl.h | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/gcc/emit-rtl.h b/gcc/emit-rtl.h
index c72c24f..f97ac49 100644
--- a/gcc/emit-rtl.h
+++ b/gcc/emit-rtl.h
@@ -74,10 +74,11 @@ extern bool need_atomic_barrier_p (enum memmodel, bool);
 
 /* Return the first insn of the current sequence or current function.  */
 
-static inline rtx
+static inline rtx_insn *
 get_insns (void)
 {
-  return crtl->emit.x_first_insn;
+  rtx insn = crtl->emit.x_first_insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Specify a new insn as the first in the chain.  */
@@ -91,10 +92,11 @@ set_first_insn (rtx insn)
 
 /* Return the last insn emitted in current sequence or current function.  */
 
-static inline rtx
+static inline rtx_insn *
 get_last_insn (void)
 {
-  return crtl->emit.x_last_insn;
+  rtx insn = crtl->emit.x_last_insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 
 /* Specify a new insn as the last in the chain.  */
-- 
1.8.5.3



[PATCH 027/236] asan_emit_stack_protection returns an insn

2014-08-06 Thread David Malcolm
gcc/
* asan.h (asan_emit_stack_protection): Strengthen return type from
rtx to rtx_insn *.
* asan.c (asan_emit_stack_protection): Likewise.  Add local
insns to hold the return value.
---
 gcc/asan.c | 7 ---
 gcc/asan.h | 4 ++--
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/gcc/asan.c b/gcc/asan.c
index 118f9fc..11627c7 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -960,11 +960,12 @@ asan_function_start (void)
assigned to PBASE, when not doing use after return protection, or
corresponding address based on __asan_stack_malloc* return value.  */
 
-rtx
+rtx_insn *
 asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
HOST_WIDE_INT *offsets, tree *decls, int length)
 {
   rtx shadow_base, shadow_mem, ret, mem, orig_base, lab;
+  rtx_insn *insns;
   char buf[30];
   unsigned char shadow_bytes[4];
   HOST_WIDE_INT base_offset = offsets[length - 1];
@@ -1234,9 +1235,9 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned 
int alignb,
   if (lab)
 emit_label (lab);
 
-  ret = get_insns ();
+  insns = get_insns ();
   end_sequence ();
-  return ret;
+  return insns;
 }
 
 /* Return true if DECL, a global var, might be overridden and needs
diff --git a/gcc/asan.h b/gcc/asan.h
index 08d5063..198433f 100644
--- a/gcc/asan.h
+++ b/gcc/asan.h
@@ -23,8 +23,8 @@ along with GCC; see the file COPYING3.  If not see
 
 extern void asan_function_start (void);
 extern void asan_finish_file (void);
-extern rtx asan_emit_stack_protection (rtx, rtx, unsigned int, HOST_WIDE_INT *,
-  tree *, int);
+extern rtx_insn *asan_emit_stack_protection (rtx, rtx, unsigned int,
+HOST_WIDE_INT *, tree *, int);
 extern bool asan_protect_global (tree);
 extern void initialize_sanitizer_builtins (void);
 extern tree asan_dynamic_init_call (bool);
-- 
1.8.5.3



[PATCH 006/236] Introduce rtx_insn subclass of rtx_def

2014-08-06 Thread David Malcolm
gcc/
* coretypes.h (class rtx_insn): Add forward declaration.

* rtl.h: Include is-a.h
(struct rtx_def): Add dummy desc and tag GTY options as a
workaround to ensure gengtype knows inheritance is occurring,
whilst continuing to use the pre-existing special-casing for
rtx_def.
(class rtx_insn): New subclass of rtx_def, adding the
invariant that we're dealing with something we can sanely use INSN_UID,
NEXT_INSN, PREV_INSN on.
(is_a_helper <rtx_insn *>::test): New.
(is_a_helper <const rtx_insn *>::test): New.
---
 gcc/coretypes.h |  7 +++
 gcc/rtl.h   | 60 -
 2 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/gcc/coretypes.h b/gcc/coretypes.h
index bbb5150..f22b980 100644
--- a/gcc/coretypes.h
+++ b/gcc/coretypes.h
@@ -55,6 +55,13 @@ typedef const struct simple_bitmap_def *const_sbitmap;
 struct rtx_def;
 typedef struct rtx_def *rtx;
 typedef const struct rtx_def *const_rtx;
+
+/* Subclasses of rtx_def, using indentation to show the class
+   hierarchy.
+   Where possible, keep this list in the same order as in rtl.def.  */
+class rtx_def;
+  class rtx_insn;
+
 struct rtvec_def;
 typedef struct rtvec_def *rtvec;
 typedef const struct rtvec_def *const_rtvec;
diff --git a/gcc/rtl.h b/gcc/rtl.h
index b9b069a..0858230 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -31,6 +31,7 @@ along with GCC; see the file COPYING3.  If not see
 #include hashtab.h
 #include wide-int.h
 #include flags.h
+#include is-a.h
 
 /* Value used by some passes to recognize noop moves as valid
  instructions.  */
@@ -266,7 +267,21 @@ struct GTY((variable_size)) hwivec_def {
 
 /* RTL expression (rtx).  */
 
-struct GTY((chain_next ("RTX_NEXT (&%h)"),
+/* The GTY desc and tag options below are a kludge: we need a desc
+   field for for gengtype to recognize that inheritance is occurring,
+   so that all subclasses are redirected to the traversal hook for the
+   base class.
+   However, all of the fields are in the base class, and special-casing
+   is at work.  Hence we use desc and tag of 0, generating a switch
+   statement of the form:
+ switch (0)
+   {
+   case 0: // all the work happens here
+  }
+   in order to work with the existing special-casing in gengtype.  */
+
+struct GTY((desc("0"), tag("0"),
+	    chain_next ("RTX_NEXT (&%h)"),
+	    chain_prev ("RTX_PREV (&%h)"))) rtx_def {
   /* The kind of expression this is.  */
   ENUM_BITFIELD(rtx_code) code: 16;
@@ -387,6 +402,25 @@ struct GTY((chain_next (RTX_NEXT (%h)),
   } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u;
 };
 
+class GTY(()) rtx_insn : public rtx_def
+{
+  /* No extra fields, but adds the invariant:
+
+ (INSN_P (X)
+  || NOTE_P (X)
+  || JUMP_TABLE_DATA_P (X)
+  || BARRIER_P (X)
+  || LABEL_P (X))
+
+ i.e. that we must be able to use the following:
+  INSN_UID ()
+  NEXT_INSN ()
+  PREV_INSN ()
+i.e. we have an rtx that has an INSN_UID field and can be part of
+a linked list of insns.
+  */
+};
+
 /* The size in bytes of an rtx header (code, mode and flags).  */
 #define RTX_HDR_SIZE offsetof (struct rtx_def, u)
 
@@ -548,6 +582,30 @@ struct GTY(()) rtvec_def {
 /* Predicate yielding nonzero iff X is a data for a jump table.  */
 #define JUMP_TABLE_DATA_P(INSN) (GET_CODE (INSN) == JUMP_TABLE_DATA)
 
+template <>
+template <>
+inline bool
+is_a_helper <rtx_insn *>::test (rtx rt)
+{
+  return (INSN_P (rt)
+	  || NOTE_P (rt)
+	  || JUMP_TABLE_DATA_P (rt)
+	  || BARRIER_P (rt)
+	  || LABEL_P (rt));
+}
+
+template <>
+template <>
+inline bool
+is_a_helper <const rtx_insn *>::test (const_rtx rt)
+{
+  return (INSN_P (rt)
+	  || NOTE_P (rt)
+	  || JUMP_TABLE_DATA_P (rt)
+	  || BARRIER_P (rt)
+	  || LABEL_P (rt));
+}
+
 /* Predicate yielding nonzero iff X is a return or simple_return.  */
 #define ANY_RETURN_P(X) \
   (GET_CODE (X) == RETURN || GET_CODE (X) == SIMPLE_RETURN)
-- 
1.8.5.3



[PATCH 009/236] Replace BB_HEAD et al macros with functions

2014-08-06 Thread David Malcolm
This is further scaffolding; convert the BB_* and SET_BB_* macros
into functions.  Convert the BB_* rvalue-style functions into returning
rtx_insn * rather than plain rtx.

For now, this is done by adding a checked cast, but this will eventually
become a field lookup.  The lvalue form for now returns an rtx to allow
in-place modification.

gcc/
* basic-block.h (BB_HEAD): Convert to a function.  Strengthen the
return type from rtx to rtx_insn *.
(BB_END): Likewise.
(BB_HEADER): Likewise.
(BB_FOOTER): Likewise.
(SET_BB_HEAD): Convert to a function.
(SET_BB_END): Likewise.
(SET_BB_HEADER): Likewise.
(SET_BB_FOOTER): Likewise.

* cfgrtl.c (BB_HEAD): New function, from macro of same name.
Strengthen the return type from rtx to rtx_insn *.  For now, this
is done by adding a checked cast, but this will eventually
become a field lookup.
(BB_END): Likewise.
(BB_HEADER): Likewise.
(BB_FOOTER): Likewise.
(SET_BB_HEAD): New function, from macro of same name.  This is
intended for use as an lvalue, and so returns an rtx to allow
in-place modification.
(SET_BB_END): Likewise.
(SET_BB_HEADER): Likewise.
(SET_BB_FOOTER): Likewise.
---
 gcc/basic-block.h | 26 ++--
 gcc/cfgrtl.c  | 60 +++
 2 files changed, 75 insertions(+), 11 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index d27f498..82dbfe9 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -368,17 +368,21 @@ struct GTY(()) control_flow_graph {
 
 /* Stuff for recording basic block info.  */
 
-/* These macros are currently split into two:
-   one suitable for reading, and for writing.
-   These will become functions in a follow-up patch.  */
-#define BB_HEAD(B)  (((const_basic_block)B)->il.x.head_)
-#define SET_BB_HEAD(B)  (B)->il.x.head_
-#define BB_END(B)   (((const rtl_bb_info *)(B)->il.x.rtl)->end_)
-#define SET_BB_END(B)   (B)->il.x.rtl->end_
-#define BB_HEADER(B)  (((const rtl_bb_info *)(B)->il.x.rtl)->header_)
-#define SET_BB_HEADER(B) (B)->il.x.rtl->header_
-#define BB_FOOTER(B)  (((const rtl_bb_info *)(B)->il.x.rtl)->footer_)
-#define SET_BB_FOOTER(B) (B)->il.x.rtl->footer_
+/* For now, these will be functions (so that they can include checked casts
+   to rtx_insn.   Once the underlying fields are converted from rtx
+   to rtx_insn, these can be converted back to macros.  */
+
+extern rtx_insn *BB_HEAD (const_basic_block bb);
+extern rtx& SET_BB_HEAD (basic_block bb);
+
+extern rtx_insn *BB_END (const_basic_block bb);
+extern rtx& SET_BB_END (basic_block bb);
+
+extern rtx_insn *BB_HEADER (const_basic_block bb);
+extern rtx& SET_BB_HEADER (basic_block bb);
+
+extern rtx_insn *BB_FOOTER (const_basic_block bb);
+extern rtx& SET_BB_FOOTER (basic_block bb);
 
 /* Special block numbers [markers] for entry and exit.
Neither of them is supposed to hold actual statements.  */
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 026fb48..5f2879e 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -5094,4 +5094,64 @@ struct cfg_hooks cfg_layout_rtl_cfg_hooks = {
   rtl_account_profile_record,
 };
 
+/* BB_HEAD as an rvalue. */
+
+rtx_insn *BB_HEAD (const_basic_block bb)
+{
+  rtx insn = bb->il.x.head_;
+  return as_a_nullable <rtx_insn *> (insn);
+}
+
+/* BB_HEAD for use as an lvalue. */
+
+rtx& SET_BB_HEAD (basic_block bb)
+{
+  return bb->il.x.head_;
+}
+
+/* BB_END as an rvalue. */
+
+rtx_insn *BB_END (const_basic_block bb)
+{
+  rtx insn = bb->il.x.rtl->end_;
+  return as_a_nullable <rtx_insn *> (insn);
+}
+
+/* BB_END as an lvalue. */
+
+rtx& SET_BB_END (basic_block bb)
+{
+  return bb->il.x.rtl->end_;
+}
+
+/* BB_HEADER as an rvalue. */
+
+rtx_insn *BB_HEADER (const_basic_block bb)
+{
+  rtx insn = bb->il.x.rtl->header_;
+  return as_a_nullable <rtx_insn *> (insn);
+}
+
+/* BB_HEADER as an lvalue. */
+
+rtx& SET_BB_HEADER (basic_block bb)
+{
+  return bb->il.x.rtl->header_;
+}
+
+/* BB_FOOTER as an rvalue. */
+
+rtx_insn *BB_FOOTER (const_basic_block bb)
+{
+  rtx insn = bb->il.x.rtl->footer_;
+  return as_a_nullable <rtx_insn *> (insn);
+}
+
+/* BB_FOOTER as an lvalue. */
+
+rtx& SET_BB_FOOTER (basic_block bb)
+{
+  return bb->il.x.rtl->footer_;
+}
+
 #include "gt-cfgrtl.h"
-- 
1.8.5.3



[PATCH 007/236] New function: for_each_rtx_in_insn

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (for_each_rtx_in_insn): New function.
* rtlanal.c (for_each_rtx_in_insn): Likewise.
---
 gcc/rtl.h |  1 +
 gcc/rtlanal.c | 16 
 2 files changed, 17 insertions(+)

diff --git a/gcc/rtl.h b/gcc/rtl.h
index 0858230..3e37ed0 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2356,6 +2356,7 @@ extern int computed_jump_p (const_rtx);
 
 typedef int (*rtx_function) (rtx *, void *);
 extern int for_each_rtx (rtx *, rtx_function, void *);
+extern int for_each_rtx_in_insn (rtx_insn **, rtx_function, void *);
 
 /* Callback for for_each_inc_dec, to process the autoinc operation OP
within MEM that sets DEST to SRC + SRCOFF, or SRC if SRCOFF is
diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c
index 82cfc1bf..5e2e908 100644
--- a/gcc/rtlanal.c
+++ b/gcc/rtlanal.c
@@ -3011,6 +3011,22 @@ for_each_rtx (rtx *x, rtx_function f, void *data)
   return for_each_rtx_1 (*x, i, f, data);
 }
 
+/* Like for_each_rtx, but for calling on an rtx_insn **.  */
+
+int
+for_each_rtx_in_insn (rtx_insn **insn, rtx_function f, void *data)
+{
+  rtx insn_as_rtx = *insn;
+  int result;
+
+  result = for_each_rtx (&insn_as_rtx, f, data);
+
+  if (insn_as_rtx != *insn)
+    *insn = as_a_nullable <rtx_insn *> (insn_as_rtx);
+
+  return result;
+}
+
 
 
 /* Data structure that holds the internal state communicated between
-- 
1.8.5.3



[PATCH 030/236] Convert various rtx to rtx_note *

2014-08-06 Thread David Malcolm
gcc/
* basic-block.h (create_basic_block_structure): Strengthen third
param bb_note from rtx to rtx_note *.
* rtl.h (emit_note_before): Strengthen return type from rtx to
rtx_note *.
(emit_note_after): Likewise.
(emit_note): Likewise.
(emit_note_copy): Likewise.  Also, strengthen param similarly.
* function.h (struct rtl_data): Strengthen field
x_stack_check_probe_note from rtx to rtx_note *.

* cfgexpand.c (expand_gimple_basic_block): Strengthen local note
from rtx to rtx_note *.
* cfgrtl.c (create_basic_block_structure): Strengthen third param
bb_note from rtx to rtx_note *.
(duplicate_insn_chain): Likewise for local last.  Add a checked cast
when calling emit_note_copy.
* emit-rtl.c (make_note_raw): Strengthen return type from rtx to
rtx_note *.
(emit_note_after): Likewise.
(emit_note_before): Likewise.
(emit_note_copy): Likewise.  Also, strengthen param similarly.
(emit_note): Likewise.
* except.c (convert_to_eh_region_ranges): Strengthen local note
from rtx to rtx_note *.
* final.c (change_scope): Likewise.
(reemit_insn_block_notes): Likewise, for both locals named note.
Also, strengthen local insn from rtx to rtx_insn *.
* haifa-sched.c (sched_extend_bb): Strengthen local note from
rtx to rtx_note *.
* reg-stack.c (compensate_edge): Likewise for local after. Also,
strengthen local seq from rtx to rtx_insn *.
* reload1.c (reload_as_needed): Strengthen local marker from rtx
to rtx_note *.
* sel-sched-ir.c (bb_note_pool): Strengthen from rtx_vec_t to
vec<rtx_note *>.
(get_bb_note_from_pool): Strengthen return type from rtx to
rtx_note *.
(sel_create_basic_block): Strengthen local new_bb_note from
insn_t to rtx_note *.
* var-tracking.c (emit_note_insn_var_location): Strengthen local
note from rtx to rtx_note *.
(emit_notes_in_bb): Likewise.
---
 gcc/basic-block.h  |  3 ++-
 gcc/cfgexpand.c|  4 ++--
 gcc/cfgrtl.c   |  8 +---
 gcc/emit-rtl.c | 22 +++---
 gcc/except.c   |  3 ++-
 gcc/final.c|  7 ---
 gcc/function.h |  2 +-
 gcc/haifa-sched.c  |  2 +-
 gcc/reg-stack.c|  3 ++-
 gcc/reload1.c  |  3 ++-
 gcc/rtl.h  |  8 
 gcc/sel-sched-ir.c | 10 +-
 gcc/var-tracking.c |  6 --
 13 files changed, 45 insertions(+), 36 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 87094c6..03dbdbc 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -412,7 +412,8 @@ extern void remove_edge_raw (edge);
 extern void redirect_edge_succ (edge, basic_block);
 extern edge redirect_edge_succ_nodup (edge, basic_block);
 extern void redirect_edge_pred (edge, basic_block);
-extern basic_block create_basic_block_structure (rtx, rtx, rtx, basic_block);
+extern basic_block create_basic_block_structure (rtx, rtx, rtx_note *,
+basic_block);
 extern void clear_bb_flags (void);
 extern void dump_bb_info (FILE *, basic_block, int, int, bool, bool);
 extern void dump_edge_info (FILE *, edge, int, int);
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 643bb19..d2dc924 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -4878,7 +4878,7 @@ expand_gimple_basic_block (basic_block bb, bool 
disable_tail_calls)
   gimple_stmt_iterator gsi;
   gimple_seq stmts;
   gimple stmt = NULL;
-  rtx note;
+  rtx_note *note;
   rtx_insn *last;
   edge e;
   edge_iterator ei;
@@ -4951,7 +4951,7 @@ expand_gimple_basic_block (basic_block bb, bool 
disable_tail_calls)
   maybe_dump_rtl_for_gimple_stmt (stmt, last);
 }
   else
-note = SET_BB_HEAD (bb) = emit_note (NOTE_INSN_BASIC_BLOCK);
+SET_BB_HEAD (bb) = note = emit_note (NOTE_INSN_BASIC_BLOCK);
 
   NOTE_BASIC_BLOCK (note) = bb;
 
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index ac3bc87..2a490f9 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -272,7 +272,8 @@ delete_insn_chain (rtx start, rtx finish, bool clear_bb)
AFTER is the basic block we should be put after.  */
 
 basic_block
-create_basic_block_structure (rtx head, rtx end, rtx bb_note, basic_block 
after)
+create_basic_block_structure (rtx head, rtx end, rtx_note *bb_note,
+ basic_block after)
 {
   basic_block bb;
 
@@ -4085,7 +4086,8 @@ cfg_layout_can_duplicate_bb_p (const_basic_block bb)
 rtx
 duplicate_insn_chain (rtx from, rtx to)
 {
-  rtx insn, next, last, copy;
+  rtx insn, next, copy;
+  rtx_note *last;
 
   /* Avoid updating of boundaries of previous basic block.  The
  note will get removed from insn stream in fixup.  */
@@ -4153,7 +4155,7 @@ duplicate_insn_chain (rtx from, rtx to)
  break;
 
case NOTE_INSN_EPILOGUE_BEG:
- emit_note_copy (insn);
+ emit_note_copy 

[PATCH 040/236] Use rtx_insn internally within generated functions

2014-08-06 Thread David Malcolm
With this patch, insn and curr_insn as used from C++ fragments in .md
files are strengthened from rtx to rtx_insn *, allowing numerous
target-specific functions to have their params similarly strengthened.

The top-level interfaces (recog, split, peephole2) continue to take
a plain rtx for insn, to avoid introducing dependencies on other
patches.

gcc/
* recog.h (insn_output_fn): Update this function typedef to match
the changes below to the generated output functions, strengthening
the 2nd param from rtx to rtx_insn *.

* final.c (get_insn_template): Add a checked cast to rtx_insn * on
insn when invoking an output function, to match the new signature
of insn_output_fn with a stronger second param.

* genconditions.c (write_header): In the generated code for
gencondmd.c, strengthen the global insn from rtx to rtx_insn *
to match the other changes in this patch.

* genemit.c (gen_split): Strengthen the 1st param curr_insn of
the generated gen_ functions from rtx to rtx_insn * within their
implementations.

* genrecog.c (write_subroutine): Strengthen the 2nd param insn of
the subfunctions within the generated recog_, split, peephole2
function trees from rtx to rtx_insn *.  For now, the top-level
generated functions (recog, split, peephole2) continue to
take a plain rtx for insn, to avoid introducing dependencies on
other patches.  Rename this 2nd param from insn to
uncast_insn, and reintroduce insn as a local variable of type
rtx_insn *, initialized at the top of the generated function with
a checked cast on uncast_insn.
(make_insn_sequence): Strengthen the 1st param curr_insn of
the generated gen_ functions from rtx to rtx_insn * within their
prototypes.

* genoutput.c (process_template): Strengthen the 2nd param within
the generated output_ functions insn from rtx to rtx_insn *.
---
 gcc/final.c |  3 ++-
 gcc/genconditions.c |  2 +-
 gcc/genemit.c   |  8 
 gcc/genoutput.c |  4 ++--
 gcc/genrecog.c  | 29 ++---
 gcc/recog.h |  2 +-
 6 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/gcc/final.c b/gcc/final.c
index 38f6e0c..3a78aad 100644
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -2055,7 +2055,8 @@ get_insn_template (int code, rtx insn)
   return insn_data[code].output.multi[which_alternative];
 case INSN_OUTPUT_FORMAT_FUNCTION:
   gcc_assert (insn);
-  return (*insn_data[code].output.function) (recog_data.operand, insn);
+  return (*insn_data[code].output.function) (recog_data.operand,
+					      as_a <rtx_insn *> (insn));
 
 default:
   gcc_unreachable ();
diff --git a/gcc/genconditions.c b/gcc/genconditions.c
index dc22c78..8390797 100644
--- a/gcc/genconditions.c
+++ b/gcc/genconditions.c
@@ -95,7 +95,7 @@ write_header (void)
 
   puts (\
 /* Dummy external declarations.  */\n\
-extern rtx insn;\n\
+extern rtx_insn *insn;\n\
 extern rtx ins1;\n\
 extern rtx operands[];\n\
 \n\
diff --git a/gcc/genemit.c b/gcc/genemit.c
index 16b5644..1bc73f0 100644
--- a/gcc/genemit.c
+++ b/gcc/genemit.c
@@ -557,15 +557,15 @@ gen_split (rtx split)
   /* Output the prototype, function name and argument declarations.  */
   if (GET_CODE (split) == DEFINE_PEEPHOLE2)
 {
-      printf ("extern rtx gen_%s_%d (rtx, rtx *);\n",
	      name, insn_code_number);
-      printf ("rtx\ngen_%s_%d (rtx curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
+      printf ("extern rtx gen_%s_%d (rtx_insn *, rtx *);\n",
	      name, insn_code_number);
+      printf ("rtx\ngen_%s_%d (rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
	      name, insn_code_number, unused);
     }
   else
     {
-      printf ("extern rtx gen_split_%d (rtx, rtx *);\n", insn_code_number);
-      printf ("rtx\ngen_split_%d (rtx curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
	      insn_code_number, unused);
+      printf ("extern rtx gen_split_%d (rtx_insn *, rtx *);\n", insn_code_number);
+      printf ("rtx\ngen_split_%d (rtx_insn *curr_insn ATTRIBUTE_UNUSED, rtx *operands%s)\n",
	      insn_code_number, unused);
   printf ("{\n");
index b3ce120..b33a361 100644
--- a/gcc/genoutput.c
+++ b/gcc/genoutput.c
@@ -652,7 +652,7 @@ process_template (struct data *d, const char *template_code)
   d->output_format = INSN_OUTPUT_FORMAT_FUNCTION;
 
   puts ("\nstatic const char *");
-  printf ("output_%d (rtx *operands ATTRIBUTE_UNUSED, rtx insn ATTRIBUTE_UNUSED)\n",
+  printf ("output_%d (rtx *operands ATTRIBUTE_UNUSED, rtx_insn *insn ATTRIBUTE_UNUSED)\n",
	  d->code_number);
   puts ("{");
   print_md_ptr_loc (template_code);
@@ -681,7 +681,7 @@ process_template (struct data *d, const char *template_code)
	  d->output_format = INSN_OUTPUT_FORMAT_FUNCTION;
	  puts ("\nstatic const char *");

[PATCH 032/236] emit_* functions return rtx_insn

2014-08-06 Thread David Malcolm
More scaffolding: strengthen the return types from the various emit_
functions from rtx to rtx_insn * (or to the rtx_barrier * subclass in a
few cases).

These will ultimately have their params strengthened also, but we
postpone that until much later in the patch series.  So for now there
are also various checked casts to ensure we really got an insn when
returning such params back.

Doing so requires a minor tweak to config/sh/sh.c

gcc/
* emit-rtl.h (emit_copy_of_insn_after): Strengthen return type
from rtx to rtx_insn *.

* rtl.h (emit_insn_before): Likewise.
(emit_insn_before_noloc): Likewise.
(emit_insn_before_setloc): Likewise.
(emit_jump_insn_before): Likewise.
(emit_jump_insn_before_noloc): Likewise.
(emit_jump_insn_before_setloc): Likewise.
(emit_call_insn_before): Likewise.
(emit_call_insn_before_noloc): Likewise.
(emit_call_insn_before_setloc): Likewise.
(emit_debug_insn_before): Likewise.
(emit_debug_insn_before_noloc): Likewise.
(emit_debug_insn_before_setloc): Likewise.
(emit_label_before): Likewise.
(emit_insn_after): Likewise.
(emit_insn_after_noloc): Likewise.
(emit_insn_after_setloc): Likewise.
(emit_jump_insn_after): Likewise.
(emit_jump_insn_after_noloc): Likewise.
(emit_jump_insn_after_setloc): Likewise.
(emit_call_insn_after): Likewise.
(emit_call_insn_after_noloc): Likewise.
(emit_call_insn_after_setloc): Likewise.
(emit_debug_insn_after): Likewise.
(emit_debug_insn_after_noloc): Likewise.
(emit_debug_insn_after_setloc): Likewise.
(emit_label_after): Likewise.
(emit_insn): Likewise.
(emit_debug_insn): Likewise.
(emit_jump_insn): Likewise.
(emit_call_insn): Likewise.
(emit_label): Likewise.
(gen_clobber): Likewise.
(emit_clobber): Likewise.
(gen_use): Likewise.
(emit_use): Likewise.
(emit): Likewise.

(emit_barrier_before): Strengthen return type from rtx to
rtx_barrier *.
(emit_barrier_after): Likewise.
(emit_barrier): Likewise.

* emit-rtl.c (emit_pattern_before_noloc):  Strengthen return type
from rtx to rtx_insn *.  Add checked casts for now when converting
last from rtx to rtx_insn *.
(emit_insn_before_noloc): Likewise for return type.
(emit_jump_insn_before_noloc): Likewise.
(emit_call_insn_before_noloc): Likewise.
(emit_debug_insn_before_noloc): Likewise.
(emit_barrier_before): Strengthen return type and local insn
from rtx to rtx_barrier *.
(emit_label_before): Strengthen return type from rtx to
rtx_insn *.  Add checked cast for now when returning param
(emit_pattern_after_noloc): Strengthen return type from rtx to
rtx_insn *.  Add checked casts for now when converting last from
rtx to rtx_insn *.
(emit_insn_after_noloc): Strengthen return type from rtx to
rtx_insn *.
(emit_jump_insn_after_noloc): Likewise.
(emit_call_insn_after_noloc): Likewise.
(emit_debug_insn_after_noloc): Likewise.
(emit_barrier_after): Strengthen return type from rtx to
rtx_barrier *.
(emit_label_after): Strengthen return type from rtx to rtx_insn *.
Add checked cast for now when converting label from rtx to
rtx_insn *.
(emit_pattern_after_setloc): Strengthen return type from rtx to
rtx_insn *.  Add checked casts for now when converting last from
rtx to rtx_insn *.
(emit_pattern_after): Strengthen return type from rtx to
rtx_insn *.
(emit_insn_after_setloc): Likewise.
(emit_insn_after): Likewise.
(emit_jump_insn_after_setloc): Likewise.
(emit_jump_insn_after): Likewise.
(emit_call_insn_after_setloc): Likewise.
(emit_call_insn_after): Likewise.
(emit_debug_insn_after_setloc): Likewise.
(emit_debug_insn_after): Likewise.
(emit_pattern_before_setloc): Likewise.  Add checked casts for now
when converting last from rtx to rtx_insn *.
(emit_pattern_before): Strengthen return type from rtx to
rtx_insn *.
(emit_insn_before_setloc): Likewise.
(emit_insn_before): Likewise.
(emit_jump_insn_before_setloc): Likewise.
(emit_jump_insn_before): Likewise.
(emit_call_insn_before_setloc): Likewise.
(emit_call_insn_before): Likewise.
(emit_debug_insn_before_setloc): Likewise.
(emit_debug_insn_before): Likewise.
(emit_insn): Strengthen return type and locals last, insn,
next from rtx to rtx_insn *.  Add checked cast to rtx_insn
within cases where we know we have an insn.
(emit_debug_insn): Likewise.
(emit_jump_insn): Likewise.
(emit_call_insn): Strengthen return 

[PATCH 034/236] next_cc0_user and prev_cc0_setter scaffolding

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (next_cc0_user): Strengthen return type from rtx to
rtx_insn *.
(prev_cc0_setter): Likewise.

* emit-rtl.c (next_cc0_user): Strengthen return type from rtx to
rtx_insn *, adding checked casts for now as necessary.
(prev_cc0_setter): Likewise.
---
 gcc/emit-rtl.c | 12 ++--
 gcc/rtl.h  |  4 ++--
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index 042694a..b64b276 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -3437,20 +3437,20 @@ prev_active_insn (rtx insn)
 
Return 0 if we can't find the insn.  */
 
-rtx
+rtx_insn *
 next_cc0_user (rtx insn)
 {
   rtx note = find_reg_note (insn, REG_CC_USER, NULL_RTX);
 
   if (note)
-return XEXP (note, 0);
+return as_a_nullable <rtx_insn *> (XEXP (note, 0));
 
   insn = next_nonnote_insn (insn);
   if (insn && NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == SEQUENCE)
 insn = XVECEXP (PATTERN (insn), 0, 0);
 
   if (insn && INSN_P (insn) && reg_mentioned_p (cc0_rtx, PATTERN (insn)))
-return insn;
+return as_a_nullable <rtx_insn *> (insn);
 
   return 0;
 }
@@ -3458,18 +3458,18 @@ next_cc0_user (rtx insn)
 /* Find the insn that set CC0 for INSN.  Unless INSN has a REG_CC_SETTER
note, it is the previous insn.  */
 
-rtx
+rtx_insn *
 prev_cc0_setter (rtx insn)
 {
   rtx note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX);
 
   if (note)
-return XEXP (note, 0);
+return as_a_nullable <rtx_insn *> (XEXP (note, 0));
 
   insn = prev_nonnote_insn (insn);
   gcc_assert (sets_cc0_p (PATTERN (insn)));
 
-  return insn;
+  return as_a_nullable <rtx_insn *> (insn);
 }
 #endif
 
diff --git a/gcc/rtl.h b/gcc/rtl.h
index d519908..b4027aa 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2418,8 +2418,8 @@ extern rtx_insn *next_real_insn (rtx);
 extern rtx_insn *prev_active_insn (rtx);
 extern rtx_insn *next_active_insn (rtx);
 extern int active_insn_p (const_rtx);
-extern rtx next_cc0_user (rtx);
-extern rtx prev_cc0_setter (rtx);
+extern rtx_insn *next_cc0_user (rtx);
+extern rtx_insn *prev_cc0_setter (rtx);
 
 /* In emit-rtl.c  */
 extern int insn_line (const_rtx);
-- 
1.8.5.3



[PATCH 042/236] try_split returns an rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* rtl.h (try_split): Strengthen return type from rtx to rtx_insn *.

* emit-rtl.c (try_split): Likewise, also for locals before and
after.  For now, don't strengthen param trial, which requires
adding checked casts when returning it.
---
 gcc/emit-rtl.c | 12 ++--
 gcc/rtl.h  |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index b64b276..05b787b 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -3538,11 +3538,11 @@ mark_label_nuses (rtx x)
replacement insn depending on the value of LAST.  Otherwise, it
returns TRIAL.  If the insn to be returned can be split, it will be.  */
 
-rtx
+rtx_insn *
 try_split (rtx pat, rtx trial, int last)
 {
-  rtx before = PREV_INSN (trial);
-  rtx after = NEXT_INSN (trial);
+  rtx_insn *before = PREV_INSN (trial);
+  rtx_insn *after = NEXT_INSN (trial);
   int has_barrier = 0;
   rtx note, seq, tem;
   int probability;
@@ -3552,7 +3552,7 @@ try_split (rtx pat, rtx trial, int last)
 
   /* We're not good at redistributing frame information.  */
   if (RTX_FRAME_RELATED_P (trial))
-return trial;
+return as_a <rtx_insn *> (trial);
 
   if (any_condjump_p (trial)
      && (note = find_reg_note (trial, REG_BR_PROB, 0)))
@@ -3572,7 +3572,7 @@ try_split (rtx pat, rtx trial, int last)
 }
 
   if (!seq)
-return trial;
+return as_a <rtx_insn *> (trial);
 
   /* Avoid infinite loop if any insn of the result matches
  the original pattern.  */
@@ -3581,7 +3581,7 @@ try_split (rtx pat, rtx trial, int last)
 {
   if (INSN_P (insn_last)
	  && rtx_equal_p (PATTERN (insn_last), pat))
-   return trial;
+   return as_a <rtx_insn *> (trial);
   if (!NEXT_INSN (insn_last))
break;
   insn_last = NEXT_INSN (insn_last);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index a97a81e..f28a62a 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2442,7 +2442,7 @@ extern rtx delete_related_insns (rtx);
 extern rtx *find_constant_term_loc (rtx *);
 
 /* In emit-rtl.c  */
-extern rtx try_split (rtx, rtx, int);
+extern rtx_insn *try_split (rtx, rtx, int);
 extern int split_branch_probability;
 
 /* In unknown file  */
-- 
1.8.5.3



[PATCH 048/236] alias.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* alias.c (init_alias_analysis): Strengthen local insn from rtx
to rtx_insn *.
---
 gcc/alias.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/gcc/alias.c b/gcc/alias.c
index 0246dd7..903840c 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2840,7 +2840,8 @@ init_alias_analysis (void)
   int changed, pass;
   int i;
   unsigned int ui;
-  rtx insn, val;
+  rtx_insn *insn;
+  rtx val;
   int rpo_cnt;
   int *rpo;
 
-- 
1.8.5.3



[PATCH 047/236] PHASE 2: Per-file commits in main source directory

2014-08-06 Thread David Malcolm
This commit is a placeholder for me when rebasing, to help organize the
patch kit.

/
* rtx-classes-status.txt: Update
---
 rtx-classes-status.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/rtx-classes-status.txt b/rtx-classes-status.txt
index 52567e7..e350eaf 100644
--- a/rtx-classes-status.txt
+++ b/rtx-classes-status.txt
@@ -1,8 +1,8 @@
 git rebase has a tendency to delete empty commits, so this dummy file
 exists to be modified by marker commits.
 
-Phase 1: initial scaffolding commits:IN PROGRESS
-Phase 2: per-file commits in main source dir:  TODO
+Phase 1: initial scaffolding commits:DONE
+Phase 2: per-file commits in main source dir:  IN PROGRESS
 Phase 3: per-file commits within config subdirs: TODO
 Phase 4: removal of scaffolding: TODO
 Phase 5: additional rtx_def subclasses:TODO
-- 
1.8.5.3



[PATCH 054/236] calls.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* calls.c (emit_call_1): Strengthen local call_insn from rtx to
rtx_insn *.
(internal_arg_pointer_exp_state): Likewise for field scan_start.
(internal_arg_pointer_based_exp_scan): Likewise for locals insn,
scan_start.
(load_register_parameters): Likewise for local before_arg.
(check_sibcall_argument_overlap): Likewise for param insn.
(expand_call): Likewise for locals normal_call_insns,
tail_call_insns, insns, before_call, after_args,
before_arg, last, prev.  Strengthen one of the last from
rtx to rtx_call_insn *.
(fixup_tail_calls): Strengthen local insn from rtx to
rtx_insn *.
(emit_library_call_value_1): Likewise for locals before_call and
last.
---
 gcc/calls.c | 47 +--
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/gcc/calls.c b/gcc/calls.c
index 78fe7d8..a3b9993 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -153,7 +153,7 @@ static rtx emit_library_call_value_1 (int, rtx, rtx, enum 
libcall_type,
  enum machine_mode, int, va_list);
 static int special_function_p (const_tree, int);
 static int check_sibcall_argument_overlap_1 (rtx);
-static int check_sibcall_argument_overlap (rtx, struct arg_data *, int);
+static int check_sibcall_argument_overlap (rtx_insn *, struct arg_data *, int);
 
 static int combine_pending_stack_adjustment_and_call (int, struct args_size *,
  unsigned int);
@@ -261,7 +261,8 @@ emit_call_1 (rtx funexp, tree fntree ATTRIBUTE_UNUSED, tree 
fndecl ATTRIBUTE_UNU
 cumulative_args_t args_so_far ATTRIBUTE_UNUSED)
 {
   rtx rounded_stack_size_rtx = GEN_INT (rounded_stack_size);
-  rtx call_insn, call, funmem;
+  rtx_insn *call_insn;
+  rtx call, funmem;
   int already_popped = 0;
   HOST_WIDE_INT n_popped
 = targetm.calls.return_pops_args (fndecl, funtype, stack_size);
@@ -1685,7 +1686,7 @@ static struct
 {
   /* Last insn that has been scanned by internal_arg_pointer_based_exp_scan,
  or NULL_RTX if none has been scanned yet.  */
-  rtx scan_start;
+  rtx_insn *scan_start;
   /* Vector indexed by REGNO - FIRST_PSEUDO_REGISTER, recording if a pseudo is
   based on crtl->args.internal_arg_pointer.  The element is NULL_RTX if the
  pseudo isn't based on it, a CONST_INT offset if the pseudo is based on it
@@ -1704,7 +1705,7 @@ static rtx internal_arg_pointer_based_exp (rtx, bool);
 static void
 internal_arg_pointer_based_exp_scan (void)
 {
-  rtx insn, scan_start = internal_arg_pointer_exp_state.scan_start;
+  rtx_insn *insn, *scan_start = internal_arg_pointer_exp_state.scan_start;
 
   if (scan_start == NULL_RTX)
 insn = get_insns ();
@@ -1870,7 +1871,7 @@ load_register_parameters (struct arg_data *args, int 
num_actuals,
  int partial = args[i].partial;
  int nregs;
  int size = 0;
- rtx before_arg = get_last_insn ();
+ rtx_insn *before_arg = get_last_insn ();
  /* Set non-negative if we must move a word at a time, even if
 just one word (e.g, partial == 4 && mode == DFmode).  Set
 to -1 if we just use a normal move insn.  This value can be
@@ -2101,7 +2102,8 @@ check_sibcall_argument_overlap_1 (rtx x)
slots, zero otherwise.  */
 
 static int
-check_sibcall_argument_overlap (rtx insn, struct arg_data *arg, int 
mark_stored_args_map)
+check_sibcall_argument_overlap (rtx_insn *insn, struct arg_data *arg,
+   int mark_stored_args_map)
 {
   int low, high;
 
@@ -2192,9 +2194,9 @@ expand_call (tree exp, rtx target, int ignore)
   /* RTX for the function to be called.  */
   rtx funexp;
   /* Sequence of insns to perform a normal call.  */
-  rtx normal_call_insns = NULL_RTX;
+  rtx_insn *normal_call_insns = NULL;
   /* Sequence of insns to perform a tail call.  */
-  rtx tail_call_insns = NULL_RTX;
+  rtx_insn *tail_call_insns = NULL;
   /* Data type of the function.  */
   tree funtype;
   tree type_arg_types;
@@ -2660,8 +2662,8 @@ expand_call (tree exp, rtx target, int ignore)
 recursion call can be ignored if we indeed use the tail
 call expansion.  */
   saved_pending_stack_adjust save;
-  rtx insns;
-  rtx before_call, next_arg_reg, after_args;
+  rtx_insn *insns, *before_call, *after_args;
+  rtx next_arg_reg;
 
   if (pass == 0)
{
@@ -3030,7 +3032,7 @@ expand_call (tree exp, rtx target, int ignore)
{
  if (args[i].reg == 0 || args[i].pass_on_stack)
{
- rtx before_arg = get_last_insn ();
+ rtx_insn *before_arg = get_last_insn ();
 
  /* We don't allow passing huge (> 2^30 B) arguments
 by value.  It would cause an overflow later on.  */
@@ -3070,7 +3072,7 @@ expand_call (tree exp, rtx target, int ignore)
for (i = 0; i < num_actuals; i++)
  if 

[PATCH 058/236] cfgloop.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* cfgloop.c (loop_exits_from_bb_p): Strengthen local insn from
rtx to rtx_insn *.
---
 gcc/cfgloop.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 73f79ef..6d1fe8d 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -1736,7 +1736,7 @@ loop_exits_from_bb_p (struct loop *loop, basic_block bb)
 location_t
 get_loop_location (struct loop *loop)
 {
-  rtx insn = NULL;
+  rtx_insn *insn = NULL;
   struct niter_desc *desc = NULL;
   edge exit;
 
-- 
1.8.5.3



[PATCH 056/236] cfgbuild.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* cfgbuild.c (make_edges): Strengthen local insn from rtx to
rtx_insn *.
(purge_dead_tablejump_edges): Likewise.
(find_bb_boundaries): Likewise for locals insn, end,
flow_transfer_insn.
---
 gcc/cfgbuild.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/gcc/cfgbuild.c b/gcc/cfgbuild.c
index 848e13f..8bbf325 100644
--- a/gcc/cfgbuild.c
+++ b/gcc/cfgbuild.c
@@ -218,7 +218,8 @@ make_edges (basic_block min, basic_block max, int update_p)
 
   FOR_BB_BETWEEN (bb, min, max->next_bb, next_bb)
 {
-  rtx insn, x;
+  rtx_insn *insn;
+  rtx x;
   enum rtx_code code;
   edge e;
   edge_iterator ei;
@@ -399,7 +400,8 @@ mark_tablejump_edge (rtx label)
 static void
 purge_dead_tablejump_edges (basic_block bb, rtx table)
 {
-  rtx insn = BB_END (bb), tmp;
+  rtx_insn *insn = BB_END (bb);
+  rtx tmp;
   rtvec vec;
   int j;
   edge_iterator ei;
@@ -443,10 +445,10 @@ static void
 find_bb_boundaries (basic_block bb)
 {
   basic_block orig_bb = bb;
-  rtx insn = BB_HEAD (bb);
-  rtx end = BB_END (bb), x;
+  rtx_insn *insn = BB_HEAD (bb);
+  rtx_insn *end = BB_END (bb), *x;
   rtx_jump_table_data *table;
-  rtx flow_transfer_insn = NULL_RTX;
+  rtx_insn *flow_transfer_insn = NULL;
   edge fallthru = NULL;
 
   if (insn == BB_END (bb))
@@ -480,7 +482,7 @@ find_bb_boundaries (basic_block bb)
 
  bb = fallthru->dest;
  remove_edge (fallthru);
- flow_transfer_insn = NULL_RTX;
+ flow_transfer_insn = NULL;
  if (code == CODE_LABEL && LABEL_ALT_ENTRY_P (insn))
make_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun), bb, 0);
}
-- 
1.8.5.3



[PATCH 066/236] dce.c: Use rtx subclasses

2014-08-06 Thread David Malcolm
gcc/
* dce.c (worklist): Strengthen from vec<rtx> to vec<rtx_insn *>.
(deletable_insn_p): Strengthen param insn from rtx to
rtx_insn *.  Add checked cast to rtx_call_insn when invoking
find_call_stack_args, since this is guarded by CALL_P (insn).
(marked_insn_p): Strengthen param insn from rtx to
rtx_insn *.
(mark_insn): Likewise.  Add checked cast to rtx_call_insn when
invoking find_call_stack_args, since this is guarded by
CALL_P (insn).
(mark_nonreg_stores_1): Strengthen cast of data from rtx to
rtx_insn *; we know this is an insn since this was called by
mark_nonreg_stores.
(mark_nonreg_stores_2): Likewise.
(mark_nonreg_stores): Strengthen param insn from rtx to
rtx_insn *.
(find_call_stack_args): Strengthen param call_insn from rtx to
rtx_call_insn *; strengthen locals insn and prev_insn from rtx
to rtx_insn *.
(remove_reg_equal_equiv_notes_for_defs): Strengthen param insn
from rtx to rtx_insn *.
(reset_unmarked_insns_debug_uses): Likewise for locals insn,
next, ref_insn.
(delete_unmarked_insns): Likewise for locals insn, next.
(prescan_insns_for_dce): Likewise for locals insn, prev.
(mark_reg_dependencies): Likewise for param insn.
(rest_of_handle_ud_dce): Likewise for local insn.
(word_dce_process_block): Likewise.
(dce_process_block): Likewise.
---
 gcc/dce.c | 46 --
 1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/gcc/dce.c b/gcc/dce.c
index 0e24577..921c9d9 100644
--- a/gcc/dce.c
+++ b/gcc/dce.c
@@ -51,7 +51,7 @@ static bool can_alter_cfg = false;
 
 /* Instructions that have been marked but whose dependencies have not
yet been processed.  */
-static vec<rtx> worklist;
+static vec<rtx_insn *> worklist;
 
 /* Bitmap of instructions marked as needed indexed by INSN_UID.  */
 static sbitmap marked;
@@ -60,7 +60,7 @@ static sbitmap marked;
 static bitmap_obstack dce_blocks_bitmap_obstack;
 static bitmap_obstack dce_tmp_bitmap_obstack;
 
-static bool find_call_stack_args (rtx, bool, bool, bitmap);
+static bool find_call_stack_args (rtx_call_insn *, bool, bool, bitmap);
 
 /* A subroutine for which BODY is part of the instruction being tested;
either the top-level pattern, or an element of a PARALLEL.  The
@@ -92,7 +92,7 @@ deletable_insn_p_1 (rtx body)
the DCE pass.  */
 
 static bool
-deletable_insn_p (rtx insn, bool fast, bitmap arg_stores)
+deletable_insn_p (rtx_insn *insn, bool fast, bitmap arg_stores)
 {
   rtx body, x;
   int i;
@@ -109,7 +109,8 @@ deletable_insn_p (rtx insn, bool fast, bitmap arg_stores)
  infinite loop.  */
       && (RTL_CONST_OR_PURE_CALL_P (insn)
	   && !RTL_LOOPING_CONST_OR_PURE_CALL_P (insn)))
-return find_call_stack_args (insn, false, fast, arg_stores);
+    return find_call_stack_args (as_a <rtx_call_insn *> (insn), false,
+fast, arg_stores);
 
   /* Don't delete jumps, notes and the like.  */
   if (!NONJUMP_INSN_P (insn))
@@ -163,7 +164,7 @@ deletable_insn_p (rtx insn, bool fast, bitmap arg_stores)
 /* Return true if INSN has been marked as needed.  */
 
 static inline int
-marked_insn_p (rtx insn)
+marked_insn_p (rtx_insn *insn)
 {
   /* Artificial defs are always needed and they do not have an insn.
  We should never see them here.  */
@@ -176,7 +177,7 @@ marked_insn_p (rtx insn)
the worklist.  */
 
 static void
-mark_insn (rtx insn, bool fast)
+mark_insn (rtx_insn *insn, bool fast)
 {
   if (!marked_insn_p (insn))
 {
@@ -190,7 +191,7 @@ mark_insn (rtx insn, bool fast)
	  && !SIBLING_CALL_P (insn)
	  && (RTL_CONST_OR_PURE_CALL_P (insn)
	      && !RTL_LOOPING_CONST_OR_PURE_CALL_P (insn)))
-   find_call_stack_args (insn, true, fast, NULL);
+	find_call_stack_args (as_a <rtx_call_insn *> (insn), true, fast, NULL);
 }
 }
 
@@ -202,7 +203,7 @@ static void
 mark_nonreg_stores_1 (rtx dest, const_rtx pattern, void *data)
 {
   if (GET_CODE (pattern) != CLOBBER && !REG_P (dest))
-mark_insn ((rtx) data, true);
+mark_insn ((rtx_insn *) data, true);
 }
 
 
@@ -213,14 +214,14 @@ static void
 mark_nonreg_stores_2 (rtx dest, const_rtx pattern, void *data)
 {
   if (GET_CODE (pattern) != CLOBBER && !REG_P (dest))
-mark_insn ((rtx) data, false);
+mark_insn ((rtx_insn *) data, false);
 }
 
 
 /* Mark INSN if BODY stores to a non-register destination.  */
 
 static void
-mark_nonreg_stores (rtx body, rtx insn, bool fast)
+mark_nonreg_stores (rtx body, rtx_insn *insn, bool fast)
 {
   if (fast)
 note_stores (body, mark_nonreg_stores_1, insn);
@@ -257,10 +258,11 @@ check_argument_store (rtx mem, HOST_WIDE_INT off, 
HOST_WIDE_INT min_sp_off,
going to be marked called again with DO_MARK true.  */
 
 static bool
-find_call_stack_args (rtx call_insn, bool do_mark, bool fast,
+find_call_stack_args 

[PATCH 063/236] compare-elim.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* compare-elim.c (struct comparison_use): Strengthen field insn
from rtx to rtx_insn *.
(struct comparison): Likewise, also for field prev_clobber.
(conforming_compare): Likewise for param insn.
(arithmetic_flags_clobber_p): Likewise.
(find_flags_uses_in_insn): Likewise.
(find_comparison_dom_walker::before_dom_children): Likewise for
locals insn, next, last_clobber.
(try_eliminate_compare): Likewise for locals insn, bb_head.
---
 gcc/compare-elim.c | 19 ++-
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/gcc/compare-elim.c b/gcc/compare-elim.c
index a373799..979c12b 100644
--- a/gcc/compare-elim.c
+++ b/gcc/compare-elim.c
@@ -81,7 +81,7 @@ along with GCC; see the file COPYING3.  If not see
 struct comparison_use
 {
   /* The instruction in which the result of the compare is used.  */
-  rtx insn;
+  rtx_insn *insn;
   /* The location of the flags register within the use.  */
   rtx *loc;
   /* The comparison code applied against the flags register.  */
@@ -91,10 +91,10 @@ struct comparison_use
 struct comparison
 {
   /* The comparison instruction.  */
-  rtx insn;
+  rtx_insn *insn;
 
   /* The insn prior to the comparison insn that clobbers the flags.  */
-  rtx prev_clobber;
+  rtx_insn *prev_clobber;
 
   /* The two values being compared.  These will be either REGs or
  constants.  */
static vec<comparison_struct_p> all_compares;
the rtx for the COMPARE itself.  */
 
 static rtx
-conforming_compare (rtx insn)
+conforming_compare (rtx_insn *insn)
 {
   rtx set, src, dest;
 
@@ -156,7 +156,7 @@ conforming_compare (rtx insn)
correct.  The term arithmetic may be somewhat misleading...  */
 
 static bool
-arithmetic_flags_clobber_p (rtx insn)
+arithmetic_flags_clobber_p (rtx_insn *insn)
 {
   rtx pat, x;
 
@@ -191,7 +191,7 @@ arithmetic_flags_clobber_p (rtx insn)
it in CMP; otherwise indicate that we've missed a use.  */
 
 static void
-find_flags_uses_in_insn (struct comparison *cmp, rtx insn)
+find_flags_uses_in_insn (struct comparison *cmp, rtx_insn *insn)
 {
   df_ref *use_rec, use;
 
@@ -260,7 +260,7 @@ void
 find_comparison_dom_walker::before_dom_children (basic_block bb)
 {
   struct comparison *last_cmp;
-  rtx insn, next, last_clobber;
+  rtx_insn *insn, *next, *last_clobber;
   bool last_cmp_valid;
   bitmap killed;
 
@@ -291,7 +291,7 @@ find_comparison_dom_walker::before_dom_children 
(basic_block bb)
 {
   rtx src;
 
-  next = (insn == BB_END (bb) ? NULL_RTX : NEXT_INSN (insn));
+  next = (insn == BB_END (bb) ? NULL : NEXT_INSN (insn));
   if (!NONDEBUG_INSN_P (insn))
continue;
 
@@ -490,7 +490,8 @@ maybe_select_cc_mode (struct comparison *cmp, rtx a 
ATTRIBUTE_UNUSED,
 static bool
 try_eliminate_compare (struct comparison *cmp)
 {
-  rtx x, insn, bb_head, flags, in_a, cmp_src;
+  rtx_insn *insn, *bb_head;
+  rtx x, flags, in_a, cmp_src;
 
   /* We must have found an interesting clobber preceding the compare.  */
   if (cmp->prev_clobber == NULL)
-- 
1.8.5.3



[PATCH 068/236] df-*.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* df-core.c (df_bb_regno_first_def_find): Strengthen local insn
from rtx to rtx_insn *.
(df_bb_regno_last_def_find): Likewise.

* df-problems.c (df_rd_bb_local_compute): Likewise.
(df_lr_bb_local_compute): Likewise.
(df_live_bb_local_compute): Likewise.
(df_chain_remove_problem): Likewise.
(df_chain_create_bb): Likewise.
(df_word_lr_bb_local_compute): Likewise.
(df_remove_dead_eq_notes): Likewise for param insn.
(df_note_bb_compute): Likewise for local insn.
(simulate_backwards_to_point): Likewise.
(df_md_bb_local_compute): Likewise.

* df-scan.c (df_scan_free_bb_info): Likewise.
(df_scan_start_dump): Likewise.
(df_scan_start_block): Likewise.
(df_install_ref_incremental): Likewise for local insn.
(df_insn_rescan_all): Likewise.
(df_reorganize_refs_by_reg_by_insn): Likewise.
(df_reorganize_refs_by_insn_bb): Likewise.
(df_recompute_luids): Likewise.
(df_bb_refs_record): Likewise.
(df_update_entry_exit_and_calls): Likewise.
(df_bb_verify): Likewise.
---
 gcc/df-core.c |  4 ++--
 gcc/df-problems.c | 20 ++--
 gcc/df-scan.c | 24 
 3 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/gcc/df-core.c b/gcc/df-core.c
index 0dd8cc4..0267bde 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1946,7 +1946,7 @@ df_set_clean_cfg (void)
 df_ref
 df_bb_regno_first_def_find (basic_block bb, unsigned int regno)
 {
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   unsigned int uid;
 
@@ -1972,7 +1972,7 @@ df_bb_regno_first_def_find (basic_block bb, unsigned int 
regno)
 df_ref
 df_bb_regno_last_def_find (basic_block bb, unsigned int regno)
 {
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   unsigned int uid;
 
diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index 77f8c99..47902f7 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -355,7 +355,7 @@ df_rd_bb_local_compute (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_rd_bb_info *bb_info = df_rd_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
 
   bitmap_clear (seen_in_block);
   bitmap_clear (seen_in_insn);
@@ -835,7 +835,7 @@ df_lr_bb_local_compute (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_lr_bb_info *bb_info = df_lr_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
 
@@ -1462,7 +1462,7 @@ df_live_bb_local_compute (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_live_bb_info *bb_info = df_live_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   int luid = 0;
 
@@ -1982,7 +1982,7 @@ df_chain_remove_problem (void)
 
   EXECUTE_IF_SET_IN_BITMAP (df_chain->out_of_date_transfer_functions, 0, 
bb_index, bi)
 {
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
@@ -2105,7 +2105,7 @@ df_chain_create_bb (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_rd_bb_info *bb_info = df_rd_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
   bitmap_head cpy;
 
   bitmap_initialize (cpy, bitmap_default_obstack);
@@ -2531,7 +2531,7 @@ df_word_lr_bb_local_compute (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_word_lr_bb_info *bb_info = df_word_lr_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
 
@@ -2883,7 +2883,7 @@ df_remove_dead_and_unused_notes (rtx insn)
as the bitmap of currently live registers.  */
 
 static void
-df_remove_dead_eq_notes (rtx insn, bitmap live)
+df_remove_dead_eq_notes (rtx_insn *insn, bitmap live)
 {
   rtx *pprev = &REG_NOTES (insn);
   rtx link = *pprev;
@@ -3153,7 +3153,7 @@ df_note_bb_compute (unsigned int bb_index,
bitmap live, bitmap do_not_gen, bitmap artificial_uses)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
   struct dead_debug_local debug;
@@ -3784,7 +3784,7 @@ find_memory_stores (rtx x, const_rtx pat ATTRIBUTE_UNUSED,
 void
 simulate_backwards_to_point (basic_block bb, regset live, rtx point)
 {
-  rtx insn;
+  rtx_insn *insn;
   bitmap_copy (live, df_get_live_out (bb));
   df_simulate_initialize_backwards (bb, live);
 
@@ -4271,7 +4271,7 @@ df_md_bb_local_compute (unsigned int bb_index)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
   struct df_md_bb_info *bb_info = df_md_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
 
   /* Artificials are only hard regs.  */
   if (!(df-changeable_flags  DF_NO_HARD_REGS))
diff --git a/gcc/df-scan.c b/gcc/df-scan.c
index 992d0af..28196b3 

[PATCH 049/236] asan.c: strengthen some rtx locals

2014-08-06 Thread David Malcolm
This is an example of strengthening rtx.  For example, we
now have strong enough types provided by the existing scaffolding to
turn insn and insns in this:

  for (insn = insns; insn; insn = NEXT_INSN (insn))

from plain rtx into rtx_insn *.
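
In other words, a walk over a generated sequence can now be fully insn-typed.
The sketch below is only an illustration of that shape, written for this
note under the assumption that NEXT_INSN already returns rtx_insn * at this
point in the series; the helper itself is made up, not taken from asan.c.

/* Hypothetical example of an insn-typed walk, for illustration only.
   With NEXT_INSN returning rtx_insn *, both the sequence pointer and
   the iteration variable can be rtx_insn * with no casts.  */
static void
walk_sequence (rtx_insn *insns)
{
  for (rtx_insn *insn = insns; insn; insn = NEXT_INSN (insn))
    if (JUMP_P (insn))
      {
	/* Jump-specific handling would go here.  */
      }
}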

gcc/
* asan.c (asan_clear_shadow): Strengthen locals insn, insns
and jump from rtx to rtx_insn *.  Strengthen local top_label
from rtx to rtx_code_label *.
---
 gcc/asan.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/gcc/asan.c b/gcc/asan.c
index 11627c7..82e601d 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -905,7 +905,9 @@ asan_shadow_cst (unsigned char shadow_bytes[4])
 static void
 asan_clear_shadow (rtx shadow_mem, HOST_WIDE_INT len)
 {
-  rtx insn, insns, top_label, end, addr, tmp, jump;
+  rtx_insn *insn, *insns, *jump;
+  rtx_code_label *top_label;
+  rtx end, addr, tmp;
 
   start_sequence ();
   clear_storage (shadow_mem, GEN_INT (len), BLOCK_OP_NORMAL);
-- 
1.8.5.3



[PATCH 077/236] fwprop.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* fwprop.c (single_def_use_dom_walker::before_dom_children):
Strengthen local insn from rtx to rtx_insn *.
(use_killed_between): Likewise for param target_insn.
(all_uses_available_at): Likewise for param target_insn and
local next.
(update_df_init): Likewise for params def_insn, insn.
(update_df): Likewise for param insn.
(try_fwprop_subst): Likewise for param def_insn and local
insn.
(free_load_extend): Likewise for param insn.
(forward_propagate_subreg): Likewise for param def_insn and
local use_insn.
(forward_propagate_asm): Likewise for param def_insn and local
use_insn.
(forward_propagate_and_simplify): Likewise for param def_insn
and local use_insn.
(forward_propagate_into): Likewise for locals def_insn and
use_insn.
---
 gcc/fwprop.c | 36 
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index 0179bf1..9a1f085 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -220,7 +220,7 @@ single_def_use_dom_walker::before_dom_children (basic_block 
bb)
   int bb_index = bb->index;
   struct df_md_bb_info *md_bb_info = df_md_get_bb_info (bb_index);
   struct df_lr_bb_info *lr_bb_info = df_lr_get_bb_info (bb_index);
-  rtx insn;
+  rtx_insn *insn;
 
   bitmap_copy (local_md, md_bb_info->in);
   bitmap_copy (local_lr, lr_bb_info->in);
@@ -724,7 +724,7 @@ local_ref_killed_between_p (df_ref ref, rtx from, rtx to)
  we check if the definition is killed after DEF_INSN or before
  TARGET_INSN insn, in their respective basic blocks.  */
 static bool
-use_killed_between (df_ref use, rtx def_insn, rtx target_insn)
+use_killed_between (df_ref use, rtx_insn *def_insn, rtx_insn *target_insn)
 {
   basic_block def_bb = BLOCK_FOR_INSN (def_insn);
   basic_block target_bb = BLOCK_FOR_INSN (target_insn);
@@ -788,12 +788,12 @@ use_killed_between (df_ref use, rtx def_insn, rtx 
target_insn)
would require full computation of available expressions;
we check only restricted conditions, see use_killed_between.  */
 static bool
-all_uses_available_at (rtx def_insn, rtx target_insn)
+all_uses_available_at (rtx_insn *def_insn, rtx_insn *target_insn)
 {
   df_ref *use_rec;
   struct df_insn_info *insn_info = DF_INSN_INFO_GET (def_insn);
   rtx def_set = single_set (def_insn);
-  rtx next;
+  rtx_insn *next;
 
   gcc_assert (def_set);
 
@@ -883,7 +883,7 @@ register_active_defs (df_ref *use_rec)
I'm not doing this yet, though.  */
 
 static void
-update_df_init (rtx def_insn, rtx insn)
+update_df_init (rtx_insn *def_insn, rtx_insn *insn)
 {
 #ifdef ENABLE_CHECKING
   sparseset_clear (active_defs_check);
@@ -921,7 +921,7 @@ update_uses (df_ref *use_rec)
uses if NOTES_ONLY is true.  */
 
 static void
-update_df (rtx insn, rtx note)
+update_df (rtx_insn *insn, rtx note)
 {
   struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
 
@@ -948,9 +948,10 @@ update_df (rtx insn, rtx note)
performed.  */
 
 static bool
-try_fwprop_subst (df_ref use, rtx *loc, rtx new_rtx, rtx def_insn, bool 
set_reg_equal)
+try_fwprop_subst (df_ref use, rtx *loc, rtx new_rtx, rtx_insn *def_insn,
+ bool set_reg_equal)
 {
-  rtx insn = DF_REF_INSN (use);
+  rtx_insn *insn = DF_REF_INSN (use);
   rtx set = single_set (insn);
   rtx note = NULL_RTX;
   bool speed = optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn));
@@ -1031,7 +1032,7 @@ try_fwprop_subst (df_ref use, rtx *loc, rtx new_rtx, rtx 
def_insn, bool set_reg_
load from memory.  */
 
 static bool
-free_load_extend (rtx src, rtx insn)
+free_load_extend (rtx src, rtx_insn *insn)
 {
   rtx reg;
   df_ref *use_vec;
@@ -1077,10 +1078,11 @@ free_load_extend (rtx src, rtx insn)
 /* If USE is a subreg, see if it can be replaced by a pseudo.  */
 
 static bool
-forward_propagate_subreg (df_ref use, rtx def_insn, rtx def_set)
+forward_propagate_subreg (df_ref use, rtx_insn *def_insn, rtx def_set)
 {
   rtx use_reg = DF_REF_REG (use);
-  rtx use_insn, src;
+  rtx_insn *use_insn;
+  rtx src;
 
   /* Only consider subregs... */
   enum machine_mode use_mode = GET_MODE (use_reg);
@@ -1147,9 +1149,10 @@ forward_propagate_subreg (df_ref use, rtx def_insn, rtx 
def_set)
 /* Try to replace USE with SRC (defined in DEF_INSN) in __asm.  */
 
 static bool
-forward_propagate_asm (df_ref use, rtx def_insn, rtx def_set, rtx reg)
+forward_propagate_asm (df_ref use, rtx_insn *def_insn, rtx def_set, rtx reg)
 {
-  rtx use_insn = DF_REF_INSN (use), src, use_pat, asm_operands, new_rtx, *loc;
+  rtx_insn *use_insn = DF_REF_INSN (use);
+  rtx src, use_pat, asm_operands, new_rtx, *loc;
   int speed_p, i;
   df_ref *use_vec;
 
@@ -1224,9 +1227,9 @@ forward_propagate_asm (df_ref use, rtx def_insn, rtx 
def_set, rtx reg)
result.  */
 
 static bool
-forward_propagate_and_simplify (df_ref use, rtx def_insn, rtx def_set)
+forward_propagate_and_simplify (df_ref use, rtx_insn 

[PATCH 080/236] haifa-sched.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* haifa-sched.c (bb_header): Strengthen from rtx * to rtx_insn **.
(add_delay_dependencies): Strengthen local pro from rtx to
rtx_insn *.
(recompute_todo_spec): Likewise.
(dep_cost_1): Likewise for locals insn, used.
(schedule_insn): Likewise for local dbg.
(schedule_insn): Likewise for locals pro, next.
(unschedule_insns_until): Likewise for local con.
(restore_pattern): Likewise for local next.
(estimate_insn_tick): Likewise for local pro.
(resolve_dependencies): Likewise for local next.
(fix_inter_tick): Likewise.
(fix_tick_ready): Likewise for local pro.
(add_to_speculative_block): Likewise for locals check, twin,
pro.
(sched_extend_bb): Likewise for locals end, insn.
(init_before_recovery): Likewise for local x.
(sched_create_recovery_block): Likewise for local barrier.
(create_check_block_twin): Likewise for local pro.
(fix_recovery_deps): Likewise for locals note, insn, jump,
consumer.
(unlink_bb_notes): Update for change to type of bb_header.
Strengthen locals prev, label, note, next from rtx to
rtx_insn *.
(clear_priorities): Likewise for local pro.
---
 gcc/haifa-sched.c | 60 ---
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 04a3576..fd46977 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -261,7 +261,7 @@ bool haifa_recovery_bb_ever_added_p;
 static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
 
 /* Array used in {unlink, restore}_bb_notes.  */
-static rtx *bb_header = 0;
+static rtx_insn **bb_header = 0;
 
 /* Basic block after which recovery blocks will be created.  */
 static basic_block before_recovery;
@@ -798,7 +798,7 @@ add_delay_dependencies (rtx insn)
 
   FOR_EACH_DEP (pair->i2, SD_LIST_BACK, sd_it, dep)
 {
-  rtx pro = DEP_PRO (dep);
+  rtx_insn *pro = DEP_PRO (dep);
   struct delay_pair *other_pair
= delay_htab_i2.find_with_hash (pro, htab_hash_pointer (pro));
   if (!other_pair || other_pair->stages)
@@ -1208,7 +1208,7 @@ recompute_todo_spec (rtx next, bool for_backtrack)
 
   FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
 {
-  rtx pro = DEP_PRO (dep);
+  rtx_insn *pro = DEP_PRO (dep);
   ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
 
   if (DEBUG_INSN_P (pro) && !DEBUG_INSN_P (next))
@@ -1414,8 +1414,8 @@ insn_cost (rtx insn)
 int
 dep_cost_1 (dep_t link, dw_t dw)
 {
-  rtx insn = DEP_PRO (link);
-  rtx used = DEP_CON (link);
+  rtx_insn *insn = DEP_PRO (link);
+  rtx_insn *used = DEP_CON (link);
   int cost;
 
   if (DEP_COST (link) != UNKNOWN_DEP_COST)
@@ -3787,7 +3787,7 @@ schedule_insn (rtx insn)
 for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
 sd_iterator_cond (sd_it, dep);)
   {
-   rtx dbg = DEP_PRO (dep);
+   rtx_insn *dbg = DEP_PRO (dep);
struct reg_use_data *use, *next;
 
	if (DEP_STATUS (dep) & DEP_CANCELLED)
@@ -3876,7 +3876,7 @@ schedule_insn (rtx insn)
sd_iterator_cond (sd_it, dep); sd_iterator_next (sd_it))
 {
   struct dep_replacement *desc = DEP_REPLACE (dep);
-  rtx pro = DEP_PRO (dep);
+  rtx_insn *pro = DEP_PRO (dep);
   if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED
	  && desc != NULL && desc->insn == pro)
apply_replacement (dep, false);
@@ -3886,7 +3886,7 @@ schedule_insn (rtx insn)
   for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
sd_iterator_cond (sd_it, dep);)
 {
-  rtx next = DEP_CON (dep);
+  rtx_insn *next = DEP_CON (dep);
   bool cancelled = (DEP_STATUS (dep) & DEP_CANCELLED) != 0;
 
   /* Resolve the dependence between INSN and NEXT.
@@ -4251,7 +4251,7 @@ unschedule_insns_until (rtx insn)
   for (sd_it = sd_iterator_start (last, SD_LIST_RES_FORW);
   sd_iterator_cond (sd_it, dep);)
{
- rtx con = DEP_CON (dep);
+ rtx_insn *con = DEP_CON (dep);
  sd_unresolve_dep (sd_it);
  if (!MUST_RECOMPUTE_SPEC_P (con))
{
@@ -4496,7 +4496,7 @@ apply_replacement (dep_t dep, bool immediately)
 static void
 restore_pattern (dep_t dep, bool immediately)
 {
-  rtx next = DEP_CON (dep);
+  rtx_insn *next = DEP_CON (dep);
   int tick = INSN_TICK (next);
 
   /* If we already scheduled the insn, the modified version is
@@ -4581,7 +4581,7 @@ estimate_insn_tick (bitmap processed, rtx insn, int 
budget)
 
   FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
 {
-  rtx pro = DEP_PRO (dep);
+  rtx_insn *pro = DEP_PRO (dep);
   int t;
 
      if (DEP_STATUS (dep) & DEP_CANCELLED)
@@ -4658,7 +4658,7 @@ resolve_dependencies (rtx insn)
   for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
sd_iterator_cond (sd_it, dep);)
 {
-  rtx next = DEP_CON (dep);
+  rtx_insn *next = DEP_CON (dep);
 
   

[PATCH 084/236] internal-fn.c: Use rtx_insn and rtx_code_label

2014-08-06 Thread David Malcolm
gcc/
* internal-fn.c (ubsan_expand_si_overflow_addsub_check):
Strengthen locals done_label, do_error from rtx to
rtx_code_label *.
(ubsan_expand_si_overflow_addsub_check): Strengthen local last
from rtx to rtx_insn *.  Strengthen local sub_check from rtx to
rtx_code_label *.
(ubsan_expand_si_overflow_neg_check): Likewise for locals
done_label, do_error to rtx_code_label * and local  last to
rtx_insn *.
(ubsan_expand_si_overflow_mul_check): Likewise for locals
done_label, do_error, large_op0, small_op0_large_op1,
one_small_one_large, both_ops_large, after_hipart_neg,
after_lopart_neg, do_overflow, hipart_different  to
rtx_code_label * and local  last to rtx_insn *.
---
 gcc/internal-fn.c | 33 ++---
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 68b2b66..ba97c10 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -167,7 +167,8 @@ ubsan_expand_si_overflow_addsub_check (tree_code code, 
gimple stmt)
 {
   rtx res, op0, op1;
   tree lhs, fn, arg0, arg1;
-  rtx done_label, do_error, target = NULL_RTX;
+  rtx_code_label *done_label, *do_error;
+  rtx target = NULL_RTX;
 
   lhs = gimple_call_lhs (stmt);
   arg0 = gimple_call_arg (stmt, 0);
@@ -187,7 +188,7 @@ ubsan_expand_si_overflow_addsub_check (tree_code code, 
gimple stmt)
   if (icode != CODE_FOR_nothing)
 {
   struct expand_operand ops[4];
-  rtx last = get_last_insn ();
+  rtx_insn *last = get_last_insn ();
 
   res = gen_reg_rtx (mode);
   create_output_operand (ops[0], res, mode);
@@ -213,7 +214,7 @@ ubsan_expand_si_overflow_addsub_check (tree_code code, 
gimple stmt)
 
   if (icode == CODE_FOR_nothing)
 {
-  rtx sub_check = gen_label_rtx ();
+  rtx_code_label *sub_check = gen_label_rtx ();
   int pos_neg = 3;
 
   /* Compute the operation.  On RTL level, the addition is always
@@ -315,7 +316,8 @@ ubsan_expand_si_overflow_neg_check (gimple stmt)
 {
   rtx res, op1;
   tree lhs, fn, arg1;
-  rtx done_label, do_error, target = NULL_RTX;
+  rtx_code_label *done_label, *do_error;
+  rtx target = NULL_RTX;
 
   lhs = gimple_call_lhs (stmt);
   arg1 = gimple_call_arg (stmt, 1);
@@ -333,7 +335,7 @@ ubsan_expand_si_overflow_neg_check (gimple stmt)
   if (icode != CODE_FOR_nothing)
 {
   struct expand_operand ops[3];
-  rtx last = get_last_insn ();
+  rtx_insn *last = get_last_insn ();
 
   res = gen_reg_rtx (mode);
   create_output_operand (ops[0], res, mode);
@@ -391,7 +393,8 @@ ubsan_expand_si_overflow_mul_check (gimple stmt)
 {
   rtx res, op0, op1;
   tree lhs, fn, arg0, arg1;
-  rtx done_label, do_error, target = NULL_RTX;
+  rtx_code_label *done_label, *do_error;
+  rtx target = NULL_RTX;
 
   lhs = gimple_call_lhs (stmt);
   arg0 = gimple_call_arg (stmt, 0);
@@ -411,7 +414,7 @@ ubsan_expand_si_overflow_mul_check (gimple stmt)
   if (icode != CODE_FOR_nothing)
 {
   struct expand_operand ops[4];
-  rtx last = get_last_insn ();
+  rtx_insn *last = get_last_insn ();
 
   res = gen_reg_rtx (mode);
   create_output_operand (ops[0], res, mode);
@@ -469,14 +472,14 @@ ubsan_expand_si_overflow_mul_check (gimple stmt)
   else if (hmode != BLKmode
	   && 2 * GET_MODE_PRECISION (hmode) == GET_MODE_PRECISION (mode))
{
- rtx large_op0 = gen_label_rtx ();
- rtx small_op0_large_op1 = gen_label_rtx ();
- rtx one_small_one_large = gen_label_rtx ();
- rtx both_ops_large = gen_label_rtx ();
- rtx after_hipart_neg = gen_label_rtx ();
- rtx after_lopart_neg = gen_label_rtx ();
- rtx do_overflow = gen_label_rtx ();
- rtx hipart_different = gen_label_rtx ();
+ rtx_code_label *large_op0 = gen_label_rtx ();
+ rtx_code_label *small_op0_large_op1 = gen_label_rtx ();
+ rtx_code_label *one_small_one_large = gen_label_rtx ();
+ rtx_code_label *both_ops_large = gen_label_rtx ();
+ rtx_code_label *after_hipart_neg = gen_label_rtx ();
+ rtx_code_label *after_lopart_neg = gen_label_rtx ();
+ rtx_code_label *do_overflow = gen_label_rtx ();
+ rtx_code_label *hipart_different = gen_label_rtx ();
 
  unsigned int hprec = GET_MODE_PRECISION (hmode);
  rtx hipart0 = expand_shift (RSHIFT_EXPR, mode, op0, hprec,
-- 
1.8.5.3



[PATCH 087/236] loop-doloop.c: Use rtx_insn in a few places

2014-08-06 Thread David Malcolm
gcc/
* loop-doloop.c (doloop_valid_p): Strengthen local insn from rtx
to rtx_insn *.
(add_test): Likewise for locals seq, jump.
(doloop_modify): Likewise for locals sequence, jump_insn.
---
 gcc/loop-doloop.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/gcc/loop-doloop.c b/gcc/loop-doloop.c
index 0e84393..42e7f70 100644
--- a/gcc/loop-doloop.c
+++ b/gcc/loop-doloop.c
@@ -261,7 +261,7 @@ static bool
 doloop_valid_p (struct loop *loop, struct niter_desc *desc)
 {
   basic_block *body = get_loop_body (loop), bb;
-  rtx insn;
+  rtx_insn *insn;
   unsigned i;
   bool result = true;
 
@@ -336,7 +336,8 @@ cleanup:
 static bool
 add_test (rtx cond, edge *e, basic_block dest)
 {
-  rtx seq, jump, label;
+  rtx_insn *seq, *jump;
+  rtx label;
   enum machine_mode mode;
   rtx op0 = XEXP (cond, 0), op1 = XEXP (cond, 1);
   enum rtx_code code = GET_CODE (cond);
@@ -401,8 +402,8 @@ doloop_modify (struct loop *loop, struct niter_desc *desc,
 {
   rtx counter_reg;
   rtx tmp, noloop = NULL_RTX;
-  rtx sequence;
-  rtx jump_insn;
+  rtx_insn *sequence;
+  rtx_insn *jump_insn;
   rtx jump_label;
   int nonneg = 0;
   bool increment_count;
-- 
1.8.5.3



[PATCH 099/236] predict.*: Use rtx_insn (also touches function.c and config/cris/cris.c)

2014-08-06 Thread David Malcolm
gcc/
* predict.h (predict_insn_def): Strengthen param insn from rtx
to rtx_insn *.

* function.c (stack_protect_epilogue): Add checked cast to
rtx_insn for now when invoking predict_insn_def.

* predict.c (predict_insn): Strengthen param insn from rtx to
rtx_insn *.
(predict_insn_def): Likewise.
(rtl_predict_edge): Likewise for local last_insn.
(can_predict_insn_p): Strengthen param insn from const_rtx to
const rtx_insn *.
(combine_predictions_for_insn): Strengthen param insn from rtx
to rtx_insn *.
(bb_estimate_probability_locally): Likewise for local last_insn.
(expensive_function_p): Likewise for local insn.

* config/cris/cris.c (cris_emit_trap_for_misalignment): Likewise for
local jmp, since this is used when invoking predict_insn_def.
---
 gcc/config/cris/cris.c |  3 ++-
 gcc/function.c |  2 +-
 gcc/predict.c  | 18 +-
 gcc/predict.h  |  2 +-
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/gcc/config/cris/cris.c b/gcc/config/cris/cris.c
index 194dd14..56adf45 100644
--- a/gcc/config/cris/cris.c
+++ b/gcc/config/cris/cris.c
@@ -1987,7 +1987,8 @@ cris_simple_epilogue (void)
 void
 cris_emit_trap_for_misalignment (rtx mem)
 {
-  rtx addr, reg, ok_label, andop, jmp;
+  rtx addr, reg, ok_label, andop;
+  rtx_insn *jmp;
   int natural_alignment;
   gcc_assert (MEM_P (mem));
 
diff --git a/gcc/function.c b/gcc/function.c
index b2c9d81..c5619e9 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -4675,7 +4675,7 @@ stack_protect_epilogue (void)
  except adding the prediction by hand.  */
   tmp = get_last_insn ();
   if (JUMP_P (tmp))
-predict_insn_def (tmp, PRED_NORETURN, TAKEN);
+    predict_insn_def (as_a <rtx_insn *> (tmp), PRED_NORETURN, TAKEN);
 
   expand_call (targetm.stack_protect_fail (), NULL_RTX, /*ignore=*/true);
   free_temp_slots ();
diff --git a/gcc/predict.c b/gcc/predict.c
index 55a645d..e08c982 100644
--- a/gcc/predict.c
+++ b/gcc/predict.c
@@ -74,11 +74,11 @@ along with GCC; see the file COPYING3.  If not see
 static sreal real_zero, real_one, real_almost_one, real_br_prob_base,
 real_inv_br_prob_base, real_one_half, real_bb_freq_max;
 
-static void combine_predictions_for_insn (rtx, basic_block);
+static void combine_predictions_for_insn (rtx_insn *, basic_block);
 static void dump_prediction (FILE *, enum br_predictor, int, basic_block, int);
 static void predict_paths_leading_to (basic_block, enum br_predictor, enum 
prediction);
 static void predict_paths_leading_to_edge (edge, enum br_predictor, enum 
prediction);
-static bool can_predict_insn_p (const_rtx);
+static bool can_predict_insn_p (const rtx_insn *);
 
 /* Information we hold about each branch predictor.
Filled using information from predict.def.  */
@@ -560,7 +560,7 @@ br_prob_note_reliable_p (const_rtx note)
 }
 
 static void
-predict_insn (rtx insn, enum br_predictor predictor, int probability)
+predict_insn (rtx_insn *insn, enum br_predictor predictor, int probability)
 {
   gcc_assert (any_condjump_p (insn));
   if (!flag_guess_branch_prob)
@@ -575,7 +575,7 @@ predict_insn (rtx insn, enum br_predictor predictor, int 
probability)
 /* Predict insn by given predictor.  */
 
 void
-predict_insn_def (rtx insn, enum br_predictor predictor,
+predict_insn_def (rtx_insn *insn, enum br_predictor predictor,
  enum prediction taken)
 {
int probability = predictor_info[(int) predictor].hitrate;
@@ -591,7 +591,7 @@ predict_insn_def (rtx insn, enum br_predictor predictor,
 void
 rtl_predict_edge (edge e, enum br_predictor predictor, int probability)
 {
-  rtx last_insn;
+  rtx_insn *last_insn;
   last_insn = BB_END (e->src);
 
   /* We can store the branch prediction information only about
@@ -680,7 +680,7 @@ clear_bb_predictions (basic_block bb)
At the moment we represent predictions only on conditional
jumps, not at computed jump or other complicated cases.  */
 static bool
-can_predict_insn_p (const_rtx insn)
+can_predict_insn_p (const rtx_insn *insn)
 {
   return (JUMP_P (insn)
	  && any_condjump_p (insn)
@@ -773,7 +773,7 @@ set_even_probabilities (basic_block bb)
note if not already present.  Remove now useless REG_BR_PRED notes.  */
 
 static void
-combine_predictions_for_insn (rtx insn, basic_block bb)
+combine_predictions_for_insn (rtx_insn *insn, basic_block bb)
 {
   rtx prob_note;
   rtx *pnote;
@@ -1668,7 +1668,7 @@ predict_loops (void)
 static void
 bb_estimate_probability_locally (basic_block bb)
 {
-  rtx last_insn = BB_END (bb);
+  rtx_insn *last_insn = BB_END (bb);
   rtx cond;
 
   if (! can_predict_insn_p (last_insn))
@@ -2891,7 +2891,7 @@ expensive_function_p (int threshold)
   limit = ENTRY_BLOCK_PTR_FOR_FN (cfun)->frequency * threshold;
   FOR_EACH_BB_FN (bb, cfun)
 {
-  rtx insn;
+  rtx_insn *insn;
 
   FOR_BB_INSNS (bb, insn)
if (active_insn_p 

[PATCH 092/236] lra: use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* lra-int.h (struct lra_insn_recog_data): Strengthen field insn
from rtx to rtx_insn *.
(lra_push_insn): Likewise for 1st param.
(lra_push_insn_and_update_insn_regno_info): Likewise.
(lra_pop_insn): Likewise for return type.
(lra_invalidate_insn_data): Likewise for 1st param.
(lra_set_insn_deleted): Likewise.
(lra_delete_dead_insn): Likewise.
(lra_process_new_insns): Likewise for first 3 params.
(lra_set_insn_recog_data): Likewise for 1st param.
(lra_update_insn_recog_data): Likewise.
(lra_set_used_insn_alternative): Likewise.
(lra_invalidate_insn_regno_info): Likewise.
(lra_update_insn_regno_info): Likewise.
(lra_former_scratch_operand_p): Likewise.
(lra_eliminate_regs_1): Likewise.
(lra_get_insn_recog_data): Likewise.

* lra-assigns.c (assign_by_spills): Strengthen local insn from
rtx to rtx_insn *.

* lra-coalesce.c (move_freq_compare_func): Likewise for locals
mv1 and mv2.
(substitute_within_insn): New.
(lra_coalesce): Strengthen locals mv, insn, next from rtx to
rtx_insn *.  Strengthen sorted_moves from rtx * to rtx_insn **.
Replace call to substitute with call to substitute_within_insn.

* lra-constraints.c (curr_insn): Strengthen from rtx to
rtx_insn *.
(get_equiv_with_elimination): Likewise for param insn.
(match_reload): Strengthen params before and after from rtx *
to rtx_insn **.
(emit_spill_move): Likewise for return type.  Add a checked cast
to rtx_insn * on result of gen_move_insn for now.
(check_and_process_move): Likewise for local before.  Replace
NULL_RTX with NULL when referring to insns.
(process_addr_reg): Strengthen params before and after from
rtx * to rtx_insn **.
(insert_move_for_subreg): Likewise.
(simplify_operand_subreg): Strengthen locals before and after
from rtx to rtx_insn *.
(process_address_1): Strengthen params before and after from
rtx * to rtx_insn **.  Strengthen locals insns, last_insn from
rtx to rtx_insn *.
(process_address): Strengthen params before and after from
rtx * to rtx_insn **.
(emit_inc): Strengthen local last from rtx to rtx_insn *.
(curr_insn_transform): Strengthen locals before and after
from rtx to rtx_insn *.  Replace NULL_RTX with NULL when referring
to insns.
(loc_equivalence_callback): Update cast of data, changing
resulting type from rtx to rtx_insn *.
(substitute_pseudo_within_insn): New.
(inherit_reload_reg): Strengthen param insn from rtx to
rtx_insn *; likewise for local new_insns.  Replace NULL_RTX with
NULL when referring to insns.  Add a checked cast to rtx_insn *
when using usage_insn to invoke lra_update_insn_regno_info.
(split_reg): Strengthen param insn from rtx to rtx_insn *;
likewise for locals restore, save.  Add checked casts to
rtx_insn * when using usage_insn to invoke
lra_update_insn_regno_info and lra_process_new_insns.  Replace
NULL_RTX with NULL when referring to insns.
(split_if_necessary): Strengthen param insn from rtx to
rtx_insn *.
(update_ebb_live_info): Likewise for params head, tail and local
prev_insn.
(get_last_insertion_point): Likewise for return type and local insn.
(get_live_on_other_edges): Likewise for local last.
(inherit_in_ebb): Likewise for params head, tail and locals
prev_insn, next_insn, restore.
(remove_inheritance_pseudos): Likewise for local prev_insn.
(undo_optional_reloads): Likewise for local insn.

* lra-eliminations.c (lra_eliminate_regs_1): Likewise for param
insn.
(lra_eliminate_regs): Replace NULL_RTX with NULL when referring to
insns.
(eliminate_regs_in_insn): Strengthen param insn from rtx to
rtx_insn *.
(spill_pseudos): Likewise for local insn.
(init_elimination): Likewise.
(process_insn_for_elimination): Likewise for param insn.

* lra-lives.c (curr_insn): Likewise.

* lra-spills.c (assign_spill_hard_regs): Likewise for local insn.
(remove_pseudos): Likewise for param insn.
(spill_pseudos): Likewise for local insn.
(lra_final_code_change): Likewise for locals insn, curr.

* lra.c (lra_invalidate_insn_data): Likewise for param insn.
(lra_set_insn_deleted): Likewise.
(lra_delete_dead_insn): Likewise, and for local prev.
(new_insn_reg): Likewise for param insn.
(lra_set_insn_recog_data): Likewise.
(lra_update_insn_recog_data): Likewise.
(lra_set_used_insn_alternative): Likewise.
(get_insn_freq): Likewise.
(invalidate_insn_data_regno_info): Likewise.
   

[PATCH 090/236] loop-unroll.c: Use rtx_insn (also touches basic-block.h)

2014-08-06 Thread David Malcolm
gcc/
* basic-block.h (basic_block split_edge_and_insert): Strengthen
param insns from rtx to rtx_insn *.

* loop-unroll.c (struct iv_to_split): Strengthen field insn from
rtx to rtx_insn *.
(struct iv_to_split): Likewise.
(loop_exit_at_end_p): Likewise for local insn.
(split_edge_and_insert): Likewise for param insns.
(compare_and_jump_seq): Likewise for return type, param cinsn,
and locals seq, jump.
(unroll_loop_runtime_iterations): Likewise for locals init_code,
branch_code; update invocations of compare_and_jump_seq to
eliminate NULL_RTX in favor of NULL.
(referenced_in_one_insn_in_loop_p): Strengthen local insn from
rtx to rtx_insn *.
(reset_debug_uses_in_loop): Likewise.
(analyze_insn_to_expand_var): Likewise for param insn.
(analyze_iv_to_split_insn): Likewise.
(analyze_insns_in_loop): Likewise for local insn.
(insert_base_initialization): Likewise for param
insn and local seq.
(split_iv): Likewise for param insn and local seq.
(expand_var_during_unrolling): Likewise for param insn.
(insert_var_expansion_initialization): Likewise for local seq.
(combine_var_copies_in_loop_exit): Likewise.
(combine_var_copies_in_loop_exit): Likewise for locals seq and
insn.
(maybe_strip_eq_note_for_split_iv): Likewise for param insn.
(apply_opt_in_copies): Likewise for locals insn, orig_insn,
next.
---
 gcc/basic-block.h |  2 +-
 gcc/loop-unroll.c | 53 +
 2 files changed, 30 insertions(+), 25 deletions(-)

diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index 172908d..18d3871 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -399,7 +399,7 @@ extern unsigned int free_bb_for_insn (void);
 extern void update_bb_for_insn (basic_block);
 
 extern void insert_insn_on_edge (rtx, edge);
-basic_block split_edge_and_insert (edge, rtx);
+basic_block split_edge_and_insert (edge, rtx_insn *);
 
 extern void commit_one_edge_insertion (edge e);
 extern void commit_edge_insertions (void);
diff --git a/gcc/loop-unroll.c b/gcc/loop-unroll.c
index c283900..67dbe8b 100644
--- a/gcc/loop-unroll.c
+++ b/gcc/loop-unroll.c
@@ -73,7 +73,7 @@ along with GCC; see the file COPYING3.  If not see
 
 struct iv_to_split
 {
-  rtx insn;/* The insn in that the induction variable occurs.  */
+  rtx_insn *insn;  /* The insn in that the induction variable occurs.  */
   rtx orig_var;/* The variable (register) for the IV before 
split.  */
   rtx base_var;/* The variable on that the values in the 
further
   iterations are based.  */
@@ -90,7 +90,7 @@ struct iv_to_split
 
 struct var_to_expand
 {
-  rtx insn;   /* The insn in that the variable expansion 
occurs.  */
+  rtx_insn *insn; /* The insn in that the variable expansion 
occurs.  */
   rtx reg; /* The accumulator which is expanded.  */
   vec<rtx> var_expansions;   /* The copies of the accumulator which is 
expanded.  */
   struct var_to_expand *next; /* Next entry in walking order.  */
@@ -192,10 +192,10 @@ static struct opt_info *analyze_insns_in_loop (struct 
loop *);
 static void opt_info_start_duplication (struct opt_info *);
 static void apply_opt_in_copies (struct opt_info *, unsigned, bool, bool);
 static void free_opt_info (struct opt_info *);
-static struct var_to_expand *analyze_insn_to_expand_var (struct loop*, rtx);
+static struct var_to_expand *analyze_insn_to_expand_var (struct loop*, 
rtx_insn *);
 static bool referenced_in_one_insn_in_loop_p (struct loop *, rtx, int *);
 static struct iv_to_split *analyze_iv_to_split_insn (rtx_insn *);
-static void expand_var_during_unrolling (struct var_to_expand *, rtx);
+static void expand_var_during_unrolling (struct var_to_expand *, rtx_insn *);
 static void insert_var_expansion_initialization (struct var_to_expand *,
 basic_block);
 static void combine_var_copies_in_loop_exit (struct var_to_expand *,
@@ -324,7 +324,7 @@ static bool
 loop_exit_at_end_p (struct loop *loop)
 {
   struct niter_desc *desc = get_simple_loop_desc (loop);
-  rtx insn;
+  rtx_insn *insn;
 
  if (desc->in_edge->dest != loop->latch)
 return false;
@@ -1012,7 +1012,7 @@ decide_unroll_runtime_iterations (struct loop *loop, int 
flags)
and NULL is returned instead.  */
 
 basic_block
-split_edge_and_insert (edge e, rtx insns)
+split_edge_and_insert (edge e, rtx_insn *insns)
 {
   basic_block bb;
 
@@ -1058,11 +1058,12 @@ split_edge_and_insert (edge e, rtx insns)
true, with probability PROB.  If CINSN is not NULL, it is the insn to copy
in order to create a jump.  */
 
-static rtx
+static rtx_insn *
 compare_and_jump_seq (rtx op0, rtx op1, enum rtx_code comp, rtx label, int 
prob,
- 

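The hunk above pairs a producer with its consumer: compare_and_jump_seq now returns the insn subclass and split_edge_and_insert now accepts it, which is also why plain NULL replaces NULL_RTX at the call sites. A self-contained sketch of that pairing, using hypothetical rtx_like/insn_like types rather than GCC's real classes:

#include <cstddef>

struct rtx_like             { int code; };
struct insn_like : rtx_like { int uid; };

/* Producer strengthened to return the subclass (compare
   compare_and_jump_seq returning rtx_insn * in the patch).  */
static insn_like *emit_compare_seq_like (insn_like *cinsn)
{
  static insn_like seq;
  seq.code = 1;
  seq.uid = cinsn ? cinsn->uid + 1 : 1;
  return &seq;
}

/* Consumer strengthened to take the subclass (compare
   split_edge_and_insert taking rtx_insn *).  */
static int insert_on_edge_like (insn_like *insns) { return insns->uid; }

int main ()
{
  /* With both ends strengthened, a plain NULL is passed where the old
     code used a base-typed null constant (NULL_RTX in the patch).  */
  insn_like *seq = emit_compare_seq_like (NULL);
  return insert_on_edge_like (seq) == 1 ? 0 : 1;
}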
[PATCH 109/236] resource.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* resource.c (next_insn_no_annul): Strengthen local next from
rtx to rtx_insn *.
(mark_referenced_resources): Likewise for local insn.
---
 gcc/resource.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/resource.c b/gcc/resource.c
index b555682..ef08976 100644
--- a/gcc/resource.c
+++ b/gcc/resource.c
@@ -174,7 +174,7 @@ next_insn_no_annul (rtx insn)
  && INSN_ANNULLED_BRANCH_P (insn)
  && NEXT_INSN (PREV_INSN (insn)) != insn)
{
- rtx next = NEXT_INSN (insn);
+ rtx_insn *next = NEXT_INSN (insn);
 
  while ((NONJUMP_INSN_P (next) || JUMP_P (next) || CALL_P (next))
  && INSN_FROM_TARGET_P (next))
@@ -308,7 +308,7 @@ mark_referenced_resources (rtx x, struct resources *res,
 However, we may have moved some of the parameter loading insns
 into the delay slot of this CALL.  If so, the USE's for them
 don't count and should be skipped.  */
- rtx insn = PREV_INSN (x);
+ rtx_insn *insn = PREV_INSN (x);
  rtx sequence = 0;
  int seq_size = 0;
  int i;
-- 
1.8.5.3

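These conversions are mechanical once the core accessors hand back the insn subclass: a local that holds the result of something like NEXT_INSN can then be declared with the subclass pointer, and the compiler rejects accidental mixing with arbitrary rtx values. A minimal standalone sketch of that idea, with hypothetical rtx_like/insn_like types and a trivial accessor, not GCC's actual rtl.h layout:

#include <cstdio>

/* Hypothetical stand-ins for the base rtx type and the insn subclass.  */
struct rtx_like             { int code; };
struct insn_like : rtx_like { insn_like *next; int uid; };

/* Accessor already strengthened to return the subclass.  */
static insn_like *next_insn_like (insn_like *insn) { return insn->next; }

/* A consumer that only makes sense for instructions.  */
static void mark_resources_of (insn_like *insn)
{
  std::printf ("uid %d\n", insn->uid);
}

int main ()
{
  insn_like i0, i1;
  i1.code = 1; i1.next = nullptr; i1.uid = 7;
  i0.code = 1; i0.next = &i1;     i0.uid = 6;

  insn_like *next = next_insn_like (&i0);   /* subclass to subclass: OK  */
  mark_resources_of (next);

  rtx_like not_an_insn;
  not_an_insn.code = 2;
  /* mark_resources_of (&not_an_insn);  // would not compile: a base
                                           pointer is not an insn_like *  */
  return 0;
}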


[PATCH 107/236] regstat.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* regstat.c (regstat_bb_compute_ri): Strengthen local insn from
rtx to rtx_insn *.
(regstat_bb_compute_calls_crossed): Likewise.
---
 gcc/regstat.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/regstat.c b/gcc/regstat.c
index 75d9cb4..be5d92f 100644
--- a/gcc/regstat.c
+++ b/gcc/regstat.c
@@ -121,7 +121,7 @@ regstat_bb_compute_ri (unsigned int bb_index,
   int *local_live_last_luid)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
   int luid = 0;
@@ -441,7 +441,7 @@ static void
 regstat_bb_compute_calls_crossed (unsigned int bb_index, bitmap live)
 {
   basic_block bb = BASIC_BLOCK_FOR_FN (cfun, bb_index);
-  rtx insn;
+  rtx_insn *insn;
   df_ref *def_rec;
   df_ref *use_rec;
 
-- 
1.8.5.3



[PATCH 102/236] ree.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* ree.c (struct ext_cand): Strengthen field insn from rtx to
rtx_insn *.
(combine_set_extension): Likewise for param curr_insn.
(transform_ifelse): Likewise for param def_insn.
(get_defs): Likewise for param def_insn.  Strengthen param dest
from vec<rtx> * to vec<rtx_insn *> *.
(is_cond_copy_insn): Likewise for param insn.
(struct ext_state): Strengthen the four vec fields from vec<rtx>
to vec<rtx_insn *>.
(make_defs_and_copies_lists): Strengthen param extend_insn and
local def_insn from rtx to rtx_insn *.
(get_sub_rtx): Likewise for param def_insn.
(merge_def_and_ext): Likewise.
(combine_reaching_defs): Likewise.
(add_removable_extension): Likewise for param insn.
(find_removable_extensions): Likewise for local insn.
(find_and_remove_re): Likewise for locals curr_insn and
def_insn.  Strengthen locals reinsn_del_list and
reinsn_copy_list from auto_vec<rtx> to auto_vec<rtx_insn *>.
---
 gcc/ree.c | 45 +++--
 1 file changed, 23 insertions(+), 22 deletions(-)

diff --git a/gcc/ree.c b/gcc/ree.c
index 77f1384..6ca6345 100644
--- a/gcc/ree.c
+++ b/gcc/ree.c
@@ -255,7 +255,7 @@ typedef struct ext_cand
   enum machine_mode mode;
 
   /* The instruction where it lives.  */
-  rtx insn;
+  rtx_insn *insn;
 } ext_cand;
 
 
@@ -279,7 +279,7 @@ static int max_insn_uid;
assign it to the register.  */
 
 static bool
-combine_set_extension (ext_cand *cand, rtx curr_insn, rtx *orig_set)
+combine_set_extension (ext_cand *cand, rtx_insn *curr_insn, rtx *orig_set)
 {
   rtx orig_src = SET_SRC (*orig_set);
   rtx new_set;
@@ -383,7 +383,7 @@ combine_set_extension (ext_cand *cand, rtx curr_insn, rtx 
*orig_set)
DEF_INSN is the if_then_else insn.  */
 
 static bool
-transform_ifelse (ext_cand *cand, rtx def_insn)
+transform_ifelse (ext_cand *cand, rtx_insn *def_insn)
 {
   rtx set_insn = PATTERN (def_insn);
   rtx srcreg, dstreg, srcreg2;
@@ -429,7 +429,7 @@ transform_ifelse (ext_cand *cand, rtx def_insn)
of the definitions onto DEST.  */
 
 static struct df_link *
-get_defs (rtx_insn *insn, rtx reg, vec<rtx> *dest)
+get_defs (rtx_insn *insn, rtx reg, vec<rtx_insn *> *dest)
 {
   df_ref reg_info, *uses;
   struct df_link *ref_chain, *ref_link;
@@ -470,7 +470,7 @@ get_defs (rtx_insn *insn, rtx reg, vec<rtx> *dest)
and store x1 and x2 in REG_1 and REG_2.  */
 
 static bool
-is_cond_copy_insn (rtx insn, rtx *reg1, rtx *reg2)
+is_cond_copy_insn (rtx_insn *insn, rtx *reg1, rtx *reg2)
 {
   rtx expr = single_set (insn);
 
@@ -517,10 +517,10 @@ typedef struct ext_state
   /* In order to avoid constant alloc/free, we keep these
  4 vectors live through the entire find_and_remove_re and just
  truncate them each time.  */
-  vec<rtx> defs_list;
-  vec<rtx> copies_list;
-  vec<rtx> modified_list;
-  vec<rtx> work_list;
+  vec<rtx_insn *> defs_list;
+  vec<rtx_insn *> copies_list;
+  vec<rtx_insn *> modified_list;
+  vec<rtx_insn *> work_list;
 
   /* For instructions that have been successfully modified, this is
  the original mode from which the insn is extending and
@@ -541,7 +541,7 @@ typedef struct ext_state
success.  */
 
 static bool
-make_defs_and_copies_lists (rtx extend_insn, const_rtx set_pat,
+make_defs_and_copies_lists (rtx_insn *extend_insn, const_rtx set_pat,
ext_state *state)
 {
   rtx src_reg = XEXP (SET_SRC (set_pat), 0);
@@ -559,7 +559,7 @@ make_defs_and_copies_lists (rtx extend_insn, const_rtx 
set_pat,
   /* Perform transitive closure for conditional copies.  */
   while (!state->work_list.is_empty ())
 {
-  rtx def_insn = state->work_list.pop ();
+  rtx_insn *def_insn = state->work_list.pop ();
   rtx reg1, reg2;
 
   gcc_assert (INSN_UID (def_insn) < max_insn_uid);
@@ -595,7 +595,7 @@ make_defs_and_copies_lists (rtx extend_insn, const_rtx 
set_pat,
return NULL.  This is similar to single_set, except that
single_set allows multiple SETs when all but one is dead.  */
 static rtx *
-get_sub_rtx (rtx def_insn)
+get_sub_rtx (rtx_insn *def_insn)
 {
   enum rtx_code code = GET_CODE (PATTERN (def_insn));
   rtx *sub_rtx = NULL;
@@ -633,7 +633,7 @@ get_sub_rtx (rtx def_insn)
on the SET pattern.  */
 
 static bool
-merge_def_and_ext (ext_cand *cand, rtx def_insn, ext_state *state)
+merge_def_and_ext (ext_cand *cand, rtx_insn *def_insn, ext_state *state)
 {
   enum machine_mode ext_src_mode;
   rtx *sub_rtx;
@@ -694,7 +694,7 @@ get_extended_src_reg (rtx src)
 static bool
 combine_reaching_defs (ext_cand *cand, const_rtx set_pat, ext_state *state)
 {
-  rtx def_insn;
+  rtx_insn *def_insn;
   bool merge_successful = true;
   int i;
   int defs_ix;
@@ -743,7 +743,7 @@ combine_reaching_defs (ext_cand *cand, const_rtx set_pat, 
ext_state *state)
return false;
 
   /* There's only one reaching def.  */
-  rtx def_insn = state->defs_list[0];
+  rtx_insn *def_insn = 

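Strengthening ext_state's four work lists means only insn pointers can ever be pushed onto them, so every element popped back out (def_insn above) needs no cast. A sketch of the same effect, using std::vector as a stand-in for GCC's vec template and the same hypothetical rtx_like/insn_like types:

#include <vector>

struct rtx_like             { int code; };
struct insn_like : rtx_like { int uid; };

/* Rough analogue of ext_state: once the lists hold the subclass pointer,
   each popped element is already known to be an insn.  */
struct ext_state_like
{
  std::vector<insn_like *> defs_list;
  std::vector<insn_like *> work_list;
};

int main ()
{
  ext_state_like state;
  insn_like def;
  def.code = 1; def.uid = 42;

  state.work_list.push_back (&def);               /* insn pointer: OK  */
  insn_like *def_insn = state.work_list.back ();  /* no cast needed  */
  state.work_list.pop_back ();
  (void) def_insn;

  rtx_like some_rtx;
  some_rtx.code = 2;
  /* state.work_list.push_back (&some_rtx);  // would not compile  */
  return 0;
}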
[PATCH 104/236] regcprop.c: Use rtx_insn

2014-08-06 Thread David Malcolm
gcc/
* regcprop.c (struct queued_debug_insn_change): Strengthen field
insn from rtx to rtx_insn *.
(replace_oldest_value_reg): Likewise for param insn.
(replace_oldest_value_addr): Likewise.
(replace_oldest_value_mem): Likewise.
(apply_debug_insn_changes): Likewise for local last_insn.
(copyprop_hardreg_forward_1): Likewise for local insn.
---
 gcc/regcprop.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/gcc/regcprop.c b/gcc/regcprop.c
index 7a5a4f6..6f1851a 100644
--- a/gcc/regcprop.c
+++ b/gcc/regcprop.c
@@ -50,7 +50,7 @@
 struct queued_debug_insn_change
 {
   struct queued_debug_insn_change *next;
-  rtx insn;
+  rtx_insn *insn;
   rtx *loc;
   rtx new_rtx;
 };
@@ -93,12 +93,12 @@ static bool mode_change_ok (enum machine_mode, enum 
machine_mode,
 static rtx maybe_mode_change (enum machine_mode, enum machine_mode,
  enum machine_mode, unsigned int, unsigned int);
 static rtx find_oldest_value_reg (enum reg_class, rtx, struct value_data *);
-static bool replace_oldest_value_reg (rtx *, enum reg_class, rtx,
+static bool replace_oldest_value_reg (rtx *, enum reg_class, rtx_insn *,
  struct value_data *);
 static bool replace_oldest_value_addr (rtx *, enum reg_class,
-  enum machine_mode, addr_space_t, rtx,
-  struct value_data *);
-static bool replace_oldest_value_mem (rtx, rtx, struct value_data *);
+  enum machine_mode, addr_space_t,
+  rtx_insn *, struct value_data *);
+static bool replace_oldest_value_mem (rtx, rtx_insn *, struct value_data *);
 static bool copyprop_hardreg_forward_1 (basic_block, struct value_data *);
 extern void debug_value_data (struct value_data *);
 #ifdef ENABLE_CHECKING
@@ -482,7 +482,7 @@ find_oldest_value_reg (enum reg_class cl, rtx reg, struct 
value_data *vd)
in register class CL.  Return true if successfully replaced.  */
 
 static bool
-replace_oldest_value_reg (rtx *loc, enum reg_class cl, rtx insn,
+replace_oldest_value_reg (rtx *loc, enum reg_class cl, rtx_insn *insn,
  struct value_data *vd)
 {
   rtx new_rtx = find_oldest_value_reg (cl, *loc, vd);
@@ -523,7 +523,7 @@ replace_oldest_value_reg (rtx *loc, enum reg_class cl, rtx 
insn,
 static bool
 replace_oldest_value_addr (rtx *loc, enum reg_class cl,
   enum machine_mode mode, addr_space_t as,
-  rtx insn, struct value_data *vd)
+  rtx_insn *insn, struct value_data *vd)
 {
   rtx x = *loc;
   RTX_CODE code = GET_CODE (x);
@@ -669,7 +669,7 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl,
 /* Similar to replace_oldest_value_reg, but X contains a memory.  */
 
 static bool
-replace_oldest_value_mem (rtx x, rtx insn, struct value_data *vd)
+replace_oldest_value_mem (rtx x, rtx_insn *insn, struct value_data *vd)
 {
   enum reg_class cl;
 
@@ -690,7 +690,7 @@ static void
 apply_debug_insn_changes (struct value_data *vd, unsigned int regno)
 {
   struct queued_debug_insn_change *change;
-  rtx last_insn = vd->e[regno].debug_insn_changes->insn;
+  rtx_insn *last_insn = vd->e[regno].debug_insn_changes->insn;
 
  for (change = vd->e[regno].debug_insn_changes;
change;
@@ -741,7 +741,7 @@ static bool
 copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd)
 {
   bool anything_changed = false;
-  rtx insn;
+  rtx_insn *insn;
 
   for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
 {
-- 
1.8.5.3

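The queued_debug_insn_change node now records which insn a queued rewrite applies to with the subclass pointer, so walking the queue can use insn-only information directly. A rough, self-contained analogue with hypothetical names and placeholder fields, not regcprop.c's real data:

#include <cstdio>

struct rtx_like             { int code; };
struct insn_like : rtx_like { int uid; };

/* Each queued node remembers the insn whose operand should be rewritten,
   with the strengthened pointer type.  */
struct queued_change_like
{
  queued_change_like *next;
  insn_like *insn;     /* strengthened field: only insns can be queued  */
  int *loc;            /* placeholder for the operand location  */
  int new_value;       /* placeholder for the replacement value  */
};

/* Applying the queue can use insn-only data (here, uid) without casts.  */
static void apply_changes_like (queued_change_like *head)
{
  for (queued_change_like *c = head; c; c = c->next)
    {
      *c->loc = c->new_value;
      std::printf ("patched operand in insn uid %d\n", c->insn->uid);
    }
}

int main ()
{
  insn_like insn;
  insn.code = 1; insn.uid = 3;
  int operand = 0;
  queued_change_like change = { nullptr, &insn, &operand, 99 };
  apply_changes_like (&change);
  return 0;
}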


[PATCH 116/236] shrink-wrap.*: Use rtx_insn (touches config/i386/i386.c)

2014-08-06 Thread David Malcolm
gcc/
* shrink-wrap.h (requires_stack_frame_p): Strengthen param 1
insn from rtx to rtx_insn *.
(dup_block_and_redirect): Likewise for param 3 before.

* shrink-wrap.c (requires_stack_frame_p): Strengthen param insn
from rtx to rtx_insn *.
(move_insn_for_shrink_wrap): Likewise.
(prepare_shrink_wrap): Likewise for locals insn, curr.
(dup_block_and_redirect): Likewise for param before and local
insn.
(try_shrink_wrapping): Likewise for locals insn, insert_point,
end.
(convert_to_simple_return): Likewise for local start.

* config/i386/i386.c (ix86_finalize_stack_realign_flags):
Strengthen local insn from rtx to rtx_insn *, for use when
invoking requires_stack_frame_p.
---
 gcc/config/i386/i386.c |  2 +-
 gcc/shrink-wrap.c  | 19 ++-
 gcc/shrink-wrap.h  |  5 +++--
 3 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 8827256..ea79519 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -10669,7 +10669,7 @@ ix86_finalize_stack_realign_flags (void)
   HARD_FRAME_POINTER_REGNUM);
   FOR_EACH_BB_FN (bb, cfun)
 {
-  rtx insn;
+  rtx_insn *insn;
  FOR_BB_INSNS (bb, insn)
if (NONDEBUG_INSN_P (insn)
    && requires_stack_frame_p (insn, prologue_used,
diff --git a/gcc/shrink-wrap.c b/gcc/shrink-wrap.c
index 7d9c6e7..785ca21 100644
--- a/gcc/shrink-wrap.c
+++ b/gcc/shrink-wrap.c
@@ -61,7 +61,7 @@ along with GCC; see the file COPYING3.  If not see
prologue.  SET_UP_BY_PROLOGUE is the set of registers we expect the
prologue to set up for the function.  */
 bool
-requires_stack_frame_p (rtx insn, HARD_REG_SET prologue_used,
+requires_stack_frame_p (rtx_insn *insn, HARD_REG_SET prologue_used,
HARD_REG_SET set_up_by_prologue)
 {
   df_ref *df_rec;
@@ -162,7 +162,7 @@ live_edge_for_reg (basic_block bb, int regno, int end_regno)
is splitted or not.  */
 
 static bool
-move_insn_for_shrink_wrap (basic_block bb, rtx insn,
+move_insn_for_shrink_wrap (basic_block bb, rtx_insn *insn,
   const HARD_REG_SET uses,
   const HARD_REG_SET defs,
   bool *split_p)
@@ -331,7 +331,8 @@ move_insn_for_shrink_wrap (basic_block bb, rtx insn,
 void
 prepare_shrink_wrap (basic_block entry_block)
 {
-  rtx insn, curr, x;
+  rtx_insn *insn, *curr;
+  rtx x;
   HARD_REG_SET uses, defs;
   df_ref *ref;
   bool split_p = false;
@@ -373,12 +374,12 @@ prepare_shrink_wrap (basic_block entry_block)
 /* Create a copy of BB instructions and insert at BEFORE.  Redirect
preds of BB to COPY_BB if they don't appear in NEED_PROLOGUE.  */
 void
-dup_block_and_redirect (basic_block bb, basic_block copy_bb, rtx before,
+dup_block_and_redirect (basic_block bb, basic_block copy_bb, rtx_insn *before,
bitmap_head *need_prologue)
 {
   edge_iterator ei;
   edge e;
-  rtx insn = BB_END (bb);
+  rtx_insn *insn = BB_END (bb);
 
   /* We know BB has a single successor, so there is no need to copy a
  simple jump at the end of BB.  */
@@ -513,7 +514,7 @@ try_shrink_wrapping (edge *entry_edge, edge orig_entry_edge,
 
   FOR_EACH_BB_FN (bb, cfun)
{
- rtx insn;
+ rtx_insn *insn;
  unsigned size = 0;
 
  FOR_BB_INSNS (bb, insn)
@@ -707,7 +708,7 @@ try_shrink_wrapping (edge *entry_edge, edge orig_entry_edge,
FOR_EACH_BB_REVERSE_FN (bb, cfun)
  {
basic_block copy_bb, tbb;
-   rtx insert_point;
+   rtx_insn *insert_point;
int eflags;
 
if (!bitmap_clear_bit (bb_tail, bb->index))
@@ -724,7 +725,7 @@ try_shrink_wrapping (edge *entry_edge, edge orig_entry_edge,
if (e)
  {
 /* Make sure we insert after any barriers.  */
-rtx end = get_last_bb_insn (e->src);
+rtx_insn *end = get_last_bb_insn (e->src);
 copy_bb = create_basic_block (NEXT_INSN (end),
   NULL_RTX, e->src);
BB_COPY_PARTITION (copy_bb, e->src);
@@ -902,7 +903,7 @@ convert_to_simple_return (edge entry_edge, edge 
orig_entry_edge,
  else if (*pdest_bb == NULL)
{
  basic_block bb;
- rtx start;
+ rtx_insn *start;
 
  bb = create_basic_block (NULL, NULL, exit_pred);
  BB_COPY_PARTITION (bb, e->src);
diff --git a/gcc/shrink-wrap.h b/gcc/shrink-wrap.h
index bccfb31..5576d36 100644
--- a/gcc/shrink-wrap.h
+++ b/gcc/shrink-wrap.h
@@ -34,10 +34,11 @@ extern basic_block emit_return_for_exit (edge 
exit_fallthru_edge,
 bool simple_p);
 
 /* In shrink-wrap.c.  */
-extern 

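Where a caller still holds a plain rtx at this point in the series, the conversion happens at the boundary with a checked cast rather than by weakening the callee back to rtx. A simplified sketch in the spirit of GCC's is_a/as_a helpers, with the same hypothetical types and a trivial runtime check standing in for the real machinery:

#include <cassert>

struct rtx_like
{
  int code;                     /* pretend code 1 means "this is an insn"  */
};

struct insn_like : rtx_like
{
  int uid;
};

/* Simplified stand-ins for a checked downcast: assert the dynamic kind,
   then hand back the subclass pointer.  Not GCC's real is-a.h.  */
static bool is_insn_like (const rtx_like *x) { return x && x->code == 1; }

static insn_like *as_insn_like (rtx_like *x)
{
  assert (is_insn_like (x));
  return static_cast<insn_like *> (x);
}

/* A predicate that, after strengthening, takes the subclass directly
   (compare requires_stack_frame_p in the patch above).  */
static bool needs_frame_like (insn_like *insn) { return insn->uid > 0; }

int main ()
{
  insn_like i;
  i.code = 1; i.uid = 5;

  rtx_like *x = &i;             /* caller still holds the base type  */
  bool b = needs_frame_like (as_insn_like (x));  /* one checked cast at the boundary  */
  return b ? 0 : 1;
}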