On 06/18/2014 03:57 PM, Kyle McMartin wrote:
pretty sure we need a similar fix for tlsgd_small, since __tls_get_addr
could clobber CC as well.
As I replied in IRC, no, because tlsgd_small is modeled with an actual
CALL_INSN, and thus call-clobbered registers work as normal.
r~
On 06/01/2014 03:00 AM, Tom de Vries wrote:
+/* Emit call insn with PAT and do aarch64-specific handling. */
+
+bool
+aarch64_emit_call_insn (rtx pat)
+{
+ rtx insn = emit_call_insn (pat);
+
+ rtx *fusage = &CALL_INSN_FUNCTION_USAGE (insn);
+ clobber_reg (fusage, gen_rtx_REG
On 06/01/2014 03:00 AM, Tom de Vries wrote:
+aarch64_emit_call_insn (rtx pat)
+{
+ rtx insn = emit_call_insn (pat);
+
+ rtx *fusage = &CALL_INSN_FUNCTION_USAGE (insn);
+ clobber_reg (fusage, gen_rtx_REG (word_mode, IP0_REGNUM));
+ clobber_reg (fusage, gen_rtx_REG (word_mode,
On 06/01/2014 04:27 AM, Tom de Vries wrote:
+ if (TARGET_AAPCS_BASED)
+{
+ /* For AAPCS, IP and CC can be clobbered by veneers inserted by the
+ linker. We need to add these to allow
+ arm_call_fusage_contains_non_callee_clobbers to return true. */
+ rtx *fusage =
On 05/19/2014 07:30 AM, Tom de Vries wrote:
+ for (insn = get_insns (); insn != NULL_RTX; insn = next_insn (insn))
+{
+ HARD_REG_SET insn_used_regs;
+
+ if (!NONDEBUG_INSN_P (insn))
+ continue;
+
+ find_all_hard_reg_sets (insn, insn_used_regs, false);
+
+ if
On 06/19/2014 05:39 AM, Tom de Vries wrote:
2014-06-19 Tom de Vries t...@codesourcery.com
* final.c (collect_fn_hard_reg_usage): Add and use variable
function_used_regs.
Looks good, thanks.
r~
On 06/19/2014 01:39 AM, Tom de Vries wrote:
On 19-06-14 05:53, Richard Henderson wrote:
Do we in fact make sure this isn't an ifunc resolver? I don't immediately
see
how those get wired up in the cgraph...
Richard,
using the patch below I changed the
gcc/testsuite/gcc.target/i386/fuse
On 06/19/2014 09:06 AM, Tom de Vries wrote:
2014-06-19 Tom de Vries t...@codesourcery.com
* final.c (collect_fn_hard_reg_usage): Don't save function_used_regs if
it contains all call_used_regs.
Ok.
r~
On 06/19/2014 09:07 AM, Tom de Vries wrote:
2014-06-19 Tom de Vries t...@codesourcery.com
* final.c (collect_fn_hard_reg_usage): Add separate IOR_HARD_REG_SET for
get_call_reg_set_usage.
Ok, as far as it goes, but...
It seems like there should be quite a bit of overlap with
On 06/19/2014 09:37 AM, Tom de Vries wrote:
On 19-06-14 05:59, Richard Henderson wrote:
On 06/01/2014 04:27 AM, Tom de Vries wrote:
+ if (TARGET_AAPCS_BASED)
+{
+ /* For AAPCS, IP and CC can be clobbered by veneers inserted by the
+ linker. We need to add these to allow
On 06/19/2014 09:40 AM, Richard Henderson wrote:
It appears that regs_ever_live includes any register mentioned explicitly, and
thus the only registers it doesn't contain are those killed by the callees.
That should be an easier scan than the rtl, since we have those already
collected
On 06/19/2014 11:25 AM, Tom de Vries wrote:
On 19-06-14 05:53, Richard Henderson wrote:
On 06/01/2014 03:00 AM, Tom de Vries wrote:
+aarch64_emit_call_insn (rtx pat)
+{
+ rtx insn = emit_call_insn (pat);
+
+ rtx *fusage = &CALL_INSN_FUNCTION_USAGE (insn);
+ clobber_reg (fusage
On 06/19/2014 12:36 PM, Jan Hubicka wrote:
On 06/19/2014 09:06 AM, Tom de Vries wrote:
2014-06-19 Tom de Vries t...@codesourcery.com
* final.c (collect_fn_hard_reg_usage): Don't save function_used_regs if
it contains all call_used_regs.
Ok.
When we now have way to represent
On 02/28/2014 01:32 AM, Ramana Radhakrishnan wrote:
Hi,
This defines TARGET_FLAGS_REGNUM for AArch64 to be CC_REGNUM. Noticed this
turns on the cmpelim pass after reload and in a few examples and a couple of
benchmarks I noticed a number of comparisons getting deleted. A similar patch
On 06/20/2014 08:56 AM, Kai Tietz wrote:
+(define_split
+ [(set (match_operand:W 0 "register_operand")
+(match_operand:W 1 "memory_operand"))
+ (set (pc) (match_dup 0))]
+ "!TARGET_X32 && peep2_reg_dead_p (2, operands[0])"
+ [(set (pc) (match_dup 1))])
+
Huh? You can't use peep2 data
On 06/20/2014 10:52 AM, Kai Tietz wrote:
2014-06-20 Kai Tietz kti...@redhat.com
PR target/39284
* passes.def (peephole2): Add second peephole2 pass before
split before sched2 pass.
* config/i386/i386.md (peehole2): To combine
indirect jump with memory.
(split2):
There aren't too many users of the cmpelim pass, and previously they were all
small embedded targets without an FPU.
I'm a bit surprised that Ramana decided to enable this pass for aarch64, as
that target is not so limited as the block comment for the pass describes.
Honestly, whatever is being
On 06/23/2014 02:29 AM, Ramana Radhakrishnan wrote:
On 20/06/14 21:28, Richard Henderson wrote:
There aren't too many users of the cmpelim pass, and previously they were all
small embedded targets without an FPU.
I'm a bit surprised that Ramana decided to enable this pass for aarch64
On 06/20/2014 02:59 PM, Kai Tietz wrote:
So I suggest following change of passes.def:
Index: passes.def
===
--- passes.def (revision 211850)
+++ passes.def (working copy)
@@ -384,7 +384,6 @@ along with GCC; see the file
On 06/20/2014 01:42 PM, Marek Polacek wrote:
2014-06-20 Marek Polacek pola...@redhat.com
* genpreds.c (verify_rtx_codes): New function.
(main): Call it.
* rtl.h (RTX_FLD_WIDTH, RTX_HWINT_WIDTH): Define.
(struct rtx_def): Use them.
Looks pretty good. Just a few
On 06/23/2014 09:22 AM, Jeff Law wrote:
On 06/23/14 08:32, Richard Biener wrote:
Btw, there is now no DCE after peephole2? Is peephole2 expected to
cleanup after itself?
There were cases where we wanted to change the insns we would output to fit
into the 4:1:1 issue model of the PPro, but to
On 06/23/2014 08:55 AM, Ramana Radhakrishnan wrote:
Agreed, this is why cmpelim looks interesting for Thumb1. (We may need another
hook or something to disable it in configurations we don't need it in, but you
know ... )
Yeah. Feel free to change targetm.flags_regnum from a variable to a
On 06/25/2014 08:28 AM, Jeff Law wrote:
Ask an ARM maintainer if the new code is actually better than the old code.
It isn't.
It appears that with the peep2 pass moved, we actually if-convert the
fall-thru path of the conditional and eliminate the conditional. Which, on the
surface seems
On 06/25/2014 06:35 AM, Kai Tietz wrote:
Hello,
so there seems to be a fallout caused by moving peephole2 pass. See PR/61608.
So we need indeed 2 peephole2 passes.
We don't need a second peephole pass. Please try this.
I think there's room for cleanup here, depending on when we leave
On 06/26/2014 02:43 AM, Uros Bizjak wrote:
2014-06-26 Uros Bizjak ubiz...@gmail.com
PR target/61586
* config/alpha/alpha.c (alpha_handle_trap_shadows): Handle BARRIER RTX.
testsuite/ChangeLog:
2014-06-26 Uros Bizjak ubiz...@gmail.com
PR target/61586
*
On 06/26/2014 02:15 PM, Uros Bizjak wrote:
* except.c (emit_note_eh_region_end): New helper function.
(convert_to_eh_region_ranges): Use emit_note_eh_region_end to
emit EH_REGION_END note.
This bit looks good.
rtx insn, next, prev;
- for (insn = get_insns (); insn; insn =
On 06/27/2014 10:04 AM, Uros Bizjak wrote:
This happened due to the way stores to QImode and HImode locations are
implemented on non-BWX targets. The sequence reads full word, does its
magic to the part and stores the full word with changed part back to
the memory. However - the scheduler
On 06/29/2014 12:51 PM, Uros Bizjak wrote:
I believe that attached v2 patch addresses all your review comments.
Patch was bootstrapped and regression tested on x86_64-linux-gnu {,-m32}.
Looks good, thanks.
r~
On 06/29/2014 11:14 AM, Uros Bizjak wrote:
if (MEM_READONLY_P (x))
+if (GET_CODE (mem_addr) == AND)
+ return 1;
return 0;
Certainly missing braces here. But with that fixed the patch looks plausible.
I'll look at it closer later today.
r~
On 07/07/2014 02:10 AM, Richard Biener wrote:
On Mon, Jun 30, 2014 at 5:54 PM, Richard Henderson r...@redhat.com wrote:
On 06/29/2014 11:14 AM, Uros Bizjak wrote:
if (MEM_READONLY_P (x))
+if (GET_CODE (mem_addr) == AND)
+ return 1;
return 0;
Certainly missing braces here
On 07/03/2014 02:53 AM, Evgeny Stupachenko wrote:
-expand_vec_perm_palignr (struct expand_vec_perm_d *d)
+expand_vec_perm_palignr (struct expand_vec_perm_d *d, int insn_num)
insn_num might as well be bool avx2, since it's only ever set to two values.
- /* Even with AVX, palignr only operates
On 07/07/2014 07:34 AM, Richard Biener wrote:
Ugh. I wasn't aware of that - is this documented anywhere? What
exactly does such address conflict with? Does it inhibit type-based analysis?
Dunno if it's documented anywhere. Such addresses conflict with anything,
unless it can be proven not
On 07/07/2014 09:35 AM, Uros Bizjak wrote:
On Mon, Jul 7, 2014 at 5:01 PM, Richard Henderson r...@redhat.com wrote:
Early alpha can't store sub-4-byte quantities. Altivec can't store anything
but 16 byte quantities. In order to perform smaller stores, we have to do a
read-modify-write
I noticed this while backporting support to the 4.9 branch.
I'm not sure what I was thinking when I wrote this originally;
probably too much cut-and-paste from another implementation.
Anyway, sanity tested and committed.
r~
* config/aarch64/sjlj.S (_ITM_beginTransaction): Use post-inc
On 07/26/2014 05:35 AM, Uros Bizjak wrote:
On Mon, May 2, 2011 at 9:21 AM, Uros Bizjak ubiz...@gmail.com wrote:
It looks that GP relative relocations do not fit anymore into GPREL16
reloc, so bootstrap on alpha hosts fail in stage2 with relocation
truncated to fit: GPREL16 against I
On 07/29/2014 06:11 AM, Uros Bizjak wrote:
Perhaps even better solution for mainline would be to detect a recent
enough linker and skip the workaround in that case? I guess that 2.25
will have this issue fixed?
Certainly 2.25 will have this fixed. If you want to add a check for binutils
On 11/12/2014 11:01 AM, Yangfei (Felix) wrote:
+(define_expand "doloop_end"
+ [(use (match_operand 0 "" "")) ; loop pseudo
+ (use (match_operand 1 "" ""))] ; label
+ ""
+
+{
Drop the "" surrounding the { }.
r~
On 11/12/2014 10:18 PM, Iain Sandoe wrote:
* config/darwin/host-config.h New.
* config/darwin/lock.c New.
* configure.tgt (DEFAULT_X86_CPU): New, (target): New entry for darwin.
Looks pretty good.
+# ifndef USE_ATOMIC
+#define USE_ATOMIC 1
+# endif
Why would
On 11/13/2014 08:49 AM, Zhenqiang Chen wrote:
After adding HAVE_cbranchcc4, we can just use HAVE_cbranchcc4. No need to
add a local variable allow_cc_mode.
Here is the updated patch.
This is ok.
Since I've already approved Ulrich's s390 fix, there should not be a
problem there for long.
On 11/13/2014 09:34 PM, Iain Sandoe wrote:
Um, surely not LOCK_SIZE, but CACHELINE_SIZE. It's the granularity of the
target region that's at issue, not the size of the lock itself.
The algorithm I've used is intentionally different from the pthreads-based
posix one...
All that would be
On 11/14/2014 09:12 AM, Iain Sandoe wrote:
my locks are only 4 bytes [whereas they are
rounded-up-to-n-cachelines(sizeof(pthreads mutex)) for the posix
implementation].
The items that they are locking are of arbitrary size (at least up to one
page).
hmmm .. there's something I'm not
On 11/14/2014 09:44 AM, Iain Sandoe wrote:
or do you think that a scheme to lock variable-sized chunks cannot work?
It certainly can't work while
+void
+libat_lock_1 (void *ptr)
+{
+ LockLock (locks[addr_hash (ptr, 1)]);
+}
doesn't have the true size.
r~
On 11/14/2014 09:11 PM, Gopalasubramanian, Ganesh wrote:
+const char * pftype[2][10]
+ = { {"PLDL1STRM", "PLDL3KEEP", "PLDL2KEEP", "PLDL1KEEP"},
+ {"PSTL1STRM", "PSTL3KEEP", "PSTL2KEEP", "PSTL1KEEP"},
+ };
The array should be
static const char * const pftype[2][4]
I've no idea where
On 09/30/2014 12:47 PM, Olivier Hainque wrote:
2014-09-30 Olivier Hainque hain...@adacore.com
libgcc/
* unwind-dw2.c (DWARF_REG_TO_UNWIND_COLUMN): Move default def to ...
gcc/
* defaults.h: ... here.
* dwarf2cfi.c (init_one_dwarf_reg_size): New
On 10/29/2014 04:31 AM, Andi Kleen wrote:
2014-10-28 Andi Kleen a...@linux.intel.com
PR target/63672
* config/i386/i386.c (ix86_expand_builtin): Generate memory
barrier after abort.
* config/i386/i386.md (xbegin): Add memory barrier.
(xend): Rename to ...
On 11/18/2014 11:48 AM, Yangfei (Felix) wrote:
+(define_expand "doloop_end"
+ [(use (match_operand 0 "" "")) ; loop pseudo
+ (use (match_operand 1 "" ""))] ; label
+ ""
+{
+ /* Currently SMS relies on the do-loop pattern to recognize loops
+ where (1) the control part consists of all
On 11/18/2014 12:28 PM, Yangfei (Felix) wrote:
+2014-11-13 Felix Yang felix.y...@huawei.com
+
+ * config/aarch64/aarch64.c (doloop_end): New pattern.
+ * config/aarch64/aarch64.md (TARGET_CAN_USE_DOLOOP_P): Implement.
Looks good to me. I'll leave it for aarch64 maintainers for
On 10/31/2014 03:51 PM, Renlin Li wrote:
+(define_expand "<su><maxmin>v2di3"
+ [(parallel [
+(set (match_operand:V2DI 0 "register_operand" "")
+ (MAXMIN:V2DI (match_operand:V2DI 1 "register_operand" "")
+ (match_operand:V2DI 2 "register_operand" "")))
+(clobber (reg:CC
As opposed to always being a decl. This is a prerequisite
to allowing the static chain to be loaded for indirect calls.
* targhooks.c (default_static_chain): Remove check for
DECL_STATIC_CHAIN.
* config/moxie/moxie.c (moxie_static_chain): Likewise.
*
review, and hope to have to libffi merge complete soon.
I think it's important that the change to Go using the static chain
happen before gcc5, as it represents an ABI change. Our next opportunity
to fix the bulk of the non-x86 targets would be gcc6 in a year's time.
r~
Richard Henderson (3
We need to be able to set the static chain on a few calls within the
Go runtime, so expose this with __builtin_call_with_static_chain.
* c-family/c-common.c (c_common_reswords): Add
__builtin_call_with_static_chain.
* c-family/c-common.h
And, at the same time, allow indirect calls to have a static chain.
We'll always eliminate the static chain if we can prove it's unused.
* calls.c (prepare_call_address): Allow decl or type for first arg.
(expand_call): Pass type to prepare_call_address if no decl.
*
My mistake yesterday. I thought I'd tested both x86_64 -m64/-m32, but not so.
Anyway, as the comment says, the backend keeps querying the static chain, and
if you don't early out, it sets ix86_static_chain_on_stack, at which point the
setting is permanent and affects prologue generation, and not
On 11/20/2014 12:36 PM, Evgeny Stupachenko wrote:
+ /* Required for pack. */
+ if (!TARGET_SSE4_2 || d->one_operand_p)
+return false;
Why the SSE4_2 check here when...
+
+ /* Only V8HI, V16QI, V16HI and V32QI modes are more profitable than general
+ shuffles. */
+ if (d->vmode
On 11/19/2014 08:56 PM, H.J. Lu wrote:
On Wed, Nov 19, 2014 at 10:04 AM, Jakub Jelinek ja...@redhat.com wrote:
On Wed, Nov 19, 2014 at 03:58:50PM +0100, Richard Henderson wrote:
As opposed to always being a decl. This is a prerequisite
to allowing the static chain to be loaded for indirect
On 11/20/2014 10:48 AM, Zhenqiang Chen wrote:
+/* Check X clobber CC reg or not. */
+
+static bool
+clobber_cc_p (rtx x)
+{
+ RTX_CODE code = GET_CODE (x);
+ int i;
+
+ if (code == CLOBBER
+ && REG_P (XEXP (x, 0))
+ && (GET_MODE_CLASS (GET_MODE (XEXP (x, 0))) == MODE_CC))
+
On 11/20/2014 07:09 PM, Jakub Jelinek wrote:
Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
2014-11-20 Jakub Jelinek ja...@redhat.com
PR target/63764
c-family/
* c-common.h (convert_vector_to_pointer_for_subscript): Change
return type to bool.
This is the non-gofrontend patch corresponding to
https://codereview.appspot.com/180890043/
Ian, I thought I'd sent it out at the same time as
the codereview, but I guess it never went.
r~
commit 5a78f18cf8cd2160c62472693222cc72bd4379b6
Author: Richard Henderson r...@redhat.com
Date: Tue
On 11/22/2014 01:25 AM, John David Anglin wrote:
#define ABORT_INSTRUCTION asm ("iitlbp %r0,(%sr0, %r0)")
...
+static inline long
+__kernel_cmpxchg2 (void * oldval, void * newval, void *mem, int val_size)
+{
+ register unsigned long lws_mem asm ("r26") = (unsigned long) (mem);
+ register long
On 11/24/2014 06:11 AM, Zhenqiang Chen wrote:
Expand pass always uses sign-extend to represent constant value. For the
case in the patch, an 8-bit unsigned value 252 is represented as -4,
which passes the ccmn check. After mode conversion, -4 becomes 252, which
leads to mismatch.
This sort of
On 11/25/2014 09:41 AM, Zhenqiang Chen wrote:
I want to confirm with you two things before I rework it.
(1) expand_insn needs an optab_handler as input. Do I need to define a
ccmp_optab with different mode support in optabs.def?
No, look again: expand_insn needs an enum insn_code as input.
On 09/29/2014 11:12 AM, Jiong Wang wrote:
+inline rtx single_set_no_clobber_use (const rtx_insn *insn)
+{
+ if (!INSN_P (insn))
+return NULL_RTX;
+
+ if (GET_CODE (PATTERN (insn)) == SET)
+return PATTERN (insn);
+
+ /* Defer to the more expensive case, and return NULL_RTX if
On 09/30/2014 02:52 AM, Jakub Jelinek wrote:
On Tue, Sep 30, 2014 at 11:03:47AM +0400, Varvara Rainchik wrote:
Corrected patch: call pthread_setspecific (gomp_tls_key, NULL) in
gomp_thread_start if HAVE_TLS is not defined.
2014-09-19 Varvara Rainchik varvara.rainc...@intel.com
*
?
I've done this for arm64 (based on the 4.9 branch, since that's important
internally at the moment, and may be less invasive than backporting the libffi
code), and it at least passes the testsuite.
Thoughts?
r~
commit f1d42e628ed611297731b2a78bbee69d2f45f8e1
Author: Richard Henderson r
On 10/01/2014 03:08 PM, Ian Lance Taylor wrote:
func TestMakeFunc(t *testing.T) {
switch runtime.GOARCH {
case "amd64", "386":
default:
t.Skip("MakeFunc not implemented for " + runtime.GOARCH)
}
Wait, what sources are you looking at? I took that
On 10/08/2014 03:47 AM, Chen Gang wrote:
It passes make -k check under Darwin x86_64.
2014-10-07 Chen Gang gang.chen.5...@gmail.com
* unwind-dw2-fde.h (last_fde): Use (const fde *) instead of
(char *) to avoid qualifier warning by 'xgcc' compiling.
Ok.
r~
counting a non-zero number of non-constants, etc.
Tested on x86_64, and against Andi's test case (unfortunately unreduced).
r~
2014-10-10 Richard Henderson r...@redhat.com
PR target/63404
* shrink-wrap.c (move_insn_for_shrink_wrap): Don't use single_set.
Restrict
On 10/10/2014 09:39 AM, Jiong Wang wrote:
(1) Don't bother modifying single_set; just look for a bare SET.
(2) Tighten the set of expressions we're willing to move.
(3) Use direct return false in the failure case, rather than
counting a non-zero number of non-constants, etc.
Tested on
On 10/10/2014 10:21 AM, Jeff Law wrote:
On 10/10/14 11:04, Jakub Jelinek wrote:
On Fri, Oct 10, 2014 at 11:00:54AM -0600, Jeff Law wrote:
But it's really a lot more like a
kind of PLUS. If instead we had a LOW to match HIGH it would have been
Right. In fact, I believe at the hardware
On 10/10/2014 09:52 AM, Richard Henderson wrote:
I wonder what kind of fallout there would be from changing LO_SUM to
RTX_BIN_ARITH, which is what it should have been all along.
The answer is a lot. Mostly throughout simplify-rtx.c, wherein we'd have to
move all sorts of code around
/2014-10/msg00098.html
[2] https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00102.html
[3] Except that after rebasing the tree on yesterday's trunk,
I discovered that i386 and aarch64 both have bootstrap
problems on trunk. Ouch.
[4] git://github.com/rth7680/gcc.git rth/go-closure
Richard
This is awful syntax, and therefore contains no documentation.
But we'll need to be able to set the static chain on a few calls
within the Go runtime, so we need to expose this by some means.
It currently looks like
function(args...) __builtin_call_chain(pointer)
because that was easy
And, at the same time, allow indirect calls to have a static chain.
We'll always eliminate the static chain if we can prove it's unused.
---
gcc/calls.c | 14 --
gcc/gimple-fold.c | 21 +
gcc/gimplify.c| 17 -
gcc/tree-cfg.c| 22
As opposed to always being a decl. This is a prerequisite
to allowing the static chain to be loaded for indirect calls.
---
gcc/config/i386/i386.c | 19 +--
gcc/config/moxie/moxie.c | 5 +
gcc/config/xtensa/xtensa.c | 2 +-
gcc/doc/tm.texi| 2 +-
This is not a standalone patch; it only touches the Go front end.
Further changes are required for the Go runtime.
---
gcc/go/go-gcc.cc | 44 ++--
gcc/go/gofrontend/backend.h | 7 ++-
gcc/go/gofrontend/expressions.cc | 21
A ffi_go_closure is intended to be compatible with the
function descriptors used by Go, and ffi_call_go sets up
the static chain parameter for calling a Go function.
The entry points are disabled when a backend has not been
updated, much like we do for normal closures.
---
Doesn't delete the __go_get/set_closure routines yet, as they're
still referenced by the ffi code, to be updated in another patch.
---
libgo/go/reflect/makefunc_386.S | 22 +-
libgo/go/reflect/makefunc_amd64.S | 13 -
libgo/runtime/malloc.goc | 8
Still missing changes for darwin, win64, and all 32-bit abis.
Dumps all of the hand-coded unwind info for gas generated, as
I can't be bothered to do the updates by hand again.
---
libffi/src/x86/ffi64.c | 103 ++-
libffi/src/x86/ffitarget.h | 2 +
libffi/src/x86/unix64.S|
---
libgo/runtime/proc.c| 20
libgo/runtime/runtime.h | 4
2 files changed, 24 deletions(-)
diff --git a/libgo/runtime/proc.c b/libgo/runtime/proc.c
index 87cd3ed..e52d37c 100644
--- a/libgo/runtime/proc.c
+++ b/libgo/runtime/proc.c
@@ -3370,26 +3370,6 @@
---
libffi/src/aarch64/ffi.c | 42 ++---
libffi/src/aarch64/ffitarget.h | 3 ++-
libffi/src/aarch64/sysv.S | 60 +-
3 files changed, 99 insertions(+), 6 deletions(-)
diff --git a/libffi/src/aarch64/ffi.c
This does drop support for targets whose libffi hasn't been updated,
but if we go this way that should be fairly easy to do.
---
libgo/go/reflect/makefunc.go | 49 ++--
libgo/go/reflect/makefunc_ffi.go | 67 --
---
libffi/src/x86/ffi.c | 88 -
libffi/src/x86/sysv.S | 107 +-
2 files changed, 166 insertions(+), 29 deletions(-)
diff --git a/libffi/src/x86/ffi.c b/libffi/src/x86/ffi.c
index e3f82ef..77abbe3 100644
(1) Invent a new internal.h rather than polluting the public ffitarget.h
with stuff that ought not be exposed.
(2) Rewrite is_hfa to not be so horribly computationally expensive. And
more to the point require us to _re_ compute the same stuff in order
to actually do anything with the
(1) Invent a new internal.h rather than polluting the public ffitarget.h
with stuff that ought not be exposed.
(2) Reduce the ifdefs to a minimum. Support the windows and sysv abis at
the same time. After all, it's possible to write functions for any of
these abis with gcc at any
On 10/10/2014 06:42 PM, Peter Collingbourne wrote:
A colleague has suggested a perhaps nicer syntax:
__builtin_call_chain(pointer, call) where call must be a call expression
I like this.
Unlike the other suggestions, it doesn't mess with the parsing of the regular
part of the function call.
On 09/22/2014 11:43 PM, Zhenqiang Chen wrote:
+@cindex @code{ccmp} instruction pattern
+@item @samp{ccmp}
+Conditional compare instruction. Operands 2 and 5 are RTLs which perform
+two comparisons. Operand 1 is AND or IOR, which operates on the results of
+operands 2 and 5.
+It uses
On 09/22/2014 11:43 PM, Zhenqiang Chen wrote:
+ /* If jumps are cheap and the target does not support conditional
+ compare, turn some more codes into jumpy sequences. */
+ else if (BRANCH_COST (optimize_insn_for_speed_p (), false) < 4
+
On 09/22/2014 11:44 PM, Zhenqiang Chen wrote:
+/* Return true if val can be encoded as a 5-bit unsigned immediate. */
+bool
+aarch64_uimm5 (HOST_WIDE_INT val)
+{
+ return (val & (HOST_WIDE_INT) 0x1f) == val;
+}
This is just silly.
+(define_constraint "Usn"
+ "A constant that can be used
On 09/22/2014 11:44 PM, Zhenqiang Chen wrote:
+case CC_DNEmode:
+ return comp_code == NE ? AARCH64_NE : AARCH64_EQ;
+case CC_DEQmode:
+ return comp_code == NE ? AARCH64_EQ : AARCH64_NE;
+case CC_DGEmode:
+ return comp_code == NE ? AARCH64_GE : AARCH64_LT;
+
On 09/22/2014 11:45 PM, Zhenqiang Chen wrote:
+static unsigned int
+aarch64_code_to_nzcv (enum rtx_code code, bool inverse) {
+ switch (code)
+{
+case NE: /* NE, Z == 0. */
+ return inverse ? AARCH64_CC_Z : 0;
+case EQ: /* EQ, Z == 1. */
+ return inverse ? 0 :
On 09/22/2014 11:45 PM, Zhenqiang Chen wrote:
+(define_expand "cbranchcc4"
+ [(set (pc) (if_then_else
+ (match_operator 0 "aarch64_comparison_operator"
+[(match_operand 1 "cc_register" "")
+ (const_int 0)])
+ (label_ref (match_operand 3 "" ""))
+
On 09/22/2014 11:46 PM, Zhenqiang Chen wrote:
+static bool
+aarch64_convert_mode (rtx* op0, rtx* op1, int unsignedp)
+{
+ enum machine_mode mode;
+
+ mode = GET_MODE (*op0);
+ if (mode == VOIDmode)
+mode = GET_MODE (*op1);
+
+ if (mode == QImode || mode == HImode)
+{
+
On 09/22/2014 11:46 PM, Zhenqiang Chen wrote:
@@ -2375,10 +2387,21 @@ noce_get_condition (rtx_insn *jump, rtx_insn
**earliest, bool then_else_reversed
return cond;
}
+ /* For conditional compare, set ALLOW_CC_MODE to TRUE. */
+ if (targetm.gen_ccmp_first)
+{
+
On 10/11/2014 09:11 AM, Richard Henderson wrote:
On 09/22/2014 11:45 PM, Zhenqiang Chen wrote:
+static unsigned int
+aarch64_code_to_nzcv (enum rtx_code code, bool inverse) {
+ switch (code)
+{
+case NE: /* NE, Z == 0. */
+ return inverse ? AARCH64_CC_Z : 0;
+case EQ
but not mainline, but
I'm applying it here anyway.
Tested on i686 and x86_64.
r~
2014-10-13 Richard Henderson r...@redhat.com
* combine-stack-adj.c (no_unhandled_cfa): New.
(maybe_merge_cfa_adjust): New.
(combine_stack_adjustments_for_block): Use them.
* g++.dg/torture
On 10/14/2014 08:08 AM, Evgeny Stupachenko wrote:
Hi,
Bootstaped with --enable-languages=c,c++,fortran,lto,go passed.
Make check in progress.
Is it ok?
ChangeLog
2014-10-14 Evgeny Stupachenko evstu...@gmail.com
* config/i386/i386.c (ix86_expand_split_stack_prologue):
On 10/14/2014 06:02 AM, Christian Bruel wrote:
2014-09-23 Christian Bruel christian.br...@st.com
* execute_dwarf2_frame (dw_frame_pointer_regnum): Reinitialize for each
function.
It's tempting to make this a local variable within dwarf2out_frame_debug_expr
and not try to cache it at
On 10/14/2014 11:25 AM, Richard Henderson wrote:
On 10/14/2014 06:02 AM, Christian Bruel wrote:
2014-09-23 Christian Bruel christian.br...@st.com
* execute_dwarf2_frame (dw_frame_pointer_regnum): Reinitialize for each
function.
It's tempting to make this a local variable within
On 08/08/2014 08:51 PM, Andrew Pinski wrote:
ChangeLog:
* explow.c (convert_memory_address_addr_space): Rename to ...
(convert_memory_address_addr_space_1): This. Add in_const argument.
Inside a CONST RTL, permute the conversion and addition of constant
for zero and
00:00:00 2001
From: Richard Henderson r...@redhat.com
Date: Tue, 7 Oct 2014 12:17:28 -0700
Subject: [PATCH 03/13] Allow the static chain to be set from C
We need to be able to set the static chain on a few calls within the
Go runtime, so expose this with __builtin_call_with_static_chain.
---
gcc/c