[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-18 Thread Eli Friedman via Phabricator via cfe-commits
efriedma abandoned this revision.
efriedma added a comment.

Opened https://github.com/llvm/llvm-project/issues/54963 to more specifically 
track the clang issue.

With b27430f, it should be easy enough to special-case calls to 
std::__addressof in CodeGenFunction::EmitPointerWithAlignment, so we can just 
do that, I guess.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D123642/new/

https://reviews.llvm.org/D123642

___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-15 Thread Richard Smith - zygoloid via Phabricator via cfe-commits
rsmith added a comment.

In D123642#3450129, @xbolva00 wrote:

> Do you have any comments from gcc devs indicating that this is a "known" bug?

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87237#c1, from someone in the GCC 
MAINTAINERS file, suggests that this is either a GCC bug or a GCC documentation 
bug.




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-15 Thread James Y Knight via Phabricator via cfe-commits
jyknight added a comment.

In D123642#3450110, @efriedma wrote:

>> I disagree with this on principle -- IMO, it is basically a historical bug 
>> in GCC that it ignores the type alignment, and we should NOT try to match 
>> that -- it's dangerous.
>
> gcc has always behaved this way, and I don't see any indication they intend 
> to change it.  I mean, you can call it a bug, but at the end of the day the 
> bug reports will land in our bugtracker, not gcc's.

Have we had many other such bug reports?

On the other hand, I have seen many cases where people wrote code using the 
`__atomic_*` APIs and passed arguments which are underaligned on some platforms 
(though not the one the code was developed on). Having the operation be silently 
non-atomic (which is what typically happens with misaligned atomic 
instructions) is just really nasty.

>> Ask GCC to modify libstdc++ so that `__builtin_addressof` is called 
>> directly, instead of going through `std::__addressof`.
>
> Even if gcc did this today, it would take years to reach people on Linux.

True, but the behavior in the meantime is correct. And given the apparent lack 
of widespread issues, I'm not sure it much matters.




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-15 Thread Eli Friedman via Phabricator via cfe-commits
efriedma added a comment.

> Have we verified that this is the rule that GCC uses? Is it true even if e.g. 
> the pointer expression is the address of a variable with a known alignment, 
> or if the pointer has an explicitly-aligned type (e.g. with an aligned 
> typedef)?

As far as I can tell, if the type's size is a power of two and an inline atomic 
sequence is available, gcc uses an inline sequence that assumes the address is 
naturally aligned; otherwise, it makes the libcall.  I've tried a variety of 
ways of writing the operation, and the actual alignment never affects the code 
gcc emits.

Oddly, `__atomic_is_lock_free`/`__atomic_always_lock_free` do care about the 
alignment, though.




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-14 Thread John McCall via Phabricator via cfe-commits
rjmccall added a comment.

The GCC documentation really ought to specify what alignment assumptions these 
functions make, if any.  But they control the specification here, so if they're 
making alignment assumptions, then I agree we probably ought to make the same 
assumptions.

Have we verified that this is the rule that GCC uses?  Is it true even if e.g. 
the pointer expression is the address of a variable with a known alignment, or 
if the pointer has an explicitly-aligned type (e.g. with an aligned `typedef`)?




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-13 Thread Dávid Bolvanský via Phabricator via cfe-commits
xbolva00 added subscribers: jwakely, jakubjelinek, msebor.
xbolva00 added a comment.

>> IMO, it is basically a historical bug in GCC that it ignores the type 
>> alignment

Do you have any comments from gcc devs indicating that this is a "known" bug?

cc @jakubjelinek @msebor @jwakely




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-13 Thread Eli Friedman via Phabricator via cfe-commits
efriedma added a comment.

> I disagree with this on principle -- IMO, it is basically a historical bug in 
> GCC that it ignores the type alignment, and we should NOT try to match that 
> -- it's dangerous.

gcc has always behaved this way, and I don't see any indication they intend to 
change it.  I mean, you can call it a bug, but at the end of the day the bug 
reports will land in our bugtracker, not gcc's.

> As a workaround: add alignas(uint64_t) to the affected struct in lld. (is 
> GHashCell the only one?)

I think that's the only one, at least according to `git grep std::atomic`; I 
guess that works.  (Assuming you meant `alignas(sizeof(uint64_t))`.)

> Ask GCC to modify libstdc++ so that `__builtin_addressof` is called directly, 
> instead of going through `std::__addressof`.

Even if gcc did this today, it would take years to reach people on Linux.

I guess this is motivation to implement namespaced builtins...?




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-12 Thread James Y Knight via Phabricator via cfe-commits
jyknight added a comment.

I disagree with this on principle -- IMO, it is basically a historical bug in 
GCC that it ignores the type alignment, and we should NOT try to match that -- 
it's dangerous.

We ought to resolve the bug via other fixes:

- As a workaround: add alignas(uint64_t) to the affected struct in lld. (is 
GHashCell the only one?)
- Fix `__builtin_addressof` in Clang to propagate alignment in the same way `&` 
does (they ought to act equivalently; I think a new stanza in 
CodeGenFunction::EmitPointerWithAlignment will fix that).
- Ask GCC to modify libstdc++ so that `__builtin_addressof` is called directly, 
instead of going through `std::__addressof`.

In addition to not liking the intended change, I think this implementation is 
wrong -- and will be treacherous to attempt to fix -- because I'm pretty sure 
GCC's implementation doesn't assume particular alignments in general; if it 
decides it _cannot_ emit inline atomics (based on the size and target's 
configuration), then no extra alignment is assumed. (And note that Clang's idea 
of MaxPromoteWidth is unrelated to which sizes of atomics GCC inlines.)




[PATCH] D123642: [clang codegen] Assume arguments of __atomic_* are aligned.

2022-04-12 Thread Eli Friedman via Phabricator via cfe-commits
efriedma created this revision.
efriedma added reviewers: jyknight, craig.topper, jfb, rjmccall.
Herald added a subscriber: StephenFan.
Herald added a project: All.
efriedma requested review of this revision.
Herald added a project: clang.

This seems weird, but it's necessary to match gcc and to avoid unnecessary 
calls to libatomic from libstdc++.

Fixes https://github.com/llvm/llvm-project/issues/54790.



https://reviews.llvm.org/D123642

Files:
  clang/lib/CodeGen/CGAtomic.cpp
  clang/test/CodeGen/atomic-ops.c
  clang/test/CodeGen/atomics-sema-alignment.c

Index: clang/test/CodeGen/atomics-sema-alignment.c
===
--- clang/test/CodeGen/atomics-sema-alignment.c
+++ clang/test/CodeGen/atomics-sema-alignment.c
@@ -12,10 +12,10 @@
 
 void func(IntPair *p) {
   IntPair res;
-  __atomic_load(p, &res, 0);// expected-warning {{misaligned atomic operation may incur significant performance penalty; the expected alignment (8 bytes) exceeds the actual alignment (4 bytes)}}
-  __atomic_store(p, &res, 0);   // expected-warning {{misaligned atomic operation may incur significant performance penalty; the expected alignment (8 bytes) exceeds the actual alignment (4 bytes)}}
-  __atomic_fetch_add((unaligned_int *)p, 1, 2); // expected-warning {{misaligned atomic operation may incur significant performance penalty; the expected alignment (4 bytes) exceeds the actual alignment (1 bytes)}}
-  __atomic_fetch_sub((unaligned_int *)p, 1, 3); // expected-warning {{misaligned atomic operation may incur significant performance penalty; the expected alignment (4 bytes) exceeds the actual alignment (1 bytes)}}
+  __atomic_load(p, &res, 0);
+  __atomic_store(p, &res, 0);
+  __atomic_fetch_add((unaligned_int *)p, 1, 2);
+  __atomic_fetch_sub((unaligned_int *)p, 1, 3);
 }
 
 void func1(LongStruct *p) {
@@ -40,7 +40,7 @@
 
 void braz(Foo *foo, ThirtyTwo *braz) {
   Foo bar;
-  __atomic_load(foo, &bar, __ATOMIC_RELAXED); // expected-warning {{misaligned atomic operation may incur significant performance penalty; the expected alignment (16 bytes) exceeds the actual alignment (8 bytes)}}
+  __atomic_load(foo, &bar, __ATOMIC_RELAXED);
 
   ThirtyTwo thirtyTwo1;
   ThirtyTwo thirtyTwo2;
Index: clang/test/CodeGen/atomic-ops.c
===
--- clang/test/CodeGen/atomic-ops.c
+++ clang/test/CodeGen/atomic-ops.c
@@ -200,9 +200,8 @@
   // CHECK: [[RETVAL:%.*]] = alloca %struct.S, align 4
   // CHECK: [[A:%.*]]   = bitcast %struct.S* {{.*}} to i64*
   // CHECK: [[CAST:%.*]]  = bitcast %struct.S* [[RETVAL]] to i64*
-  // CHECK: [[SRC:%.*]]  = bitcast i64* [[A]] to i8*
-  // CHECK: [[DEST:%.*]]  = bitcast i64* [[CAST]] to i8*
-  // CHECK: call void @__atomic_load(i32 noundef 8, i8* noundef [[SRC]], i8* noundef [[DEST]], i32 noundef 5)
+  // CHECK: load atomic i64, i64* [[A]]
+  // CHECK: store i64 {{.*}}, i64* [[CAST]]
   // CHECK: ret
   struct S ret;
  __atomic_load(a, &ret, memory_order_seq_cst);
@@ -219,9 +218,8 @@
   // CHECK-NEXT: [[LOAD_B_PTR:%.*]] = load %struct.S*, %struct.S** [[B_ADDR]], align 4
   // CHECK-NEXT: [[COERCED_A_TMP:%.*]] = bitcast %struct.S* [[LOAD_A_PTR]] to i64*
   // CHECK-NEXT: [[COERCED_B:%.*]] = bitcast %struct.S* [[LOAD_B_PTR]] to i64*
-  // CHECK-NEXT: [[COERCED_A:%.*]] = bitcast i64* [[COERCED_A_TMP]] to i8*
-  // CHECK-NEXT: [[CAST_B:%.*]] = bitcast i64* [[COERCED_B]] to i8*
-  // CHECK-NEXT: call void @__atomic_store(i32 noundef 8, i8* noundef [[COERCED_A]], i8* noundef [[CAST_B]],
+  // CHECK-NEXT: load i64, i64* [[COERCED_B]]
+  // CHECK-NEXT: store atomic i64 {{.*}}, i64* [[COERCED_A_TMP]]
   // CHECK-NEXT: ret void
   __atomic_store(a, b, memory_order_seq_cst);
 }
@@ -240,11 +238,8 @@
   // CHECK-NEXT: [[COERCED_A_TMP:%.*]] = bitcast %struct.S* [[LOAD_A_PTR]] to i64*
   // CHECK-NEXT: [[COERCED_B:%.*]] = bitcast %struct.S* [[LOAD_B_PTR]] to i64*
   // CHECK-NEXT: [[COERCED_C:%.*]] = bitcast %struct.S* [[LOAD_C_PTR]] to i64*
-  // CHECK-NEXT: [[COERCED_A:%.*]] = bitcast i64* [[COERCED_A_TMP]] to i8*
-  // CHECK-NEXT: [[CAST_B:%.*]] = bitcast i64* [[COERCED_B]] to i8*
-  // CHECK-NEXT: [[CAST_C:%.*]] = bitcast i64* [[COERCED_C]] to i8*
-  // CHECK-NEXT: call void @__atomic_exchange(i32 noundef 8, i8* noundef [[COERCED_A]], i8* noundef [[CAST_B]], i8* noundef [[CAST_C]],
-
+  // CHECK-NEXT: load i64, i64* [[COERCED_B]]
+  // CHECK-NEXT: atomicrmw xchg i64* [[COERCED_A_TMP]]
   __atomic_exchange(a, b, c, memory_order_seq_cst);
 }
 
@@ -262,11 +257,9 @@
   // CHECK-NEXT: [[COERCED_A_TMP:%.*]] = bitcast %struct.S* [[LOAD_A_PTR]] to i64*
   // CHECK-NEXT: [[COERCED_B_TMP:%.*]] = bitcast %struct.S* [[LOAD_B_PTR]] to i64*
   // CHECK-NEXT: [[COERCED_C:%.*]] = bitcast %struct.S* [[LOAD_C_PTR]] to i64*
-  // CHECK-NEXT: [[COERCED_A:%.*]] = bitcast i64* [[COERCED_A_TMP]] to i8*
-  // CHECK-NEXT: [[COERCED_B:%.*]] = bitcast i64* [[COERCED_B_TMP]] to i8*
-  //