================
@@ -1097,6 +1117,24 @@ mlir::LogicalResult 
CIRToLLVMCallOpLowering::matchAndRewrite(
                              getTypeConverter(), op.getCalleeAttr());
 }
 
+mlir::LogicalResult CIRToLLVMReturnAddrOpLowering::matchAndRewrite(
+    cir::ReturnAddrOp op, OpAdaptor adaptor,
+    mlir::ConversionPatternRewriter &rewriter) const {
+  auto llvmPtrTy = mlir::LLVM::LLVMPointerType::get(rewriter.getContext());
+  replaceOpWithCallLLVMIntrinsicOp(rewriter, op, "llvm.returnaddress",
----------------
bcardosolopes wrote:

The information coming from these builtins is likely useful for marking some 
functions as unsafe for certain optimizations (e.g. where the return address, 
or stack addresses in general, might escape); they feel very different IMO 
from random shift vectors for ARM SVE. That said, you are right that we don't 
use them right now, but I'd argue for keeping them because their behavior can 
be more intrusive than that of regular intrinsics.

https://github.com/llvm/llvm-project/pull/153698
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits