llvmbot wrote:

<!--LLVM PR SUMMARY COMMENT-->

@llvm/pr-subscribers-backend-x86

Author: Joshua Cranmer (jcranmer-intel)

<details>
<summary>Changes</summary>

This makes infinities and NaNs print in the new, more explicit output format,
and extends the base decimal output literal to support non-double types.
In addition, the legacy hexadecimal floating-point literal format is no longer
emitted under any circumstance.
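As a concrete illustration (constants taken from the test updates in this patch), a `float` value that previously printed as a double-width hexadecimal literal now prints either as a width-matched `f0x` literal or, where the decimal form round-trips, as a decimal literal:

```llvm
; Before this patch: float constants were printed as 64-bit hex literals.
@F1u = global float 0x3FF0000020000000
@FLu = global float 0x3FB99999A0000000

; After this patch: width-specific f0x literals, or decimal where possible.
@F1u = global float f0x3F800001
@FLu = global float 1.000000e-01
```

The `f0x` prefix makes the literal's width explicit (8 hex digits for a `float`), instead of encoding single-precision values in a double-width hex literal.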

A script to update the output for most test files is available at
https://gist.github.com/jcranmer-intel/d279296ad91884c98b77fb23a9112a5a.
Full disclosure: the Python portion of the script was written with the
help of AI tools. The C++ portion was written entirely by hand, reusing
LLVM's existing output helpers.

The test changes included in this commit are *only* those that do not
work with that script, either because the script mangled the conversion
(e.g., for tests that intentionally match only a prefix to allow some
imprecision) or because the tests were too unusual for the script to
find the floating-point literals being compared against.

---

Patch is 3.25 MiB, truncated to 20.00 KiB below, full version: 
https://github.com/llvm/llvm-project/pull/190649.diff


618 Files Affected:

- (modified) clang/test/AST/ByteCode/codegen.cpp (+1-1) 
- (modified) clang/test/AST/ByteCode/const-fpfeatures.cpp (+8-8) 
- (modified) clang/test/AST/const-fpfeatures.c (+13-13) 
- (modified) clang/test/AST/const-fpfeatures.cpp (+19-19) 
- (modified) clang/test/C/C11/n1396.c (+20-20) 
- (modified) clang/test/C/C2y/n3364.c (+9-9) 
- (modified) clang/test/C/C2y/n3460_1.c (+2-2) 
- (modified) clang/test/CIR/CodeGen/atomic.c (+2-2) 
- (modified) clang/test/CIR/CodeGen/binassign.c (+2-2) 
- (modified) clang/test/CIR/CodeGen/comma.c (+2-2) 
- (modified) clang/test/CIR/CodeGen/complex.cpp (+4-4) 
- (modified) clang/test/CIR/CodeGen/throws.cpp (+3-3) 
- (modified) clang/test/CIR/CodeGen/union.c (+2-2) 
- (modified) clang/test/CIR/CodeGen/union.cpp (+2-2) 
- (modified) clang/test/CIR/CodeGenBuiltins/X86/avx512fp16-builtins.c (+4-4) 
- (modified) clang/test/CIR/CodeGenBuiltins/X86/avx512vlfp16-builtins.c (+8-8) 
- (modified) clang/test/CIR/CodeGenBuiltins/builtin-call.cpp (+8-8) 
- (modified) clang/test/CIR/CodeGenBuiltins/builtin-isinf-sign.c (+1-1) 
- (modified) clang/test/CIR/Lowering/global-var-simple.cpp (+8-8) 
- (modified) clang/test/CodeGen/AArch64/atomic-ops-float-check-minmax.c (+6-6) 
- (modified) clang/test/CodeGen/AArch64/neon/fullfp16.c (+1-1) 
- (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c (+10-10)
- (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c (+4-4) 
- (modified) clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c (+6-6) 
- (modified) clang/test/CodeGen/PowerPC/ppc64-complex-parms.c (+4-4) 
- (modified) clang/test/CodeGen/RISCV/riscv64-vararg.c (+3-3) 
- (modified) clang/test/CodeGen/SystemZ/atomic_is_lock_free.c (+1-1) 
- (modified) clang/test/CodeGen/SystemZ/builtins-systemz-zvector-constrained.c (+4-4)
- (modified) clang/test/CodeGen/SystemZ/builtins-systemz-zvector.c (+4-4) 
- (modified) clang/test/CodeGen/X86/Float16-arithmetic.c (+1-1) 
- (modified) clang/test/CodeGen/X86/Float16-complex.c (+29-29) 
- (modified) clang/test/CodeGen/X86/avx512fp16-builtins.c (+10-10) 
- (modified) clang/test/CodeGen/X86/avx512vlfp16-builtins.c (+11-11) 
- (modified) clang/test/CodeGen/X86/long-double-config-size.c (+3-3) 
- (modified) clang/test/CodeGen/X86/x86-atomic-long_double.c (+20-20) 
- (modified) clang/test/CodeGen/X86/x86_64-longdouble.c (+4-4) 
- (modified) clang/test/CodeGen/atomic.c (+2-2) 
- (modified) clang/test/CodeGen/attr-target-mv.c (+2-2) 
- (modified) clang/test/CodeGen/builtin-complex.c (+2-2) 
- (modified) clang/test/CodeGen/builtin-nan-exception.c (+3-3) 
- (modified) clang/test/CodeGen/builtin-nan-legacy.c (+2-2) 
- (modified) clang/test/CodeGen/builtin-nanf.c (+1-1) 
- (modified) clang/test/CodeGen/builtin_Float16.c (+4-4) 
- (modified) clang/test/CodeGen/builtins-elementwise-math.c (+1-1) 
- (modified) clang/test/CodeGen/builtins-nvptx.c (+20-20) 
- (modified) clang/test/CodeGen/builtins-reduction-math.c (+2-2) 
- (modified) clang/test/CodeGen/builtins.c (+23-39) 
- (modified) clang/test/CodeGen/captured-statements.c (+1-1) 
- (modified) clang/test/CodeGen/catch-undef-behavior.c (+5-5) 
- (modified) clang/test/CodeGen/complex-init-list.c (+1-1) 
- (modified) clang/test/CodeGen/complex_Float16.c (+1-1) 
- (modified) clang/test/CodeGen/conditional.c (+1-1) 
- (modified) clang/test/CodeGen/const-init.c (+1-1) 
- (modified) clang/test/CodeGen/constexpr-c23-internal-linkage.c (+1-1) 
- (modified) clang/test/CodeGen/cx-complex-range-real.c (+4-4) 
- (modified) clang/test/CodeGen/ext-vector.c (+1-1) 
- (modified) clang/test/CodeGen/fp-floatcontrol-pragma.cpp (+1-1) 
- (modified) clang/test/CodeGen/fp16-ops-strictfp.c (+6-6) 
- (modified) clang/test/CodeGen/fp16-ops.c (+3-3) 
- (modified) clang/test/CodeGen/isfpclass.c (+2-2) 
- (modified) clang/test/CodeGen/logb_scalbn.c (+102-102) 
- (modified) clang/test/CodeGen/math-builtins-long.c (+8-8) 
- (modified) clang/test/CodeGen/mingw-long-double.c (+4-4) 
- (modified) clang/test/CodeGen/mips-unsupported-nan.c (+4-4) 
- (modified) clang/test/CodeGen/ppc-vec_ct-truncate.c (+8-8) 
- (modified) clang/test/CodeGen/rounding-math.c (+3-3) 
- (modified) clang/test/CodeGen/rounding-math.cpp (+12-12) 
- (modified) clang/test/CodeGen/spir-half-type.cpp (+20-20) 
- (modified) clang/test/CodeGen/strictfp_builtins.c (+3-3) 
- (modified) clang/test/CodeGenCUDA/long-double.cu (+1-1) 
- (modified) clang/test/CodeGenCUDA/printf.cu (+1-1) 
- (modified) clang/test/CodeGenCUDA/types.cu (+1-1) 
- (modified) clang/test/CodeGenCXX/auto-var-init.cpp (+29-29) 
- (modified) clang/test/CodeGenCXX/blocks-cxx11.cpp (+1-1) 
- (modified) clang/test/CodeGenCXX/const-init.cpp (+1-1) 
- (modified) clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp (+1-1) 
- (modified) clang/test/CodeGenCXX/float128-declarations.cpp (+24-24) 
- (modified) clang/test/CodeGenCXX/float16-declarations.cpp (+16-16) 
- (modified) clang/test/CodeGenCXX/ibm128-declarations.cpp (+1-1) 
- (modified) clang/test/CodeGenCXX/lambda-deterministic-captures.cpp (+3-3) 
- (modified) clang/test/CodeGenHLSL/BasicFeatures/ArrayElementwiseCast.hlsl (+1-1)
- (modified) clang/test/CodeGenHLSL/BasicFeatures/frem_modulo.hlsl (+4-4) 
- (modified) clang/test/CodeGenHLSL/Operators/logical-not.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/RootSignature.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/builtins/D3DCOLORtoUBYTE4.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/builtins/VectorSwizzles.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/builtins/dst.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/builtins/faceforward.hlsl (+4-4) 
- (modified) clang/test/CodeGenHLSL/builtins/lit.hlsl (+6-6) 
- (modified) clang/test/CodeGenHLSL/builtins/rcp-builtin.hlsl (+1-1) 
- (modified) clang/test/CodeGenHLSL/builtins/rcp.hlsl (+4-4) 
- (modified) clang/test/CodeGenHLSL/builtins/reflect.hlsl (+5-5) 
- (modified) clang/test/CodeGenHLSL/builtins/refract.hlsl (+10-10) 
- (modified) clang/test/CodeGenHLSL/builtins/smoothstep.hlsl (+8-8) 
- (modified) clang/test/CodeGenHLSL/vk-features/vk.spec-constant.hlsl (+2-2) 
- (modified) clang/test/CodeGenObjC/objc-literal-tests.m (+2-2) 
- (modified) clang/test/CodeGenObjC/objc2-constant-number-literal.m (+5-5) 
- (modified) clang/test/CodeGenOpenCL/amdgpu-alignment.cl (+4-4) 
- (modified) clang/test/CodeGenOpenCL/half.cl (+4-4) 
- (modified) clang/test/Frontend/fixed_point_compound.c (+7-7) 
- (modified) clang/test/Frontend/fixed_point_conversions.c (+10-10) 
- (modified) clang/test/Frontend/fixed_point_conversions_const.c (+1-1) 
- (modified) clang/test/Frontend/fixed_point_conversions_half.c (+21-21) 
- (modified) clang/test/Headers/__clang_hip_math.hip (+58-58) 
- (modified) clang/test/Headers/cuda_wrapper_algorithm.cu (+4-4) 
- (modified) clang/test/Lexer/11-27-2007-FloatLiterals.c (+5-5) 
- (modified) clang/test/OpenMP/atomic_capture_codegen.cpp (+1-1) 
- (modified) clang/test/OpenMP/atomic_update_codegen.cpp (+1-1) 
- (modified) clang/test/OpenMP/declare_reduction_codegen.c (+2-2) 
- (modified) clang/test/OpenMP/declare_target_constexpr_codegen.cpp (+1-1) 
- (modified) clang/test/OpenMP/for_reduction_codegen.cpp (+1-1) 
- (modified) clang/test/OpenMP/parallel_reduction_codegen.cpp (+2-2) 
- (modified) clang/test/OpenMP/sections_reduction_codegen.cpp (+1-1) 
- (modified) flang/test/Fir/target-complex16.f90 (+1-1) 
- (modified) llvm/docs/LangRef.rst (+54-30) 
- (modified) llvm/docs/ReleaseNotes.md (+12) 
- (modified) llvm/include/llvm/ADT/APFloat.h (+30-1) 
- (modified) llvm/include/llvm/AsmParser/LLLexer.h (+1) 
- (modified) llvm/include/llvm/AsmParser/LLToken.h (+4-2) 
- (modified) llvm/lib/AsmParser/LLLexer.cpp (+163-38) 
- (modified) llvm/lib/AsmParser/LLParser.cpp (+32-2) 
- (modified) llvm/lib/CodeGen/MIRParser/MILexer.cpp (+18) 
- (modified) llvm/lib/IR/AsmWriter.cpp (+42-84) 
- (modified) llvm/lib/Support/APFloat.cpp (+11-1) 
- (modified) llvm/test/Analysis/CostModel/AArch64/arith-bf16.ll (+12-12) 
- (modified) llvm/test/Analysis/CostModel/AArch64/arith-fp.ll (+8-8) 
- (modified) llvm/test/Analysis/CostModel/AArch64/insert-extract.ll (+2-2) 
- (modified) llvm/test/Analysis/CostModel/AArch64/pow-special.ll (+12-12) 
- (modified) llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll (+48-48) 
- (modified) llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll (+80-80) 
- (modified) llvm/test/Analysis/CostModel/ARM/divrem.ll (+40-40) 
- (modified) llvm/test/Analysis/CostModel/ARM/reduce-fp.ll (+48-48) 
- (modified) llvm/test/Analysis/CostModel/RISCV/phi-const.ll (+1-1) 
- (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll (+224-224) 
- (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll (+126-126) 
- (modified) llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll (+3-3) 
- (modified) llvm/test/Analysis/TypeBasedAliasAnalysis/dynamic-indices.ll (+1-1)
- (modified) llvm/test/Assembler/2002-04-07-InfConstant.ll (+1-1) 
- (modified) llvm/test/Assembler/2005-01-03-FPConstantDisassembly.ll (+1-1) 
- (modified) llvm/test/Assembler/2006-09-28-CrashOnInvalid.ll (+1-1) 
- (modified) llvm/test/Assembler/bfloat.ll (+7-7) 
- (modified) llvm/test/Assembler/constant-splat.ll (+5-5) 
- (added) llvm/test/Assembler/float-literals.ll (+44) 
- (modified) llvm/test/Assembler/half-constprop.ll (+1-1) 
- (modified) llvm/test/Assembler/half-conv.ll (+1-1) 
- (modified) llvm/test/Assembler/short-hexpair.ll (+1-1) 
- (modified) llvm/test/Bitcode/compatibility-3.6.ll (+1-1) 
- (modified) llvm/test/Bitcode/compatibility-3.7.ll (+1-1) 
- (modified) llvm/test/Bitcode/compatibility-3.8.ll (+3-3) 
- (modified) llvm/test/Bitcode/compatibility-3.9.ll (+3-3) 
- (modified) llvm/test/Bitcode/compatibility-4.0.ll (+3-3) 
- (modified) llvm/test/Bitcode/compatibility-5.0.ll (+3-3) 
- (modified) llvm/test/Bitcode/compatibility-6.0.ll (+3-3) 
- (modified) llvm/test/Bitcode/compatibility.ll (+3-3) 
- (modified) llvm/test/Bitcode/constant-splat.ll (+5-5) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll (+1-1) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir (+2-2) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir (+1-1) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir (+12-12)
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir (+2-2) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir (+3-3) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir (+1-1) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/constant-mir-debugify.mir (+1-1)
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-constant.mir (+1-1) 
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir (+6-6)
- (modified) llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir (+2-2)
- (modified) llvm/test/CodeGen/AArch64/convertphitype.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir (+6-6)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir (+11-11)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir (+3-3)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll (+6-6) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-amdgcn.rsq.clamp.mir (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-divrem.mir (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir (+30-30) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fexp.mir (+66-66) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fexp2.mir (+9-9) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-ffloor.mir (+4-4) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-flog.mir (+26-26) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-flog10.mir (+26-26) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-flog2.mir (+6-6) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaximum.mir (+6-6) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminimum.mir (+6-6) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fpow.mir (+27-27) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fpowi.mir (+7-7) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fptosi.mir (+27-27) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fptoui.mir (+27-27) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fshl.mir (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fshr.mir (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir (+30-30) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsqrt.mir (+12-12) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-amdgcn-fdiv-fast.mir (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir (+36-36)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-rotl-rotr.mir (+3-3) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sdiv.mir (+72-72) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir (+4-4) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-srem.mir (+72-72) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-udiv.mir (+72-72) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir (+4-4) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-urem.mir (+72-72) 
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir (+5-5)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir (+10-10)
- (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fdiv.f64.ll (+4-4) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fdiv.ll (+122-122) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll (+3-3)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-idiv.ll (+28-28) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-log.ll (+8-8) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-sqrt.ll (+22-22) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll (+329-329) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pown-fast.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pown.ll (+8-8) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-powr.ll (+250-250)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll (+55-55) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-acos.ll (+6-6)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-acosh.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-acospi.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-asin.ll (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-asinh.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-asinpi.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-atan.ll (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-atanh.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-atanpi.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-cbrt.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-cos.ll (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-cosh.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-cospi.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-erf.ll (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-erfc.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-exp.ll (+4-4) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-exp10.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-exp2.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-expm1.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-log.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-log10.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-log2.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-rsqrt.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-sin.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-sinh.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-sinpi.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-sqrt.ll (+4-4)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-tan.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-tanh.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-tanpi.ll (+1-1)
- (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-tdo-tgamma.ll (+2-2)
- (modified) llvm/test/CodeGen/AMDGPU/fract-match.ll (+146-146) 
- (modified) llvm/test/CodeGen/AMDGPU/global_atomic_optimizer_fp_rtn.ll (+42-42)
- (modified) llvm/test/CodeGen/AMDGPU/global_atomics_optimizer_fp_no_rtn.ll (+32-32)
- (modified) llvm/test/CodeGen/AMDGPU/lower-module-lds.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/nested-loop-conditions.ll (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/prevent-fmul-hoist-ir.ll (+6-6) 
- (modified) llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll (+1-1) 
- (modified) llvm/test/CodeGen/AMDGPU/promote-alloca-subvecs.ll (+8-8) 
- (modified) llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll (+2-2) 
- (modified) llvm/test/CodeGen/AMDGPU/unstructured-cfg-def-use-issue.ll (+3-3) 
- (modified) llvm/test/CodeGen/ARM/GlobalISel/arm-legalize-fp.mir (+2-2) 
- (modified) llvm/test/CodeGen/ARM/vector-promotion.ll (+4-4) 
- (modified) llvm/test/CodeGen/DirectX/MemIntrinsics/memset.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/all.ll (+1-1) 
- (modified) llvm/test/CodeGen/DirectX/any.ll (+1-1) 
- (modified) llvm/test/CodeGen/DirectX/atan2.ll (+16-16) 
- (modified) llvm/test/CodeGen/DirectX/degrees.ll (+6-6) 
- (modified) llvm/test/CodeGen/DirectX/exp-vec.ll (+1-1) 
- (modified) llvm/test/CodeGen/DirectX/exp.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/log-vec.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/log.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/log10.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/radians.ll (+10-10) 
- (modified) llvm/test/CodeGen/DirectX/sign.ll (+2-2) 
- (modified) llvm/test/CodeGen/DirectX/step.ll (+4-4) 
- (modified) llvm/test/CodeGen/DirectX/vector_reduce_add.ll (+1-1) 
- (modified) llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir (+2-2) 
- (modified) llvm/test/CodeGen/MIR/NVPTX/floating-point-immediate-operands.mir (+2-2)
- (modified) llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir (+1-1)
- (modified) llvm/test/CodeGen/Mips/GlobalISel/legalizer/float_constants.mir (+4-4)
- (modified) llvm/test/CodeGen/Mips/GlobalISel/legalizer/fptosi_and_fptoui.mir (+12-12)
- (modified) llvm/test/CodeGen/Mips/GlobalISel/legalizer/sitofp_and_uitofp.mir (+12-12)
- (modified) llvm/test/CodeGen/Mips/GlobalISel/regbankselect/float_constants.mir (+4-4)
- (modified) llvm/test/CodeGen/Mips/GlobalISel/regbankselect/sitofp_and_uitofp.mir (+2-2)
- (modified) llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll (+1-1) 


``````````diff
diff --git a/clang/test/AST/ByteCode/codegen.cpp b/clang/test/AST/ByteCode/codegen.cpp
index cbb0504c89f13..c2d2a6ef22bdc 100644
--- a/clang/test/AST/ByteCode/codegen.cpp
+++ b/clang/test/AST/ByteCode/codegen.cpp
@@ -27,7 +27,7 @@ S s;
 // CHECK: @sp = constant ptr getelementptr (i8, ptr @s, i64 16), align 8
 float &sp = s.c[3];
 
-// CHECK: @PR9558 = global float 0.000000e+0
+// CHECK: @PR9558 = global float 0.000000e+00
 float PR9558 = reinterpret_cast<const float&>("asd");
 // CHECK: @i = constant ptr @PR9558
 int &i = reinterpret_cast<int&>(PR9558);
diff --git a/clang/test/AST/ByteCode/const-fpfeatures.cpp b/clang/test/AST/ByteCode/const-fpfeatures.cpp
index 0764e3d8ba813..15f2af0bb4d0c 100644
--- a/clang/test/AST/ByteCode/const-fpfeatures.cpp
+++ b/clang/test/AST/ByteCode/const-fpfeatures.cpp
@@ -7,12 +7,12 @@
 float F1u = 1.0F + 0x0.000002p0F;
 float F2u = 1.0F + 0x0.000001p0F;
 float F3u = 0x1.000001p0;
-// CHECK: @F1u = {{.*}} float 0x3FF0000020000000
-// CHECK: @F2u = {{.*}} float 0x3FF0000020000000
-// CHECK: @F3u = {{.*}} float 0x3FF0000020000000
+// CHECK: @F1u = {{.*}} float f0x3F800001
+// CHECK: @F2u = {{.*}} float f0x3F800001
+// CHECK: @F3u = {{.*}} float f0x3F800001
 
 float FI1u = 0xFFFFFFFFU;
-// CHECK: @FI1u = {{.*}} float 0x41F0000000000000
+// CHECK: @FI1u = {{.*}} float f0x4F800000
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 
@@ -20,13 +20,13 @@ float F1d = 1.0F + 0x0.000002p0F;
 float F2d = 1.0F + 0x0.000001p0F;
 float F3d = 0x1.000001p0;
 
-// CHECK: @F1d = {{.*}} float 0x3FF0000020000000
+// CHECK: @F1d = {{.*}} float f0x3F800001
 // CHECK: @F2d = {{.*}} float 1.000000e+00
 // CHECK: @F3d = {{.*}} float 1.000000e+00
 
 
 float FI1d = 0xFFFFFFFFU;
-// CHECK: @FI1d = {{.*}} float 0x41EFFFFFE0000000
+// CHECK: @FI1d = {{.*}} float f0x4F7FFFFF
 
 // nextUp(1.F) == 0x1.000002p0F
 
@@ -47,7 +47,7 @@ constexpr float add_round_up(float x, float y) {
 float V1 = add_round_down(1.0F, 0x0.000001p0F);
 float V2 = add_round_up(1.0F, 0x0.000001p0F);
 // CHECK: @V1 = {{.*}} float 1.000000e+00
-// CHECK: @V2 = {{.*}} float 0x3FF0000020000000
+// CHECK: @V2 = {{.*}} float f0x3F800001
 
 
 constexpr float add_cast_round_down(float x, double y) {
@@ -68,4 +68,4 @@ float V3 = add_cast_round_down(1.0F, 0x0.000001p0F);
 float V4 = add_cast_round_up(1.0F, 0x0.000001p0F);
 
 // CHECK: @V3 = {{.*}} float 1.000000e+00
-// CHECK: @V4 = {{.*}} float 0x3FF0000020000000
+// CHECK: @V4 = {{.*}} float f0x3F800001
diff --git a/clang/test/AST/const-fpfeatures.c b/clang/test/AST/const-fpfeatures.c
index 787bb989dd4a2..15dc607afe231 100644
--- a/clang/test/AST/const-fpfeatures.c
+++ b/clang/test/AST/const-fpfeatures.c
@@ -9,25 +9,25 @@ const double _Complex C0 = 0x1.000001p0 + 0x1.000001p0I;
 float F1u = 1.0F + 0x0.000002p0F;
 float F2u = 1.0F + 0x0.000001p0F;
 float F3u = 0x1.000001p0;
-// CHECK: @F1u = {{.*}} float 0x3FF0000020000000
-// CHECK: @F2u = {{.*}} float 0x3FF0000020000000
-// CHECK: @F3u = {{.*}} float 0x3FF0000020000000
+// CHECK: @F1u = {{.*}} float f0x3F800001
+// CHECK: @F2u = {{.*}} float f0x3F800001
+// CHECK: @F3u = {{.*}} float f0x3F800001
 
 float FI1u = 0xFFFFFFFFU;
-// CHECK: @FI1u = {{.*}} float 0x41F0000000000000
+// CHECK: @FI1u = {{.*}} float f0x4F800000
 
 float _Complex C1u = C0;
-// CHECK: @C1u = {{.*}} { float, float } { float 0x3FF0000020000000, float 0x3FF0000020000000 }
+// CHECK: @C1u = {{.*}} { float, float } { float f0x3F800001, float f0x3F800001 }
 
 float FLu = 0.1F;
-// CHECK: @FLu = {{.*}} float 0x3FB99999A0000000
+// CHECK: @FLu = {{.*}} float 1.000000e-01
 
 typedef float  vector2float  __attribute__((__vector_size__(8)));
 typedef double vector2double  __attribute__((__vector_size__(16)));
 const vector2float V2Fu = {1.0F + 0x0.000001p0F, 1.0F + 0x0.000002p0F};
 vector2double V2Du = __builtin_convertvector(V2Fu, vector2double);
-// CHECK: @V2Fu = {{.*}} <2 x float> splat (float 0x3FF0000020000000)
-// CHECK: @V2Du = {{.*}} <2 x double> splat (double 0x3FF0000020000000)
+// CHECK: @V2Fu = {{.*}} <2 x float> splat (float f0x3F800001)
+// CHECK: @V2Du = {{.*}} <2 x double> splat (double f0x3FF0000020000000)
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 
@@ -35,20 +35,20 @@ float F1d = 1.0F + 0x0.000002p0F;
 float F2d = 1.0F + 0x0.000001p0F;
 float F3d = 0x1.000001p0;
 
-// CHECK: @F1d = {{.*}} float 0x3FF0000020000000
+// CHECK: @F1d = {{.*}} float f0x3F800001
 // CHECK: @F2d = {{.*}} float 1.000000e+00
 // CHECK: @F3d = {{.*}} float 1.000000e+00
 
 float FI1d = 0xFFFFFFFFU;
-// CHECK: @FI1d = {{.*}} float 0x41EFFFFFE0000000
+// CHECK: @FI1d = {{.*}} float f0x4F7FFFFF
 
 float _Complex C1d = C0;
 // CHECK: @C1d = {{.*}} { float, float } { float 1.000000e+00, float 1.000000e+00 }
 
 float FLd = 0.1F;
-// CHECK: @FLd = {{.*}} float 0x3FB9999980000000
+// CHECK: @FLd = {{.*}} float f0x3DCCCCCC
 
 const vector2float V2Fd = {1.0F + 0x0.000001p0F, 1.0F + 0x0.000002p0F};
 vector2double V2Dd = __builtin_convertvector(V2Fd, vector2double);
-// CHECK: @V2Fd = {{.*}} <2 x float> <float 1.000000e+00, float 0x3FF0000020000000>
-// CHECK: @V2Dd = {{.*}} <2 x double> <double 1.000000e+00, double 0x3FF0000020000000>
+// CHECK: @V2Fd = {{.*}} <2 x float> <float 1.000000e+00, float f0x3F800001>
+// CHECK: @V2Dd = {{.*}} <2 x double> <double 1.000000e+00, double f0x3FF0000020000000>
diff --git a/clang/test/AST/const-fpfeatures.cpp b/clang/test/AST/const-fpfeatures.cpp
index 5e903c8c0e874..f5fdd3569a92c 100644
--- a/clang/test/AST/const-fpfeatures.cpp
+++ b/clang/test/AST/const-fpfeatures.cpp
@@ -19,7 +19,7 @@ constexpr float add_round_up(float x, float y) {
 float V1 = add_round_down(1.0F, 0x0.000001p0F);
 float V2 = add_round_up(1.0F, 0x0.000001p0F);
 // CHECK: @V1 = {{.*}} float 1.000000e+00
-// CHECK: @V2 = {{.*}} float 0x3FF0000020000000
+// CHECK: @V2 = {{.*}} float f0x3F800001
 
 constexpr float add_cast_round_down(float x, double y) {
   #pragma STDC FENV_ROUND FE_DOWNWARD
@@ -39,7 +39,7 @@ float V3 = add_cast_round_down(1.0F, 0x0.000001p0F);
 float V4 = add_cast_round_up(1.0F, 0x0.000001p0F);
 
 // CHECK: @V3 = {{.*}} float 1.000000e+00
-// CHECK: @V4 = {{.*}} float 0x3FF0000020000000
+// CHECK: @V4 = {{.*}} float f0x3F800001
 
 // The next three variables use the same function as initializer, only rounding
 // modes differ.
@@ -54,7 +54,7 @@ float V5 = []() -> float {
     }(1.0F, 0x0.000001p0F),
   0x0.000001p0F);
 }();
-// CHECK: @V5 = {{.*}} float 0x3FF0000040000000
+// CHECK: @V5 = {{.*}} float f0x3F800002
 
 float V6 = []() -> float {
   return [](float x, float y)->float {
@@ -66,7 +66,7 @@ float V6 = []() -> float {
     }(1.0F, 0x0.000001p0F),
   0x0.000001p0F);
 }();
-// CHECK: @V6 = {{.*}} float 0x3FF0000020000000
+// CHECK: @V6 = {{.*}} float f0x3F800001
 
 float V7 = []() -> float {
   return [](float x, float y)->float {
@@ -89,11 +89,11 @@ template<float V> struct L {
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 L<0.1F> val_d;
-// CHECK: @val_d = {{.*}} { float 0x3FB9999980000000 }
+// CHECK: @val_d = {{.*}} { float f0x3DCCCCCC }
 
 #pragma STDC FENV_ROUND FE_UPWARD
 L<0.1F> val_u;
-// CHECK: @val_u = {{.*}} { float 0x3FB99999A0000000 }
+// CHECK: @val_u = {{.*}} { float 1.000000e-01 }
 
 
 // Check literals in macros.
@@ -103,11 +103,11 @@ L<0.1F> val_u;
 
 #pragma STDC FENV_ROUND FE_UPWARD
 float C1_ru = CONSTANT_0_1;
-// CHECK: @C1_ru = {{.*}} float 0x3FB99999A0000000
+// CHECK: @C1_ru = {{.*}} float 1.000000e-01
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 float C1_rd = CONSTANT_0_1;
-// CHECK: @C1_rd = {{.*}} float 0x3FB9999980000000
+// CHECK: @C1_rd = {{.*}} float f0x3DCCCCCC
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 #define PRAGMA(x) _Pragma(#x)
@@ -116,14 +116,14 @@ float C1_rd = CONSTANT_0_1;
 #pragma STDC FENV_ROUND FE_UPWARD
 float C2_rd = CONSTANT_0_1_RM(0.1F, FE_DOWNWARD);
 float C2_ru = CONSTANT_0_1_RM(0.1F, FE_UPWARD);
-// CHECK: @C2_rd = {{.*}} float 0x3FB9999980000000
-// CHECK: @C2_ru = {{.*}} float 0x3FB99999A0000000
+// CHECK: @C2_rd = {{.*}} float f0x3DCCCCCC
+// CHECK: @C2_ru = {{.*}} float 1.000000e-01
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 float C3_rd = CONSTANT_0_1_RM(0.1F, FE_DOWNWARD);
 float C3_ru = CONSTANT_0_1_RM(0.1F, FE_UPWARD);
-// CHECK: @C3_rd = {{.*}} float 0x3FB9999980000000
-// CHECK: @C3_ru = {{.*}} float 0x3FB99999A0000000
+// CHECK: @C3_rd = {{.*}} float f0x3DCCCCCC
+// CHECK: @C3_ru = {{.*}} float 1.000000e-01
 
 // Check literals in template instantiations.
 
@@ -136,11 +136,11 @@ constexpr T foo() {
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
 float var_d = foo<float, 0.1F>();
-// CHECK: @var_d = {{.*}} float 0x3FB9999980000000
+// CHECK: @var_d = {{.*}} float f0x3DCCCCCC
 
 #pragma STDC FENV_ROUND FE_UPWARD
 float var_u = foo<float, 0.1F>();
-// CHECK: @var_u = {{.*}} float 0x3FB99999A0000000
+// CHECK: @var_u = {{.*}} float 1.000000e-01
 
 #pragma STDC FENV_ROUND FE_DYNAMIC
 
@@ -159,10 +159,10 @@ void func_02() {
 }
 
 // CHECK-LABEL: define {{.*}} void @_Z4foo2IfTnT_Lf3dccccccEEvv()
-// CHECK:         store float 0x3FB9999980000000, ptr
+// CHECK:         store float f0x3DCCCCCC, ptr
 
 // CHECK-LABEL: define {{.*}} void @_Z4foo2IfTnT_Lf3dcccccdEEvv()
-// CHECK:         store float 0x3FB99999A0000000, ptr
+// CHECK:         store float 1.000000e-01, ptr
 
 
 #pragma STDC FENV_ROUND FE_DOWNWARD
@@ -172,15 +172,15 @@ float tfunc_01() {
 }
 template float tfunc_01<0>();
 // CHECK-LABEL: define {{.*}} float @_Z8tfunc_01ILi0EEfv()
-// CHECK:         ret float 0x3FB9999980000000
+// CHECK:         ret float f0x3DCCCCCC
 
 #pragma STDC FENV_ROUND FE_UPWARD
 template float tfunc_01<1>();
 // CHECK-LABEL: define {{.*}} float @_Z8tfunc_01ILi1EEfv()
-// CHECK:         ret float 0x3FB9999980000000
+// CHECK:         ret float f0x3DCCCCCC
 
 template<> float tfunc_01<2>() {
   return 0.1F;
 }
 // CHECK-LABEL: define {{.*}} float @_Z8tfunc_01ILi2EEfv()
-// CHECK:         ret float 0x3FB99999A0000000
+// CHECK:         ret float 1.000000e-01
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe959496..7ccaaf02463d9 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 1.000000e+00
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -42,7 +42,7 @@
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -64,7 +64,7 @@
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -75,7 +75,7 @@
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -86,7 +86,7 @@
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 1.000000e+00
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 1.000000e+00
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 1.000000e+00
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 1.000000e+00
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 1.000000e+00
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000...
[truncated]

``````````

</details>
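For readers mapping old CHECK lines to new ones: the legacy format printed the value widened to double as 16 hex digits (`0x…`), whereas the new `f0x` format prints the raw IEEE-754 single-precision bit pattern directly. A small Python sketch (an illustration, not part of the patch; the helper names `single_bits`/`double_bits` are my own) showing why the old `0x3FB9999980000000` and the new `f0x3DCCCCCC` in the hunks above denote the same float:

```python
import struct

def single_bits(x: float) -> int:
    """IEEE-754 binary32 bit pattern of x (after rounding x to float)."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def double_bits(x: float) -> int:
    """IEEE-754 binary64 bit pattern of x."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

# 0.1F rounded downward (FE_DOWNWARD) has the binary32 pattern 0x3DCCCCCC;
# the new printer emits those bits directly as f0x3DCCCCCC.
f = struct.unpack('<f', struct.pack('<I', 0x3DCCCCCC))[0]

# The legacy printer widened the float to double and printed *those* bits,
# which is where 0x3FB9999980000000 in the old CHECK lines came from.
assert double_bits(f) == 0x3FB9999980000000

# 0.1F rounded to nearest is 0x3DCCCCCD (double bits 0x3FB99999A0000000);
# that value survives a decimal round-trip, so the new printer can use
# the short decimal form seen in the updated CHECK lines (1.000000e-01).
assert single_bits(0.1) == 0x3DCCCCCD
```

This also explains why only one of each old CHECK pair turns into a decimal literal: the round-to-nearest value round-trips through 6-digit decimal, while its rounded-down neighbor does not and keeps a hex spelling.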


https://github.com/llvm/llvm-project/pull/190649
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
