The target is x86_64-apple-darwin15.4.0 (Apple LLVM 7.3.0).

This function:
===== vec.c =====
#include <stdlib.h>

int vec(int index)
{
        return ((int*) NULL)[index];
}
===============

[Yes, I know it's undefined behavior in ANSI C.]
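For comparison, a variant that takes the base address as a parameter (a
hypothetical helper, not part of my code) would be well defined whenever the
caller passes a valid pointer, and I would not expect -Os to turn it into a
trap:

===== read_at.c (sketch) =====
/* Well defined as long as 'base' points to at least index+1 ints;
   the compiler has no grounds to assume the access must trap. */
int read_at(const int *base, int index)
{
        return base[index];
}
==============================

In vec() the base is the constant NULL, which is presumably what gives the
optimizer license to replace the load with a trap.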

Compiled with "clang -c -O0 -o vec.o vec.c" yields this code

=====
0000000000000000        55                      pushq   %rbp
0000000000000001        4889e5                  movq    %rsp, %rbp
0000000000000004        31c0                    xorl    %eax, %eax
0000000000000006        89c1                    movl    %eax, %ecx
0000000000000008        897dfc                  movl    %edi, -0x4(%rbp)
000000000000000b        486355fc                movslq  -0x4(%rbp), %rdx
000000000000000f        8b0491                  movl    _vec(%rcx,%rdx,4), %eax
0000000000000012        5d                      popq    %rbp
0000000000000013        c3                      retq
=====

Compiled with "clang -c -Os -o vec.o vec.c" yields this code
=====
0000000000000000        55                      pushq   %rbp
0000000000000001        4889e5                  movq    %rsp, %rbp
0000000000000004        0f0b                    ud2
=====

Questions:
1) Is there a way to suppress the optimization that generates the trap, so that
-Os yields working code like -O0?
2) Barring that, is there some way to have this code produce a compile-time
diagnostic instead of a run-time trap?
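For concreteness, here is a sketch of one possible workaround for question 1
(hiding the NULL base behind a volatile-qualified pointer); it is still
undefined behavior in ISO C, and whether it reliably suppresses the trap is
part of what I'm asking:

===== vec_volatile.c (sketch) =====
#include <stdlib.h>

int vec(int index)
{
        /* 'base' itself is volatile, so the optimizer cannot assume it
           is still NULL at the dereference; the access is nonetheless
           undefined behavior in ISO C. */
        int * volatile base = NULL;
        return base[index];
}
===================================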

Thank you.
