tmoreau89 commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-595427659
 
 
   Excellent, thanks for adding this to CI so quickly! I was able to reproduce the demo by running `make demo`; it ran mostly successfully, but ended with an illegal instruction error:
   
   ```
   $ make demo
   python3 build_model.py -o build
    INFO:root:Model file not found. Downloading to /Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.params.
    Downloading /Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.zip from https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
    INFO:autotvm:Download pre-tuned parameters package from https://raw.githubusercontent.com/uwsampl/tvm-distro/master/tophub/llvm_v0.04.log
    ...100%, 0.02 MB, 121 KB/s, 0 seconds passed
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (8, 3, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 8, 112, 112), 'float32'), ('TENSOR', (8, 1, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 8, 112, 112), 'float32'), ('TENSOR', (16, 8, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 16, 112, 112), 'float32'), ('TENSOR', (16, 1, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 16, 56, 56), 'float32'), ('TENSOR', (32, 16, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 32, 56, 56), 'float32'), ('TENSOR', (32, 1, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 56, 56), 'float32'), ('TENSOR', (32, 32, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 32, 56, 56), 'float32'), ('TENSOR', (32, 1, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 32, 28, 28), 'float32'), ('TENSOR', (64, 32, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 28, 28), 'float32'), ('TENSOR', (64, 1, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 28, 28), 'float32'), ('TENSOR', (64, 64, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 64, 28, 28), 'float32'), ('TENSOR', (64, 1, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 64, 14, 14), 'float32'), ('TENSOR', (128, 64, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 128, 14, 14), 'float32'), ('TENSOR', (128, 1, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 14, 14), 'float32'), ('TENSOR', (128, 128, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 128, 14, 14), 'float32'), ('TENSOR', (128, 1, 3, 3), 'float32'), (2, 2), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 128, 7, 7), 'float32'), ('TENSOR', (256, 128, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('depthwise_conv2d_NCHWc.x86', ('TENSOR', (1, 256, 7, 7), 'float32'), ('TENSOR', (256, 1, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('conv2d_NCHWc.x86', ('TENSOR', (1, 256, 7, 7), 'float32'), ('TENSOR', (256, 256, 1, 1), 'float32'), (1, 1), (0, 0, 0, 0), (1, 1), 'NCHW', 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation softmax.cpu for op nn.softmax
    WARNING:autotvm:Cannot find config for target=llvm --system-lib, workload=('dense_nopack.x86', ('TENSOR', (1, 256), 'float32'), ('TENSOR', (1000, 256), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.
   INFO:compile_engine:Use implementation dense_nopack.x86 for op nn.dense
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
   INFO:compile_engine:Use implementation injective.cpu for op nn.batch_flatten
   INFO:compile_engine:Use implementation injective.cpu for op nn.batch_flatten
    INFO:compile_engine:Use implementation adaptive_pool.cpu for op nn.global_avg_pool2d
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation depthwise_conv2d_NCHWc.x86 for op nn.contrib_depthwise_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
    INFO:compile_engine:Use implementation conv2d_NCHWc.x86 for op nn.contrib_conv2d_NCHWc
    INFO:compile_engine:Use implementation injective.cpu for op add
    INFO:compile_engine:Use implementation injective.cpu for op nn.relu
   INFO:compile_engine:Use implementation injective.cpu for op layout_transform
    Downloading from url https://homes.cs.washington.edu/~moreau/media/vta/cat.jpg to /Users/moreau/Documents/Projects/tvm-misra/apps/bundle_deploy/build/cat.png
    ...100%, 0.12 MB, 2633 KB/s, 0 seconds passed
   x (1, 3, 224, 224)
   xxd -i build/graph.json  > build/graph.json.c
   xxd -i build/params.bin  > build/params.bin.c
    g++ -std=c++14 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -o build/demo  demo.cc -ldl
    g++ -shared -std=c++14 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -fvisibility=hidden -o build/bundle.so  bundle.cc runtime.cc build/model.o -pthread
    gcc -shared -std=c99 -O2 -fPIC -I/Users/moreau/Documents/Projects/tvm-misra/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dmlc-core/include -I/Users/moreau/Documents/Projects/tvm-misra/3rdparty/dlpack/include -fvisibility=hidden -o build/bundle_c.so  bundle.c runtime.c build/model.o -pthread
    In file included from runtime.c:47:
    In file included from ./../../src/runtime/crt/crt_runtime_api.c:28:
    In file included from ./../../src/runtime/crt/graph_runtime.h:31:
    ./../../src/runtime/crt/packed_func.h:105:3: warning: redefinition of typedef 'TVMPackedFunc' is a C11 feature [-Wtypedef-redefinition]
    } TVMPackedFunc;
      ^
    ./../../src/runtime/crt/module.h:31:30: note: previous definition is here
    typedef struct TVMPackedFunc TVMPackedFunc;
                                 ^
    In file included from runtime.c:47:
    ./../../src/runtime/crt/crt_runtime_api.c:82:12: warning: expression result unused [-Wunused-value]
        status -1;
        ~~~~~~ ^~
    In file included from runtime.c:48:
    ./../../src/runtime/crt/crt_backend_api.c:55:82: warning: format string is not a string literal (potentially insecure) [-Wformat-security]
      snprintf(g_fexecs[g_fexecs_count].name, sizeof(g_fexecs[g_fexecs_count].name), name);
                                                                                     ^~~~
    /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/secure/_stdio.h:57:62: note: expanded from macro 'snprintf'
        __builtin___snprintf_chk (str, len, 0, __darwin_obsz(str), __VA_ARGS__)
                                                                   ^~~~~~~~~~~
    ./../../src/runtime/crt/crt_backend_api.c:55:82: note: treat the string as an argument to avoid this
      snprintf(g_fexecs[g_fexecs_count].name, sizeof(g_fexecs[g_fexecs_count].name), name);
                                                                                     ^
                                                                                     "%s",
    /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/secure/_stdio.h:57:62: note: expanded from macro 'snprintf'
        __builtin___snprintf_chk (str, len, 0, __darwin_obsz(str), __VA_ARGS__)
                                                                   ^
    In file included from runtime.c:49:
    ./../../src/runtime/crt/graph_runtime.c:542:84: warning: format specifies type 'int' but the argument has type 'size_t' (aka 'unsigned long') [-Wformat]
          fprintf(stderr, "fail to create for node with idx=%d, storage_id=%d\n", idx, storage_id);
                                                                           ~~     ^~~~~~~~~~
                                                                           %zu
    In file included from runtime.c:51:
    ./../../src/runtime/crt/ndarray.c:95:13: warning: format specifies type 'long' but the argument has type 'int64_t' (aka 'long long') [-Wformat]
                data_byte_size, (num_elems * elem_bytes));
                ^~~~~~~~~~~~~~
    ./../../src/runtime/crt/ndarray.c:95:29: warning: format specifies type 'long' but the argument has type 'long long' [-Wformat]
                data_byte_size, (num_elems * elem_bytes));
                                ^~~~~~~~~~~~~~~~~~~~~~~~
    6 warnings generated.
   build/demo build/bundle.so build/cat.bin
   The maximum position in output vector is: 278, with max-value 0.613490.
    timing: 5.07 ms (create), 0.74 ms (set_input), 3.60 ms (run), 0.01 ms (get_output), 0.10 ms (destroy)
   build/demo build/bundle_c.so build/cat.bin
    make: *** [demo] Illegal instruction: 4
   ```
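   On the warnings themselves: a couple look like genuine bugs rather than noise, and since the crash only happens on the `bundle_c.so` (pure C runtime) run after the C++ bundle passes, they may be worth chasing down. `status -1;` at crt_runtime_api.c:82 is a no-op expression and presumably meant `status = -1;`. The `snprintf`/format warnings can be addressed with the usual patterns; a minimal standalone sketch (illustrative names, not the actual runtime code):

   ```c
   #include <assert.h>
   #include <inttypes.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <string.h>

   int main(void) {
     /* Hypothetical stand-in for g_fexecs[...].name in crt_backend_api.c. */
     char name[32];
     const char* fname = "tvm_op";

     /* Pass the runtime string as an argument, never as the format string,
      * so a '%' in the input cannot be interpreted as a conversion
      * (this is what -Wformat-security is flagging). */
     snprintf(name, sizeof(name), "%s", fname);
     assert(strcmp(name, "tvm_op") == 0);

     /* Match format specifiers to the argument types clang flagged:
      * %zu for size_t, PRId64 from <inttypes.h> for int64_t. */
     size_t storage_id = 7;
     int64_t num_bytes = 1024;
     char msg[64];
     snprintf(msg, sizeof(msg), "storage_id=%zu bytes=%" PRId64,
              storage_id, num_bytes);
     printf("%s\n", msg);
     return 0;
   }
   ```

   The `PRId64` macro also keeps the format portable across LP64 and LLP64 targets, which matters for a runtime meant to build on many platforms.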
