kparzysz-quic edited a comment on pull request #8509:
URL: https://github.com/apache/tvm/pull/8509#issuecomment-1050982328
The first failing test case is resnet50. This is the script that reproduces the crash:
```
import coremltools
import onnx
import onnxmltools
import tvm
import tvm.contrib.hexagon
from tvm import relay
from tvm.relay.transform import InferType, ToMixedPrecision

dtype_dict = {"data": "float32"}
shape_dict = {"data": [1, 3, 224, 224]}

# Input name and path for your Caffe model.
proto_file = './ResNet-50-deploy.prototxt'
input_caffe_path = './ResNet-50-model.caffemodel'

# Convert the Caffe model to CoreML.
coreml_model = coremltools.converters.caffe.convert((input_caffe_path, proto_file))

# Convert the CoreML model into ONNX.
onnx_model = onnxmltools.convert_coreml(coreml_model)
onnxmltools.utils.save_model(onnx_model, 'resnet-50.onnx')

mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
mod = InferType()(mod)
mod = ToMixedPrecision("float16")(mod)

target = tvm.target.hexagon("v68", link_params=True)
config = {"relay.FuseOps.link_params": 0}  # <-- doesn't do anything
with tvm.transform.PassContext(opt_level=3, config=config):
    lib = relay.build(mod, target, target_host=target, params=params, mod_name="default")
```
At the bottom of the crash dump you should see:
```
0: tvm::relay::tec::ScheduleBuilder::VisitExpr_(tvm::relay::ConstantNode const*)::{lambda(tvm::runtime::Array<tvm::tir::Var, void> const&)#1}::operator()(tvm::runtime::Array<tvm::tir::Var, void> const&) const
  File "/w/src/dmlc/tvm/src/relay/backend/te_compiler_cache.cc", line 242
TVMError: float16 not handled
```
The target string is `hexagon -keys=hexagon -link-params=1
-mattr=+hvxv68,+hvx-length128b -mcpu=hexagonv68 -mtriple=hexagon`.
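As a quick sanity check, the `-key=value` attributes in a target string of that shape can be pulled apart with plain string handling. This is a simplified sketch of the format, not TVM's real target parser, and it only handles the space-separated form shown above:

```python
# Simplified sketch: split a TVM-style target string into its kind and
# its -key=value attributes. This is NOT TVM's actual target parser.
def parse_target_string(target_str):
    parts = target_str.split()
    kind, attrs = parts[0], {}
    for part in parts[1:]:
        key, _, value = part.lstrip("-").partition("=")
        attrs[key] = value
    return kind, attrs

kind, attrs = parse_target_string(
    "hexagon -keys=hexagon -link-params=1 "
    "-mattr=+hvxv68,+hvx-length128b -mcpu=hexagonv68 -mtriple=hexagon"
)
# kind is "hexagon"; attrs["link-params"] is "1"; attrs["mcpu"] is "hexagonv68".
```

This makes it easy to confirm that `link-params=1` and `mcpu=hexagonv68` were indeed picked up from `tvm.target.hexagon("v68", link_params=True)`.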
We don't yet have Hexagon-specific codegen for `AllocateConst`, but if the LLVM codegen handles it, it will likely work for us as well. Right now we use the `link_params` function, which has all of the constants embedded in it and is called at runtime to supply them to the model.
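To illustrate the idea in plain Python (a hand-written sketch of the mechanism, not TVM's actual generated code; the names `LINKED_CONSTANTS` and `lookup_linked_param` are illustrative): the constants are baked into the module itself, and a lookup function hands them to the runtime on request instead of the runtime reading a separate params file:

```python
import numpy as np

# Hypothetical sketch of the link_params idea: the constants live inside
# the compiled module, and a lookup function supplies them at runtime.
# The names and structure here are illustrative, not TVM's real codegen.
LINKED_CONSTANTS = {
    "conv1_weight": np.zeros((64, 3, 7, 7), dtype="float32"),
    "fc_bias": np.zeros((1000,), dtype="float32"),
}

def lookup_linked_param(name):
    """Runtime hook: return the constant embedded in the module."""
    return LINKED_CONSTANTS[name]

w = lookup_linked_param("conv1_weight")
# w comes from the module itself; no external params file is read.
```

With `AllocateConst` support, the constants would instead be materialized directly in the generated code, removing the need for this runtime lookup step.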