[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-29 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432813198



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2895,16 +2909,25 @@ def _parse_import_prerequisites(self, graph):
 which are not supported
 """
 missing_operators = set()
+from tensorflow.python.framework import op_def_registry
 for node in graph.node:
+getOpDef = op_def_registry._registered_ops.get if hasattr(op_def_registry,\
+"_registered_ops") else op_def_registry.get
+op_def = getOpDef(node.op)
 if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
 pass
 elif node.op == "Const":
 pass
+elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+pass
 else:
 if any([node.op in t for t in [_identity_list, _convert_map,
_convert_map_rnn,
_control_flow_nodes]]):
 pass
+elif op_def is not None and op_def.is_stateful:
+self._main_graph_proto._stateful_ops_list.append(node.op)

Review comment:
   No need for another list. If needed, you may append (StatefulOperator) to node.op.
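
   A minimal sketch of this suggestion (the marker string and set usage are illustrative, not the PR's actual code):

   ```python
   # Instead of keeping a separate _stateful_ops_list, tag the op name itself,
   # so the final missing-operator report carries the stateful info for free.
   if op_def is not None and op_def.is_stateful:
       missing_operators.add(node.op + " (StatefulOperator)")
   ```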





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-29 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432813134



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2773,6 +2774,12 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
 if freezed_ops:
 raise Exception("Graph is not frozen. Provide a frozen graph. "
 "Found operators {}".format(freezed_ops))
+stateful_ops = [op for op in missing_operators
+if op in self._main_graph_proto._stateful_ops_list]
+if stateful_ops:
+raise Exception("Found stateful operators in this graph {}. " \

Review comment:
   Again, this will raise an exception listing only the stateful missing ops (it doesn't show the normal missing ops).
   I don't think we need a separate list for stateful missing ops. Just add them to the missing_operators list and display everything as one.
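
   A rough sketch of the merged reporting (hedged; the exception type and message format are illustrative):

   ```python
   # Stateful ops go into the same missing_operators set, so a single
   # exception reports both kinds of unsupported operators at once.
   if missing_operators:
       raise NotImplementedError(
           "The following operators are not implemented: {}".format(missing_operators))
   ```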





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5695: fix small bug about dense_grad

2020-05-29 Thread GitBox


srkreddy1238 commented on a change in pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695#discussion_r432812422



##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -472,8 +472,8 @@ def bias_add_grad(orig, grad):
 def dense_grad(orig, grad):
 """Returns [grad' @ weight, data @ grad']"""
 data, weight = orig.args
-return [collapse_sum_like(transpose(grad) * weight, data),
-collapse_sum_like(data * transpose(grad), weight)]
+return [collapse_sum_like(_nn.dense(grad, transpose(weight)), data),

Review comment:
   Please add the units arg too, to support batching.
   
   ```
   return [collapse_sum_like(_nn.dense(grad, transpose(weight), units=weight.checked_type.shape[1]), data),
           collapse_sum_like(_nn.dense(transpose(grad), transpose(data), units=data.checked_type.shape[1]), weight)]
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5698: [REFACTOR][RELAY] Replace build_config with PassContext

2020-05-29 Thread GitBox


tqchen merged pull request #5698:
URL: https://github.com/apache/incubator-tvm/pull/5698


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [REFACTOR][RELAY] Replace build_config with PassContext (#5698)

2020-05-29 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new c55ed37  [REFACTOR][RELAY] Replace build_config with PassContext 
(#5698)
c55ed37 is described below

commit c55ed371693740291b82cc8d88bf09c830d029c7
Author: Zhi <5145158+zhi...@users.noreply.github.com>
AuthorDate: Fri May 29 21:59:35 2020 -0700

[REFACTOR][RELAY] Replace build_config with PassContext (#5698)
---
 apps/android_camera/models/prepare_model.py  |  2 +-
 apps/benchmark/arm_cpu_imagenet_bench.py |  2 +-
 apps/benchmark/gpu_imagenet_bench.py |  2 +-
 apps/benchmark/mobile_gpu_imagenet_bench.py  |  2 +-
 apps/bundle_deploy/build_model.py|  2 +-
 apps/sgx/src/build_model.py  |  2 +-
 golang/sample/gen_mobilenet_lib.py   |  4 ++--
 python/tvm/relay/frontend/common.py  |  2 +-
 python/tvm/relay/quantize/_calibrate.py  |  3 +--
 python/tvm/relay/transform/transform.py  | 10 +++---
 rust/frontend/examples/resnet/src/build_resnet.py|  4 ++--
 src/relay/backend/build_module.cc|  5 ++---
 tests/cpp/relay_transform_sequential.cc  |  2 +-
 tests/python/frontend/caffe2/test_forward.py |  2 +-
 tests/python/frontend/coreml/test_forward.py |  4 ++--
 tests/python/frontend/keras/test_forward.py  |  2 +-
 tests/python/frontend/mxnet/test_forward.py  |  2 +-
 tests/python/frontend/mxnet/test_qnn_ops_utils.py|  6 +++---
 tests/python/frontend/onnx/test_forward.py   |  2 +-
 tests/python/frontend/pytorch/qnn_test.py|  2 +-
 tests/python/frontend/pytorch/test_forward.py|  4 ++--
 tests/python/frontend/tensorflow/test_bn_dynamic.py  |  2 +-
 tests/python/frontend/tensorflow/test_forward.py |  4 ++--
 tests/python/frontend/tflite/test_forward.py |  2 +-
 .../nightly/quantization/test_quantization_accuracy.py   |  4 ++--
 tests/python/relay/benchmarking/benchmark_vm.py  |  4 ++--
 tests/python/relay/test_backend_compile_engine.py|  2 +-
 tests/python/relay/test_backend_graph_runtime.py |  2 +-
 tests/python/relay/test_cpp_build_module.py  |  2 +-
 tests/python/relay/test_external_codegen.py  |  6 --
 tests/python/relay/test_memory_passes.py |  2 +-
 tests/python/relay/test_op_fast_math.py  |  2 +-
 tests/python/relay/test_op_level2.py | 10 +-
 tests/python/relay/test_op_qnn_conv2d.py | 10 +-
 tests/python/relay/test_op_qnn_dense.py  |  2 +-
 tests/python/relay/test_op_qnn_dequantize.py |  2 +-
 tests/python/relay/test_op_qnn_quantize.py   |  2 +-
 tests/python/relay/test_op_qnn_requantize.py |  2 +-
 tests/python/relay/test_pass_annotate_target.py  |  4 ++--
 tests/python/relay/test_pass_fast_math.py|  4 ++--
 tests/python/relay/test_pass_fold_constant.py|  2 +-
 tests/python/relay/test_pass_manager.py  | 12 ++--
 tests/python/relay/test_pass_partition_graph.py  | 11 ++-
 tests/python/relay/test_simplify_fc_transpose.py |  2 +-
 tests/python/relay/test_sparse_dense_convert.py  |  2 +-
 tests/python/unittest/test_runtime_module_export.py  |  6 +++---
 tests/python/unittest/test_target_codegen_blob.py|  4 ++--
 tutorials/autotvm/tune_relay_arm.py  |  2 +-
 tutorials/autotvm/tune_relay_cuda.py |  2 +-
 tutorials/autotvm/tune_relay_mobile_gpu.py   |  2 +-
 tutorials/autotvm/tune_relay_x86.py  |  2 +-
 tutorials/dev/relay_pass_infra.py| 10 +-
 tutorials/frontend/build_gcn.py  |  2 +-
 tutorials/frontend/deploy_model_on_android.py|  2 +-
 tutorials/frontend/deploy_model_on_rasp.py   |  2 +-
 tutorials/frontend/deploy_prequantized.py|  2 +-
 tutorials/frontend/deploy_prequantized_tflite.py |  2 +-
 tutorials/frontend/deploy_ssd_gluoncv.py |  2 +-
 tutorials/frontend/from_caffe2.py|  4 ++--
 tutorials/frontend/from_coreml.py|  2 +-
 tutorials/frontend/from_darknet.py   |  2 +-
 tutorials/frontend/from_keras.py 

[GitHub] [incubator-tvm] masahi merged pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


masahi merged pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


masahi commented on pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701#issuecomment-636260956


   Thanks @comaniac @zhiics 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [BYOC] Support Tuple Output in C/DNNL Codegen (#5701)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 910edef  [BYOC] Support Tuple Output in C/DNNL Codegen (#5701)
910edef is described below

commit 910edef099705926b7af14aa3ae7b4c33920ace9
Author: Cody Yu 
AuthorDate: Fri May 29 19:11:24 2020 -0700

[BYOC] Support Tuple Output in C/DNNL Codegen (#5701)

* Support tuple output runtime

* fix unit test
---
 src/relay/backend/contrib/codegen_c/codegen.cc  | 19 ++
 src/relay/backend/contrib/codegen_c/codegen_c.h | 47 ++--
 src/relay/backend/contrib/dnnl/codegen.cc   | 12 +--
 tests/python/relay/test_pass_partition_graph.py | 48 -
 4 files changed, 96 insertions(+), 30 deletions(-)

diff --git a/src/relay/backend/contrib/codegen_c/codegen.cc b/src/relay/backend/contrib/codegen_c/codegen.cc
index b8803d4..2968966 100644
--- a/src/relay/backend/contrib/codegen_c/codegen.cc
+++ b/src/relay/backend/contrib/codegen_c/codegen.cc
@@ -56,6 +56,25 @@ class CodegenC : public MemoizedExprTranslator<std::vector<Output>>, public Code
     return {output};
   }
 
+  std::vector<Output> VisitExpr_(const TupleNode* node) final {
+    std::vector<Output> outs;
+    for (auto field : node->fields) {
+      auto res = VisitExpr(field);
+      CHECK_EQ(res.size(), 1U) << "Do not support tuple nest";
+      outs.push_back(res[0]);
+    }
+    return outs;
+  }
+
+  std::vector<Output> VisitExpr_(const TupleGetItemNode* op) final {
+    auto res = VisitExpr(op->tuple);
+    CHECK_GT(res.size(), static_cast<size_t>(op->index));
+
+    // Only keep the item we want for the child node.
+    // FIXME(@comaniac): The other items should still be required for the primary outputs.
+    return {res[op->index]};
+  }
+
   std::vector<Output> VisitExpr_(const ConstantNode* cn) final {
     // Note this is for demonstration purpose. ConstantNode doesn't necessarily
     // belong to calls. We need to revisit this when tuples come into play.
diff --git a/src/relay/backend/contrib/codegen_c/codegen_c.h b/src/relay/backend/contrib/codegen_c/codegen_c.h
index 2ee68ce..3a3c486 100644
--- a/src/relay/backend/contrib/codegen_c/codegen_c.h
+++ b/src/relay/backend/contrib/codegen_c/codegen_c.h
@@ -125,7 +125,7 @@ class CodegenCBase {
   * \endcode
   */
   void GenerateBackendCFunc(const std::string& func_name, const Array<Var>& args,
-                            const Output& out) {
+                            const std::vector<Output>& outs) {
     // Print signature
     code_stream_ << "\n";
     code_stream_ << "extern \"C\" int " << func_name << "_wrapper_(";
@@ -133,9 +133,11 @@ class CodegenCBase {
       code_stream_ << "DLTensor* arg" << i << ",\n";
       code_stream_ << "\t";
     }
-    if (args.size() > 0) {
-      code_stream_ << "DLTensor* arg" << args.size() << ") {\n";
-    }
+    for (size_t i = 0; i < outs.size() - 1; i++) {
+      code_stream_ << "DLTensor* out" << i << ",\n";
+      code_stream_ << "\t";
+    }
+    code_stream_ << "DLTensor* out" << outs.size() - 1 << ") {\n";
 
     EnterScope();
 
@@ -147,10 +149,12 @@ class CodegenCBase {
      code_stream_ << "static_cast<" << dtype_str << "*>(arg" << i << "->data),\n";
      PrintIndents();
    }
-    if (args.size() > 0) {
-      code_stream_ << "static_cast<" << out.dtype << "*>(arg" << args.size() << "->data)";
+    for (size_t i = 0; i < outs.size() - 1; i++) {
+      code_stream_ << "static_cast<" << outs[i].dtype << "*>(out" << i << "->data),\n";
+      PrintIndents();
    }
-    code_stream_ << ");\n";
+    code_stream_ << "static_cast<" << outs.back().dtype << "*>(out" << outs.size() - 1
+                 << "->data));\n";
    PrintIndents();
    code_stream_ << "return 0;\n";
    ExitScope();
@@ -186,18 +190,19 @@ class CodegenCBase {
*/
   std::string JitImpl(const std::string& ext_func_id, const Array<Var>& args,
                       const std::vector<std::string>& buf_decl,
-                      const std::vector<std::string>& body, const std::vector<Output>& out) {
+                      const std::vector<std::string>& body, const std::vector<Output>& outs) {
     // Create the signature. For example, it could be:
-    // extern "C" void dnnl_0_(float* input0, float* input1, float* out, int M, int N) {}
+    // extern "C" void dnnl_0_(float* in0, float* in1, float* out0, float* out1) {}
     code_stream_ << "extern \"C\" void " << ext_func_id << "_(";
 
-    CHECK_EQ(out.size(), 1U) << "Internal error: only single output is support.";
-
     for (const auto& arg : args) {
       const auto& dtype_str = GetDtypeString(arg);
       code_stream_ << dtype_str << "* " << arg->name_hint() << ", ";
     }
-    code_stream_ << out[0].dtype << "* out) {\n";
+    for (size_t i = 0; i < outs.size() - 1; ++i) {
+      code_stream_ << outs[i].dtype << "* out" << i << ", ";
+    }
+    code_stream_ << outs.back().dtype << "* out" << outs.size() - 1 

[GitHub] [incubator-tvm] masahi merged pull request #5697: [ONNX] Skip ADD inside Gemm op when vector is zero

2020-05-29 Thread GitBox


masahi merged pull request #5697:
URL: https://github.com/apache/incubator-tvm/pull/5697


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5697: [ONNX] Skip ADD inside Gemm op when vector is zero

2020-05-29 Thread GitBox


masahi commented on pull request #5697:
URL: https://github.com/apache/incubator-tvm/pull/5697#issuecomment-636252443


   Thanks @cbalint13 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [ONNX] Skip ADD inside Gemm op when vector is zero (#5697)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 879158a  [ONNX] Skip ADD inside Gemm op when vector is zero (#5697)
879158a is described below

commit 879158a07158f85bc4bb63127ac0226aab744532
Author: Balint Cristian 
AuthorDate: Sat May 30 04:10:22 2020 +0300

[ONNX] Skip ADD inside Gemm op when vector is zero (#5697)
---
 python/tvm/relay/frontend/onnx.py | 4 
 1 file changed, 4 insertions(+)

diff --git a/python/tvm/relay/frontend/onnx.py 
b/python/tvm/relay/frontend/onnx.py
index ea1ac90..be88683 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -462,6 +462,10 @@ class Gemm(OnnxOpConverter):
 inputs[0] = _op.nn.batch_flatten(inputs[0])
 out = _op.nn.dense(_expr.const(alpha) * inputs[0],
inputs[1], units=channels)
+# skip (beta * C) if zero
+C_array = params[inputs[2].name_hint].asnumpy()
+if (beta == 0.0) or np.array_equal(C_array, np.array([0])):
+return out
 return _op.nn.bias_add(out, _expr.const(beta) * inputs[2])
 
 



[incubator-tvm] branch master updated (1ae7162 -> 2cd5117)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 1ae7162  In memory_plan, check if value is not None, instead of just 
checking value as boolean. (#5700)
 add 2cd5117  [PatternLang]Conditionally Embedding Constants in Partitioned 
Functions (#5693)

No new revisions were added by this update.

Summary of changes:
 docs/langref/relay_pattern.rst|   8 +-
 python/tvm/relay/dataflow_pattern/__init__.py |   7 +-
 src/relay/ir/dataflow_matcher.cc  |  36 -
 tests/python/relay/test_dataflow_pattern.py   | 107 --
 4 files changed, 125 insertions(+), 33 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


masahi merged pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


masahi commented on pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693#issuecomment-636252052


   Thanks @mbrookhart @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-05-29 Thread GitBox


anijain2305 commented on pull request #4805:
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-636251657


   Ping @inadob. Let us know if you are working on this; otherwise I can take a crack at it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on pull request #5495: [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator

2020-05-29 Thread GitBox


dhruvaray commented on pull request #5495:
URL: https://github.com/apache/incubator-tvm/pull/5495#issuecomment-636250465


   @anijain2305 - Please review 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] dhruvaray commented on pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-05-29 Thread GitBox


dhruvaray commented on pull request #5329:
URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-636250037


   @anijain2305 - Please review and merge



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


comaniac commented on a change in pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701#discussion_r432785888



##
File path: tests/python/relay/test_pass_partition_graph.py
##
@@ -201,8 +202,11 @@ def check_vm_result():
 exe = runtime.vm.Executable.load_exec(code, lib)
 vm = runtime.vm.VirtualMachine(exe)
 vm.init(ctx)
-out = vm.run(**map_inputs)
-tvm.testing.assert_allclose(out.asnumpy(), result, rtol=tol, atol=tol)
+outs = vm.run(**map_inputs)
+outs = outs if len(outs) > 1 else [outs]

Review comment:
   Yeah the test failed because of this. Changing to `isinstance(outs, 
runtime.container.ADT)`
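
   A hedged sketch of the resulting dispatch (variable names follow the test above; the asnumpy handling is illustrative):

   ```python
   outs = vm.run(**map_inputs)
   # vm.run returns a single NDArray for one output and an ADT container for
   # tuple outputs, so branch on the container type rather than on len().
   if isinstance(outs, tvm.runtime.container.ADT):
       results = [out.asnumpy() for out in outs]
   else:
       results = [outs.asnumpy()]
   ```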





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


zhiics commented on a change in pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701#discussion_r432785625



##
File path: tests/python/relay/test_pass_partition_graph.py
##
@@ -201,8 +202,11 @@ def check_vm_result():
 exe = runtime.vm.Executable.load_exec(code, lib)
 vm = runtime.vm.VirtualMachine(exe)
 vm.init(ctx)
-out = vm.run(**map_inputs)
-tvm.testing.assert_allclose(out.asnumpy(), result, rtol=tol, atol=tol)
+outs = vm.run(**map_inputs)
+outs = outs if len(outs) > 1 else [outs]

Review comment:
   Is `len(outs)` okay? It doesn't necessarily have to be an ADTObj/array, right? What if it is an ObjectRef (i.e. an NDArray in this case)?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


zhiics commented on pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701#issuecomment-636239753


   @comaniac Thanks for adding this support. I always thought this was already done...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5700: In memory_plan, check if value is not None, instead of just checking value as boolean.

2020-05-29 Thread GitBox


tqchen merged pull request #5700:
URL: https://github.com/apache/incubator-tvm/pull/5700


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5700: In memory_plan, check if value is not None, instead of just checking value as boolean.

2020-05-29 Thread GitBox


tqchen commented on pull request #5700:
URL: https://github.com/apache/incubator-tvm/pull/5700#issuecomment-636236396


   Thanks @notoraptor 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (ff10e6c -> 1ae7162)

2020-05-29 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ff10e6c  [ONNX]LpPool Support added (#5696)
 add 1ae7162  In memory_plan, check if value is not None, instead of just 
checking value as boolean. (#5700)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/transform/memory_plan.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] comaniac opened a new pull request #5701: [BYOC] Support Tuple Output in C/DNNL Codegen

2020-05-29 Thread GitBox


comaniac opened a new pull request #5701:
URL: https://github.com/apache/incubator-tvm/pull/5701


   We previously supported multiple outputs in the graph partition pass by creating a new tuple node as a single output. However, we missed the point that this tuple output has to be decomposed when generating C code.
   
   Fortunately, the TVM runtime already flattens tuples to multiple tensors, so it's not hard to make the runtime work. This PR modifies the C codegen accordingly. Note that although the newly added unit test uses the C codegen, I've tested locally and made sure this change is also applicable to DNNL.
   
   cc @masahi (sorry for the confusion), @zhiics 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-05-29 Thread GitBox


kevinthesun commented on a change in pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r432772828



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -94,21 +95,21 @@ def vmobj_to_list(o):
 
 def run_tvm_graph(graph_def, input_data, input_node, num_output=1,
   target='llvm', out_names=None, opt_level=3, 
mode='graph_runtime',
-  cuda_layout="NCHW"):
+  layout=None, disabled_pass=None):

Review comment:
   We might need to disable fold_scale_axis for SSD testing:
   https://discuss.tvm.ai/t/relay-pass-do-we-want-to-allow-letnode-in-foldscaleaxis-pass/6420
   Will update that when the dynamic NMS PR is in.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-05-29 Thread GitBox


lixiaoquan commented on a change in pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r432765056



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -94,21 +95,21 @@ def vmobj_to_list(o):
 
 def run_tvm_graph(graph_def, input_data, input_node, num_output=1,
   target='llvm', out_names=None, opt_level=3, 
mode='graph_runtime',
-  cuda_layout="NCHW"):
+  layout=None, disabled_pass=None):

Review comment:
   It seems disabled_pass is not specified by any caller





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (a9ce2f7 -> ff10e6c)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a9ce2f7  Support more dtypes for TVMDSOOp (#5694)
 add ff10e6c  [ONNX]LpPool Support added (#5696)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 52 ++
 tests/python/frontend/onnx/test_forward.py | 71 ++
 2 files changed, 123 insertions(+)



[GitHub] [incubator-tvm] masahi merged pull request #5696: [ONNX]LpPool Support added

2020-05-29 Thread GitBox


masahi merged pull request #5696:
URL: https://github.com/apache/incubator-tvm/pull/5696


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5696: [ONNX]LpPool Support added

2020-05-29 Thread GitBox


masahi commented on pull request #5696:
URL: https://github.com/apache/incubator-tvm/pull/5696#issuecomment-636200664


   Thanks @siju-samuel 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5038: [RFC] Module based Model Runtime Interface

2020-05-29 Thread GitBox


tqchen commented on issue #5038:
URL: https://github.com/apache/incubator-tvm/issues/5038#issuecomment-636198783


   @FrozenGene can we follow up on this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


masahi commented on pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693#issuecomment-636194198


   I confirmed that DNNL codegen test works as expected, and constants are 
available during codegen. On mobilenet test, it took 45 sec (each for vm and 
graph, total 90 sec) to compile generated C code on my laptop @comaniac .



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


mbrookhart commented on pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693#issuecomment-636192395


   The MacOS build says it failed, but the job just says "failed", no other information. I'm thinking about rebasing and kicking off another job.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5694: Support more dtypes for TVMDSOOp

2020-05-29 Thread GitBox


tqchen merged pull request #5694:
URL: https://github.com/apache/incubator-tvm/pull/5694


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (06bf8b0 -> a9ce2f7)

2020-05-29 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 06bf8b0  [COMMUNITY] @masahi -> PPMC (#5691)
 add a9ce2f7  Support more dtypes for TVMDSOOp (#5694)

No new revisions were added by this update.

Summary of changes:
 src/contrib/tf_op/tvm_dso_op_kernels.cc | 23 ---
 src/contrib/tf_op/tvm_dso_ops.cc|  8 ++--
 2 files changed, 26 insertions(+), 5 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #5694: Support more dtypes for TVMDSOOp

2020-05-29 Thread GitBox


tqchen commented on pull request #5694:
URL: https://github.com/apache/incubator-tvm/pull/5694#issuecomment-636189447


   Thanks @tobegit3hub !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


masahi commented on pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693#issuecomment-636180452


   The Github UI is showing me that some checks are not successful. But I don't 
see what the issue is. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] notoraptor opened a new pull request #5700: In memory_plan, check if value is not None, instead of just checking value as boolean.

2020-05-29 Thread GitBox


notoraptor opened a new pull request #5700:
URL: https://github.com/apache/incubator-tvm/pull/5700


   Hi! I am having trouble in the file memory_plan (here: 
   https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/transform/memory_plan.py#L246
   ) when value is evaluated to False (for example, if value is a `relay.Tuple([])`), so I suggest this quick fix: check `assert value is not None` instead of just `assert value`. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-05-29 Thread GitBox


kevinthesun commented on issue #4845:
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-636151921


   We will have general support for TensorFlow control flow and tensor array, which allows parsing TensorFlow object detection models.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun opened a new pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-05-29 Thread GitBox


kevinthesun opened a new pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699


   This PR does a major refactor of the TensorFlow control flow and tensor array parsing logic:
   
   1. Follow the logic of loop-invariant code motion to directly lift any out-of-loop nodes as loop variables. This general method works better than the previous structure-hash matching method.
   2. Optimize static tensor array shape generation.
   
   This PR is mostly done but depends on https://github.com/apache/incubator-tvm/pull/4312 for end-to-end SSD testing.
   
   After this PR, the TensorFlow frontend should generally support control flow and tensor array, and users can compile TensorFlow object detection models with VM compilation.
   
   @tqchen @zhiics @wweic @yongwww @lixiaoquan 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5695: fix small bug about dense_grad

2020-05-29 Thread GitBox


tqchen commented on pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695#issuecomment-636141164


   cc @junrushao1994 @MarisaKirisame 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5693: [PatternLang]Conditionally Embedding Constants in Partitioned Functions

2020-05-29 Thread GitBox


comaniac commented on a change in pull request #5693:
URL: https://github.com/apache/incubator-tvm/pull/5693#discussion_r432604623



##
File path: src/relay/ir/dataflow_matcher.cc
##
@@ -613,6 +613,25 @@ class PatternGrouper {
 CHECK_EQ(groups_[gid_].gid, gid_);
   }
 
+  bool EmbedConst(const Expr& expr, const DFPattern pattern) {

Review comment:
   Maybe provide a function description to explain the constant-embedding rules?

##
File path: python/tvm/relay/dataflow_pattern/__init__.py
##
@@ -318,15 +318,14 @@ class VarPattern(DFPattern):
 Parameters
 --
 name_hint: str
-The name of the variable.
-This name only acts as a hint, and is not used
-for equality.
+The name of the variable. Optional, if not provided,
+the pattern will match any VarNode

Review comment:
   period.

##
File path: docs/langref/relay_pattern.rst
##
@@ -266,10 +266,10 @@ Attribute Pattern
 
 Check that the operator matched by the pattern has an attribute with a 
particular value.
 
-Input
-*
+Variable Pattern
+
 
-Check that the expression is an input, i.e has no parents and is a variable.
+Check that the expression is a relay Variable, and optional provide a name to match to the Variable name

Review comment:
   period.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (ddf7190 -> 3ee2270)

2020-05-29 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ddf7190  [REFACTOR][RELAY] move fallback_device to config (#5690)
 add 3ee2270  @zhiics -> PPMC (#5692)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[incubator-tvm] branch master updated (3ee2270 -> 06bf8b0)

2020-05-29 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3ee2270  @zhiics -> PPMC (#5692)
 add 06bf8b0  [COMMUNITY] @masahi -> PPMC (#5691)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] tmoreau89 merged pull request #5691: [COMMUNITY] @masahi -> PPMC

2020-05-29 Thread GitBox


tmoreau89 merged pull request #5691:
URL: https://github.com/apache/incubator-tvm/pull/5691


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tmoreau89 merged pull request #5692: [COMMUNITY] @zhiics -> PPMC

2020-05-29 Thread GitBox


tmoreau89 merged pull request #5692:
URL: https://github.com/apache/incubator-tvm/pull/5692


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics opened a new pull request #5698: [REFACTOR][RELAY] Replace build_config with PassContext

2020-05-29 Thread GitBox


zhiics opened a new pull request #5698:
URL: https://github.com/apache/incubator-tvm/pull/5698


   Per #5650, this PR replaces `relay.build_config` with `tvm.transform.PassContext`. The original `relay.build_config` API is kept, with a warning message, to maintain backward compatibility for downstream deployments. It will be removed in the next release.
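
   A before/after sketch of the migration (illustrative snippet assuming the usual relay.build flow):

   ```python
   # Before (deprecated; still works, with a warning):
   with relay.build_config(opt_level=3):
       graph, lib, params = relay.build(mod, target="llvm", params=params)

   # After:
   with tvm.transform.PassContext(opt_level=3):
       graph, lib, params = relay.build(mod, target="llvm", params=params)
   ```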
   
   @tqchen @junrushao1994 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] cbalint13 opened a new pull request #5697: [ONNX] Skip ADD inside Gemm op when vector is zero

2020-05-29 Thread GitBox


cbalint13 opened a new pull request #5697:
URL: https://github.com/apache/incubator-tvm/pull/5697


   **Description**
   
   This PR fixes errors in the ONNX Gemm operator import of bias-less dense operators.
   
   
   **Fixes**
   
   Gemm -> ```Y = alpha*A*B + beta*C```
   
   There are two cases fixed / optimized here:
   
   1. ```C``` is a zero-valued constant (e.g. mxnet ```Dense(use_bias=False)``` exports).
   2. ```beta == 0```, so we can skip the addition (usually of a bias vector).
   
   **Cases**
   
   In case 1, without this fix:
   ```
   %11 = multiply(1f, %bias11) an internal invariant was violated while typechecking your program Check failed: t0->dtype == t1->dtype (float32 vs. int64)
   ```
   or worse (after fixing the type casts) we get size issues (```C``` is more like a constant instead of a vector):
   ```
   %12 = nn.bias_add(%10, %11) unable to unify: `Tensor[(512), float32]` and `Tensor[(1), int64]`;
   ```
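
   A small numeric check of the skip rule (a numpy sketch with made-up shapes, not the frontend code):

   ```python
   import numpy as np

   A, B = np.random.rand(2, 3), np.random.rand(3, 4)
   C, alpha, beta = np.zeros(4), 1.0, 0.0
   full = alpha * (A @ B) + beta * C
   skipped = alpha * (A @ B)  # the bias add is a no-op when beta == 0 or C is all zeros
   assert np.allclose(full, skipped)
   ```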
   
   
   Thank You !
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-29 Thread GitBox


zhiics commented on a change in pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#discussion_r432583680



##
File path: src/relay/backend/build_module.cc
##
@@ -304,7 +304,12 @@ class RelayBuildModule : public runtime::ModuleNode {
 // Handle heterogeneous compilation.
 transform::PassContext pass_ctx = PassContext::Current();
 if (targets_.size() > 1) {
-  relay_module = RunDeviceAnnotationPass(relay_module, pass_ctx->fallback_device);
+  Optional<IntImm> opt_fallback_dev =
+      pass_ctx->GetConfig("relay.fallback_device_type",
+                          IntImm(runtime::DataType::Int(32), static_cast<int>(kDLCPU)));

Review comment:
   Yeah, I tried this but the compilation failed. I can take a deeper look 
though.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-29 Thread GitBox


zhiics commented on a change in pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#discussion_r432583081



##
File path: tests/cpp/relay_transform_sequential.cc
##
@@ -70,7 +70,7 @@ TEST(Relay, Sequential) {
   auto mod = IRModule::FromExpr(func);
   auto pass_ctx = relay::transform::PassContext::Create();
   pass_ctx->opt_level = 3;
-  pass_ctx->fallback_device = 1;
+  pass_ctx->config.Set("relay.fallback_device_type", IntImm(DataType::Int(32), 1));

Review comment:
   Yeah, I tried this but the compilation failed. I can take a deeper look 
though. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-05-29 Thread GitBox


tqchen commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-636028237


   Thanks @majiang31312, please let me know if you would like to attempt a fix. cc @kazum @wpan11nv @yongfeng-nv @roastduck 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-29 Thread GitBox


tqchen commented on a change in pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#discussion_r432541406



##
File path: tests/cpp/relay_transform_sequential.cc
##
@@ -70,7 +70,7 @@ TEST(Relay, Sequential) {
   auto mod = IRModule::FromExpr(func);
   auto pass_ctx = relay::transform::PassContext::Create();
   pass_ctx->opt_level = 3;
-  pass_ctx->fallback_device = 1;
+  pass_ctx->config.Set("relay.fallback_device_type", IntImm(DataType::Int(32), 1));

Review comment:
   After merging the code, I find that we can further simplify it to pass_ctx->config.Set("relay.fallback_device_type", Integer(1));

##
File path: src/relay/backend/build_module.cc
##
@@ -304,7 +304,12 @@ class RelayBuildModule : public runtime::ModuleNode {
 // Handle heterogeneous compilation.
 transform::PassContext pass_ctx = PassContext::Current();
 if (targets_.size() > 1) {
-  relay_module = RunDeviceAnnotationPass(relay_module, pass_ctx->fallback_device);
+  Optional<IntImm> opt_fallback_dev =
+      pass_ctx->GetConfig("relay.fallback_device_type",
+                          IntImm(runtime::DataType::Int(32), static_cast<int>(kDLCPU)));

Review comment:
   Integer(kDLCPU)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-29 Thread GitBox


tqchen commented on pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690#issuecomment-636017068


   Thanks @zhiics @junrushao1994 !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (3698d5d -> ddf7190)

2020-05-29 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3698d5d  [RELAY] Fix segfault in pretty print when ObjectRef is null 
(#5681)
 add ddf7190  [REFACTOR][RELAY] move fallback_device to config (#5690)

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/transform.h |  5 -
 python/tvm/ir/transform.py | 18 +-
 python/tvm/relay/transform/transform.py| 10 ++
 src/ir/transform.cc|  7 ++-
 src/relay/backend/build_module.cc  |  7 ++-
 src/relay/ir/transform.cc  |  2 ++
 tests/cpp/relay_transform_sequential.cc|  2 +-
 tests/python/relay/test_pass_annotation.py | 12 ++--
 8 files changed, 20 insertions(+), 43 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5690: [REFACTOR][RELAY] move fallback_device to config

2020-05-29 Thread GitBox


tqchen merged pull request #5690:
URL: https://github.com/apache/incubator-tvm/pull/5690


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5681: [RELAY] Fix segfault in pretty print when ObjectRef is null

2020-05-29 Thread GitBox


tqchen merged pull request #5681:
URL: https://github.com/apache/incubator-tvm/pull/5681


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5681: [RELAY] Fix segfault in pretty print when ObjectRef is null

2020-05-29 Thread GitBox


tqchen commented on pull request #5681:
URL: https://github.com/apache/incubator-tvm/pull/5681#issuecomment-636016677


   Thanks @lhutton1 !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (05b1b23 -> 3698d5d)

2020-05-29 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 05b1b23  [Relay] Fix dataflow_pattern.rewrite() hang if Match in IR 
(#5680)
 add 3698d5d  [RELAY] Fix segfault in pretty print when ObjectRef is null 
(#5681)

No new revisions were added by this update.

Summary of changes:
 src/printer/relay_text_printer.cc  |  7 ---
 tests/python/relay/test_ir_text_printer.py | 10 ++
 2 files changed, 14 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5696: [ONNX]LpPool Support added

2020-05-29 Thread GitBox


siju-samuel opened a new pull request #5696:
URL: https://github.com/apache/incubator-tvm/pull/5696


   Added support of 1D/2D/3D LpPool for ONNX Frontend
   
   @masahi @FrozenGene please help me to review and merge this PR. TIA.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5601: [DataType] Add bfloat16

2020-05-29 Thread GitBox


Menooker commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r432449575



##
File path: src/tir/transforms/bf16_legalize.cc
##
@@ -0,0 +1,384 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file bf16_legalize.cc
+ * \brief legalize bf16 type by adding cast_to_fp32
+ */
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+
+class BF16PromoteRewriter : public StmtExprMutator {
+ public:
+  BF16PromoteRewriter() {}
+
+  Stmt operator()(Stmt s) { return VisitStmt(s); }
+
+  std::tuple<PrimExpr, PrimExpr> DoCast(PrimExpr orig_a, PrimExpr orig_b, bool* is_bfloat16) {
+auto a = this->VisitExpr(orig_a);
+auto b = this->VisitExpr(orig_b);
+*is_bfloat16 = false;
+if (a->dtype.is_bfloat16()) {
+  CHECK(b->dtype.is_bfloat16());
+  *is_bfloat16 = true;
+} else if (b->dtype.is_bfloat16()) {
+  CHECK(a->dtype.is_bfloat16());
+  *is_bfloat16 = true;
+}
+
+if (*is_bfloat16) {
+  DataType fp32ty(kDLFloat, 32, 1);
+  a = CastNode::make(fp32ty, a);
+  b = CastNode::make(fp32ty, b);
+}
+return std::make_tuple(a, b);
+  }
+
+  PrimExpr VisitExpr_(const AddNode* op) final;
+  PrimExpr VisitExpr_(const SubNode* op) final;
+  PrimExpr VisitExpr_(const MulNode* op) final;
+  PrimExpr VisitExpr_(const DivNode* op) final;
+  PrimExpr VisitExpr_(const MinNode* op) final;
+  PrimExpr VisitExpr_(const MaxNode* op) final;
+  PrimExpr VisitExpr_(const LTNode* op) final;
+  PrimExpr VisitExpr_(const LENode* op) final;
+  PrimExpr VisitExpr_(const GTNode* op) final;
+  PrimExpr VisitExpr_(const GENode* op) final;
+};
+
+#define DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(OP, FUNC)        \
+  PrimExpr BF16PromoteRewriter::VisitExpr_(const OP* op) {       \
+    PrimExpr a, b;                                               \
+    bool is_bfloat16;                                            \
+    std::tie(a, b) = DoCast(op->a, op->b, &is_bfloat16);         \
+    if (a.same_as(op->a) && b.same_as(op->b)) {                  \
+      return GetRef<PrimExpr>(op);                               \
+    } else {                                                     \
+      auto ret = FUNC(a, b);                                     \
+      if (!is_bfloat16)                                          \
+        return ret;                                              \
+      else                                                       \
+        return CastNode::make(DataType(kTVMBFloat, 16, 1), ret); \
+    }                                                            \
+  }
+
+#define DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(OP, FUNC) \
+  PrimExpr BF16PromoteRewriter::VisitExpr_(const OP* op) {        \
+    PrimExpr a, b;                                                \
+    bool is_bfloat16;                                             \
+    std::tie(a, b) = DoCast(op->a, op->b, &is_bfloat16);          \
+    if (a.same_as(op->a) && b.same_as(op->b)) {                   \
+      return GetRef<PrimExpr>(op);                                \
+    } else {                                                      \
+      auto ret = FUNC(a, b);                                      \
+      return ret;                                                 \
+    }                                                             \
+  }
+
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(AddNode, operator+)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(SubNode, operator-)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MulNode, operator*)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(DivNode, div)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MinNode, min)
+DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH(MaxNode, max)
DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(LTNode, operator<)   // NOLINT(*)
DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(LENode, operator<=)  // NOLINT(*)
DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(GTNode, operator>)   // NOLINT(*)
DEFINE_BIOP_EXPR_MUTATE_WITH_TYPE_MATCH_NO_CAST(GENode, 

[GitHub] [incubator-tvm] Menooker commented on pull request #5601: [DataType] Add bfloat16

2020-05-29 Thread GitBox


Menooker commented on pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#issuecomment-635945266


   RFC Discussion link
   https://discuss.tvm.ai/t/rfc-add-bfloat16-data-type/6778



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] handar423 opened a new pull request #5695: fix small bug about dense_grad

2020-05-29 Thread GitBox


handar423 opened a new pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695


   To whom it may concern,
   Hello, I am learning TVM and was writing a training model with Relay when I hit an error in `dense_grad()` with data of size `5*4` and weight of size `3*4`. It appears to be caused by a small bug in `dense_grad()`.
   
   The present `dense_grad()` is:
   ```python
   @register_gradient("nn.dense")
   def dense_grad(orig, grad):
       """Returns [grad' @ weight, data @ grad']"""
       data, weight = orig.args
       return [collapse_sum_like(transpose(grad) * weight, data),
               collapse_sum_like(data * transpose(grad), weight)]
   ```
   In the general case, when we take the gradient of `dense(data(i * j), weight(k * j))`, the incoming `grad` matrix has size `i * k`. In the code above, the first elementwise multiply therefore receives operands of size `k * i` and `k * j`, and the second receives operands of size `i * j` and `k * i`, so broadcasting only succeeds when `i == j == k` or some of these dimensions are `1`.
   To make the function more robust, we can modify it to:
   ```python
   @register_gradient("nn.dense")
   def dense_grad(orig, grad):
       """Returns [grad' @ weight, data @ grad']"""
       data, weight = orig.args
       return [collapse_sum_like(_nn.dense(grad, transpose(weight)), data),
               collapse_sum_like(_nn.dense(transpose(grad), transpose(data)), weight)]
   ```
   We replace the elementwise multiply (`*`) with `_nn.dense` so that a true matrix multiply is performed. With the shapes above, the first `_nn.dense()` receives parameters of size `i * k` and `j * k` and produces a result of size `i * j`, the same as `data`; the second receives parameters of size `k * i` and `j * i` and produces a result of size `k * j`, the same as `weight`. We add an extra test case in `test_dense_grad()` to verify its correctness.
   I am just starting to learn about TVM, so I apologize if I missed something obvious. Thank you very much!
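
A quick NumPy check of the shape argument in the report, using its sizes (i=5, j=4, k=3). Since `dense(a, b)` computes `a @ b.T`, the proposed gradients reduce to plain matrix products; this is an illustrative sketch, not the Relay test itself.

```python
import numpy as np

i, j, k = 5, 4, 3
data = np.random.rand(i, j).astype("float32")
weight = np.random.rand(k, j).astype("float32")
grad = np.random.rand(i, k).astype("float32")  # gradient of dense(data, weight), shape (i, k)

# New formulation: dense(a, b) == a @ b.T, so
grad_data = grad @ weight    # dense(grad, weight.T)  -> (i, j), matches data
grad_weight = grad.T @ data  # dense(grad.T, data.T)  -> (k, j), matches weight
assert grad_data.shape == data.shape and grad_weight.shape == weight.shape

# Old formulation: elementwise products whose operand shapes do not broadcast
try:
    _ = grad.T * weight      # (k, i) * (k, j) -> fails unless i == j (or one is 1)
except ValueError as e:
    print("old form fails:", e)
```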



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] ZhennanQin commented on a change in pull request #5601: [DataType] Add bfloat16

2020-05-29 Thread GitBox


ZhennanQin commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r432418422



##
File path: src/tir/transforms/bf16_legalize.cc
##
@@ -0,0 +1,384 @@
(quoted diff hunk identical to the one shown above; omitted)

[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5601: [DataType] Add bfloat16

2020-05-29 Thread GitBox


Menooker commented on a change in pull request #5601:
URL: https://github.com/apache/incubator-tvm/pull/5601#discussion_r432401610



##
File path: src/tir/transforms/bf16_legalize.cc
##
@@ -0,0 +1,384 @@
(quoted diff hunk identical to the one shown above; omitted)

[GitHub] [incubator-tvm] masahi commented on pull request #5680: [Relay] Fix dataflow_pattern.rewrite() hang if Match in IR

2020-05-29 Thread GitBox


masahi commented on pull request #5680:
URL: https://github.com/apache/incubator-tvm/pull/5680#issuecomment-635902736


   Thanks @lixiaoquan @mbrookhart 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi merged pull request #5680: [Relay] Fix dataflow_pattern.rewrite() hang if Match in IR

2020-05-29 Thread GitBox


masahi merged pull request #5680:
URL: https://github.com/apache/incubator-tvm/pull/5680


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (2599c2c -> 05b1b23)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2599c2c  [PYTORCH]Minor bug fixes (#5683)
 add 05b1b23  [Relay] Fix dataflow_pattern.rewrite() hang if Match in IR 
(#5680)

No new revisions were added by this update.

Summary of changes:
 src/relay/ir/expr_functor.cc                | 18 +++---
 tests/python/relay/test_dataflow_pattern.py | 13 +
 2 files changed, 28 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] liangfu commented on pull request #5456: Creates a TVM wheel install

2020-05-29 Thread GitBox


liangfu commented on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-635887785


   > I suspect we will need to package llvm libraries within the wheel.
   
   If that is the case, shall we build LLVM as a shared library or a static library? We might need a separate RFC to collect ideas from the community.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lixiaoquan commented on pull request #5680: [Relay] Fix dataflow_pattern.rewrite() hang if Match in IR

2020-05-29 Thread GitBox


lixiaoquan commented on pull request #5680:
URL: https://github.com/apache/incubator-tvm/pull/5680#issuecomment-635846042


   cc @masahi  Could you please review and merge this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mshawcroft commented on pull request #5456: Creates a TVM wheel install

2020-05-29 Thread GitBox


mshawcroft commented on pull request #5456:
URL: https://github.com/apache/incubator-tvm/pull/5456#issuecomment-635819109


   > Just a high-level question, how shall we integrate with LLVM?
   
   Hi! The Python world has a series of PEPs that define the standards for portable packages/wheels. PEP 513 was the original PEP (https://www.python.org/dev/peps/pep-0513/), now superseded by PEP 599 (https://www.python.org/dev/peps/pep-0599/).
   
   Those standards define which system .so's a package can assume are present on a system; any library not covered by them should be included within the package itself. The PEPs also link to various tools that help construct conforming portable packages, e.g. the manylinux project, which provides dockerized build environments for building conformant wheels, along with associated auditing tools.
   
   Short answer: in order to get portable wheels that can be widely distributed and consumed, I suspect we will need to package the LLVM libraries within the wheel.
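
For concreteness, a minimal sketch of the usual manylinux audit step; the wheel filename below is a hypothetical build artifact, not an existing TVM release.

```python
import subprocess

# 'auditwheel show' lists the external shared libraries a built wheel links
# against, per the PEP 513/599 whitelists; 'auditwheel repair' would then vendor
# anything non-whitelisted (e.g. LLVM's libraries) into the wheel.
wheel = "dist/tvm-0.7.dev0-cp38-cp38-linux_x86_64.whl"  # hypothetical path
subprocess.run(["auditwheel", "show", wheel], check=True)
```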



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #5683: [PYTORCH]Minor bug fixes

2020-05-29 Thread GitBox


masahi commented on pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683#issuecomment-635805483


   Thanks @siju-samuel 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (95b3ad9 -> 2599c2c)

2020-05-29 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 95b3ad9  [PatternLang] Add ConstantPattern (#5689)
 add 2599c2c  [PYTORCH]Minor bug fixes (#5683)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py          |  58 ++--
 tests/python/frontend/pytorch/test_forward.py | 183 ++
 2 files changed, 227 insertions(+), 14 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #5683: [PYTORCH]Minor bug fixes

2020-05-29 Thread GitBox


masahi merged pull request #5683:
URL: https://github.com/apache/incubator-tvm/pull/5683


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org