[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5484: [REFACTOR][RPC][PROCOTOL-CHANGE] Modularize the RPC infra

2020-05-02 Thread GitBox


tqchen commented on a change in pull request #5484:
URL: https://github.com/apache/incubator-tvm/pull/5484#discussion_r419038742



##
File path: apps/cpp_rpc/rpc_server.cc
##
@@ -217,10 +217,10 @@ class RPCServer {
* \param opts Parsed options for socket
* \param ping_period Timeout for select call waiting
*/
-  void AcceptConnection(TrackerClient* tracker, 
+  void AcceptConnection(TrackerClient* tracker,

Review comment:
   this is because apps are not covered by the linter, as opposed to `src/`









[GitHub] [incubator-tvm] kazum opened a new pull request #5503: [RUST][RUNTIME] Fix workspace

2020-05-02 Thread GitBox


kazum opened a new pull request #5503:
URL: https://github.com/apache/incubator-tvm/pull/5503


   - `!ws_size >= size` means `(!ws_size) >= size`, which is obviously wrong.
   - Return an error when an invalid pointer is passed to `TVMBackendFreeWorkspace`.
   
   @jroesch @nhynes @ehsanmok Please help to review.
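
   For readers skimming the digest: unary `!` binds tighter than `>=`, and in Rust `!` on an integer is bitwise NOT rather than logical negation, so the guard never tests what was intended. A minimal C sketch of the same precedence trap, with hypothetical `ws_size`/`size` values rather than the actual runtime code:

   ~~~
   #include <stdio.h>

   int main(void) {
     size_t ws_size = 16, size = 32;
     /* Parses as (!ws_size) >= size: !16 is 0, and 0 >= 32 is false, so
        the buggy guard misses the undersized workspace entirely. */
     if (!ws_size >= size)
       printf("unreachable for nonzero ws_size\n");
     /* The intended check needs explicit parentheses. */
     if (!(ws_size >= size))
       printf("workspace too small\n");
     return 0;
   }
   ~~~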







[GitHub] [incubator-tvm] roastduck commented on pull request #5382: [TE] Fix MakeLoopNest for warp memory

2020-05-02 Thread GitBox


roastduck commented on pull request #5382:
URL: https://github.com/apache/incubator-tvm/pull/5382#issuecomment-623041791


   It would be great if someone could review this.







[GitHub] [incubator-tvm] roastduck commented on a change in pull request #5498: [Optimization] Warp level reduction support for CUDA

2020-05-02 Thread GitBox


roastduck commented on a change in pull request #5498:
URL: https://github.com/apache/incubator-tvm/pull/5498#discussion_r419033372



##
File path: tests/python/integration/test_reduce.py
##
@@ -338,6 +338,102 @@ def check_target(device):
 check_target("cuda")
 check_target("vulkan")
 
+def test_warp_reduction1():
+    m = 32
+    n = 128
+    A = te.placeholder((m, n), name='A')
+    k = te.reduce_axis((0, n))
+    B = te.compute((m,), lambda i: te.max(A[i][k], axis=k), name='B')
+
+    nthx = 32
+    nthy = 4
+    block_x = te.thread_axis("blockIdx.x")
+    thread_x = te.thread_axis((0, nthx), "threadIdx.x")
+    thread_y = te.thread_axis((0, nthy), "threadIdx.y")
+    s = te.create_schedule(B.op)
+
+    def check_target(device):
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:
+            print("skip because %s is not enabled.." % device)
+            return
+
+        # schedule
+        k = s[B].op.reduce_axis[0]
+        ko, _ = s[B].split(k, nparts=nthx)
+        s[B].bind(ko, thread_x)
+        xo, xi = s[B].split(s[B].op.axis[0], factor=nthy)
+        s[B].bind(xi, thread_y)
+        s[B].bind(xo, block_x)
+
+        # validation.
+        func = tvm.build(s, [A, B], "cuda", name="warp_reduction")
+        a_np = np.random.uniform(size=(m,n)).astype(A.dtype)
+        b_np = np.zeros((m,), dtype=A.dtype)
+        a = tvm.nd.array(a_np, ctx)
+        b = tvm.nd.array(b_np, ctx)
+        b_np = np.max(a_np, axis=1)
+        func(a, b)
+        tvm.testing.assert_allclose(b.asnumpy(), b_np, rtol=1e-3, atol=1e-3)
+
+    check_target("cuda")
+
+def test_warp_reduction2():
+    def fcombine(x, y):
+        return x[0] + y[0], x[1] * y[1]
+
+    def fidentity(t0, t1):
+        return tvm.tir.const(0, t0), tvm.tir.const(1, t1)
+
+    add_mul_reducer = te.comm_reducer(fcombine, fidentity, name='add_mul_reducer')
+
+    # compute
+    m = 16
+    n = 256
+    A0 = te.placeholder((m, n), name='A0', dtype='float32')
+    A1 = te.placeholder((m, n), name='Al', dtype='float32')
+    k = te.reduce_axis((0, n), 'k')
+    T0, T1 = te.compute((m, ), lambda i: \
+        add_mul_reducer((A0[i, k], A1[i, k]), axis=k), name='T')
+
+    nthdx, nthdy = 32, 2
+    block_x = te.thread_axis("blockIdx.x")
+    thread_x = te.thread_axis((0, nthdx), "threadIdx.x")
+    thread_y = te.thread_axis((0, nthdy), "threadIdx.y")
+
+    def check_target(device):
+        ctx = tvm.context(device, 0)
+        if not ctx.exist:

Review comment:
   Can we check the compute capability inside codegen for `tvm_warp_shuffle_sync`? If `__shfl_sync` is not supported, we can just fall back to `__shfl`.
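
   For context, a sketch of what such a fallback could look like on the CUDA side; `warp_shuffle` is a hypothetical helper (not TVM's actual generated code), and it keys off the CUDA 9 toolkit, which introduced the `_sync` variants:

   ~~~
   __device__ __forceinline__ float warp_shuffle(float val, int src_lane) {
   #if __CUDACC_VER_MAJOR__ >= 9
     // CUDA 9+ provides the synchronizing variant; the legacy __shfl is
     // removed for sm_70 and later architectures.
     return __shfl_sync(0xffffffff, val, src_lane);
   #else
     return __shfl(val, src_lane);  // pre-CUDA-9 toolkits only
   #endif
   }
   ~~~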









[GitHub] [incubator-tvm] roastduck commented on a change in pull request #5498: [Optimization] Warp level reduction support for CUDA

2020-05-02 Thread GitBox


roastduck commented on a change in pull request #5498:
URL: https://github.com/apache/incubator-tvm/pull/5498#discussion_r419033202



##
File path: src/target/source/intrin_rule_cuda.cc
##
@@ -91,6 +106,19 @@ static void DispatchCUDAShuffle(const TVMArgs& args, TVMRetValue* rv) {
       call->dtype, "__shfl", cuda_args, CallNode::PureExtern);
 }
 
+static void DispatchCUDAShuffleSync(const TVMArgs& args, TVMRetValue* rv) {
+  PrimExpr e = args[0];
+  const CallNode* call = e.as<CallNode>();
+  CHECK(call != nullptr);
+  CHECK_EQ(call->args.size(), 4);  // value, warp_id/offset, width, warp_size
+
+  // mask is ignored.
+  const std::string& name = CUDAShuffleSync()(call->name);
+  auto mask = IntImm(DataType::UInt(32), 0xffffffff);

Review comment:
   Can we keep `mask` as an argument? It is critical when there is branch divergence; `0xffffffff` is only a special case.
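
   For context, a sketch of why the mask matters under divergence; `sum_active_lanes` is a hypothetical helper, not code from this PR:

   ~~~
   __device__ float sum_active_lanes(float val) {
     // A hard-coded 0xffffffff names all 32 lanes; if some lanes took
     // another branch, including them in the mask is undefined behavior.
     // __activemask() reports the lanes that actually reached this point.
     unsigned mask = __activemask();
     for (int offset = 16; offset > 0; offset >>= 1)
       val += __shfl_down_sync(mask, val, offset);
     return val;
   }
   ~~~

   Even `__activemask()` is only a stopgap, since it does not force reconvergence; threading the mask down from the branch condition itself is exactly what keeping it as an argument enables.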









[GitHub] [incubator-tvm] liangfu commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-02 Thread GitBox


liangfu commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r419019525



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   [VTA.cmake](https://github.com/apache/incubator-tvm/blob/master/cmake/modules/VTA.cmake) is an example of a CMake config for building a standalone lib. Also, given that it's unfriendly to require users to navigate to src/runtime/hexagon/sim/driver, and that having a CMakeLists.txt under the src directory seems a bit odd to me, I think it's better to put this under `cmake`.









[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #5484: [REFACTOR][RPC][PROCOTOL-CHANGE] Modularize the RPC infra

2020-05-02 Thread GitBox


tmoreau89 commented on a change in pull request #5484:
URL: https://github.com/apache/incubator-tvm/pull/5484#discussion_r419014962



##
File path: apps/cpp_rpc/rpc_server.cc
##
@@ -217,10 +217,10 @@ class RPCServer {
* \param opts Parsed options for socket
* \param ping_period Timeout for select call waiting
*/
-  void AcceptConnection(TrackerClient* tracker, 
+  void AcceptConnection(TrackerClient* tracker,

Review comment:
   if trailing spaces are ending up in the source code, should we turn on the linter?









[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5455: [CI] [TEST] test_conv2d_int8_intrinsics

2020-05-02 Thread GitBox


anijain2305 edited a comment on issue #5455:
URL: https://github.com/apache/incubator-tvm/issues/5455#issuecomment-623004592


   I was able to reproduce the failure. I have not been able to solve it yet. The only pointer I have so far is that if I disable tensorize (this test uses tensorize to target Intel VNNI), the test progresses.
   
   I am not familiar with const int bound analysis. I will try to get familiar with it and see how tensorize impacts const int bounds.
   
   
   ~~~
   [19:48:06] /home/ubuntu/workplace/tvm/t1/tvm/src/arith/const_int_bound.cc:153: Expr = 15
   [19:48:06] /home/ubuntu/workplace/tvm/t1/tvm/src/arith/const_int_bound.cc:154: Bounds = ConstIntBound[63,63]
   ~~~







[GitHub] [incubator-tvm] anijain2305 commented on issue #5455: [CI] [TEST] test_conv2d_int8_intrinsics

2020-05-02 Thread GitBox


anijain2305 commented on issue #5455:
URL: https://github.com/apache/incubator-tvm/issues/5455#issuecomment-623004592


   I was able to reproduce the failure. I have not been able to solve it yet. The only pointer I have so far is that if I disable tensorize (this test uses tensorize to target Intel VNNI), the test progresses.
   
   I am not familiar with const int bound analysis. I will try to get familiar with it and see how tensorize impacts const int bounds.







[GitHub] [incubator-tvm] anijain2305 commented on pull request #5479: [Relay-TFLite] FP32 and Quantized Object Detection Model

2020-05-02 Thread GitBox


anijain2305 commented on pull request #5479:
URL: https://github.com/apache/incubator-tvm/pull/5479#issuecomment-623000806


   @mbaret @FrozenGene @u99127 @siju-samuel 
   
   Please review.







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5467: [Relay]Improve Shape Func handling for Tuple inputs

2020-05-02 Thread GitBox


kevinthesun commented on a change in pull request #5467:
URL: https://github.com/apache/incubator-tvm/pull/5467#discussion_r418991866



##
File path: src/relay/op/memory/memory.cc
##
@@ -360,12 +360,26 @@ bool ShapeFuncRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   auto tuple = TupleType(func_type->arg_types);
   auto in_types = FlattenTupleType(tuple);
   auto out_types = FlattenTupleType(func_type->ret_type);
+  int num_types = 0;

Review comment:
   @jroesch Improved the implementation with FlattenTupleType; it can handle nested tuple types now. Also updated the test case to include a nested tuple.









[incubator-tvm] branch master updated (c7a16d8 -> 6347406)

2020-05-02 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c7a16d8  [Rust] Fixes for wasm32 target (#5489)
 add 6347406  [uTVM] Reset target and wait for runtime initialization on connect. (#5499)

No new revisions were added by this update.

Summary of changes:
 src/runtime/micro/openocd_low_level_device.cc | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5502: [TOPI][RELAY][TENSORFLOW]Math ops added

2020-05-02 Thread GitBox


siju-samuel opened a new pull request #5502:
URL: https://github.com/apache/incubator-tvm/pull/5502


   Added relay/topi/tensorflow support for the following ops.
   
   - Acos
   - Acosh
   - Asin
   - Asinh
   - Atanh
   - Cosh
   - Sinh
   
   
   @FrozenGene @masahi Please help to review this PR.







[GitHub] [incubator-tvm] kparzysz-quic commented on a change in pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-02 Thread GitBox


kparzysz-quic commented on a change in pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#discussion_r418974927



##
File path: src/runtime/hexagon/sim/driver/CMakeLists.txt
##
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   This is a standalone `CMakeLists.txt`. The files in contrib seem to be cmake "sub-files" meant for inclusion in other cmake files. Could you elaborate on what you were suggesting?









[GitHub] [incubator-tvm] cchung100m opened a new pull request #5501: [TIR][REFACTOR] std::string -> String Migration in TIR nodes

2020-05-02 Thread GitBox


cchung100m opened a new pull request #5501:
URL: https://github.com/apache/incubator-tvm/pull/5501


   Hi @tqchen @zhiics 
   
   Following issue #5490, this PR works on the `std::string` -> `String` migration in TIR nodes. I would appreciate it if you could help review it, many thanks.







[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5236: [WIP][TVM][.NET] Introduce TVM.NET project

2020-05-02 Thread GitBox


ANSHUMAN87 commented on pull request #5236:
URL: https://github.com/apache/incubator-tvm/pull/5236#issuecomment-622821231


   @tqchen : I believe phase-0 development is complete now!
   Please help review! Thank you very much!
   
   Maybe we can change the status of the PR now!







[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-05-02 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r418919431



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -18,13 +18,17 @@
 # pylint: disable=invalid-name,unused-argument,wildcard-import,unused-wildcard-import
 import logging
 
+import re
 import topi
 from tvm.te import SpecializedCondition
 from .generic import *
 from .. import op as _op
 
 logger = logging.getLogger('strategy')
 
+_NCHWc_matcher = re.compile("^NCHW[-+]?[0-9]+c$")

Review comment:
   Now changed to `NCHW?[0-9]+c`









[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-05-02 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r418919351



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -84,8 +88,13 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
         raise ValueError("dilation should be positive value")
 
     if groups == 1:
-        if layout == "NCHW":
-            assert kernel_layout == "OIHW"
+        if layout.startswith("NCHW"):

Review comment:
   OK, changed. Extracted the shared code handling NCHW and NCHWc into a nested function.

##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -113,8 +122,13 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
         else:
             raise RuntimeError("Unsupported conv2d layout {} for x86".format(layout))
     elif is_depthwise_conv2d(data.shape, layout, kernel.shape, kernel_layout, groups):
-        if layout == "NCHW":
-            assert kernel_layout == "OIHW"
+        if layout.startswith("NCHW"):

Review comment:
   changed as required









[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-05-02 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r418919208



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -380,8 +418,10 @@ Expr AddSubForwardRewrite(const Call& ref_call,
   if (slhs != nullptr) {
     CHECK(srhs == nullptr);
     CHECK(MatchBroadcastToLeftAxes(tlhs, trhs, slhs->axes));
-    Expr scale = ExpandBiasToMatchAxis(
-        slhs->scale, tlhs->shape.size(), slhs->axes);
+    Expr scale = ReshapeOrExpandToMatchAxis(
+        slhs->scale, tlhs->shape, slhs->axes);
+    if (!scale.defined())
+      return Expr();

Review comment:
   changed as required









[GitHub] [incubator-tvm] Menooker commented on a change in pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-05-02 Thread GitBox


Menooker commented on a change in pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#discussion_r418919219



##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -390,8 +430,10 @@ Expr AddSubForwardRewrite(const Call& ref_call,
   } else {
     CHECK(srhs != nullptr);
     CHECK(MatchBroadcastToLeftAxes(trhs, tlhs, srhs->axes));
-    Expr scale = ExpandBiasToMatchAxis(
-        srhs->scale, trhs->shape.size(), srhs->axes);
+    Expr scale = ReshapeOrExpandToMatchAxis(
+        srhs->scale, trhs->shape, srhs->axes);
+    if (!scale.defined())
+      return Expr();

Review comment:
   changed as required

##
File path: src/relay/transforms/fold_scale_axis.cc
##
@@ -314,6 +316,42 @@ class ForwardPrep : private ExprVisitor {
   }
 };
 
+static bool IsIntInArray(const Array<Integer>& axis, int v) {
+  for (size_t i = 0; i < axis.size(); i++) {
+    if (axis[i] == v)
+      return true;
+  }
+  return false;
+}
+
+static Expr ReshapeToMatchAxis(Expr scale, const Array<PrimExpr>& shape,
+                               const Array<Integer>& axis) {
+  Array<Integer> arr;
+  for (size_t i = 0; i < shape.size(); i++) {
+    if (IsIntInArray(axis, i)) {
+      auto node = shape[i].as<IntImmNode>();
+      if (!node) {
+        // if the shape is not a constant, use normal transform
+        return Expr();
+      }
+      arr.push_back(node->value);
+    } else {
+      arr.push_back(1);
+    }
+  }
+  return MakeReshape(scale, std::move(arr));
+}
+
+// if only one axis, use expand dim. Else, use reshape
+static Expr ReshapeOrExpandToMatchAxis(Expr scale, const Array<PrimExpr>& shape,
+                                       const Array<Integer>& axis) {

Review comment:
   changed as required









[GitHub] [incubator-tvm] srkreddy1238 commented on issue #5404: [Gradient] Building module out of backward function (gradient pass) fails.

2020-05-02 Thread GitBox


srkreddy1238 commented on issue #5404:
URL: https://github.com/apache/incubator-tvm/issues/5404#issuecomment-622699482


   Doesn't reproduce on the latest version. Closing.


