[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5547: [Refactor][std::string --> String] IR is updated with String

2020-05-10 Thread GitBox


ANSHUMAN87 commented on pull request #5547:
URL: https://github.com/apache/incubator-tvm/pull/5547#issuecomment-626469811


   Gentle ping @tqchen, @jroesch, thanks! 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] roastduck commented on pull request #5553: [TIR] Enable HoistIfThenElse in the default lowering procedure

2020-05-10 Thread GitBox


roastduck commented on pull request #5553:
URL: https://github.com/apache/incubator-tvm/pull/5553#issuecomment-626459848


   Fixed a bug where `if` nodes containing thread indices could be hoisted over the definition of those indices. This would happen when the `Attr` node for `thread_extent` is scheduled into the body of a `For` node via a `compute_at` command; a minimal sketch of that situation is shown below.
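
   A hedged illustration (not this PR's test case; the shapes, names, and schedule are made up) of a `te` schedule in which `compute_at` places a `thread_extent` `Attr` inside a `For` node:

   ```python
   import tvm
   from tvm import te

   n = te.var("n")
   A = te.placeholder((n, 16), name="A")
   B = te.compute((n, 16), lambda i, j: A[i, j] + 1.0, name="B")
   C = te.compute((n, 16), lambda i, j: B[i, j] * 2.0, name="C")

   s = te.create_schedule(C.op)
   # Nest B's computation under C's outer loop, then bind one of B's axes to a
   # thread index: the resulting thread_extent Attr sits inside a For node,
   # which is exactly the shape of IR the hoisting fix has to respect.
   s[B].compute_at(s[C], C.op.axis[0])
   s[B].bind(B.op.axis[1], te.thread_axis("threadIdx.x"))

   print(tvm.lower(s, [A, C], simple_mode=True))
   ```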







[incubator-tvm] branch master updated: [RUNTIME] Hexagon driver for offloading kernels to simulator (#5492)

2020-05-10 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 29ae608  [RUNTIME] Hexagon driver for offloading kernels to simulator 
(#5492)
29ae608 is described below

commit 29ae608baabb673fb0b2d3ced9321f0e1798f72e
Author: Krzysztof Parzyszek 
AuthorDate: Sun May 10 22:05:59 2020 -0500

[RUNTIME] Hexagon driver for offloading kernels to simulator (#5492)

* [RUNTIME] Hexagon driver for offloading kernels to simulator

* Add sim_dev as external project when building with Hexagon/sim support

* Change target CPU for sim_dev to v60
---
 cmake/modules/Hexagon.cmake|   9 +
 src/runtime/hexagon/sim/driver/CMakeLists.txt  |  62 +++
 src/runtime/hexagon/sim/driver/README.md   |  38 ++
 src/runtime/hexagon/sim/driver/fake_pthread.cc | 292 +
 src/runtime/hexagon/sim/driver/pthread.h   |  96 +
 src/runtime/hexagon/sim/driver/sched.h |  31 ++
 src/runtime/hexagon/sim/driver/sim_device.cc   | 573 +
 src/runtime/threading_backend.cc   |  11 +
 8 files changed, 1112 insertions(+)

diff --git a/cmake/modules/Hexagon.cmake b/cmake/modules/Hexagon.cmake
index e70a964..30b4ccb 100644
--- a/cmake/modules/Hexagon.cmake
+++ b/cmake/modules/Hexagon.cmake
@@ -15,6 +15,8 @@
 # specific language governing permissions and limitations
 # under the License.
 
+include(ExternalProject)
+
 set(PICK_SIM  "sim")
 set(PICK_HW   "target")
 set(PICK_NONE "OFF")
@@ -77,6 +79,13 @@ if(USE_HEXAGON_DEVICE STREQUAL "${PICK_SIM}")
   include_directories("${HEXAGON_TOOLCHAIN}/include/iss")
   link_directories("${HEXAGON_TOOLCHAIN}/lib/iss")
   list(APPEND TVM_RUNTIME_LINKER_LIBS "-lwrapper")
+  ExternalProject_Add(sim_dev
+SOURCE_DIR "${CMAKE_SOURCE_DIR}/src/runtime/hexagon/sim/driver"
+CMAKE_ARGS
+  "-DCMAKE_C_COMPILER=${HEXAGON_TOOLCHAIN}/bin/hexagon-clang"
+  "-DCMAKE_CXX_COMPILER=${HEXAGON_TOOLCHAIN}/bin/hexagon-clang++"
+INSTALL_COMMAND "true"
+  )
 elseif(USE_HEXAGON_DEVICE STREQUAL "${PICK_HW}")
   find_hexagon_sdk_root()
   find_hexagon_toolchain()
diff --git a/src/runtime/hexagon/sim/driver/CMakeLists.txt 
b/src/runtime/hexagon/sim/driver/CMakeLists.txt
new file mode 100644
index 000..8632b49
--- /dev/null
+++ b/src/runtime/hexagon/sim/driver/CMakeLists.txt
@@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+project(SIM_DEV C CXX)
+cmake_minimum_required(VERSION 3.0.2)
+
+set(CMAKE_SYSTEM_NAME "Linux")
+
+if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+  include(${CMAKE_CURRENT_BINARY_DIR}/config.cmake)
+endif()
+
+set(EXTRA_CXX_FLAGS
+  "-O2"
+  "-Wno-format"
+  "-mhvx -mhvx-length=128b"
+  "-mv60"
+  "-stdlib=libc++"
+)
+
+set(EXTRA_LINK_FLAGS
+  "-stdlib=libc++"
+  "-G0"
+  "-Wl,--force-dynamic"
+  "-Wl,--export-dynamic"
+  "-Wl,--whole-archive"   # This should link entire libc, libc++ and libc+abi.
+  "-Wl,--defsym=HEAP_SIZE=0x4000"
+)
+
+string(REGEX REPLACE ";" " " EXTRA_CXX_FLAGS_STR "${EXTRA_CXX_FLAGS}")
+string(REGEX REPLACE ";" " " EXTRA_LINK_FLAGS_STR "${EXTRA_LINK_FLAGS}")
+
+set(CMAKE_CXX_STANDARD 11)
+set(CMAKE_CXX_FLAGS "${EXTRA_CXX_FLAGS_STR} ${CMAKE_CXX_FLAGS}")
+set(CMAKE_EXE_LINKER_FLAGS "${EXTRA_LINK_FLAGS_STR} ${CMAKE_EXE_LINKER_FLAGS}")
+
+# Set project properties.
+
+file(GLOB SOURCE_FILES "*.cc")
+add_executable(sim_dev ${SOURCE_FILES})
+target_include_directories(sim_dev
+  PUBLIC "."
+  PUBLIC ".."
+  PUBLIC "../../../../../include"
+  PUBLIC "../../../../../3rdparty/dlpack/include"
+)
+
+target_link_libraries(sim_dev "-ldl")
diff --git a/src/runtime/hexagon/sim/driver/README.md 
b/src/runtime/hexagon/sim/driver/README.md
new file mode 100644
index 000..3aee1a1
--- /dev/null
+++ b/src/runtime/hexagon/sim/driver/README.md
@@ -0,0 +1,38 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Hexagon simulator driver
+
+The driver (`sim_dev` executable) is the process running on the Hexagon 
simulator that handles the Hexagon-side communication with the TVM runtime 
running on x86. The location of `sim_dev` should be added to `PATH` 

[GitHub] [incubator-tvm] FrozenGene commented on pull request #5492: [RUNTIME] Hexagon driver for offloading kernels to simulator

2020-05-10 Thread GitBox


FrozenGene commented on pull request #5492:
URL: https://github.com/apache/incubator-tvm/pull/5492#issuecomment-626446001


   Thanks @kparzysz-quic and @liangfu!







[GitHub] [incubator-tvm] tqchen commented on pull request #5557: [LINT] clang-format the h,cc,m files.

2020-05-10 Thread GitBox


tqchen commented on pull request #5557:
URL: https://github.com/apache/incubator-tvm/pull/5557#issuecomment-626436163


   cc @yzhliu @tmoreau89 @zhiics @jroesch 







[GitHub] [incubator-tvm] tqchen opened a new pull request #5557: [LINT] clang-format the h,cc,m files.

2020-05-10 Thread GitBox


tqchen opened a new pull request #5557:
URL: https://github.com/apache/incubator-tvm/pull/5557


   This PR prepares for our migration to using clang-format as part of the linter system.
   







[GitHub] [incubator-tvm] tqchen edited a comment on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-05-10 Thread GitBox


tqchen edited a comment on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-626433807


   cc @wpan11nv @Hzfengsy. Also, a contribution is more than welcome, @kice.







[GitHub] [incubator-tvm] tqchen commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-05-10 Thread GitBox


tqchen commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-626433807


   cc @wpan11nv @Hzfengsy 







[GitHub] [incubator-tvm] kice opened a new issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-05-10 Thread GitBox


kice opened a new issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879


   Example code
   
   ```python
   import tvm
   from tvm import relay

   expr_text = """v0.0.3
   fn(%data: Tensor[(1, 1, 1, 1), uint8]) -> Tensor[(1, 1, 1, 1), uint8] {
     %0 = cast(%data, dtype="float16");
     cast(%0, dtype="uint8")
   }
   """

   func = relay.parser.fromtext(expr_text)

   with relay.build_config(opt_level=3):
       graph, lib, params = relay.build(func, target='cuda')
   ```
   
   Output:
   ```
   my_kernel.cu(4): error: more than one conversion function from "half" to 
"unsigned char" applies:
   function "__half::operator float() const"
   function "__half::operator short() const"
   function "__half::operator unsigned short() const"
   function "__half::operator int() const"
   function "__half::operator unsigned int() const"
   function "__half::operator long long() const"
   function "__half::operator unsigned long long() const"
   function "__half::operator __nv_bool() const"
   ```
   
   After looking into cuda_fp16.h, I found that there is no direct conversion from fp16 to uint8/int8.
   
   I suggest warning users who want this conversion, or converting to uint16/int16 first and then to uint8/int8 internally.
   
   You may close this issue after reading.
   
   P.S. I am sorry that I opened so many small PRs and issues in a short period of time. xD
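
   For what it's worth, a hedged sketch of the suggested workaround at the Relay level (same toy graph as above, with an explicit intermediate cast so the generated CUDA never converts `half` directly to `unsigned char`):

   ```python
   import tvm
   from tvm import relay

   expr_text = """v0.0.3
   fn(%data: Tensor[(1, 1, 1, 1), uint8]) -> Tensor[(1, 1, 1, 1), uint8] {
     %0 = cast(%data, dtype="float16");
     %1 = cast(%0, dtype="uint16");
     cast(%1, dtype="uint8")
   }
   """

   func = relay.parser.fromtext(expr_text)

   with relay.build_config(opt_level=3):
       graph, lib, params = relay.build(func, target='cuda')
   ```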







[GitHub] [incubator-tvm] kazum commented on issue #5464: [OpenCL] `directly 4 8 bit int in integer` causes compiling error

2020-05-10 Thread GitBox


kazum commented on issue #5464:
URL: https://github.com/apache/incubator-tvm/issues/5464#issuecomment-626430652


   @westion717 Can you share your program to reproduce this problem?







[GitHub] [incubator-tvm] kice commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-05-10 Thread GitBox


kice commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-626418827


   If you take a look here
   ```
   auto inter_type = op->type.is_int() ? Int(16) : UInt(16);
   value_int8 << CastFromTo(value_16.str(), op->value.type(), inter_type);
   os << CastFromTo(value_int8.str(), inter_type, op->type);
   ```
   
   We need to convert to `uint16` first and then cast to `uint8`; there is no direct conversion from `fp16` to `uint8`.
   
   ref: 
https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATHHALF__MISC.html#group__CUDA__MATHHALF__MISC







[GitHub] [incubator-tvm] kazum commented on pull request #5502: [TOPI][RELAY][TENSORFLOW]Math ops added

2020-05-10 Thread GitBox


kazum commented on pull request #5502:
URL: https://github.com/apache/incubator-tvm/pull/5502#issuecomment-626414971


   Thanks @siju-samuel !







[incubator-tvm] branch master updated (cdc7ae4 -> 28057b8)

2020-05-10 Thread kazum
This is an automated email from the ASF dual-hosted git repository.

kazum pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from cdc7ae4  [WEB] WebGPU support (#5545)
 add 28057b8  [TOPI][RELAY][TENSORFLOW]Math ops added (#5502)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tensorflow.py  |  7 ++
 python/tvm/relay/op/_tensor.py   |  7 +-
 python/tvm/relay/op/_tensor_grad.py  | 53 +++--
 python/tvm/relay/op/tensor.py| 75 ++
 python/tvm/te/__init__.py|  3 +-
 src/relay/op/tensor/unary.cc | 55 ++
 tests/python/frontend/tensorflow/test_forward.py | 97 +++-
 tests/python/relay/test_op_grad_level1.py|  9 ++-
 topi/include/topi/elemwise.h |  5 ++
 topi/python/topi/math.py | 84 
 topi/src/elemwise.cc | 25 ++
 11 files changed, 340 insertions(+), 80 deletions(-)



[GitHub] [incubator-tvm] masahi commented on a change in pull request #5552: [BYOC, MergeComposite] Add additional check before re-using the cached match

2020-05-10 Thread GitBox


masahi commented on a change in pull request #5552:
URL: https://github.com/apache/incubator-tvm/pull/5552#discussion_r422723875



##
File path: src/relay/transforms/merge_composite.cc
##
 @@ -122,7 +122,9 @@ class MergeCompositeWrapper : public ExprMutator {
      Expr new_arg;
      if (arg->IsInstance<CallNode>()) {
        // if we've already processed this call node, return the previous result
-       if (call_map->find(arg) != call_map->end()) {
+       if (call_map->find(arg) != call_map->end() &&
+           ExtractPattern(Downcast<Call>(arg), Downcast<Call>(root->args[i]), var_map, call_map)

Review comment:
   Thanks for the suggestion. I've cleaned up `CallNode` handling. 









[GitHub] [incubator-tvm] tqchen commented on issue #5172: [FRONTEND] tensorflow _stridedSlice function error

2020-05-10 Thread GitBox


tqchen commented on issue #5172:
URL: https://github.com/apache/incubator-tvm/issues/5172#issuecomment-626412738


   ping @kevinthesun 







[GitHub] [incubator-tvm] tqchen commented on issue #3731: [Relay][RFC] Relay support for Sparse Tensor

2020-05-10 Thread GitBox


tqchen commented on issue #3731:
URL: https://github.com/apache/incubator-tvm/issues/3731#issuecomment-626412595


   Closing for now due to inactivity.







[GitHub] [incubator-tvm] tqchen commented on issue #4542: [AutoTVM] Tuning fails for an NHWC network on Arm CPU

2020-05-10 Thread GitBox


tqchen commented on issue #4542:
URL: https://github.com/apache/incubator-tvm/issues/4542#issuecomment-626412499


   Closing for now due to inactivity; related work is tracked in #3859. Feel free to open a new thread on the discuss forum.







[GitHub] [incubator-tvm] tqchen commented on issue #4821: [WINDOWS][AutoTVM] OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted and O

2020-05-10 Thread GitBox


tqchen commented on issue #4821:
URL: https://github.com/apache/incubator-tvm/issues/4821#issuecomment-626412324


   Closing for now due to inactivity. Feel free to open new threads on https://discuss.tvm.ai/







[GitHub] [incubator-tvm] tqchen commented on issue #2355: [SPIRV] Incorrect Vulkan Result on Mobile GPU

2020-05-10 Thread GitBox


tqchen commented on issue #2355:
URL: https://github.com/apache/incubator-tvm/issues/2355#issuecomment-626412199


   Closing for now due to inactivity; we will open a new thread to track problems if there are more follow-ups.







[GitHub] [incubator-tvm] tqchen commented on issue #1996: [RFC][WIP] Tensor Expression level automatic differentiation

2020-05-10 Thread GitBox


tqchen commented on issue #1996:
URL: https://github.com/apache/incubator-tvm/issues/1996#issuecomment-626412036


   The tensor expression (te) AD has landed in the mainline.







[GitHub] [incubator-tvm] tqchen commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-05-10 Thread GitBox


tqchen commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-626411921


   Closing for now due to inactivity. I believe there were a few recent improvements by @wpan11nv that might be relevant; feel free to open a new forum thread or issue if there is more to be addressed.







[GitHub] [incubator-tvm] tqchen commented on issue #4262: [RELAY][Bug] 'name_hint' AttributeError issue when covert tensorflow to TVM

2020-05-10 Thread GitBox


tqchen commented on issue #4262:
URL: https://github.com/apache/incubator-tvm/issues/4262#issuecomment-626411709


   Closing for now due to inactivity. Feel free to open another thread on https://discuss.tvm.ai/







[GitHub] [incubator-tvm] tqchen commented on pull request #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-05-10 Thread GitBox


tqchen commented on pull request #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#issuecomment-626411470


   ping @yzhliu 







[GitHub] [incubator-tvm] tqchen commented on issue #2425: [WEB] Recover WASM and JS Flow on the Latest Emscripten

2020-05-10 Thread GitBox


tqchen commented on issue #2425:
URL: https://github.com/apache/incubator-tvm/issues/2425#issuecomment-626411557


   The new WebGPU and WASM support has now landed in the mainline.







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5465: [TIR] Convert if_then_else intrinsics to if-statements

2020-05-10 Thread GitBox


tqchen commented on a change in pull request #5465:
URL: https://github.com/apache/incubator-tvm/pull/5465#discussion_r422721228



##
File path: src/tir/transforms/thread_storage_sync.cc
##
 @@ -95,8 +95,6 @@ class ThreadSyncPlanner : public StorageAccessVisitor {
        }
      }
      if (sync_before_stmt) {
-       CHECK_EQ(condition_counter(), 0)
-           << "Cannot insert syncs inside condition";

Review comment:
   This check is necessary for GPUs









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5552: [BYOC, MergeComposite] Add additional check before re-using the cached match

2020-05-10 Thread GitBox


comaniac commented on a change in pull request #5552:
URL: https://github.com/apache/incubator-tvm/pull/5552#discussion_r422718091



##
File path: src/relay/transforms/merge_composite.cc
##
 @@ -122,7 +122,9 @@ class MergeCompositeWrapper : public ExprMutator {
      Expr new_arg;
      if (arg->IsInstance<CallNode>()) {
        // if we've already processed this call node, return the previous result
-       if (call_map->find(arg) != call_map->end()) {
+       if (call_map->find(arg) != call_map->end() &&
+           ExtractPattern(Downcast<Call>(arg), Downcast<Call>(root->args[i]), var_map, call_map)

Review comment:
   Would it be clearer if we lifted the new `ExtractPattern` check out of this if-block and combined it with the one at L136? It seems to me that those two are the same.









[GitHub] [incubator-tvm] tqchen commented on issue #5388: [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU

2020-05-10 Thread GitBox


tqchen commented on issue #5388:
URL: https://github.com/apache/incubator-tvm/issues/5388#issuecomment-626380340


   Closed, thanks to @samwyi!







[GitHub] [incubator-tvm] tqchen opened a new pull request #5556: [WEB] Setup lint, doc, test

2020-05-10 Thread GitBox


tqchen opened a new pull request #5556:
URL: https://github.com/apache/incubator-tvm/pull/5556


   







[GitHub] [incubator-tvm] anijain2305 commented on issue #5458: [CI] Fix gluoncv tutorial under mxnet-mkl

2020-05-10 Thread GitBox


anijain2305 commented on issue #5458:
URL: https://github.com/apache/incubator-tvm/issues/5458#issuecomment-626369512


   @icemelon9 Do you know about this operator?







[GitHub] [incubator-tvm] anijain2305 commented on issue #5455: [CI] [TEST] test_conv2d_int8_intrinsics

2020-05-10 Thread GitBox


anijain2305 commented on issue #5455:
URL: https://github.com/apache/incubator-tvm/issues/5455#issuecomment-626368043


   @tqchen I verified locally. This is resolved now. You can close this.







[GitHub] [incubator-tvm] tqchen opened a new pull request #5555: [CI] Update Jenkins ci-cpu to bionic

2020-05-10 Thread GitBox


tqchen opened a new pull request #5555:
URL: https://github.com/apache/incubator-tvm/pull/5555


   







[GitHub] [incubator-tvm] tqchen opened a new pull request #5554: [CI] Update ci-cpu to bionic

2020-05-10 Thread GitBox


tqchen opened a new pull request #5554:
URL: https://github.com/apache/incubator-tvm/pull/5554


   







[GitHub] [incubator-tvm] roastduck commented on pull request #5553: [TIR] Enable HoistIfThenElse in the default lowering procedure

2020-05-10 Thread GitBox


roastduck commented on pull request #5553:
URL: https://github.com/apache/incubator-tvm/pull/5553#issuecomment-626343810


   Renamed `Var`s in hoisted `else_case`s to be different from those in `then_case`s, so that SSA verification won't panic.







[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5511: [AutoTVM][TOPI] AutoTVM incorrect measurement

2020-05-10 Thread GitBox


cchung100m commented on a change in pull request #5511:
URL: https://github.com/apache/incubator-tvm/pull/5511#discussion_r422642720



##
File path: topi/python/topi/mali/conv2d.py
##
 @@ -138,20 +138,15 @@ def _schedule_spatial_pack(cfg, s, output, conv, data_vec, kernel_vec):
         s[data_vec].unroll(vw)
 
     if isinstance(kernel_vec.op, tvm.te.ComputeOp) and kernel_vec.name == 'kernel_vec':
-        if autotvm.GLOBAL_SCOPE.in_tuning:
-            # kernel packing will be pre-computed during compilation, so we skip
-            # this part to make tuning records correct
-            s[kernel_vec].pragma(s[kernel_vec].op.axis[0], 'debug_skip_region')
-        else:
-            max_threads = tvm.target.Target.current(allow_none=False).max_num_threads
-            co, ci, kh, kw, vc = s[kernel_vec].op.axis
-            fused = s[kernel_vec].fuse(co, ci, kh, kw, vc)
-            fused, vec = s[kernel_vec].split(fused, VC)
-            bb, tt = s[kernel_vec].split(fused, max_threads)
-            s[kernel_vec].bind(bb, te.thread_axis("blockIdx.x"))
-            s[kernel_vec].bind(tt, te.thread_axis("threadIdx.x"))
-            if VC in vec_size:
-                s[kernel_vec].vectorize(vec)
+        max_threads = tvm.target.Target.current(allow_none=False).max_num_threads

Review comment:
   Hi @kevinthesun 
   Thanks for the kind explanation. 
   
   Should we keep removing the `autotvm.GLOBAL_SCOPE.in_tuning` block here and replace the kernel with a placeholder in the correct layout at the position below? A rough sketch of that idea follows the link.
   
   
   
https://github.com/apache/incubator-tvm/blob/418da931c1806b35266bc7e12c3c8ffb01fa034b/topi/python/topi/mali/conv2d.py#L456
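
   For illustration, a rough sketch of that idea (hypothetical code, not the PR's; the shapes are made up): during tuning, measure against a placeholder that already carries the packed layout instead of skipping the packing stage:

   ```python
   from tvm import autotvm, te

   CO, CI, KH, KW, VC = 8, 8, 3, 3, 4
   if autotvm.GLOBAL_SCOPE.in_tuning:
       # pretend the packed kernel already exists, so the measured schedule
       # matches what inference runs with a pre-packed kernel
       kernel_vec = te.placeholder((CO, CI, KH, KW, VC), name="kernel_vec")
   else:
       kernel = te.placeholder((CO * VC, CI, KH, KW), name="kernel")
       kernel_vec = te.compute(
           (CO, CI, KH, KW, VC),
           lambda co, ci, kh, kw, vc: kernel[co * VC + vc, ci, kh, kw],
           name="kernel_vec",
       )
   ```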









[GitHub] [incubator-tvm] roastduck commented on pull request #5553: Enable HoistIfThenElse in the default lowering procedure

2020-05-10 Thread GitBox


roastduck commented on pull request #5553:
URL: https://github.com/apache/incubator-tvm/pull/5553#issuecomment-626306519


   Fixed and added an extra test.







[GitHub] [incubator-tvm] roastduck commented on pull request #5553: Enable HoistIfThenElse in the default lowering procedure

2020-05-10 Thread GitBox


roastduck commented on pull request #5553:
URL: https://github.com/apache/incubator-tvm/pull/5553#issuecomment-626296808


   Well, a test for TensorCore fails.
   
   `HoistIfThenElse` transforms
   
   ```
   for (n.inner, 0, 2) {
     for (o.inner, 0, 2) {
       if ((((threadIdx.y*2) + n.inner) < 2)) {
         if ((((threadIdx.z*2) + o.inner) < 4)) {
           if ((threadIdx.y < 1)) {
             if ((threadIdx.z < 2)) {
               tvm_store_matrix_sync(Conv.wmma.accumulator, 16, 16, 16, ((n.inner*2) + o.inner), tvm_access_ptr(type_annotation(), Conv, (((((threadIdx.y*401408) + (n.inner*200704)) + (blockIdx.z*1024)) + (threadIdx.z*512)) + (o.inner*256)), 256, 2), 16, "row_major")
             }
           }
         }
       }
     }
   }
   ```
   
   into
   
   ```
   if ((((threadIdx.y*2) + n.inner) < 2)) {
     if ((threadIdx.y < 1)) {
       if ((threadIdx.z < 2)) {
         for (n.inner, 0, 2) {
           for (o.inner, 0, 2) {
             if ((((threadIdx.z*2) + o.inner) < 4)) {
               tvm_store_matrix_sync(Conv.wmma.accumulator, 16, 16, 16, ((n.inner*2) + o.inner), tvm_access_ptr(type_annotation(), Conv, (((((threadIdx.y*401408) + (n.inner*200704)) + (blockIdx.z*1024)) + (threadIdx.z*512)) + (o.inner*256)), 256, 2), 16, "row_major")
             }
           }
         }
       }
     }
   }
   ```
   
   where the `if` containing `n.inner` is wrongly hoisted, which looks like a bug in `HoistIfThenElse`. I will try to dig into it.







[GitHub] [incubator-tvm] roastduck opened a new pull request #5553: Enable HoistIfThenElse in the default lowering procedure

2020-05-10 Thread GitBox


roastduck opened a new pull request #5553:
URL: https://github.com/apache/incubator-tvm/pull/5553


   This enables the `HoistIfThenElse` pass (#3865) in the default lowering procedure. `HoistIfThenElse` can be very helpful for sparse applications, since `LoopPartition` cannot eliminate their `if` statements with dynamic (unknown at compile time) conditions; a small sketch of such a case is given below.
   
   Changes:
   - Moved `HoistIfThenElse` from `src/tir/pass` to `src/tir/transform`.
   - Added it to `lower`.
   - Fixed a breaking test.
   
   @tqchen Requesting a review.
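
   For illustration only, a minimal sketch of a loop-invariant `if` built with the IR builder and run through the pass. The `tvm.tir.transform.HoistIfThenElse` name and the surrounding boilerplate are assumptions about how the pass is exposed after this move, not code from this PR:

   ```python
   import tvm
   from tvm import te

   ib = tvm.tir.ir_builder.create()
   n = te.size_var("n")
   m = te.size_var("m")            # dynamic value, unknown at compile time
   A = ib.pointer("float32", name="A")
   with ib.for_range(0, n, name="i") as i:
       with ib.if_scope(m > 0):    # condition does not depend on the loop variable
           A[i] = A[i] + 1.0
       with ib.else_scope():
           A[i] = A[i] - 1.0

   func = tvm.tir.PrimFunc([A, n, m], ib.get())
   mod = tvm.IRModule({"main": func})
   # LoopPartition cannot split on `m > 0`; HoistIfThenElse lifts the if above the loop.
   print(tvm.tir.transform.HoistIfThenElse()(mod))
   ```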







[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5547: [Refactor][std::string --> String] IR is updated with String

2020-05-10 Thread GitBox


ANSHUMAN87 commented on pull request #5547:
URL: https://github.com/apache/incubator-tvm/pull/5547#issuecomment-626287184


   @tqchen: Your comment is handled now. Please check. Thanks!


