[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #5740: [Draft][Object] Introduce runtime::Map

2020-06-05 Thread GitBox


junrushao1994 opened a new pull request #5740:
URL: https://github.com/apache/incubator-tvm/pull/5740


   TODO: finish this draft.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (de54754 -> 2ec7caa)

2020-06-05 Thread srk
This is an automated email from the ASF dual-hosted git repository.

srk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from de54754  Fix the values for test_fmod since it fails way too often 
otherwise (#5723)
 add 2ec7caa  fix small bug about dense_grad (#5695)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/_tensor_grad.py   | 7 ++++---
 tests/python/relay/test_op_grad_level2.py | 1 +
 2 files changed, 5 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] srkreddy1238 merged pull request #5695: fix small bug about dense_grad

2020-06-05 Thread GitBox


srkreddy1238 merged pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695


   







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5738: [REFACTOR][ARITH] Remove legacy compute_expr.h

2020-06-05 Thread GitBox


junrushao1994 commented on a change in pull request #5738:
URL: https://github.com/apache/incubator-tvm/pull/5738#discussion_r436232785



##
File path: include/tvm/tir/op.h
##
@@ -762,6 +778,15 @@ inline PrimExpr make_zero(DataType t) {
   }
   return make_const(t, 0);
 }
+
+template <typename FReduce>
+inline PrimExpr foldl(FReduce freduce, PrimExpr init_value, const Array<PrimExpr>& values) {

Review comment:
   Okay, it makes sense :-)
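For readers following the thread, the semantics of the `foldl` helper under discussion can be sketched in Python (a hypothetical stand-in for the C++ template in `include/tvm/tir/op.h`; the reducer and operand list below are illustrative):

```python
from functools import reduce

def foldl(freduce, init_value, values):
    # Left fold: combine the accumulator with each value from left to
    # right, mirroring foldl(FReduce, PrimExpr, Array<PrimExpr>) in C++.
    acc = init_value
    for v in values:
        acc = freduce(acc, v)
    return acc

# e.g. folding multiplication over a list of extents
assert foldl(lambda a, b: a * b, 1, [2, 3, 4]) == 24
# equivalent to the standard library's reduce with an initializer
assert foldl(lambda a, b: a + b, 0, [1, 2, 3]) == reduce(lambda a, b: a + b, [1, 2, 3], 0)
```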









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5738: [REFACTOR][ARITH] Remove legacy compute_expr.h

2020-06-05 Thread GitBox


tqchen commented on a change in pull request #5738:
URL: https://github.com/apache/incubator-tvm/pull/5738#discussion_r436231973



##
File path: include/tvm/tir/op.h
##
@@ -762,6 +778,15 @@ inline PrimExpr make_zero(DataType t) {
   }
   return make_const(t, 0);
 }
+
+template <typename FReduce>
+inline PrimExpr foldl(FReduce freduce, PrimExpr init_value, const Array<PrimExpr>& values) {

Review comment:
   Most of the functions in this file use stl_case (as they are part of the user-facing functions); we can discuss later whether we want to force all functions to conform to CamelCase.









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #5738: [REFACTOR][ARITH] Remove legacy compute_expr.h

2020-06-05 Thread GitBox


junrushao1994 commented on a change in pull request #5738:
URL: https://github.com/apache/incubator-tvm/pull/5738#discussion_r436230123



##
File path: include/tvm/tir/op.h
##
@@ -762,6 +778,15 @@ inline PrimExpr make_zero(DataType t) {
   }
   return make_const(t, 0);
 }
+
+template <typename FReduce>
+inline PrimExpr foldl(FReduce freduce, PrimExpr init_value, const Array<PrimExpr>& values) {

Review comment:
   @tqchen I don't have a strong opinion, but would it be slightly better to use `Foldl` instead, to conform to the naming convention?









[GitHub] [incubator-tvm] srkreddy1238 commented on issue #2518: [FRONTEND] Tensorflow Op: FIFOQueueV2 and QueueDequeueManyV2 are not supported

2020-06-05 Thread GitBox


srkreddy1238 commented on issue #2518:
URL: https://github.com/apache/incubator-tvm/issues/2518#issuecomment-639921759


   One option would be mapping them to Identity in the frontend.







[GitHub] [incubator-tvm] comaniac opened a new pull request #5739: [Bugfix] Fix reshape

2020-06-05 Thread GitBox


comaniac opened a new pull request #5739:
URL: https://github.com/apache/incubator-tvm/pull/5739


   Sorry for opening a PR similar to #5732; I found that some other places, such as the MXNet frontend, also use `reshape` with the new shape type `Tuple[IntImm, IntImm, int]`. This PR fixes `reshape` directly by trying to convert `IntImm` to a primitive integer type.
   
   This PR is more like a patch fix for #5429.
   
   cc @icemelon9 
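The conversion the PR describes can be sketched as follows (a hedged illustration: `FakeIntImm` stands in for `tvm.tir.IntImm`, and the real fix lives inside the reshape conversion, not a standalone helper):

```python
class FakeIntImm:
    """Stand-in for tvm.tir.IntImm: an integer wrapped in an object."""
    def __init__(self, value):
        self.value = value

def normalize_newshape(newshape):
    # Coerce IntImm-like entries (anything exposing .value) to plain ints
    # so downstream code sees a uniform Tuple[int, ...].
    return tuple(int(d.value) if hasattr(d, "value") else int(d) for d in newshape)

assert normalize_newshape((FakeIntImm(2), FakeIntImm(3), 4)) == (2, 3, 4)
```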







[GitHub] [incubator-tvm] tqchen commented on pull request #5738: [REFACTOR][ARITH] Remove legacy compute_expr.h

2020-06-05 Thread GitBox


tqchen commented on pull request #5738:
URL: https://github.com/apache/incubator-tvm/pull/5738#issuecomment-639891530


   @junrushao1994 yes, also remove the legacy templates







[GitHub] [incubator-tvm] alex96295 commented on issue #2518: [FRONTEND] Tensorflow Op: FIFOQueueV2 and QueueDequeueManyV2 are not supported

2020-06-05 Thread GitBox


alex96295 commented on issue #2518:
URL: https://github.com/apache/incubator-tvm/issues/2518#issuecomment-639853544


   @yongwww @zhiics sorry, how do you 'map ops to _undef'? I have the very same issue.







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5738: [REFACTOR][ARITH] Remove legacy compute_expr.h

2020-06-05 Thread GitBox


tqchen edited a comment on pull request #5738:
URL: https://github.com/apache/incubator-tvm/pull/5738#issuecomment-639849385


   cc @junrushao1994 @jroesch @ZihengJiang @yzhliu 







[GitHub] [incubator-tvm] anijain2305 commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-06-05 Thread GitBox


anijain2305 commented on pull request #4805:
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-639835635


   @inadob Can you please fix the CI error?







[GitHub] [incubator-tvm] tqchen merged pull request #5723: Fix the values for test_fmod since it fails way too often otherwise

2020-06-05 Thread GitBox


tqchen merged pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723


   







[incubator-tvm] branch master updated (592711e -> de54754)

2020-06-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 592711e  [TEST] Fix flaky 
topi/tests/python/test_topi_pooling.py:test_adaptive_pool (#5736)
 add de54754  Fix the values for test_fmod since it fails way too often 
otherwise (#5723)

No new revisions were added by this update.

Summary of changes:
 tests/python/integration/test_ewise.py | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on issue #5709: tvm/apps/bundle_deploy/build --> make failed

2020-06-05 Thread GitBox


tqchen commented on issue #5709:
URL: https://github.com/apache/incubator-tvm/issues/5709#issuecomment-639821849


   Please open a new thread on https://discuss.tvm.ai/ to followup







[GitHub] [incubator-tvm] aGiant commented on issue #5709: tvm/apps/bundle_deploy/build --> make failed

2020-06-05 Thread GitBox


aGiant commented on issue #5709:
URL: https://github.com/apache/incubator-tvm/issues/5709#issuecomment-639798666


   @tqchen 
   It worked on an Intel CPU under Ubuntu 18.04 LTS.
   Now we tried an ARM CPU under Armbian (compiled with tvm.target.arm_cpu('pynq') in build_model.py) and got this error:
   
   gcc -shared -Wall -std=c99 -O2 -fPIC -I/home/pi/incubator-tvm/include -I/home/pi/incubator-tvm/3rdparty/dmlc-core/include -I/home/pi/incubator-tvm/3rdparty/dlpack/include -fvisibility=hidden -o build/bundle_c.so bundle.c runtime.c build/model.o -pthread
   TVM_NUM_THREADS=1 build/demo_dynamic build/bundle.so build/cat.bin
   terminate called after throwing an instance of 'dmlc::Error'
     what():  [20:29:01] ../../src/runtime/graph/graph_runtime.cc:377: Check failed: pf != nullptr: no such function in module: fused_layout_transform_22
   Stack trace:
     [bt] (0) build/bundle.so(+0xda16) [0xb6c19a16]
     [bt] (1) build/bundle.so(tvm::runtime::GraphRuntime::CreateTVMOp(tvm::runtime::TVMOpParam const&, std::vector > const&, unsigned int)+0x42d) [0xb6c1e1d6]
     [bt] (2) build/bundle.so(tvm::runtime::GraphRuntime::SetupOpExecs()+0x305) [0xb6c24a5e]
     [bt] (3) build/bundle.so(tvm::runtime::GraphRuntime::Init(std::__cxx11::basic_string, std::allocator > const&, tvm::runtime::Module, std::vector > const&)+0x13b) [0xb6c24d80]
     [bt] (4) build/bundle.so(+0x18f26) [0xb6c24f26]
     [bt] (5) build/bundle.so(+0x19088) [0xb6c25088]
     [bt] (6) build/bundle.so(tvm_runtime_create+0x12f) [0xb6c17d4c]
     [bt] (7) build/demo_dynamic(+0x994) [0x42f994]
     [bt] (8) /lib/arm-linux-gnueabihf/libc.so.6(__libc_start_main+0x99) [0xb6d33fe6]
   







[GitHub] [incubator-tvm] tqchen commented on pull request #5716: [topi][relay] Add operation gather to relay.

2020-06-05 Thread GitBox


tqchen commented on pull request #5716:
URL: https://github.com/apache/incubator-tvm/pull/5716#issuecomment-639776043


   @abergeron please 
https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explicitly







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-06-05 Thread GitBox


kevinthesun commented on a change in pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r436123241



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3194,6 +3191,55 @@ def _convert_operator(self, op_name, inputs, attrs,
 raise NotImplementedError("Operator {} not 
implemented.".format(op_name))
 return sym
 
+def _licm_construct(self, loop_name, node_name):
+"""Construct a node by considering whether it is
+loop invariant with the given while loop. If yes, we
+generate a loop Variable. Otherwise, return regular
+converted relay expression.
+
+Parameters
+--
+loop_name : str
+TensorFlow while loop name to be checked.
+
+node_name : str
+TensorFlow node name.
+
+Returns
+---
+out : relay.Expr or relay.Var
+Converted relay expression or loop var.
+"""
+actual_expr = self._backtrack_construct(node_name)
+tn = node_name.split(':')
+node_name = tn[0].split("^")[-1]
+cloop_name = find_parent_loop_name(node_name, self._while_loop_name_set)
+
+if loop_name in self._while_loop_name_set and not cloop_name.startswith(loop_name):

Review comment:
   I added a fix to do loop var lifting inside while loop construction. This can at least fix the naming issue for the original TF while loop var nodes.
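The invariance test under review can be reduced to a small predicate (a simplified, hypothetical sketch; the loop-name prefix convention follows the diff above):

```python
def is_loop_invariant(loop_name, node_loop_name, while_loop_names):
    # A node is invariant w.r.t. `loop_name` when that while loop exists
    # and the node's own enclosing loop is not nested inside it.
    return loop_name in while_loop_names and not node_loop_name.startswith(loop_name)

# A node defined at the graph root is invariant for the outer loop...
assert is_loop_invariant("while/outer", "root", {"while/outer"})
# ...but a node inside the loop body is not.
assert not is_loop_invariant("while/outer", "while/outer/body", {"while/outer"})
```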









[GitHub] [incubator-tvm] abergeron edited a comment on pull request #5723: Fix the values for test_fmod since it fails way too often otherwise

2020-06-05 Thread GitBox


abergeron edited a comment on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639732620


   I did something to fix the values so that the relative difference will never 
be too big.
   
   I also have some code to replace the fmod expression on Metal, but that 
might be better in a separate PR.







[GitHub] [incubator-tvm] abergeron commented on pull request #5723: Fix the values for test_fmod since it fails way too often otherwise

2020-06-05 Thread GitBox


abergeron commented on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639732620


   I did something to fix the values so that the relative difference will 
never be too big.
   
   I also have some code to replace the fmod expression on Metal, but that 
might be better in a separate PR.







[GitHub] [incubator-tvm] tqchen merged pull request #5736: [TEST][FLAKY] Fix flaky test_topi_pooling.py:test_adaptive_pool

2020-06-05 Thread GitBox


tqchen merged pull request #5736:
URL: https://github.com/apache/incubator-tvm/pull/5736


   







[incubator-tvm] branch master updated (24597f0 -> 592711e)

2020-06-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 24597f0  Fix reshape usage in ARM Winograd (#5732)
 add 592711e  [TEST] Fix flaky 
topi/tests/python/test_topi_pooling.py:test_adaptive_pool (#5736)

No new revisions were added by this update.

Summary of changes:
 topi/tests/python/test_topi_pooling.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] abergeron commented on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-05 Thread GitBox


abergeron commented on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639716825


   Well that is not the problem since it only makes a difference if x or y is 
negative and we don't test that case.







[GitHub] [incubator-tvm] abergeron commented on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-05 Thread GitBox


abergeron commented on pull request #5723:
URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639695379


   I discovered that the problem is not that the values are too close; it's that fmod() on Metal doesn't have the same behaviour.
   
   `fmod(x, y)` expands to `x - y * trunc(x/y)` instead of `x - y * floor(x/y)` like on most other platforms.
   
   So I think the solution is to change the codegen for fmod on Metal. I'll try to do that.
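The divergence between the two expansions is easy to demonstrate (Python sketch; the function names are illustrative):

```python
import math

def fmod_trunc(x, y):
    # Metal-style expansion: x - y * trunc(x / y), matching C's fmod.
    return x - y * math.trunc(x / y)

def fmod_floor(x, y):
    # Floor-based expansion: x - y * floor(x / y), matching Python's %.
    return x - y * math.floor(x / y)

# The two agree when x and y are positive...
assert fmod_trunc(7.0, 3.0) == fmod_floor(7.0, 3.0) == 1.0
# ...but differ as soon as an operand is negative.
assert fmod_trunc(-7.0, 3.0) == -1.0
assert fmod_floor(-7.0, 3.0) == 2.0
```

This is consistent with the earlier observation that the discrepancy only matters when x or y is negative.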







[incubator-tvm] branch master updated (fee5d54 -> 24597f0)

2020-06-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from fee5d54  Change 'delete's in Relay VM Instruction dtor to 'delete[]'s 
(#5735)
 add 24597f0  Fix reshape usage in ARM Winograd (#5732)

No new revisions were added by this update.

Summary of changes:
 topi/python/topi/arm_cpu/conv2d_alter_op.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5732: [TOPI][ARM] Fix reshape usage in ARM schedule

2020-06-05 Thread GitBox


tqchen merged pull request #5732:
URL: https://github.com/apache/incubator-tvm/pull/5732


   







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5736: [TEST][FLAKY] Fix flaky test_topi_pooling.py:test_adaptive_pool

2020-06-05 Thread GitBox


junrushao1994 commented on pull request #5736:
URL: https://github.com/apache/incubator-tvm/pull/5736#issuecomment-639645065


   @tqchen it would be interesting to use pytest’s repeat to detect all potentially flaky tests
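As a rough sketch of that idea, each test can be stress-run many times to estimate its failure rate (the `pytest-repeat` plugin offers a `--count` option for the same purpose; the harness below is illustrative, not TVM infrastructure):

```python
def _passes(test_fn):
    # Run one test invocation, treating an AssertionError as a failure.
    try:
        test_fn()
        return True
    except AssertionError:
        return False

def failure_rate(test_fn, runs=200):
    # Rerun a test many times and report how often it fails; a flakiness
    # sweep would apply this to every test in the suite.
    failures = sum(1 for _ in range(runs) if not _passes(test_fn))
    return failures / runs

def deterministic_test():
    assert sorted([3, 1, 2]) == [1, 2, 3]

def always_fails():
    assert 1 == 2

assert failure_rate(deterministic_test) == 0.0
assert failure_rate(always_fails) == 1.0
```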







[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5737: [TOPI][RELAY][PYTORCH]Conv3d_transpose op support added

2020-06-05 Thread GitBox


siju-samuel opened a new pull request #5737:
URL: https://github.com/apache/incubator-tvm/pull/5737


   Conv3d_transpose implementation.








[GitHub] [incubator-tvm] zhiics closed issue #5728: Mismatched new / delete [ ] in Relay VM

2020-06-05 Thread GitBox


zhiics closed issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728


   







[incubator-tvm] branch master updated (e1b1171 -> fee5d54)

2020-06-05 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e1b1171  ROCm: Add warp shuffles and enable reductions (#5727)
 add fee5d54  Change 'delete's in Relay VM Instruction dtor to 'delete[]'s 
(#5735)

No new revisions were added by this update.

Summary of changes:
 src/runtime/vm/vm.cc | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-05 Thread GitBox


tqchen commented on pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639568074


   @junrushao1994 we can also send a PR to add the comment about the consistency







[GitHub] [incubator-tvm] tqchen commented on pull request #5736: [TEST][FLAKY] Fix flaky test_topi_pooling.py:test_adaptive_pool

2020-06-05 Thread GitBox


tqchen commented on pull request #5736:
URL: https://github.com/apache/incubator-tvm/pull/5736#issuecomment-639566907


   cc @kevinthesun @yzhliu @ZihengJiang @tmoreau89 







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-05 Thread GitBox


junrushao1994 commented on pull request #5730:
URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639564805


   I see. Looks like this might affect downstream projects too, because kDLInt is not encouraged any more.
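A rough Python illustration of why separating the two enums matters (the member names follow the TVM C API headers, but the definitions below are illustrative, not the real ones):

```python
from enum import IntEnum

class DLDataTypeCode(IntEnum):
    # Codes describing tensor element types (mirrors dlpack's enum).
    kDLInt = 0
    kDLUInt = 1
    kDLFloat = 2

class ArgTypeCode(IntEnum):
    # Codes describing how a packed-function argument is passed;
    # a distinct namespace even where the numeric values overlap.
    kTVMArgInt = 0
    kTVMArgFloat = 2
    kTVMOpaqueHandle = 3

# Overlapping numeric values are a trap when the enums are conflated:
assert int(DLDataTypeCode.kDLInt) == int(ArgTypeCode.kTVMArgInt)
# Separating them makes the intent explicit at each use site.
assert DLDataTypeCode.kDLInt is not ArgTypeCode.kTVMArgInt
```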







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5214: [Runtime][Contrib] Support cudnn softmax

2020-06-05 Thread GitBox


tqchen edited a comment on pull request #5214:
URL: https://github.com/apache/incubator-tvm/pull/5214#issuecomment-639560910


   https://github.com/apache/incubator-tvm/pull/5600 for improving softmax with 
warp shuffle.







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-05 Thread GitBox


tqchen edited a comment on pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-639560545


   cc @icemelon9 it might be useful to revisit the softmax strategy, given that 
the perf has been improved







[GitHub] [incubator-tvm] tqchen commented on pull request #5214: [Runtime][Contrib] Support cudnn softmax

2020-06-05 Thread GitBox


tqchen commented on pull request #5214:
URL: https://github.com/apache/incubator-tvm/pull/5214#issuecomment-639560910


   https://github.com/apache/incubator-tvm/pull/5600







[GitHub] [incubator-tvm] tqchen commented on pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-05 Thread GitBox


tqchen commented on pull request #5600:
URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-639560545


   cc @icemelon9 it might be useful to revisit the softmax strategy.







[GitHub] [incubator-tvm] tqchen commented on pull request #5733: Make Tensor struct public in Rust runtime

2020-06-05 Thread GitBox


tqchen commented on pull request #5733:
URL: https://github.com/apache/incubator-tvm/pull/5733#issuecomment-639559062


   I think it is useful to have the struct available to other use cases. However, per the layered design of the TVM runtime, the current Tensor structure is not really C ABI compatible, so it has some cross-language concerns. The best way to use the struct is through a generic DLTensor ABI, which is available as part of TVM. I believe @jroesch would also do a follow-up PR around that.







[GitHub] [incubator-tvm] inadob commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-06-05 Thread GitBox


inadob commented on pull request #4805:
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-639512065


   > It seems you still have the old `3rdparty/dmlc-core`. You can check that by clicking on the "Files changed" tab.
   
   It's fixed now. 







[GitHub] [incubator-tvm] akosik-anyvision edited a comment on issue #5728: Mismatched new / delete [ ] in Relay VM

2020-06-05 Thread GitBox


akosik-anyvision edited a comment on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639488053


   > ah, I see. We should have used delete[]. You are welcome to send a PR. 
Thanks.
   
   Thanks for confirming. I would've done that directly, but I was waiting on 
approval from my company. 







[GitHub] [incubator-tvm] akosik-anyvision commented on issue #5728: Mismatched new / delete [ ] in Relay VM

2020-06-05 Thread GitBox


akosik-anyvision commented on issue #5728:
URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639488053


   > ah, I see. We should have used delete[]. You are welcome to send a PR. 
Thanks.

   Thanks for confirming. I would've done that directly, but I was waiting on 
approval from my company. 







[GitHub] [incubator-tvm] akosik-anyvision opened a new pull request #5735: Fix new[] / delete mismatches in Relay VM

2020-06-05 Thread GitBox


akosik-anyvision opened a new pull request #5735:
URL: https://github.com/apache/incubator-tvm/pull/5735


   Fixes #5728. 
   
   The Instruction dtor in the Relay VM currently uses `delete` to free memory 
that was allocated with `new[]`. This PR changes each occurrence of `delete` to 
`delete[]`.
   
   @zhiics please take a look, when you get a chance.
   







[GitHub] [incubator-tvm] aarongreig opened a new issue #5734: LLVM 10 compatibility

2020-06-05 Thread GitBox


aarongreig opened a new issue #5734:
URL: https://github.com/apache/incubator-tvm/issues/5734


   Hello
   
   I tried to build TVM linking against LLVM 10.0 and I got the following error:
   
   ```
   FAILED: CMakeFiles/tvm.dir/src/codegen/llvm/codegen_x86_64.cc.o 
   /usr/bin/c++  -DDMLC_USE_FOPEN64=0 -DNDEBUG -DTVM_LLVM_VERSION=100 -DTVM_THREADPOOL_USE_OPENMP=0 -Dtvm_EXPORTS -DDMLC_ENABLE_RTTI=0 -I../include -I../3rdparty/dlpack/include -I../3rdparty/dmlc-core/include -I../3rdparty/rang/include -I../3rdparty/compiler-rt -I../3rdparty/picojson -I../vta/include -I/home/aaron/build/llvm/include -I../topi/include -std=c++14 -faligned-new -O2 -Wall -fPIC  -O3 -DNDEBUG -fPIC   -pthread -fno-rtti -MD -MT CMakeFiles/tvm.dir/src/codegen/llvm/codegen_x86_64.cc.o -MF CMakeFiles/tvm.dir/src/codegen/llvm/codegen_x86_64.cc.o.d -o CMakeFiles/tvm.dir/src/codegen/llvm/codegen_x86_64.cc.o -c ../src/codegen/llvm/codegen_x86_64.cc
   ../src/codegen/llvm/codegen_x86_64.cc: In member function 'virtual llvm::Value* tvm::codegen::CodeGenX86_64::VisitExpr_(const tvm::ir::Cast*)':
   ../src/codegen/llvm/codegen_x86_64.cc:88:30: error: 'x86_avx512_mask_vcvtph2ps_512' is not a member of 'llvm::Intrinsic'
      ::llvm::Intrinsic::x86_avx512_mask_vcvtph2ps_512, 16, LLVMType(Float(32, from.lanes())),
      ^
   ../src/codegen/llvm/codegen_x86_64.cc:100:30: error: 'x86_vcvtph2ps_256' is not a member of 'llvm::Intrinsic'
      ::llvm::Intrinsic::x86_vcvtph2ps_256, 8, LLVMType(Float(32, from.lanes())),
      ^
   ```
   
   My CMake invocation was just `cmake -DCMAKE_BUILD_TYPE=Release -GNinja`. I used a custom `config.cmake` with `USE_LLVM` pointing to llvm-config.







[GitHub] [incubator-tvm] masahi commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-05 Thread GitBox


masahi commented on pull request #5727:
URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639376728


   Thanks @t-vi @wpan11nv @tqchen 







[incubator-tvm] branch master updated (fbc2b87 -> e1b1171)

2020-06-05 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from fbc2b87  [ONNX]MaxRoiPool, Mod & Xor op support added (#5729)
 add e1b1171  ROCm: Add warp shuffles and enable reductions (#5727)

No new revisions were added by this update.

Summary of changes:
 src/target/llvm/intrin_rule_rocm.cc|  52 +
 src/target/target.cc   |   3 +-
 src/tir/transforms/lower_thread_allreduce.cc   |  17 +-
 tests/python/integration/test_reduce.py|  12 +-
 tests/python/unittest/test_target_codegen_cuda.py  | 248 +++--
 tests/python/unittest/test_target_codegen_rocm.py  |   4 +-
 .../test_tir_transform_lower_warp_memory.py|  25 ++-
 topi/python/topi/cuda/softmax.py   |   5 +-
 8 files changed, 227 insertions(+), 139 deletions(-)



[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-06-05 Thread GitBox


kevinthesun commented on a change in pull request #5699:
URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r435748804



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3194,6 +3191,55 @@ def _convert_operator(self, op_name, inputs, attrs,
             raise NotImplementedError("Operator {} not implemented.".format(op_name))
         return sym
 
+    def _licm_construct(self, loop_name, node_name):
+        """Construct a node by considering whether it is
+        loop invariant with the given while loop. If yes, we
+        generate a loop Variable. Otherwise, return regular
+        converted relay expression.
+
+        Parameters
+        ----------
+        loop_name : str
+            TensorFlow while loop name to be checked.
+
+        node_name : str
+            TensorFlow node name.
+
+        Returns
+        -------
+        out : relay.Expr or relay.Var
+            Converted relay expression or loop var.
+        """
+        actual_expr = self._backtrack_construct(node_name)
+        tn = node_name.split(':')
+        node_name = tn[0].split("^")[-1]
+        cloop_name = find_parent_loop_name(node_name, self._while_loop_name_set)
+
+        if loop_name in self._while_loop_name_set and not cloop_name.startswith(loop_name):

Review comment:
   Indeed, when the user sets a tf op name such that the graph-def node name takes the format ```loop_name/xxx```, we can incorrectly recognize it as part of a while loop. The problem here is that the tf op name is part of the node name in the graph def, and there is no ```name``` attribute in the node attrs. For now I haven't found a better way to do licm node construction. In practice, this case should be rare, since a while-loop name is a complicated hierarchical combination of op and sub-graph names.
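
   A self-contained sketch of the ambiguity described above — the helper here is a simplified stand-in for the converter's `find_parent_loop_name`, not the actual implementation:

   ```python
   def find_parent_loop_name(node_name, while_loop_name_set):
       """Return the longest while-loop name that prefixes node_name, or ""."""
       parent = ""
       for loop_name in while_loop_name_set:
           if node_name.startswith(loop_name + "/") and len(loop_name) > len(parent):
               parent = loop_name
       return parent

   loops = {"model/while"}
   # A genuine loop-body node is matched via its name prefix:
   print(find_parent_loop_name("model/while/add", loops))       # model/while
   # ...but a user-chosen op name with the same prefix is indistinguishable,
   # because the graph def carries no separate `name` attribute:
   print(find_parent_loop_name("model/while/user_op", loops))   # model/while
   # Unrelated nodes match nothing:
   print(find_parent_loop_name("model/dense/bias_add", loops))  # ""
   ```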




