[GitHub] [incubator-tvm] yzhliu commented on issue #5121: [TE] reverse-mode autodiff without any optimization

2020-03-24 Thread GitBox
yzhliu commented on issue #5121: [TE] reverse-mode autodiff without any 
optimization
URL: https://github.com/apache/incubator-tvm/pull/5121#issuecomment-603648759
 
 
   @MarisaKirisame sure, I will try.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] ANSHUMAN87 commented on issue #5084: Duplicate likely nodes added when loop axis split unevenly

2020-03-24 Thread GitBox
ANSHUMAN87 commented on issue #5084: Duplicate likely nodes added when loop 
axis split unevenly
URL: https://github.com/apache/incubator-tvm/pull/5084#issuecomment-603636335
 
 
   @yzhliu: Gentle reminder!
   




[GitHub] [incubator-tvm] tqchen commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone Runtime

2020-03-24 Thread GitBox
tqchen commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone 
Runtime
URL: https://github.com/apache/incubator-tvm/issues/5060#issuecomment-603632782
 
 
   re: the fragmentation issue, thinking through the allocation strategies 
carefully and adopting an arena-style allocator (counter-based, as above) can 
likely resolve the fragmentation problem. In terms of the total memory cost, we 
can indeed find the cost at compile time for simple graph programs
   




[GitHub] [incubator-tvm] MarisaKirisame commented on a change in pull request #5144: [Relay][VM] Memory planner (part 1)

2020-03-24 Thread GitBox
MarisaKirisame commented on a change in pull request #5144: [Relay][VM] Memory 
planner (part 1)
URL: https://github.com/apache/incubator-tvm/pull/5144#discussion_r397530214
 
 

 ##
 File path: python/tvm/relay/transform/memory_plan.py
 ##
 @@ -0,0 +1,189 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return,invalid-name,len-as-condition,too-many-nested-blocks
+"""
+A pass for manifesting explicit memory allocations.
+"""
+import attr
+import numpy as np
+from typing import Optional, Dict
+
+from ..expr_functor import ExprMutator
+from ..scope_builder import ScopeBuilder
+from .. import op, ty, expr
+from ... import DataType, register_func, IRModule
+from .. import analysis
+from . import FoldConstant, InferType, function_pass
+from ..backend import compile_engine
+
+def is_primitive(call):
+    return hasattr(call, 'op') and hasattr(call.op, 'attrs') and \
+        hasattr(call.op.attrs, 'Primitive') and int(call.op.attrs.Primitive) == 1
+
+@attr.s(auto_attribs=True)
+class Region:
+    var: expr.Var
+    size: expr.Expr
+    alignment: Optional[expr.Expr]
+    dtype: Optional[str]
+    offsets: Dict[expr.Var, expr.Expr] = {}
+
+    def grow(self, old_storage: expr.Var, size: expr.Expr,
+             alignment: expr.Expr, dtype: str) -> None:
+        if self.dtype:
+            assert self.dtype == dtype, "must have matching dtypes in a region"
+        else:
+            self.dtype = dtype
+
+        if self.alignment:
+            assert analysis.alpha_equal(self.alignment, alignment), \
+                "must have matching alignments in a region"
+        else:
+            self.alignment = alignment
+
+        # Record the offset at which we allocate the storage.
+        self.offsets[old_storage] = self.size
+
+        self.size = self.size + size
+
+    def to_expr(self) -> expr.Expr:
+        return op.memory.alloc_storage(self.size, self.alignment, self.dtype)
+
+def iterative_let(let, each_binding, kont):
+    bindings = []
+    while isinstance(let, expr.Let):
+        lhs = let.var
+        rhs = let.value
+        bindings.append(each_binding(lhs, rhs))
+        let = let.body
+
+    return kont(bindings, let)
+
+def mk_let(bindings, body):
 
 Review comment:
   The let-list pattern is everywhere. Can you put this in some common file?




[incubator-tvm] branch master updated (686911e -> 3aabbd9)

2020-03-24 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 686911e  [Torch] Fix conv2d conversion for group conv (group > 1 but 
!= in channels) (#5132)
 add 3aabbd9  [Torch] Add support for max_pool1d (#5142)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 24 +-
 src/relay/op/nn/pooling.cc|  4 +++
 tests/python/frontend/pytorch/test_forward.py | 35 ++-
 3 files changed, 51 insertions(+), 12 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi merged pull request #5142: [Torch] Add support for max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142
 
 
   




[GitHub] [incubator-tvm] masahi commented on issue #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on issue #5142: [Torch] Add support for max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#issuecomment-603510415
 
 
   Thanks @wyc-ruiker 




[GitHub] [incubator-tvm] masahi commented on issue #5134: [RELAY] Add MergeCompilerRegions pass

2020-03-24 Thread GitBox
masahi commented on issue #5134: [RELAY] Add MergeCompilerRegions pass
URL: https://github.com/apache/incubator-tvm/pull/5134#issuecomment-603507803
 
 
   @mbaret Sorry, I haven't been following the development of the new 
partitioning algorithm. Is this related to annotator support for composite?




[GitHub] [incubator-tvm] jroesch commented on issue #5144: [Relay][VM] Memory planner (part 1)

2020-03-24 Thread GitBox
jroesch commented on issue #5144: [Relay][VM] Memory planner (part 1)
URL: https://github.com/apache/incubator-tvm/pull/5144#issuecomment-603496275
 
 
   cc @zhiics @yidawang @icemelon9 @MarisaKirisame 




[GitHub] [incubator-tvm] jroesch opened a new pull request #5144: [Relay][VM] Memory planner (part 1)

2020-03-24 Thread GitBox
jroesch opened a new pull request #5144: [Relay][VM] Memory planner (part 1)
URL: https://github.com/apache/incubator-tvm/pull/5144
 
 
   This PR adds a new pass to the memory scheduling phase of the VM. After we 
have made all memory allocations explicit, we now analyze for basic-block-style 
regions and allocate a single piece of storage for each. 
   
   The next PR will focus on how to overlap these to do memory compression, 
i.e., reduce the amount of live memory. 
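The coalescing idea (one backing buffer per basic-block region, each original allocation becoming an offset into it) can be sketched independently of Relay. This is an illustrative model only; `plan_region` and its signature are made up for the example, not the PR's API:

```python
def plan_region(alloc_sizes, alignment=64):
    """Assign each allocation an offset in one shared storage region.

    alloc_sizes: list of (name, size_in_bytes) pairs for one basic block.
    Returns (total_region_size, {name: offset}).
    """
    offsets = {}
    cursor = 0
    for name, size in alloc_sizes:
        # Round the cursor up to the requested alignment boundary.
        cursor = (cursor + alignment - 1) // alignment * alignment
        offsets[name] = cursor
        cursor += size
    return cursor, offsets

total, offsets = plan_region([("a", 100), ("b", 40), ("c", 200)])
print(total, offsets)  # 392 {'a': 0, 'b': 128, 'c': 192}
```

A single `alloc_storage` of `total` bytes then backs every tensor in the region, with each former allocation replaced by an offset view.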




[GitHub] [incubator-tvm] tqchen commented on issue #5132: [Torch] Fix conv2d conversion for group conv (group > 1 but != in channels)

2020-03-24 Thread GitBox
tqchen commented on issue #5132: [Torch] Fix conv2d conversion for group conv 
(group > 1 but != in channels)
URL: https://github.com/apache/incubator-tvm/pull/5132#issuecomment-603470964
 
 
   Thanks @masahi @jwfromm !




[GitHub] [incubator-tvm] MarisaKirisame commented on issue #5121: [TE] reverse-mode autodiff without any optimization

2020-03-24 Thread GitBox
MarisaKirisame commented on issue #5121: [TE] reverse-mode autodiff without any 
optimization
URL: https://github.com/apache/incubator-tvm/pull/5121#issuecomment-603454210
 
 
   (of course, not in this PR)




[GitHub] [incubator-tvm] MarisaKirisame commented on issue #5121: [TE] reverse-mode autodiff without any optimization

2020-03-24 Thread GitBox
MarisaKirisame commented on issue #5121: [TE] reverse-mode autodiff without any 
optimization
URL: https://github.com/apache/incubator-tvm/pull/5121#issuecomment-603454001
 
 
   @yzhliu can you do forward-mode automatic differentiation? It is easy 
given that you have the Jacobian - you only need to do a JacobianVectorProduct 
instead of a VectorJacobianProduct.
   
   It is useful for higher-order derivatives, a la Hessian-vector products.
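The forward/reverse distinction above can be illustrated numerically. This is a NumPy sketch with a hand-written Jacobian for a toy function, not the TE implementation:

```python
import numpy as np

def f(x):
    # Toy vector-valued function f: R^2 -> R^2.
    return np.array([x[0] * x[1], x[0] + x[1]])

def jacobian(x):
    # Hand-written Jacobian of f at x.
    return np.array([[x[1], x[0]],
                     [1.0, 1.0]])

x = np.array([2.0, 3.0])
v = np.array([1.0, 0.0])

# Forward mode computes a Jacobian-vector product: J @ v
jvp = jacobian(x) @ v
# Reverse mode computes a vector-Jacobian product: v @ J
vjp = v @ jacobian(x)

print(jvp)  # [3. 1.] - directional derivative of f along v
print(vjp)  # [3. 2.] - gradient of <v, f(x)> w.r.t. x
```

Both modes reuse the same Jacobian; only the side of the product changes, which is why forward mode falls out almost for free once the Jacobian exists.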




[GitHub] [incubator-tvm] yzhliu commented on issue #5121: [TE] reverse-mode autodiff without any optimization

2020-03-24 Thread GitBox
yzhliu commented on issue #5121: [TE] reverse-mode autodiff without any 
optimization
URL: https://github.com/apache/incubator-tvm/pull/5121#issuecomment-603446436
 
 
   @MarisaKirisame @tqchen @hzfan Could you review again?




[GitHub] [incubator-tvm] hypercubestart commented on issue #5039: [Relay] GradientCell Relay Pass

2020-03-24 Thread GitBox
hypercubestart commented on issue #5039: [Relay] GradientCell Relay Pass
URL: https://github.com/apache/incubator-tvm/pull/5039#issuecomment-603437659
 
 
   thanks @jroesch !




[GitHub] [incubator-tvm] icemelon9 commented on issue #5131: [Relay][TOPI] Register topi schedule for Relay fast_exp and fast_tanh

2020-03-24 Thread GitBox
icemelon9 commented on issue #5131: [Relay][TOPI] Register topi schedule for 
Relay fast_exp and fast_tanh
URL: https://github.com/apache/incubator-tvm/pull/5131#issuecomment-603427810
 
 
   @selo1412 Could you address @comaniac comments?




[GitHub] [incubator-tvm] u99127 commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone Runtime

2020-03-24 Thread GitBox
u99127 commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone 
Runtime
URL: https://github.com/apache/incubator-tvm/issues/5060#issuecomment-603408836
 
 
   Thanks for pointing this to me @tmoreau89 and thank you for this work 
@liangfu . Very interesting and good questions to ask.
   
   From a design-level point of view for micro-controllers, I'd like to take 
this one step further and challenge folks to think about whether this can be 
achieved with static allocation rather than any form of dynamic allocation. 
The hypothesis is that at compile time one would know how much temporary 
space is needed between layers, rather than having to face a runtime failure.
   
   Dynamic allocation on micro-controllers suffers from fragmentation issues, 
and furthermore, do we really want dynamic allocation in the runtime on 
micro-controllers at all? The model being executed will also be part of a 
larger application - how can we allow our users to specify the amount of heap 
available or consumed for executing their model? It would be better to provide 
that via diagnostics at link time or compile time rather than at runtime. 
@mshawcroft might have more to add. And yes, in our opinion, for 
micro-controllers one of the challenges is the availability and usage of 
temporary storage for working-set calculations between layers.
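The static-allocation hypothesis can be made concrete: when the graph is fully known at compile time and layers run sequentially, the workspace arena only needs to hold the largest single layer's temporaries, and that size can be computed before deployment. A minimal sketch, with made-up layer sizes:

```python
def required_workspace(layer_workspaces):
    """Size a static workspace arena for sequential layer execution.

    Each layer's temporaries live only while that layer runs, so the
    arena must hold only the largest single per-layer requirement,
    not the sum of all of them.
    """
    return max(layer_workspaces, default=0)

# Hypothetical per-layer temporary-buffer sizes in bytes.
layers = [4096, 16384, 8192]
arena_bytes = required_workspace(layers)
print(arena_bytes)  # 16384 - known before deployment, no runtime failure
```

The resulting constant could then back a statically reserved buffer in the generated firmware, turning a potential runtime allocation failure into a link-time diagnostic.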
   
   Two further design questions: 
   
   1. In the micro-controller world, supporting every new device, with its 
different memory map and so on, will be painful, and beyond one simple 
reference implementation I don't think we have an efficient route to 
deployment other than integrating with other platforms in the microcontroller 
space. How would this runtime integrate with platforms like Zephyr, mbedOS, 
or FreeRTOS? 

   2. I'd be interested in extending CI with QEMU or some such for Cortex-M 
as well, or indeed on the STM board that you are using, @tmoreau89. 
   
   Purely a nit, but from a rationale point of view, I would say that the 
uTVM runtime not being tested in CI is technical debt :)
   
   regards
   Ramana




[GitHub] [incubator-tvm] manupa-arm commented on issue #5143: [RELAY] Re-wrote the Graph Partitioner to support multiple outputs

2020-03-24 Thread GitBox
manupa-arm commented on issue #5143: [RELAY] Re-wrote the Graph Partitioner to 
support multiple outputs
URL: https://github.com/apache/incubator-tvm/pull/5143#issuecomment-603402680
 
 
   cc : @zhiics @comaniac @tqchen 




[GitHub] [incubator-tvm] manupa-arm opened a new pull request #5143: [RELAY] Re-wrote the Graph Partitioner to support multiple outputs

2020-03-24 Thread GitBox
manupa-arm opened a new pull request #5143: [RELAY] Re-wrote the Graph 
Partitioner to support multiple outputs
URL: https://github.com/apache/incubator-tvm/pull/5143
 
 
   This PR aims to **include support for multiple outputs** in the regions that 
get outlined as functions for different compiler backends in the existing 
Partition Graph Pass. Such regions are annotated and bounded by compiler_begin 
and compiler_end annotation ops. 
   
   This is required as step 4 in [RFC - 
BYOC](https://discuss.tvm.ai/t/rfc-byoc-an-extended-graph-partitioning-flow/6028).
 Moreover, this feature is requested in prior discussions such as [Improved 
graph partitioning 
algorithm](https://discuss.tvm.ai/t/relay-improved-graph-partitioning-algorithm/5830)
   
   This PR uses the utility functions provided by the [AnnotationRegionSet 
PR](https://github.com/apache/incubator-tvm/pull/5030).
   




[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #5106: [Relay][MergeComposite] Support TupleGetItem in body of pattern

2020-03-24 Thread GitBox
trevor-m commented on a change in pull request #5106: [Relay][MergeComposite] 
Support TupleGetItem in body of pattern
URL: https://github.com/apache/incubator-tvm/pull/5106#discussion_r397337604
 
 

 ##
 File path: src/relay/transforms/merge_composite.cc
 ##
 @@ -66,6 +66,31 @@ class MergeCompositeWrapper : public ExprMutator {
 return root;
   }
 
+  Expr ExtractPattern(const TupleGetItem& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map,
+                      Map<Expr, Expr>* call_map) {
+    if (!root->IsInstance<TupleGetItemNode>()) {
+      return Expr();
+    }
+    auto root_node = Downcast<TupleGetItem>(root);
+    if (pattern->index != root_node->index) {
+      return Expr();
+    }
+    if (pattern->tuple->IsInstance<CallNode>() &&
+        root_node->tuple->IsInstance<CallNode>()) {
+      Expr new_arg;
+      if (call_map->find(pattern->tuple) != call_map->end()) {
+        new_arg = (*call_map)[pattern->tuple];
+      } else {
+        new_arg = ExtractPattern(Downcast<Call>(pattern->tuple),
 
 Review comment:
   Thanks for reviewing!
   `pattern->tuple` can be a tuple, or it can be any node with a TupleType 
output. In this call it is a Call node (batch_norm), which has multiple 
outputs, so its output is a TupleType.




[incubator-tvm] branch master updated (0a0e58b -> 686911e)

2020-03-24 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 0a0e58b  [REFACTOR][TIR] Introduce PrimFuncPass. (#5139)
 add 686911e  [Torch] Fix conv2d conversion for group conv (group > 1 but 
!= in channels) (#5132)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 7 ++-
 tests/python/frontend/pytorch/test_forward.py | 6 ++
 2 files changed, 12 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5028: [RELAY] Remove kCompiler attr from ext mod

2020-03-24 Thread GitBox
zhiics commented on a change in pull request #5028: [RELAY] Remove kCompiler 
attr from ext mod
URL: https://github.com/apache/incubator-tvm/pull/5028#discussion_r397321424
 
 

 ##
 File path: src/relay/backend/compile_engine.cc
 ##
 @@ -627,6 +627,7 @@ class CompileEngineImpl : public CompileEngineNode {
 const tvm::tir::StringImmNode* symbol_name = ext_symbol.as<tvm::tir::StringImmNode>();
 CHECK(symbol_name) << "No external symbol is set for:\n" << AsText(src_func, false);
 auto gv = GlobalVar(symbol_name->value);
+src_func = WithAttr(src_func, attr::kCompiler, tir::StringImmNode::make("default"));
 
 Review comment:
   Also, in order to use CoW, you may want to use `src_func = 
WithAttr(std::move(src_func), ...)`




[GitHub] [incubator-tvm] tqchen merged pull request #5132: [Torch] Fix conv2d conversion for group conv (group > 1 but != in channels)

2020-03-24 Thread GitBox
tqchen merged pull request #5132: [Torch] Fix conv2d conversion for group conv 
(group > 1 but != in channels)
URL: https://github.com/apache/incubator-tvm/pull/5132
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone Runtime

2020-03-24 Thread GitBox
tqchen commented on issue #5060: [uTVM][Runtime] Deprecate uTVM Standalone 
Runtime
URL: https://github.com/apache/incubator-tvm/issues/5060#issuecomment-603374417
 
 
   The workspace memory could have a different strategy. The way it works is 
that we create a separate arena for workspace, along with a counter.
   
   - When memory is allocated, we allocate it from the arena and increment the 
counter
   - When memory is de-allocated, we decrement the counter
   - When the counter reaches zero, we free all the memory.
   
   This works because all workspace memory is temporary. It also guarantees 
constant-time allocation.
   
   As a generalization: if most memory allocation happens in an RAII-style 
lifecycle, e.g. everything de-allocates once we exit a scope, then the 
counter-based strategy (per scope) should work pretty well.
   
   I am not fixated on the arena allocator, but I would like to challenge us 
to think a bit about how much simpler we can make the allocation strategy 
given what we know about the workload. Of course, we could certainly bring in 
sub-allocator strategies that are more complicated, or fall back to libraries 
when needed
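The counter-based arena described here fits in a few lines. This is a toy model to make the mechanics concrete, not the CRT implementation; the class and its names are illustrative:

```python
class CounterArena:
    """Bump-pointer arena with a live-allocation counter.

    alloc() bumps a pointer and increments the counter; free()
    only decrements the counter.  When the counter reaches zero,
    every outstanding allocation has been released, so the whole
    arena is reset at once.  Both operations are O(1).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.top = 0   # bump pointer
        self.live = 0  # number of outstanding allocations

    def alloc(self, size):
        if self.top + size > self.capacity:
            raise MemoryError("arena exhausted")
        offset = self.top
        self.top += size
        self.live += 1
        return offset

    def free(self):
        self.live -= 1
        if self.live == 0:
            # Last outstanding workspace freed: recycle everything.
            self.top = 0

arena = CounterArena(1024)
a = arena.alloc(256)
b = arena.alloc(128)
arena.free()
arena.free()          # counter hits zero, arena resets
c = arena.alloc(512)  # reuses space from the start of the arena
print(a, b, c)        # 0 256 0
```

Because workspace allocations are strictly temporary, fragmentation never accumulates: the arena returns to empty every time the counter drains to zero.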
   
   
   




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5028: [RELAY] Remove kCompiler attr from ext mod

2020-03-24 Thread GitBox
zhiics commented on a change in pull request #5028: [RELAY] Remove kCompiler 
attr from ext mod
URL: https://github.com/apache/incubator-tvm/pull/5028#discussion_r397314736
 
 

 ##
 File path: src/relay/backend/compile_engine.cc
 ##
 @@ -627,6 +627,7 @@ class CompileEngineImpl : public CompileEngineNode {
 const tvm::tir::StringImmNode* symbol_name = ext_symbol.as<tvm::tir::StringImmNode>();
 CHECK(symbol_name) << "No external symbol is set for:\n" << AsText(src_func, false);
 auto gv = GlobalVar(symbol_name->value);
+src_func = WithAttr(src_func, attr::kCompiler, tir::StringImmNode::make("default"));
 
 Review comment:
   Now you should pass not "default" but a null string ObjectRef, so that the 
returned value is not `defined`:
   
   
https://github.com/apache/incubator-tvm/blob/0a0e58bfa4c87b2cbff0be2b401da0b3a08fcfe8/include/tvm/ir/function.h#L99




[GitHub] [incubator-tvm] trevor-m commented on issue #4847: Return empty CSourceModule when no lowered_funcs exists in Relay mod

2020-03-24 Thread GitBox
trevor-m commented on issue #4847: Return empty CSourceModule when no 
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-603369453
 
 
   > I think changing it to a llvm module and importing all submodules is okay. 
Now if you only have an external module, you will need to create a llvm module 
first and then import the external module to it.
   > 
   > Stepping into llvm module to find the symbol is not wrong because we will 
always try to find the symbol from the host module first. If it is not found, 
we will then try to check each imported module. See the code here:
   > 
   > 
https://github.com/apache/incubator-tvm/blob/050f2bde2c694af9b5569ca954ca041c3767787b/src/runtime/module.cc#L65
   > 
   > A minimal example to reproduce this and track the root cause would be more 
helpful.
   
   You can reproduce this by running `test_extern_dnnl()` after commenting out 
this line: 
https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_pass_partition_graph.py#L203




[GitHub] [incubator-tvm] mehrdadhe commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime

2020-03-24 Thread GitBox
mehrdadhe commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-603346577
 
 
   Great work. @liangfu have you considered using the "[system 
lib](https://docs.tvm.ai/api/python/target.html?highlight=system%20lib)" 
approach, since dlopen is banned in some environments?




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5030: [RELAY] Added a AnnotatedRegion utility class

2020-03-24 Thread GitBox
zhiics commented on a change in pull request #5030: [RELAY] Added a 
AnnotatedRegion utility class
URL: https://github.com/apache/incubator-tvm/pull/5030#discussion_r397261539
 
 

 ##
 File path: python/tvm/relay/analysis/annotated_regions.py
 ##
 @@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, unused-import
+"""Regions used in Relay."""
+
+from tvm.runtime import Object
+from . import _ffi_api
+
+
+class AnnotatedRegionSet(Object):
+    """Class to represent a relay expression split into regions."""
+
+    def __init__(self, expr, region_begin_op, region_end_op):
+        """Construct regions from an expression.
+
+        Parameters
+        ----------
+        expr : tvm.relay.Expr
+            The expression from which to construct the regions.
+        region_begin_op : tvm.relay.Op
+            The region begin annotation.
+        region_end_op : tvm.relay.Op
+            The region end annotation.
+
+        """
+        self.__init_handle_by_constructor__(_ffi_api.AnnotatedRegionSet,
+                                            expr,
+                                            region_begin_op,
+                                            region_end_op)
+
+    def __len__(self):
+        return len(self.regions)
+
+    def get_region(self, expr):
+        """Get the region an expression belongs to.
+
+        Parameters
+        ----------
+        expr : tvm.relay.Expr
+            The expression.
+
+        Returns
+        -------
+        region : Region
 
 Review comment:
   Yes, but without defining the class, Sphinx is not able to generate the 
correct documentation for it. It is basically an undefined symbol.




[GitHub] [incubator-tvm] tqchen edited a comment on issue #5124: [uTVM][Runtime] Introduce Virtual Memory Allocator to CRT

2020-03-24 Thread GitBox
tqchen edited a comment on issue #5124: [uTVM][Runtime] Introduce Virtual 
Memory Allocator to CRT
URL: https://github.com/apache/incubator-tvm/pull/5124#issuecomment-603312590
 
 
   The workspace memory could have a different strategy. The way it works is 
that we create a separate arena for workspace, along with a counter.
   
   - When memory is allocated, we allocate it from the arena and increment the 
counter
   - When memory is de-allocated, we decrement the counter
   - When the counter reaches zero, we free all the memory.
   
   This works because all workspace memory is temporary. It also guarantees 
constant-time allocation




[GitHub] [incubator-tvm] tqchen commented on issue #5124: [uTVM][Runtime] Introduce Virtual Memory Allocator to CRT

2020-03-24 Thread GitBox
tqchen commented on issue #5124: [uTVM][Runtime] Introduce Virtual Memory 
Allocator to CRT
URL: https://github.com/apache/incubator-tvm/pull/5124#issuecomment-603312590
 
 
   The workspace memory could have a different strategy. The way it works is 
that we create a separate arena for workspace, along with a counter.
   
   - When memory is allocated, we allocate it from the arena and increment the 
counter
   - When memory is de-allocated, we decrement the counter
   - When the counter reaches zero, we free all the memory.
   
   This works because all workspace memory is temporary 




[GitHub] [incubator-tvm] Shawn-Inspur commented on a change in pull request #5099: [TOPI][Tensor Core] Conv2d and Dense ops support on Tensor Core

2020-03-24 Thread GitBox
Shawn-Inspur commented on a change in pull request #5099: [TOPI][Tensor Core] 
Conv2d and Dense ops support on Tensor Core
URL: https://github.com/apache/incubator-tvm/pull/5099#discussion_r397237931
 
 

 ##
 File path: topi/python/topi/cuda/conv2d_nhwc_tensorcore.py
 ##
 @@ -0,0 +1,361 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, too-many-locals, too-many-arguments
+# pylint: disable=too-many-statements, unused-argument
+"""Tensorcore template for cuda backend"""
+import numpy as np
+import tvm
+from tvm import te
+from tvm import autotvm
+from ..util import get_const_tuple, traverse_inline, simplify
+from ..nn.pad import pad
+from ..nn.util import get_pad_tuple
+from .tensor_intrin import intrin_wmma_load_matrix_A
+from .tensor_intrin import intrin_wmma_load_matrix_W
+from .tensor_intrin import intrin_wmma_store_matrix
+
+
+def intrin_wmma_gemm(strides_A, strides_W, strides_Conv, shape, out_dtype):
+"""Intrin for wmma fill_fragment and mma_sync"""
+wmma_m, wmma_n, wmma_k = shape
+A = te.placeholder((wmma_m, 1, 1, wmma_k), name='A', dtype='float16')
+B = te.placeholder((wmma_k, wmma_n), name='B', dtype='float16')
+k = te.reduce_axis((0, wmma_k), name="k")
+C = te.compute((wmma_m, 1, 1, wmma_n),
+   lambda ii, t0, t1, jj:
+   te.sum(A[ii, t0, t1, k].astype(out_dtype) * \
+  B[k, jj].astype(out_dtype), axis=k),
+   name='C')
+BA = tvm.tir.decl_buffer(A.shape, A.dtype, name='BA',
+ scope='wmma.matrix_a', data_alignment=32,
+ offset_factor=8, strides=strides_A)
+BB = tvm.tir.decl_buffer(B.shape, B.dtype, name='BB',
+ scope='wmma.matrix_b', data_alignment=32,
+ offset_factor=8, strides=strides_W)
+BC = tvm.tir.decl_buffer(C.shape, C.dtype, name='BC',
+ scope='wmma.accumulator', data_alignment=32,
+ offset_factor=8, strides=strides_Conv)
+
+def intrin_func(ins, outs):
+BA, BB = ins
+BC, = outs
+
+def warp_idnex(offset, row, col):
+row = row * col
+return offset // row + offset % row // col
+
+warp_index_A = warp_idnex(BA.elem_offset, wmma_m, wmma_k)
+warp_index_B = warp_idnex(BB.elem_offset, wmma_k, wmma_n)
+warp_index_C = warp_idnex(BC.elem_offset, wmma_m, wmma_n)
+
+def init():
+ib = tvm.tir.ir_builder.create()
+ib.emit(
+tvm.tir.call_intrin('handle', 'tvm_fill_fragment', BC.data, 
wmma_m, wmma_n, wmma_k,
+warp_index_C, 0.0))
+return ib.get()
+
+def update():
+ib = tvm.tir.ir_builder.create()
+ib.emit(tvm.tir.call_intrin('handle', 'tvm_mma_sync',
+BC.data, warp_index_C,
+BA.data, warp_index_A,
+BB.data, warp_index_B,
+BC.data, warp_index_C))
+return ib.get()
+
+return update(), init(), update()
+
+return te.decl_tensor_intrin(C.op, intrin_func, binds={A: BA, B: BB, C: 
BC})
+
+
+def nhwc_tensorcore_cuda(cfg, Input, Filter, stride, padding, dilation, 
out_dtype):
+"""Compute declaration for tensorcore"""
+assert isinstance(stride, int) or len(stride) == 2
+assert isinstance(dilation, int) or len(dilation) == 2
+
+if isinstance(stride, int):
+stride_h = stride_w = stride
+else:
+stride_h, stride_w = stride
+
+if isinstance(dilation, int):
+dilation_h = dilation_w = dilation
+else:
+dilation_h, dilation_w = dilation
+
+batch, in_height, in_width, in_channel = Input.shape
+kernel_h, kernel_w, _, num_filter = Filter.shape
+# compute the output shape
+dilated_kernel_h = (kernel_h - 1) * dilation_h + 1
+dilated_kernel_w = (kernel_w - 1) * dilation_w + 1
+pad_top, pad_left, pad_down, pad_right = get_pad_tuple(
+padding, (dilated_kernel_h, dilated_kernel_w))
+out_channel = num_fi

[incubator-tvm] branch master updated: [REFACTOR][TIR] Introduce PrimFuncPass. (#5139)

2020-03-24 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 0a0e58b  [REFACTOR][TIR] Introduce PrimFuncPass. (#5139)
0a0e58b is described below

commit 0a0e58bfa4c87b2cbff0be2b401da0b3a08fcfe8
Author: Tianqi Chen 
AuthorDate: Tue Mar 24 08:15:54 2020 -0700

[REFACTOR][TIR] Introduce PrimFuncPass. (#5139)

* [REFACTOR][TIR] Introduce PrimFuncPass.

- Introduce PrimFuncPass
- Convert one pass to the unified Pass API.

* Address comments

* Fix comments
---
 docs/api/python/tir.rst|   9 ++
 include/tvm/ir/expr.h  |   2 +-
 include/tvm/ir/type_functor.h  |   4 +
 include/tvm/tir/transform.h|  72 ++
 python/tvm/tir/__init__.py |   1 +
 python/tvm/tir/transform/__init__.py   |  21 +++
 python/tvm/tir/transform/_ffi_api.py   |  21 +++
 python/tvm/tir/transform/function_pass.py  | 149 +
 python/tvm/tir/transform/transform.py  |  31 +
 src/ir/module.cc   |   4 +-
 src/ir/type_functor.cc |  14 ++
 src/relay/analysis/alpha_equal.cc  |  16 +++
 src/target/codegen.cc  |   3 +
 src/tir/ir/function.cc |   3 +
 src/tir/ir/transform.cc| 145 
 .../{pass => transforms}/combine_context_call.cc   |  19 +++
 ... => test_tir_transform_combine_context_call.py} |  12 +-
 .../unittest/test_tir_transform_prim_func_pass.py  |  50 +++
 18 files changed, 570 insertions(+), 6 deletions(-)
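The function-level pass infrastructure introduced by this commit can be illustrated, independently of TVM, with a small framework-free sketch: a pass wraps a per-function transform and, when applied to a module, rewrites every function in it. All names below are invented for illustration; the real API added here is `tvm.tir.transform.prim_func_pass` operating on `PrimFunc`s inside an `IRModule`.

```python
# Framework-free sketch of a function-level pass. A Module maps names to
# function bodies; a FunctionPass applies a transform to each body.
class Module:
    def __init__(self, functions):
        self.functions = dict(functions)  # name -> function body

class FunctionPass:
    def __init__(self, transform, opt_level=0):
        self.transform = transform
        self.opt_level = opt_level

    def __call__(self, mod):
        # Apply the per-function transform to each function independently,
        # producing a new module rather than mutating the old one.
        return Module({name: self.transform(body, mod)
                       for name, body in mod.functions.items()})

def function_pass(opt_level=0):
    """Decorator turning a per-function transform into a pass, mirroring
    the decorator style of the unified pass API."""
    def wrap(transform):
        return FunctionPass(transform, opt_level)
    return wrap

@function_pass(opt_level=0)
def double_constants(body, mod):
    # Toy "function body": a list of constants; the pass doubles each one.
    return [2 * x for x in body]
```

Keeping the transform per-function is what lets such passes compose with module-level passes in one pipeline.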

diff --git a/docs/api/python/tir.rst b/docs/api/python/tir.rst
index ea1ac66..dd08758 100644
--- a/docs/api/python/tir.rst
+++ b/docs/api/python/tir.rst
@@ -22,3 +22,12 @@ tvm.tir
:imported-members:
:exclude-members: PrimExpr, const
:autosummary:
+
+
+
+tvm.tir.transform
+-----------------
+.. automodule:: tvm.tir.transform
+   :members:
+   :imported-members:
+   :autosummary:
diff --git a/include/tvm/ir/expr.h b/include/tvm/ir/expr.h
index 85b3937..44244df 100644
--- a/include/tvm/ir/expr.h
+++ b/include/tvm/ir/expr.h
@@ -150,7 +150,7 @@ class RelayExprNode : public BaseExprNode {
   /*!
* \return The checked_type
*/
-  const Type& checked_type() const;
+  inline const Type& checked_type() const;
   /*!
* \brief Check if the inferred(checked) type of the Expr
*  is backed by a TTypeNode and return it.
diff --git a/include/tvm/ir/type_functor.h b/include/tvm/ir/type_functor.h
index 476538c..5507191 100644
--- a/include/tvm/ir/type_functor.h
+++ b/include/tvm/ir/type_functor.h
@@ -93,6 +93,7 @@ class TypeFunctor {
   virtual R VisitType_(const TypeCallNode* op, Args... args) 
TYPE_FUNCTOR_DEFAULT;
   virtual R VisitType_(const TypeDataNode* op, Args... args) 
TYPE_FUNCTOR_DEFAULT;
   virtual R VisitType_(const PrimTypeNode* op, Args... args) 
TYPE_FUNCTOR_DEFAULT;
+  virtual R VisitType_(const PointerTypeNode* op, Args... args) 
TYPE_FUNCTOR_DEFAULT;
   virtual R VisitTypeDefault_(const Object* op, Args...) {
 LOG(FATAL) << "Do not have a default for " << op->GetTypeKey();
 throw;  // unreachable, written to stop compiler warning
@@ -115,6 +116,7 @@ class TypeFunctor {
 TVM_TYPE_FUNCTOR_DISPATCH(TypeCallNode);
 TVM_TYPE_FUNCTOR_DISPATCH(TypeDataNode);
 TVM_TYPE_FUNCTOR_DISPATCH(PrimTypeNode);
+TVM_TYPE_FUNCTOR_DISPATCH(PointerTypeNode);
 return vtable;
   }
 };
@@ -138,6 +140,7 @@ class TVM_DLL TypeVisitor :
   void VisitType_(const TypeCallNode* op) override;
   void VisitType_(const TypeDataNode* op) override;
   void VisitType_(const PrimTypeNode* op) override;
+  void VisitType_(const PointerTypeNode* op) override;
 };
 
 /*!
@@ -158,6 +161,7 @@ class TVM_DLL TypeMutator :
   Type VisitType_(const TypeCallNode* op) override;
   Type VisitType_(const TypeDataNode* op) override;
   Type VisitType_(const PrimTypeNode* op) override;
+  Type VisitType_(const PointerTypeNode* op) override;
 
  private:
   Array<Type> MutateArray(Array<Type> arr);
diff --git a/include/tvm/tir/transform.h b/include/tvm/tir/transform.h
new file mode 100644
index 000..5149677
--- /dev/null
+++ b/include/tvm/tir/transform.h
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0

[GitHub] [incubator-tvm] tqchen merged pull request #5139: [REFACTOR][TIR] Introduce PrimFuncPass.

2020-03-24 Thread GitBox
tqchen merged pull request #5139: [REFACTOR][TIR] Introduce PrimFuncPass.
URL: https://github.com/apache/incubator-tvm/pull/5139
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5139: [REFACTOR][TIR] Introduce PrimFuncPass.

2020-03-24 Thread GitBox
tqchen commented on issue #5139: [REFACTOR][TIR] Introduce PrimFuncPass.
URL: https://github.com/apache/incubator-tvm/pull/5139#issuecomment-603298988
 
 
   Thanks @zhiics @siju-samuel


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated: [RUNTIME]fix unused-value warning (#5140)

2020-03-24 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new bbbfc1b  [RUNTIME]fix unused-value warning (#5140)
bbbfc1b is described below

commit bbbfc1b0b753880e59d56e2115c65934053d113d
Author: windclarion 
AuthorDate: Tue Mar 24 23:15:42 2020 +0800

[RUNTIME]fix unused-value warning (#5140)
---
 src/runtime/crt/crt_runtime_api.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/runtime/crt/crt_runtime_api.c 
b/src/runtime/crt/crt_runtime_api.c
index 433ae8a..6d7c010 100644
--- a/src/runtime/crt/crt_runtime_api.c
+++ b/src/runtime/crt/crt_runtime_api.c
@@ -79,7 +79,7 @@ int TVMModGetFunction(TVMModuleHandle mod,
   if (!strcmp(func_name, "load_params")) {
 *out = &TVMGraphRuntime_LoadParams;
   } else {
-status -1;
+status = -1;
   }
   return status;
 }



[GitHub] [incubator-tvm] tqchen merged pull request #5140: [RUNTIME]fix unused-value warning

2020-03-24 Thread GitBox
tqchen merged pull request #5140: [RUNTIME]fix unused-value warning
URL: https://github.com/apache/incubator-tvm/pull/5140
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397132458
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397132370
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
+def forward(self, *args):
+return torch.nn.MaxPool2d(kernel_size=[4, 4], padding=2, 
stride=2)(args[0])
+
 input_data = torch.rand(input_shape).float()
 verify_model(MaxPool2D1().float().eval(), input_data=input_data)
 verify_model(MaxPool2D2().float().eval(), input_data=input_data)
+verify_model(MaxPool2D3().float().eval(), input_data=input_data)
+
+def test_forward_maxpool1d():
+torch.set_grad_enabled(False)
+input_shape = [1, 3, 10]
+
+class MaxPool1D1(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=1)(args[0])
+
+class MaxPool1D2(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=10)(args[0])
+
+class MaxPool1D3(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=4, padding=2, 
stride=2)(args[0])
+
+input_data = torch.rand(input_shape).float()
+verify_model(MaxPool1D1().float().eval(), input_data=input_data)
+verify_model(MaxPool1D2().float().eval(), input_data=input_data)
+verify_model(MaxPool1D3().float().eval(), input_data=input_data)
 
 Review comment:
   done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397124944
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   > I am talking about the same point as 
https://github.com/apache/incubator-tvm/pull/5142/files#r397046667
   > 
   > Yes, you should have a test, but no need to write a wrapper class
   
   Thanks. I misunderstood your review suggestion :-)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397118351
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   I am talking about the same point as 
https://github.com/apache/incubator-tvm/pull/5142/files#r397046667
   
   Yes, you should have a test, but no need to write a wrapper class


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5106: [Relay][MergeComposite] Support TupleGetItem in body of pattern

2020-03-24 Thread GitBox
mbaret commented on a change in pull request #5106: [Relay][MergeComposite] 
Support TupleGetItem in body of pattern
URL: https://github.com/apache/incubator-tvm/pull/5106#discussion_r397085411
 
 

 ##
 File path: src/relay/transforms/merge_composite.cc
 ##
 @@ -66,6 +66,31 @@ class MergeCompositeWrapper : public ExprMutator {
 return root;
   }
 
+  Expr ExtractPattern(const TupleGetItem& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map, Map<Expr, Expr>* call_map) {
+    if (!root->IsInstance<TupleGetItemNode>()) {
+      return Expr();
+    }
+    auto root_node = Downcast<TupleGetItem>(root);
+    if (pattern->index != root_node->index) {
+      return Expr();
+    }
+    if (pattern->tuple->IsInstance<CallNode>() &&
+        root_node->tuple->IsInstance<CallNode>()) {
+      Expr new_arg;
+      if (call_map->find(pattern->tuple) != call_map->end()) {
+        new_arg = (*call_map)[pattern->tuple];
+      } else {
+        new_arg = ExtractPattern(Downcast<Call>(pattern->tuple),
+                                 Downcast<Call>(root_node->tuple),
+                                 var_map, call_map);
+        call_map->Set(pattern->tuple, new_arg);
 
 Review comment:
   I think call_map should now be renamed to expr_map so this is a bit easier 
to follow.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5106: [Relay][MergeComposite] Support TupleGetItem in body of pattern

2020-03-24 Thread GitBox
mbaret commented on a change in pull request #5106: [Relay][MergeComposite] 
Support TupleGetItem in body of pattern
URL: https://github.com/apache/incubator-tvm/pull/5106#discussion_r397086345
 
 

 ##
 File path: src/relay/transforms/merge_composite.cc
 ##
 @@ -66,6 +66,31 @@ class MergeCompositeWrapper : public ExprMutator {
 return root;
   }
 
+  Expr ExtractPattern(const TupleGetItem& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map, Map<Expr, Expr>* call_map) {
+    if (!root->IsInstance<TupleGetItemNode>()) {
+      return Expr();
+    }
+    auto root_node = Downcast<TupleGetItem>(root);
+    if (pattern->index != root_node->index) {
+      return Expr();
+    }
+    if (pattern->tuple->IsInstance<CallNode>() &&
+        root_node->tuple->IsInstance<CallNode>()) {
+      Expr new_arg;
+      if (call_map->find(pattern->tuple) != call_map->end()) {
+        new_arg = (*call_map)[pattern->tuple];
+      } else {
+        new_arg = ExtractPattern(Downcast<Call>(pattern->tuple),
 
 Review comment:
   I'm not sure I understand why there's a downcast to a Call here, shouldn't 
it be a Tuple?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397074587
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   Don't we need a test for padding and stride in Maxpool?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397074097
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   Don't we need a test for padding and stride in Maxpool?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397073052
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -213,12 +213,33 @@ def _impl(inputs, input_types):
 pool_size = _infer_shape(inputs[1])
 strides = _infer_shape(inputs[2])
 padding = _infer_shape(inputs[3])
-
+dilation = _infer_shape(inputs[4])
 
 Review comment:
   In https://pytorch.org/docs/stable/nn.html#maxpool1d, MaxPool1d and 
MaxPool2d have dilation argument. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
wyc-ruiker commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397071431
 
 

 ##
 File path: src/relay/op/nn/pooling.cc
 ##
 @@ -987,6 +987,10 @@ Array<te::Tensor> Pool1DCompute(const Attrs& attrs,
   << " or 4-D input (e.g. NCWc on for vector instructions)"
   << " or 5-D input (e.g. NCWnc for tensor accelerators)";
 
+  if (param->padding.size() == 1) {
+padding.push_back(padding[0]);
 
 Review comment:
   Yes, If we don't have this, the max_pool1d will fail in 
https://github.com/apache/incubator-tvm/blob/7c5ff50873e91e9ad27b5f08847c27d58e8b5c4c/topi/include/topi/nn/pooling.h#L679-L680
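The point being discussed here, that the 1-D pooling compute consumes a left and a right pad value, so a single `padding=1` entry must be duplicated, can be shown with a plain-Python sketch. This is an illustration only, not the TOPI implementation.

```python
def max_pool1d(data, kernel, stride=1, padding=(0, 0)):
    """Naive 1-D max pooling over a list with explicit (left, right) padding.

    A single pad value p is expanded to (p, p), mirroring the fix in the PR:
    the compute kernel always reads a two-element padding pair.
    """
    if isinstance(padding, int):
        padding = (padding, padding)  # duplicate the single pad value
    pad_left, pad_right = padding
    neg_inf = float("-inf")
    # Pad with -inf so padded positions never win the max.
    padded = [neg_inf] * pad_left + list(data) + [neg_inf] * pad_right
    out_len = (len(padded) - kernel) // stride + 1
    return [max(padded[i * stride : i * stride + kernel])
            for i in range(out_len)]
```

Without the duplication step, a caller passing `padding=2` would leave the kernel with only one pad value where two are expected.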


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397050020
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
 
 Review comment:
   Remove this wrapper


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397049896
 
 

 ##
 File path: src/relay/op/nn/pooling.cc
 ##
 @@ -987,6 +987,10 @@ Array<te::Tensor> Pool1DCompute(const Attrs& attrs,
   << " or 4-D input (e.g. NCWc on for vector instructions)"
   << " or 5-D input (e.g. NCWnc for tensor accelerators)";
 
+  if (param->padding.size() == 1) {
+padding.push_back(padding[0]);
 
 Review comment:
   Why do we need this? Does 1D pooling require two pad values (left & right)?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397048418
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -213,12 +213,33 @@ def _impl(inputs, input_types):
 pool_size = _infer_shape(inputs[1])
 strides = _infer_shape(inputs[2])
 padding = _infer_shape(inputs[3])
-
+dilation = _infer_shape(inputs[4])
 
 Review comment:
   does pooling have a dilation argument?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] [incubator-tvm] masahi commented on a change in pull request #5142: [Torch] Add support for max_pool1d

2020-03-24 Thread GitBox
masahi commented on a change in pull request #5142: [Torch] Add support for 
max_pool1d
URL: https://github.com/apache/incubator-tvm/pull/5142#discussion_r397046667
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -363,9 +363,35 @@ class MaxPool2D2(Module):
 def forward(self, *args):
 return torch.nn.MaxPool2d(kernel_size=[10, 10])(args[0])
 
+class MaxPool2D3(Module):
+def forward(self, *args):
+return torch.nn.MaxPool2d(kernel_size=[4, 4], padding=2, 
stride=2)(args[0])
+
 input_data = torch.rand(input_shape).float()
 verify_model(MaxPool2D1().float().eval(), input_data=input_data)
 verify_model(MaxPool2D2().float().eval(), input_data=input_data)
+verify_model(MaxPool2D3().float().eval(), input_data=input_data)
+
+def test_forward_maxpool1d():
+torch.set_grad_enabled(False)
+input_shape = [1, 3, 10]
+
+class MaxPool1D1(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=1)(args[0])
+
+class MaxPool1D2(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=10)(args[0])
+
+class MaxPool1D3(Module):
+def forward(self, *args):
+return torch.nn.MaxPool1d(kernel_size=4, padding=2, 
stride=2)(args[0])
+
+input_data = torch.rand(input_shape).float()
+verify_model(MaxPool1D1().float().eval(), input_data=input_data)
+verify_model(MaxPool1D2().float().eval(), input_data=input_data)
+verify_model(MaxPool1D3().float().eval(), input_data=input_data)
 
 Review comment:
   No need to write a wrapper class. See 
https://github.com/apache/incubator-tvm/blob/86079479f0556002adfce2f438ea2a607e318c23/tests/python/frontend/pytorch/test_forward.py#L704-L732


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5030: [RELAY] Added a AnnotatedRegion utility class

2020-03-24 Thread GitBox
mbaret commented on a change in pull request #5030: [RELAY] Added a 
AnnotatedRegion utility class
URL: https://github.com/apache/incubator-tvm/pull/5030#discussion_r397040388
 
 

 ##
 File path: src/relay/analysis/annotated_region_set.cc
 ##
 @@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "annotated_region_set.h"
+
+#include 
+#include 
+
+#include 
+#include 
+#include 
+
+
+namespace tvm {
+namespace relay {
+
+AnnotatedRegion AnnotatedRegionSetNode::GetRegion(const Expr& expr) const {
+  for (auto candidate : regions_) {
+    if (candidate->nodes.find(expr) != candidate->nodes.end()) {
+      return candidate;
+    }
+  }
+  return AnnotatedRegion(nullptr);
+}
+
+void AnnotatedRegionSetNode::MergeRegions(AnnotatedRegion region1,
+                                          AnnotatedRegion region2) {
+  if (region1 == region2) {
+    return;
+  }
+
+  // Merge region 2 to region 1 and erase region 2.
+  region1->nodes.insert(region2->nodes.begin(), region2->nodes.end());
+  for (auto arg : region2->ins) {
+    region1->ins.push_back(arg);
+  }
+  for (auto out : region2->outs) {
+    region1->outs.push_back(out);
+  }
+  // If any of the outputs of 2 are inputs of 1, they become internal nodes,
+  // so remove them from outs/args.
+  std::vector<Expr> args_to_remove;
+  for (const auto& arg : region1->ins) {
+    auto call = Downcast<Call>(arg);
+    auto it = std::find(region2->outs.begin(), region2->outs.end(), call->args[0]);
+    if (it != region2->outs.end()) {
+      args_to_remove.push_back(arg);
+      region1->outs.remove(*it);
+    }
+  }
+  for (const auto& arg : args_to_remove) {
+    region1->ins.remove(arg);
+  }
+  regions_.erase(region2);
+}
+
+void AnnotatedRegionSetNode::AddToRegion(AnnotatedRegion region, const Expr& expr) {
+  auto region2 = GetRegion(expr);
+  if (region2.defined()) {
+    MergeRegions(region, region2);
+  } else {
+    region->nodes.insert(expr);
+  }
+}
+
+AnnotatedRegion AnnotatedRegionSetNode::MakeRegion() {
+  auto ret = regions_.emplace(AnnotatedRegion());
+  return *ret.first;
+}
+
+class AnnotatedRegionSet::Creator : public ExprVisitor {
+ public:
+  Creator(const Op& region_begin_op, const Op& region_end_op) :
 
 Review comment:
   Good point, shame the linter doesn't spot this.
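The merge step quoted in the diff above (union the node sets, concatenate the ins/outs, then drop any boundary pair that became internal) can be sketched as a plain-Python analogue. The `Region` class and its fields here are hypothetical stand-ins for illustration, not the TVM classes under review:

```python
class Region:
    """Stand-in for AnnotatedRegion: nodes plus boundary annotations."""

    def __init__(self, nodes, ins, outs):
        self.nodes = set(nodes)   # every expression inside the region
        self.ins = list(ins)      # (begin_annotation, producer) pairs
        self.outs = list(outs)    # end annotations


def merge_regions(r1, r2):
    """Merge r2 into r1, mirroring MergeRegions in the diff."""
    if r1 is r2:
        return r1
    r1.nodes |= r2.nodes
    r1.ins += r2.ins
    r1.outs += r2.outs
    # An output of r2 that feeds an input of r1 becomes an internal
    # edge after the merge, so remove it from both boundary lists.
    internal = [pair for pair in r1.ins if pair[1] in r2.outs]
    for pair in internal:
        r1.ins.remove(pair)
        r1.outs.remove(pair[1])
    return r1
```

For example, merging an upstream region whose output feeds a downstream region's input leaves only the outermost boundaries on the merged region.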




[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5030: [RELAY] Added a AnnotatedRegion utility class

2020-03-24 Thread GitBox
mbaret commented on a change in pull request #5030: [RELAY] Added a 
AnnotatedRegion utility class
URL: https://github.com/apache/incubator-tvm/pull/5030#discussion_r397019200
 
 

 ##
 File path: python/tvm/relay/analysis/annotated_regions.py
 ##
 @@ -0,0 +1,62 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, 
unused-import
+"""Regions used in Relay."""
+
+from tvm.runtime import Object
+from . import _ffi_api
+
+
+class AnnotatedRegionSet(Object):
+    """Class to represent a relay expression split into regions."""
+
+    def __init__(self, expr, region_begin_op, region_end_op):
+        """Construct regions from an expression.
+
+        Parameters
+        ----------
+        expr : tvm.relay.Expr
+            The expression from which to construct the regions.
+        region_begin_op : tvm.relay.Op
+            The region begin annotation.
+        region_end_op : tvm.relay.Op
+            The region end annotation.
+
+        """
+        self.__init_handle_by_constructor__(_ffi_api.AnnotatedRegionSet,
+                                            expr,
+                                            region_begin_op,
+                                            region_end_op)
+
+    def __len__(self):
+        return len(self.regions)
+
+    def get_region(self, expr):
+        """Get the region an expression belongs to.
+
+        Parameters
+        ----------
+        expr : tvm.relay.Expr
+            The expression.
+
+        Returns
+        -------
+        region : Region
 
 Review comment:
   I query the attributes of Region in the test (check_region), even though I 
don't explicitly define it. It's my understanding that to do this you need to 
expose the class as a node and implement the VisitAttrs method.
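The reflection pattern being referred to can be illustrated with a small pure-Python analogue (hypothetical names; the real mechanism is the C++ `VisitAttrs` method plus node registration): the node enumerates its fields to a visitor, and generic attribute access is built on top of that enumeration.

```python
class AttrVisitor:
    """Collects the (name, value) pairs a node chooses to expose."""

    def __init__(self):
        self.attrs = {}

    def visit(self, name, value):
        self.attrs[name] = value


class RegionNode:
    """Hypothetical node exposing its fields, as VisitAttrs does in C++."""

    def __init__(self, nodes, ins, outs):
        self._nodes, self._ins, self._outs = nodes, ins, outs

    def visit_attrs(self, visitor):
        visitor.visit("nodes", self._nodes)
        visitor.visit("ins", self._ins)
        visitor.visit("outs", self._outs)


def get_attr(node, name):
    """Generic attribute lookup driven only by visit_attrs."""
    visitor = AttrVisitor()
    node.visit_attrs(visitor)
    return visitor.attrs[name]
```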




[GitHub] [incubator-tvm] wyc-ruiker opened a new pull request #5142: Wyc

2020-03-24 Thread GitBox
wyc-ruiker opened a new pull request #5142: Wyc
URL: https://github.com/apache/incubator-tvm/pull/5142
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   
   add max_pool1d in https://github.com/apache/incubator-tvm/issues/5133, 
@masahi @alexwong 
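For context on the op being added: a 1-D max pool slides a window of `pool_size` over the sequence with a given stride and takes the maximum of each window. A minimal reference implementation in plain Python (illustrative only, not the PR's frontend/TOPI code; the `pad_value` padding semantics are an assumption):

```python
def max_pool1d(xs, pool_size, stride=1, padding=0, pad_value=float("-inf")):
    """Reference 1-D max pooling over a flat list of numbers."""
    xs = [pad_value] * padding + list(xs) + [pad_value] * padding
    out = []
    for start in range(0, len(xs) - pool_size + 1, stride):
        out.append(max(xs[start:start + pool_size]))
    return out
```

With `pool_size=2` and `stride=1`, `[1, 3, 2, 5, 4]` pools to `[3, 3, 5, 5]`.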






[GitHub] [incubator-tvm] mbaret commented on a change in pull request #5030: [RELAY] Added a AnnotatedRegion utility class

2020-03-24 Thread GitBox
mbaret commented on a change in pull request #5030: [RELAY] Added a 
AnnotatedRegion utility class
URL: https://github.com/apache/incubator-tvm/pull/5030#discussion_r397015819
 
 

 ##
 File path: tests/python/relay/test_annotated_regions.py
 ##
 @@ -0,0 +1,121 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name
+from tvm import relay
+from tvm.relay.op.annotation import compiler_begin, compiler_end
+
+
+def check_region(region_set, args, nodes, rets):
+    region = region_set.get_region(args[0])
+    assert region
+    assert set(args) == set(region.args)
+    assert set(nodes) == set(region.nodes)
+    assert set(rets) == set(region.rets)
+
+
+def test_region_set_creator_diamond():
+    data = relay.var('data', shape=(10, 10))
+    cb_1 = compiler_begin(data, 'test_target')
+    O_1 = relay.abs(cb_1)
+    ce_1 = compiler_end(O_1, 'test_target')
+    ce_2 = compiler_end(O_1, 'test_target')
+    cb_2 = compiler_begin(ce_1, 'test_target')
+    O_2 = relay.nn.relu(cb_2)
+    ce_3 = compiler_end(O_2, 'test_target')
+    cb_d = compiler_begin(ce_2, "default")
+    X = relay.tanh(cb_d)
+    ce_d = compiler_end(X, 'default')
+    cb_3 = compiler_begin(ce_3, 'test_target')
+    cb_4 = compiler_begin(ce_d, 'test_target')
+    O_3 = relay.add(cb_3, cb_4)
+    ce_4 = compiler_end(O_3, 'test_target')
+    diamond = relay.Function([data], ce_4)
+
+    region_set = relay.analysis.AnnotatedRegionSet(diamond,
+                                                   relay.op.get("annotation.compiler_begin"),
+                                                   relay.op.get("annotation.compiler_end"))
+    assert len(region_set) == 4
+    check_region(
+        region_set,
+        [cb_1],
+        [cb_1, O_1, ce_1, ce_2],
+        [ce_1, ce_2],
+    )
+    check_region(
+        region_set,
+        [cb_2],
+        [cb_2, O_2, ce_3],
+        [ce_3],
+    )
+    check_region(
+        region_set,
+        [cb_d],
+        [cb_d, X, ce_d],
+        [ce_d],
+    )
+    check_region(
+        region_set,
+        [cb_3, cb_4],
+        [cb_3, cb_4, O_3, ce_4],
+        [ce_4],
+    )
+
+
+def test_region_set_creator_merged():
+    data = relay.var('data', shape=(10, 10))
+    cb_1 = compiler_begin(data, 'test_target')
+    O_1 = relay.abs(cb_1)
+    ce_2 = compiler_end(O_1, 'test_target')
+    O_2 = relay.nn.relu(O_1)
+    ce_3 = compiler_end(O_2, 'test_target')
+    cb_d = compiler_begin(ce_2, "default")
+    X = relay.tanh(cb_d)
+    ce_d = compiler_end(X, 'default')
+    cb_3 = compiler_begin(ce_3, 'test_target')
+    cb_4 = compiler_begin(ce_d, 'test_target')
+    O_3 = relay.add(cb_3, cb_4)
+    ce_4 = compiler_end(O_3, 'test_target')
+    merged = relay.Function([data], ce_4)
+
+    region_set = relay.analysis.AnnotatedRegionSet(merged,
 
 Review comment:
   We don't deal with targets at all in this pass/class because in principle it 
should work for arbitrary begin/end annotations, not necessarily just compiler 
begin/end ones. All of the outs/ins are necessarily annotations, so one way you 
can query the target (and how I do it) is to look at any one of these 
annotations and check its 'compiler' attribute.
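That lookup (pick any one boundary annotation of a region and read its 'compiler' attribute) could be sketched like this, using stand-in `Annotation` objects with hypothetical fields rather than the actual Relay API:

```python
class Annotation:
    """Stand-in for a compiler_begin/compiler_end call node."""

    def __init__(self, op_name, compiler):
        self.op_name = op_name
        self.attrs = {"compiler": compiler}


def region_target(region_ins, region_outs):
    """Infer a region's target from any one of its boundary annotations."""
    boundary = list(region_ins) + list(region_outs)
    assert boundary, "every region has at least one boundary annotation"
    return boundary[0].attrs["compiler"]
```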




[incubator-tvm] branch master updated (e6dd8e1 -> 667e24f)

2020-03-24 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e6dd8e1  [Relay] GradientCell Relay Pass (#5039)
 add 667e24f  [CI] Update rust docker (#5141)

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)



[GitHub] [incubator-tvm] jroesch merged pull request #5141: [CI] Update rust docker

2020-03-24 Thread GitBox
jroesch merged pull request #5141: [CI] Update rust docker
URL: https://github.com/apache/incubator-tvm/pull/5141
 
 
   




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5121: [TE] reverse-mode autodiff without any optimization

2020-03-24 Thread GitBox
yzhliu commented on a change in pull request #5121: [TE] reverse-mode autodiff 
without any optimization
URL: https://github.com/apache/incubator-tvm/pull/5121#discussion_r396935671
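The intrinsic rules in the patch below encode standard calculus identities (e.g. sigmoid' = s*(1-s), tanh' = 1-t^2, sqrt' = 1/(2*sqrt(x))). Those identities can be sanity-checked numerically in plain Python, independent of TVM, by comparing each closed form against a central finite difference:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


# Closed-form derivatives matching the intrinsic rules in jacobian.cc.
RULES = {
    "exp":     (math.exp,  lambda x: math.exp(x)),
    "log":     (math.log,  lambda x: 1.0 / x),
    "sigmoid": (sigmoid,   lambda x: sigmoid(x) * (1.0 - sigmoid(x))),
    "sqrt":    (math.sqrt, lambda x: 1.0 / (2.0 * math.sqrt(x))),
    "tanh":    (math.tanh, lambda x: 1.0 - math.tanh(x) ** 2),
}


def numeric_diff(f, x, eps=1e-6):
    """Central finite difference, accurate to O(eps**2)."""
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)


def max_error(x=0.7):
    """Largest gap between closed-form and numeric derivative at x."""
    return max(abs(numeric_diff(f, x) - df(x)) for f, df in RULES.values())
```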
 
 

 ##
 File path: src/te/autodiff/jacobian.cc
 ##
 @@ -0,0 +1,381 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file jacobian.cc
+ * \brief Calculate Jacobian of two tensors dY/dX.
+ *X must be direct input tensor of Y.
+ *The result Jacobian shape will be (Y.shape, X.shape)
+ *The algorithm was initially implemented by Sergei Grechanik 
(sgrechanik-h)
+ *in [Automatic differentiation for tensor expressions](#2498)
+ *and [Zero elimination](#2634)
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "ad_util.h"
+
+namespace tvm {
+namespace te {
+
+#define NOT_IMPLEMENTED \
+  { LOG(FATAL) << "Derivative of this expr is not implemented: " << GetRef<PrimExpr>(op); throw; }
+
+/*! \brief Differentiate an expression wrt a variable or a tensor element */
+class JacobianMutator : public ExprMutator {
+ public:
+  /*!
+   * \brief Differentiate wrt `input(indices)`.
+   * \param input The input tensor.
+   * \param indices The indices of the element with respect to which to differentiate.
+   */
+  explicit JacobianMutator(Tensor input, Array<PrimExpr> indices)
+    : input_(input), indices_(indices) {}
+  /*!
+   * \brief Differentiate wrt the input variable.
+   * \param input The input variable.
+   */
+  explicit JacobianMutator(Var input) : input_var_(input) {}
+
+  PrimExpr Mutate(PrimExpr e) {
+    if (e.dtype().is_int() || e.dtype().is_uint()) {
+      LOG(WARNING) << "For now we assume that the derivative of any integer expression is always 0."
+                   << " e = " << e;
+      return make_zero(e.dtype());
+    } else {
+      return ExprMutator::VisitExpr(e);
+    }
+  }
+
+  PrimExpr VisitExpr_(const VarNode* op) {
+    if (input_var_.get() && input_var_.get() == op && op->dtype.is_float()) {
+      return FloatImm(op->dtype, 1.0);
+    } else {
+      return make_zero(op->dtype);
+    }
+  }
+
+  PrimExpr VisitExpr_(const LoadNode* op) NOT_IMPLEMENTED
+  PrimExpr VisitExpr_(const LetNode* op) NOT_IMPLEMENTED
+
+  PrimExpr VisitExpr_(const CallNode* op) {
+    PrimExpr expr = GetRef<PrimExpr>(op);
+    if (op->call_type == CallNode::CallType::Halide) {
+      if (input_.get() && op->func.same_as(input_->op) &&
+          op->value_index == input_->value_index) {
+        // Tensor(indices)
+        CHECK_EQ(indices_.size(), op->args.size());
+        PrimExpr condition = const_true();
+        for (size_t i = 0; i < input_.ndim(); ++i) {
+          condition = AndNode::make(condition, EQNode::make(indices_[i], op->args[i]));
+        }
+        return CastNode::make(op->dtype, condition);
+      } else {
+        return make_zero(op->dtype);
+      }
+    } else if (op->call_type == CallNode::CallType::PureIntrinsic) {
+      static std::unordered_set<std::string> piecewise_const = {"floor", "ceil", "trunc", "round"};
+      if (op->name == "exp") {
+        return MulNode::make(Mutate(op->args[0]), expr);
+      } else if (op->name == "log") {
+        return DivNode::make(Mutate(op->args[0]), op->args[0]);
+      } else if (op->name == "sigmoid") {
+        return MulNode::make(Mutate(op->args[0]),
+                             MulNode::make(expr, SubNode::make(FloatImm(expr.dtype(), 1.0), expr)));
+      } else if (op->name == "sqrt") {
+        return DivNode::make(Mutate(op->args[0]),
+                             MulNode::make(expr, FloatImm(expr.dtype(), 2.0)));
+      } else if (op->name == "tanh") {
+        return MulNode::make(Mutate(op->args[0]),
+                             SubNode::make(FloatImm(expr.dtype(), 1.0), MulNode::make(expr, expr)));
+      } else if (op->name == "pow") {
+        auto x = op->args[0], y = op->args[1];
+        return expr * (Mutate(y)*log(x) + Mutate(x)*y/x);
+      } else if (op->name == "fabs") {
+        auto type = op->args[0].dtype();
+        return MulNode::make(Mutate(op->args[0]),
+ SelectNode::make(GENode::make(op->args[0], 
make_zero(type)),
+  FloatImm(type, 1.0), 
Floa