[GitHub] [incubator-tvm] zhen-jia opened a new pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner

2020-03-05 Thread GitBox
zhen-jia opened a new pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner
URL: https://github.com/apache/incubator-tvm/pull/5000
 
 
   If layout_transform is in the graph, the graph tuner's has_multiple_inputs will give a wrong answer. This PR solves that problem.
   @kevinthesun please help review this
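   A minimal sketch of the opt-out idea, using a hypothetical node layout rather than the graph tuner's actual data structures: ops on an opt-out list, such as layout_transform, are looked through when counting a node's real inputs.

   ```python
   # Hypothetical sketch, not the PR's code: node_list entries are dicts with
   # "op" (None for constants), "name", and "inputs" (indices into node_list).
   OPT_OUT_OPS = {"layout_transform"}

   def has_multiple_inputs(node_list, node_idx, input_names):
       """Return True if the node has more than one real input."""
       real_inputs = 0
       for in_idx in node_list[node_idx]["inputs"]:
           in_node = node_list[in_idx]
           if in_node["op"] in OPT_OUT_OPS:
               # Look through the opted-out op and count its producers instead.
               real_inputs += len(in_node["inputs"])
           elif in_node["op"] is not None or in_node["name"] in input_names:
               real_inputs += 1
       return real_inputs > 1
   ```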




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734661
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+        data = np.random.rand(1, 12, 12, 32)
+        with tf.Session(graph=out.graph) as sess:
+            sess.run([tf.global_variables_initializer()])
+            tf_out = sess.run(out, feed_dict={input_tensor:data})
+            constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                             target=target,
+                             target_host = target,
+                             params=params)
+    from tvm.contrib import graph_runtime
+    m = graph_runtime.create(graph, lib, ctx)
+    m.set_input(**params)
+    m.set_input('input', data)
+    m.run()
+    tvm_out=m.get_output(0)
 
 Review comment:
   space between `=`




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734493
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+        data = np.random.rand(1, 12, 12, 32)
+        with tf.Session(graph=out.graph) as sess:
+            sess.run([tf.global_variables_initializer()])
+            tf_out = sess.run(out, feed_dict={input_tensor:data})
+            constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                             target=target,
+                             target_host = target,
+                             params=params)
 
 Review comment:
   Align: make sure `target=target` keeps the same alignment as `relay.build(mod,`.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388735248
 
 

 ##
 File path: python/tvm/relay/frontend/tensorflow.py
 ##
 @@ -887,7 +887,14 @@ def _impl(inputs, attr, params):
 if 'U' in attr:
 need_cast = True
 inputs[0] = _op.cast(inputs[0], dtype=attr['U'].name)
-
+# Check if mean and variance are empty
+# If so, replace them with Mean and Variance Ops
+# For run-time calculation
+moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
 
 Review comment:
   It is time to add a `CHECK` on `len(inputs)`.
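   A hedged sketch of the kind of guard being requested, assuming the converter's `inputs` list (illustrative, not the PR's exact code):

   ```python
   # FusedBatchNorm inputs: data, gamma, beta, moving_mean, moving_variance.
   assert len(inputs) == 5, "FusedBatchNorm expects 5 inputs"
   moving_mean_shape = [int(n) for n in inputs[3].type_annotation.shape]
   moving_variance_shape = [int(n) for n in inputs[4].type_annotation.shape]
   ```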




[GitHub] [incubator-tvm] zhen-jia closed pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner

2020-03-05 Thread GitBox
zhen-jia closed pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner
URL: https://github.com/apache/incubator-tvm/pull/5000
 
 
   




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734798
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+        data = np.random.rand(1, 12, 12, 32)
+        with tf.Session(graph=out.graph) as sess:
+            sess.run([tf.global_variables_initializer()])
+            tf_out = sess.run(out, feed_dict={input_tensor:data})
+            constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
+    mod, params = relay.frontend.from_tensorflow(constant_graph, layout=layout, outputs=['sum'])
+    with relay.build_config(opt_level=3):
+        graph, lib, params = relay.build(mod,
+                             target=target,
+                             target_host = target,
+                             params=params)
+    from tvm.contrib import graph_runtime
+    m = graph_runtime.create(graph, lib, ctx)
+    m.set_input(**params)
+    m.set_input('input', data)
+    m.run()
+    tvm_out=m.get_output(0)
+    tvm.testing.assert_allclose(tvm_out.asnumpy(), tf_out.astype(tvm_out.dtype), rtol=1e-3)
+
+if __name__ == "__main__":
+    test_fusedbatchnorm()
 
 Review comment:
   test_fused_batch_norm




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734523
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
+    with g.as_default():
+        input_tensor = tf.placeholder(tf.float32,shape=(1, 12, 12, 32), name='input')
+        alpha = tf.constant(np.random.rand(32,), dtype=tf.float32, name='alpha')
+        beta = tf.constant(np.random.rand(32,), dtype=tf.float32, name='beta')
+        bn = tf.nn.fused_batch_norm(x=input_tensor, offset=beta, scale=alpha, name='bn')
+        out = tf.identity(bn[0], name='sum')
+        data = np.random.rand(1, 12, 12, 32)
+        with tf.Session(graph=out.graph) as sess:
+            sess.run([tf.global_variables_initializer()])
+            tf_out = sess.run(out, feed_dict={input_tensor:data})
+            constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ['sum'])
+
+
+    layout = None
+    target = 'llvm'
+    ctx=tvm.cpu(0)
 
 Review comment:
   space between `=`




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388735082
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow 
frontend when mean and variance are not given.
 
 Review comment:
   remove `article`




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
FrozenGene commented on a change in pull request #4990: [TF][Relay] BatchNorm 
support with run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#discussion_r388734566
 
 

 ##
 File path: tests/python/frontend/tensorflow/test_bn_trainingmod.py
 ##
 @@ -0,0 +1,61 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+BatchNorm without given mean and variance given testcases
+
+This article is a test script to test fused_batch_norm operators in TensorFlow frontend when mean and variance are not given.
+"""
+import tvm
+import numpy as np
+import tensorflow as tf
+from tvm import relay
+from tensorflow.python.framework import graph_util
+
+def test_fusedbatchnorm():
+    g=tf.Graph()
 
 Review comment:
   space between `=`




[GitHub] [incubator-tvm] zhen-jia opened a new pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner

2020-03-05 Thread GitBox
zhen-jia opened a new pull request #5000: Add opt out operator for has_multiple_inputs for graph tuner
URL: https://github.com/apache/incubator-tvm/pull/5000
 
 
   If layout_transform is in the graph, the graph tuner's has_multiple_inputs will give a wrong answer. This PR solves that problem.




[GitHub] [incubator-tvm] liangfu edited a comment on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu edited a comment on issue #4998: [VTA][Chisel] Change Scala Linter 
scalafmt => scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595612183
 
 
   In addition, based on the suggestions from the linter, I made some final changes to fix indentation errors; here is a typical example:
   ```diff
   diff --git a/vta/hardware/chisel/src/main/scala/core/Load.scala b/vta/hardware/chisel/src/main/scala/core/Load.scala
   index 7c79498bd..50c26bb8e 100644
   --- a/vta/hardware/chisel/src/main/scala/core/Load.scala
   +++ b/vta/hardware/chisel/src/main/scala/core/Load.scala
   @@ -25,12 +25,12 @@ import vta.util.config._
    import vta.shell._
   
    /** Load.
   -  *
   -  * Load inputs and weights from memory (DRAM) into scratchpads (SRAMs).
   -  * This module instantiate the TensorLoad unit which is in charge of
   -  * loading 1D and 2D tensors to scratchpads, so it can be used by
   -  * other modules such as Compute.
   -  */
   + *
   + * Load inputs and weights from memory (DRAM) into scratchpads (SRAMs).
   + * This module instantiate the TensorLoad unit which is in charge of
   + * loading 1D and 2D tensors to scratchpads, so it can be used by
   + * other modules such as Compute.
   + */
    class Load(debug: Boolean = false)(implicit p: Parameters) extends Module {
      val mp = p(ShellKey).memParams
      val io = IO(new Bundle {
   @@ -110,11 +110,10 @@ class Load(debug: Boolean = false)(implicit p: Parameters) extends Module {
        when(dec.io.isSync) {
          printf("[Load] start sync\n")
        }.elsewhen(dec.io.isInput) {
   -      printf("[Load] start input\n")
   -    }
   -    .elsewhen(dec.io.isWeight) {
   -      printf("[Load] start weight\n")
   -    }
   +      printf("[Load] start input\n")
   +    }.elsewhen(dec.io.isWeight) {
   +      printf("[Load] start weight\n")
   +    }
      }
      // done
      when(state === sSync) {
   ```
   Please take another look.




[GitHub] [incubator-tvm] liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => 
scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595612183
 
 
   In addition, I made some final changes to fix indentation errors; here is a typical example:
   ```diff
   diff --git a/vta/hardware/chisel/src/main/scala/core/Load.scala b/vta/hardware/chisel/src/main/scala/core/Load.scala
   index 7c79498bd..50c26bb8e 100644
   --- a/vta/hardware/chisel/src/main/scala/core/Load.scala
   +++ b/vta/hardware/chisel/src/main/scala/core/Load.scala
   @@ -25,12 +25,12 @@ import vta.util.config._
    import vta.shell._
   
    /** Load.
   -  *
   -  * Load inputs and weights from memory (DRAM) into scratchpads (SRAMs).
   -  * This module instantiate the TensorLoad unit which is in charge of
   -  * loading 1D and 2D tensors to scratchpads, so it can be used by
   -  * other modules such as Compute.
   -  */
   + *
   + * Load inputs and weights from memory (DRAM) into scratchpads (SRAMs).
   + * This module instantiate the TensorLoad unit which is in charge of
   + * loading 1D and 2D tensors to scratchpads, so it can be used by
   + * other modules such as Compute.
   + */
    class Load(debug: Boolean = false)(implicit p: Parameters) extends Module {
      val mp = p(ShellKey).memParams
      val io = IO(new Bundle {
   @@ -110,11 +110,10 @@ class Load(debug: Boolean = false)(implicit p: Parameters) extends Module {
        when(dec.io.isSync) {
          printf("[Load] start sync\n")
        }.elsewhen(dec.io.isInput) {
   -      printf("[Load] start input\n")
   -    }
   -    .elsewhen(dec.io.isWeight) {
   -      printf("[Load] start weight\n")
   -    }
   +      printf("[Load] start input\n")
   +    }.elsewhen(dec.io.isWeight) {
   +      printf("[Load] start weight\n")
   +    }
      }
      // done
      when(state === sSync) {
   ```
   Please take another look.




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595611551
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   > I think our pr could remove `name_hint` too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   Thanks for the discussion! Following it, I have rewritten the code in the newest commit. This time the function `BatchNormToInferUnpack` is not modified; we only modify the TensorFlow frontend for `_fused_batch_norm`. If `mean` and `variance` are empty, we directly add `Mean` and `Variance` relay operators before the `batch_norm` relay operator in the frontend graph, without modifying the `batch_norm` relay operator at all.
   Thank you for the suggestions!
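   For reference, a hedged sketch of that approach against the public relay API (the NHWC axis and epsilon here are illustrative, not the PR's exact code):

   ```python
   from tvm import relay

   def batch_norm_with_runtime_stats(data, gamma, beta, axis=3, epsilon=1e-3):
       # Compute mean/variance over every axis except the channel axis, then
       # feed them to a regular batch_norm, as described above.
       reduce_axes = [i for i in range(4) if i != axis]
       mean = relay.mean(data, axis=reduce_axes)
       variance = relay.variance(data, axis=reduce_axes)
       return relay.nn.batch_norm(data, gamma, beta, mean, variance,
                                  axis=axis, epsilon=epsilon)[0]
   ```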




[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time 
mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595611551
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   > I think our pr could remove `name_hint` too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   Thanks for the discussion! Following it, I have rewritten the code in the newest commit. This time the function `BatchNormToInferUnpack` is not modified; we only modify the TensorFlow frontend for `_fused_batch_norm`. If `mean` and `variance` are empty, we directly add `Mean` and `Variance` relay operators to the frontend graph before the `batch_norm` operator, without modifying the `batch_norm` operator at all.
   Thank you for the suggestions!




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to data shape checking. If we plan to do it this way, we need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with the same shape as the `channel` dimension. Making this relay operator accept `mean` and `variance` with an empty shape would require further modifications.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
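   To make that constraint concrete, a hedged illustration against the standard relay API (shapes are hypothetical): type inference succeeds only when `mean`/`variance` match the channel dimension.

   ```python
   from tvm import relay

   data = relay.var("data", shape=(1, 12, 12, 32))   # NHWC, channel axis 3
   gamma = relay.var("gamma", shape=(32,))
   beta = relay.var("beta", shape=(32,))
   mean = relay.var("mean", shape=(32,))         # shape=() would fail type checking
   variance = relay.var("variance", shape=(32,))
   out = relay.nn.batch_norm(data, gamma, beta, mean, variance, axis=3)[0]
   ```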
   
   
   




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to data shape checking. If we plan to do it this way, we need to modify `BatchNormRel` for data shape assignment too, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to data shape checking. If we plan to do it this way, we need to modify `BatchNormRel` for data shape assignment, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to shape checking. If we plan to do it this way, we need to modify `BatchNormRel` for data shape assignment too, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we plan to do it this way, we need to modify `BatchNormRel` for data shape assignment too, since the current `batch_norm` relay operator only accepts `mean` and `variance` with a non-empty dimension. We need to make it accept `mean` and `variance` with an empty dimension.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] MarisaKirisame commented on issue #4979: [Relay/TOPI/TF][Op] Add support for randomuniform Op in TVM

2020-03-05 Thread GitBox
MarisaKirisame commented on issue #4979: [Relay/TOPI/TF][Op] Add support for 
randomuniform Op in TVM
URL: https://github.com/apache/incubator-tvm/pull/4979#issuecomment-595598371
 
 
   Because if there is only one seed, every random uniform call will return the same result for the same tensor size.
   Internally, the random computation already proceeds by using a random number generator and advancing the seed; this is an implementation detail of the PRNG.
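   To illustrate the point in NumPy terms (not TVM code):

   ```python
   import numpy as np

   # Re-seeding on every call yields identical "random" tensors:
   a = np.random.RandomState(42).uniform(size=3)
   b = np.random.RandomState(42).uniform(size=3)
   assert (a == b).all()

   # A stateful generator advances its internal state between calls:
   rng = np.random.RandomState(42)
   c = rng.uniform(size=3)
   d = rng.uniform(size=3)
   assert not (c == d).all()
   ```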




[GitHub] [incubator-tvm] liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => 
scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595597598
 
 
   @tmoreau89 @vegaluisjose Thanks for the review. I think this is now ready 
for a merge, and let's wait for the CI testing results.




[GitHub] [incubator-tvm] tmoreau89 commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
tmoreau89 commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt 
=> scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595596630
 
 
   Thanks @liangfu; let us know when it's ready for a merge.




[GitHub] [incubator-tvm] liangfu removed a comment on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu removed a comment on issue #4998: [VTA][Chisel] Change Scala Linter 
scalafmt => scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595594406
 
 
   vta/apps still needs to be updated.




[GitHub] [incubator-tvm] liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu commented on issue #4998: [VTA][Chisel] Change Scala Linter scalafmt => 
scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998#issuecomment-595594406
 
 
   vta/apps still needs to be updated.




[GitHub] [incubator-tvm] tmoreau89 commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595593538
 
 
   Agree with @vegaluisjose on the separate PR; if you submit it, we'll work to merge it quickly so you can rebase against master and get those changes in.




[GitHub] [incubator-tvm] vegaluisjose opened a new issue #4999: [VTA][Chisel] build in Catalina (OSX)

2020-03-05 Thread GitBox
vegaluisjose opened a new issue #4999: [VTA][Chisel] build in Catalina (OSX)
URL: https://github.com/apache/incubator-tvm/issues/4999
 
 
   Hey @liangfu and @tmoreau89 ,
   
   I was trying to figure out today why building the Chisel backend in VTA is 
failing in Catalina:
   
   * [This](https://github.com/apache/incubator-tvm/blob/master/vta/hardware/chisel/Makefile#L107) fails if Xcode is not installed; maybe we should add another option instead of just checking the OS? Making Xcode a dependency might not be a good idea.
   
   * [This](https://github.com/apache/incubator-tvm/blob/master/vta/hardware/chisel/Makefile#L96) is true if `nproc` is not installed. This is bad because the number of threads won't be a number, making the `Verilator` rule fail later on.
   
   * The `sbt` [version](https://github.com/apache/incubator-tvm/blob/master/vta/hardware/chisel/project/build.properties#L20) has to be updated to `1.3.2`. The [chisel template](https://github.com/freechipsproject/chisel-template/blob/release/project/build.properties) has updated that version.
   
   * Update the `chisel-iotesters` [version](https://github.com/apache/incubator-tvm/blob/master/vta/hardware/chisel/build.sbt#L65) to `1.2.4`.
   
   I can do the last two; @liangfu, what are your thoughts on the first two?
   




[GitHub] [incubator-tvm] liangfu opened a new pull request #4998: [VTA][Chisel] Change Scala Linter scalafmt => scalastyle

2020-03-05 Thread GitBox
liangfu opened a new pull request #4998: [VTA][Chisel] Change Scala Linter 
scalafmt => scalastyle
URL: https://github.com/apache/incubator-tvm/pull/4998
 
 
   As scalafmt changes the code even with the `--test` argument in its latest version, this PR switches the Scala linter for Chisel VTA from scalafmt to scalastyle.
   
   @tmoreau89 @vegaluisjose Please review.




[GitHub] [incubator-tvm] vegaluisjose commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
vegaluisjose commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595590569
 
 
   Hey @pasqoc ,
   
   I just ran/built everything on both Linux and Mac, but it seems to fail consistently on `resnet18v1`. I am getting the following:
   
   ```
   [traced 408M cycles]
   
   Execution statistics:
   cycle_count : 31427003
   
   resnet18_v1 prediction for sample 0
   #1: grocery store, grocery, food market, market
   #2: scale, weighing machine
   #3: banana
   #4: punching bag, punch bag, punching ball, punchball
   #5: cleaver, meat cleaver, chopper
   Traceback (most recent call last):
   
 File "deploy_classification.py", line 290, in 
   assert(cat_detected)
   
   AssertionError
   ```
   Perhaps I am missing something? I built everything directly from your `de10-nano` branch.




[GitHub] [incubator-tvm] vegaluisjose commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
vegaluisjose commented on a change in pull request #4986: 
[VTA][Chisel,de10nano] Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388703764
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   Hey @liangfu ,
   
   that sounds like a plan. Let's do it in a separate PR. I saw the chisel 
template is also using that.




[GitHub] [incubator-tvm] liangfu commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
liangfu commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388688246
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   I think it's better to replace the Scala linter `scalafmt` with `scalastyle`: the former focuses on rewriting code to a predefined format (it removed the `--test` argument lately, I think), while the latter focuses on checking style errors with no intention of changing the code base. I can put up an update to switch the Scala linter, if you would like, @tmoreau89 @vegaluisjose.




[GitHub] [incubator-tvm] kevinthesun commented on issue #4938: [topi][relay] add operation tan to TVM

2020-03-05 Thread GitBox
kevinthesun commented on issue #4938: [topi][relay] add operation tan to TVM
URL: https://github.com/apache/incubator-tvm/pull/4938#issuecomment-595568771
 
 
   @tqchen Do we need to do something for this PR?




[GitHub] [incubator-tvm] masahi commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
masahi commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995#issuecomment-595563667
 
 
   AMDGPU also needs the RPC runner for AutoTVM, if I remember correctly.




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   > >I think our pr could remove `name_hint` too.
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we plan to do it this way, we need to modify the layout checking of the `batch_norm` operator too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad edited a comment on issue #4990: [TF][Relay] BatchNorm support with 
run-time mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   >> I think our pr could remove `name_hint` too.
   
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we plan to do it this way, we need to modify the layout checking of the `batch_norm` operator too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > >Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time mean and variance calculation

2020-03-05 Thread GitBox
lfengad commented on issue #4990: [TF][Relay] BatchNorm support with run-time 
mean and variance calculation
URL: https://github.com/apache/incubator-tvm/pull/4990#issuecomment-595562012
 
 
   > > Yeah, our current implementation is just to check whether `mean` / 
`variance` is empty `VarNode` (with zero dimension), and then call `Mean` and 
`Variance` in BatchNormToInferUnpack.
   > 
   > I think our pr could remove `name_hint` too.
   Yeah, I agree that the better way would be to remove `name_hint` and just check whether `mean` and `variance` are empty inside `BatchNormToInferUnpack`, with no need to modify the TensorFlow frontend. Previously I tried this way but got some compilation errors related to layout checking. If we plan to do it this way, we need to modify the layout checking of the `batch_norm` operator too.
   > 
   > > if mean / variance is VarNode but with non-zero dimension, it still has 
the possibility to hold the given pre-defined constant values and thus cannot 
be replaced with Mean \ Variance.
   > 
   > Could you give us an example of this condition? I could only imagine 
models have empty or full pre-defined values. So we should only to calculate it 
by calling `Mean` / `Variance` feed by `data` or our current implementation of 
`BatchNormToInferUnpack `.
   What I mean is that for both cases the `mean` and `variance` are `VarNode`. 
In one case the `VarNode` is empty without pre-defined values, while in the 
other case the `VarNode` is not empty with pre-defined values. 
   Thank you for the discussion!
   
   
   




[GitHub] [incubator-tvm] tqchen commented on issue #4938: [topi][relay] add operation tan to TVM

2020-03-05 Thread GitBox
tqchen commented on issue #4938: [topi][relay] add operation tan to TVM
URL: https://github.com/apache/incubator-tvm/pull/4938#issuecomment-595541216
 
 
   NOTE: git commit messages got messed up today due to a GitHub issue (https://discuss.tvm.ai/t/github-issue-the-commit-author-is-wrong-since-today/5880/15); we should refrain from merging others' PRs until tomorrow.




[GitHub] [incubator-tvm] kevinthesun merged pull request #4938: [topi][relay] add operation tan to TVM

2020-03-05 Thread GitBox
kevinthesun merged pull request #4938: [topi][relay] add operation tan to TVM
URL: https://github.com/apache/incubator-tvm/pull/4938
 
 
   




[incubator-tvm] branch master updated: [topi][relay] add operation tan to TVM (#4938)

2020-03-05 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new d992468  [topi][relay] add operation tan to TVM (#4938)
d992468 is described below

commit d992468d80af816f0413fc43c2ee1c02f7fe19c3
Author: Yao Wang 
AuthorDate: Thu Mar 5 17:17:35 2020 -0800

[topi][relay] add operation tan to TVM (#4938)

* Add relay operation relay.op.tan.

* Update tan implementation in TVM.

* Update tests.

* Add shape function for tan.

* Add missing main test to python/frontend/tensorflow/test_forward.

* Revert, back to sin/cos.

* Revert "Revert, back to sin/cos."

This reverts commit 4da5b503b921585ba9d80944b29136142b575c40.

* Fix implementation of tan in cuda. Do not support tan for float16.

Simplify topi/tests/python/test_topi_math. Add testing for tan with float32 
and float64.

Try again to implement tan as sin/cos in llvm.
---
 docs/frontend/tensorflow.rst |  1 +
 include/tvm/tir/op.h |  1 +
 python/tvm/relay/frontend/mxnet.py   |  1 +
 python/tvm/relay/frontend/tensorflow.py  |  1 +
 python/tvm/relay/frontend/tflite.py  |  8 
 python/tvm/relay/op/_tensor.py   |  2 ++
 python/tvm/relay/op/_tensor_grad.py  |  7 +++
 python/tvm/relay/op/tensor.py| 15 +++
 python/tvm/te/__init__.py|  2 +-
 python/tvm/tir/__init__.py   |  2 +-
 python/tvm/tir/op.py | 16 
 src/relay/op/tensor/unary.cc | 11 +++
 src/target/intrin_rule.cc|  3 +++
 src/target/llvm/intrin_rule_llvm.cc  | 14 ++
 src/target/llvm/intrin_rule_nvptx.cc |  3 +++
 src/target/llvm/intrin_rule_rocm.cc  |  3 +++
 src/target/source/intrin_rule_cuda.cc| 19 +++
 src/tir/ir/expr.cc   |  2 +-
 tests/python/frontend/tensorflow/test_forward.py | 10 ++
 tests/python/frontend/tflite/test_forward.py |  8 
 tests/python/relay/test_op_grad_level1.py|  1 +
 tests/python/relay/test_op_level1.py |  1 +
 tests/python/unittest/test_testing.py|  1 +
 topi/include/topi/elemwise.h |  1 +
 topi/python/topi/math.py | 17 +
 topi/src/topi.cc |  5 +
 topi/tests/python/test_topi_basic.py |  1 +
 topi/tests/python/test_topi_math.py  |  2 ++
 28 files changed, 155 insertions(+), 3 deletions(-)

diff --git a/docs/frontend/tensorflow.rst b/docs/frontend/tensorflow.rst
index 87341ab..8a54033 100644
--- a/docs/frontend/tensorflow.rst
+++ b/docs/frontend/tensorflow.rst
@@ -135,6 +135,7 @@ Supported Ops
 - ConcatV2
 - Conv2D
 - Cos
+- Tan
 - CropAndResize
 - DecodeJpeg
 - DepthwiseConv2dNative
diff --git a/include/tvm/tir/op.h b/include/tvm/tir/op.h
index 5172b14..0a714d8 100644
--- a/include/tvm/tir/op.h
+++ b/include/tvm/tir/op.h
@@ -515,6 +515,7 @@ TVM_DECLARE_INTRIN_UNARY(sqrt);
 TVM_DECLARE_INTRIN_UNARY(rsqrt);
 TVM_DECLARE_INTRIN_UNARY(log);
 TVM_DECLARE_INTRIN_UNARY(popcount);
+TVM_DECLARE_INTRIN_UNARY(tan);
 TVM_DECLARE_INTRIN_UNARY(cos);
 TVM_DECLARE_INTRIN_UNARY(sin);
 TVM_DECLARE_INTRIN_UNARY(atan);
diff --git a/python/tvm/relay/frontend/mxnet.py 
b/python/tvm/relay/frontend/mxnet.py
index 0020a63..c2bfd75 100644
--- a/python/tvm/relay/frontend/mxnet.py
+++ b/python/tvm/relay/frontend/mxnet.py
@@ -1696,6 +1696,7 @@ _identity_list = [
 "ones_like",
 "where",
 "gather_nd",
+"tan",
 "cos",
 "sin"
 ]
diff --git a/python/tvm/relay/frontend/tensorflow.py 
b/python/tvm/relay/frontend/tensorflow.py
index 14d2418..24164a3 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -1564,6 +1564,7 @@ _convert_map = {
 'LessEqual' : _broadcast('less_equal'),
 'Log'   : AttrCvt('log'),
 'Log1p' : _log1p(),
+'Tan'   : AttrCvt('tan'),
 'Cos'   : AttrCvt('cos'),
 'Sin'   : AttrCvt('sin'),
 'LogicalAnd': _logical('logical_and'),
diff --git a/python/tvm/relay/frontend/tflite.py 
b/python/tvm/relay/frontend/tflite.py
index bc51c91..c2ec4d4 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -68,6 +68,7 @@ class OperatorConverter(object):
 'LOG': self.convert_log,
 'SIN': self.convert_sin,
 'COS': 
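
As a quick way to exercise the new op (a hedged sketch against the Relay APIs of this era; not part of the commit itself):

```python
# Sketch: run relay.tan end to end on CPU and compare against numpy.
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.tan(x)))
data = np.array([0.0, 0.5, 1.0, 1.3], dtype="float32")
ex = relay.create_executor(kind="graph", mod=mod, ctx=tvm.cpu(0), target="llvm")
res = ex.evaluate()(data)
np.testing.assert_allclose(res.asnumpy(), np.tan(data), rtol=1e-5)
```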

[GitHub] [incubator-tvm] kevinthesun commented on issue #4938: [topi][relay] add operation tan to TVM

2020-03-05 Thread GitBox
kevinthesun commented on issue #4938: [topi][relay] add operation tan to TVM
URL: https://github.com/apache/incubator-tvm/pull/4938#issuecomment-595537526
 
 
   @notoraptor Thanks!




[incubator-tvm] branch master updated (a2c7f52 -> a198c9f)

2020-03-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a2c7f52  hotfix gcn tutorial fail (#4994)
 add a198c9f  Adding Hua Jiang as reviewer. (#4993)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-tvm] tqchen merged pull request #4993: [COMMUNITY] @huajsj -> Reviewer

2020-03-05 Thread GitBox
tqchen merged pull request #4993: [COMMUNITY] @huajsj -> Reviewer
URL: https://github.com/apache/incubator-tvm/pull/4993
 
 
   




[GitHub] [incubator-tvm] vegaluisjose commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
vegaluisjose commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595522233
 
 
   I think it would be nice to do that in a separate PR.




[GitHub] [incubator-tvm] pasqoc commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595517463
 
 
   Do you want me to remove python from 
   find_program(PYTHON NAMES python python3 python3.6)
   in cmake/modules/VTA.cmake?




[GitHub] [incubator-tvm] vegaluisjose commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
vegaluisjose commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595516892
 
 
   This is actually an error in that cmake file, because everything should be python3 by now. In fact, f-strings are used in other parts of the python codebase. This is a good find.




[GitHub] [incubator-tvm] pasqoc commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595514049
 
 
   Just removed them, but f-strings are so much better.
   Any particular reason why python 2 is still used?




[GitHub] [incubator-tvm] vegaluisjose commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
vegaluisjose commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595512678
 
 
   I was running into the same issue with the `f-strings` in your code. I think the problem is that we are running `python2` in the [cmake](https://github.com/apache/incubator-tvm/blob/master/cmake/modules/VTA.cmake#L19-L28) module. Maybe @tmoreau89 can confirm?




[GitHub] [incubator-tvm] comaniac commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
comaniac commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995#issuecomment-595512579
 
 
   > Did you test GPU tuning? There are runtime issues, such as not being able to fork after initializing the CUDA runtime
   
   That's a good point. I'll find a GPU instance to test it.




[GitHub] [incubator-tvm] merrymercy commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
merrymercy commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995#issuecomment-595508666
 
 
   Did you test GPU tuning? There are runtime issues, such as not being able to fork after initializing the CUDA runtime




[incubator-tvm] branch master updated (f63b249 -> a2c7f52)

2020-03-05 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from f63b249  refactor build module to take IRModule (#4988)
 add a2c7f52  hotfix gcn tutorial fail (#4994)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/build_module.py | 4 
 tutorials/frontend/build_gcn.py  | 7 +--
 2 files changed, 9 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] zhiics merged pull request #4994: [Hotfix] Fix gcn tutorial failure

2020-03-05 Thread GitBox
zhiics merged pull request #4994: [Hotfix] Fix gcn tutorial failure
URL: https://github.com/apache/incubator-tvm/pull/4994
 
 
   




[GitHub] [incubator-tvm] pasqoc commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595504375
 
 
   Any chance the docker image may be using python2 instead of python3?
   I'm not running docker so I'm not able to reproduce, but it seems the f-string is not recognized.




[GitHub] [incubator-tvm] comaniac commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
comaniac commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995#issuecomment-595503657
 
 
   > The main reason to adopt the RPC runner is for the process isolation -- 
because we cannot guarantee that the local runner won't crash and as a result 
crash the AutoTVM process itself.
   > 
   > It might be useful though to have a persistent RPC server over the tuning 
process
   
   While I do agree with you that we should make sure the failure of a measurement won't crash the tuning process itself, I still don't think local RPC is the only or the best solution, considering its overhead and reliability. For example, we can run `time_evaluator` locally in a separate thread and catch the exception on failure, just like what we did for task extraction.
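   
   A rough sketch of that alternative (illustrative only; note that a plain thread cannot contain a hard crash such as a segfault, which is exactly the isolation RPC provides):
   
   ```python
   # Sketch: run a measurement callable in a worker thread, turning Python-level
   # failures and timeouts into results instead of aborting the tuning loop.
   from concurrent.futures import ThreadPoolExecutor

   def measure_safely(run_fn, timeout=10.0):
       pool = ThreadPoolExecutor(max_workers=1)
       future = pool.submit(run_fn)
       try:
           return future.result(timeout=timeout), None
       except Exception as err:  # measurement failed or timed out
           return None, err
       finally:
           pool.shutdown(wait=False)  # don't block on a hung measurement
   ```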




[GitHub] [incubator-tvm] pasqoc commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595502268
 
 
   Right now the build is failing in runtime.cc, not sure why.




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388629955
 
 

 ##
 File path: vta/tutorials/matrix_multiply.py
 ##
 @@ -52,7 +52,7 @@
 
 # We configure both the bitstream and the runtime system on the Pynq
 # to match the VTA configuration specified by the vta_config.json file.
-if env.TARGET == "pynq":
+if env.TARGET == "pynq" or env.TARGET == "de10nano":
 
 Review comment:
   I would prefer to decouple it in a follow-up PR, since I have everything tested at this point, if you don't mind.
   




[GitHub] [incubator-tvm] abergeron opened a new issue #4997: [relay][VM] Miscompile with double-if

2020-03-05 Thread GitBox
abergeron opened a new issue #4997: [relay][VM] Miscompile with double-if
URL: https://github.com/apache/incubator-tvm/issues/4997
 
 
   This relay code:
   
   ```
   def @main(%b: bool) -> ((int32,), (int32,)) {
 let %v0: (int32,) = (0 /* ty=int32 */,);
 let %v1: (int32,) = (1 /* ty=int32 */,);
 let %v2: (int32,) = if (%b) {
   %v0
 } else {
   %v1
 };
 let %v3: (int32,) = if (%b) {
   %v1
 } else {
   %v0
 };
 (%v2, %v3)
   }
   ```
   
   Compiles to this VM bytecode:
   
   ```
   main: 
   0: load_const $1 Const[0];
   1: alloc_data $2 tag(0) [$1];
   2: load_const $3 Const[1];
   3: alloc_data $4 tag(0) [$3];
   4: load_consti $5 1;
   5: if $0 $5 1 2;
   6: goto 2;
   7: move $2 $4;
   8: load_consti $6 1;
   9: if $0 $6 1 2;
   10: goto 2;
   11: move $4 $2;
   12: alloc_data $7 tag(0) [$2,$4];
   13: ret $7;
   ```
   
   If you pass `False` to the original function, the VM executes both `7: move $2 $4` and `11: move $4 $2`, which sets both of them to (1,) and thus returns the wrong result.
   
   The debug interpreter doesn't have this problem.
   
   This script reproduces the problem:
   ```
   import tvm
   from tvm import relay
   
   
   def f(b):
   v0 = (0,)
   v1 = (1,)
   v2 = v0 if (b) else v1
   v3 = v1 if (b) else v0
   return (v2, v3)
   
   mod = tvm.IRModule({})
   
   b = relay.var('b')
   
   v0 = relay.var('v0')
   v1 = relay.var('v1')
   v2 = relay.var('v2')
   v3 = relay.var('v3')
   
   out = relay.Tuple([v2, v3])
   out = relay.Let(v3, relay.If(b, v1, v0), out)
   out = relay.Let(v2, relay.If(b, v0, v1), out)
   out = relay.Let(v1, relay.Tuple([relay.const(1)]), out)
   out = relay.Let(v0, relay.Tuple([relay.const(0)]), out)
   
   fn = relay.Function([b], out)
   
   mod['main'] = fn
   
   print(str(mod))
   
   ctx = tvm.runtime.ndarray.context('llvm', 0)
   
   debug = relay.create_executor(ctx=ctx, mod=mod, kind='debug')
   res = debug.evaluate()(False)
   res = ((int(res[0][0].asnumpy()),), (int(res[1][0].asnumpy()),))
   print("debug:", res)
   
   vm = relay.create_executor(ctx=ctx, mod=mod, kind='vm')
   res = vm.evaluate()(False)
   res = ((int(res[0][0].asnumpy()),), (int(res[1][0].asnumpy()),))
   print("vm:", res)
   ```
   
   




[GitHub] [incubator-tvm] zhiics commented on issue #4996: [relay][external codegen] outline and inline lifted functions for external codegen

2020-03-05 Thread GitBox
zhiics commented on issue #4996: [relay][external codegen] outline and inline 
lifted functions for external codegen
URL: https://github.com/apache/incubator-tvm/pull/4996#issuecomment-595489617
 
 
   cc @comaniac @mbaret @soiferj @tqchen @icemelon9 @masahi 




[GitHub] [incubator-tvm] zhiics opened a new pull request #4996: [relay][external codegen] outline and inline lifted functions for external codegen

2020-03-05 Thread GitBox
zhiics opened a new pull request #4996: [relay][external codegen] outline and 
inline lifted functions for external codegen
URL: https://github.com/apache/incubator-tvm/pull/4996
 
 
   This PR outlines the functions that will be handled by external codegen and then inlines them back, to keep Relay's optimizations away from them. All function-level passes are handled directly by the pass manager. A few module-level passes need to detect whether a function uses the default compiler or not, because these passes take the functions internally, where they are invisible to the pass manager.
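   
   For reference, a function destined for external codegen is marked with a `Compiler` attribute, roughly like this (a sketch; the attribute-setting API spelling has varied across TVM versions, and the codegen name here is made up):
   
   ```python
   import tvm
   from tvm import relay

   x = relay.var("x", shape=(2, 2), dtype="float32")
   func = relay.Function([x], relay.nn.relu(x))
   # Functions carrying a "Compiler" attribute are handed to that external
   # backend and skipped by Relay's own optimizations.
   func = func.with_attr("Compiler", "my_codegen")
   mod = tvm.IRModule.from_expr(func)
   ```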
   




[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4964: [Torch] Add initial control flow support

2020-03-05 Thread GitBox
alexwong commented on a change in pull request #4964: [Torch] Add initial 
control flow support 
URL: https://github.com/apache/incubator-tvm/pull/4964#discussion_r388613725
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -614,6 +615,55 @@ def _impl(inputs, input_types):
 return _op.tensor.sqrt(data)
 return _impl
 
+def _neg():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.tensor.negative(data)
+return _impl
+
+def _tanh():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.tensor.tanh(data)
+return _impl
+
+def _ge():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.greater_equal(lhs, rhs)
+return _impl
+
+def _gt():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.greater(lhs, rhs)
+return _impl
+
+def _lt():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.less(lhs, rhs)
+return _impl
+
+def _Bool():
+def _impl(inputs, input_types):
 
 Review comment:
   Ah, I see. I think it's fine to leave this as-is for this PR, then. How the converter currently handles typing might need to be reworked in the future.




[GitHub] [incubator-tvm] tqchen commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
tqchen commented on issue #4995: [AutoTVM] Avoid using RPC for LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995#issuecomment-595486181
 
 
   The main reason to adopt the RPC runner is for the process isolation -- 
because we cannot guarantee that the local runner won't crash and as a result 
crash the AutoTVM process itself. 
   
   It might be useful though to have a persistent RPC server over the tuning 
process 




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4964: [Torch] Add initial control flow support

2020-03-05 Thread GitBox
masahi commented on a change in pull request #4964: [Torch] Add initial control 
flow support 
URL: https://github.com/apache/incubator-tvm/pull/4964#discussion_r388606332
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -842,7 +898,13 @@ def _convert_elemwise_input(data, input_type):
 "aten::detach"  : _identity(),
 "aten::upsample_bilinear2d" : _upsample("bilinear"),
 "aten::upsample_nearest2d"  : _upsample("nearest_neighbor"),
-"aten::expand_as"   : _expand_as()
+"aten::expand_as"   : _expand_as(),
+'aten::lt'  : _lt(),
 
 Review comment:
   Added. See the last commit 
https://github.com/apache/incubator-tvm/pull/4964/commits/b936bfa4c46b50949ffd303e243b83b3a385106f
   
   Also updates some tests to exercise all of them.




[GitHub] [incubator-tvm] tmoreau89 commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595474164
 
 
   It looks like at the moment the cpp lint test is failing: 
https://ci.tvm.ai/blue/organizations/jenkins/tvm/detail/PR-4986/5/pipeline
   
   You may have to go in and address the lint errors in 
`vta/src/de10nano/de10nano_mgr.h`




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388597868
 
 

 ##
 File path: vta/tutorials/matrix_multiply.py
 ##
 @@ -52,7 +52,7 @@
 
 # We configure both the bitstream and the runtime system on the Pynq
 # to match the VTA configuration specified by the vta_config.json file.
-if env.TARGET == "pynq":
+if env.TARGET == "pynq" or env.TARGET == "de10nano":
 
 Review comment:
   Perhaps in this PR I would add a couple of methods to the `Environment` class in this file: 
https://github.com/apache/incubator-tvm/blob/master/vta/python/vta/environment.py
   and then call them from these unit tests.
   
   That would help keep the code nice and easy to maintain! If you'd rather not do it, I can do a follow-up PR.




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4964: [Torch] Add initial control flow support

2020-03-05 Thread GitBox
masahi commented on a change in pull request #4964: [Torch] Add initial control 
flow support 
URL: https://github.com/apache/incubator-tvm/pull/4964#discussion_r388596458
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -614,6 +615,55 @@ def _impl(inputs, input_types):
 return _op.tensor.sqrt(data)
 return _impl
 
+def _neg():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.tensor.negative(data)
+return _impl
+
+def _tanh():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.tensor.tanh(data)
+return _impl
+
+def _ge():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.greater_equal(lhs, rhs)
+return _impl
+
+def _gt():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.greater(lhs, rhs)
+return _impl
+
+def _lt():
+def _impl(inputs, input_types):
+assert len(inputs) == 2
+lhs = _wrap_const(inputs[0])
+rhs = _wrap_const(inputs[1])
+return _op.tensor.less(lhs, rhs)
+return _impl
+
+def _Bool():
+def _impl(inputs, input_types):
 
 Review comment:
   ping? @alexwong




[GitHub] [incubator-tvm] comaniac opened a new pull request #4995: [AutoTVM] Avoid using RPC for LocalRunner

2020-03-05 Thread GitBox
comaniac opened a new pull request #4995: [AutoTVM] Avoid using RPC for 
LocalRunner
URL: https://github.com/apache/incubator-tvm/pull/4995
 
 
   ### Motivation and Summary
   
   `LocalRunner`, which measures the runtime of an op with a certain schedule 
config on the host machine directly, is one of the two runners in AutoTVM. 
However, `LocalRunner` was derived from `RPCRunner` and launched a local RPC 
server. The reason for this implementation was to have a unified interface and 
logic for both runners, but it introduces two problems:
   
   1. Overhead.
   Although a local RPC session doesn't really have to send the built binary over the network, it still has 1) the RPC connection overhead, and 2) an additional binary copy overhead (`RPCRunner` uploads the locally built binary to the remote RPC server; in the local RPC case, this means a copy from `/tmp` to `pwd`). 
   
   2. Reliability.
   As many people have reported on the discuss forum, the local RPC connection may be dropped or unstable. One possible cause is the tornado package, but this is hard to identify and fix. In any case, when we are using `LocalRunner`, it doesn't make any sense to see an error or warning regarding the RPC connection.
   
   This PR refactors AutoTVM `Runner` and fully decouples `LocalRunner` and 
`RPCRunner`. In summary:
   * Now both `LocalRunner` and `RPCRunner` are derived from `Runner`.
   * Common logic and interfaces, such as "prepare golden reference for correctness checking" and "run N configs in parallel", are lifted to the base `Runner` class.
   * Each derived runner only needs to specify how to acquire a TVM context (local or remote) and how to run one config.
   * No changes to the user interfaces and APIs.
   
   ### Evaluation
   
   Since the user interface and the underlying measurement remain the same, I just tested the first task extracted from ResNet-18 on CPU for evaluation. Both tests use `LocalRunner`.
   
   Without this PR:
   
   ```
   [Task  1/12]  Current/Best:   20.81/ 169.89 GFLOPS | Progress: (252/252) | 
556.76 s Done
   ```
   
   With this PR:
   
   ```
   [Task  1/12]  Current/Best:   17.89/ 150.35 GFLOPS | Progress: (252/252) | 
192.78 s Done
   ```
   
   While I think the performance difference should be due to the measurement 
error, we can clearly see that this PR is capable of reducing the tuning time.
   
   ### Issue
   
   For VTA using `RPCRunner`, it will reprogram the FPGA before every measurement, but I am not sure whether we still need this step for the real local runner. Please help review and provide your suggestions (cc @tmoreau89).
   
   @merrymercy @eqy @kevinthesun please help to review. Thanks.
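   
   A rough sketch of the class layout described above (illustrative names and simplified logic, not the actual AutoTVM code):
   
   ```python
   import tvm

   class Runner:
       # Shared logic: measure a batch of built configs, catching per-config
       # failures so one bad config does not kill the whole tuning job.
       def run(self, measure_inputs, build_results):
           results = []
           for inp, build_res in zip(measure_inputs, build_results):
               try:
                   results.append(self.run_one(inp, build_res))
               except Exception as err:
                   results.append(err)
           return results

       def get_context(self, measure_input):
           raise NotImplementedError

       def run_one(self, measure_input, build_res):
           ctx = self.get_context(measure_input)
           mod = tvm.runtime.load_module(build_res.filename)
           timer = mod.time_evaluator(mod.entry_name, ctx, number=10)
           # Illustrative: real AutoTVM creates random arrays from arg_info.
           args = [tvm.nd.array(a, ctx) for a in build_res.args]
           return timer(*args).mean

   class LocalRunner(Runner):
       def get_context(self, measure_input):
           # Measure directly on the host: no RPC session, no extra binary copy.
           return tvm.context(str(measure_input.target), 0)
   ```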




[GitHub] [incubator-tvm] pasqoc commented on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595457508
 
 
   General question.
   I see the CI Jenkins merge-hook build fails for reasons other than the PR itself (though not right now: I need to change the include order to pass lint after the last commit).
   Is it a requirement to have this build pass? 




[GitHub] [incubator-tvm] zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module to take IRModule

2020-03-05 Thread GitBox
zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module 
to take IRModule
URL: https://github.com/apache/incubator-tvm/pull/4988#issuecomment-595457431
 
 
   Hmm, this actually revealed that the gcn tutorial didn't pass a mod. We need to bind params when adding the function to the mod. Interestingly, it didn't fail in the last two runs before merging. #4994 
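   
   For context, binding params into the function before wrapping it in a module looks roughly like this (a hedged sketch; `bind_params_by_name` is the helper added to build_module around this time):
   
   ```python
   # Sketch: bind weight params into a Relay function, turning the bound
   # variables into constants, before creating the IRModule.
   import numpy as np
   import tvm
   from tvm import relay

   x = relay.var("x", shape=(3,), dtype="float32")
   w = relay.var("w", shape=(3,), dtype="float32")
   func = relay.Function([x, w], x * w)
   params = {"w": tvm.nd.array(np.ones((3,), dtype="float32"))}

   func = relay.build_module.bind_params_by_name(func, params)  # w -> constant
   mod = tvm.IRModule.from_expr(func)
   ```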




[GitHub] [incubator-tvm] zhiics opened a new pull request #4994: [Hotfix] Fix gcn tutorial failure

2020-03-05 Thread GitBox
zhiics opened a new pull request #4994: [Hotfix] Fix gcn tutorial failure
URL: https://github.com/apache/incubator-tvm/pull/4994
 
 
   Fix gcn tutorial fail in the upstream CI.
   
   cc @tqchen 
   




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388575975
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   I really don't mind whether lint stays or not; given your preference, I am going to put it back.
   But there is definitely a bug in either scalafmt or the way sbt calls it.
   I tried to change the scalafmt configuration to fix the indentation, but without success... 




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388572722
 
 

 ##
 File path: vta/tutorials/matrix_multiply.py
 ##
 @@ -52,7 +52,7 @@
 
 # We configure both the bitstream and the runtime system on the Pynq
 # to match the VTA configuration specified by the vta_config.json file.
-if env.TARGET == "pynq":
+if env.TARGET == "pynq" or env.TARGET == "de10nano":
 
 Review comment:
   Totally agree.
   Do you want me to remove the de10nano check at this point?
   I just wanted to document how to invoke a test with the de10nano target.




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388572033
 
 

 ##
 File path: vta/tests/python/de10nano/test_program_rpc.py
 ##
 @@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import sys, os
+import tvm
+from tvm import rpc
+from vta import get_bitstream_path, download_bitstream, program_fpga, 
reconfig_runtime
+
+host = os.environ.get("VTA_PYNQ_RPC_HOST", "de10nano")
 
 Review comment:
   Makes perfect sense, perhaps coupled with your idea of folding FPGA-target-specific info into environment.py.
   Right now, introducing VTA_DE10_RPC_HOST anywhere VTA_PYNQ_RPC_HOST appears, including matrix_multiply.py for instance, would add a lot of boilerplate code.




[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4964: [Torch] Add initial control flow support

2020-03-05 Thread GitBox
alexwong commented on a change in pull request #4964: [Torch] Add initial 
control flow support 
URL: https://github.com/apache/incubator-tvm/pull/4964#discussion_r388557544
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -842,7 +898,13 @@ def _convert_elemwise_input(data, input_type):
 "aten::detach"  : _identity(),
 "aten::upsample_bilinear2d" : _upsample("bilinear"),
 "aten::upsample_nearest2d"  : _upsample("nearest_neighbor"),
-"aten::expand_as"   : _expand_as()
+"aten::expand_as"   : _expand_as(),
+'aten::lt'  : _lt(),
 
 Review comment:
   Nitpick: Can we add aten::le, aten::ge, aten::ne ops?




[GitHub] [incubator-tvm] zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module to take IRModule

2020-03-05 Thread GitBox
zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module 
to take IRModule
URL: https://github.com/apache/incubator-tvm/pull/4988#issuecomment-595436882
 
 
   Okay. I will. Thanks for reminding. 




[GitHub] [incubator-tvm] masahi commented on issue #4992: [Frontend][Torch] Check graph inputs match expected

2020-03-05 Thread GitBox
masahi commented on issue #4992: [Frontend][Torch] Check graph inputs match 
expected
URL: https://github.com/apache/incubator-tvm/pull/4992#issuecomment-595431009
 
 
   Let's wait until the github bug is fixed




[GitHub] [incubator-tvm] tqchen commented on issue #4988: [Refactor][Relay Build] refactor build module to take IRModule

2020-03-05 Thread GitBox
tqchen commented on issue #4988: [Refactor][Relay Build] refactor build module 
to take IRModule
URL: https://github.com/apache/incubator-tvm/pull/4988#issuecomment-595428966
 
 
   @zhiics seems this change breaks the master ci, please look into it




[GitHub] [incubator-tvm] tmoreau89 commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime

2020-03-05 Thread GitBox
tmoreau89 commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-595428050
 
 
   Also @u99127 and @weberlo may be interested in this PR




[GitHub] [incubator-tvm] tmoreau89 commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime

2020-03-05 Thread GitBox
tmoreau89 commented on issue #3934: [Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-595427659
 
 
   Excellent, thanks for adding this to CI so quickly! I was able to reproduce 
the demo by typing in `make demo`; it ran for the most part successfully, but I 
got an illegal instruction error in the end:
   
   ```$ make demo
   python3 build_model.py -o build
   INFO:root:Model file not found. Downloading to 
/Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.params.
   Downloading /Users/moreau/.mxnet/models/mobilenet0.25-9f83e440.zip from 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/models/mobilenet0.25-9f83e440.zip...
   INFO:autotvm:Download pre-tuned parameters package from 
https://raw.githubusercontent.com/uwsampl/tvm-distro/master/tophub/llvm_v0.04.log
   ...100%, 0.02 MB, 121 KB/s, 0 seconds passed
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op sqrt
   INFO:compile_engine:Use implementation injective.cpu for op divide
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op negative
   INFO:compile_engine:Use implementation injective.cpu for op add
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op multiply
   INFO:compile_engine:Use implementation injective.cpu for op squeeze
   INFO:compile_engine:Use implementation injective.cpu for op expand_dims
   INFO:compile_engine:Use implementation injective.cpu for op 

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388535249
 
 

 ##
 File path: vta/tutorials/matrix_multiply.py
 ##
 @@ -52,7 +52,7 @@
 
 # We configure both the bitstream and the runtime system on the Pynq
 # to match the VTA configuration specified by the vta_config.json file.
-if env.TARGET == "pynq":
+if env.TARGET == "pynq" or env.TARGET == "de10nano":
 
 Review comment:
   To keep the code maintainable as we scale the number of supported FPGAs, we should add a function to the environment.py library that checks the target against a list of FPGA targets, used as such: `env.target_is_fpga()` for all of the FPGA variants, and `env.target_is_sim()` for tsim/fsim.
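   
   A small sketch of what those helpers could look like (hypothetical method bodies; the target lists here are assumptions):
   
   ```python
   # Hypothetical additions to vta/python/vta/environment.py (sketch only).
   FPGA_TARGETS = ("pynq", "de10nano", "ultra96")
   SIM_TARGETS = ("sim", "tsim")

   class Environment:
       def __init__(self, target):
           self.TARGET = target

       def target_is_fpga(self):
           return self.TARGET in FPGA_TARGETS

       def target_is_sim(self):
           return self.TARGET in SIM_TARGETS
   ```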




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388533974
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   It is indeed an odd indentation choice, but I would be in favor of enabling lint, even if the output is odd, to have uniformity/consistency across the Chisel codebase. @liangfu @vegaluisjose any input on why the indentation of the `}` looks funky?




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
tmoreau89 commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388532585
 
 

 ##
 File path: vta/tests/python/de10nano/test_program_rpc.py
 ##
 @@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import sys, os
+import tvm
+from tvm import rpc
+from vta import get_bitstream_path, download_bitstream, program_fpga, 
reconfig_runtime
+
+host = os.environ.get("VTA_PYNQ_RPC_HOST", "de10nano")
 
 Review comment:
   I think I've been using this variable to be able to target and program different FPGAs. In that spirit, I would use `VTA_DE10_RPC_HOST` for this test. Your bashrc could contain multiple hosts, including `VTA_PYNQ_RPC_HOST`, `VTA_DE10_RPC_HOST`, `VTA_ULTRA96_RPC_HOST`, etc.
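   
   Under that scheme, the host lookup in this test would become something like (sketch; the variable name is the suggestion above, not an existing convention):
   
   ```python
   import os
   # Per-board RPC host, falling back to the board's default hostname.
   host = os.environ.get("VTA_DE10_RPC_HOST", "de10nano")
   ```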




[GitHub] [incubator-tvm] maheshambule commented on issue #4979: [Relay/TOPI/TF][Op] Add support for randomuniform Op in TVM

2020-03-05 Thread GitBox
maheshambule commented on issue #4979: [Relay/TOPI/TF][Op] Add support for 
randomuniform Op in TVM
URL: https://github.com/apache/incubator-tvm/pull/4979#issuecomment-595393925
 
 
   Thanks @MarisaKirisame. I understand that we want to keep the TVM operator pure while Relay can have side effects. But I am not able to figure out why we want the operator to return the new seed, and how that new seed will get generated.
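   
   One way to picture the pattern being proposed (an illustrative sketch in plain Python/NumPy, not TVM code): a pure random op takes the seed/state as an input and returns the new state alongside the values, so chained calls stay deterministic and side-effect free.
   
   ```python
   import numpy as np

   def random_uniform(state, shape):
       # Deterministic given `state`: same inputs always give the same outputs.
       rng = np.random.default_rng(state)
       values = rng.random(shape)
       new_state = int(rng.integers(0, 2**63 - 1))  # derive the next seed
       return values, new_state

   vals1, s1 = random_uniform(42, (2, 2))
   vals2, s2 = random_uniform(s1, (2, 2))  # thread the state through calls
   ```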




[GitHub] [incubator-tvm] pasqoc edited a comment on issue #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc edited a comment on issue #4986: [VTA][Chisel,de10nano] Chisel fixes and 
de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#issuecomment-595390387
 
 
   Last commit addresses previous comments.
   sbt scalafmt --test is still open since it changes code.
   Please let me know what you think.






[incubator-tvm] branch tmoreau89-reviewer-huajsj created (now 976fccd)

2020-03-05 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch tmoreau89-reviewer-huajsj
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 976fccd  Adding Hua Jiang as reviewer.

This branch includes the following new commits:

 new 976fccd  Adding Hua Jiang as reviewer.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-tvm] 01/01: Adding Hua Jiang as reviewer.

2020-03-05 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch tmoreau89-reviewer-huajsj
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit 976fccd1a7cb9be17c815a9025cd7cf943fec74b
Author: Thierry Moreau 
AuthorDate: Thu Mar 5 09:55:16 2020 -0800

Adding Hua Jiang as reviewer.
---
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 1f7ec33..b225536 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -80,6 +80,7 @@ We do encourage everyone to work anything they are interested 
in.
 - [Nick Hynes](https://github.com/nhynes): @nhynes
 - [Yuwei Hu](https://github.com/Huyuwei): @Huyuwei
 - [Animesh Jain](https://github.com/anijain2305): @anijain2305
+- [Hua Jiang](https://github.com/huajsj): @huajsj
 - [Yizhi Liu](https://github.com/yzhliu) : @yzhliu
 - [Zhixun Tan](https://github.com/phisiart): @phisiart
 - [Zhi Chen](https://github.com/zhiics): @zhiics



[GitHub] [incubator-tvm] tmoreau89 opened a new pull request #4993: [COMMUNITY] @huajsj -> Reviewer

2020-03-05 Thread GitBox
tmoreau89 opened a new pull request #4993: [COMMUNITY] @huajsj -> Reviewer
URL: https://github.com/apache/incubator-tvm/pull/4993
 
 
   Congratulations to @huajsj for joining TVM's list of reviewers.
   




[GitHub] [incubator-tvm] anijain2305 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support

2020-03-05 Thread GitBox
anijain2305 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-595347242
 
 
   Good finding @LiangHao151941 
   
   I don't think we should add a tflite_mode attr to the Pool2D op. It seems ad hoc to me.
   
   I think we can directly implement UPWARD rounding in pooling, **whenever the datatype is integer**. For integer division, it is valid to implement this rounding by default.
   
   If the tests fail for integer pool, we can change those tests as well. I 
added those tests in https://github.com/apache/incubator-tvm/pull/3607. Prior 
to that integer computation in pooling was wrong.
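   
   One way to make the two behaviors concrete: integer average pooling divides each window sum by the element count, and the rounding mode decides what happens to the remainder. Below is a small standalone sketch (the helper name is mine, not a TVM API) of round-to-nearest division with ties going UPWARD, next to plain truncating division:
   
   ```c++
   #include <cassert>
   #include <cstdint>
   
   // Illustrative helper, not a TVM API: round-to-nearest integer division
   // with ties going up (toward +inf), i.e. floor((x + d/2) / d) for d > 0.
   inline int64_t DivideRoundUpward(int64_t x, int64_t d) {
     int64_t q = x + d / 2;
     // C++ '/' truncates toward zero, so emulate floor division explicitly.
     return (q >= 0) ? q / d : -((-q + d - 1) / d);
   }
   
   int main() {
     assert(DivideRoundUpward(3, 2) == 2);    // 1.5 rounds up to 2
     assert(DivideRoundUpward(-3, 2) == -1);  // -1.5 rounds up to -1
     assert(3 / 2 == 1 && -3 / 2 == -1);      // plain '/' truncates instead
     return 0;
   }
   ```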
   
   




[GitHub] [incubator-tvm] inadob commented on issue #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-03-05 Thread GitBox
inadob commented on issue #4805: [Frontend][TFlite] Add parser support for 
relu6, leaky_relu, relu_n1_to_1, log_softmax
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-595344454
 
 
   > Can you show me code example here? I have typically used float numbers 
like 1.0 or 6.0 to work with `a_min` or `a_max`
   
   ```c++
   Expr ClipQnnCanonicalize(const Attrs& attrs, const Array<Expr>& new_args,
                            const Array<tvm::relay::Type>& arg_types) {
     CHECK_EQ(new_args.size(), 7);
     auto& input_tensor = new_args[0];
     auto& input_scale = new_args[1];       // in fp
     auto& input_zero_point = new_args[2];  // in int32
     auto& clip_min = new_args[3];          // value is in fp
     auto& clip_max = new_args[4];          // value is in fp
     auto& output_scale = new_args[5];
     auto& output_zero_point = new_args[6];
   
     // Get the input dtype and shape.
     CHECK_EQ(arg_types.size(), 8);
     auto tensor_type = arg_types[0].as<TensorTypeNode>();
     CHECK(tensor_type != nullptr);
     auto input_dtype = tensor_type->dtype;
     auto input_shape = tensor_type->shape;
   
     // Shift the input by subtracting the input zero_point.
     auto shifted_input = Subtract(Cast(input_tensor, DataType::Int(32)), input_zero_point);
   
     // Do the clipping in int32.
     // auto clipped_tensor = Clip(shifted_input, clip_min, clip_max)
     auto clipped_tensor = Clip(shifted_input, Cast(clip_min, DataType::Float(64)),
                                Cast(clip_max, DataType::Float(64)));
     // Shift the input back by adding the zero_point.
     clipped_tensor = Add(clipped_tensor, input_zero_point);
   
     // Requantize the output if needed.
     auto requantized_output = clipped_tensor;
     if (!IsEqualScalar(input_scale, output_scale) ||
         !IsEqualScalar(input_zero_point, output_zero_point)) {
       requantized_output = Requantize(clipped_tensor, input_shape, input_scale,
                                       input_zero_point, output_scale,
                                       output_zero_point, DataType::Int(32));
     }
   
     // Go back to lower precision.
     auto q_min = GetQmin(input_dtype);
     auto q_max = GetQmax(input_dtype);
     requantized_output = Clip(requantized_output, q_min, q_max);
     return Cast(requantized_output, input_dtype);
   }
   ```
   And I am getting a complaint about a **double dtype**...
   ```
   /workspace/src/relay/qnn/op/clip.cc:61:117: error: cannot convert 'tvm::relay::Expr {aka tvm::RelayExpr}' to 'double' for argument '2' to 'tvm::relay::Expr tvm::relay::Clip(tvm::relay::Expr, double, double)'
     auto clipped_tensor = Clip(shifted_input, Cast(clip_min, DataType::Float(64)), Cast(clip_max, DataType::Float(64)))
   ```
   
   The commented-out line, where I call Clip() directly without casting to float64, didn't work either.
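   
   For what it's worth, the error says Relay's C++ Clip helper takes its bounds as double immediates rather than as Exprs. A rough sketch of one workaround, assuming clip_min/clip_max arrive as scalar float32 constants (the exact accessor spelling may differ between TVM versions):
   
   ```c++
   // Sketch only: pull the scalar bounds out of the constant nodes so the
   // existing Clip(Expr, double a_min, double a_max) signature can be used.
   const auto* min_const = clip_min.as<ConstantNode>();
   const auto* max_const = clip_max.as<ConstantNode>();
   CHECK(min_const != nullptr && max_const != nullptr)
       << "clip bounds are expected to be scalar constants";
   double a_min = static_cast<const float*>(min_const->data->data)[0];
   double a_max = static_cast<const float*>(max_const->data->data)[0];
   auto clipped_tensor = Clip(shifted_input, a_min, a_max);
   ```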




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388434267
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   But unfortunately it does not work; I invite you to try it yourself.
   `sbt scalafmt --test` does change the code.
   In my case it changes 6 Scala files, for instance:
   
   ```diff
   diff --git a/vta/hardware/chisel/src/main/scala/core/LoadUop.scala 
b/vta/hardware/chisel/src/main/scala/core/LoadUop.scala
   index 31e0b56bd..f99ac4948 100644
   --- a/vta/hardware/chisel/src/main/scala/core/LoadUop.scala
   +++ b/vta/hardware/chisel/src/main/scala/core/LoadUop.scala
   @@ -118,12 +118,11 @@ class LoadUop(debug: Boolean = false)(implicit p: 
Parameters) extends Module {
  state := sReadCmd
  xlen := xrem
  xrem := 0.U
   -}
   -.otherwise {
   -  state := sReadCmd
   -  xlen := xmax - 1.U
   -  xrem := xrem - xmax
   -}
   +}.otherwise {
   +state := sReadCmd
   +xlen := xmax - 1.U
   +xrem := xrem - xmax
   +  }
  }
}
  }
   ```




[GitHub] [incubator-tvm] MarisaKirisame commented on issue #4979: [Relay/TOPI/TF][Op] Add support for randomuniform Op in TVM

2020-03-05 Thread GitBox
MarisaKirisame commented on issue #4979: [Relay/TOPI/TF][Op] Add support for 
randomuniform Op in TVM
URL: https://github.com/apache/incubator-tvm/pull/4979#issuecomment-595330595
 
 
   I am proposing the following design:
   the TVM operator randomuniform takes a seed and returns a random uniform tensor paired with the new seed.
   Relay has a global reference holding a seed.
   There is a Relay helper function that reads the seed, calls randomuniform, writes the new seed to the reference, and returns the tensor.
   This way, the TVM level can be kept pure, which is beneficial for optimization, while the Relay level provides an effectful API that is easy to use.
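   
   To illustrate the shape of this design, here is a minimal, library-agnostic sketch of the state-threading pattern (none of these names are TVM or Relay APIs):
   
   ```c++
   #include <cstddef>
   #include <cstdint>
   #include <random>
   #include <utility>
   #include <vector>
   
   // Pure "operator": the same seed always yields the same (tensor, new_seed)
   // pair, so the compiler remains free to reason about it as a pure function.
   std::pair<std::vector<float>, uint64_t> RandomUniformPure(uint64_t seed, std::size_t n) {
     std::mt19937_64 rng(seed);
     std::uniform_real_distribution<float> dist(0.0f, 1.0f);
     std::vector<float> tensor(n);
     for (auto& v : tensor) v = dist(rng);
     return {tensor, rng()};  // one extra draw becomes the new seed
   }
   
   // Stands in for the global Relay reference holding the current seed.
   struct SeedRef { uint64_t seed; };
   
   // Effectful helper: read the seed, call the pure op, write the new seed back.
   std::vector<float> RandomUniform(SeedRef& ref, std::size_t n) {
     auto result = RandomUniformPure(ref.seed, n);
     ref.seed = result.second;
     return result.first;
   }
   ```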




[GitHub] [incubator-tvm] maheshambule commented on issue #4979: [Relay/TOPI/TF][Op] Add support for randomuniform Op in TVM

2020-03-05 Thread GitBox
maheshambule commented on issue #4979: [Relay/TOPI/TF][Op] Add support for 
randomuniform Op in TVM
URL: https://github.com/apache/incubator-tvm/pull/4979#issuecomment-595320437
 
 
   @MarisaKirisame, could you please elaborate more on outputting the new seed? Does it mean that, if no seed is provided, Relay (and not the operator, to keep it pure) should generate a new seed, pass it to the operator, and then store it in a global mutable reference at the Relay level? And if a seed is provided, just pass it to the operator?




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388403925
 
 

 ##
 File path: vta/tests/python/de10nano/test_program_rpc.py
 ##
 @@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import sys, os
+import tvm
+from tvm import rpc
+from vta import get_bitstream_path, download_bitstream, program_fpga, 
reconfig_runtime
+
+host = os.environ.get("VTA_PYNQ_RPC_HOST", "de10nano")
 
 Review comment:
   I was also confused when I first browsed the code.
   I would have expected something like:
   VTA_RPC_HOST = {pynq|de10nano|ultra96|etc}
   VTA_RPC_PORT = 9091
   




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388402428
 
 

 ##
 File path: vta/src/de10nano/de10nano_mgr.h
 ##
 @@ -0,0 +1,554 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ * \file de10nano_mgr.h
+ * \brief DE10-Nano fpga manager.
+ */
+
+#ifndef VTA_DE10NANO_DE10NANO_MGR_H_
+#define VTA_DE10NANO_DE10NANO_MGR_H_
+
+extern "C" {
+  #include 
+  #include 
+  #include 
+  #include 
+  #include 
+  #include 
+  #include 
+  #include 
+  #include 
+}
+
+// Register definition and address map taken from cv_5v4.pdf,
+// Cyclone V Hard Processor System Technical Reference Manual,
+// chapter 5: FPGA Manager.
+struct de10nano_mgr {
 
 Review comment:
   Sure, I don't mind either way.




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388402066
 
 

 ##
 File path: vta/src/de10nano/de10nano_mgr.h
 ##
 @@ -0,0 +1,554 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ *
+ * \file de10nano_mgr.h
+ * \brief DE10-Nano fpga manager.
+ */
+
+#ifndef VTA_DE10NANO_DE10NANO_MGR_H_
+#define VTA_DE10NANO_DE10NANO_MGR_H_
+
+extern "C" {
 
 Review comment:
   This style is somewhat redundant nowadays, as the compiler already knows what to do.
   You are right: this is a C++ header file, and chances are it will not be used in a C-only context.
   I can make it pure C++ if you like.




[GitHub] [incubator-tvm] zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module to take IRModule

2020-03-05 Thread GitBox
zhiics commented on issue #4988: [Refactor][Relay Build] refactor build module 
to take IRModule
URL: https://github.com/apache/incubator-tvm/pull/4988#issuecomment-595313695
 
 
   Thanks @comaniac @tqchen @wweic 




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388400268
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -32,16 +32,19 @@ ifeq (, $(VERILATOR_INC_DIR))
   endif
 endif
 
-CONFIG = DefaultPynqConfig
+CONFIG = DefaultDe10Config
 TOP = VTA
 TOP_TEST = Test
 BUILD_NAME = build
 USE_TRACE = 0
+USE_TRACE_FST = 0
 
 Review comment:
   Makes sense, although the logic is fairly simple and self-explanatory.
   I did not see any comments in the Makefile for any of the configuration variables, so I did not want to start adding them.




[GitHub] [incubator-tvm] pasqoc commented on a change in pull request #4986: [VTA][Chisel, de10nano] Chisel fixes and de10nano support

2020-03-05 Thread GitBox
pasqoc commented on a change in pull request #4986: [VTA][Chisel,de10nano] 
Chisel fixes and de10nano support
URL: https://github.com/apache/incubator-tvm/pull/4986#discussion_r388398027
 
 

 ##
 File path: vta/hardware/chisel/Makefile
 ##
 @@ -109,7 +133,7 @@ else
   lib_path = $(vta_dir)/$(BUILD_NAME)/$(VTA_LIBNAME).so
 endif
 
-default: lint lib
 
 Review comment:
   Yes, this is the right way to go.
   With `--test`, the user is left to make the actual edits.




[GitHub] [incubator-tvm] greysteil commented on issue #4977: [Torch, QNN] Add support for quantized models via QNN

2020-03-05 Thread GitBox
greysteil commented on issue #4977: [Torch, QNN] Add support for quantized 
models via QNN
URL: https://github.com/apache/incubator-tvm/pull/4977#issuecomment-595305194
 
 
   FYI, this issue (the change in commit author) got escalated to me at GitHub. 
We have a bug in our squash and merge logic right now (introduced yesterday) 
which causes the original PR author to be removed from the list of commit 
co-authors in some cases. We're working on a fix now.




[incubator-tvm] branch master updated (fe74b37 -> f63b249)

2020-03-05 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from fe74b37  Conditions updated to cover better user scenarios (#4951)
 add f63b249  refactor build module to take IRModule (#4988)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/transform.h        | 10 +++
 python/tvm/relay/build_module.py     | 58 +---
 src/relay/backend/build_module.cc    | 40 ++---
 src/relay/backend/vm/compiler.cc     |  1 -
 tests/cpp/relay_build_module_test.cc |  4 ++-
 5 files changed, 63 insertions(+), 50 deletions(-)



[GitHub] [incubator-tvm] zhiics merged pull request #4988: [Refactor][Relay Build] refactor build module to take IRModule

2020-03-05 Thread GitBox
zhiics merged pull request #4988: [Refactor][Relay Build] refactor build module 
to take IRModule
URL: https://github.com/apache/incubator-tvm/pull/4988
 
 
   




[GitHub] [incubator-tvm] jjohnson-arm commented on a change in pull request #4992: [Frontend][Torch] Check graph inputs match expected

2020-03-05 Thread GitBox
jjohnson-arm commented on a change in pull request #4992: [Frontend][Torch] 
Check graph inputs match expected
URL: https://github.com/apache/incubator-tvm/pull/4992#discussion_r388323372
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -902,6 +902,21 @@ def _report_missing_conversion(op_names):
     msg = "The following operators are not implemented: {}".format(missing)
     raise NotImplementedError(msg)
 
+def _check_input_names(graph, input_shapes):
+    """ Check that the graph inputs match the expected inputs """
+    # remove `self` at the 0th arg
+    ir_inputs = _get_input_names(graph)[1:]
 
 Review comment:
   Fair enough - though I was just trying to avoid the extra graph.copy().



