junrushao1994 commented on a change in pull request #4275: imp module is
deprecated
URL: https://github.com/apache/incubator-tvm/pull/4275#discussion_r346697415
##
File path: topi/python/topi/cpp/vision/__init__.py
##
@@ -0,0 +1,7 @@
+"""FFI for vision TOPI ops and
junrushao1994 commented on a change in pull request #4275: imp module is
deprecated
URL: https://github.com/apache/incubator-tvm/pull/4275#discussion_r346697376
##
File path: topi/python/topi/cpp/__init__.py
##
@@ -0,0 +1,9 @@
+"""FFI for C++ TOPI ops and schedules"""
t-vi commented on issue #4342: Add workgroup size attribute to AMDGPU functions
in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554248489
I changed the default to 256. This patch is independent of #4305 in terms of
merge order (the decoupling of the device
were commented on issue #4275: imp module is deprecated
URL: https://github.com/apache/incubator-tvm/pull/4275#issuecomment-554248004
@Laurawly @Huyuwei Any suggestions?
This is an automated message from the Apache Git
were commented on issue #4275: imp module is deprecated
URL: https://github.com/apache/incubator-tvm/pull/4275#issuecomment-554247918
In `topi.cpp`, a bunch of dummy modules are created to get rid of a bunch of
modules with only one line.
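For context on the deprecation this PR addresses: Python's `imp` module has been deprecated since Python 3.4 in favor of `importlib`. A minimal sketch of the usual replacement pattern (the module name below is illustrative, not one of the TOPI modules from the PR):

```python
import importlib

# Deprecated pattern being removed:
#   import imp
#   mod = imp.load_module(name, *imp.find_module(name))
# importlib-based replacement:
mod = importlib.import_module("json")  # "json" stands in for a real submodule
print(mod.dumps({"ok": True}))  # → {"ok": true}
```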
t-vi commented on a change in pull request #4342: Add workgroup size attribute
to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346690529
##
File path: src/codegen/llvm/codegen_amdgpu.cc
##
@@ -36,13 +36,39 @@
namespace
tmoreau89 commented on issue #4293: [VTA] Bug fix for padded load with large
inputs
URL: https://github.com/apache/incubator-tvm/pull/4293#issuecomment-554243956
Thank you @liangfu; if you could test the unit test on Pynq that would be
excellent; one known issue is that the padded load
liangfu commented on issue #4293: [VTA] Bug fix for padded load with large
inputs
URL: https://github.com/apache/incubator-tvm/pull/4293#issuecomment-554241928
@vegaluisjose Yes, the current Chisel implementation passes all the unit tests.
@tmoreau89 I did not test this with the HLS version yet.
tmoreau89 commented on issue #4318: [Relay][TOPI]Fix meaning of
conv2d_transpose output_padding parameter
URL: https://github.com/apache/incubator-tvm/pull/4318#issuecomment-554237615
I see; I'm a little confused because the test case with output padding set
to `(0,0)` should be
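For reference on the parameter under discussion: the conventional output-size relation for a transposed convolution is out = (in - 1) * stride - 2 * pad + kernel + output_padding, so an output padding of (0, 0) leaves the base formula unchanged. A small sketch (the function name is hypothetical, not TVM API, and the semantics are the textbook convention rather than anything taken from the PR):

```python
def conv2d_transpose_out_size(in_size, kernel, stride, pad, output_padding=0):
    # Conventional transposed-convolution output-size relation (assumed to
    # match the convention debated in #4318, not copied from the PR).
    return (in_size - 1) * stride - 2 * pad + kernel + output_padding

# 4-wide input, 3-wide kernel, stride 2, pad 1:
print(conv2d_transpose_out_size(4, 3, 2, 1, 0))  # → 7
print(conv2d_transpose_out_size(4, 3, 2, 1, 1))  # → 8
```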
tmoreau89 closed pull request #3881: [CMAKE][VTA] Modularizing Runtime Sources
URL: https://github.com/apache/incubator-tvm/pull/3881
tmoreau89 commented on issue #3881: [CMAKE][VTA] Modularizing Runtime Sources
URL: https://github.com/apache/incubator-tvm/pull/3881#issuecomment-554236128
Closing the PR due to minimal impact on compilation time.
abergeron commented on issue #4318: [Relay][TOPI]Fix meaning of
conv2d_transpose output_padding parameter
URL: https://github.com/apache/incubator-tvm/pull/4318#issuecomment-554228705
It fails with (0, 0) in the CI right now.
It might also be a good idea to test it with non-zero
wweic opened a new pull request #4346: [Runtime] Make ADTObject POD container
type
URL: https://github.com/apache/incubator-tvm/pull/4346
masahi commented on issue #4305: [RUNTIME] Proper Device Attribute Query for
AMD GPU
URL: https://github.com/apache/incubator-tvm/pull/4305#issuecomment-554211746
@petrex sorry I merged #4341 first and there is a conflict. Can you rebase?
tqchen opened a new pull request #4345: [COMMUNITY] Add DISCLAIMER, KEYS for
ASF release
URL: https://github.com/apache/incubator-tvm/pull/4345
cc @yzhliu
masahi merged pull request #4341: [RUNTIME] Add device query for AMD GcnArch
URL: https://github.com/apache/incubator-tvm/pull/4341
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346657735
##
File path: topi/python/topi/transform.py
##
@@ -155,6 +157,78 @@ def strided_slice(a,
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346657674
##
File path: topi/python/topi/transform.py
##
@@ -155,6 +157,78 @@ def strided_slice(a,
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346657709
##
File path: topi/python/topi/transform.py
##
@@ -155,6 +157,78 @@ def strided_slice(a,
junrushao1994 commented on issue #4303: [TOPI][Relay][OP] Add a strided_set
operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#issuecomment-554205813
@yzhliu I think this is something like `a[strides] = b`
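A rough pure-Python sketch of the `a[strides] = b` intuition (1-D only; the `begin`/`end`/`strides` parameter names are assumed from the `strided_slice` signature, not copied from the PR):

```python
def strided_set(a, v, begin, end, strides):
    # Return a copy of `a` with the slice a[begin:end:strides] replaced by `v`,
    # mirroring the in-place form a[begin:end:strides] = v.
    out = list(a)
    out[begin:end:strides] = v
    return out

print(strided_set([0, 0, 0, 0, 0, 0], [1, 2, 3], 0, 6, 2))  # → [1, 0, 2, 0, 3, 0]
```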
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346656799
##
File path: python/tvm/relay/op/_transform.py
##
@@ -304,6 +305,31 @@ def
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346657038
##
File path: python/tvm/relay/op/_transform.py
##
@@ -304,6 +305,31 @@ def
junrushao1994 commented on a change in pull request #4303: [TOPI][Relay][OP]
Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#discussion_r346657023
##
File path: python/tvm/relay/op/_transform.py
##
@@ -304,6 +305,31 @@ def
FrozenGene opened a new pull request #4344: [ThreadPool] Solve thread
transitions issue
URL: https://github.com/apache/incubator-tvm/pull/4344
This PR solves a thread-transition issue.
When we use OpenCV + TVM together, many thread transitions occur, which makes
OpenCV + TVM slow.
tqchen commented on issue #4327: [Relay][Frontend][TF] Fix transpose when axes
is not a param
URL: https://github.com/apache/incubator-tvm/pull/4327#issuecomment-554200646
Thanks @soiferj
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 72821b2 [Contrib] Add MKL DNN option (#4323)
add 1e2c525 [Relay][Frontend][TF] Fix transpose when axes
tqchen merged pull request #4327: [Relay][Frontend][TF] Fix transpose when axes
is not a param
URL: https://github.com/apache/incubator-tvm/pull/4327
tqchen closed pull request #3828: [QUANTIZE] Improve explicitness of rules
during annotation/realization
URL: https://github.com/apache/incubator-tvm/pull/3828
tqchen commented on issue #3966: Const loop partition fix
URL: https://github.com/apache/incubator-tvm/pull/3966#issuecomment-554199981
superseded by #4025
tqchen closed pull request #3658: [Relay][MxNet] Add support for foreach
URL: https://github.com/apache/incubator-tvm/pull/3658
tqchen closed pull request #3966: Const loop partition fix
URL: https://github.com/apache/incubator-tvm/pull/3966
tqchen edited a comment on issue #3966: Const loop partition fix
URL: https://github.com/apache/incubator-tvm/pull/3966#issuecomment-554199981
superseded by #4025, thanks @kimishpatel @yinghai
tqchen commented on issue #3881: [CMAKE][VTA] Modularizing Runtime Sources
URL: https://github.com/apache/incubator-tvm/pull/3881#issuecomment-554199820
ping @tmoreau89
tqchen edited a comment on issue #4323: [Contrib] Add MKL DNN option
URL: https://github.com/apache/incubator-tvm/pull/4323#issuecomment-554199391
Thanks @icemelon9 @TaoLv @soiferj @gasgallo @minminsun !
tqchen commented on issue #4323: [Contrib] Add MKL DNN option
URL: https://github.com/apache/incubator-tvm/pull/4323#issuecomment-554199391
Thanks @icemelon9 @TaoLv @soiferj !
tqchen edited a comment on issue #4323: [Contrib] Add MKL DNN option
URL: https://github.com/apache/incubator-tvm/pull/4323#issuecomment-554199391
Thanks @icemelon9 @TaoLv @soiferj @gasgallo @minminsun @ZhennanQin !
tqchen merged pull request #4333: Deprecate NNVM warning msg
URL: https://github.com/apache/incubator-tvm/pull/4333
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from de1bfa4 Solve custom model of prelu (#4326)
add 2573b3b Deprecate NNVM warning msg (#4333)
tqchen commented on issue #4303: [TOPI][Relay][OP] Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#issuecomment-554199080
cc @jroesch @junrushao1994 can you also take a look?
tqchen merged pull request #4326: [TFLite] Fix Prelu unified shape error
URL: https://github.com/apache/incubator-tvm/pull/4326
tqchen commented on issue #4326: [TFLite] Fix Prelu unified shape error
URL: https://github.com/apache/incubator-tvm/pull/4326#issuecomment-554198952
Thanks @FrozenGene @apivovarov
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 23691b8 Add topi.nn.fifo_buffer to TVM doc (#4343)
add de1bfa4 Solve custom model of prelu (#4326)
tqchen commented on issue #4343: Add topi.nn.fifo_buffer to TOPI API doc
URL: https://github.com/apache/incubator-tvm/pull/4343#issuecomment-554198837
Thanks @hcho3 @zhiics @yongwww @junrushao1994
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 70a3a61 Add support for quant. mul operator in tflite frontend (#4283)
add 23691b8 Add
tqchen merged pull request #4343: Add topi.nn.fifo_buffer to TOPI API doc
URL: https://github.com/apache/incubator-tvm/pull/4343
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 7dca655 [Relay][Pass] Add pass to remove unused functions in relay
module (#4334)
add 70a3a61 Add
tqchen commented on issue #4283: Add support for quant. mul operator in tflite
frontend
URL: https://github.com/apache/incubator-tvm/pull/4283#issuecomment-554198542
Thanks @u99127 @inadob @cchung100m
tqchen commented on a change in pull request #4274: [µTVM] Enable AutoTVM for
ARM STM32F746XX Boards
URL: https://github.com/apache/incubator-tvm/pull/4274#discussion_r346651119
##
File path: src/runtime/micro/micro_session.cc
##
@@ -76,79 +214,122 @@
tqchen commented on issue #4340: [CodeGen] Add build config option
disable_assert to control whether to generate assert
URL: https://github.com/apache/incubator-tvm/pull/4340#issuecomment-554196234
We can do it one step before the final codegen
jwfromm commented on issue #4271: [Relay][Frontend][ONNX] operator support:
DepthToSpace, SpaceToDepth
URL: https://github.com/apache/incubator-tvm/pull/4271#issuecomment-554184646
LGTM! Thanks for putting up with all my feedback. This turned out to be a
great PR. I think if you rebase
jwfromm removed a comment on issue #4271: [Relay][Frontend][ONNX] operator
support: DepthToSpace, SpaceToDepth
URL: https://github.com/apache/incubator-tvm/pull/4271#issuecomment-554184231
@cchung100m, now that PR #4313 is merged we should be able to get this PR
merged. Can you take a
jwfromm commented on issue #4271: [Relay][Frontend][ONNX] operator support:
DepthToSpace, SpaceToDepth
URL: https://github.com/apache/incubator-tvm/pull/4271#issuecomment-554184231
@cchung100m, now that PR #4313 is merged we should be able to get this PR
merged. Can you take a look at my
FrozenGene commented on issue #4340: [CodeGen] Add build config option
disable_assert to control whether to generate assert
URL: https://github.com/apache/incubator-tvm/pull/4340#issuecomment-554183342
> Actually, it would be great if we can do it at an earlier point, and add a
pass to
vinx13 commented on issue #4295: [Relay][Quantize] Integrate data-aware
calibration into quantization
URL: https://github.com/apache/incubator-tvm/pull/4295#issuecomment-554178653
It's difficult to do unit tests. The refactor is covered by nightly tests. I
plan to add more nightly tests
tqchen commented on issue #4335: [Relay][WIP] ConvertLayout pass.
URL: https://github.com/apache/incubator-tvm/pull/4335#issuecomment-554175946
I feel live-patching the OpAttr is not a pattern we are looking for here.
It would be great if we can factor out some common logic between
tqchen commented on issue #4259: [DEV][DRAFT] TVM v0.6 Release candidate
URL: https://github.com/apache/incubator-tvm/issues/4259#issuecomment-554174983
@ZihengJiang can you help look into updating the benchmark script?
zhiics edited a comment on issue #4334: [Relay][Pass] Add pass to remove unused
functions in relay module
URL: https://github.com/apache/incubator-tvm/pull/4334#issuecomment-554173088
Thanks @wweic @icemelon9 @MarisaKirisame
zhiics commented on issue #4334: [Relay][Pass] Add pass to remove unused
functions in relay module
URL: https://github.com/apache/incubator-tvm/pull/4334#issuecomment-554173088
Thanks @wweic @jroesch @MarisaKirisame
jackwish commented on issue #4338: AutoTVM: selecting tuning templates when
extracting task
URL: https://github.com/apache/incubator-tvm/pull/4338#issuecomment-554173072
Thank you for the review @comaniac @tmoreau89 @vinx13 , really good points
to have a dict holding the op to key
zhiics pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 5b9f459 Enable hipModuleGetGlobal() (#4321)
add 7dca655 [Relay][Pass] Add pass to remove unused
zhiics merged pull request #4334: [Relay][Pass] Add pass to remove unused
functions in relay module
URL: https://github.com/apache/incubator-tvm/pull/4334
jackwish commented on a change in pull request #4338: AutoTVM: selecting tuning
templates when extracting task
URL: https://github.com/apache/incubator-tvm/pull/4338#discussion_r346632104
##
File path: python/tvm/autotvm/task/relay_integration.py
##
@@ -98,6 +103,8 @@ def
TaoLv commented on a change in pull request #4323: [Contrib] Add MKL DNN option
URL: https://github.com/apache/incubator-tvm/pull/4323#discussion_r346631647
##
File path: cmake/modules/contrib/BLAS.cmake
##
@@ -55,3 +55,11 @@ elseif(USE_BLAS STREQUAL "none")
else()
optima2005 commented on issue #4300: [Relay][Frontend][Tensorflow]Add
conv2d_transpose
URL: https://github.com/apache/incubator-tvm/pull/4300#issuecomment-554170862
tf1.4 doesn't have this error while tf1.3 does.
ZihengJiang edited a comment on issue #4332: [RFC] Support for Sparse
Computation
URL: https://github.com/apache/incubator-tvm/issues/4332#issuecomment-553681206
Welcome comment and discussion! @cylinbao @yuluny2 @tmoreau89 @Huyuwei
@yzh119 @tqchen
szha commented on issue #4332: [RFC] Support for Sparse Computation
URL: https://github.com/apache/incubator-tvm/issues/4332#issuecomment-554159846
cc @eric-haibin-lin @sxjscience
anijain2305 commented on issue #4292: Retain qnn input kernel scales
URL: https://github.com/apache/incubator-tvm/pull/4292#issuecomment-554159517
https://github.com/apache/incubator-tvm/blob/5b9f459d638d1cf6bd820f3bdc58d7e5632d7ed7/python/tvm/relay/qnn/op/legalizations.py#L89-L90
cylinbao commented on issue #4332: [RFC] Support for Sparse Computation
URL: https://github.com/apache/incubator-tvm/issues/4332#issuecomment-554158427
Thanks @ZihengJiang for bringing up the RFC.
One question. It seems like, in this RFC, the axis to store the data values
is always the
u99127 commented on issue #4292: Retain qnn input kernel scales
URL: https://github.com/apache/incubator-tvm/pull/4292#issuecomment-554157700
Ok, it looks like there is some more work to be done here because of other
bits that have landed.
- Fix legalizations.py to consider qnn.conv2d
masahi commented on a change in pull request #4342: Add workgroup size
attribute to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346617474
##
File path: src/codegen/llvm/codegen_amdgpu.cc
##
@@ -36,13 +36,39 @@
namespace
masahi merged pull request #4321: [RUNTIME] Enable hipModuleGetGlobal() in ROCm
module
URL: https://github.com/apache/incubator-tvm/pull/4321
masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from d9b8a6c [Build][Windows] Fix Windows build by including cctype (#4319)
add 5b9f459 Enable
cchung100m commented on a change in pull request #4271: [Relay][Frontend][ONNX]
operator support: DepthToSpace, SpaceToDepth
URL: https://github.com/apache/incubator-tvm/pull/4271#discussion_r346612067
##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -77,22
u99127 commented on issue #4320: Fix TFLite RESHAPE assert
URL: https://github.com/apache/incubator-tvm/pull/4320#issuecomment-554135038
I've been noticing differences in the tflite converter graph between 1.13
and 1.14 especially with split and unpack - have either of you guys played with
t-vi commented on a change in pull request #4342: Add workgroup size attribute
to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346595415
##
File path: src/codegen/llvm/codegen_amdgpu.cc
##
@@ -36,13 +36,39 @@
namespace
liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 5d66e7a [CI] Set workspace to be per executor (#4336)
add d9b8a6c [Build][Windows] Fix Windows build
yzhliu commented on issue #4319: [Build][Windows] Fix Windows build by
including cctype
URL: https://github.com/apache/incubator-tvm/pull/4319#issuecomment-554128110
Thanks @soiferj @jmorrill @tqchen
yzhliu merged pull request #4319: [Build][Windows] Fix Windows build by
including cctype
URL: https://github.com/apache/incubator-tvm/pull/4319
zhiics commented on issue #4179: Add More Shape Functions
URL: https://github.com/apache/incubator-tvm/pull/4179#issuecomment-554127427
I would prefer register_rt_shape_func or just keep register_shape_func. Is
it bad to think it is similar to type_rel?
petrex commented on a change in pull request #4342: Add workgroup size
attribute to AMDGPU functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#discussion_r346592319
##
File path: src/codegen/llvm/codegen_amdgpu.cc
##
@@ -36,13 +36,39 @@
namespace
icemelon9 commented on issue #4179: Add More Shape Functions
URL: https://github.com/apache/incubator-tvm/pull/4179#issuecomment-554123136
JIT is not accurate here, since these shape functions are compiled at
compile time instead of runtime. We can probably use
`register_rt_shape_func`
apivovarov commented on issue #4326: [TFLite] Fix Prelu unified shape error
URL: https://github.com/apache/incubator-tvm/pull/4326#issuecomment-554122057
+1
ZihengJiang commented on issue #4243: Fix broken loop partitioning due to
recent changes.
URL: https://github.com/apache/incubator-tvm/pull/4243#issuecomment-554121953
Approved. Thanks for contributing! @kimishpatel
yzhliu commented on issue #4315: Add check to ensure input file was
successfully opened in NNVM deploy…
URL: https://github.com/apache/incubator-tvm/pull/4315#issuecomment-554115882
please rebase to retrigger the ci
masahi commented on issue #4342: Add workgroup size attribute to AMDGPU
functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554114635
Great! Thanks.
petrex commented on issue #4342: Add workgroup size attribute to AMDGPU
functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554114502
Sure, let's not use a hardcoded value. BTW, you are testing on gfx900, right
(or another arch)?
t-vi commented on issue #4342: Add workgroup size attribute to AMDGPU functions
in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554111798
@masahi Yes, indeed, it fixes this; in fact, I'm doing this with
@mvermeulen's tests in mind.
@petrex So this
yzhliu commented on issue #4303: [TOPI][Relay][OP] Add a strided_set operation.
URL: https://github.com/apache/incubator-tvm/pull/4303#issuecomment-554109627
is it used in dl frameworks?
masahi edited a comment on issue #4342: Add workgroup size attribute to AMDGPU
functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554107796
does this solve the issue of INVALID_ISA errors [discussed in the forum
comaniac commented on issue #4280: [TVM][RUNTIME] A minimum example to generate
external library wrappers for DSOModule
URL: https://github.com/apache/incubator-tvm/pull/4280#issuecomment-554107934
cc @tqchen
tmoreau89 commented on issue #4318: [Relay][TOPI]Fix meaning of
conv2d_transpose output_padding parameter
URL: https://github.com/apache/incubator-tvm/pull/4318#issuecomment-554107152
@abergeron is the issue right now that the VTA test will fail when you set
output padding to `(1,1)`? Or does
petrex commented on issue #4342: Add workgroup size attribute to AMDGPU
functions in codegen
URL: https://github.com/apache/incubator-tvm/pull/4342#issuecomment-554106244
Thanks @t-vi .
@masahi This PR utilizes the device query for `kMaxThreadsPerBlock`; in that
case we might need to
hcho3 opened a new pull request #4343: Add topi.nn.fifo_buffer to TOPI API doc
URL: https://github.com/apache/incubator-tvm/pull/4343
Follow-up to #4039. Add `topi.nn.fifo_buffer` to the TOPI API doc.
apivovarov edited a comment on issue #4320: Fix TFLite RESHAPE assert
URL: https://github.com/apache/incubator-tvm/pull/4320#issuecomment-554081798
TVM code does not use the second input tensor in RESHAPE op. From the
beginning the assert should just check that input tensors array length
wweic commented on a change in pull request #4334: [Relay][Pass] Add pass to
remove unused functions in relay module
URL: https://github.com/apache/incubator-tvm/pull/4334#discussion_r346548590
##
File path: src/relay/backend/vm/compiler.cc
##
@@ -863,6 +864,8 @@ void
yzhliu commented on a change in pull request #4333: Deprecate NNVM warning msg
URL: https://github.com/apache/incubator-tvm/pull/4333#discussion_r346547323
##
File path: nnvm/python/nnvm/__init__.py
##
@@ -10,3 +10,4 @@
from . import frontend
__version__ =