Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
echuraev commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1354122827 ## src/runtime/memory/memory_manager.cc: ## @@ -128,6 +128,7 @@ Allocator* MemoryManager::GetOrCreateAllocator(Device dev, AllocatorType type) { // Look for any

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
echuraev commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1354122128 ## apps/android_deploy/app/src/main/jni/tvm_runtime.h: ## @@ -49,6 +49,5 @@ #include "../src/runtime/opencl/opencl_device_api.cc" #include

[tvm] branch nightly updated (899435956a -> eb2a4bc8e2)

2023-10-10 Thread github-bot
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch nightly in repository https://gitbox.apache.org/repos/asf/tvm.git from 899435956a [Relay][Bugfix] fix axis parsing of repeat converter in the MXNet frontend (#15891) add a79f632333

Re: [PR] [TFLite][Frontend] Support quantized ELU [tvm]

2023-10-10 Thread via GitHub
tlopex commented on PR #15821: URL: https://github.com/apache/tvm/pull/15821#issuecomment-1756797590 cc @leandron @lhutton1

[tvm] branch unity updated: [Unity] [Bugfix] Fix MaxPool TypeError in ONNX frontend (#15908)

2023-10-10 Thread syfeng
This is an automated email from the ASF dual-hosted git repository. syfeng pushed a commit to branch unity in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/unity by this push: new 67d61935df [Unity] [Bugfix] Fix MaxPool TypeError in

Re: [PR] [Unity] [Bugfix] Fix MaxPool TypeError in ONNX frontend [tvm]

2023-10-10 Thread via GitHub
Hzfengsy merged PR #15908: URL: https://github.com/apache/tvm/pull/15908

Re: [PR] [Unity] [Bugfix] Fix MaxPool TypeError in ONNX frontend [tvm]

2023-10-10 Thread via GitHub
Thrsu commented on PR #15908: URL: https://github.com/apache/tvm/pull/15908#issuecomment-1756650178 Please review and consider merging this PR. Thank you! cc @Hzfengsy @leandron

[PR] [TIR] Fix offset_factor in cuda tensor core intrins [tvm]

2023-10-10 Thread via GitHub
vinx13 opened a new pull request, #15913: URL: https://github.com/apache/tvm/pull/15913 This PR updates the offset factor of the CUDA tensor core intrinsics: the offset factor is now always the leading dimension of the fragment size. It also fixes some failing test cases previously not caught by CI as they
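
Since the message preview is cut off, below is a minimal, illustrative TVMScript sketch (not the intrinsics changed in #15913) showing where `offset_factor` appears in a tensor-intrin buffer declaration; the 16x16 fragment shape, dtype, and scopes are assumptions chosen for illustration.

```python
# Illustrative sketch only, not the intrinsics changed in #15913.
# offset_factor constrains a matched buffer's elem_offset to be a
# multiple of the given value; here it is set to the leading dimension
# (16) of an assumed 16x16 fragment.
from tvm.script import tir as T


@T.prim_func
def wmma_load_a_desc(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (16, 16), "float16", align=64, offset_factor=16, scope="shared")
    C = T.match_buffer(c, (16, 16), "float16", align=64, offset_factor=16, scope="wmma.matrix_a")
    with T.block("root"):
        T.reads(A[0:16, 0:16])
        T.writes(C[0:16, 0:16])
        for i, j in T.grid(16, 16):
            with T.block("load"):
                vi, vj = T.axis.remap("SS", [i, j])
                C[vi, vj] = A[vi, vj]


print(wmma_load_a_desc.script())
```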

[PR] [VM] modify the data pointer of tensor to point to the buffer [tvm]

2023-10-10 Thread via GitHub
yongwww opened a new pull request, #15912: URL: https://github.com/apache/tvm/pull/15912 This change is ported from the unity branch; the KV cache will need it.

[I] [Bug] [TensorIR] Wrong results with reverse_compute_at [tvm]

2023-10-10 Thread via GitHub
TH3CHARLie opened a new issue, #15911: URL: https://github.com/apache/tvm/issues/15911 ### Expected behavior A program scheduled with `reverse_compute_at` should produce the same answer as before the transformation. ### Actual behavior Wrong result; it seems like some part of the array is not computed
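
The reproducer itself is not shown in the preview. For readers unfamiliar with the primitive, here is a minimal, self-contained usage sketch of `reverse_compute_at` on a toy workload (assumed shapes and ops, unrelated to the failing program in #15911), with the semantics-preservation check the report relies on.

```python
# Minimal usage sketch of reverse_compute_at on a toy workload; this is
# NOT the reproducer from issue #15911, just an illustration of the
# primitive and of the "same answer" property the report expects.
import numpy as np
import tvm
from tvm import te

A = te.placeholder((128, 128), name="A")
B = te.compute((128, 128), lambda i, j: A[i, j] + 1.0, name="B")
C = te.compute((128, 128), lambda i, j: B[i, j] * 2.0, name="C")

sch = tvm.tir.Schedule(te.create_prim_func([A, C]))
outer, _ = sch.get_loops(sch.get_block("B"))
# Move the consumer block "C" under the outer loop of its producer "B".
sch.reverse_compute_at(sch.get_block("C"), outer)

rt_mod = tvm.build(sch.mod, target="llvm")
a = tvm.nd.array(np.random.rand(128, 128).astype("float32"))
c = tvm.nd.array(np.zeros((128, 128), dtype="float32"))
rt_mod["main"](a, c)
# The scheduled program must match the unscheduled math (up to fp error).
np.testing.assert_allclose(c.numpy(), (a.numpy() + 1.0) * 2.0, rtol=1e-5)
```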

[PR] [Unity] Paged KV Cache as LM Support [tvm]

2023-10-10 Thread via GitHub
MasterJH5574 opened a new pull request, #15910: URL: https://github.com/apache/tvm/pull/15910 This PR introduces the PagedKVCache object to `lm_support.cc` for KV cache value management in batching settings. One test file is included. Note that this file does not contain the test
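
Since the description is truncated, here is a toy, pure-Python illustration of the general paged-KV-cache idea (fixed-size pages indexed through a per-sequence page table). It is not the PagedKVCache API this PR adds to `lm_support.cc`; the class name, page size, and methods are arbitrary assumptions for the concept only.

```python
# Toy concept sketch, NOT the PagedKVCache added in #15910. K/V entries
# for each sequence live in fixed-size pages, so many sequences in a
# batch can grow independently without large contiguous reallocations.
from collections import defaultdict

PAGE_SIZE = 16  # tokens per page (hypothetical)


class ToyPagedKVCache:
    def __init__(self):
        self.pages = []                      # global pool of pages
        self.page_table = defaultdict(list)  # seq_id -> list of page indices
        self.seq_len = defaultdict(int)      # seq_id -> number of cached tokens

    def append(self, seq_id, kv_entry):
        """Append one token's K/V entry, allocating a new page when needed."""
        pos = self.seq_len[seq_id]
        if pos % PAGE_SIZE == 0:             # current page full (or none yet)
            self.pages.append([None] * PAGE_SIZE)
            self.page_table[seq_id].append(len(self.pages) - 1)
        page_idx = self.page_table[seq_id][pos // PAGE_SIZE]
        self.pages[page_idx][pos % PAGE_SIZE] = kv_entry
        self.seq_len[seq_id] = pos + 1

    def get(self, seq_id, pos):
        page_idx = self.page_table[seq_id][pos // PAGE_SIZE]
        return self.pages[page_idx][pos % PAGE_SIZE]


cache = ToyPagedKVCache()
for t in range(20):
    cache.append(seq_id=0, kv_entry=("k%d" % t, "v%d" % t))
assert cache.get(0, 17) == ("k17", "v17")
```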

Re: [PR] [TOPI][ADRENO] Add conv2d transpose nchw texture schedule [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on PR #15786: URL: https://github.com/apache/tvm/pull/15786#issuecomment-1755917006 Overall LGTM. A few comments above.

Re: [PR] [TOPI][ADRENO] Add conv2d transpose nchw texture schedule [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15786: URL: https://github.com/apache/tvm/pull/15786#discussion_r1353039853 ## tests/python/relay/opencl_texture/test_conv2d_transpose_nchw_texture.py: ## @@ -0,0 +1,119 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or

Re: [PR] [TOPI][ADRENO] Add conv2d transpose nchw texture schedule [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15786: URL: https://github.com/apache/tvm/pull/15786#discussion_r1353034383 ## tests/python/relay/opencl_texture/test_conv2d_transpose_nchw_texture.py: ## @@ -0,0 +1,119 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or

Re: [PR] [TOPI][ADRENO] Add conv2d transpose nchw texture schedule [tvm]

2023-10-10 Thread via GitHub
krishnaraj36 commented on PR #15786: URL: https://github.com/apache/tvm/pull/15786#issuecomment-1755879197 @srkreddy1238 : Please review

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-10 Thread via GitHub
kparzysz-quic commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1755730601 > Is there any technical reason blocking us from extending DLDataType to have a `is_scalable` vector field, allowing us to maintain the meaning of the lanes field to represent the

Re: [PR] [TFLite][Frontend] Fix test failures caused by div-by-zero [tvm]

2023-10-10 Thread via GitHub
Aleksei-grovety commented on code in PR #15844: URL: https://github.com/apache/tvm/pull/15844#discussion_r1352800533 ## tests/python/frontend/tflite/test_forward.py: ## @@ -2480,6 +2481,16 @@ def __test_elemwise(in_data): inq0_min, inq0_max = (out_min, out_max)
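
As context for the PR's title, here is a hedged sketch (not the actual diff in #15844) of one common way an elementwise-divide test keeps randomly generated divisor data away from zero; the tensor shape and the 0.1 threshold are assumptions.

```python
# Hedged sketch only -- not the change in #15844. Keep randomly generated
# divisor data away from zero so a quantized divide test cannot hit
# div-by-zero or an ill-conditioned reference result.
import numpy as np

rng = np.random.default_rng(0)
numerator = rng.uniform(-1.0, 1.0, size=(1, 8, 8, 3)).astype("float32")
denominator = rng.uniform(-1.0, 1.0, size=(1, 8, 8, 3)).astype("float32")

# Push every denominator value inside (-0.1, 0.1) out to +/-0.1.
near_zero = np.abs(denominator) < 0.1
denominator[near_zero] = np.where(denominator[near_zero] < 0, -0.1, 0.1)

assert np.all(np.abs(denominator) >= 0.1)
reference = numerator / denominator
```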

Re: [PR] [IR] Make sure Range types match when binding to Var [tvm]

2023-10-10 Thread via GitHub
kparzysz-quic commented on PR #15909: URL: https://github.com/apache/tvm/pull/15909#issuecomment-1755694020 This is a replacement PR for https://github.com/apache/tvm/pull/15909.

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-10 Thread via GitHub
neildhickey commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1755692891 > Another idea to handle this would be to add a new field to DLDataType, e.g. bool is_scalable, but I'm not sure how feasible changing that standard is. I feel extending
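
For readers following the RFC thread: the snippet below only shows how a fixed-width vector dtype is encoded today through the `lanes` field of `DLDataType` (exposed in Python as `tvm.runtime.DataType`). The `is_scalable` field being discussed is a proposal and does not exist in this API.

```python
# Current behavior only: a fixed vector length is baked into the lanes
# field. The `is_scalable` flag discussed in RFC #104 is a proposal and
# is NOT part of this API.
from tvm.runtime import DataType

scalar = DataType("float32")
vector = DataType("float32x4")

print(scalar.lanes)  # 1
print(vector.lanes)  # 4 -- a fixed, compile-time vector length
```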

Re: [I] [Bug] [Unity] TypeError when converting TorchFX layer_norm model to TVM using from_fx [tvm]

2023-10-10 Thread via GitHub
Thrsu closed issue #15892: [Bug] [Unity] TypeError when converting TorchFX layer_norm model to TVM using from_fx URL: https://github.com/apache/tvm/issues/15892

Re: [PR] [TIR] Autocorrect Range min and extent types when possible [tvm]

2023-10-10 Thread via GitHub
kparzysz-quic closed pull request #15872: [TIR] Autocorrect Range min and extent types when possible URL: https://github.com/apache/tvm/pull/15872

Re: [PR] [TIR] Autocorrect Range min and extent types when possible [tvm]

2023-10-10 Thread via GitHub
kparzysz-quic commented on PR #15872: URL: https://github.com/apache/tvm/pull/15872#issuecomment-1755669269 This cannot be done correctly in `Range` alone. This is because there are cases like `Bind(var, Range(a, b))`. If `a` and `b` are both immediates of different types, then there is
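
A small sketch of the ambiguity being described (assumed values, not code from #15872 or #15909): with an `int32` min and an `int64` extent, the "correct" range dtype depends entirely on the variable the range will be bound to.

```python
# Illustration of the ambiguity only: with immediates of different
# integer types there are two equally valid "corrections", and choosing
# one requires knowing the dtype of the Var the Range is bound to.
import tvm
from tvm import tir

min_i32 = tir.IntImm("int32", 0)
extent_i64 = tir.IntImm("int64", 16)

# If the Var is int32, the extent must be narrowed ...
range_for_i32_var = tvm.ir.Range.from_min_extent(min_i32, tir.Cast("int32", extent_i64))
# ... but if the Var is int64, the min must be widened instead.
range_for_i64_var = tvm.ir.Range.from_min_extent(tir.Cast("int64", min_i32), extent_i64)

print(range_for_i32_var, range_for_i64_var)
```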

Re: [PR] [TFLite][Frontend] Fix test failures caused by div-by-zero [tvm]

2023-10-10 Thread via GitHub
p3achyjr commented on PR #15844: URL: https://github.com/apache/tvm/pull/15844#issuecomment-1755646894 @Lunderberg thanks for getting back!!

[tvm] branch main updated: [TFLite][Frontend] Fix test failures caused by div-by-zero (#15844)

2023-10-10 Thread lunderberg
This is an automated email from the ASF dual-hosted git repository. lunderberg pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new eb2a4bc8e2 [TFLite][Frontend] Fix test failures

Re: [PR] [TFLite][Frontend] Fix test failures caused by div-by-zero [tvm]

2023-10-10 Thread via GitHub
Lunderberg merged PR #15844: URL: https://github.com/apache/tvm/pull/15844

Re: [PR] [TFLite][Frontend] Fix test failures caused by div-by-zero [tvm]

2023-10-10 Thread via GitHub
Lunderberg commented on PR #15844: URL: https://github.com/apache/tvm/pull/15844#issuecomment-1755560175 @p3achyjr Thank you for the reminder, and it would still be good to get this PR in, even if re-enabling the flaky `floor_mod` test is saved for a follow-up PR.

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-10 Thread via GitHub
kparzysz-quic commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1755456586 I guess we could pass an argument to the vectorizer indicating whether to generate SVE-friendly code. If this is limited to emitting additional TIR builtins, then I'm ok with that. I just

[PR] [Unity] [Bugfix] Fix MaxPool TypeError in ONNX frontend [tvm]

2023-10-10 Thread via GitHub
Thrsu opened a new pull request, #15908: URL: https://github.com/apache/tvm/pull/15908 As described in the ONNX documentation [here](https://onnx.ai/onnx/operators/onnx__MaxPool.html), the `strides` parameter of MaxPool is an optional attribute with [1, 1] as the default. This PR
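
The PR body is truncated, so here is a hedged sketch (not the actual frontend diff) of the usual way an optional ONNX attribute with a default is handled during conversion; the helper name `convert_maxpool_attrs` and the plain-dict input are hypothetical.

```python
# Hedged sketch only -- not the frontend change in #15908. Fall back to
# [1, 1] when the optional `strides` attribute is absent instead of
# indexing a missing key (which raises TypeError/KeyError).
def convert_maxpool_attrs(onnx_attrs, num_spatial_dims=2):
    """onnx_attrs is a plain dict of already-parsed ONNX node attributes."""
    kernel_shape = onnx_attrs["kernel_shape"]                       # required by the op
    strides = onnx_attrs.get("strides", [1] * num_spatial_dims)     # optional, defaults to 1s
    pads = onnx_attrs.get("pads", [0] * (2 * num_spatial_dims))     # optional, defaults to 0s
    return {"pool_size": kernel_shape, "strides": strides, "padding": pads}


# Example: a MaxPool node that omits `strides` entirely.
print(convert_maxpool_attrs({"kernel_shape": [3, 3]}))
# {'pool_size': [3, 3], 'strides': [1, 1], 'padding': [0, 0, 0, 0]}
```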

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1352506642 ## src/runtime/opencl/opencl_device_api.cc: ## @@ -224,8 +241,57 @@ void* OpenCLWorkspace::CreateHostPtrIfEnabled(cl::BufferDescriptor* desc, Device void*

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1352416290 ## src/runtime/opencl/opencl_device_api.cc: ## @@ -269,36 +430,32 @@ void OpenCLWorkspace::FreeDataSpace(Device dev, void* ptr) {

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1352403642 ## apps/android_deploy/app/src/main/jni/tvm_runtime.h: ## @@ -49,6 +49,5 @@ #include "../src/runtime/opencl/opencl_device_api.cc" #include

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1352374602 ## src/runtime/memory/memory_manager.cc: ## @@ -128,6 +128,7 @@ Allocator* MemoryManager::GetOrCreateAllocator(Device dev, AllocatorType type) { // Look for

[I] [Bug] inconsistent outputs produced by the same model [tvm]

2023-10-10 Thread via GitHub
ntcmp2u opened a new issue, #15907: URL: https://github.com/apache/tvm/issues/15907 I use TVM to optimize an ONNX model, but the output of the optimized model is inconsistent. The PoC code is as follows: ```python import tvm import onnx from tvm import relay import
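
The PoC in the archived message is cut off after its imports. Below is a generic, hedged sketch (not the reporter's script) of how such an inconsistency is usually demonstrated: run the model through onnxruntime and through relay-compiled TVM, then compare; the file name "model.onnx", the input name, and the shape are hypothetical.

```python
# Generic comparison sketch, NOT the reproducer from issue #15907.
import numpy as np
import onnx
import onnxruntime as ort
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = onnx.load("model.onnx")                                        # hypothetical file
inputs = {"input": np.random.rand(1, 3, 224, 224).astype("float32")}   # hypothetical I/O

# Reference result from onnxruntime.
sess = ort.InferenceSession(model.SerializeToString(), providers=["CPUExecutionProvider"])
ref = sess.run(None, inputs)[0]

# TVM result via the Relay frontend.
mod, params = relay.frontend.from_onnx(model, {k: v.shape for k, v in inputs.items()})
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))
rt.set_input(**inputs)
rt.run()
out = rt.get_output(0).numpy()

# An inconsistency report typically shows this assertion failing.
np.testing.assert_allclose(out, ref, rtol=1e-4, atol=1e-4)
```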

[tvm] branch unity updated: [Unity] [Bugfix] Fix TypeError in TVM PyTorch frontend for LayerNorm operator (#15902)

2023-10-10 Thread syfeng
This is an automated email from the ASF dual-hosted git repository. syfeng pushed a commit to branch unity in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/unity by this push: new b1380059f0 [Unity] [Bugfix] Fix TypeError in TVM

Re: [PR] [Unity] [Bugfix] Fix TypeError in TVM PyTorch frontend for LayerNorm operator [tvm]

2023-10-10 Thread via GitHub
Hzfengsy merged PR #15902: URL: https://github.com/apache/tvm/pull/15902

Re: [I] [Bug] [Relay][MXNet] wrong conversion logic in Repeat operator about axis [tvm]

2023-10-10 Thread via GitHub
jikechao closed issue #15890: [Bug] [Relay][MXNet] wrong conversion logic in Repeat operator about axis URL: https://github.com/apache/tvm/issues/15890

Re: [I] [Bug] [Relay][MXNet] wrong conversion logic in Repeat operator about axis [tvm]

2023-10-10 Thread via GitHub
jikechao commented on issue #15890: URL: https://github.com/apache/tvm/issues/15890#issuecomment-1754916457 This bug has been fixed by #15891

Re: [I] [Bug] [Relay][MXNet] wrong conversion logic in Repeat operator about axis [tvm]

2023-10-10 Thread via GitHub
jikechao commented on issue #15890: URL: https://github.com/apache/tvm/issues/15890#issuecomment-1754913364 This bug has been fixed by #15891

Re: [PR] [Unity] [Bugfix] Fix TypeError in TVM PyTorch frontend for LayerNorm operator [tvm]

2023-10-10 Thread via GitHub
jikechao commented on PR #15902: URL: https://github.com/apache/tvm/pull/15902#issuecomment-1754903214 @Hzfengsy @echuraev @vvchernov. Could you help me review it again?

Re: [PR] [OpenCL][Texture] Improved texture memory planning [tvm]

2023-10-10 Thread via GitHub
echuraev commented on code in PR #15058: URL: https://github.com/apache/tvm/pull/15058#discussion_r1349867420 ## apps/android_deploy/app/src/main/jni/tvm_runtime.h: ## @@ -49,6 +49,5 @@ #include "../src/runtime/opencl/opencl_device_api.cc" #include

[tvm] branch main updated: [CI][ADRENO] Few updates to Adreno docker setup (#15897)

2023-10-10 Thread srk
This is an automated email from the ASF dual-hosted git repository. srk pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new a79f632333 [CI][ADRENO] Few updates to Adreno docker

Re: [PR] [CI][ADRENO] Few updates to Adreno docker setup [tvm]

2023-10-10 Thread via GitHub
srkreddy1238 merged PR #15897: URL: https://github.com/apache/tvm/pull/15897