[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-18 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-729671304 > @tqchen : Tristan is helping here with his valuable review efforts, but I think we need a third opinion (possibly an official reviewer or committer) to help

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-16 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-728244556 > But maybe someone else can chime in. Thanks @tkonolige for your feedback. I believe the performance stats make a clear case for a new Op here.

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-14 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-727237614 Gentle ping @tkonolige! I am not sure which other official TVM reviewers or committers are interested in sparse. If you are aware of anyone, please feel free to

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-13 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-726801407 > Just to check, you're only transposing the dense matrix? Also, what is the density of the sparse matrix? > > I'm curious, could you do a benchmark with a
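A rough numpy/scipy stand-in for the two paths under discussion (not TVM code; it assumes the goal is S @ B with S sparse and that the existing op computes data @ weight.T with a sparse weight; shapes and density are placeholders):

```python
# Sketch only: the direct sparse-lhs product vs. the transpose workaround.
import numpy as np
import scipy.sparse as sp

M, K, N = 128, 256, 512                           # hypothetical dimensions
S = sp.random(M, K, density=0.10, format="csr")   # sparse operand
B = np.random.rand(K, N)                          # dense operand
print("density:", S.nnz / (M * K))                # the density being asked about

# Direct path (what the PR proposes): no extra transposes.
direct = np.asarray(S @ B)                        # (M, N)

# Existing-op workaround: transpose the dense matrix so it can play the
# "data" role (B.T @ S.T), then transpose the result back -- transposing
# only the dense matrix is not enough.
workaround = np.asarray(B.T @ S.T).T              # (N, M) -> (M, N)

np.testing.assert_allclose(direct, workaround, rtol=1e-6)
```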

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-11 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-725492811 Hi @tkonolige, below is the benchmark data I have obtained for 4 different input dimensions. NOTE: here "with Transpose" means using the existing sparse_dense Op with
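The measured numbers are in the PR thread itself; purely to illustrate the shape of such a comparison (not the PR's actual benchmark script, and the dimensions and density below are made up), a scipy-level harness might look like:

```python
# Hypothetical harness: times the direct sparse-lhs product against the
# transpose workaround for a few made-up (M, K, N) cases.
import timeit
import numpy as np
import scipy.sparse as sp

CASES = [(32, 128, 128), (64, 256, 256), (128, 512, 512), (256, 1024, 1024)]
DENSITY = 0.05                                    # placeholder sparsity

for M, K, N in CASES:
    S = sp.random(M, K, density=DENSITY, format="csr")
    B = np.random.rand(K, N)
    t_direct = timeit.timeit(lambda: np.asarray(S @ B), number=50)
    # .copy() materialises the final transpose so its cost is counted,
    # as it would be for a real transpose op in the compiled graph.
    t_workaround = timeit.timeit(lambda: np.asarray(B.T @ S.T).T.copy(), number=50)
    print(f"M={M} K={K} N={N}: direct={t_direct:.4f}s  with transpose={t_workaround:.4f}s")
```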

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-09 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-724235751 Sure, I will check how much overhead the transpose adds when using the existing Op. This is an

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-11-09 Thread GitBox
ANSHUMAN87 commented on pull request #6889: URL: https://github.com/apache/incubator-tvm/pull/6889#issuecomment-724198770 > I'm not sure that the correct course of action is to add a flag to `sparse_dense` to support AB^T with B sparse. This makes all the implementations of `sparse_dense`
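For context, a numpy/scipy reference of the two semantics being weighed; the flag name, shapes, and density below are illustrative assumptions, not the PR's actual interface:

```python
# Illustration only: what a "sparse first operand" flag would mean for the
# op's semantics.
import numpy as np
import scipy.sparse as sp

def sparse_dense_ref(data, weight, sparse_lhs=False):
    """Reference semantics: out = data @ weight.T.

    sparse_lhs=False: weight is the sparse (CSR) operand -- the existing
                      behaviour of the op.
    sparse_lhs=True:  data is the sparse operand -- the case the PR adds.
    In this scipy reference the flag changes nothing because scipy dispatches
    on the operand types; a TOPI kernel, by contrast, is written for a
    specific layout, which is the maintenance concern raised in the review.
    """
    return np.asarray(data @ weight.T)

M, K, N = 64, 128, 256                            # hypothetical shapes
S = sp.random(M, K, density=0.1, format="csr")    # sparse operand
D = np.random.rand(N, K)                          # dense operand

y_existing = sparse_dense_ref(D, S, sparse_lhs=False)   # (N, M), weight sparse
y_proposed = sparse_dense_ref(S, D, sparse_lhs=True)    # (M, N), data sparse
```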