Useful resource, thanks. I ended up fixing the approach in my 2nd example 
(using three `ndarray`s). Basically, you can only pass sparse tensors whose 
sparsity pattern matches the placeholders you compiled against, not just 
tensors with the same level of sparsity.

Thus, when constructing the placeholder tensors for compilation, you need to 
take their shapes from the sparse data you will later pass to the compiled function.

E.g. if your sparse data is in a SciPy CSR object called `W_sp_np`, your TVM 
placeholders would be constructed with:

```
from tvm import te

# One placeholder per CSR array; shapes and dtypes come from the actual data.
W_data = te.placeholder(shape=W_sp_np.data.shape,
                        dtype=str(W_sp_np.data.dtype), name='W_data')
W_indices = te.placeholder(shape=W_sp_np.indices.shape,
                           dtype=str(W_sp_np.indices.dtype), name='W_indices')
W_indptr = te.placeholder(shape=W_sp_np.indptr.shape,
                          dtype=str(W_sp_np.indptr.dtype), name='W_indptr')
```
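
For completeness, here is a minimal sketch of how those placeholders might then be used at runtime. The compute definition `Y`, the schedule, and `out_shape` are assumed names for illustration only; the point is just that the CSR arrays you pass must match the placeholder shapes exactly.

```
# Minimal sketch (assumed names: Y is some compute built from the
# placeholders above, out_shape is its output shape).
import numpy as np
import tvm
from tvm import te

s = te.create_schedule(Y.op)
func = tvm.build(s, [W_data, W_indices, W_indptr, Y], target="llvm")

dev = tvm.cpu()
# The CSR object is passed as three dense ndarrays; their shapes must
# match the placeholder shapes used at compile time.
w_data = tvm.nd.array(W_sp_np.data, dev)
w_indices = tvm.nd.array(W_sp_np.indices, dev)
w_indptr = tvm.nd.array(W_sp_np.indptr, dev)
y = tvm.nd.array(np.zeros(out_shape, dtype=W_sp_np.data.dtype), dev)
func(w_data, w_indices, w_indptr, y)
```

If the sparsity pattern of the weights changes (different `data`/`indices`/`indptr` lengths), the function has to be recompiled with new placeholder shapes.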

My (unoptimised) sparse NCHW GEMM convolution is now working. I'll see if I can 
draft a tutorial about what I've learned once I've completed some other things.




