madhavajay commented on issue #12553:
URL: https://github.com/apache/arrow/issues/12553#issuecomment-1150518605

   @rok thanks for the information, it sounds like there's some great work going on here. I hope we can utilise more of it in the future.
   
   Regarding the Compressed Sparse Fiber (CSF) `SparseCSFTensor`: firstly, wow, that's really cool. For anyone arriving here who doesn't know what the format is, this blog post explains it well: https://www.boristhebrave.com/2021/01/01/compressed-sparse-fibers-explained/
   
   So now my only question is: while this seems like an optimal generalized solution for storage, how much computation is required to expand it back out to the dense form in memory when it's time to actually compute on it?
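   To make the question concrete, here is a minimal sketch of the round trip I have in mind, assuming pyarrow's `SparseCSFTensor.from_dense_numpy` and `to_tensor()` APIs (the shapes and values are made up purely for illustration):

```python
import numpy as np
import pyarrow as pa

# A mostly-zero 3D tensor: only a couple of fibres hold data.
dense = np.zeros((4, 5, 6), dtype=np.float64)
dense[0, 1, :] = 1.0
dense[2, 3, 4] = 7.0

# Compress into the CSF representation.
csf = pa.SparseCSFTensor.from_dense_numpy(dense)
print(csf.shape, csf.non_zero_length)  # (4, 5, 6) and the stored non-zero count

# Expanding back out materialises the full dense buffer, so the cost here
# scales with prod(shape) rather than with the number of non-zeros.
roundtrip = csf.to_tensor().to_numpy()
assert np.array_equal(roundtrip, dense)
```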
   
   In our simple implementation, since we are going by whole dimensions only, we can just broadcast when necessary and then collapse back, so the underlying data is just normal numpy arrays, right?
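   For reference, this is roughly the kind of whole-dimension trick I mean (a hypothetical sketch, not our actual code), where a constant dimension is stored with size 1 and numpy broadcasting fills it in only when an operation needs the full shape:

```python
import numpy as np

# Logical shape of the tensor vs. what we actually store: the first
# dimension is constant, so we keep it as size 1.
logical_shape = (1000, 3)
stored = np.array([[1.0, 2.0, 3.0]])  # shape (1, 3)

# Broadcast only when an operation needs the dense view (no copy, just strides).
dense_view = np.broadcast_to(stored, logical_shape)
result = dense_view + np.arange(3)  # ordinary numpy compute on shape (1000, 3)

# Collapse back if the dimension is still constant after the operation.
if (result == result[:1]).all():
    result = result[:1]  # back to shape (1, 3)
```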

