rok commented on pull request #7044:
URL: https://github.com/apache/arrow/pull/7044#issuecomment-654824921


   > Not only scipy, but also 
[SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse) employs the 
split format.
   
   I didn't realize that. Then we should indeed have proper support for this option.
   
   > > Question: could we handle this as a special case of COO tensor rather 
than a new type? Could we serialize `(row, col)` data as a single row major 
tensor of COO type and only deserialize it into the SciPy layout if desired? 
(I'm asking because I'm not sure if such approach is feasible)
   > 
   > Although we can handle the split-format as an internal variation of 
SparseCOOIndex, we still need to introduce the new flatbuffer type.
   
   Could we use the original COO flatbuffer type by 'concatenating' vectors when serializing? The concatenated vectors would effectively be a row-major 2D tensor once serialized. Again, I'm not sure whether this is feasible with flatbuffers, or even desirable - that's why I'm asking. The reason I'm interested is that we could avoid introducing another format.
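   To illustrate what I mean (a sketch only, using SciPy's public `coo_matrix` API, not the actual flatbuffer serialization): the split `(row, col)` vectors can be stacked into a single row-major `(nnz, ndim)` coordinates matrix, and splitting them back out is just column slicing.

   ```python
   import numpy as np
   from scipy.sparse import coo_matrix

   # A small sparse matrix in SciPy's split COO layout:
   # separate row and col index vectors, as SciPy/SuiteSparse store them.
   m = coo_matrix(np.array([[0, 2], [3, 0]]))
   row, col, data = m.row, m.col, m.data

   # 'Concatenated' form: one row-major (nnz, ndim) coordinates matrix.
   coords = np.stack([row, col], axis=1)

   # Deserializing back into the split layout is just slicing the columns.
   m2 = coo_matrix((data, (coords[:, 0], coords[:, 1])), shape=m.shape)
   assert (m2.toarray() == m.toarray()).all()
   ```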
    
   
   > > If we go for a new type - could I propose a name SparseCOOMatrix (as 
opposed to n-dimensional SparseCOOTensor). It could perhaps be shortened to 
`COOM`?
   > 
   > The implementation in this pull-request can handle more than 2-dimension.
   
   Oh, I see. In case we do introduce a new format - your proposal was `SplitCOO` - could we shorten it to `SCO` to keep the naming style?
   
   Just thinking out loud: we could also call this format a coordinate list - `COL` ([wikipedia](https://en.wikipedia.org/wiki/Sparse_matrix#Coordinate_list_(COO)))?
   Or keep `COO` for the vector style and call the tensor style `COT` (coordinate tensor).


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

