rok commented on issue #16264:
URL: https://github.com/apache/arrow/issues/16264#issuecomment-2313010412

   > I don't think using FixedSizeBinary precludes zero-copy -- The following, 
similar to @zeroshade 's suggestion in 
https://github.com/apache/arrow/issues/39753#issuecomment-1908608726 seems to 
work (although I realise more work would be needed at the cython layer to 
accept np.array(..., np.complex64) in pa.array)
   
   Agreed, my point was that an extension array with `FixedSizeList` storage doesn't preclude zero-copy, but I'm not sure there are benefits to using it over `FixedSizeBinary` as storage. Introducing an extension type doesn't come with a requirement to introduce kernels or other machinery for it.
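   To illustrate the zero-copy point, here is a minimal sketch (not an agreed API, just pyarrow as it exists today) reinterpreting a NumPy `complex64` buffer as a `fixed_size_binary(8)` Arrow array and back, without copying the payload:

   ```python
   import numpy as np
   import pyarrow as pa

   values = np.array([1 + 2j, 3 + 4j, 5 + 6j], dtype=np.complex64)

   # Wrap the existing NumPy memory as an Arrow buffer (zero-copy).
   buf = pa.py_buffer(values)

   # Each complex64 element is 8 bytes, so reinterpret as fixed_size_binary(8).
   arr = pa.Array.from_buffers(pa.binary(8), len(values), [None, buf])

   # Round-trip back to NumPy, again without copying the payload.
   roundtrip = np.frombuffer(arr.buffers()[1], dtype=np.complex64)
   assert np.array_equal(values, roundtrip)
   ```

   A `FixedSizeBinary`-backed extension type would just wrap this storage; nothing here requires new kernels.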
   
   > Yes this should work -- I've used something similar with nested 
FixedSizeListArrays to represent complex arrays whose underlying buffers can 
simply be passed to the appropriate NumPy method. However, would this not 
create the need to special case a lot of type handling? i.e. there may need to 
be:
   > 
   > 1. A basic ComplexFloat + ComplexDouble
   > 2. A FixedShapeTensor and a ComplexFixedShapeTensor (or possibly indicate 
complex in the serialized metadata?)
   > 3. Same for VariableShapeTensors
   > 4.  and maybe other compound types.
   
   I think the question here is also whether we want a complex *tensor* extension array or a complex extension array. I am not sure we can use a complex extension array as the storage of `FixedShapeTensorArray`, though if we can, that would be best. Can an extension array be storage for another extension array? (Or am I misunderstanding, and you mean we should introduce a primary complex type?)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]