haohuaijin commented on issue #6363:
URL: https://github.com/apache/arrow-rs/issues/6363#issuecomment-2366788331

   Hi @alamb, I think I found the reason after reading the code.
   
   While decoding a `RecordBatch`, we first construct an `ArrayReader`. We then call `ArrayReader::next_buffer` to get the `Buffer` needed to construct each `Array`; inside `next_buffer`, `slice_with_length` returns a zero-copy slice of the overall message `Buffer`.
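
   For illustration, here is a minimal, hypothetical sketch of that step (the real logic is in the permalinks below, and the actual signature differs); the point is that each per-array `Buffer` is a zero-copy view into the single message-body `Buffer`:

```rust
use arrow_buffer::Buffer;

// Hypothetical simplification of `ArrayReader::next_buffer` (not the actual
// arrow-ipc code): the reader holds one `Buffer` for the entire IPC message
// body and hands each array a zero-copy slice of it.
fn next_buffer(body: &Buffer, offset: usize, len: usize) -> Buffer {
    // `slice_with_length` does not copy; the returned `Buffer` points into
    // the same shared allocation (`data`) as `body`.
    body.slice_with_length(offset, len)
}

fn main() {
    let body = Buffer::from_vec(vec![0u8; 64]);
    let first = next_buffer(&body, 0, 8);
    assert_eq!(first.len(), 8);
}
```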
   
   The code shows that the underlying `data` is shared among all the resulting [`Buffer`](https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-buffer/src/buffer/immutable.rs#L233) instances. When `get_array_memory_size()` is called on a `RecordBatch`, it calls `get_array_memory_size()` on each `Array`, and each `Array` in turn calls `capacity()` on each of its `Buffer`s. Since `capacity()` reports the total capacity of the shared `data` (roughly the size of the whole `RecordBatch`) rather than the bytes actually used by that `Array`, `get_array_memory_size()` becomes very large, especially when there are many fields.
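
   The overcounting is easy to see on `Buffer` directly. A minimal sketch, using only the public `arrow-buffer` API:

```rust
use arrow_buffer::Buffer;

fn main() {
    // One 1 MiB allocation, standing in for an IPC message body.
    let body = Buffer::from_vec(vec![0u8; 1024 * 1024]);

    // A tiny zero-copy slice, as each per-array buffer would be.
    let slice = body.slice_with_length(0, 8);

    // `capacity()` reports the shared allocation, not the 8 bytes in use,
    // so every slice of the body contributes the full ~1 MiB to
    // `get_array_memory_size()`.
    assert_eq!(slice.capacity(), body.capacity());
}
```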
   
   
https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-ipc/src/reader.rs#L562-L569
 
https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-ipc/src/reader.rs#L404-L406
 
https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-ipc/src/reader.rs#L51-L63
 
https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-buffer/src/buffer/immutable.rs#L223-L237
 
https://github.com/apache/arrow-rs/blob/d05cf6d5e74e79ddcacaa4a68bddaba230b0f163/arrow-buffer/src/buffer/immutable.rs#L166-L168
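
   For completeness, a rough end-to-end reproduction sketch (assuming only the public `arrow-array` and `arrow-ipc` APIs; exact numbers will vary): round-tripping a batch with many small columns through the IPC stream format inflates `get_array_memory_size()` roughly by a factor of the column count:

```rust
use std::io::Cursor;
use std::sync::Arc;

use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use arrow_ipc::reader::StreamReader;
use arrow_ipc::writer::StreamWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A batch with many small columns, so the shared-body effect is visible.
    let columns: Vec<(String, ArrayRef)> = (0..100)
        .map(|i| {
            (
                format!("c{i}"),
                Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef,
            )
        })
        .collect();
    let batch = RecordBatch::try_from_iter(columns)?;
    println!("before IPC: {}", batch.get_array_memory_size());

    // Round-trip through the IPC stream format.
    let mut bytes = Vec::new();
    {
        let mut writer = StreamWriter::try_new(&mut bytes, batch.schema().as_ref())?;
        writer.write(&batch)?;
        writer.finish()?;
    }
    let mut reader = StreamReader::try_new(Cursor::new(bytes), None)?;
    let decoded = reader.next().unwrap()?;

    // Every column's buffers are slices of one shared body buffer, and each
    // slice reports the whole body's capacity, so this number is inflated.
    println!("after IPC:  {}", decoded.get_array_memory_size());
    Ok(())
}
```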
   
   

