[ https://issues.apache.org/jira/browse/IMPALA-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16573925#comment-16573925 ]

Todd Lipcon commented on IMPALA-7406:
-------------------------------------

I filed https://github.com/google/flatbuffers/issues/4865 with some 
measurements of different approaches to avoid the flyweight allocations at 
access time.

> Flatbuffer wrappers use almost as much memory as underlying data
> ----------------------------------------------------------------
>
>                 Key: IMPALA-7406
>                 URL: https://issues.apache.org/jira/browse/IMPALA-7406
>             Project: IMPALA
>          Issue Type: Improvement
>          Components: Catalog
>            Reporter: Todd Lipcon
>            Priority: Major
>
> Currently the file descriptors stored in the catalogd memory for each 
> partition use a FlatBuffer to reduce the number of separate objects on the 
> Java heap. However, the FlatBuffer objects internally each store a ByteBuffer 
> and int position, so each object takes 32 bytes on its own. The ByteBuffer 
> takes 56 bytes since it stores various references, endianness, limit, mark, 
> position, etc. This amounts to about 88 bytes overhead on top of the actual 
> underlying flatbuf byte array which is typically around 100 bytes for a 
> single-block file. So we have about a 1:1 ratio of memory overhead and a 
> 2:1 ratio of object-count overhead for each partition.
> If we simply stored the byte[] array and constructed wrappers on demand, we'd 
> save 88 bytes and 2 objects per partition. The downside is that we'd need to 
> do short-lived ByteBuffer allocations at access time, and based on some 
> benchmarking I did, they don't get escape-analyzed out. So, it's not a super 
> clear win, but still worth considering.
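
For illustration, here is a minimal sketch of the "store only the byte[] and
construct wrappers on demand" idea described above. The FbFileDesc class, its
package, and the length() accessor are assumed names for this sketch, not
necessarily the actual generated code:

import java.nio.ByteBuffer;

import org.apache.impala.fb.FbFileDesc;  // generated FlatBuffers wrapper (name/package assumed)

public class FileDescriptor {
  // Long-lived state is just the serialized flatbuffer bytes (~100 bytes for a
  // single-block file); no ByteBuffer or wrapper object is retained.
  private final byte[] fbBytes_;

  public FileDescriptor(byte[] fbBytes) {
    this.fbBytes_ = fbBytes;
  }

  // Builds a short-lived flyweight on each access. The ByteBuffer.wrap() and
  // wrapper allocations happen per call and, per the benchmarking mentioned
  // above, are not eliminated by escape analysis.
  private FbFileDesc desc() {
    return FbFileDesc.getRootAsFbFileDesc(ByteBuffer.wrap(fbBytes_));
  }

  // Example accessor; 'length' is assumed to be a field of FbFileDesc.
  public long getLength() {
    return desc().length();
  }
}

This trades the ~88 bytes and 2 extra objects of permanent overhead per
descriptor for transient allocations at access time, which is the tradeoff
weighed in the description above.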



