[ https://issues.apache.org/jira/browse/ARROW-2019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16337819#comment-16337819 ]

ASF GitHub Bot commented on ARROW-2019:
---------------------------------------

jacques-n commented on issue #1497: ARROW-2019: [JAVA] Control the memory 
allocated for inner vector in LIST
URL: https://github.com/apache/arrow/pull/1497#issuecomment-360184071
 
 
   I gave this some more thought, and I think the actual concept is 
variable-width data density. So the parameter should probably be called 
density, and we should allow it to be set on varchar/varbinary as well. I 
think we should also add a new getDensity() method on each of these types 
that returns the relative density of the structure.
   
   For example, the density of a list vector would be the average list size 
per entry:
   
   10 => on average, each position has a list of 10 values.
   0.1 => out of ten lists, one has a single element and all the other lists 
are null.
   
   The same could be applied to varchar/varbinary, but there the density 
would be the average number of data bytes per element. If we have a large 
number of null varchars, those would naturally decrease the density of the 
vector.
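   
   A minimal sketch of how that density could be computed, assuming 
hypothetical helpers (listDensity/varCharDensity are illustrative names, not 
methods that exist in Arrow today):
   
   import org.apache.arrow.vector.VarCharVector;
   import org.apache.arrow.vector.complex.ListVector;
   
   final class DensityExamples {
   
     // Density of a list vector: average number of inner values per position.
     static double listDensity(ListVector vector) {
       int valueCount = vector.getValueCount();
       if (valueCount == 0) {
         return 0;
       }
       int innerValueCount = vector.getDataVector().getValueCount();
       return (double) innerValueCount / valueCount;
     }
   
     // Density of a varchar vector: average number of data bytes per element.
     // Null entries contribute zero bytes, which lowers the density.
     static double varCharDensity(VarCharVector vector) {
       int valueCount = vector.getValueCount();
       if (valueCount == 0) {
         return 0;
       }
       long dataBytes = 0;
       for (int i = 0; i < valueCount; i++) {
         if (!vector.isNull(i)) {
           dataBytes += vector.get(i).length;
         }
       }
       return (double) dataBytes / valueCount;
     }
   }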

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


> Control the memory allocated for inner vector in LIST
> -----------------------------------------------------
>
>                 Key: ARROW-2019
>                 URL: https://issues.apache.org/jira/browse/ARROW-2019
>             Project: Apache Arrow
>          Issue Type: Improvement
>            Reporter: Siddharth Teotia
>            Assignee: Siddharth Teotia
>            Priority: Critical
>              Labels: pull-request-available
>
> We have observed cases in our external sort code where the amount of memory 
> actually allocated for a record batch turns out to be more than necessary, 
> and also more than what the operator had reserved for special purposes, so 
> queries fail with OOM.
> The usual way to control the memory allocated by vector.allocateNew() is to 
> call setInitialCapacity() first; the latter modifies the vector state 
> variables that are then used to allocate memory. However, due to the 
> multiplier of 5 used in ListVector, we end up asking for more memory than 
> necessary. For example, for a value count of 4095, we asked for 128 KB of 
> memory for the offset buffer of the VarCharVector of a field that was a list 
> of varchars: ((4095 * 5) + 1) * 4 = 81,904 bytes (~80 KB), rounded up to 
> 128 KB by the power-of-two allocation (see the worked sketch after this 
> description).
> We had earlier changed setInitialCapacity() of ListVector, when we were 
> facing problems with deeply nested lists, to apply the multiplier only to 
> the leaf scalar vector.
> It looks like there is a need for a specialized setInitialCapacity() for 
> ListVector where the caller dictates the repeatedness.
> There is also another bug in setInitialCapacity(): the allocation of the 
> validity buffer doesn't obey the capacity specified in setInitialCapacity().
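
A worked sketch of the over-allocation described above, plus what a 
density-aware setInitialCapacity() could look like (the two-argument call 
shown in the comments is illustrative, not the existing API):

  // Offset-buffer sizing for a list-of-varchar field with 4095 values,
  // assuming the ListVector multiplier of 5 and 4-byte offsets.
  public final class OffsetBufferSizing {
    public static void main(String[] args) {
      int valueCount = 4095;

      // Bytes requested for the inner offset buffer with the fixed multiplier.
      long requestedBytes = ((long) valueCount * 5 + 1) * 4;  // 81,904 bytes (~80 KB)

      // Buffer allocations are rounded up to the next power of two
      // (simplified round-up, valid for this value).
      long allocatedBytes = Long.highestOneBit(requestedBytes - 1) << 1;  // 131,072 bytes (128 KB)

      System.out.println("requested = " + requestedBytes + " bytes");
      System.out.println("allocated = " + allocatedBytes + " bytes");

      // A density-aware overload, along the lines discussed in this issue,
      // would let the caller state the expected repeatedness instead of
      // relying on the fixed multiplier of 5, e.g.:
      //   listVector.setInitialCapacity(valueCount, /* density = */ 2.0);
    }
  }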



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
