[
https://issues.apache.org/jira/browse/DRILL-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14941183#comment-14941183
]
ASF GitHub Bot commented on DRILL-3874:
---------------------------------------
Github user cwestin commented on the pull request:
https://github.com/apache/drill/pull/181#issuecomment-145033096
I tried the change you suggested, Jason, having getBufferSize() call
getBufferSizeFor(), but it caused a lot of unit test failures, so there must be
something else going on in some places. I want to get this into 1.2, so I've
undone that for now. But I've added the valueCount + 1 you spotted, and I've
also changed the check to if (valueCount == 0) in
BaseRepeatedValueVector.getBufferSizeFor() -- that was a bug nobody had spotted.
I'm testing with those changes now and will push shortly. It'd be great if one
of you could merge this after that.
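To make the fix concrete, here is a minimal, hypothetical sketch of the sizing
logic being discussed. It is not Drill's actual BaseRepeatedValueVector code;
the class name, the width constants, and the averageElementsPerValue parameter
are illustrative assumptions, used only to show why the offsets term involves
valueCount + 1 and why an explicit valueCount == 0 guard matters.

    // Hypothetical, simplified sketch -- not Drill's BaseRepeatedValueVector.
    public class RepeatedVectorSizingSketch {

      private static final int OFFSET_WIDTH = 4;   // one int offset per entry (assumed)
      private static final int ELEMENT_WIDTH = 8;  // assumed width of each data element

      // Rough estimate of the buffer bytes needed to hold valueCount values,
      // each containing averageElementsPerValue elements.
      public static long getBufferSizeFor(int valueCount, int averageElementsPerValue) {
        if (valueCount == 0) {
          // Without this guard, the offsets term below would still report a
          // nonzero size for an empty vector (the single leading 0 offset).
          return 0;
        }
        // A repeated vector keeps valueCount + 1 offsets: offsets i and i + 1
        // bracket the elements belonging to value i.
        long offsetsBytes = (long) (valueCount + 1) * OFFSET_WIDTH;
        long dataBytes = (long) valueCount * averageElementsPerValue * ELEMENT_WIDTH;
        return offsetsBytes + dataBytes;
      }

      public static void main(String[] args) {
        System.out.println(getBufferSizeFor(0, 20_000)); // 0, thanks to the guard
        System.out.println(getBufferSizeFor(1, 20_000)); // 8 offset bytes + 160,000 data bytes
      }
    }

In this sketch, dropping the zero-count guard would make every empty vector
report a small but nonzero size, which is the kind of over-estimation the check
described in the comment above is meant to prevent.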
> flattening large JSON objects consumes too much direct memory
> -------------------------------------------------------------
>
> Key: DRILL-3874
> URL: https://issues.apache.org/jira/browse/DRILL-3874
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Flow
> Affects Versions: 1.1.0
> Reporter: Chris Westin
> Assignee: Chris Westin
>
> A JSON record has a field whose value is an array with 20,000 elements; the
> record's size is 4MB. A select is used to flatten this. The query profile
> reports that the peak memory utilization was 8GB, most of it used by the
> flatten.