[
https://issues.apache.org/jira/browse/DRILL-6853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16688844#comment-16688844
]
ASF GitHub Bot commented on DRILL-6853:
---------------------------------------
sachouche commented on issue #1544: DRILL-6853: Make the complex parquet reader
batch max row size config…
URL: https://github.com/apache/drill/pull/1544#issuecomment-439240598
The test suite (java-exec) passed; the checks failed only because of a timeout:
_The job exceeded the maximum time limit for jobs, and has been terminated._
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Parquet Complex Reader for nested schema should have configurable memory or
> max records to fetch
> ------------------------------------------------------------------------------------------------
>
> Key: DRILL-6853
> URL: https://issues.apache.org/jira/browse/DRILL-6853
> Project: Apache Drill
> Issue Type: Bug
> Affects Versions: 1.14.0
> Reporter: Nitin Sharma
> Assignee: salim achouche
> Priority: Major
> Labels: doc-impacting, pull-request-available, ready-to-commit
> Fix For: 1.15.0
>
>
> The Parquet complex reader, while fetching a nested schema, should have a
> configurable memory limit or max record count per batch, rather than
> defaulting to 4000 records.
> When scanning TBs of data with wide columns, the fixed default can easily
> cause OOM issues.
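The fix the issue asks for amounts to a dual-bound batch limit: stop filling a batch when either the configured record count or the memory budget is exhausted, whichever comes first. A minimal sketch of that check is below; the class and field names are illustrative only and are not taken from the Drill code base.

```java
// Illustrative sketch of a dual-bound batch limit: a batch stops accepting
// records when either the configured max record count or the per-batch
// memory budget would be exceeded. Names here are hypothetical.
public class BatchLimit {
    private final int maxRecords;     // configurable, instead of a hard-coded 4000
    private final long maxBatchBytes; // per-batch memory budget

    public BatchLimit(int maxRecords, long maxBatchBytes) {
        this.maxRecords = maxRecords;
        this.maxBatchBytes = maxBatchBytes;
    }

    /** Returns true if one more record of the given width still fits. */
    public boolean hasCapacity(int recordsSoFar, long bytesSoFar, long nextRecordBytes) {
        return recordsSoFar < maxRecords
            && bytesSoFar + nextRecordBytes <= maxBatchBytes;
    }
}
```

With wide columns, the memory bound trips long before the record bound, which is exactly the scenario the default 4000-record batch fails to protect against.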
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)