[
https://issues.apache.org/jira/browse/DRILL-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16012253#comment-16012253
]
ASF GitHub Bot commented on DRILL-5516:
---------------------------------------
GitHub user arina-ielchiieva opened a pull request:
https://github.com/apache/drill/pull/839
DRILL-5516: Use max allowed allocated memory when defining batch size for hbase record reader
Instead of using a row count (4000), we will use the maximum allowed
allocated memory, which defaults to 64 MB. If the first row in a batch is
larger than the allowed default, it will still be written to the batch, but
the batch will contain only that row.
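The sizing rule described above can be sketched as follows. This is a minimal illustration of memory-capped batching, not the actual Drill HBase reader code; `MAX_BATCH_MEM` and `toBatches` are hypothetical names introduced for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSizer {
    // Assumed 64 MB default cap, mirroring the limit described in the fix.
    static final long MAX_BATCH_MEM = 64L * 1024 * 1024;

    /**
     * Splits rows into batches capped by total memory rather than row count.
     * A first row larger than the cap still forms a batch of its own.
     */
    static List<List<byte[]>> toBatches(List<byte[]> rows) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long used = 0;
        for (byte[] row : rows) {
            // Close the current batch when adding this row would exceed the cap,
            // but never leave a batch empty: an oversized row is still admitted.
            if (!current.isEmpty() && used + row.length > MAX_BATCH_MEM) {
                batches.add(current);
                current = new ArrayList<>();
                used = 0;
            }
            current.add(row);
            used += row.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<byte[]> rows = new ArrayList<>();
        rows.add(new byte[70 * 1024 * 1024]); // oversized row: gets its own batch
        rows.add(new byte[1024]);             // small row: starts a second batch
        List<List<byte[]>> batches = toBatches(rows);
        System.out.println(batches.size() + " batches; first holds "
                + batches.get(0).size() + " row");
    }
}
```

The key design point is that the cap is checked before adding a row only when the batch is non-empty, so a single huge row never deadlocks the reader; it is emitted alone rather than rejected.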
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/arina-ielchiieva/drill DRILL-5516
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/drill/pull/839.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #839
----
commit ce3f227f7f06baa5e43f8f2529036899549495aa
Author: Arina Ielchiieva <[email protected]>
Date: 2017-05-15T15:51:02Z
DRILL-5516: Use max allowed allocated memory when defining batch size for
hbase record reader
----
> Use max allowed allocated memory when defining batch size for hbase record reader
> ---------------------------------------------------------------------------------
>
> Key: DRILL-5516
> URL: https://issues.apache.org/jira/browse/DRILL-5516
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - HBase
> Affects Versions: 1.10.0
> Reporter: Arina Ielchiieva
> Assignee: Arina Ielchiieva
>
> If the early limit 0 optimization is enabled (alter session set
> `planner.enable_limit0_optimization` = true), Drill will answer limit 0
> queries using data types from the available metadata when possible.
> When Drill cannot determine data types from metadata (or the early limit 0
> optimization is disabled), Drill reads the first batch of data to determine
> the schema.
> The HBase reader determines the maximum batch size using a magic number
> (4000 rows), which can lead to OOM when rows are large. The overall
> vector/batch size issue will be reconsidered in future releases; this is a
> temporary fix to avoid OOM.
> Instead of a row count, we will use the maximum allowed allocated memory,
> which defaults to 64 MB. If the first row in a batch is larger than the
> allowed default, it will still be written to the batch, but the batch will
> contain only that row.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)