[
https://issues.apache.org/jira/browse/DRILL-5516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16016056#comment-16016056
]
ASF GitHub Bot commented on DRILL-5516:
---------------------------------------
Github user arina-ielchiieva commented on the issue:
https://github.com/apache/drill/pull/839
@paul-rogers
1. Renamed the PR as you suggested to better convey the idea of the changes.
Thank you for the suggestion.
2. Included the row count in the batch size determination. Thus the batch size
will be limited either to the max allowed number of rows (as before, when the
memory limit is not exceeded) or to the number of records that fit within the
max allowed memory limit; see the sketch after this list.
3. Unfortunately, I did not add unit tests. I am not sure there is a way to
test the HBase reader in isolation for true unit testing (I know you have been
working on some changes in this area, but I don't know the details).
However, I have tested the fix manually by changing the max row count and the
memory allocation in the code. I could not write Drill "unit tests" for this,
since it is not easy to mock `static final int` fields, and adding data sets
large enough to test with realistic numbers would extend our unit test running
time. I guess in this case it is better to add such a test to the functional
test suite. So, if possible, let's leave this change without unit tests for now.
> Limit memory usage for HBase reader
> -----------------------------------
>
> Key: DRILL-5516
> URL: https://issues.apache.org/jira/browse/DRILL-5516
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - HBase
> Affects Versions: 1.10.0
> Reporter: Arina Ielchiieva
> Assignee: Arina Ielchiieva
> Fix For: 1.11.0
>
>
> If the early limit 0 optimization is enabled (alter session set
> `planner.enable_limit0_optimization` = true), then when executing limit 0
> queries Drill will return data types from the available metadata if possible.
> When Drill cannot determine data types from metadata (or if the early limit 0
> optimization is disabled), Drill will read the first batch of data and
> determine the schema from it.
> The HBase reader determines the max batch size using a magic number (4000),
> which can lead to OOM when the row size is large. The overall vector/batch
> size issue will be reconsidered in future releases; this is a temporary fix
> to avoid OOM.
> To limit memory usage for the HBase reader, we are adding a max allowed
> allocated memory constant which will default to 64 MB. Thus the batch size
> will be limited either to 4000 rows (as before, when the memory limit is not
> exceeded) or to the number of records that fit within the max allowed memory
> limit. If the first row in a batch is larger than the allowed default, it
> will still be written to the batch, but the batch will contain only this row.
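> For illustration of the arithmetic (row sizes here are assumed, not taken
> from the implementation): with the 64 MB default, rows of roughly 32 KB each
> would stop a batch at about 2048 records, while with 1 KB rows the 4000-row
> cap is reached first, at roughly 4 MB allocated.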
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)