[ https://issues.apache.org/jira/browse/DRILL-3678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705755#comment-14705755 ]

Aman Sinha commented on DRILL-3678:
-----------------------------------

This is dependent on DRILL-3209, which [~jaltekruse] is working on. Assigning 
to him.

> Plan generation for Drill on Hive takes huge Java heap size
> -----------------------------------------------------------
>
>                 Key: DRILL-3678
>                 URL: https://issues.apache.org/jira/browse/DRILL-3678
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Query Planning & Optimization
>    Affects Versions: 1.1.0
>         Environment: 1.1
>            Reporter: Hao Zhu
>            Assignee: Jinfeng Ni
>
> ===Env===
> Drill 1.0 on Hive 0.13
> (Also tested with Drill 1.1; same behavior.)
> 8-node Drill cluster.
> Java heap size is set to 8 GB and direct memory to 96 GB on each drillbit.
> ===Symptom===
> This is a partitioned Hive parquet table with multi-level partitions.
> The Hive table is several TB in size, with tens of thousands of leaf 
> partitions.
> When running a "select * from table limit 10", the query stays in the 
> "pending" state while generating the SQL plan, and the drillbits eventually 
> crash with a Java heap OOM.
> {code}
> java.lang.OutOfMemoryError: Java heap space
>         at hive.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:599)
>         at hive.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:360)
>         at hive.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:100)
>         at hive.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:172)
>         at hive.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:130)
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:95)
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:66)
>         at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
>         at org.apache.drill.exec.store.hive.HiveRecordReader.init(HiveRecordReader.java:246)
>         at org.apache.drill.exec.store.hive.HiveRecordReader.<init>(HiveRecordReader.java:138)
>         at org.apache.drill.exec.store.hive.HiveScanBatchCreator.getBatch(HiveScanBatchCreator.java:58)
>         at org.apache.drill.exec.store.hive.HiveScanBatchCreator.getBatch(HiveScanBatchCreator.java:34)
>         at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:150)
>         at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:173)
>         at org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:106)
>         at org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:81)
>         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:235)
>         at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Node ran out of Heap memory, exiting.
> java.lang.OutOfMemoryError: Java heap space
> {code}
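> The trace shows where the heap goes on the execution side: Drill's Hive 
> plugin goes through Hive's mapred record reader, and merely constructing 
> that reader pulls a whole row group into on-heap buffers. A minimal sketch 
> of that path (the JobConf/file setup is assumed; raw generic types used to 
> stay version-neutral):
> {code}
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat;
> import org.apache.hadoop.mapred.FileSplit;
> import org.apache.hadoop.mapred.JobConf;
> import org.apache.hadoop.mapred.RecordReader;
> import org.apache.hadoop.mapred.Reporter;
>
> public class HiveHeapReadSketch {
>   // Sketch of the allocation path in the trace above: the wrapper's
>   // constructor (ParquetRecordReaderWrapper.<init>) eagerly calls
>   // nextKeyValue(), which drives ParquetFileReader.readNextRowGroup() ->
>   // ConsecutiveChunkList.readAll(), so the column chunks land in on-heap
>   // byte[] buffers -- none of it in Drill's direct memory.
>   static void openReader(JobConf job, Path file, long length) throws Exception {
>     FileSplit split = new FileSplit(file, 0, length, (String[]) null);
>     RecordReader<?, ?> reader =
>         new MapredParquetInputFormat().getRecordReader(split, job, Reporter.NULL);
>     reader.close();
>   }
> }
> {code}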
> We captured several stack traces of the foreman thread on the foreman 
> drillbit; here are two samples taken at different times:
> {code}
> 2a482cd9-7fb2-c492-1356-d049e90870c8:foreman id=115 state=RUNNABLE
>     at org.apache.xerces.dom.DeferredElementNSImpl.synchronizeData(Unknown Source)
>     at org.apache.xerces.dom.ElementImpl.getTagName(Unknown Source)
>     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2348)
>     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2234)
>     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2151)
>     at org.apache.hadoop.conf.Configuration.get(Configuration.java:871)
>     at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2069)
>     at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:421)
>     at org.apache.drill.exec.store.hive.HiveScan.splitInput(HiveScan.java:178)
>     at org.apache.drill.exec.store.hive.HiveScan.getSplits(HiveScan.java:167)
>     at org.apache.drill.exec.store.hive.HiveScan.access$000(HiveScan.java:69)
>     at org.apache.drill.exec.store.hive.HiveScan$1.run(HiveScan.java:146)
>     at org.apache.drill.exec.store.hive.HiveScan$1.run(HiveScan.java:144)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
>     at org.apache.drill.exec.store.hive.HiveScan.getSplitsWithUGI(HiveScan.java:144)
>     at org.apache.drill.exec.store.hive.HiveScan.<init>(HiveScan.java:119)
>     at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:78)
>     at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:41)
>     at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:53)
>     at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:72)
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:81)
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:63)
>     at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:37)
>     at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:228)
>     at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:795)
>     at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:303)
>     at org.apache.calcite.prepare.PlannerImpl.transform(PlannerImpl.java:316)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.logicalPlanningVolcanoAndLopt(DefaultSqlHandler.java:514)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:211)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:246)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:162)
>     at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:178)
>     at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:903)
>     at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:242)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> --------
> 2a482cd9-7fb2-c492-1356-d049e90870c8:foreman id=115 state=RUNNABLE
>     at java.util.zip.Inflater.inflateBytes(Native Method)
>     at java.util.zip.Inflater.inflate(Inflater.java:259)
>     at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
>     at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
>     at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
>       - locked java.io.InputStreamReader@63f0c79a
>     at java.io.InputStreamReader.read(InputStreamReader.java:184)
>     at java.io.BufferedReader.fill(BufferedReader.java:154)
>     at java.io.BufferedReader.readLine(BufferedReader.java:317)
>       - locked java.io.InputStreamReader@63f0c79a
>     at java.io.BufferedReader.readLine(BufferedReader.java:382)
>     at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
>     at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
>     at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2257)
>     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2234)
>     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2151)
>       - locked org.apache.hadoop.mapred.JobConf@6ef80e0e
>     at org.apache.hadoop.conf.Configuration.get(Configuration.java:871)
>     at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2069)
>     at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:421)
>     at org.apache.drill.exec.store.hive.HiveScan.splitInput(HiveScan.java:178)
>     at org.apache.drill.exec.store.hive.HiveScan.getSplits(HiveScan.java:167)
>     at org.apache.drill.exec.store.hive.HiveScan.access$000(HiveScan.java:69)
>     at org.apache.drill.exec.store.hive.HiveScan$1.run(HiveScan.java:146)
>     at org.apache.drill.exec.store.hive.HiveScan$1.run(HiveScan.java:144)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
>     at org.apache.drill.exec.store.hive.HiveScan.getSplitsWithUGI(HiveScan.java:144)
>     at org.apache.drill.exec.store.hive.HiveScan.<init>(HiveScan.java:119)
>     at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:78)
>     at org.apache.drill.exec.store.hive.HiveStoragePlugin.getPhysicalScan(HiveStoragePlugin.java:41)
>     at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:53)
>     at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:72)
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:81)
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:63)
>     at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:37)
>     at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:228)
>     at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:795)
>     at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:303)
>     at org.apache.calcite.prepare.PlannerImpl.transform(PlannerImpl.java:316)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.logicalPlanningVolcanoAndLopt(DefaultSqlHandler.java:514)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:211)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:246)
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:162)
>     at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:178)
>     at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:903)
>     at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:242)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:744)
> {code}
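> Both samples catch the foreman inside JobConf construction in 
> HiveScan.splitInput: the constructor path (JobConf.<init> -> 
> checkAndWarnDeprecation -> Configuration.get -> getProps -> loadResources) 
> re-parses the Hadoop XML config files each time, which is where the 
> DocumentBuilderFactory/Inflater/Xerces frames come from. Done once per 
> input split, that work scales with the tens of thousands of leaf 
> partitions. A minimal sketch of the pattern (the loop body is 
> illustrative, not Drill's actual code):
> {code}
> import org.apache.hadoop.mapred.JobConf;
>
> public class PerSplitJobConfSketch {
>   // Illustrative only: one JobConf per partition/split, as the traces show.
>   // Each construction re-parses the XML config resources, creating large
>   // short-lived DOM trees on the heap.
>   static void planSplits(int leafPartitions) {
>     for (int i = 0; i < leafPartitions; i++) {
>       JobConf job = new JobConf();  // full config re-parse per split
>       // ... per-partition paths/properties would be applied here, then
>       // the InputFormat's getSplits() called, as in HiveScan.splitInput.
>     }
>   }
>
>   public static void main(String[] args) {
>     planSplits(50_000);  // "tens of thousands of leaf partitions"
>   }
> }
> {code}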
> ===Workaround===
> We had to increase the Java heap size to 24 GB before the query could 
> finish successfully, and even then it took about 16 minutes to generate 
> the plan.
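> For reference, a sketch of the heap bump, assuming the stock drill-env.sh 
> knobs shipped with Drill 1.x (values match this report, not the defaults):
> {code}
> # conf/drill-env.sh on each drillbit -- illustrative
> DRILL_HEAP="24G"                 # was 8G; workaround value
> DRILL_MAX_DIRECT_MEMORY="96G"    # unchanged
> {code}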
> ===Asks===
> Drill queries on Hive parquet tables should utilize direct memory instead 
> of the Java heap.
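> For comparison, Drill's own readers allocate through a BufferAllocator 
> into DrillBufs, so data lands in direct memory (bounded by 
> -XX:MaxDirectMemorySize) rather than on the heap. A hypothetical sketch of 
> that allocation pattern (not the proposed fix itself; allocator wiring is 
> assumed):
> {code}
> import io.netty.buffer.DrillBuf;
> import org.apache.drill.exec.memory.BufferAllocator;
>
> public class DirectReadSketch {
>   // Hypothetical: decode column data into an off-heap DrillBuf instead of
>   // on-heap byte[] buffers. Direct memory is reference-counted and must
>   // be released explicitly.
>   static void readIntoDirect(BufferAllocator allocator) {
>     DrillBuf buf = allocator.buffer(4096);  // off-heap allocation
>     try {
>       // ... fill buf with decoded column data here ...
>     } finally {
>       buf.release();
>     }
>   }
> }
> {code}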


