[ 
https://issues.apache.org/jira/browse/DRILL-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154837#comment-16154837
 ] 

Robert Hou commented on DRILL-5670:
-----------------------------------

Sorry, found the "other" log.

Here are the error messages and stack trace. The failure appears to occur during 
the merge phase:
{noformat}
2017-09-02 07:12:22,348 [26555749-4d36-10d2-6faf-e403db40c370:frag:3:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 26555749-4d36-10d2-6faf-e403db40c370:3:0: 
State to report: FINISHED
2017-09-02 07:12:22,348 [drill-executor-23] DEBUG 
o.a.d.exec.rpc.control.WorkEventBus - Removing fragment manager: 
26555749-4d36-10d2-6faf-e403db40c370:3:0
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.t.g.SingleBatchSorterGen68 - Took 3622 us to sort 1023 records
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.w.batch.BaseRawBatchBuffer - Got last batch from 3:0
2017-09-02 07:12:22,370 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.w.batch.BaseRawBatchBuffer - Streams finished
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.t.g.SingleBatchSorterGen68 - Took 1907 us to sort 529 records
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.p.i.x.m.ExternalSortBatch - Completed load phase: read 978 batches, 
spilled 194 times, total input bytes: 49162397056
2017-09-02 07:12:22,384 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.p.i.x.m.ExternalSortBatch - Starting consolidate phase. Batches = 978, 
Records = 1000000, Memory = 378422096, In-memory batches 8, spilled runs 194
2017-09-02 07:22:09,943 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.p.i.x.managed.SpilledRuns - Starting merge phase. Runs = 62, Alloc. 
memory = 0
2017-09-02 07:22:17,568 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - User Error Occurred: One or more nodes 
ran out of memory while executing the query. (Unable to allocate buffer of size 
16777216 (rounded from 15834000) due to memory limit. Current allocation: 
525809920)
org.apache.drill.common.exceptions.UserException: RESOURCE ERROR: One or more 
nodes ran out of memory while executing the query.

Unable to allocate buffer of size 16777216 (rounded from 15834000) due to 
memory limit. Current allocation: 525809920

[Error Id: 34b695f5-b41d-440a-b07e-7e11531f9419 ]
        at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550)
 ~[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:244)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_51]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
        at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to 
allocate buffer of size 16777216 (rounded from 15834000) due to memory limit. 
Current allocation: 525809920
        at 
org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:238) 
~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:213) 
~[drill-memory-base-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.vector.VarCharVector.allocateNew(VarCharVector.java:402) 
~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.vector.RepeatedVarCharVector.allocateNew(RepeatedVarCharVector.java:272)
 ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount(AllocationHelper.java:39)
 ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.vector.AllocationHelper.allocate(AllocationHelper.java:46)
 ~[vector-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:115)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.VectorInitializer.allocateVector(VectorInitializer.java:95)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.VectorInitializer.allocateBatch(VectorInitializer.java:85)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next(PriorityQueueCopierWrapper.java:262)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load(ExternalSortBatch.java:374)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext(ExternalSortBatch.java:303)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:105) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:95) 
~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:234)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:227)
 ~[drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.7.0_51]
        at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_51]
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
 ~[hadoop-common-2.7.0-mapr-1607.jar:na]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:227)
 [drill-java-exec-1.12.0-SNAPSHOT.jar:1.12.0-SNAPSHOT]
        ... 4 common frames omitted
2017-09-02 07:22:17,568 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 26555749-4d36-10d2-6faf-e403db40c370:2:0: 
State change requested RUNNING --> FAILED
2017-09-02 07:22:17,568 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.physical.impl.BaseRootExec - closed operator 347397726
2017-09-02 07:22:17,784 [26555749-4d36-10d2-6faf-e403db40c370:frag:2:0] DEBUG 
o.a.d.e.p.i.x.managed.SpilledRuns - End of sort. Total write bytes: 
82135266715, Total read bytes: 82135266715
{noformat}
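
For reference, here is a minimal sketch (not Drill's actual BaseAllocator code) of the arithmetic behind that failure: the 15,834,000-byte request is rounded up to the next power of two (16,777,216 = 2^24, consistent with the "rounded from" message), and the allocator rejects it when the rounded size, added to the 525,809,920 bytes already allocated, exceeds the operator's memory limit. The limit value used below is only illustrative; the actual limit is not printed in the log.
{code:java}
// Illustrative sketch only -- not Drill's BaseAllocator implementation.
// Shows why a 15,834,000-byte request is reported as 16,777,216 bytes (2^24)
// and how a limit check on the rounded size produces the OutOfMemoryException.
public class AllocatorRoundingSketch {

  /** Round a requested buffer size up to the next power of two. */
  static long roundToNextPowerOfTwo(long requested) {
    long highest = Long.highestOneBit(requested);
    return (highest == requested) ? highest : highest << 1;
  }

  /** True if the rounded request still fits under the allocator's limit. */
  static boolean fits(long currentAllocation, long requested, long limit) {
    return currentAllocation + roundToNextPowerOfTwo(requested) <= limit;
  }

  public static void main(String[] args) {
    long requested = 15_834_000L;   // size requested by VarCharVector.allocateNew()
    long current   = 525_809_920L;  // "Current allocation" from the log
    long limit     = 530_000_000L;  // illustrative limit; the real value is not in the log

    System.out.println("rounded request  = " + roundToNextPowerOfTwo(requested)); // 16777216
    System.out.println("fits under limit = " + fits(current, requested, limit));  // false
  }
}
{code}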

> Varchar vector throws an assertion error when allocating a new vector
> ---------------------------------------------------------------------
>
>                 Key: DRILL-5670
>                 URL: https://issues.apache.org/jira/browse/DRILL-5670
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Relational Operators
>    Affects Versions: 1.11.0
>            Reporter: Rahul Challapalli
>            Assignee: Paul Rogers
>             Fix For: 1.12.0
>
>         Attachments: 26555749-4d36-10d2-6faf-e403db40c370.sys.drill, 
> 266290f3-5fdc-5873-7372-e9ee053bf867.sys.drill, 
> 269969ca-8d4d-073a-d916-9031e3d3fbf0.sys.drill, drillbit.log, drillbit.log, 
> drillbit.log, drillbit.out, drill-override.conf
>
>
> I am running this test on a private branch of [paul's 
> repository|https://github.com/paul-rogers/drill]. Below is the commit info
> {code}
> git.commit.id.abbrev=d86e16c
> git.commit.user.email=prog...@maprtech.com
> git.commit.message.full=DRILL-5601\: Rollup of external sort fixes an 
> improvements\n\n- DRILL-5513\: Managed External Sort \: OOM error during the 
> merge phase\n- DRILL-5519\: Sort fails to spill and results in an OOM\n- 
> DRILL-5522\: OOM during the merge and spill process of the managed external 
> sort\n- DRILL-5594\: Excessive buffer reallocations during merge phase of 
> external sort\n- DRILL-5597\: Incorrect "bits" vector allocation in nullable 
> vectors allocateNew()\n- DRILL-5602\: Repeated List Vector fails to 
> initialize the offset vector\n\nAll of the bugs have to do with handling 
> low-memory conditions, and with\ncorrectly estimating the sizes of vectors, 
> even when those vectors come\nfrom the spill file or from an exchange. Hence, 
> the changes for all of\nthe above issues are interrelated.\n
> git.commit.id=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.message.short=DRILL-5601\: Rollup of external sort fixes an 
> improvements
> git.commit.user.name=Paul Rogers
> git.build.user.name=Rahul Challapalli
> git.commit.id.describe=0.9.0-1078-gd86e16c
> git.build.user.email=challapallira...@gmail.com
> git.branch=d86e16c551e7d3553f2cde748a739b1c5a7a7659
> git.commit.time=05.07.2017 @ 20\:34\:39 PDT
> git.build.time=12.07.2017 @ 14\:27\:03 PDT
> git.remote.origin.url=g...@github.com\:paul-rogers/drill.git
> {code}
> The query below fails with an AssertionError:
> {code}
> 0: jdbc:drill:zk=10.10.100.190:5181> ALTER SESSION SET 
> `exec.sort.disable_managed` = false;
> +-------+-------------------------------------+
> |  ok   |               summary               |
> +-------+-------------------------------------+
> | true  | exec.sort.disable_managed updated.  |
> +-------+-------------------------------------+
> 1 row selected (1.044 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.memory.max_query_memory_per_node` = 482344960;
> +-------+----------------------------------------------------+
> |  ok   |                      summary                       |
> +-------+----------------------------------------------------+
> | true  | planner.memory.max_query_memory_per_node updated.  |
> +-------+----------------------------------------------------+
> 1 row selected (0.372 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_node` = 1;
> +-------+--------------------------------------+
> |  ok   |               summary                |
> +-------+--------------------------------------+
> | true  | planner.width.max_per_node updated.  |
> +-------+--------------------------------------+
> 1 row selected (0.292 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> alter session set 
> `planner.width.max_per_query` = 1;
> +-------+---------------------------------------+
> |  ok   |                summary                |
> +-------+---------------------------------------+
> | true  | planner.width.max_per_query updated.  |
> +-------+---------------------------------------+
> 1 row selected (0.25 seconds)
> 0: jdbc:drill:zk=10.10.100.190:5181> select count(*) from (select * from 
> dfs.`/drill/testdata/resource-manager/3500cols.tbl` order by 
> columns[450],columns[330],columns[230],columns[220],columns[110],columns[90],columns[80],columns[70],columns[40],columns[10],columns[20],columns[30],columns[40],columns[50],
>  
> columns[454],columns[413],columns[940],columns[834],columns[73],columns[140],columns[104],columns[2222],columns[30],columns[2420],columns[1520],
>  columns[1410], 
> columns[1110],columns[1290],columns[2380],columns[705],columns[45],columns[1054],columns[2430],columns[420],columns[404],columns[3350],
>  
> columns[3333],columns[153],columns[356],columns[84],columns[745],columns[1450],columns[103],columns[2065],columns[343],columns[3420],columns[530],
>  columns[3210] ) d where d.col433 = 'sjka skjf';
> Error: RESOURCE ERROR: External Sort encountered an error while spilling to 
> disk
> Fragment 2:0
> [Error Id: 26b55576-1a5c-4756-96d6-fbec25eecf03 on qa-node190.qa.lab:31010]
>   (java.lang.AssertionError) null
>     org.apache.drill.exec.vector.VarCharVector.allocateNew():400
>     org.apache.drill.exec.vector.RepeatedVarCharVector.allocateNew():272
>     
> org.apache.drill.exec.vector.AllocationHelper.allocatePrecomputedChildCount():37
>     org.apache.drill.exec.vector.AllocationHelper.allocate():44
>     org.apache.drill.exec.record.SmartAllocationHelper.allocateVector():117
>     org.apache.drill.exec.record.SmartAllocationHelper.allocateVector():97
>     org.apache.drill.exec.record.SmartAllocationHelper.allocateBatch():87
>     
> org.apache.drill.exec.physical.impl.xsort.managed.PriorityQueueCopierWrapper$BatchMerger.next():262
>     
> org.apache.drill.exec.physical.impl.xsort.managed.SpilledRuns.doMergeAndSpill():171
>     
> org.apache.drill.exec.physical.impl.xsort.managed.SpilledRuns.safeMergeAndSpill():138
>     
> org.apache.drill.exec.physical.impl.xsort.managed.SpilledRuns.mergeAndSpill():91
>     
> org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.spillFromMemory():298
>     org.apache.drill.exec.physical.impl.xsort.managed.SortImpl.addBatch():191
>     
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.loadBatch():402
>     
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.load():338
>     
> org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch.innerNext():290
>     org.apache.drill.exec.record.AbstractRecordBatch.next():162
>     
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():225
>     org.apache.drill.exec.record.AbstractRecordBatch.next():119
>     org.apache.drill.exec.record.AbstractRecordBatch.next():109
>     org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
>     
> org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext():93
>     org.apache.drill.exec.record.AbstractRecordBatch.next():162
>     
> org.apache.drill.exec.physical.impl.validate.IteratorValidatorBatchIterator.next():225
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():105
>     
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
>     org.apache.drill.exec.physical.impl.BaseRootExec.next():95
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():234
>     org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():227
>     java.security.AccessController.doPrivileged():-2
>     javax.security.auth.Subject.doAs():415
>     org.apache.hadoop.security.UserGroupInformation.doAs():1595
>     org.apache.drill.exec.work.fragment.FragmentExecutor.run():227
>     org.apache.drill.common.SelfCleaningRunnable.run():38
>     java.util.concurrent.ThreadPoolExecutor.runWorker():1145
>     java.util.concurrent.ThreadPoolExecutor$Worker.run():615
>     java.lang.Thread.run():745 (state=,code=0)
> {code}
> The data set is too large to attach here. Logs and profiles are attached.
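
For reference, a minimal JDBC sketch of the reproduction quoted above, assuming Drill's standard JDBC driver is on the classpath and the ZooKeeper address from the report is reachable; the ORDER BY column list is abbreviated here, and the full list is in the quoted query.
{code:java}
// Hedged reproduction sketch via Drill's JDBC driver; it mirrors the session
// settings and (abbreviated) query from the report above.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Drill5670Repro {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=10.10.100.190:5181");
         Statement stmt = conn.createStatement()) {

      // Same session settings as in the report: enable the managed sort,
      // cap query memory, and force single-threaded execution.
      stmt.execute("ALTER SESSION SET `exec.sort.disable_managed` = false");
      stmt.execute("ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 482344960");
      stmt.execute("ALTER SESSION SET `planner.width.max_per_node` = 1");
      stmt.execute("ALTER SESSION SET `planner.width.max_per_query` = 1");

      // Abbreviated form of the failing query (the full ORDER BY list is in the report).
      String sql = "SELECT COUNT(*) FROM (SELECT * FROM "
          + "dfs.`/drill/testdata/resource-manager/3500cols.tbl` "
          + "ORDER BY columns[450], columns[330], columns[230]) d "
          + "WHERE d.col433 = 'sjka skjf'";

      try (ResultSet rs = stmt.executeQuery(sql)) {
        while (rs.next()) {
          System.out.println("count = " + rs.getLong(1));
        }
      }
    }
  }
}
{code}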



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
