[ 
https://issues.apache.org/jira/browse/DRILL-5077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15703004#comment-15703004
 ] 

Khurram Faraaz commented on DRILL-5077:
---------------------------------------

The problem (memory leak) is seen even when there are two fragments; splitting the 
original JSON file into two smaller files leads to two fragments.
Again, when we stop the Drillbit while the query is running (i.e. under 
execution), the memory leak is reported in drillbit.log.
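For reference, a minimal sketch of the repro and of how the leak shows up in the log. The cluster-side commands are commented out and assume a default tarball layout (bin/sqlline, bin/drillbit.sh, log/drillbit.log); the grep runs against an excerpt matching the message format emitted by BaseAllocator.close() in the trace below.

```shell
#!/bin/sh
# On a live cluster the repro would be roughly (paths are assumptions
# for a default Drill install, not taken from this report):
#   bin/sqlline -u jdbc:drill:zk=local -e 'select count(*) from `large_json`;' &
#   sleep 2 && bin/drillbit.sh stop

# Detection: on shutdown the root allocator reports any outstanding
# allocation. Grep a sample excerpt for the leak message:
cat > /tmp/drillbit-excerpt.log <<'EOF'
java.lang.RuntimeException: Exception while closing
Caused by: java.lang.IllegalStateException: Memory was leaked by query. Memory leaked: (1048576)
EOF
grep -c 'Memory was leaked by query' /tmp/drillbit-excerpt.log
```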

{noformat}
2016-11-28 19:54:16,460 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.drill.exec.work.foreman.Foreman - Query text for query id 
27c37497-2f9b-b1ef-5540-4bbe5a85e348: select count(*) from `large_json`
2016-11-28 19:54:17,026 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-11-28 19:54:17,027 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-11-28 19:54:17,040 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-11-28 19:54:17,040 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 1
2016-11-28 19:54:17,042 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,043 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,043 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,043 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,043 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,044 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,044 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,045 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
Mon Nov 28 19:54:17 UTC 2016 Terminating drillbit pid 16529
2016-11-28 19:54:17,270 [Drillbit-ShutdownHook#0] INFO  
o.apache.drill.exec.server.Drillbit - Received shutdown request.
2016-11-28 19:54:17,404 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
numFiles: 3
2016-11-28 19:54:17,448 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 3 out of 3 using 
3 threads. Time: 40ms total, 37.756094ms avg, 39ms max.
2016-11-28 19:54:17,448 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:foreman] INFO  
o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 3 out of 3 using 
3 threads. Earliest start: 702.262000 μs, Latest start: 1775.180000 μs, Average 
start: 1193.948333 μs .
2016-11-28 19:54:17,930 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:0:0: 
State change requested AWAITING_ALLOCATION --> RUNNING
2016-11-28 19:54:17,937 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:0:0] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:0:0: 
State to report: RUNNING
2016-11-28 19:54:18,046 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:1:2] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:1:2: 
State change requested AWAITING_ALLOCATION --> RUNNING
2016-11-28 19:54:18,046 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:1:2] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:1:2: 
State to report: RUNNING
2016-11-28 19:54:18,674 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:1:2] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:1:2: 
State change requested RUNNING --> FINISHED
2016-11-28 19:54:18,674 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:1:2] INFO  
o.a.d.e.w.f.FragmentStatusReporter - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:1:2: 
State to report: FINISHED
2016-11-28 19:54:24,345 [pool-7-thread-2] INFO  
o.a.drill.exec.rpc.data.DataServer - closed eventLoopGroup 
io.netty.channel.nio.NioEventLoopGroup@67a9cc26 in 1035 ms
2016-11-28 19:54:24,345 [pool-7-thread-2] INFO  
o.a.drill.exec.service.ServiceEngine - closed dataPool in 1036 ms
2016-11-28 19:54:26,336 [Drillbit-ShutdownHook#0] WARN  
o.apache.drill.exec.work.WorkManager - Closing WorkManager but there are 1 
running fragments.
2016-11-28 19:54:27,347 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:0:0: 
State change requested RUNNING --> FAILED
2016-11-28 19:54:27,348 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:0:0] INFO  
o.a.d.e.w.fragment.FragmentExecutor - 27c37497-2f9b-b1ef-5540-4bbe5a85e348:0:0: 
State change requested FAILED --> FINISHED
2016-11-28 19:54:27,352 [27c37497-2f9b-b1ef-5540-4bbe5a85e348:frag:0:0] ERROR 
o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: InterruptedException

Fragment 0:0

[Error Id: a08310c1-95eb-4cea-ac18-7cd6ce52f1b8 on centos-01.qa.lab:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
InterruptedException

Fragment 0:0

[Error Id: a08310c1-95eb-4cea-ac18-7cd6ce52f1b8 on centos-01.qa.lab:31010]
        at 
org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
 ~[drill-common-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
 [drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
 [drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
 [drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) 
[drill-common-1.9.0.jar:1.9.0]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_101]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_101]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: 
Interrupted but context.shouldContinue() is true
        at 
org.apache.drill.exec.work.batch.BaseRawBatchBuffer.getNext(BaseRawBatchBuffer.java:178)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.unorderedreceiver.UnorderedReceiverBatch.getNextBatch(UnorderedReceiverBatch.java:141)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.unorderedreceiver.UnorderedReceiverBatch.next(UnorderedReceiverBatch.java:164)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.test.generated.StreamingAggregatorGen0.doWork(StreamingAggTemplate.java:173)
 ~[na:na]
        at 
org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:167)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at java.security.AccessController.doPrivileged(Native Method) 
~[na:1.7.0_101]
        at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_101]
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
 ~[hadoop-common-2.7.0-mapr-1607.jar:na]
        at 
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
 [drill-java-exec-1.9.0.jar:1.9.0]
        ... 4 common frames omitted
Caused by: java.lang.InterruptedException: null
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2017)
 ~[na:1.7.0_101]
        at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2052)
 ~[na:1.7.0_101]
        at 
java.util.concurrent.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:489)
 ~[na:1.7.0_101]
        at 
java.util.concurrent.LinkedBlockingDeque.take(LinkedBlockingDeque.java:678) 
~[na:1.7.0_101]
        at 
org.apache.drill.exec.work.batch.UnlimitedRawBatchBuffer$UnlimitedBufferQueue.take(UnlimitedRawBatchBuffer.java:61)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.work.batch.BaseRawBatchBuffer.getNext(BaseRawBatchBuffer.java:170)
 ~[drill-java-exec-1.9.0.jar:1.9.0]
        ... 24 common frames omitted
2016-11-28 19:54:28,347 [Drillbit-ShutdownHook#0] ERROR 
o.a.d.exec.server.BootStrapContext - Pool did not terminate
2016-11-28 19:54:28,349 [Drillbit-ShutdownHook#0] WARN  
o.apache.drill.exec.server.Drillbit - Failure on close()
java.lang.RuntimeException: Exception while closing
        at 
org.apache.drill.common.DrillAutoCloseables.closeNoChecked(DrillAutoCloseables.java:46)
 ~[drill-common-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:153) 
~[drill-java-exec-1.9.0.jar:1.9.0]
        at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
~[drill-common-1.9.0.jar:1.9.0]
        at org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
~[drill-common-1.9.0.jar:1.9.0]
        at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:160) 
~[drill-java-exec-1.9.0.jar:1.9.0]
        at 
org.apache.drill.exec.server.Drillbit$ShutdownThread.run(Drillbit.java:254) 
[drill-java-exec-1.9.0.jar:1.9.0]
Caused by: java.lang.IllegalStateException: Memory was leaked by query. Memory 
leaked: (1048576)
Allocator(ROOT) 0/1048576/9244864/8589934592 (res/actual/peak/limit)

        at 
org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:492) 
~[drill-memory-base-1.9.0.jar:1.9.0]
        at 
org.apache.drill.common.DrillAutoCloseables.closeNoChecked(DrillAutoCloseables.java:44)
 ~[drill-common-1.9.0.jar:1.9.0]
        ... 5 common frames omitted
2016-11-28 19:54:28,350 [Drillbit-ShutdownHook#0] INFO  
o.apache.drill.exec.server.Drillbit - Shutdown completed (11078 ms).
Mon Nov 28 19:56:38 UTC 2016 Starting drillbit on centos-01.qa.lab
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 192931
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 192931
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
{noformat}

> Memory Leak - 
> --------------
>
>                 Key: DRILL-5077
>                 URL: https://issues.apache.org/jira/browse/DRILL-5077
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Flow
>    Affects Versions: 1.9.0
>         Environment: 4 node cluster CentOS
>            Reporter: Khurram Faraaz
>            Priority: Blocker
>
> terminating foreman drillbit while a query is running results in memory leak
> Drill 1.9.0 git commit id: 4312d65b
> Stack trace from drillbit.log
> {noformat}
> 2016-11-28 06:12:45,338 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.drill.exec.work.foreman.Foreman - Query text for query id 
> 27c43522-79ca-989f-3659-d7ccbc77e2e7: select count(*) from `twoKeyJsn.json`
> 2016-11-28 06:12:45,602 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,602 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,603 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,603 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,603 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,633 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, 
> numFiles: 1
> 2016-11-28 06:12:45,669 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Time: 33ms total, 33.494123ms avg, 33ms max.
> 2016-11-28 06:12:45,669 [27c43522-79ca-989f-3659-d7ccbc77e2e7:foreman] INFO  
> o.a.d.e.s.schedule.BlockMapBuilder - Get block maps: Executed 1 out of 1 
> using 1 threads. Earliest start: 9.540000 μs, Latest start: 9.540000 μs, 
> Average start: 9.540000 μs .
> 2016-11-28 06:12:45,913 [27c43522-79ca-989f-3659-d7ccbc77e2e7:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27c43522-79ca-989f-3659-d7ccbc77e2e7:0:0: State change requested 
> AWAITING_ALLOCATION --> RUNNING
> 2016-11-28 06:12:45,913 [27c43522-79ca-989f-3659-d7ccbc77e2e7:frag:0:0] INFO  
> o.a.d.e.w.f.FragmentStatusReporter - 
> 27c43522-79ca-989f-3659-d7ccbc77e2e7:0:0: State to report: RUNNING
> Mon Nov 28 06:12:48 UTC 2016 Terminating drillbit pid 28004
> 2016-11-28 06:12:48,697 [Drillbit-ShutdownHook#0] INFO  
> o.apache.drill.exec.server.Drillbit - Received shutdown request.
> 2016-11-28 06:12:55,749 [pool-6-thread-2] INFO  
> o.a.drill.exec.rpc.data.DataServer - closed eventLoopGroup 
> io.netty.channel.nio.NioEventLoopGroup@15bcfacd in 1017 ms
> 2016-11-28 06:12:55,750 [pool-6-thread-2] INFO  
> o.a.drill.exec.service.ServiceEngine - closed dataPool in 1018 ms
> 2016-11-28 06:12:57,749 [Drillbit-ShutdownHook#0] WARN  
> o.apache.drill.exec.work.WorkManager - Closing WorkManager but there are 1 
> running fragments.
> 2016-11-28 06:12:57,751 [27c43522-79ca-989f-3659-d7ccbc77e2e7:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27c43522-79ca-989f-3659-d7ccbc77e2e7:0:0: State change requested RUNNING --> 
> FAILED
> 2016-11-28 06:12:57,751 [27c43522-79ca-989f-3659-d7ccbc77e2e7:frag:0:0] INFO  
> o.a.d.e.w.fragment.FragmentExecutor - 
> 27c43522-79ca-989f-3659-d7ccbc77e2e7:0:0: State change requested FAILED --> 
> FINISHED
> 2016-11-28 06:12:57,756 [27c43522-79ca-989f-3659-d7ccbc77e2e7:frag:0:0] ERROR 
> o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: NullPointerException
> Fragment 0:0
> [Error Id: 2df2f9a1-a7bf-4454-a31b-717ab4ebd815 on centos-01.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: 
> NullPointerException
> Fragment 0:0
> [Error Id: 2df2f9a1-a7bf-4454-a31b-717ab4ebd815 on centos-01.qa.lab:31010]
>         at 
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543)
>  ~[drill-common-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
>  [drill-common-1.9.0.jar:1.9.0]
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_101]
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_101]
>         at java.lang.Thread.run(Thread.java:745) [na:1.7.0_101]
> Caused by: java.lang.NullPointerException: null
>         at com.mapr.fs.MapRFsInStream.read(MapRFsInStream.java:276) 
> ~[maprfs-5.2.0-mapr.jar:5.2.0-mapr]
>         at java.io.DataInputStream.read(DataInputStream.java:149) 
> ~[na:1.7.0_101]
>         at 
> org.apache.drill.exec.store.dfs.DrillFSDataInputStream$WrappedInputStream.read(DrillFSDataInputStream.java:216)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at java.io.DataInputStream.read(DataInputStream.java:149) 
> ~[na:1.7.0_101]
>         at 
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser.loadMore(UTF8StreamJsonParser.java:207)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser.parseEscapedName(UTF8StreamJsonParser.java:1983)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser.slowParseName(UTF8StreamJsonParser.java:1885)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser._parseName(UTF8StreamJsonParser.java:1669)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:752)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> com.fasterxml.jackson.core.base.ParserMinimalBase.skipChildren(ParserMinimalBase.java:147)
>  ~[jackson-core-2.7.1.jar:2.7.1]
>         at 
> org.apache.drill.exec.store.easy.json.reader.CountingJsonReader.write(CountingJsonReader.java:58)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.store.easy.json.JSONRecordReader.next(JSONRecordReader.java:206)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:178) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.test.generated.StreamingAggregatorGen2.doWork(StreamingAggTemplate.java:173)
>  ~[na:na]
>         at 
> org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:167)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:137)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at java.security.AccessController.doPrivileged(Native Method) 
> ~[na:1.7.0_101]
>         at javax.security.auth.Subject.doAs(Subject.java:415) ~[na:1.7.0_101]
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
>  ~[hadoop-common-2.7.0-mapr-1607.jar:na]
>         at 
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
>  [drill-java-exec-1.9.0.jar:1.9.0]
>         ... 4 common frames omitted
> 2016-11-28 06:12:59,758 [Drillbit-ShutdownHook#0] ERROR 
> o.a.d.exec.server.BootStrapContext - Pool did not terminate
> 2016-11-28 06:12:59,760 [Drillbit-ShutdownHook#0] WARN  
> o.apache.drill.exec.server.Drillbit - Failure on close()
> java.lang.RuntimeException: Exception while closing
>         at 
> org.apache.drill.common.DrillAutoCloseables.closeNoChecked(DrillAutoCloseables.java:46)
>  ~[drill-common-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.server.BootStrapContext.close(BootStrapContext.java:153)
>  ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:76) 
> ~[drill-common-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.common.AutoCloseables.close(AutoCloseables.java:64) 
> ~[drill-common-1.9.0.jar:1.9.0]
>         at org.apache.drill.exec.server.Drillbit.close(Drillbit.java:160) 
> ~[drill-java-exec-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.exec.server.Drillbit$ShutdownThread.run(Drillbit.java:254) 
> [drill-java-exec-1.9.0.jar:1.9.0]
> Caused by: java.lang.IllegalStateException: Memory was leaked by query. 
> Memory leaked: (1048576)
> Allocator(ROOT) 0/1048576/12686144/8589934592 (res/actual/peak/limit)
>         at 
> org.apache.drill.exec.memory.BaseAllocator.close(BaseAllocator.java:492) 
> ~[drill-memory-base-1.9.0.jar:1.9.0]
>         at 
> org.apache.drill.common.DrillAutoCloseables.closeNoChecked(DrillAutoCloseables.java:44)
>  ~[drill-common-1.9.0.jar:1.9.0]
>         ... 5 common frames omitted
> 2016-11-28 06:12:59,760 [Drillbit-ShutdownHook#0] INFO  
> o.apache.drill.exec.server.Drillbit - Shutdown completed (11063 ms).
> Mon Nov 28 06:13:46 UTC 2016 Starting drillbit on centos-01.qa.lab
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 192931
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 192931
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
