[
https://issues.apache.org/jira/browse/DRILL-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Khurram Faraaz updated DRILL-5191:
----------------------------------
Attachment: 2789eba3-60f0-0b2f-eba8-82331735d5c4.sys.drill
Query profile attached.
> OutOfMemoryException - TPCDS query4
> ------------------------------------
>
> Key: DRILL-5191
> URL: https://issues.apache.org/jira/browse/DRILL-5191
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Flow
> Affects Versions: 1.10.0
> Environment: 4 node cluster CentOS
> Reporter: Khurram Faraaz
> Priority: Critical
> Attachments: 2789eba3-60f0-0b2f-eba8-82331735d5c4.sys.drill
>
>
> TPC-DS query 4 fails with an OutOfMemoryException when run against the SF100
> dataset on Drill 1.10.0 (commit ee399317) on a 4-node CentOS cluster.
> Query 4: https://raw.githubusercontent.com/Agirish/tpcds/master/query4.sql
> Total number of fragments: 1,125
> Stack trace from drillbit.log:
> {noformat}
> 2017-01-11 11:17:57,007 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:33:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:33:5: State change requested
> AWAITING_ALLOCATION --> RUNNING
> 2017-01-11 11:17:57,008 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:33:5] INFO
> o.a.d.e.w.f.FragmentStatusReporter -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:33:5: State to report: RUNNING
> 2017-01-11 11:17:57,009 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:33:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:33:5: State change requested RUNNING -->
> FAILED
> 2017-01-11 11:17:57,009 [BitServer-6] ERROR
> o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication.
> Connection: /10.10.100.202:31012 <--> /10.10.100.201:44712 (data server).
> Closing connection.
> org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate
> buffer of size 16384 due to memory limit. Current allocation: 16777216
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:216)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:191)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.DrillByteBufAllocator.buffer(DrillByteBufAllocator.java:49)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.DrillByteBufAllocator.ioBuffer(DrillByteBufAllocator.java:64)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> [netty-common-4.0.27.Final.jar:4.0.27.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> 2017-01-11 11:17:57,009 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:24:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:24:5: State change requested
> AWAITING_ALLOCATION --> FAILED
> 2017-01-11 11:17:57,010 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:63:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:63:5: State change requested
> AWAITING_ALLOCATION --> RUNNING
> 2017-01-11 11:17:57,010 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:63:5] INFO
> o.a.d.e.w.f.FragmentStatusReporter -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:63:5: State to report: RUNNING
> 2017-01-11 11:17:57,010 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:24:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:24:5: State change requested FAILED -->
> FINISHED
> 2017-01-11 11:17:57,010 [BitServer-6] INFO
> o.a.d.exec.rpc.ProtobufLengthDecoder - Channel is closed, discarding
> remaining 3240924 byte(s) in buffer.
> 2017-01-11 11:17:57,011 [BitServer-10] ERROR
> o.a.d.exec.rpc.RpcExceptionHandler - Exception in RPC communication.
> Connection: /10.10.100.202:31012 <--> /10.10.100.202:52127 (data server).
> Closing connection.
> org.apache.drill.exec.exception.OutOfMemoryException: Unable to allocate
> buffer of size 4096 due to memory limit. Current allocation: 16777216
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:216)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:191)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.DrillByteBufAllocator.buffer(DrillByteBufAllocator.java:49)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.DrillByteBufAllocator.ioBuffer(DrillByteBufAllocator.java:64)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> [netty-common-4.0.27.Final.jar:4.0.27.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:66:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:66:5: State change requested
> AWAITING_ALLOCATION --> FAILED
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:24:5]
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 24:5
> [Error Id: 53b33d3a-7251-4c1a-a19d-6fc75ed5c494 on centos-02.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 24:5
> [Error Id: 53b33d3a-7251-4c1a-a19d-6fc75ed5c494 on centos-02.qa.lab:31010]
> at
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
> ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_65]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure
> trying to allocate initial reservation for Allocator. Attempted to allocate
> 1000000 bytes and received an outcome of FAILED_PARENT.
> at org.apache.drill.exec.memory.Accountant.<init>(Accountant.java:75)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.<init>(BaseAllocator.java:72)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.ChildAllocator.<init>(ChildAllocator.java:49)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.newChildAllocator(BaseAllocator.java:262)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.FragmentContext.getNewChildAllocator(FragmentContext.java:298)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.OperatorContextImpl.<init>(OperatorContextImpl.java:65)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.FragmentContext.newOperatorContext(FragmentContext.java:358)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.<init>(AbstractRecordBatch.java:53)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.<init>(AbstractSingleRecordBatch.java:35)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.<init>(ProjectRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectBatchCreator.getBatch(ProjectBatchCreator.java:37)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectBatchCreator.getBatch(ProjectBatchCreator.java:30)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:148)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:101)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:79)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:206)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> ... 4 common frames omitted
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:66:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:66:5: State change requested FAILED -->
> FINISHED
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:30:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:30:5: State change requested
> AWAITING_ALLOCATION --> FAILED
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:69:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:69:5: State change requested
> AWAITING_ALLOCATION --> FAILED
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:30:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:30:5: State change requested FAILED -->
> FINISHED
> 2017-01-11 11:17:57,012 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:69:5] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:69:5: State change requested FAILED -->
> FINISHED
> 2017-01-11 11:17:57,013 [BitClient-2] WARN
> o.a.d.exec.rpc.RpcExceptionHandler - Exception occurred with closed channel.
> Connection: /10.10.100.202:52127 <--> centos-02.qa.lab/10.10.100.202:31012
> (data client)
> java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_65]
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> ~[na:1.8.0_65]
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> ~[na:1.8.0_65]
> at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_65]
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> ~[na:1.8.0_65]
> at
> io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
> ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
> at io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:407)
> ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:30)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:4.0.27.Final]
> at io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:770)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:4.0.27.Final]
> at
> io.netty.buffer.MutableWrappedByteBuf.setBytes(MutableWrappedByteBuf.java:280)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:4.0.27.Final]
> at
> io.netty.buffer.ExpandableByteBuf.setBytes(ExpandableByteBuf.java:26)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:4.0.27.Final]
> at
> io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
> ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:241)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
> ~[netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> [netty-common-4.0.27.Final.jar:4.0.27.Final]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> 2017-01-11 11:17:57,013 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:66:5]
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 66:5
> [Error Id: cba2a57b-1ec3-4b86-9914-209cdb72a518 on centos-02.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 66:5
> Fragment 66:5
> [Error Id: cba2a57b-1ec3-4b86-9914-209cdb72a518 on centos-02.qa.lab:31010]
> at
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
> ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_65]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Failure
> trying to allocate initial reservation for Allocator. Attempted to allocate
> 1000000 bytes and received an outcome of FAILED_PARENT.
> at org.apache.drill.exec.memory.Accountant.<init>(Accountant.java:75)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.<init>(BaseAllocator.java:72)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.ChildAllocator.<init>(ChildAllocator.java:49)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.newChildAllocator(BaseAllocator.java:262)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.FragmentContext.getNewChildAllocator(FragmentContext.java:298)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.OperatorContextImpl.<init>(OperatorContextImpl.java:65)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.ops.FragmentContext.newOperatorContext(FragmentContext.java:358)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.<init>(AbstractRecordBatch.java:53)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.<init>(AbstractSingleRecordBatch.java:35)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.<init>(ProjectRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectBatchCreator.getBatch(ProjectBatchCreator.java:37)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectBatchCreator.getBatch(ProjectBatchCreator.java:30)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:148)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:128)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:171)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:101)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:79)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:206)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> ... 4 common frames omitted
> 2017-01-11 11:17:57,029 [BitClient-2] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:33:5: State change requested FAILED -->
> FAILED
> 2017-01-11 11:17:57,029 [2789eba3-60f0-0b2f-eba8-82331735d5c4:frag:69:5]
> ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 69:5
> [Error Id: 077595d4-e1b9-4a64-a713-a5a712480b8e on centos-02.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> OutOfMemoryException: Failure trying to allocate initial reservation for
> Allocator. Attempted to allocate 1000000 bytes and received an outcome of
> FAILED_PARENT.
> Fragment 69:5
> ...
> [Error Id: 9a393521-1ac2-47ac-ac36-117d672433a5 on centos-02.qa.lab:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> OutOfMemoryException: Unable to allocate buffer of size 2097152 (rounded from
> 1048590) due to memory limit. Current allocation: 26099712
> Fragment 33:5
> [Error Id: 9a393521-1ac2-47ac-ac36-117d672433a5 on centos-02.qa.lab:31010]
> at
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544)
> ~[drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
> [drill-common-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_65]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_65]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Error in
> parquet record reader.
> Message:
> Hadoop path: /drill/testdata/tpcds_sf100/parquet/catalog_sales/1_15_0.parquet
> Total records read: 0
> Mock records read: 0
> Records to read: 10922
> Row group index: 0
> Records in row group: 6227874
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message root {
> optional int32 cs_sold_date_sk;
> optional int32 cs_sold_time_sk;
> optional int32 cs_ship_date_sk;
> optional int32 cs_bill_customer_sk;
> optional int32 cs_bill_cdemo_sk;
> optional int32 cs_bill_hdemo_sk;
> optional int32 cs_bill_addr_sk;
> optional int32 cs_ship_customer_sk;
> optional int32 cs_ship_cdemo_sk;
> optional int32 cs_ship_hdemo_sk;
> optional int32 cs_ship_addr_sk;
> optional int32 cs_call_center_sk;
> optional int32 cs_catalog_page_sk;
> optional int32 cs_ship_mode_sk;
> optional int32 cs_warehouse_sk;
> optional int32 cs_item_sk;
> optional int32 cs_promo_sk;
> optional int32 cs_order_number;
> optional int32 cs_quantity;
> optional int32 cs_wholesale_cost (DECIMAL(7,2));
> optional int32 cs_list_price (DECIMAL(7,2));
> optional int32 cs_sales_price (DECIMAL(7,2));
> optional int32 cs_ext_discount_amt (DECIMAL(7,2));
> optional int32 cs_ext_sales_price (DECIMAL(7,2));
> optional int32 cs_ext_wholesale_cost (DECIMAL(7,2));
> optional int32 cs_ext_list_price (DECIMAL(7,2));
> optional int32 cs_ext_tax (DECIMAL(7,2));
> optional int32 cs_coupon_amt (DECIMAL(7,2));
> optional int32 cs_ext_ship_cost (DECIMAL(7,2));
> optional int32 cs_net_paid (DECIMAL(7,2));
> optional int32 cs_net_paid_inc_tax (DECIMAL(7,2));
> optional int32 cs_net_paid_inc_ship (DECIMAL(7,2));
> optional int32 cs_net_paid_inc_ship_tax (DECIMAL(7,2));
> optional int32 cs_net_profit (DECIMAL(7,2));
> }
> , metadata: {}}, blocks: [BlockMetaData{6227874, 847440378
> [ColumnMetaData{SNAPPY [cs_sold_date_sk] INT32 [RLE, BIT_PACKED, PLAIN], 4},
> ColumnMetaData{SNAPPY [cs_sold_time_sk] INT32 [RLE, BIT_PACKED, PLAIN],
> 1255924}, ColumnMetaData{SNAPPY [cs_ship_date_sk] INT32 [RLE, BIT_PACKED,
> PLAIN], 6184965}, ColumnMetaData{SNAPPY [cs_bill_customer_sk] INT32 [RLE,
> BIT_PACKED, PLAIN], 18656870}, ColumnMetaData{SNAPPY [cs_bill_cdemo_sk] INT32
> [RLE, BIT_PACKED, PLAIN], 23597370}, ColumnMetaData{SNAPPY
> [cs_bill_hdemo_sk] INT32 [RLE, BIT_PACKED, PLAIN], 28537396},
> ColumnMetaData{SNAPPY [cs_bill_addr_sk] INT32 [RLE, BIT_PACKED, PLAIN],
> 33365540}, ColumnMetaData{SNAPPY [cs_ship_customer_sk] INT32 [RLE,
> BIT_PACKED, PLAIN], 38305493}, ColumnMetaData{SNAPPY [cs_ship_cdemo_sk] INT32
> [RLE, BIT_PACKED, PLAIN], 43246354}, ColumnMetaData{SNAPPY
> [cs_ship_hdemo_sk] INT32 [RLE, BIT_PACKED, PLAIN], 48187546},
> ColumnMetaData{SNAPPY [cs_ship_addr_sk] INT32 [RLE, BIT_PACKED, PLAIN],
> 53015279}, ColumnMetaData{SNAPPY [cs_call_center_sk] INT32 [RLE, BIT_PACKED,
> PLAIN], 57955581}, ColumnMetaData{SNAPPY [cs_catalog_page_sk] INT32 [RLE,
> BIT_PACKED, PLAIN], 61247579}, ColumnMetaData{SNAPPY [cs_ship_mode_sk] INT32
> [RLE, BIT_PACKED, PLAIN], 76663937}, ColumnMetaData{SNAPPY [cs_warehouse_sk]
> INT32 [RLE, BIT_PACKED, PLAIN], 88571488}, ColumnMetaData{SNAPPY
> [cs_item_sk] INT32 [RLE, BIT_PACKED, PLAIN], 100277398},
> ColumnMetaData{SNAPPY [cs_promo_sk] INT32 [RLE, BIT_PACKED, PLAIN],
> 125037788}, ColumnMetaData{SNAPPY [cs_order_number] INT32 [RLE, BIT_PACKED,
> PLAIN], 142133844}, ColumnMetaData{SNAPPY [cs_quantity] INT32 [RLE,
> BIT_PACKED, PLAIN], 146987270}, ColumnMetaData{SNAPPY [cs_wholesale_cost]
> INT32 [RLE, BIT_PACKED, PLAIN], 159490859}, ColumnMetaData{SNAPPY
> [cs_list_price] INT32 [RLE, BIT_PACKED, PLAIN], 182216370},
> ColumnMetaData{SNAPPY [cs_sales_price] INT32 [RLE, BIT_PACKED, PLAIN],
> 206230189}, ColumnMetaData{SNAPPY [cs_ext_discount_amt] INT32 [RLE,
> BIT_PACKED, PLAIN], 228905479}, ColumnMetaData{SNAPPY [cs_ext_sales_price]
> INT32 [RLE, BIT_PACKED, PLAIN], 253780442}, ColumnMetaData{SNAPPY
> [cs_ext_wholesale_cost] INT32 [RLE, BIT_PACKED, PLAIN], 278655927},
> ColumnMetaData{SNAPPY [cs_ext_list_price] INT32 [RLE, BIT_PACKED, PLAIN],
> 303537715}, ColumnMetaData{SNAPPY [cs_ext_tax] INT32 [RLE, BIT_PACKED,
> PLAIN], 328418004}, ColumnMetaData{SNAPPY [cs_coupon_amt] INT32 [RLE,
> BIT_PACKED, PLAIN], 349842163}, ColumnMetaData{SNAPPY [cs_ext_ship_cost]
> INT32 [RLE, BIT_PACKED, PLAIN], 357118937}, ColumnMetaData{SNAPPY
> [cs_net_paid] INT32 [RLE, BIT_PACKED, PLAIN], 381766082},
> ColumnMetaData{SNAPPY [cs_net_paid_inc_tax] INT32 [RLE, BIT_PACKED, PLAIN],
> 406627412}, ColumnMetaData{SNAPPY [cs_net_paid_inc_ship] INT32 [RLE,
> BIT_PACKED, PLAIN], 431501466}, ColumnMetaData{SNAPPY
> [cs_net_paid_inc_ship_tax] INT32 [RLE, BIT_PACKED, PLAIN], 456418217},
> ColumnMetaData{SNAPPY [cs_net_profit] INT32 [RLE, BIT_PACKED, PLAIN],
> 481334133}]}]}
> at
> org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise(ParquetRecordReader.java:435)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:582)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:179)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.buildSchema(HashJoinBatch.java:175)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.join.HashJoinBatch.buildSchema(HashJoinBatch.java:175)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.buildSchema(HashAggBatch.java:107)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:142)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at java.security.AccessController.doPrivileged(Native Method)
> ~[na:1.8.0_65]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_65]
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
> ~[hadoop-common-2.7.0-mapr-1607.jar:na]
> at
> org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
> [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> ... 4 common frames omitted
> Caused by: org.apache.drill.exec.exception.OutOfMemoryException: Unable to
> allocate buffer of size 2097152 (rounded from 1048590) due to memory limit.
> Current allocation: 26099712
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:216)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.memory.BaseAllocator.buffer(BaseAllocator.java:191)
> ~[drill-memory-base-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.PageReader.allocateTemporaryBuffer(PageReader.java:345)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.decompress(AsyncPageReader.java:162)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.getDecompressedPageData(AsyncPageReader.java:96)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.nextInternal(AsyncPageReader.java:219)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.PageReader.next(PageReader.java:280)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.NullableColumnReader.processPages(NullableColumnReader.java:69)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFieldsSerial(ParquetRecordReader.java:485)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFields(ParquetRecordReader.java:479)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> at
> org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:562)
> ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
> ... 46 common frames omitted
> 2017-01-11 11:17:57,105 [CONTROL-rpc-event-queue] INFO
> o.a.d.e.w.fragment.FragmentExecutor -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:20:3: State change requested RUNNING -->
> CANCELLATION_REQUESTED
> 2017-01-11 11:17:57,105 [CONTROL-rpc-event-queue] INFO
> o.a.d.e.w.f.FragmentStatusReporter -
> 2789eba3-60f0-0b2f-eba8-82331735d5c4:20:3: State to report:
> CANCELLATION_REQUESTED
> {noformat}
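A possible starting point when reproducing or triaging this failure is to reduce the query's memory pressure through Drill session options before re-running query 4. The statements below are a minimal sketch: the option names are standard Drill options, but the values are illustrative assumptions and have not been verified to avoid this OOM.
{noformat}
-- Illustrative settings only; values are assumptions, not a confirmed workaround.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296; -- raise per-query memory to 4 GB
ALTER SESSION SET `planner.width.max_per_node` = 8;                        -- reduce per-node parallelism
ALTER SESSION SET `store.parquet.reader.pagereader.async` = false;         -- use the synchronous Parquet page reader
{noformat}
The last option is mentioned because the final failure in the log above originates in AsyncPageReader.decompress while allocating a temporary decompression buffer.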
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)