[ https://issues.apache.org/jira/browse/DRILL-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17035595#comment-17035595 ]
ASF GitHub Bot commented on DRILL-7578:
---------------------------------------
cgivre commented on issue #1978: DRILL-7578: HDF5 Metadata Queries Fail with Large Files
URL: https://github.com/apache/drill/pull/1978#issuecomment-585353181
Here are the stack traces:
```
apache drill> select *
2..semicolon> from dfs.test.`eFitOut.h5`;
Error: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 8f722576-9dfc-4462-8acd-dcba76815f37 on localhost:31010]
(state=,code=0)
java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 8f722576-9dfc-4462-8acd-dcba76815f37 on localhost:31010]
  at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:537)
  at org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:609)
  at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1278)
  at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:58)
  at org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:667)
  at org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1102)
  at org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1113)
  at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
  at org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:200)
  at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
  at org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
  at sqlline.Commands.executeSingleQuery(Commands.java:1054)
  at sqlline.Commands.execute(Commands.java:1003)
  at sqlline.Commands.sql(Commands.java:967)
  at sqlline.SqlLine.dispatch(SqlLine.java:734)
  at sqlline.SqlLine.begin(SqlLine.java:541)
  at sqlline.SqlLine.start(SqlLine.java:267)
  at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 8f722576-9dfc-4462-8acd-dcba76815f37 on localhost:31010]
  at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:125)
  at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
  at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
  at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
  at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 8f722576-9dfc-4462-8acd-dcba76815f37 on localhost:31010]
  at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:653)
  at org.apache.drill.exec.physical.resultSet.impl.ResultSetLoaderImpl.overflowed(ResultSetLoaderImpl.java:650)
  at org.apache.drill.exec.physical.resultSet.impl.ColumnState$PrimitiveColumnState.overflowed(ColumnState.java:73)
  at org.apache.drill.exec.vector.accessor.writer.BaseScalarWriter.overflowed(BaseScalarWriter.java:222)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter.resize(AbstractFixedWidthWriter.java:251)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter$BaseFixedWidthWriter.prepareWrite(AbstractFixedWidthWriter.java:99)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter$BaseFixedWidthWriter.prepareWrite(AbstractFixedWidthWriter.java:86)
  at org.apache.drill.exec.vector.accessor.ColumnAccessors$Float8ColumnWriter.setDouble(ColumnAccessors.java:1150)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.doubleMatrixHelper(HDF5BatchReader.java:839)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.mapDoubleMatrixField(HDF5BatchReader.java:812)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.projectDataset(HDF5BatchReader.java:546)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.projectMetadataRow(HDF5BatchReader.java:392)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.next(HDF5BatchReader.java:340)
  at org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.next(ShimBatchReader.java:132)
  at org.apache.drill.exec.physical.impl.scan.ReaderState.readBatch(ReaderState.java:414)
  at org.apache.drill.exec.physical.impl.scan.ReaderState.next(ReaderState.java:371)
  at org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction(ScanOperatorExec.java:263)
  at org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.next(ScanOperatorExec.java:234)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.doNext(OperatorDriver.java:201)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.start(OperatorDriver.java:179)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.next(OperatorDriver.java:129)
  at org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch.next(OperatorRecordBatch.java:150)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:122)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:114)
  at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:64)
  at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:87)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:177)
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
  at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83)
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:326)
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:313)
  at .......(:0)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
  at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:313)
  at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
  at .......(:0)
```
This metadata-only query, which selects just `path` and `data_type`, should not fail either:
```
apache drill> select path, data_type
2..semicolon> from dfs.test.`eFitOut.h5`;
Error: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 28070678-b89c-444d-9f0c-cbd3fb69c431 on localhost:31010]
(state=,code=0)
java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 28070678-b89c-444d-9f0c-cbd3fb69c431 on localhost:31010]
  at org.apache.drill.jdbc.impl.DrillCursor.nextRowInternally(DrillCursor.java:537)
  at org.apache.drill.jdbc.impl.DrillCursor.loadInitialSchema(DrillCursor.java:609)
  at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:1278)
  at org.apache.drill.jdbc.impl.DrillResultSetImpl.execute(DrillResultSetImpl.java:58)
  at org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:667)
  at org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1102)
  at org.apache.drill.jdbc.impl.DrillMetaImpl.prepareAndExecute(DrillMetaImpl.java:1113)
  at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
  at org.apache.drill.jdbc.impl.DrillConnectionImpl.prepareAndExecuteInternal(DrillConnectionImpl.java:200)
  at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
  at org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
  at sqlline.Commands.executeSingleQuery(Commands.java:1054)
  at sqlline.Commands.execute(Commands.java:1003)
  at sqlline.Commands.sql(Commands.java:967)
  at sqlline.SqlLine.dispatch(SqlLine.java:734)
  at sqlline.SqlLine.begin(SqlLine.java:541)
  at sqlline.SqlLine.start(SqlLine.java:267)
  at sqlline.SqlLine.main(SqlLine.java:206)
Caused by: org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 28070678-b89c-444d-9f0c-cbd3fb69c431 on localhost:31010]
  at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:125)
  at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
  at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
  at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
  at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
  at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
  at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
A single column value is larger than the maximum allowed size of 16 MB
Fragment 0:0
[Error Id: 28070678-b89c-444d-9f0c-cbd3fb69c431 on localhost:31010]
  at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:653)
  at org.apache.drill.exec.physical.resultSet.impl.ResultSetLoaderImpl.overflowed(ResultSetLoaderImpl.java:650)
  at org.apache.drill.exec.physical.resultSet.impl.ColumnState$PrimitiveColumnState.overflowed(ColumnState.java:73)
  at org.apache.drill.exec.vector.accessor.writer.BaseScalarWriter.overflowed(BaseScalarWriter.java:222)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter.resize(AbstractFixedWidthWriter.java:251)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter$BaseFixedWidthWriter.prepareWrite(AbstractFixedWidthWriter.java:99)
  at org.apache.drill.exec.vector.accessor.writer.AbstractFixedWidthWriter$BaseFixedWidthWriter.prepareWrite(AbstractFixedWidthWriter.java:86)
  at org.apache.drill.exec.vector.accessor.ColumnAccessors$Float8ColumnWriter.setDouble(ColumnAccessors.java:1150)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.doubleMatrixHelper(HDF5BatchReader.java:839)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.mapDoubleMatrixField(HDF5BatchReader.java:812)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.projectDataset(HDF5BatchReader.java:546)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.projectMetadataRow(HDF5BatchReader.java:392)
  at org.apache.drill.exec.store.hdf5.HDF5BatchReader.next(HDF5BatchReader.java:340)
  at org.apache.drill.exec.physical.impl.scan.framework.ShimBatchReader.next(ShimBatchReader.java:132)
  at org.apache.drill.exec.physical.impl.scan.ReaderState.readBatch(ReaderState.java:414)
  at org.apache.drill.exec.physical.impl.scan.ReaderState.next(ReaderState.java:371)
  at org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.nextAction(ScanOperatorExec.java:263)
  at org.apache.drill.exec.physical.impl.scan.ScanOperatorExec.next(ScanOperatorExec.java:234)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.doNext(OperatorDriver.java:201)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.start(OperatorDriver.java:179)
  at org.apache.drill.exec.physical.impl.protocol.OperatorDriver.next(OperatorDriver.java:129)
  at org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch.next(OperatorRecordBatch.java:150)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:122)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:114)
  at org.apache.drill.exec.record.AbstractUnaryRecordBatch.innerNext(AbstractUnaryRecordBatch.java:64)
  at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:87)
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:177)
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
  at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:83)
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:326)
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:313)
  at .......(:0)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
  at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:313)
  at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38)
  at .......(:0)
```
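For context, here is a rough back-of-the-envelope check (illustration only, not code from this PR; the 16 MB figure is taken from the error message above) of why a large double dataset trips the limit even on a metadata query: every element is written through `Float8ColumnWriter.setDouble`, i.e. 8 bytes apiece, so any dataset with more than roughly 2 million doubles cannot fit into a single column value.

```java
// Illustration only: relate Drill's reported 16 MB single-value limit to the
// size of an HDF5 double dataset projected into one column value.
public class ValueSizeCheck {
  public static void main(String[] args) {
    final long maxValueBytes = 16L * 1024 * 1024;   // 16 MB limit reported in the error
    final long bytesPerDouble = Double.BYTES;       // 8 bytes per FLOAT8 element
    final long maxElements = maxValueBytes / bytesPerDouble;
    System.out.println("Max doubles per column value: " + maxElements); // 2,097,152

    // A hypothetical 2048 x 1025 double matrix is already over the limit.
    final long datasetBytes = 2048L * 1025L * bytesPerDouble;           // 16,793,600
    System.out.println("2048 x 1025 doubles = " + datasetBytes
        + " bytes, over limit: " + (datasetBytes > maxValueBytes));
  }
}
```

That lines up with both traces failing inside `HDF5BatchReader.projectDataset`, called from `projectMetadataRow`, regardless of which columns the query selects.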
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> HDF5 Metadata Queries Fail with Large Files
> -------------------------------------------
>
> Key: DRILL-7578
> URL: https://issues.apache.org/jira/browse/DRILL-7578
> Project: Apache Drill
> Issue Type: Bug
> Affects Versions: 1.18.0
> Reporter: Charles Givre
> Assignee: Charles Givre
> Priority: Major
> Fix For: 1.18.0
>
>
> With large files, Drill runs out of memory when attempting to project large datasets in metadata queries.
> This PR adds a configuration option that removes the dataset projection from metadata queries, which fixes the issue.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)