[
https://issues.apache.org/jira/browse/DRILL-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326959#comment-17326959
]
ASF GitHub Bot commented on DRILL-7825:
---------------------------------------
vvysotskyi commented on a change in pull request #2143:
URL: https://github.com/apache/drill/pull/2143#discussion_r617911298
##########
File path: exec/java-exec/src/main/java/org/apache/parquet/hadoop/ParquetColumnChunkPageWriteStore.java
##########
@@ -260,14 +260,16 @@ public long getMemSize() {
}
/**
- * Writes a number of pages within corresponding column chunk
+ * Writes a number of pages within corresponding column chunk <br>
+ * // TODO: the Bloom Filter can be useful in filtering entire row groups,
+ * see <a href="https://issues.apache.org/jira/browse/DRILL-7895">DRILL-7895</a>
Review comment:
@vdiravka, thanks for sharing screenshots and providing more details.
> 3. And we converted that buf to bytes via BytesInput.from(buf) and compressedBytes.writeAllTo(buf). So all data still placed in heap.
Please note that calling `BytesInput.from(buf)` doesn't convert all bytes of the buffer at once; it creates a `CapacityBAOSBytesInput` that wraps the provided `CapacityByteArrayOutputStream` and reads from it only when writing to the OutputStream.
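To illustrate the lazy behavior described above, here is a minimal sketch (hypothetical class names, not the actual parquet-mr sources): the wrapper only holds a reference to the stream, and the bytes are touched when `writeAllTo()` runs, not when the wrapper is constructed.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Sketch of a lazy BytesInput-style wrapper (illustrative only).
public class LazyBytesInputSketch {
  // Hypothetical stand-in for CapacityBAOSBytesInput: wraps the output
  // stream and defers all byte access until writeAllTo() is called.
  static final class LazyBytesInput {
    private final ByteArrayOutputStream source;

    LazyBytesInput(ByteArrayOutputStream source) {
      this.source = source; // no copy happens here
    }

    void writeAllTo(OutputStream out) {
      try {
        source.writeTo(out); // bytes are materialized only at this point
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    }

    long size() {
      return source.size();
    }
  }

  public static void main(String[] args) {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    buf.write(new byte[] {1, 2, 3}, 0, 3);
    LazyBytesInput input = new LazyBytesInput(buf); // cheap: just a reference
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    input.writeAllTo(sink); // the actual copy is deferred to this call
    System.out.println(input.size() + " bytes, " + sink.size() + " written");
  }
}
```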
Regarding the `compressedBytes.writeAllTo(buf)` call: it is fine to have bytes on the heap here, since GC will take care of them and there is no reason to expect leaks; the data that should be processed later will be stored in direct memory.
But when using `ConcatenatingByteArrayCollector`, all bytes are kept on the heap (including data that should be processed later), so GC cannot reclaim them.
I'm not sure why the heap usage you provided looks similar; perhaps the difference will show up with more data, or GC happened to run right before flushing data from the `buf`...
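The trade-off above can be shown with a small sketch (hypothetical types, not Drill/Parquet classes): a heap collector pins every chunk until the final flush, while streaming each chunk into a direct ByteBuffer lets the heap copy become garbage immediately after it is copied off-heap.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative comparison of heap accumulation vs. direct-memory buffering.
public class HeapVsDirectSketch {
  // Heap-style collector: keeps strong references to every chunk,
  // so GC cannot reclaim any of them before the final flush.
  static final class HeapCollector {
    final List<byte[]> chunks = new ArrayList<>();

    void collect(byte[] chunk) {
      chunks.add(chunk);
    }

    long totalHeapBytesHeld() {
      long n = 0;
      for (byte[] c : chunks) n += c.length;
      return n;
    }
  }

  // Direct-memory style: each chunk is copied into a direct ByteBuffer
  // right away; no reference to the heap chunk is retained afterwards.
  static final class DirectSink {
    final ByteBuffer direct;

    DirectSink(int capacity) {
      this.direct = ByteBuffer.allocateDirect(capacity);
    }

    void write(byte[] chunk) {
      direct.put(chunk); // the heap chunk is droppable after this line
    }

    int bytesBuffered() {
      return direct.position();
    }
  }

  public static void main(String[] args) {
    HeapCollector heap = new HeapCollector();
    DirectSink sink = new DirectSink(1024);
    for (int i = 0; i < 4; i++) {
      heap.collect(new byte[256]); // heap keeps holding all four pages
      sink.write(new byte[64]);    // each heap copy is garbage immediately
    }
    System.out.println("heap held: " + heap.totalHeapBytesHeld()
        + ", direct buffered: " + sink.bytesBuffered());
  }
}
```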
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Error: SYSTEM ERROR: RuntimeException: Unknown logical type <LogicalType UUID:UUIDType()>
> -----------------------------------------------------------------------------------------
>
> Key: DRILL-7825
> URL: https://issues.apache.org/jira/browse/DRILL-7825
> Project: Apache Drill
> Issue Type: Bug
> Components: Storage - Parquet
> Affects Versions: 1.17.0
> Environment: Windows 10 single local node.
> Reporter: ian
> Assignee: Vitalii Diravka
> Priority: Critical
> Fix For: 1.19.0
>
> Attachments: uuid-simple-fixed-length-array.parquet, uuid.parquet
>
>
> The Parquet logical type UUID fails on read. The only workaround is to store it as
> text, a 125% storage penalty.
> Here is the schema dump for the attached test parquet file. I can read the
> file okay from R and natively through C++.
> {code:java}
> 3961 $ parquet-dump-schema uuid.parquet
> required group field_id=0 schema {
> required fixed_len_byte_array(16) field_id=1 uuid_req1 (UUID);
> optional fixed_len_byte_array(16) field_id=2 uuid_opt1 (UUID);
> required fixed_len_byte_array(16) field_id=3 uuid_req2 (UUID);
> }{code}
> UPDATE: I tested with a simple fixed binary column and received the
> following error.
> See the second attached file, uuid-simple-fixed-length-array.parquet.
>
> {code:java}
> org.apache.drill.common.exceptions.UserRemoteException: INTERNAL_ERROR ERROR: Error in drill parquet reader (complex).
> Message: Failure in setting up reader
> Parquet Metadata: null
> Fragment: 0:0
> Please, refer to logs for more information.
> [Error Id: f6fdd477-c208-4a3d-8476-e366921e5787 on PWXAA:31010]
> at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:125)
> at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
> at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
> at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
> at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
> at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
>
>
> {code}
> I'm new here; I set this as MAJOR based on the severity definitions, but I
> gladly defer to those who know better how to classify it.
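For context on the quoted schema: a Parquet UUID logical type is a fixed_len_byte_array(16) holding the UUID's big-endian bytes, so plain Java can decode one with two long reads. This is an illustrative sketch, not Drill's reader code:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Decode a 16-byte fixed-length array into java.util.UUID (illustrative).
public class UuidFromFixedBytes {
  static UUID fromBytes(byte[] sixteen) {
    if (sixteen.length != 16) {
      throw new IllegalArgumentException("UUID must be exactly 16 bytes");
    }
    ByteBuffer bb = ByteBuffer.wrap(sixteen); // big-endian by default
    return new UUID(bb.getLong(), bb.getLong()); // high 8 bytes, low 8 bytes
  }

  public static void main(String[] args) {
    byte[] raw = new byte[16];
    raw[6] = 0x40; // set the version nibble to 4, purely for illustration
    System.out.println(fromBytes(raw));
    // -> 00000000-0000-4000-0000-000000000000
  }
}
```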
--
This message was sent by Atlassian Jira
(v8.3.4#803005)