[ https://issues.apache.org/jira/browse/DRILL-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17328555#comment-17328555 ]

ASF GitHub Bot commented on DRILL-7825:
---------------------------------------

cgivre commented on a change in pull request #2143:
URL: https://github.com/apache/drill/pull/2143#discussion_r618332415



##########
File path: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetRecordWriter.java
##########
@@ -263,10 +258,11 @@ private void newSchema() throws IOException {
         .withAllocator(new ParquetDirectByteBufferAllocator(oContext))
         .withValuesWriterFactory(new DefaultV1ValuesWriterFactory())
         .build();
-    pageStore = new ParquetColumnChunkPageWriteStore(codecFactory.getCompressor(codec), schema, initialSlabSize,
-        pageSize, parquetProperties.getAllocator(), parquetProperties.getPageWriteChecksumEnabled(),
-        parquetProperties.getColumnIndexTruncateLength()
-    );
+    // TODO: Replace ParquetColumnChunkPageWriteStore with ColumnChunkPageWriteStore from parquet library
+    // once PARQUET-1006 is resolved

Review comment:
       @vdiravka Could you please create a JIRA for this and any other TODOs from this PR?

##########
File path: exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/ParquetSimpleTestFileGenerator.java
##########
@@ -46,7 +46,8 @@
  * that are supported by Drill. Embedded types specified in the Parquet specification are not covered by the
  * examples but can be added.
  * To create a new parquet file, define a schema, create a GroupWriter based on the schema, then add values
- * for individual records to the GroupWriter.
+ * for individual records to the GroupWriter.<br>
+ *     TODO: to run this tool please use 28.2-jre <guava.version> instead of 19.0 in main POM file

Review comment:
       See comment above re: TODOs. I know you already created one, but could you please update the comment with the JIRA?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


> Error: SYSTEM ERROR: RuntimeException: Unknown logical type <LogicalType UUID:UUIDType()>
> -----------------------------------------------------------------------------------------
>
>                 Key: DRILL-7825
>                 URL: https://issues.apache.org/jira/browse/DRILL-7825
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 1.17.0
>         Environment: Windows 10 single local node.
>            Reporter: ian
>            Assignee: Vitalii Diravka
>            Priority: Critical
>             Fix For: 1.19.0
>
>         Attachments: uuid-simple-fixed-length-array.parquet, uuid.parquet
>
>
> Parquet logical type UUID fails on read. The only workaround is to store it as text, a 125% storage penalty.
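
The 125% figure above is consistent with storing the canonical 36-character text form of a UUID in place of the 16 raw bytes. A quick check, assuming one byte per character (this snippet is illustrative, not from the Drill code base):

```java
import java.util.UUID;

public class UuidStorageOverhead {
    public static void main(String[] args) {
        int binaryBytes = 16; // fixed_len_byte_array(16), as in the schema below
        // Canonical text form, e.g. 123e4567-e89b-12d3-a456-426614174000, is always 36 chars
        int textBytes = UUID.randomUUID().toString().length();
        int overheadPct = (textBytes - binaryBytes) * 100 / binaryBytes;
        System.out.println(overheadPct + "%"); // prints 125%
    }
}
```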
> Here is the schema dump for the attached test parquet file. I can read the file okay from R and natively through C++.
> {code:java}
> 3961 $ parquet-dump-schema uuid.parquet
> required group field_id=0 schema {
>  required fixed_len_byte_array(16) field_id=1 uuid_req1 (UUID);
>  optional fixed_len_byte_array(16) field_id=2 uuid_opt1 (UUID);
>  required fixed_len_byte_array(16) field_id=3 uuid_req2 (UUID);
> }{code}
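
A note on decoding: the Parquet specification annotates UUID onto a big-endian fixed_len_byte_array(16), so turning one of these values into a java.util.UUID needs only standard-library calls. A minimal sketch (a hypothetical helper, not Drill's actual reader path):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidDecode {
    // Decode a Parquet fixed_len_byte_array(16) UUID value (big-endian,
    // per the Parquet logical-types spec) into a java.util.UUID.
    static UUID fromBytes(byte[] raw) {
        if (raw.length != 16) {
            throw new IllegalArgumentException("UUID must be 16 bytes, got " + raw.length);
        }
        ByteBuffer bb = ByteBuffer.wrap(raw); // ByteBuffer defaults to big-endian
        long mostSig = bb.getLong();          // first 8 bytes
        long leastSig = bb.getLong();         // last 8 bytes
        return new UUID(mostSig, leastSig);
    }

    public static void main(String[] args) {
        byte[] raw = new byte[16];
        raw[15] = 1; // only the least-significant byte set
        System.out.println(fromBytes(raw)); // 00000000-0000-0000-0000-000000000001
    }
}
```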
> UPDATE: I tested with a simple fixed binary column, and received the following error.
> See the second attachment, uuid-simple-fixed-length-array.parquet.
>  
> {code:java}
> org.apache.drill.common.exceptions.UserRemoteException: INTERNAL_ERROR ERROR: Error in drill parquet reader (complex).
> Message: Failure in setting up reader
> Parquet Metadata: null
> Fragment: 0:0
> Please, refer to logs for more information.
> [Error Id: f6fdd477-c208-4a3d-8476-e366921e5787 on PWXAA:31010]
>   at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:125)
>   at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:422)
>   at org.apache.drill.exec.rpc.user.UserClient.handle(UserClient.java:96)
>   at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:273)
>   at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:243)
>   at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>   at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>   at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>   at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
>   at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>   at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
>   at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
>   at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
>   at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>   at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>   at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
>   at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
>   at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
>   at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
> {code}
> I'm new; I set this as Major after reading the severity definitions, but I gladly defer to those who know better how to classify.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
