[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-08-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17581245#comment-17581245
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

gszadovszky merged PR #981:
URL: https://github.com/apache/parquet-mr/pull/981




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-08-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17580977#comment-17580977
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

iemejia commented on PR #981:
URL: https://github.com/apache/parquet-mr/pull/981#issuecomment-1218433182

   Ah oops, sorry for the confusion @sunchao :)
   
   @gszadovszky maybe?
   




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2173) Fix parquet build against hadoop 3.3.3+

2022-08-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17580443#comment-17580443
 ] 

ASF GitHub Bot commented on PARQUET-2173:
-

steveloughran commented on PR #985:
URL: https://github.com/apache/parquet-mr/pull/985#issuecomment-1217104595

   I've also built against the next release of Hadoop, and against 3.4.0-SNAPSHOT.
   
   The parquet build fails there because Jackson 1 has been purged from the Hadoop 
classpath, breaking the japicmp plugin.
   
   ```
   Execution default of goal com.github.siom79.japicmp:japicmp-maven-plugin:0.14.2:cmp failed: Could not load 'org.codehaus.jackson.type.TypeReference'
   ```




> Fix parquet build against hadoop 3.3.3+
> ---
>
> Key: PARQUET-2173
> URL: https://issues.apache.org/jira/browse/PARQUET-2173
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-cli
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet won't build against Hadoop 3.3.3+ because Hadoop swapped out log4j 1.2.17 
> for reload4j, and this creates Maven dependency problems in parquet-cli:
> {code}
> [INFO] --- maven-dependency-plugin:3.1.1:analyze-only (default) @ parquet-cli ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]    ch.qos.reload4j:reload4j:jar:1.2.22:provided
> {code}
> The hadoop-common dependencies need to exclude this JAR and any changed SLF4J 
> ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2173) Fix parquet build against hadoop 3.3.3+

2022-08-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17580435#comment-17580435
 ] 

ASF GitHub Bot commented on PARQUET-2173:
-

steveloughran opened a new pull request, #985:
URL: https://github.com/apache/parquet-mr/pull/985

   
   Hadoop 3.3.3 moved to reload4j for logging to stop
   shipping a version of log4j with known (albeit unused)
   CVEs.
   
   This bypasses the existing exclusion code used to
   keep hadoop's SLF4J dependency off the classpaths,
   and by adding a new jar, breaks parquet-cli build.
   
   
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [X] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-XXX
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [X] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   The testing is regression testing: "does the build work?", "does a test run 
complete without SLF4J warnings about duplicates?". Done manually with 
`-Dhadoop.version=3.3.4`.
   
   ### Commits
   
   - [X] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> Fix parquet build against hadoop 3.3.3+
> ---
>
> Key: PARQUET-2173
> URL: https://issues.apache.org/jira/browse/PARQUET-2173
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-cli
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet won't build against Hadoop 3.3.3+ because Hadoop swapped out log4j 1.2.17 
> for reload4j, and this creates Maven dependency problems in parquet-cli:
> {code}
> [INFO] --- maven-dependency-plugin:3.1.1:analyze-only (default) @ parquet-cli ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]    ch.qos.reload4j:reload4j:jar:1.2.22:provided
> {code}
> The hadoop-common dependencies need to exclude this JAR and any changed SLF4J 
> ones.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-08-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17580267#comment-17580267
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r946662874


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -126,6 +127,42 @@ public class ParquetFileReader implements Closeable {
 
   public static String PARQUET_READ_PARALLELISM = "parquet.metadata.read.parallelism";
 
+  public static int numProcessors = Runtime.getRuntime().availableProcessors();
+
+  // Thread pool to read column chunk data from disk. Applications should call setAsyncIOThreadPool
+  // to initialize this with their own implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService ioThreadPool = Executors.newCachedThreadPool(
+r -> new Thread(r, "parquet-io"));
+
+  // Thread pool to process pages for multiple columns in parallel. Applications should call
+  // setAsyncProcessThreadPool to initialize this with their own implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService processThreadPool = Executors.newCachedThreadPool(

Review Comment:
   not sure; looks like many tests use copy/paste, rather than extension..





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.
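A minimal, generic sketch of the producer/consumer flow described above (illustrative only, not the parquet-mr implementation; all class and variable names are made up): an I/O task feeds pages into a queue while a downstream consumer starts decompressing and decoding as soon as the first page arrives, instead of waiting for the whole column chunk.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncReadSketch {
  public static void main(String[] args) throws Exception {
    BlockingQueue<byte[]> pages = new LinkedBlockingQueue<>();
    byte[] endOfChunk = new byte[0];                       // sentinel marking the end of the chunk
    ExecutorService ioPool = Executors.newCachedThreadPool();

    // Producer: stands in for reading a column chunk from storage in 8MB blocks.
    ioPool.submit(() -> {
      for (int i = 0; i < 10; i++) {
        pages.put(new byte[8 * 1024 * 1024]);
      }
      pages.put(endOfChunk);
      return null;
    });

    // Consumer: decompression + decoding can overlap with the remaining reads.
    byte[] page;
    while ((page = pages.take()) != endOfChunk) {
      // decompress + decode 'page' here
    }
    ioPool.shutdown();
  }
}
{code}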



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2160) Close decompression stream to free off-heap memory in time

2022-08-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17579529#comment-17579529
 ] 

ASF GitHub Bot commented on PARQUET-2160:
-

zhongyujiang commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r945428783


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##
@@ -109,7 +110,12 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOEx
   decompressor.reset();
 }
 InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-decompressed = BytesInput.from(is, uncompressedSize);
+if (codec instanceof ZstandardCodec) {

Review Comment:
   Added comment.





> Close decompression stream to free off-heap memory in time
> --
>
> Key: PARQUET-2160
> URL: https://issues.apache.org/jira/browse/PARQUET-2160
> Project: Parquet
>  Issue Type: Improvement
> Environment: Spark 3.1.2 + Iceberg 0.12 + Parquet 1.12.3 + zstd-jni 
> 1.4.9.1 + glibc
>Reporter: Yujiang Zhong
>Priority: Major
>
> The decompressed stream in HeapBytesDecompressor$decompress now relies on the 
> JVM GC to close. When reading parquet in zstd compressed format, sometimes I 
> ran into OOMs caused by high off-heap usage. I think the reason is that the GC is 
> not timely and causes off-heap memory fragmentation. I had to set a lower 
> MALLOC_TRIM_THRESHOLD_ to make glibc give memory back to the system quickly. 
> There is a 
> [thread|https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1650928750269869?thread_ts=1650927062.590789=C025PH0G1D4]
>  about this zstd parquet issue in the Iceberg community Slack: some people had the 
> same problem. 
> I think maybe we can use ByteArrayBytesInput as decompressed bytes input and 
> close decompressed stream in time to solve this problem:
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.from(is, uncompressedSize); {code}
> ->
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
> is.close(); {code}
> After I made this change to decompress, I found off-heap memory is 
> significantly reduced (with same query on same data).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2160) Close decompression stream to free off-heap memory in time

2022-08-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17579180#comment-17579180
 ] 

ASF GitHub Bot commented on PARQUET-2160:
-

sunchao commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r944925723


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##
@@ -109,7 +110,12 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOEx
   decompressor.reset();
 }
 InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-decompressed = BytesInput.from(is, uncompressedSize);
+if (codec instanceof ZstandardCodec) {

Review Comment:
   The change looks OK to me; we should probably add some comments explaining 
why ZSTD deserves the special treatment here. 
   
   The change on `BytesInput` looks more intrusive since it is used not only 
for decompression but also in other places, such as compression. For instance, 
`BytesInput.copy` calls `toByteArray` underneath, and after the call the 
original object should still be valid.





> Close decompression stream to free off-heap memory in time
> --
>
> Key: PARQUET-2160
> URL: https://issues.apache.org/jira/browse/PARQUET-2160
> Project: Parquet
>  Issue Type: Improvement
> Environment: Spark 3.1.2 + Iceberg 0.12 + Parquet 1.12.3 + zstd-jni 
> 1.4.9.1 + glibc
>Reporter: Yujiang Zhong
>Priority: Major
>
> The decompressed stream in HeapBytesDecompressor$decompress now relies on the 
> JVM GC to close. When reading parquet in zstd compressed format, sometimes I 
> ran into OOMs caused by high off-heap usage. I think the reason is that the GC is 
> not timely and causes off-heap memory fragmentation. I had to set a lower 
> MALLOC_TRIM_THRESHOLD_ to make glibc give memory back to the system quickly. 
> There is a 
> [thread|https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1650928750269869?thread_ts=1650927062.590789=C025PH0G1D4]
>  about this zstd parquet issue in the Iceberg community Slack: some people had the 
> same problem. 
> I think maybe we can use ByteArrayBytesInput as decompressed bytes input and 
> close decompressed stream in time to solve this problem:
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.from(is, uncompressedSize); {code}
> ->
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
> is.close(); {code}
> After I made this change to decompress, I found off-heap memory is 
> significantly reduced (with same query on same data).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PARQUET-2172) [C++] Make field return const NodePtr& instead of forcing copy of shared_ptr

2022-08-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/PARQUET-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated PARQUET-2172:

Labels: pull-request-available  (was: )

> [C++] Make field return const NodePtr& instead of forcing copy of shared_ptr
> 
>
> Key: PARQUET-2172
> URL: https://issues.apache.org/jira/browse/PARQUET-2172
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-cpp
>Reporter: Micah Kornfield
>Assignee: Micah Kornfield
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This potentially removes some amount of tax from atomic increments/decrements.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-08-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578696#comment-17578696
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

sunchao commented on PR #981:
URL: https://github.com/apache/parquet-mr/pull/981#issuecomment-1212528245

   I'm not a committer. I think @nandorKollar can do it since he gave +1.




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-08-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578654#comment-17578654
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

iemejia commented on PR #981:
URL: https://github.com/apache/parquet-mr/pull/981#issuecomment-1212462489

   @sunchao maybe?




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-08-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578069#comment-17578069
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

iemejia commented on PR #981:
URL: https://github.com/apache/parquet-mr/pull/981#issuecomment-1210998275

   Can somebody please merge this one?




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2160) Close decompression stream to free off-heap memory in time

2022-08-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17576512#comment-17576512
 ] 

ASF GitHub Bot commented on PARQUET-2160:
-

zhongyujiang commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r939789492


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##
@@ -109,7 +110,12 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOEx
   decompressor.reset();
 }
 InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-decompressed = BytesInput.from(is, uncompressedSize);
+if (codec instanceof ZstandardCodec) {

Review Comment:
   Maybe we can consider closing the decompressed stream after it has been read:
   
https://github.com/apache/parquet-mr/blob/0819356a9dafd2ca07c5eab68e2bffeddc3bd3d9/parquet-common/src/main/java/org/apache/parquet/bytes/BytesInput.java#L283-L288
   But I'm not sure if there is a situation where the decompressed stream is 
read more than once.





> Close decompression stream to free off-heap memory in time
> --
>
> Key: PARQUET-2160
> URL: https://issues.apache.org/jira/browse/PARQUET-2160
> Project: Parquet
>  Issue Type: Improvement
> Environment: Spark 3.1.2 + Iceberg 0.12 + Parquet 1.12.3 + zstd-jni 
> 1.4.9.1 + glibc
>Reporter: Yujiang Zhong
>Priority: Major
>
> The decompressed stream in HeapBytesDecompressor$decompress now relies on the 
> JVM GC to close. When reading parquet in zstd compressed format, sometimes I 
> ran into OOMs caused by high off-heap usage. I think the reason is that the GC is 
> not timely and causes off-heap memory fragmentation. I had to set a lower 
> MALLOC_TRIM_THRESHOLD_ to make glibc give memory back to the system quickly. 
> There is a 
> [thread|https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1650928750269869?thread_ts=1650927062.590789=C025PH0G1D4]
>  about this zstd parquet issue in the Iceberg community Slack: some people had the 
> same problem. 
> I think maybe we can use ByteArrayBytesInput as decompressed bytes input and 
> close decompressed stream in time to solve this problem:
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.from(is, uncompressedSize); {code}
> ->
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
> is.close(); {code}
> After I made this change to decompress, I found off-heap memory is 
> significantly reduced (with same query on same data).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2160) Close decompression stream to free off-heap memory in time

2022-08-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17576510#comment-17576510
 ] 

ASF GitHub Bot commented on PARQUET-2160:
-

zhongyujiang commented on code in PR #982:
URL: https://github.com/apache/parquet-mr/pull/982#discussion_r939787073


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java:
##
@@ -109,7 +110,12 @@ public BytesInput decompress(BytesInput bytes, int uncompressedSize) throws IOEx
   decompressor.reset();
 }
 InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
-decompressed = BytesInput.from(is, uncompressedSize);
+if (codec instanceof ZstandardCodec) {

Review Comment:
   This looks a little weird, but since doing so loads the decompressed stream 
onto the heap in advance, and only zstd currently has this problem, I made 
this modification only for the zstd stream.
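   For reference, a rough sketch of the branch being discussed (a fragment of the `decompress` method above; variable names follow the snippet and the Jira description, and the final patch may differ):
   
   ```java
   InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
   if (codec instanceof ZstandardCodec) {
     // Copy the decompressed bytes onto the heap and close the stream right away,
     // so zstd's off-heap buffers are released without waiting for GC.
     decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
     is.close();
   } else {
     decompressed = BytesInput.from(is, uncompressedSize);
   }
   ```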
   





> Close decompression stream to free off-heap memory in time
> --
>
> Key: PARQUET-2160
> URL: https://issues.apache.org/jira/browse/PARQUET-2160
> Project: Parquet
>  Issue Type: Improvement
> Environment: Spark 3.1.2 + Iceberg 0.12 + Parquet 1.12.3 + zstd-jni 
> 1.4.9.1 + glibc
>Reporter: Yujiang Zhong
>Priority: Major
>
> The decompressed stream in HeapBytesDecompressor$decompress now relies on the 
> JVM GC to close. When reading parquet in zstd compressed format, sometimes I 
> ran into OOMs caused by high off-heap usage. I think the reason is that the GC is 
> not timely and causes off-heap memory fragmentation. I had to set a lower 
> MALLOC_TRIM_THRESHOLD_ to make glibc give memory back to the system quickly. 
> There is a 
> [thread|https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1650928750269869?thread_ts=1650927062.590789=C025PH0G1D4]
>  about this zstd parquet issue in the Iceberg community Slack: some people had the 
> same problem. 
> I think maybe we can use ByteArrayBytesInput as decompressed bytes input and 
> close decompressed stream in time to solve this problem:
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.from(is, uncompressedSize); {code}
> ->
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
> is.close(); {code}
> After I made this change to decompress, I found off-heap memory is 
> significantly reduced (with same query on same data).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2160) Close decompression stream to free off-heap memory in time

2022-08-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17576509#comment-17576509
 ] 

ASF GitHub Bot commented on PARQUET-2160:
-

zhongyujiang opened a new pull request, #982:
URL: https://github.com/apache/parquet-mr/pull/982

   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [ ] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET-2160) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-2160
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   ### Commits
   
   - [ ] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> Close decompression stream to free off-heap memory in time
> --
>
> Key: PARQUET-2160
> URL: https://issues.apache.org/jira/browse/PARQUET-2160
> Project: Parquet
>  Issue Type: Improvement
> Environment: Spark 3.1.2 + Iceberg 0.12 + Parquet 1.12.3 + zstd-jni 
> 1.4.9.1 + glibc
>Reporter: Yujiang Zhong
>Priority: Major
>
> The decompressed stream in HeapBytesDecompressor$decompress now relies on the 
> JVM GC to close. When reading parquet in zstd compressed format, sometimes I 
> ran into OOMs caused by high off-heap usage. I think the reason is that the GC is 
> not timely and causes off-heap memory fragmentation. I had to set a lower 
> MALLOC_TRIM_THRESHOLD_ to make glibc give memory back to the system quickly. 
> There is a 
> [thread|https://apache-iceberg.slack.com/archives/C025PH0G1D4/p1650928750269869?thread_ts=1650927062.590789=C025PH0G1D4]
>  about this zstd parquet issue in the Iceberg community Slack: some people had the 
> same problem. 
> I think maybe we can use ByteArrayBytesInput as decompressed bytes input and 
> close decompressed stream in time to solve this problem:
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.from(is, uncompressedSize); {code}
> ->
> {code:java}
> InputStream is = codec.createInputStream(bytes.toInputStream(), decompressor);
> decompressed = BytesInput.copy(BytesInput.from(is, uncompressedSize));
> is.close(); {code}
> After I made this change to decompress, I found off-heap memory is 
> significantly reduced (with same query on same data).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-08-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17576056#comment-17576056
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

parthchandra commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r927128065


##
parquet-hadoop/src/main/java/org/apache/parquet/HadoopReadOptions.java:
##
@@ -61,9 +65,10 @@ private HadoopReadOptions(boolean useSignedStringMinMax,
 Configuration conf,
 FileDecryptionProperties fileDecryptionProperties) {
 super(
-useSignedStringMinMax, useStatsFilter, useDictionaryFilter, useRecordFilter, useColumnIndexFilter,
-usePageChecksumVerification, useBloomFilter, recordFilter, metadataFilter, codecFactory, allocator,
-maxAllocationSize, properties, fileDecryptionProperties
+  useSignedStringMinMax, useStatsFilter, useDictionaryFilter, 
useRecordFilter,

Review Comment:
   That's how Intellij formatted it after I added the new parameters. Added 
them back.



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -796,6 +835,30 @@ public ParquetFileReader(InputFile file, ParquetReadOptions options) throws IOEx
 this.crc = options.usePageChecksumVerification() ? new CRC32() : null;
   }
 
+  private boolean isAsyncIOReaderEnabled(){
+if (options.isAsyncIOReaderEnabled() ) {
+  if (ioThreadPool != null) {
+return true;
+  } else {
+LOG.warn("Parquet async IO is configured but the IO thread pool has not been " +
+  "initialized. Configuration is being ignored");
+  }
+}
+return false;
+  }
+
+  private boolean isParallelColumnReaderEnabled(){
+if (options.isParallelColumnReaderEnabled() ) {
+  if (processThreadPool != null) {
+return true;
+  } else {
+LOG.warn("Parallel column reading is configured but the process thread pool has " +

Review Comment:
   Ditto



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1455,6 +1578,8 @@ protected PageHeader readPageHeader() throws IOException {
 }
 
 protected PageHeader readPageHeader(BlockCipher.Decryptor blockDecryptor, byte[] pageHeaderAAD) throws IOException {
+  String mode = (isAsyncIOReaderEnabled())? "ASYNC":"SYNC";

Review Comment:
   Done



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {
+private final Chunk chunk;
+private final int currentBlock;
+private final BlockCipher.Decryptor headerBlockDecryptor;
+private final BlockCipher.Decryptor pageBlockDecryptor;
+private final byte[] aadPrefix;
+private final int rowGroupOrdinal;
+private final int columnOrdinal;
+
+//state
+private final LinkedBlockingDeque> pagesInChunk = new LinkedBlockingDeque<>();
+private DictionaryPage dictionaryPage = null;
+private int pageIndex = 0;
+private long valuesCountReadSoFar = 0;
+private int dataPageCountReadSoFar = 0;
+
+// derived
+private final PrimitiveType type;
+private final byte[] dataPageAAD;
+private final byte[] dictionaryPageAAD;
+private byte[] dataPageHeaderAAD = null;
+
+private final BytesInputDecompressor decompressor;
+
+private final ConcurrentLinkedQueue> readFutures = new ConcurrentLinkedQueue<>();
+
+private final LongAdder totalTimeReadOnePage = new LongAdder();
+private final LongAdder totalCountReadOnePage = new LongAdder();
+private final LongAccumulator maxTimeReadOnePage = new LongAccumulator(Long::max, 0L);
+private final LongAdder totalTimeBlockedPagesInChunk = new LongAdder();
+private final LongAdder totalCountBlockedPagesInChunk = new LongAdder();
+private final LongAccumulator maxTimeBlockedPagesInChunk = new LongAccumulator(Long::max, 0L);
+
+public PageReader(Chunk chunk, int currentBlock, Decryptor headerBlockDecryptor,
+  Decryptor pageBlockDecryptor, byte[] aadPrefix, int rowGroupOrdinal, int columnOrdinal,
+  BytesInputDecompressor decompressor
+  ) {
+  this.chunk = chunk;
+  this.currentBlock = currentBlock;
+  this.headerBlockDecryptor = headerBlockDecryptor;
+  this.pageBlockDecryptor = pageBlockDecryptor;
+  this.aadPrefix = aadPrefix;
+  this.rowGroupOrdinal = 

[jira] [Commented] (PARQUET-2169) Upgrade Avro to version 1.11.1

2022-07-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17573472#comment-17573472
 ] 

ASF GitHub Bot commented on PARQUET-2169:
-

iemejia opened a new pull request, #981:
URL: https://github.com/apache/parquet-mr/pull/981

   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET-2169) issues and references 
them in the PR title. For example, "PARQUET-2169: Upgrade Avro to version 
1.11.1"
 - https://issues.apache.org/jira/browse/PARQUET-2169
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> Upgrade Avro to version 1.11.1
> --
>
> Key: PARQUET-2169
> URL: https://issues.apache.org/jira/browse/PARQUET-2169
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-avro
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17573149#comment-17573149
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

steveloughran commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1199891512

   You might want to look at WeakReferences... we've been using them recently to 
implement threadlocal-like storage where GCs will trigger cleanup of instances 
which aren't being used any more.
   
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/WeakReferenceThreadMap.java
   
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/WeakReferenceMap.java
   
   the evolution of that code would be to implement the callback the JVM can 
issue on reference expiry and so do extra cleanup there
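   For illustration, a JDK-only sketch of that pattern (the class below is hypothetical and is not one of the Hadoop classes linked above): values become collectable once nothing else references them, and a ReferenceQueue provides the expiry hook where extra cleanup could run.
   
   ```java
   import java.lang.ref.Reference;
   import java.lang.ref.ReferenceQueue;
   import java.lang.ref.WeakReference;
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   // Hypothetical helper, illustrative only.
   class WeakValueCache<K, V> {
     private final Map<K, WeakReference<V>> map = new ConcurrentHashMap<>();
     private final ReferenceQueue<V> expired = new ReferenceQueue<>();
   
     V get(K key) {
       WeakReference<V> ref = map.get(key);
       return ref == null ? null : ref.get();
     }
   
     void put(K key, V value) {
       map.put(key, new WeakReference<>(value, expired));
     }
   
     // Drain expired references; extra cleanup (e.g. releasing a pooled codec) could run here.
     void drainExpired() {
       Reference<? extends V> ref;
       while ((ref = expired.poll()) != null) {
         map.values().remove(ref);
       }
     }
   }
   ```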




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17571518#comment-17571518
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

theosib-amazon commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1195676003

   I just thought of something that makes me nervous about this PR that 
requires further investigation. Consider the following scenario:
   - Thread A allocates a codec
   - Thread A releases the codec, which puts it into a global pool of codecs
   - Thread B allocates the same kind of codec, which comes from that same pool
   - Thread A allocates that same kind of codec again, but it gets it from the 
factory's map instead of the pool
   I'm concerned that this could result in the same codec being given to both 
threads at the same time. The solution would be to remove the codec from the 
factory's map when release() is called on the codec itself. 
   Note that this problem is not introduced by this PR, since the double 
pooling existed before. The irony is that the pool is thread-safe, while the 
factory was not.




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17571233#comment-17571233
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ggershinsky commented on PR #978:
URL: https://github.com/apache/parquet-mr/pull/978#issuecomment-1195083014

   cc @shangxinli 




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.
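To make the expected numbering concrete, a small generic sketch (not the parquet-mr code) of deriving each row group's first row index from the unfiltered metadata; computing these offsets before any row group is dropped yields 10..19 for the surviving group in the example above.

{code:java}
long[] rowCounts = {10, 10};           // rows per row group, taken from the unfiltered file metadata
long[] firstRowIndex = new long[rowCounts.length];
long running = 0;
for (int i = 0; i < rowCounts.length; i++) {
  firstRowIndex[i] = running;          // row group 0 starts at row 0, row group 1 at row 10
  running += rowCounts[i];
}
// If only row group 1 survives a range/offset filter, its rows are numbered
// firstRowIndex[1] .. firstRowIndex[1] + rowCounts[1] - 1, i.e. 10..19.
{code}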



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2167) CLI show footer command fails if Parquet file contains date fields

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17571178#comment-17571178
 ] 

ASF GitHub Bot commented on PARQUET-2167:
-

shangxinli merged PR #980:
URL: https://github.com/apache/parquet-mr/pull/980




> CLI show footer command fails if Parquet file contains date fields
> --
>
> Key: PARQUET-2167
> URL: https://issues.apache.org/jira/browse/PARQUET-2167
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-cli
>Affects Versions: 1.12.2
>Reporter: Bryan Keller
>Priority: Minor
> Attachments: sample.parquet
>
>
> The show footer command in the CLI fails with the following error if run 
> against a file with date fields:
> com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 
> date/time type `java.time.ZoneOffset` not supported by default: add Module 
> "com.fasterxml.jackson.datatype:jackson-datatype-jsr310" to enable handling 
> (through reference chain: 
> org.apache.parquet.hadoop.metadata.ParquetMetadata["blocks"]->java.util.ArrayList[0]->org.apache.parquet.hadoop.metadata.BlockMetaData["columns"]->java.util.ArrayList[2]->org.apache.parquet.hadoop.metadata.IntColumnChunkMetaData["statistics"]->org.apache.parquet.column.statistics.IntStatistics["stringifier"]->org.apache.parquet.schema.PrimitiveStringifier$5["formatter"]->java.time.format.DateTimeFormatter["zone"])
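The exception message itself points at one possible remedy: the ObjectMapper that serializes the footer needs the JSR-310 module registered. A minimal sketch follows (a stand-alone example; whether the actual fix registers the module or simply avoids serializing the offending field is not shown in this thread).

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class FooterJsonSketch {
  static ObjectMapper newMapper() {
    ObjectMapper mapper = new ObjectMapper();
    // Adds serializers for java.time types such as ZoneOffset, which the
    // statistics stringifier's DateTimeFormatter field drags in.
    mapper.registerModule(new JavaTimeModule());
    return mapper;
  }
}
{code}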



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570968#comment-17570968
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

theosib-amazon commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1194259084

   I did some poking around. It looks like if you call release() on a codec, it 
(a) resets the codec (freeing resources, I think) and (b) returns it to a pool 
of codecs without actually destroying the codec. 
   
   Later, when release() is called on the factory, it just calls release() 
again on each of the codecs, returning them to the pool. The only other effect 
is that references are removed from a container in the factory.
   
   The only question, then, is what happens if release is called twice on a 
codec. It looks like nothing happens because CodecPool.payback() will return 
false when the codec is already in the pool. Moreover, I'm pretty sure the 
original implementation already did this.
   
   So I think the solution is to literally do nothing. The new usage pattern is 
now:
   - Create Codec factory
   - Create worker threads
   - Threads create codecs
   - Threads finish using codecs
   - Threads *optionally* call release on their codecs if they want to free 
resources right away.
   - Threads terminate
   - The thread that created the worker threads waits until those threads are 
done
   - release is called on the factory, cleaning up any codecs that were not 
released already
   




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2069) Parquet file containing arrays, written by Parquet-MR, cannot be read again by Parquet-MR

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570948#comment-17570948
 ] 

ASF GitHub Bot commented on PARQUET-2069:
-

theosib-amazon commented on code in PR #957:
URL: https://github.com/apache/parquet-mr/pull/957#discussion_r928999801


##
parquet-avro/src/main/java/org/apache/parquet/avro/AvroReadSupport.java:
##
@@ -136,10 +137,22 @@ public RecordMaterializer prepareForRead(
 
 GenericData model = getDataModel(configuration);
 String compatEnabled = metadata.get(AvroReadSupport.AVRO_COMPATIBILITY);
-if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
-  return newCompatMaterializer(parquetSchema, avroSchema, model);
+
+try {
+  if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
+return newCompatMaterializer(parquetSchema, avroSchema, model);
+  }
+  return new AvroRecordMaterializer(parquetSchema, avroSchema, model);
+} catch (InvalidRecordException | ClassCastException e) {

Review Comment:
   That's up to you. I see this change as just a fall-back in case it bombs. 
Either it'll work, or it'll bomb again, in which case we're no worse off.





> Parquet file containing arrays, written by Parquet-MR, cannot be read again 
> by Parquet-MR
> -
>
> Key: PARQUET-2069
> URL: https://issues.apache.org/jira/browse/PARQUET-2069
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.12.0
> Environment: Windows 10
>Reporter: Devon Kozenieski
>Priority: Blocker
> Attachments: modified.parquet, original.parquet, parquet-diff.png
>
>
> In the attached files, there is one original file, and one written modified 
> file that results after reading the original file and writing it back with 
> Parquet-MR, with a few values modified. The schema should not be modified, 
> since the schema of the input file is used as the schema to write the output 
> file. However, the output file has a slightly modified schema that then 
> cannot be read back the same way again with Parquet-MR, resulting in the 
> exception message:  java.lang.ClassCastException: optional binary element 
> (STRING) is not a group
> My guess is that the issue lies in the Avro schema conversion.
> The Parquet files attached have some arrays and some nested fields.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570942#comment-17570942
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

theosib-amazon commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1194158751

   One option is to provide another API call that releases the cached instance 
for only the current thread. What should we call it? I forget whether close or 
release is used more, but if everyone is using close, we could repurpose 
release to apply to only the current thread.




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570831#comment-17570831
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran closed pull request #971: PARQUET-2134: Improve binding to 
ByteBufferReadable
URL: https://github.com/apache/parquet-mr/pull/971




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> Suppose we create an FSDataInputStream whose underlying stream does not 
> implement ByteBufferReadable, and then create a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by recursively checking the underlying stream of the 
> FSDataInputStream.
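A sketch of the recursive check suggested above (illustrative; the actual method and class names in parquet-mr may differ). It relies on FSDataInputStream.getWrappedStream() to walk through nested wrappers before deciding whether byte-buffer reads are supported.

{code:java}
import java.io.InputStream;
import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

public class StreamCapabilitySketch {
  // Unwrap nested FSDataInputStream layers and test the innermost stream.
  static boolean isByteBufferReadable(FSDataInputStream stream) {
    InputStream inner = stream.getWrappedStream();
    if (inner instanceof FSDataInputStream) {
      return isByteBufferReadable((FSDataInputStream) inner);
    }
    return inner instanceof ByteBufferReadable;
  }
}
{code}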



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570689#comment-17570689
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

guillaume-fetter commented on PR #963:
URL: https://github.com/apache/parquet-mr/pull/963#issuecomment-1193643505

   Thank you very much!




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414
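For context, a small sketch of what "passing in a DynamicMessage" looks like on the protobuf side (illustrative; the descriptor is assumed to be obtained at runtime, for example from a descriptor set, and the parquet writer wiring is omitted):

{code:java}
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.DynamicMessage;

public class DynamicMessageSketch {
  // Build a record without any generated protobuf classes, given a runtime Descriptor.
  static DynamicMessage build(Descriptor descriptor) {
    return DynamicMessage.newBuilder(descriptor)
        .setField(descriptor.findFieldByName("id"), 42L)  // assumes the schema has an int64 field "id"
        .build();
  }
}
{code}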



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2167) CLI show footer command fails if Parquet file contains date fields

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570553#comment-17570553
 ] 

ASF GitHub Bot commented on PARQUET-2167:
-

shangxinli commented on PR #980:
URL: https://github.com/apache/parquet-mr/pull/980#issuecomment-1193400394

   LGTM




> CLI show footer command fails if Parquet file contains date fields
> --
>
> Key: PARQUET-2167
> URL: https://issues.apache.org/jira/browse/PARQUET-2167
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-cli
>Affects Versions: 1.12.2
>Reporter: Bryan Keller
>Priority: Minor
> Attachments: sample.parquet
>
>
> The show footer command in the CLI fails with the following error if run 
> against a file with date fields:
> com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 
> date/time type `java.time.ZoneOffset` not supported by default: add Module 
> "com.fasterxml.jackson.datatype:jackson-datatype-jsr310" to enable handling 
> (through reference chain: 
> org.apache.parquet.hadoop.metadata.ParquetMetadata["blocks"]->java.util.ArrayList[0]->org.apache.parquet.hadoop.metadata.BlockMetaData["columns"]->java.util.ArrayList[2]->org.apache.parquet.hadoop.metadata.IntColumnChunkMetaData["statistics"]->org.apache.parquet.column.statistics.IntStatistics["stringifier"]->org.apache.parquet.schema.PrimitiveStringifier$5["formatter"]->java.time.format.DateTimeFormatter["zone"])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570552#comment-17570552
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

shangxinli commented on PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#issuecomment-1193400138

   This PR is combined with https://github.com/apache/parquet-mr/pull/951. 




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.
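As a minimal sketch of the recursive check described above (illustrative helper only, not the code merged in the PR):

{code:java}
import java.io.InputStream;
import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

final class UnwrapSketch {
  // Unwrap nested FSDataInputStream layers before testing for
  // ByteBufferReadable, so an extra wrapper does not hide the capability.
  static boolean supportsByteBufferReads(FSDataInputStream stream) {
    InputStream inner = stream.getWrappedStream();
    while (inner instanceof FSDataInputStream) {
      inner = ((FSDataInputStream) inner).getWrappedStream();
    }
    return inner instanceof ByteBufferReadable;
  }
}
{code}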



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2069) Parquet file containing arrays, written by Parquet-MR, cannot be read again by Parquet-MR

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570547#comment-17570547
 ] 

ASF GitHub Bot commented on PARQUET-2069:
-

shangxinli commented on code in PR #957:
URL: https://github.com/apache/parquet-mr/pull/957#discussion_r928310312


##
parquet-avro/src/main/java/org/apache/parquet/avro/AvroReadSupport.java:
##
@@ -136,10 +137,22 @@ public RecordMaterializer prepareForRead(
 
 GenericData model = getDataModel(configuration);
 String compatEnabled = metadata.get(AvroReadSupport.AVRO_COMPATIBILITY);
-if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
-  return newCompatMaterializer(parquetSchema, avroSchema, model);
+
+try {
+  if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
+return newCompatMaterializer(parquetSchema, avroSchema, model);
+  }
+  return new AvroRecordMaterializer(parquetSchema, avroSchema, model);
+} catch (InvalidRecordException | ClassCastException e) {

Review Comment:
   I don't have a good solution either. I am just afraid that if we introduce 
this, there could be some unknown side effects. Given this is already a 
problematic area (I see you commented on 
https://issues.apache.org/jira/browse/PARQUET-1681), I am not confident about 
merging it now.
   
   Or at least, we could have a feature flag to turn this fix on/off, as sketched below. 
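   A hedged sketch of that feature-flag idea; the property name and helper below are hypothetical, not part of parquet-avro today:

```java
import org.apache.hadoop.conf.Configuration;

final class CompatFallbackSketch {
  // Hypothetical configuration key; not an existing parquet-avro property.
  static final String FALLBACK_ENABLED = "parquet.avro.schema-fallback.enabled";

  // Decide whether to swallow the schema-mismatch error and retry with the
  // alternate materializer, or rethrow to preserve today's behaviour.
  static boolean shouldFallBack(Configuration conf, RuntimeException error) {
    if (conf.getBoolean(FALLBACK_ENABLED, false)) {
      return true;
    }
    throw error;
  }
}
```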
   





> Parquet file containing arrays, written by Parquet-MR, cannot be read again 
> by Parquet-MR
> -
>
> Key: PARQUET-2069
> URL: https://issues.apache.org/jira/browse/PARQUET-2069
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.12.0
> Environment: Windows 10
>Reporter: Devon Kozenieski
>Priority: Blocker
> Attachments: modified.parquet, original.parquet, parquet-diff.png
>
>
> In the attached files, there is one original file, and one written modified 
> file that results after reading the original file and writing it back with 
> Parquet-MR, with a few values modified. The schema should not be modified, 
> since the schema of the input file is used as the schema to write the output 
> file. However, the output file has a slightly modified schema that then 
> cannot be read back the same way again with Parquet-MR, resulting in the 
> exception message:  java.lang.ClassCastException: optional binary element 
> (STRING) is not a group
> My guess is that the issue lies in the Avro schema conversion.
> The Parquet files attached have some arrays and some nested fields.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570545#comment-17570545
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

shangxinli commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1193390004

   @theosib-amazon, I am not concerned about release/close not being called, and I 
agree the caller must call release/close after finishing. My question is that, 
before release/close is called, there could be short-lived threads that create 
compressors/decompressors in the cache. Those short-lived threads exit without 
the cache being aware of it, which causes the cache to grow with a lot of dead 
compressors/decompressors. In a scenario where short-lived threads come and go 
as normal business, this could be a problem. I know it is normally not an issue 
because in most cases we use a thread pool, but I am just not sure there isn't a 
corner case like that. Parquet is a low-level library and is used in so many 
scenarios. 
   
I am sorry if I didn't make my previous comment more obvious.
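   A minimal sketch of the scenario described above, using a hypothetical thread-keyed cache rather than the actual CodecFactory code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

final class PerThreadCacheSketch<C> {
  // Entries are keyed by the borrowing thread. If short-lived threads create
  // codecs and then exit without release() being called for them, their
  // entries stay in the map and the cache only grows.
  private final Map<Thread, C> codecsByThread = new ConcurrentHashMap<>();

  C codecForCurrentThread(Supplier<C> factory) {
    return codecsByThread.computeIfAbsent(Thread.currentThread(), t -> factory.get());
  }

  void release() {
    codecsByThread.clear();
  }
}
```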




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570544#comment-17570544
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

shangxinli commented on PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#issuecomment-1193386419

   I think we are close to merging this PR. Resolve the conflict and use the 
imports, then we can merge. 




> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570543#comment-17570543
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

shangxinli commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r928306582


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoMessageConverter.java:
##
@@ -427,6 +485,218 @@ public void addBinary(Binary binary) {
 
   }
 
+  final class ProtoTimestampConverter extends PrimitiveConverter {
+
+final ParentValueContainer parent;
+final LogicalTypeAnnotation.TimestampLogicalTypeAnnotation 
logicalTypeAnnotation;
+
+public ProtoTimestampConverter(ParentValueContainer parent, 
LogicalTypeAnnotation.TimestampLogicalTypeAnnotation logicalTypeAnnotation) {
+  this.parent = parent;
+  this.logicalTypeAnnotation = logicalTypeAnnotation;
+}
+
+@Override
+public void addLong(long value) {
+  switch (logicalTypeAnnotation.getUnit()) {
+case MICROS:
+  parent.add(Timestamps.fromMicros(value));
+  break;
+case MILLIS:
+  parent.add(Timestamps.fromMillis(value));
+  break;
+case NANOS:
+  parent.add(Timestamps.fromNanos(value));
+  break;
+  }
+}
+  }
+
+  final class ProtoDateConverter extends PrimitiveConverter {
+
+final ParentValueContainer parent;
+
+public ProtoDateConverter(ParentValueContainer parent) {
+  this.parent = parent;
+}
+
+@Override
+public void addInt(int value) {
+  LocalDate localDate = LocalDate.ofEpochDay(value);
+  com.google.type.Date date = com.google.type.Date.newBuilder()

Review Comment:
   If there is no collision on the imports, we should use imports generally. 





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570542#comment-17570542
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

shangxinli commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r928306114


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoSchemaConverter.java:
##
@@ -97,6 +127,46 @@ public MessageType convert(Class 
protobufClass) {
 
   private  Builder>, GroupBuilder> 
addField(FieldDescriptor descriptor, final GroupBuilder builder) {
 if (descriptor.getJavaType() == JavaType.MESSAGE) {
+  if (unwrapProtoWrappers) {
+String typeName = descriptor.getMessageType().getFullName();
+if (typeName.equals(PROTOBUF_TIMESTAMP_TYPE)) {
+  return builder.primitive(INT64, 
getRepetition(descriptor)).as(timestampType(true, TimeUnit.NANOS));

Review Comment:
   @emkornfield  Do you still have a comment on this? 





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570538#comment-17570538
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

shangxinli commented on PR #951:
URL: https://github.com/apache/parquet-mr/pull/951#issuecomment-1193383006

   LGTM




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570539#comment-17570539
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

shangxinli merged PR #951:
URL: https://github.com/apache/parquet-mr/pull/951




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2167) CLI show footer command fails if Parquet file contains date fields

2022-07-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570233#comment-17570233
 ] 

ASF GitHub Bot commented on PARQUET-2167:
-

bryanck opened a new pull request, #980:
URL: https://github.com/apache/parquet-mr/pull/980

   This PR fixes an issue when attempting to use the CLI to view the footer of 
a file with date fields. The error thrown is 
```com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 
date/time type `java.time.ZoneOffset` not supported by default```.
   
   The fix is to register the JavaTimeModule with the Jackson object mapper.
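   For illustration, a minimal standalone sketch of that registration (not the parquet-cli code itself):
   
   ```java
   import com.fasterxml.jackson.databind.ObjectMapper;
   import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
   import java.time.ZoneOffset;
   
   public class JavaTimeJsonSketch {
     public static void main(String[] args) throws Exception {
       ObjectMapper mapper = new ObjectMapper();
       mapper.registerModule(new JavaTimeModule()); // enables JSR-310 (java.time) types
       // Without the module, serializing a graph that reaches java.time.ZoneOffset
       // fails with InvalidDefinitionException, as in the stack trace above.
       System.out.println(mapper.writeValueAsString(ZoneOffset.UTC));
     }
   }
   ```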
   
   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [X ] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-2167
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [ ] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   This is a minimal code change that existing tests should cover.
   
   ### Commits
   
   - [ X] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> CLI show footer command fails if Parquet file contains date fields
> --
>
> Key: PARQUET-2167
> URL: https://issues.apache.org/jira/browse/PARQUET-2167
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-cli
>Affects Versions: 1.12.2
>Reporter: Bryan Keller
>Priority: Minor
>
> The show footer command in the CLI fails with the following error if run 
> against a file with date fields:
> com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 
> date/time type `java.time.ZoneOffset` not supported by default: add Module 
> "com.fasterxml.jackson.datatype:jackson-datatype-jsr310" to enable handling 
> (through reference chain: 
> org.apache.parquet.hadoop.metadata.ParquetMetadata["blocks"]->java.util.ArrayList[0]->org.apache.parquet.hadoop.metadata.BlockMetaData["columns"]->java.util.ArrayList[2]->org.apache.parquet.hadoop.metadata.IntColumnChunkMetaData["statistics"]->org.apache.parquet.column.statistics.IntStatistics["stringifier"]->org.apache.parquet.schema.PrimitiveStringifier$5["formatter"]->java.time.format.DateTimeFormatter["zone"])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17569270#comment-17569270
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

jnturton commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1191048626

   > Are you concerned about leaking if release/close isn't called? I'm pretty 
sure that would result in leaks. I suppose that might be solvable if we added a 
finalize() method that called release(). That might solve the problem. Should 
we do that?
   
   My 2c: finalize() is problematic and deprecated in Java so I don't recommend 
adding it. The requirement here that the caller must close after they're 
finished is totally reasonable and to be found in APIs everywhere.
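   As a hedged illustration of that caller-managed lifecycle (the interfaces below are hypothetical stand-ins, not the parquet-mr API touched by this PR):

```java
final class CallerManagedCloseSketch {
  interface PooledCompressor extends AutoCloseable {
    byte[] compress(byte[] input);
    @Override
    void close(); // returns the codec to the pool instead of relying on finalize()
  }

  interface CompressorPool {
    PooledCompressor borrow();
  }

  static byte[] compressOnce(CompressorPool pool, byte[] data) {
    // try-with-resources guarantees close() runs even if compress() throws.
    try (PooledCompressor compressor = pool.borrow()) {
      return compressor.compress(data);
    }
  }
}
```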




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17569177#comment-17569177
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

theosib-amazon commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1190765848

   > @theosib-amazon Do you still have time for addressing the feedback? I 
think we are very close to merge.
   
   I'm not really sure which feedback to address. Are you concerned about 
leaking if release/close isn't called? I'm pretty sure that would result in 
leaks. I suppose that might be solvable if we added a finalize() method that 
called release(). That might solve the problem. Should we do that?




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17569117#comment-17569117
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

steveloughran commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1190501542

   now this is merged in, should the jira be closed?




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17569107#comment-17569107
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

steveloughran commented on PR #976:
URL: https://github.com/apache/parquet-mr/pull/976#issuecomment-1190483397

   thanks.




> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-07-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568960#comment-17568960
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ala commented on PR #978:
URL: https://github.com/apache/parquet-mr/pull/978#issuecomment-1190057981

   @ggershinsky Do you know when the next release that will include the fix 
might happen? We are looking to unblock 
https://issues.apache.org/jira/browse/SPARK-39634 in Apache Spark.




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of the expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.
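To illustrate the intended numbering, a minimal sketch (names are illustrative only, not the parquet-mr API):

{code:java}
import java.util.Arrays;
import java.util.List;

final class RowIndexSketch {
  // Compute each row group's first row index from the unfiltered row counts,
  // before any row group is dropped by a range/offset filter.
  static long[] firstRowIndexes(List<Long> rowCountsPerGroup) {
    long[] firstRows = new long[rowCountsPerGroup.size()];
    long next = 0;
    for (int i = 0; i < rowCountsPerGroup.size(); i++) {
      firstRows[i] = next;
      next += rowCountsPerGroup.get(i);
    }
    return firstRows;
  }

  public static void main(String[] args) {
    // Two row groups of 10 rows each: the second group starts at row 10,
    // so reading only that group should yield indexes 10..19.
    System.out.println(Arrays.toString(firstRowIndexes(Arrays.asList(10L, 10L)))); // [0, 10]
  }
}
{code}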



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568693#comment-17568693
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

steveloughran closed pull request #970: PARQUET-2150: parquet-protobuf to 
compile on Mac M1
URL: https://github.com/apache/parquet-mr/pull/970




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568692#comment-17568692
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

steveloughran commented on PR #970:
URL: https://github.com/apache/parquet-mr/pull/970#issuecomment-1189454282

   resolved by #973




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568617#comment-17568617
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

ggershinsky merged PR #973:
URL: https://github.com/apache/parquet-mr/pull/973




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2126) Thread safety bug in CodecFactory

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568612#comment-17568612
 ] 

ASF GitHub Bot commented on PARQUET-2126:
-

shangxinli commented on PR #959:
URL: https://github.com/apache/parquet-mr/pull/959#issuecomment-1189198163

   @theosib-amazon Do you still have time for addressing the feedback? I think 
we are very close to merge. 




> Thread safety bug in CodecFactory
> -
>
> Key: PARQUET-2126
> URL: https://issues.apache.org/jira/browse/PARQUET-2126
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.2
>Reporter: James Turton
>Priority: Major
>
> The code for returning Compressor objects to the caller goes to some lengths 
> to achieve thread safety, including keeping Codec objects in an Apache 
> Commons pool that has thread-safe borrow semantics.  This is all undone by 
> the BytesCompressor and BytesDecompressor Maps in 
> org.apache.parquet.hadoop.CodecFactory which end up caching single compressor 
> and decompressor instances due to code in CodecFactory@getCompressor and 
> CodecFactory@getDecompressor.  When the caller runs multiple threads, those 
> threads end up sharing compressor and decompressor instances.
> For compressors based on Xerial Snappy this bug has no effect because that 
> library is itself thread safe.  But when BuiltInGzipCompressor from Hadoop is 
> selected for the CompressionCodecName.GZIP case, serious problems ensue.  
> That class is not thread safe and sharing one instance of it between threads 
> produces both silent data corruption and JVM crashes.
> To fix this situation, parquet-mr should stop caching single compressor and 
> decompressor instances.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568609#comment-17568609
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

shangxinli commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r924651838


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {

Review Comment:
   Let's be careful about introducing incompatibility; Hadoop is a fundamental 
dependency for Parquet. 
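   One compatibility-minded variant, as a hedged sketch only (assuming Hadoop 3.x, where FSDataInputStream exposes hasCapability; the "in:readbytebuffer" capability itself is only advertised from Hadoop 3.3.0, per HDFS-14111):

```java
import org.apache.hadoop.fs.ByteBufferReadable;
import org.apache.hadoop.fs.FSDataInputStream;

final class CapabilityProbeSketch {
  static boolean preferByteBufferReads(FSDataInputStream stream) {
    if (stream.hasCapability("in:readbytebuffer")) {
      // The stream explicitly advertises read(ByteBuffer) support.
      return true;
    }
    // Otherwise fall back to inspecting the wrapped stream, since older
    // clients may support the API without advertising the capability.
    return stream.getWrappedStream() instanceof ByteBufferReadable;
  }
}
```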





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568608#comment-17568608
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

shangxinli merged PR #976:
URL: https://github.com/apache/parquet-mr/pull/976




> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568599#comment-17568599
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

shangxinli commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1189167965

   LGTM




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568333#comment-17568333
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

ggershinsky commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1188622982

   sure. if no other input by the end of this week, I'll merge it then.




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17568138#comment-17568138
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r923661793


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {

Review Comment:
   That would be nice. Do that, and the library we are working on to help give 3.2+ 
apps access to the higher-performance cloud storage APIs (when available) would 
be great.





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566915#comment-17566915
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

sunchao commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r921361565


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {

Review Comment:
   Personally I'm in favor of moving on and adopting the new APIs, especially if we 
are going to depend on Hadoop 3 features more. Maybe we can call the next 
Parquet release 1.13.0 and declare that it's no longer compatible with older 
Hadoop versions? 
   
   cc @shangxinli 





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566826#comment-17566826
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r921125471


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {

Review Comment:
   If you are targeting the older Hadoop releases, you'd also need to build 
Java 7 artifacts. Does anyone want to do that?





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566825#comment-17566825
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r921124617


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {
+  // stream is issuing the guarantee that it implements the
+  // API. Holds for all implementations in hadoop-*
+  // since Hadoop 3.3.0 (HDFS-14111).
+  return true;
 }
-  }
-
-  @SuppressWarnings("unchecked")

Review Comment:
   I believe it's because of the transitive dependencies.





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream, whose underlying stream does not 
> implements ByteBufferReadable, and then creates a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by taking recursive checks over the underlying stream of 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566550#comment-17566550
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

sunchao commented on code in PR #971:
URL: https://github.com/apache/parquet-mr/pull/971#discussion_r920576934


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable"
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {
+  // stream is issuing the guarantee that it implements the
+  // API. Holds for all implementations in hadoop-*
+  // since Hadoop 3.3.0 (HDFS-14111).
+  return true;
 }
-  }
-
-  @SuppressWarnings("unchecked")

Review Comment:
   I don't understand why Parquet needs to use reflection to look up a class 
defined by itself.



[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566544#comment-17566544
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

sunchao commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1183753880

   gently ping @shangxinli @ggershinsky 




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566350#comment-17566350
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

dossett commented on PR #963:
URL: https://github.com/apache/parquet-mr/pull/963#issuecomment-1183301646

   Terrific, thank you!




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566347#comment-17566347
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

shangxinli commented on PR #963:
URL: https://github.com/apache/parquet-mr/pull/963#issuecomment-1183298278

   Merged. Thanks again! 




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566346#comment-17566346
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

shangxinli merged PR #963:
URL: https://github.com/apache/parquet-mr/pull/963




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566326#comment-17566326
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

dossett commented on PR #963:
URL: https://github.com/apache/parquet-mr/pull/963#issuecomment-1183261724

   Thank you @shangxinli !  Do you want to merge it now or closer to the next 
release?




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2165) remove deprecated PathGlobPattern and DeprecatedFieldProjectionFilter to compile on hadoop 3.2+

2022-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565884#comment-17565884
 ] 

ASF GitHub Bot commented on PARQUET-2165:
-

steveloughran opened a new pull request, #979:
URL: https://github.com/apache/parquet-mr/pull/979

   
   
   Remove the deprecated classes PathGlobPattern and
   DeprecatedFieldProjectionFilter so that Parquet will
   compile against hadoop 3.x.
   
   If a thrift reader is configured to use the now-deleted filter,
   by setting the filter in "parquet.thrift.column.filter",
   a ThriftProjectionException will be thrown.
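   For illustration, a minimal sketch of the configuration path that now fails; the glob value below is hypothetical, and only the property name comes from the change described above:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class DeprecatedThriftFilterExample {
     public static void main(String[] args) {
       // Sketch: a job that still sets the deprecated filter property will now
       // fail with ThriftProjectionException at read time; the glob is made up.
       Configuration conf = new Configuration();
       conf.set("parquet.thrift.column.filter", "some/column/glob");
     }
   }
   ```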
   
   
   ### Jira
   
   - [X] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-XXX
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [X] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
   
   It modifies the test `TestParquetToThriftReadWriteAndProjection` to switch 
to the strict filter in all test cases where the old one was being used.
   
   * These tests now all fail with `ThriftProjectionException: No columns have 
been selected`.
   
   I could cut the tests as "obsolete", but it seems to me that moving the 
tests to the strict filter would be better. I will just need help doing this.
   
   ### Commits
   
   - [X] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [ ] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> remove deprecated PathGlobPattern and DeprecatedFieldProjectionFilter to 
> compile on hadoop 3.2+
> ---
>
> Key: PARQUET-2165
> URL: https://issues.apache.org/jira/browse/PARQUET-2165
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-thrift
>Affects Versions: 1.12.3
>Reporter: Steve Loughran
>Priority: Major
>
> remove the deprecated PathGlobPattern class and its uses from parquet-thrift
> The return types from the hadoop  GlobPattern code changed in HADOOP-12436; 
> in the class as is will not compile against hadoop 3.x
> Parquet releases compiled against hadoop 2.x will not be able to instantiate 
> these classes on a hadoop 3 release, because things will not link.
> Nobody appears to have complained about the linkage problem to the extent of 
> filing a JIRA. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565427#comment-17565427
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran commented on PR #951:
URL: https://github.com/apache/parquet-mr/pull/951#issuecomment-1181629067

   Thanks. Created 
[HADOOP-18336](https://issues.apache.org/jira/browse/HADOOP-18336)
   to tag FSDataInputStream.getWrappedStream() as @Public/@Stable, to make sure that 
hadoop code knows external libs may be calling the method.




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> Suppose we create an FSDataInputStream whose underlying stream does not 
> implement ByteBufferReadable, and then create a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by recursively checking the underlying stream of the 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-07-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17563878#comment-17563878
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

steveloughran commented on PR #970:
URL: https://github.com/apache/parquet-mr/pull/970#issuecomment-1177929495

   oh, upgrading is better!




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562229#comment-17562229
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

7c00 commented on PR #951:
URL: https://github.com/apache/parquet-mr/pull/951#issuecomment-1173982766

   @shangxinli Thank you for reminding me. I have squashed the PR and added 
@steveloughran as the co-author.




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> Suppose we create an FSDataInputStream whose underlying stream does not 
> implement ByteBufferReadable, and then create a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by recursively checking the underlying stream of the 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17562101#comment-17562101
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

mwong38 commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r912817641


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoSchemaConverter.java:
##
@@ -97,6 +127,46 @@ public MessageType convert(Class 
protobufClass) {
 
   private  Builder>, GroupBuilder> 
addField(FieldDescriptor descriptor, final GroupBuilder builder) {
 if (descriptor.getJavaType() == JavaType.MESSAGE) {
+  if (unwrapProtoWrappers) {
+String typeName = descriptor.getMessageType().getFullName();
+if (typeName.equals(PROTOBUF_TIMESTAMP_TYPE)) {
+  return builder.primitive(INT64, 
getRepetition(descriptor)).as(timestampType(true, TimeUnit.NANOS));

Review Comment:
   Isn't that outside the scope of ProtoSchemaConverter/ParquetWriter? I don't 
think it should go into the business of doing transformations. If the point of 
`ProtoSchemaConverter` is to give the closest representation of the Protobuf 
object in Parquet, then it should be NANOS and nothing more.





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561973#comment-17561973
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

shangxinli commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r912586665


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoSchemaConverter.java:
##
@@ -97,6 +127,46 @@ public MessageType convert(Class 
protobufClass) {
 
   private  Builder>, GroupBuilder> 
addField(FieldDescriptor descriptor, final GroupBuilder builder) {
 if (descriptor.getJavaType() == JavaType.MESSAGE) {
+  if (unwrapProtoWrappers) {
+String typeName = descriptor.getMessageType().getFullName();
+if (typeName.equals(PROTOBUF_TIMESTAMP_TYPE)) {
+  return builder.primitive(INT64, 
getRepetition(descriptor)).as(timestampType(true, TimeUnit.NANOS));

Review Comment:
   In that case, you can use the default 'TimeUnit.NANOS' while having that 
configured. 





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561948#comment-17561948
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

mwong38 commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r912547741


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoSchemaConverter.java:
##
@@ -97,6 +127,46 @@ public MessageType convert(Class 
protobufClass) {
 
   private  Builder>, GroupBuilder> 
addField(FieldDescriptor descriptor, final GroupBuilder builder) {
 if (descriptor.getJavaType() == JavaType.MESSAGE) {
+  if (unwrapProtoWrappers) {
+String typeName = descriptor.getMessageType().getFullName();

Review Comment:
   That's a good point. I'll change it to compare the `Descriptor` directly to 
the Wrapper's Descriptor rather than the name.





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561946#comment-17561946
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

mwong38 commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r912545610


##
parquet-protobuf/src/main/java/org/apache/parquet/proto/ProtoSchemaConverter.java:
##
@@ -97,6 +127,46 @@ public MessageType convert(Class 
protobufClass) {
 
   private  Builder>, GroupBuilder> 
addField(FieldDescriptor descriptor, final GroupBuilder builder) {
 if (descriptor.getJavaType() == JavaType.MESSAGE) {
+  if (unwrapProtoWrappers) {
+String typeName = descriptor.getMessageType().getFullName();
+if (typeName.equals(PROTOBUF_TIMESTAMP_TYPE)) {
+  return builder.primitive(INT64, 
getRepetition(descriptor)).as(timestampType(true, TimeUnit.NANOS));

Review Comment:
   I don't think it's worth complicating the API. The Timestamp common Proto 
stores time in nanoseconds. There's no good reason to deviate from that or to 
truncate the resolution. If the user wishes to do more manipulation, it can be 
done downstream.
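   As a rough illustration of the unwrapping under discussion (this is not the PR's code; it only assumes the `Timestamps` helper from protobuf-java-util, which this PR adds as a dependency):
   
   ```java
   import com.google.protobuf.Timestamp;
   import com.google.protobuf.util.Timestamps;
   
   public class TimestampNanosExample {
     public static void main(String[] args) {
       // A google.protobuf.Timestamp carries (seconds, nanos); unwrapping it to a
       // single INT64 of nanoseconds since the epoch matches timestampType(true, NANOS).
       Timestamp ts = Timestamp.newBuilder().setSeconds(1_656_849_600L).setNanos(123).build();
       long nanos = Timestamps.toNanos(ts); // seconds * 1_000_000_000 + nanos
       System.out.println(nanos);
     }
   }
   ```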





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2042) Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay

2022-07-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561945#comment-17561945
 ] 

ASF GitHub Bot commented on PARQUET-2042:
-

mwong38 commented on code in PR #900:
URL: https://github.com/apache/parquet-mr/pull/900#discussion_r912545477


##
parquet-protobuf/pom.xml:
##
@@ -57,6 +58,16 @@
     <artifactId>protobuf-java</artifactId>
     <version>${protobuf.version}</version>
   </dependency>
+  <dependency>
+    <groupId>com.google.protobuf</groupId>
+    <artifactId>protobuf-java-util</artifactId>
+    <version>${protobuf.version}</version>
+  </dependency>
+  <dependency>
+    <groupId>com.google.api.grpc</groupId>
+    <artifactId>proto-google-common-protos</artifactId>

Review Comment:
   If you want the pre-packaged common Proto classes, you need to include this.





> Unwrap common Protobuf wrappers and logical Timestamps, Date, TimeOfDay
> ---
>
> Key: PARQUET-2042
> URL: https://issues.apache.org/jira/browse/PARQUET-2042
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Reporter: Michael Wong
>Priority: Major
>
> Related to https://issues.apache.org/jira/browse/PARQUET-1595



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561780#comment-17561780
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

sunchao commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1172949094

   updated




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2138) Add ShowBloomFilterCommand to parquet-cli

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561770#comment-17561770
 ] 

ASF GitHub Bot commented on PARQUET-2138:
-

shangxinli commented on PR #958:
URL: https://github.com/apache/parquet-mr/pull/958#issuecomment-1172927105

   Let's merge it now and we can add column decryption later. 




> Add ShowBloomFilterCommand to parquet-cli
> -
>
> Key: PARQUET-2138
> URL: https://issues.apache.org/jira/browse/PARQUET-2138
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-cli
>Reporter: EdisonWang
>Priority: Minor
>
> Add ShowBloomFilterCommand to parquet-cli, which can check whether given 
> values of a column match the column's bloom filter
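A rough sketch of the kind of check such a command performs against a file's bloom filters; the parquet-hadoop calls below are written from memory and should be treated as an approximation, not as the new command's code:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.values.bloomfilter.BloomFilter;
import org.apache.parquet.hadoop.BloomFilterReader;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.io.api.Binary;

public class BloomFilterCheckSketch {
  public static void main(String[] args) throws Exception {
    Path file = new Path("example.parquet"); // example file name
    try (ParquetFileReader reader = ParquetFileReader.open(
        HadoopInputFile.fromPath(file, new Configuration()))) {
      for (BlockMetaData block : reader.getFooter().getBlocks()) {
        BloomFilterReader bloomFilters = reader.getBloomFilterDataReader(block);
        for (ColumnChunkMetaData column : block.getColumns()) {
          BloomFilter bloom = bloomFilters.readBloomFilter(column);
          if (bloom == null) {
            continue; // no bloom filter written for this column chunk
          }
          // "might contain" check: false means the value is definitely absent
          boolean maybe = bloom.findHash(bloom.hash(Binary.fromString("some-value")));
          System.out.println(column.getPath() + ": maybe contains = " + maybe);
        }
      }
    }
  }
}
{code}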



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2138) Add ShowBloomFilterCommand to parquet-cli

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561771#comment-17561771
 ] 

ASF GitHub Bot commented on PARQUET-2138:
-

shangxinli merged PR #958:
URL: https://github.com/apache/parquet-mr/pull/958




> Add ShowBloomFilterCommand to parquet-cli
> -
>
> Key: PARQUET-2138
> URL: https://issues.apache.org/jira/browse/PARQUET-2138
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-cli
>Reporter: EdisonWang
>Priority: Minor
>
> Add ShowBloomFilterCommand to parquet-cli, which can check whether given 
> values of a column match the column's bloom filter



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-1020) Add support for Dynamic Messages in parquet-protobuf

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-1020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561769#comment-17561769
 ] 

ASF GitHub Bot commented on PARQUET-1020:
-

shangxinli commented on PR #963:
URL: https://github.com/apache/parquet-mr/pull/963#issuecomment-1172925425

   Sorry for the late response and thank you @guillaume-fetter and @dossett for 
the contribution. Yeah, it seems low risk and LGTM. 




> Add support for Dynamic Messages in parquet-protobuf
> 
>
> Key: PARQUET-1020
> URL: https://issues.apache.org/jira/browse/PARQUET-1020
> Project: Parquet
>  Issue Type: New Feature
>  Components: parquet-protobuf
>Reporter: Alex Buck
>Assignee: Alex Buck
>Priority: Major
>
> Hello. We would like to pass in a DynamicMessage rather than using the 
> generated protobuf classes to allow us to make our job very generic. 
> I think this could be achieved by setting the descriptor upfront, similarly 
> to how there is a ProtoParquetOutputFormat today.
> In ProtoWriteSupport in the init method it could then generate the parquet 
> schema created by ProtoSchemaConverter using the passed in descriptor, rather 
> than taking it from the generated proto class.
> Would there be interest in incorporating this change? If so does the approach 
> above sound sensible? I am happy to do a pull request
> initial PR here: https://github.com/apache/parquet-mr/pull/414



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561766#comment-17561766
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

shangxinli commented on PR #951:
URL: https://github.com/apache/parquet-mr/pull/951#issuecomment-1172924045

   @7c00 and @steveloughran Thank both of you for the great contribution! This 
PR comes from two authors. Can @7c00 add @steveloughran as the co-author to 
this PR? 
[This](https://github.blog/2018-01-29-commit-together-with-co-authors/) is an 
example. 




> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> Suppose we create an FSDataInputStream whose underlying stream does not 
> implement ByteBufferReadable, and then create a CustomDataInputStream with 
> it. If we use HadoopStreams.wrap to create a SeekableInputStream, we may get 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by recursively checking the underlying stream of the 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561764#comment-17561764
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

shangxinli commented on PR #970:
URL: https://github.com/apache/parquet-mr/pull/970#issuecomment-1172920665

   @steveloughran Thanks for the explanation!  Do you have concerns if we use 
[PR-973](https://github.com/apache/parquet-mr/pull/973)?  It seems we can rely 
on proto-buf itself to solve the issue.
   
   @sunchao please add @steveloughran as co-author for your PR. 
   
   




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2155) Upgrade protobuf version to 3.20.1

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561763#comment-17561763
 ] 

ASF GitHub Bot commented on PARQUET-2155:
-

shangxinli commented on PR #973:
URL: https://github.com/apache/parquet-mr/pull/973#issuecomment-1172920087

   Yeah, we can do 3.20.1 later.  




> Upgrade protobuf version to 3.20.1
> --
>
> Key: PARQUET-2155
> URL: https://issues.apache.org/jira/browse/PARQUET-2155
> Project: Parquet
>  Issue Type: Improvement
>Reporter: Chao Sun
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2156) Column bloom filter: Show bloom filters in tools

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561761#comment-17561761
 ] 

ASF GitHub Bot commented on PARQUET-2156:
-

shangxinli commented on PR #974:
URL: https://github.com/apache/parquet-mr/pull/974#issuecomment-1172919112

   @panbingkun Did you check 
[PR-958](https://github.com/apache/parquet-mr/pull/958) ?




> Column bloom filter: Show bloom filters in tools
> 
>
> Key: PARQUET-2156
> URL: https://issues.apache.org/jira/browse/PARQUET-2156
> Project: Parquet
>  Issue Type: Improvement
>Reporter: BingKun Pan
>Priority: Minor
>
> command result is as follows:
> parquet-tools bloom-filter BloomFilter.snappy.parquet
> row-group 0:
> bloom filter for column id:
> NONE
> bloom filter for column uuid:
> Hash strategy: block
> Algorithm: block
> Compression: uncompressed
> Bitset size: 1048576



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-07-02 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17561760#comment-17561760
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

shangxinli commented on PR #976:
URL: https://github.com/apache/parquet-mr/pull/976#issuecomment-1172918478

   LGTM




> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-06-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17560629#comment-17560629
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ggershinsky commented on PR #978:
URL: https://github.com/apache/parquet-mr/pull/978#issuecomment-1170396062

   Thanks @ala 




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.
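A tiny sketch of the numbering the fix restores, assuming only what the description above states (each row group's first row index is the sum of the row counts of the groups before it, regardless of which groups a metadata filter later keeps):
{code:java}
public class RowIndexOffsets {
  public static void main(String[] args) {
    // Two row groups of 10 rows each: group 0 starts at row index 0 and
    // group 1 at row index 10, even if group 0 is later filtered out.
    long[] rowCounts = {10L, 10L};
    long[] firstRowIndex = new long[rowCounts.length];
    long running = 0;
    for (int i = 0; i < rowCounts.length; i++) {
      firstRowIndex[i] = running;
      running += rowCounts[i];
    }
    System.out.println(firstRowIndex[0] + ", " + firstRowIndex[1]); // 0, 10
  }
}
{code}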



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-06-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17560630#comment-17560630
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ggershinsky merged PR #978:
URL: https://github.com/apache/parquet-mr/pull/978




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-06-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17559120#comment-17559120
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

steveloughran commented on PR #976:
URL: https://github.com/apache/parquet-mr/pull/976#issuecomment-1167200968

   I will do a separate PR to remove `PathGlobPattern`; not this week though. 
   
   It is used in DeprecatedFieldProjectionFilter, and that is used in 
org.apache.parquet.hadoop.thrift.ThriftReadSupport if 
"parquet.thrift.column.filter" is set. That use would have to be cut and, rather 
than just printing a deprecation warning, actually fail.
   
   Nobody must be using this on anything with ASF hadoop binaries 3.2+, or they 
would have complained about linkage errors by now. 




> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-06-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558552#comment-17558552
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ala commented on PR #978:
URL: https://github.com/apache/parquet-mr/pull/978#issuecomment-1165708679

   cc @ggershinsky
   




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-06-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558442#comment-17558442
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

steveloughran commented on code in PR #976:
URL: https://github.com/apache/parquet-mr/pull/976#discussion_r905967944


##
pom.xml:
##
@@ -76,7 +76,7 @@
 2.13.2.2
 0.14.2
 shaded.parquet
-2.10.1
+3.2.0

Review Comment:
   I was being unambitious. Moving to this, the oldest 3.x release working on 
java11, ensures that anything else on a version >= this should link properly.
   
   If you do want to be more current, well, spark is on 3.3.3, hive is trying 
to move to 3.3.x, and I will be doing a 3.3.4 release in a week's time, which is 
just some security changes, mostly of relevance to servers.





> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2158) Upgrade Hadoop dependency to version 3.2.0

2022-06-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558440#comment-17558440
 ] 

ASF GitHub Bot commented on PARQUET-2158:
-

steveloughran commented on code in PR #976:
URL: https://github.com/apache/parquet-mr/pull/976#discussion_r905965620


##
parquet-thrift/src/main/java/org/apache/parquet/thrift/projection/deprecated/PathGlobPattern.java:
##
@@ -20,8 +20,8 @@
 
 import org.apache.hadoop.fs.GlobPattern;
 
-import java.util.regex.Pattern;
-import java.util.regex.PatternSyntaxException;
+import com.google.re2j.Pattern;

Review Comment:
   +1 for cutting. I will update the patch.





> Upgrade Hadoop dependency to version 3.2.0
> --
>
> Key: PARQUET-2158
> URL: https://issues.apache.org/jira/browse/PARQUET-2158
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> Parquet still builds against Hadoop 2.10. This is very out of date and does 
> not work with java 11, let alone later releases.
> Upgrading the dependency to Hadoop 3.2.0 makes the release compatible with 
> java 11, and lines up with active work on  HADOOP-18287,  _Provide a shim 
> library for modern FS APIs_ 
> This will significantly speed up access to columnar data, especially  in 
> cloud stores.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-06-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558001#comment-17558001
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ala commented on PR #978:
URL: https://github.com/apache/parquet-mr/pull/978#issuecomment-1164263841

   cc @shangxinli This is a small follow-up bug fix for 
https://github.com/apache/parquet-mr/pull/945




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557430#comment-17557430
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903697361


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {
+private final Chunk chunk;
+private final int currentBlock;
+private final BlockCipher.Decryptor headerBlockDecryptor;
+private final BlockCipher.Decryptor pageBlockDecryptor;
+private final byte[] aadPrefix;
+private final int rowGroupOrdinal;
+private final int columnOrdinal;
+
+//state
+private final LinkedBlockingDeque> pagesInChunk = new 
LinkedBlockingDeque<>();
+private DictionaryPage dictionaryPage = null;
+private int pageIndex = 0;
+private long valuesCountReadSoFar = 0;
+private int dataPageCountReadSoFar = 0;
+
+// derived
+private final PrimitiveType type;
+private final byte[] dataPageAAD;
+private final byte[] dictionaryPageAAD;

Review Comment:
   probably not needed





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.
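A heavily simplified sketch of the producer/consumer shape described above, with all Parquet specifics replaced by placeholders (none of this is the PR's code):
{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncReadSketch {
  private static final byte[] END_OF_CHUNK = new byte[0];

  public static void main(String[] args) throws Exception {
    BlockingQueue<byte[]> pages = new ArrayBlockingQueue<>(16);
    ExecutorService ioPool = Executors.newCachedThreadPool();
    ExecutorService processPool = Executors.newCachedThreadPool();

    // I/O task: stands in for "read the pages of one column chunk from storage".
    ioPool.submit(() -> {
      for (int i = 0; i < 100; i++) {
        pages.put(new byte[1024]);
      }
      pages.put(END_OF_CHUNK);
      return null;
    });

    // Processing task: stands in for downstream decompression + decoding,
    // which now overlaps with the reads instead of waiting for all of them.
    processPool.submit(() -> {
      byte[] page;
      while ((page = pages.take()) != END_OF_CHUNK) {
        // decompress + decode one page here
      }
      return null;
    }).get();

    ioPool.shutdown();
    processPool.shutdown();
  }
}
{code}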



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557425#comment-17557425
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903693351


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {
+private final Chunk chunk;
+private final int currentBlock;
+private final BlockCipher.Decryptor headerBlockDecryptor;
+private final BlockCipher.Decryptor pageBlockDecryptor;
+private final byte[] aadPrefix;
+private final int rowGroupOrdinal;
+private final int columnOrdinal;
+
+//state
+private final LinkedBlockingDeque> pagesInChunk = new 
LinkedBlockingDeque<>();
+private DictionaryPage dictionaryPage = null;
+private int pageIndex = 0;
+private long valuesCountReadSoFar = 0;
+private int dataPageCountReadSoFar = 0;
+
+// derived
+private final PrimitiveType type;
+private final byte[] dataPageAAD;
+private final byte[] dictionaryPageAAD;
+private byte[] dataPageHeaderAAD = null;
+
+private final BytesInputDecompressor decompressor;
+
+private final ConcurrentLinkedQueue> readFutures = new 
ConcurrentLinkedQueue<>();
+
+private final LongAdder totalTimeReadOnePage = new LongAdder();
+private final LongAdder totalCountReadOnePage = new LongAdder();
+private final LongAccumulator maxTimeReadOnePage = new 
LongAccumulator(Long::max, 0L);
+private final LongAdder totalTimeBlockedPagesInChunk = new LongAdder();
+private final LongAdder totalCountBlockedPagesInChunk = new LongAdder();
+private final LongAccumulator maxTimeBlockedPagesInChunk = new 
LongAccumulator(Long::max, 0L);
+
+public PageReader(Chunk chunk, int currentBlock, Decryptor 
headerBlockDecryptor,
+  Decryptor pageBlockDecryptor, byte[] aadPrefix, int rowGroupOrdinal, int 
columnOrdinal,
+  BytesInputDecompressor decompressor
+  ) {
+  this.chunk = chunk;
+  this.currentBlock = currentBlock;
+  this.headerBlockDecryptor = headerBlockDecryptor;
+  this.pageBlockDecryptor = pageBlockDecryptor;
+  this.aadPrefix = aadPrefix;
+  this.rowGroupOrdinal = rowGroupOrdinal;
+  this.columnOrdinal = columnOrdinal;
+  this.decompressor = decompressor;
+
+  this.type = getFileMetaData().getSchema()
+.getType(chunk.descriptor.col.getPath()).asPrimitiveType();
+
+  if (null != headerBlockDecryptor) {
+dataPageHeaderAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DataPageHeader,
+  rowGroupOrdinal,
+  columnOrdinal, chunk.getPageOrdinal(dataPageCountReadSoFar));
+  }
+  if (null != pageBlockDecryptor) {
+dataPageAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DataPage, rowGroupOrdinal,
+  columnOrdinal, 0);
+dictionaryPageAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DictionaryPage,

Review Comment:
   Yep, the `dictionaryPageAAD` is not necessary here. This is a significant 
code change, more than just moving the current logic of
   ```java
   public ColumnChunkPageReader readAllPages(BlockCipher.Decryptor 
headerBlockDecryptor, BlockCipher.Decryptor pageBlockDecryptor, byte[] 
aadPrefix, int rowGroupOrdinal, int columnOrdinal)
   ```
   
   I'll have a closer look at the details, but we need a unit test (proposed in 
my other comment) to make sure decryption works OK with async io and parallel 
column reading.





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> 

[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557375#comment-17557375
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903595526


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {

Review Comment:
   maybe can also be separated from the ParquetFileReader, this is a chance to 
reduce the size of the latter :)





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557373#comment-17557373
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903592957


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -126,6 +127,42 @@ public class ParquetFileReader implements Closeable {
 
   public static String PARQUET_READ_PARALLELISM = 
"parquet.metadata.read.parallelism";
 
+  public static int numProcessors = Runtime.getRuntime().availableProcessors();
+
+  // Thread pool to read column chunk data from disk. Applications should call 
setAsyncIOThreadPool
+  // to initialize this with their own implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService ioThreadPool = Executors.newCachedThreadPool(
+r -> new Thread(r, "parquet-io"));
+
+  // Thread pool to process pages for multiple columns in parallel. 
Applications should call
+  // setAsyncProcessThreadPool to initialize this with their own 
implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService processThreadPool = 
Executors.newCachedThreadPool(

Review Comment:
   given the comment "Default initialization is useful only for testing", maybe 
this can be moved to the tests?
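   For illustration, a minimal sketch of how a calling application would wire in 
its own pools via the `setAsyncIOThreadPool`/`setAsyncProcessThreadPool` setters 
mentioned in the comments above (the pool sizes here are only examples):
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   ExecutorService ioPool = Executors.newFixedThreadPool(4, r -> new Thread(r, "parquet-io"));
   ExecutorService processPool = Executors.newFixedThreadPool(
       Runtime.getRuntime().availableProcessors(), r -> new Thread(r, "parquet-process"));
   ParquetFileReader.setAsyncIOThreadPool(ioPool);
   ParquetFileReader.setAsyncProcessThreadPool(processPool);
   ```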





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557369#comment-17557369
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903469988


##
parquet-hadoop/src/main/java/org/apache/parquet/crypto/InternalFileDecryptor.java:
##
@@ -61,10 +61,7 @@ public InternalFileDecryptor(FileDecryptionProperties 
fileDecryptionProperties)
 
   private BlockCipher.Decryptor getThriftModuleDecryptor(byte[] columnKey) {
 if (null == columnKey) { // Decryptor with footer key
-  if (null == aesGcmDecryptorWithFooterKey) {
-aesGcmDecryptorWithFooterKey = 
ModuleCipherFactory.getDecryptor(AesMode.GCM, footerKey);
-  }
-  return aesGcmDecryptorWithFooterKey;
+  return ModuleCipherFactory.getDecryptor(AesMode.GCM, footerKey);

Review Comment:
   could you add a unit test of decryption with async IO and the parallel column 
reader, e.g. in 
https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/test/java/org/apache/parquet/crypto/TestPropertiesDrivenEncryption.java#L507



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -126,6 +127,42 @@ public class ParquetFileReader implements Closeable {
 
   public static String PARQUET_READ_PARALLELISM = 
"parquet.metadata.read.parallelism";
 
+  public static int numProcessors = Runtime.getRuntime().availableProcessors();
+
+  // Thread pool to read column chunk data from disk. Applications should call 
setAsyncIOThreadPool
+  // to initialize this with their own implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService ioThreadPool = Executors.newCachedThreadPool(
+r -> new Thread(r, "parquet-io"));
+
+  // Thread pool to process pages for multiple columns in parallel. 
Applications should call
+  // setAsyncProcessThreadPool to initialize this with their own 
implementations.
+  // Default initialization is useful only for testing
+  public static ExecutorService processThreadPool = 
Executors.newCachedThreadPool(

Review Comment:
   should we be creating thread pools if the Async IO and parallel column 
reading are not activated? 
   (here and in the line 135)



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {

Review Comment:
   we already have a PageReader (interface). Could you rename this class.



##
parquet-common/src/main/java/org/apache/parquet/bytes/AsyncMultiBufferInputStream.java:
##
@@ -0,0 +1,158 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing,
+ *  software distributed under the License is distributed on an
+ *  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ *  KIND, either express or implied.  See the License for the
+ *  specific language governing permissions and limitations
+ *  under the License.
+ */
+
+package org.apache.parquet.bytes;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.LongAccumulator;
+import java.util.concurrent.atomic.LongAdder;
+import org.apache.parquet.io.SeekableInputStream;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class AsyncMultiBufferInputStream extends MultiBufferInputStream {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(AsyncMultiBufferInputStream.class);
+
+  private int fetchIndex = 0;
+  private final SeekableInputStream fileInputStream;
+  private int readIndex = 0;
+  private ExecutorService threadPool;
+  private LinkedBlockingQueue<Future<Void>> readFutures;
+  private boolean closed = false;
+
+  private LongAdder 

[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557307#comment-17557307
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r903403821


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {
+private final Chunk chunk;
+private final int currentBlock;
+private final BlockCipher.Decryptor headerBlockDecryptor;
+private final BlockCipher.Decryptor pageBlockDecryptor;
+private final byte[] aadPrefix;
+private final int rowGroupOrdinal;
+private final int columnOrdinal;
+
+//state
+private final LinkedBlockingDeque<Optional<DataPage>> pagesInChunk = new 
LinkedBlockingDeque<>();
+private DictionaryPage dictionaryPage = null;
+private int pageIndex = 0;
+private long valuesCountReadSoFar = 0;
+private int dataPageCountReadSoFar = 0;
+
+// derived
+private final PrimitiveType type;
+private final byte[] dataPageAAD;
+private final byte[] dictionaryPageAAD;
+private byte[] dataPageHeaderAAD = null;
+
+private final BytesInputDecompressor decompressor;
+
+private final ConcurrentLinkedQueue<Future<Void>> readFutures = new 
ConcurrentLinkedQueue<>();
+
+private final LongAdder totalTimeReadOnePage = new LongAdder();
+private final LongAdder totalCountReadOnePage = new LongAdder();
+private final LongAccumulator maxTimeReadOnePage = new 
LongAccumulator(Long::max, 0L);
+private final LongAdder totalTimeBlockedPagesInChunk = new LongAdder();
+private final LongAdder totalCountBlockedPagesInChunk = new LongAdder();
+private final LongAccumulator maxTimeBlockedPagesInChunk = new 
LongAccumulator(Long::max, 0L);
+
+public PageReader(Chunk chunk, int currentBlock, Decryptor 
headerBlockDecryptor,
+  Decryptor pageBlockDecryptor, byte[] aadPrefix, int rowGroupOrdinal, int 
columnOrdinal,
+  BytesInputDecompressor decompressor
+  ) {
+  this.chunk = chunk;
+  this.currentBlock = currentBlock;
+  this.headerBlockDecryptor = headerBlockDecryptor;
+  this.pageBlockDecryptor = pageBlockDecryptor;
+  this.aadPrefix = aadPrefix;
+  this.rowGroupOrdinal = rowGroupOrdinal;
+  this.columnOrdinal = columnOrdinal;
+  this.decompressor = decompressor;
+
+  this.type = getFileMetaData().getSchema()
+.getType(chunk.descriptor.col.getPath()).asPrimitiveType();
+
+  if (null != headerBlockDecryptor) {
+dataPageHeaderAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DataPageHeader,
+  rowGroupOrdinal,
+  columnOrdinal, chunk.getPageOrdinal(dataPageCountReadSoFar));
+  }
+  if (null != pageBlockDecryptor) {
+dataPageAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DataPage, rowGroupOrdinal,
+  columnOrdinal, 0);
+dictionaryPageAAD = AesCipher.createModuleAAD(aadPrefix, 
ModuleType.DictionaryPage,

Review Comment:
   sure





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> 

[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-22 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557305#comment-17557305
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

ggershinsky commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r90337


##
parquet-common/src/main/java/org/apache/parquet/bytes/AsyncMultiBufferInputStream.java:
##
@@ -0,0 +1,158 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing,
+ *  software distributed under the License is distributed on an
+ *  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ *  KIND, either express or implied.  See the License for the
+ *  specific language governing permissions and limitations
+ *  under the License.
+ */
+
+package org.apache.parquet.bytes;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.LongAccumulator;
+import java.util.concurrent.atomic.LongAdder;
+import org.apache.parquet.io.SeekableInputStream;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+class AsyncMultiBufferInputStream extends MultiBufferInputStream {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(AsyncMultiBufferInputStream.class);
+
+  private int fetchIndex = 0;
+  private final SeekableInputStream fileInputStream;
+  private int readIndex = 0;
+  private ExecutorService threadPool;
+  private LinkedBlockingQueue<Future<Void>> readFutures;
+  private boolean closed = false;
+
+  private LongAdder totalTimeBlocked = new LongAdder();
+  private LongAdder totalCountBlocked = new LongAdder();
+  private LongAccumulator maxTimeBlocked = new LongAccumulator(Long::max, 0L);
+
+  AsyncMultiBufferInputStream(ExecutorService threadPool, SeekableInputStream 
fileInputStream,
+List<ByteBuffer> buffers) {
+super(buffers);
+this.fileInputStream = fileInputStream;
+this.threadPool = threadPool;
+readFutures = new LinkedBlockingQueue<>(buffers.size());
+if (LOG.isDebugEnabled()) {
+  LOG.debug("ASYNC: Begin read into buffers ");
+  for (ByteBuffer buf : buffers) {
+LOG.debug("ASYNC: buffer {} ", buf);
+  }
+}
+fetchAll();
+  }
+
+  private void checkState() {
+if (closed) {
+  throw new RuntimeException("Stream is closed");
+}
+  }
+
+  private void fetchAll() {
+checkState();
+submitReadTask(0);
+  }
+
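+  // Submits an async read for buffer bufferNo; when that read completes, the task
+  // chains the read for the next buffer, so buffers fill sequentially in the
+  // background while callers consume the ones that are already complete.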
+  private void submitReadTask(int bufferNo) {
+ByteBuffer buffer = buffers.get(bufferNo);
+try {
+  readFutures.put(threadPool.submit(() -> {
+  readOneBuffer(buffer);
+  if (bufferNo < buffers.size() - 1) {
+submitReadTask(bufferNo + 1);
+  }
+  return null;
+})
+  );
+} catch (InterruptedException e) {
+  Thread.currentThread().interrupt();
+  throw new RuntimeException(e);
+}
+  }
+
+  private void readOneBuffer(ByteBuffer buffer) {
+long startTime = System.nanoTime();
+try {
+  fileInputStream.readFully(buffer);
+  buffer.flip();
+  long readCompleted = System.nanoTime();
+  long timeSpent = readCompleted - startTime;
+  LOG.debug("ASYNC Stream: READ - {}", timeSpent / 1000.0);
+  fetchIndex++;
+} catch (IOException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  @Override
+  public boolean nextBuffer() {
+checkState();
+// hack: parent constructor can call this method before this class is 
fully initialized.
+// Just return without doing anything.
+if (readFutures == null) {
+  return false;
+}
+if (readIndex < buffers.size()) {
+  long start = System.nanoTime();
+  try {
+LOG.debug("ASYNC (next): Getting next buffer");
+Future<Void> future = readFutures.take();
+future.get();
+long timeSpent = System.nanoTime() - start;
+totalCountBlocked.add(1);
+totalTimeBlocked.add(timeSpent);
+maxTimeBlocked.accumulate(timeSpent);
+LOG.debug("ASYNC (next): {}: Time blocked for read {} ns", this, 
timeSpent);

Review Comment:
   should `if (LOG.isDebugEnabled()) {` be added here and in line 118? This check 
is performed in the constructor (line 58); `nextBuffer()` is called with the 
same or higher frequency.
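   For reference, the guarded form being suggested (sketch only):
   ```java
   if (LOG.isDebugEnabled()) {
     LOG.debug("ASYNC (next): {}: Time blocked for read {} ns", this, timeSpent);
   }
   ```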





> Implement async 

[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556480#comment-17556480
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

parthchandra commented on PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#issuecomment-1160693399

   @shangxinli Thank you for the review! I'll address these comments asap.
   I am reviewing the thread pool and its initialization. IMO, it is better if 
there is no default initialization of the pool and the calling 
application/framework does so explicitly. One side effect of the default 
initialization is that the pool is created unnecessarily even if async is off. 
Also, if an application, shades and includes another copy of the library (or 
transitively, many more), then one more thread pool gets created for every 
version of the library included. 
   It is probably a better idea to allow the thread pool to be assigned as a 
per instance variable. The calling application can then decide to use a single 
pool for all instances or a new one per instance whichever use case is better 
for their performance.
   Finally, some large scale testing has revealed a possible resource leak. I'm 
looking into addressing it. 
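   A rough sketch of the per-instance alternative (the builder hook named below is 
hypothetical, not part of this PR):
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   // One pool owned by the calling framework and shared across all reader instances...
   ExecutorService sharedIoPool = Executors.newFixedThreadPool(8, r -> new Thread(r, "parquet-io"));
   // ...passed to each reader instead of living in a static field, e.g. via a
   // hypothetical options/builder hook like withIOThreadPool(sharedIoPool);
   // the caller then owns the pool lifecycle and calls sharedIoPool.shutdown() when done.
   ```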




> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556467#comment-17556467
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

steveloughran commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r901859862


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -126,6 +127,42 @@ public class ParquetFileReader implements Closeable {
 
   public static String PARQUET_READ_PARALLELISM = 
"parquet.metadata.read.parallelism";
 
+  public static int numProcessors = Runtime.getRuntime().availableProcessors();

Review Comment:
   dynamically changing the number of threads/buffer sizes/cache sizes based on 
the host is a recurrent source of pain in past work: once you get to 128-core 
systems, processes often end up asking for too much of a limited resource.
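   A common mitigation (sketch only, not from this PR) is to cap anything derived 
from the core count:
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   // Cap the pool size so a 128-core host does not turn into 128 IO threads per library copy.
   int parallelism = Math.min(Runtime.getRuntime().availableProcessors(), 8); // 8 is an arbitrary example cap
   ExecutorService ioPool = Executors.newFixedThreadPool(parallelism, r -> new Thread(r, "parquet-io"));
   ```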





> Implement async IO for Parquet file reader
> --
>
> Key: PARQUET-2149
> URL: https://issues.apache.org/jira/browse/PARQUET-2149
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-mr
>Reporter: Parth Chandra
>Priority: Major
>
> ParquetFileReader's implementation has the following flow (simplified) - 
>       - For every column -> Read from storage in 8MB blocks -> Read all 
> uncompressed pages into output queue 
>       - From output queues -> (downstream ) decompression + decoding
> This flow is serialized, which means that downstream threads are blocked 
> until the data has been read. Because a large part of the time spent is 
> waiting for data from storage, threads are idle and CPU utilization is really 
> low.
> There is no reason why this cannot be made asynchronous _and_ parallel. So 
> For Column _i_ -> reading one chunk until end, from storage -> intermediate 
> output queue -> read one uncompressed page until end -> output queue -> 
> (downstream ) decompression + decoding
> Note that this can be made completely self contained in ParquetFileReader and 
> downstream implementations like Iceberg and Spark will automatically be able 
> to take advantage without code change as long as the ParquetFileReader apis 
> are not changed. 
> In past work with async io  [Drill - async page reader 
> |https://github.com/apache/drill/blob/master/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java]
>  , I have seen 2x-3x improvement in reading speed for Parquet files.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2134) Incorrect type checking in HadoopStreams.wrap

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556465#comment-17556465
 ] 

ASF GitHub Bot commented on PARQUET-2134:
-

steveloughran commented on code in PR #951:
URL: https://github.com/apache/parquet-mr/pull/951#discussion_r901856428


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java:
##
@@ -50,51 +46,45 @@ public class HadoopStreams {
*/
   public static SeekableInputStream wrap(FSDataInputStream stream) {
 Objects.requireNonNull(stream, "Cannot wrap a null input stream");
-if (byteBufferReadableClass != null && h2SeekableConstructor != null &&
-byteBufferReadableClass.isInstance(stream.getWrappedStream())) {
-  try {
-return h2SeekableConstructor.newInstance(stream);
-  } catch (InstantiationException | IllegalAccessException e) {
-LOG.warn("Could not instantiate H2SeekableInputStream, falling back to 
byte array reads", e);
-return new H1SeekableInputStream(stream);
-  } catch (InvocationTargetException e) {
-throw new ParquetDecodingException(
-"Could not instantiate H2SeekableInputStream", 
e.getTargetException());
-  }
+if (isWrappedStreamByteBufferReadable(stream)) {
+  return new H2SeekableInputStream(stream);
 } else {
   return new H1SeekableInputStream(stream);
 }
   }
 
-  private static Class getReadableClass() {
-try {
-  return Class.forName("org.apache.hadoop.fs.ByteBufferReadable");
-} catch (ClassNotFoundException | NoClassDefFoundError e) {
-  return null;
+  /**
+   * Is the inner stream byte buffer readable?
+   * The test is "the stream is not FSDataInputStream
+   * and implements ByteBufferReadable".
+   *
+   * That is: all streams which implement ByteBufferReadable
+   * other than FSDataInputStream successfully support read(ByteBuffer).
+   * This is true for all filesystem clients in the hadoop codebase.
+   *
+   * In hadoop 3.3.0+, the StreamCapabilities probe can be used to
+   * check this: only those streams which provide the read(ByteBuffer)
+   * semantics MAY return true for the probe "in:readbytebuffer";
+   * FSDataInputStream will pass the probe down to the underlying stream.
+   *
+   * @param stream stream to probe
+   * @return true if it is safe to use an H2SeekableInputStream to access the data
+   */
+  private static boolean isWrappedStreamByteBufferReadable(FSDataInputStream 
stream) {
+if (stream.hasCapability("in:readbytebuffer")) {

Review Comment:
   no, the StreamCapabilities probe has been around since hadoop 2; it is just 
that in 3.3.0 all streams which implement the api return true for this probe, 
and the probe gets passed down to the wrapped streams. It avoids looking at the 
wrapped streams because you should be able to trust the response (put 
differently: if something lied, it is in trouble).
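   A small usage sketch of the probe described above (the path and Configuration 
are illustrative; it assumes a Hadoop 3.3.0+ client so the answer is authoritative):
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   Configuration conf = new Configuration();
   try (FSDataInputStream in = FileSystem.get(conf).open(new Path("/tmp/example.parquet"))) {
     // The capability probe is forwarded through wrapper streams, so there is no need
     // to reflect on getWrappedStream().
     boolean byteBufferReadable = in.hasCapability("in:readbytebuffer");
     System.out.println("read(ByteBuffer) supported: " + byteBufferReadable);
   }
   ```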





> Incorrect type checking in HadoopStreams.wrap
> -
>
> Key: PARQUET-2134
> URL: https://issues.apache.org/jira/browse/PARQUET-2134
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.8.3, 1.10.1, 1.11.2, 1.12.2
>Reporter: Todd Gao
>Priority: Minor
>
> The method 
> [HadoopStreams.wrap|https://github.com/apache/parquet-mr/blob/4d062dc37577e719dcecc666f8e837843e44a9be/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/util/HadoopStreams.java#L51]
>  wraps an FSDataInputStream to a SeekableInputStream. 
> It checks whether the underlying stream of the passed  FSDataInputStream 
> implements ByteBufferReadable: if true, wraps the FSDataInputStream to 
> H2SeekableInputStream; otherwise, wraps to H1SeekableInputStream.
> In some cases, we may add another wrapper over FSDataInputStream. For 
> example, 
> {code:java}
> class CustomDataInputStream extends FSDataInputStream {
> public CustomDataInputStream(FSDataInputStream original) {
> super(original);
> }
> }
> {code}
> When we create an FSDataInputStream whose underlying stream does not 
> implement ByteBufferReadable, and then create a CustomDataInputStream with 
> it, using HadoopStreams.wrap to create a SeekableInputStream may produce 
> an error like 
> {quote}java.lang.UnsupportedOperationException: Byte-buffer read unsupported 
> by input stream{quote}
> We can fix this by recursively checking the underlying stream of the 
> FSDataInputStream.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556450#comment-17556450
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

steveloughran commented on PR #970:
URL: https://github.com/apache/parquet-mr/pull/970#issuecomment-1160638077

   this patch is based on Dongjoon's one for hadoop; it tells maven to use the x86 
artifact on macbook m1 builds.
   
   the sunchao one switches to a version of protobuf with genuine mac m1 
artifacts, a version which should also include some CVE fixes.




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2069) Parquet file containing arrays, written by Parquet-MR, cannot be read again by Parquet-MR

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556408#comment-17556408
 ] 

ASF GitHub Bot commented on PARQUET-2069:
-

theosib-amazon commented on code in PR #957:
URL: https://github.com/apache/parquet-mr/pull/957#discussion_r901748898


##
parquet-avro/src/main/java/org/apache/parquet/avro/AvroReadSupport.java:
##
@@ -136,10 +137,22 @@ public RecordMaterializer prepareForRead(
 
 GenericData model = getDataModel(configuration);
 String compatEnabled = metadata.get(AvroReadSupport.AVRO_COMPATIBILITY);
-if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
-  return newCompatMaterializer(parquetSchema, avroSchema, model);
+
+try {
+  if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
+return newCompatMaterializer(parquetSchema, avroSchema, model);
+  }
+  return new AvroRecordMaterializer(parquetSchema, avroSchema, model);
+} catch (InvalidRecordException | ClassCastException e) {
+  System.err.println("Warning, Avro schema doesn't match Parquet schema, 
falling back to conversion: " + e.toString());

Review Comment:
   Oversight on my part.





> Parquet file containing arrays, written by Parquet-MR, cannot be read again 
> by Parquet-MR
> -
>
> Key: PARQUET-2069
> URL: https://issues.apache.org/jira/browse/PARQUET-2069
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.12.0
> Environment: Windows 10
>Reporter: Devon Kozenieski
>Priority: Blocker
> Attachments: modified.parquet, original.parquet, parquet-diff.png
>
>
> In the attached files, there is one original file, and one written modified 
> file that results after reading the original file and writing it back with 
> Parquet-MR, with a few values modified. The schema should not be modified, 
> since the schema of the input file is used as the schema to write the output 
> file. However, the output file has a slightly modified schema that then 
> cannot be read back the same way again with Parquet-MR, resulting in the 
> exception message:  java.lang.ClassCastException: optional binary element 
> (STRING) is not a group
> My guess is that the issue lies in the Avro schema conversion.
> The Parquet files attached have some arrays and some nested fields.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2069) Parquet file containing arrays, written by Parquet-MR, cannot be read again by Parquet-MR

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556406#comment-17556406
 ] 

ASF GitHub Bot commented on PARQUET-2069:
-

theosib-amazon commented on code in PR #957:
URL: https://github.com/apache/parquet-mr/pull/957#discussion_r901740673


##
parquet-avro/src/test/java/org/apache/parquet/avro/TestArrayListCompatibility.java:
##
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.parquet.avro;
+
+import com.google.common.io.Resources;
+import org.apache.avro.generic.GenericData;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.parquet.hadoop.ParquetReader;
+import org.junit.Test;
+import java.io.IOException;
+
+public class TestArrayListCompatibility {
+
+  @Test
+  public void testListArrayCompatibility() throws IOException {
+Path testPath = new 
Path(Resources.getResource("list-array-compat.parquet").getFile());
+
+Configuration conf = new Configuration();
+ParquetReader parquetReader =
+  AvroParquetReader.builder(testPath).withConf(conf).build();
+GenericData.Record firstRecord;
+try {
+  firstRecord = (GenericData.Record) parquetReader.read();
+} catch (Exception x) {
+  x.printStackTrace();

Review Comment:
   Ok, I got rid of the extra catch. I'm not sure what kind of exceptions 
parquetReader.read() can throw, though, so we'll see if we get a compile error 
from not specifying it in the function signature. :)





> Parquet file containing arrays, written by Parquet-MR, cannot be read again 
> by Parquet-MR
> -
>
> Key: PARQUET-2069
> URL: https://issues.apache.org/jira/browse/PARQUET-2069
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.12.0
> Environment: Windows 10
>Reporter: Devon Kozenieski
>Priority: Blocker
> Attachments: modified.parquet, original.parquet, parquet-diff.png
>
>
> In the attached files, there is one original file, and one written modified 
> file that results after reading the original file and writing it back with 
> Parquet-MR, with a few values modified. The schema should not be modified, 
> since the schema of the input file is used as the schema to write the output 
> file. However, the output file has a slightly modified schema that then 
> cannot be read back the same way again with Parquet-MR, resulting in the 
> exception message:  java.lang.ClassCastException: optional binary element 
> (STRING) is not a group
> My guess is that the issue lies in the Avro schema conversion.
> The Parquet files attached have some arrays and some nested fields.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2069) Parquet file containing arrays, written by Parquet-MR, cannot be read again by Parquet-MR

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556405#comment-17556405
 ] 

ASF GitHub Bot commented on PARQUET-2069:
-

theosib-amazon commented on code in PR #957:
URL: https://github.com/apache/parquet-mr/pull/957#discussion_r901733632


##
parquet-avro/src/main/java/org/apache/parquet/avro/AvroReadSupport.java:
##
@@ -136,10 +137,22 @@ public RecordMaterializer prepareForRead(
 
 GenericData model = getDataModel(configuration);
 String compatEnabled = metadata.get(AvroReadSupport.AVRO_COMPATIBILITY);
-if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
-  return newCompatMaterializer(parquetSchema, avroSchema, model);
+
+try {
+  if (compatEnabled != null && Boolean.valueOf(compatEnabled)) {
+return newCompatMaterializer(parquetSchema, avroSchema, model);
+  }
+  return new AvroRecordMaterializer(parquetSchema, avroSchema, model);
+} catch (InvalidRecordException | ClassCastException e) {

Review Comment:
   I think the underlying problem is that some versions of ParquetMR produce 
*bad schemas*, so when we try to load those same files, parsing fails, since 
the Parquet schema implicit in the file metadata doesn't match up with the 
stored Avro schema. I'm not sure what to do about bad schemas other than to 
throw them away and try a fallback.





> Parquet file containing arrays, written by Parquet-MR, cannot be read again 
> by Parquet-MR
> -
>
> Key: PARQUET-2069
> URL: https://issues.apache.org/jira/browse/PARQUET-2069
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-avro
>Affects Versions: 1.12.0
> Environment: Windows 10
>Reporter: Devon Kozenieski
>Priority: Blocker
> Attachments: modified.parquet, original.parquet, parquet-diff.png
>
>
> In the attached files, there is one original file, and one written modified 
> file that results after reading the original file and writing it back with 
> Parquet-MR, with a few values modified. The schema should not be modified, 
> since the schema of the input file is used as the schema to write the output 
> file. However, the output file has a slightly modified schema that then 
> cannot be read back the same way again with Parquet-MR, resulting in the 
> exception message:  java.lang.ClassCastException: optional binary element 
> (STRING) is not a group
> My guess is that the issue lies in the Avro schema conversion.
> The Parquet files attached have some arrays and some nested fields.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2161) Row positions are computed incorrectly when range or offset metadata filter is used

2022-06-20 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556329#comment-17556329
 ] 

ASF GitHub Bot commented on PARQUET-2161:
-

ala opened a new pull request, #978:
URL: https://github.com/apache/parquet-mr/pull/978

   Make sure you have checked _all_ steps below.
   
   ### Jira
   
   - [x] My PR addresses the following [Parquet 
Jira](https://issues.apache.org/jira/browse/PARQUET/) issues and references 
them in the PR title. For example, "PARQUET-1234: My Parquet PR"
 - https://issues.apache.org/jira/browse/PARQUET-2161
 - In case you are adding a dependency, check if the license complies with 
the [ASF 3rd Party License 
Policy](https://www.apache.org/legal/resolved.html#category-x).
   
   ### Tests
   
   - [x] My PR adds the following unit tests __OR__ does not need testing for 
this extremely good reason:
 - Extends `TestParquetReader` suite. 
   
   ### Commits
   
   - [x] My commits all reference Jira issues in their subject lines. In 
addition, my commits follow the guidelines from "[How to write a good git 
commit message](http://chris.beams.io/posts/git-commit/)":
 1. Subject is separated from body by a blank line
 1. Subject is limited to 50 characters (not including Jira issue reference)
 1. Subject does not end with a period
 1. Subject uses the imperative mood ("add", not "adding")
 1. Body wraps at 72 characters
 1. Body explains "what" and "why", not "how"
   
   ### Documentation
   
   - [x] In case of new functionality, my PR adds documentation that describes 
how to use it.
 - All the public functions and the classes in the PR contain Javadoc that 
explain what it does
   




> Row positions are computed incorrectly when range or offset metadata filter 
> is used
> ---
>
> Key: PARQUET-2161
> URL: https://issues.apache.org/jira/browse/PARQUET-2161
> Project: Parquet
>  Issue Type: Bug
>  Components: parquet-mr
>Affects Versions: 1.12.3
>Reporter: Ala Luszczak
>Priority: Major
>
> The row indexes introduced in PARQUET-2117 are not computed correctly when
> (1) range or offset metadata filter is applied, and
> (2) the first row group was eliminated by the filter
> For example, if a file has two row groups with 10 rows each, and we attempt 
> to only read the 2nd row group, we are going to produce row indexes 0, 1, 2, 
> ..., 9 instead of expected 10, 11, ..., 19.
> This happens because functions `filterFileMetaDataByStart` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1453)
>  and `filterFileMetaDataByMidpoint` (used here: 
> https://github.com/apache/parquet-mr/blob/e06384455567c56d5906fc3a152ab00fd8dfdf33/parquet-hadoop/src/main/java/org/apache/parquet/format/converter/ParquetMetadataConverter.java#L1460)
>  modify their input `FileMetaData`. To address the issue we need to 
> `generateRowGroupOffsets` before these filters are applied.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2150) parquet-protobuf to compile on mac M1

2022-06-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556125#comment-17556125
 ] 

ASF GitHub Bot commented on PARQUET-2150:
-

sunchao commented on PR #970:
URL: https://github.com/apache/parquet-mr/pull/970#issuecomment-1159806290

   @shangxinli my approach is different from @steveloughran 's one. Since a newer 
version of protobuf already provides M1 artifacts, upgrading will solve the issue.




> parquet-protobuf to compile on mac M1
> -
>
> Key: PARQUET-2150
> URL: https://issues.apache.org/jira/browse/PARQUET-2150
> Project: Parquet
>  Issue Type: Improvement
>  Components: parquet-protobuf
>Affects Versions: 1.13.0
>Reporter: Steve Loughran
>Priority: Major
>
> parquet-protobuf module fails to compile on Mac M1 because the maven protoc 
> plugin cannot find the native osx-aarch_64:3.16.1  binary.
> the build needs to be tweaked to pick up the x86 binaries



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (PARQUET-2149) Implement async IO for Parquet file reader

2022-06-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PARQUET-2149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17556109#comment-17556109
 ] 

ASF GitHub Bot commented on PARQUET-2149:
-

shangxinli commented on code in PR #968:
URL: https://github.com/apache/parquet-mr/pull/968#discussion_r900994027


##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -46,12 +46,11 @@
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
+import java.util.Optional;
 import java.util.Set;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
+import java.util.concurrent.*;

Review Comment:
   I guess it is the IDE that does that, but let's not use a wildcard import here. 



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -126,6 +127,42 @@ public class ParquetFileReader implements Closeable {
 
   public static String PARQUET_READ_PARALLELISM = 
"parquet.metadata.read.parallelism";
 
+  public static int numProcessors = Runtime.getRuntime().availableProcessors();
+
+  // Thread pool to read column chunk data from disk. Applications should call 
setAsyncIOThreadPool
+  // to initialize this with their own implementations.
+  // Default initialization is useful only for testing

Review Comment:
   I understand we want applications to provide their own implementations, but 
can you share why we chose a cached thread pool instead of a fixed one as the 
default? I feel that many Parquet usage scenarios involve unpredictable 
execution times, and we need better control over the program's resource 
consumption. 
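   For comparison, the two defaults under discussion (sketch only, not the PR's code):
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;

   // Cached: threads are created on demand and reused; under a burst of slow reads
   // the thread count is effectively unbounded.
   ExecutorService cached = Executors.newCachedThreadPool(r -> new Thread(r, "parquet-io"));

   // Fixed: the thread count is bounded up front and extra work queues instead of
   // spawning more threads, which keeps resource consumption predictable.
   ExecutorService fixed = Executors.newFixedThreadPool(
       Runtime.getRuntime().availableProcessors(), r -> new Thread(r, "parquet-io"));
   ```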



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1387,8 +1489,13 @@ public void close() throws IOException {
* result of the column-index based filtering when some pages might be 
skipped at reading.
*/
   private class ChunkListBuilder {
+// ChunkData is backed by either a list of buffers or a list of strams

Review Comment:
   typo? streams? 



##
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##
@@ -1796,5 +1882,314 @@ public void readAll(SeekableInputStream f, 
ChunkListBuilder builder) throws IOEx
 public long endPos() {
   return offset + length;
 }
+
+@Override
+public String toString() {
+  return "ConsecutivePartList{" +
+"offset=" + offset +
+", length=" + length +
+", chunks=" + chunks +
+'}';
+}
   }
+
+  /**
+   * Encapsulates the reading of a single page.
+   */
+  public class PageReader implements Closeable {
+private final Chunk chunk;
+private final int currentBlock;
+private final BlockCipher.Decryptor headerBlockDecryptor;
+private final BlockCipher.Decryptor pageBlockDecryptor;
+private final byte[] aadPrefix;
+private final int rowGroupOrdinal;
+private final int columnOrdinal;
+
+//state
+private final LinkedBlockingDeque<Optional<DataPage>> pagesInChunk = new 
LinkedBlockingDeque<>();
+private DictionaryPage dictionaryPage = null;
+private int pageIndex = 0;
+private long valuesCountReadSoFar = 0;
+private int dataPageCountReadSoFar = 0;
+
+// derived
+private final PrimitiveType type;
+private final byte[] dataPageAAD;
+private final byte[] dictionaryPageAAD;
+private byte[] dataPageHeaderAAD = null;
+
+private final BytesInputDecompressor decompressor;
+
+private final ConcurrentLinkedQueue<Future<Void>> readFutures = new 
ConcurrentLinkedQueue<>();
+
+private final LongAdder totalTimeReadOnePage = new LongAdder();
+private final LongAdder totalCountReadOnePage = new LongAdder();
+private final LongAccumulator maxTimeReadOnePage = new 
LongAccumulator(Long::max, 0L);
+private final LongAdder totalTimeBlockedPagesInChunk = new LongAdder();
+private final LongAdder totalCountBlockedPagesInChunk = new LongAdder();
+private final LongAccumulator maxTimeBlockedPagesInChunk = new 
LongAccumulator(Long::max, 0L);
+
+public PageReader(Chunk chunk, int currentBlock, Decryptor 
headerBlockDecryptor,
+  Decryptor pageBlockDecryptor, byte[] aadPrefix, int rowGroupOrdinal, int 
columnOrdinal,
+  BytesInputDecompressor decompressor
+  ) {
+  this.chunk = chunk;
+  this.currentBlock = currentBlock;
+  this.headerBlockDecryptor = headerBlockDecryptor;
+  this.pageBlockDecryptor = pageBlockDecryptor;
+  this.aadPrefix = aadPrefix;
+  this.rowGroupOrdinal = rowGroupOrdinal;
+  this.columnOrdinal = columnOrdinal;
+  this.decompressor = decompressor;
+
+  this.type = getFileMetaData().getSchema()
+.getType(chunk.descriptor.col.getPath()).asPrimitiveType();
+
+  if (null != 
