[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14450



[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-04 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/14450#discussion_r73632453
  
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -204,7 +205,8 @@ protected void initialize(String path, List columns) throws IOException
      }
    }
    this.sparkSchema = new ParquetSchemaConverter(config).convert(requestedSchema);
-    this.reader = new ParquetFileReader(config, file, blocks, requestedSchema.getColumns());
+    this.reader = new ParquetFileReader(
+        config, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
    for (BlockMetaData block : blocks) {
--- End diff --

Hm.. I don't think we should make a separate function for a few duplicated lines.



[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-04 Thread srowen
Github user srowen commented on a diff in the pull request:

https://github.com/apache/spark/pull/14450#discussion_r73595317
  
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -140,7 +140,8 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
    String sparkRequestedSchemaString =
        configuration.get(ParquetReadSupport$.MODULE$.SPARK_ROW_REQUESTED_SCHEMA());
    this.sparkSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
-    this.reader = new ParquetFileReader(configuration, file, blocks, requestedSchema.getColumns());
+    this.reader = new ParquetFileReader(
+        configuration, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
    for (BlockMetaData block : blocks) {
--- End diff --

Spaces around the colon are definitely canonical
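
For context, the line in question is Java's enhanced for statement, where a single space on each side of the colon is the conventional formatting. A minimal, self-contained illustration (the `RowCountExample` class and its `totalRows` helper are invented for this example, not code from the file under review):

```java
import java.util.List;
import org.apache.parquet.hadoop.metadata.BlockMetaData;

class RowCountExample {
  // Java's enhanced for statement: the colon separates the loop variable from the
  // iterable; a single space on each side of the colon is the canonical style.
  static long totalRows(List<BlockMetaData> blocks) {
    long total = 0;
    for (BlockMetaData block : blocks) {
      total += block.getRowCount();  // BlockMetaData exposes the row count per block
    }
    return total;
  }
}
```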



[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-04 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14450#discussion_r73511032
  
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -204,7 +205,8 @@ protected void initialize(String path, List columns) throws IOException
      }
    }
    this.sparkSchema = new ParquetSchemaConverter(config).convert(requestedSchema);
-    this.reader = new ParquetFileReader(config, file, blocks, requestedSchema.getColumns());
+    this.reader = new ParquetFileReader(
+        config, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
    for (BlockMetaData block : blocks) {
--- End diff --

Same here. BTW, code duplication?



[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-04 Thread jaceklaskowski
Github user jaceklaskowski commented on a diff in the pull request:

https://github.com/apache/spark/pull/14450#discussion_r73510962
  
--- Diff: sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java ---
@@ -140,7 +140,8 @@ public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptCont
    String sparkRequestedSchemaString =
        configuration.get(ParquetReadSupport$.MODULE$.SPARK_ROW_REQUESTED_SCHEMA());
    this.sparkSchema = StructType$.MODULE$.fromString(sparkRequestedSchemaString);
-    this.reader = new ParquetFileReader(configuration, file, blocks, requestedSchema.getColumns());
+    this.reader = new ParquetFileReader(
+        configuration, footer.getFileMetaData(), file, blocks, requestedSchema.getColumns());
    for (BlockMetaData block : blocks) {
--- End diff --

While we're at it, are the spaces around `:` necessary? (It's been a while since I 
developed in Java, so I might be wrong.)



[GitHub] spark pull request #14450: [SPARK-16847][SQL] Prevent to potentially read co...

2016-08-01 Thread HyukjinKwon
GitHub user HyukjinKwon opened a pull request:

https://github.com/apache/spark/pull/14450

[SPARK-16847][SQL] Prevent potentially reading corrupt statistics on binary in Parquet 
via VectorizedReader

## What changes were proposed in this pull request?

It is still possible to read corrupt statistics from Parquet files. This problem was 
found in [PARQUET-251](https://issues.apache.org/jira/browse/PARQUET-251), and Spark 
previously disabled filter pushdown on binary columns because of it.

We re-enabled pushdown after upgrading Parquet, but there is still a potential 
incompatibility with Parquet files written by older Spark versions.

Currently this does not affect the standard Parquet API. However, Spark also implements 
a vectorized reader, separate from Parquet's standard API; the standard API handles this 
case, but the vectorized reader does not.

This is fine in Spark 2.0 because the vectorized reader does not use the statistics yet 
(see https://github.com/apache/spark/pull/13701). However, once we do use them there, we 
will hit this potential incompatibility.

It is enough to just pass `FileMetaData`; the rest is handled in parquet-mr (see 
https://github.com/apache/parquet-mr/commit/e3b95020f777eb5e0651977f654c1662e3ea1f29).
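
As a rough sketch of what the change amounts to (simplified from the diff, not the full 
Spark code; the `FooterAwareReaderSketch` class and its `open` method are invented for 
illustration, and the column list here comes from the file schema rather than Spark's 
requested schema):

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

class FooterAwareReaderSketch {
  // Read the footer first so its FileMetaData (including the created_by string) is
  // available, then pass it to ParquetFileReader. parquet-mr uses that metadata to
  // decide whether binary min/max statistics written by writers affected by
  // PARQUET-251 should be ignored instead of trusted.
  static ParquetFileReader open(Configuration config, Path file) throws IOException {
    ParquetMetadata footer = ParquetFileReader.readFooter(config, file);
    List<BlockMetaData> blocks = footer.getBlocks();
    return new ParquetFileReader(
        config, footer.getFileMetaData(), file, blocks,
        footer.getFileMetaData().getSchema().getColumns());
  }
}
```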

## How was this patch tested?

N/A


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HyukjinKwon/spark SPARK-16847

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14450.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14450


commit 3c461117852c86eae631b06cacfd72773653083c
Author: hyukjinkwon 
Date:   2016-08-02T04:31:04Z

Prevent potentially reading corrupt statistics on binary in Parquet via VectorizedReader



